
[Question]: Text-to-sql agent failing after a couple of questions #4196

Open

CiaraRichmond opened this issue Dec 23, 2024 · 2 comments
Labels: question (Further information is requested)

Comments

@CiaraRichmond

Describe your problem

I have set up a text-to-SQL agent using the template as a starting point. Here is the workflow for this agent:
[screenshot of the agent workflow]

When I start a new chat with the agent I can ask 2-3 questions in a row successfully. But on question 3 or 4, the Deepseek-coder element in the top right of the image above stops producing SQL code. Instead it hallucinates a made-up answer to the user's question, and when that output is executed as SQL it of course fails with an incorrect-syntax error, as you'd expect.

For example, suppose the dataset relates to employees and their work locations.

I ask: "How many people work in London?"
Correct SQL is produced, it runs successfully, and the final node accurately generates a response from the HTML table output of the SQL execution.

I then ask: "What are the names of these people?"
The workflow behaves as expected and produces a correct response.

But then I ask: "How many people work in New York?"
The SQL-generation element in the top right of the image tries to answer the question directly, with something like "New York has 30 employees who work there." Not only is this not SQL output, it is a complete hallucination: there are not 30 people who work out of New York. I have tried adding very strong wording to this element's prompt so that it strictly returns SQL code, but that has not helped.

Can someone please advise how to ensure that my text-to-SQL LLM element only returns SQL code rather than a false attempt at answering the question directly?
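
For illustration, here is a minimal sketch (plain Python, hypothetical helper name, not part of the actual agent template) of the kind of check I would like enforced before anything reaches the SQL-execution node, so that a hallucinated natural-language answer fails fast instead of being run as SQL:

```python
import re

def extract_sql(llm_output: str) -> str:
    """Return the SQL statement from an LLM reply, or raise if none is found.

    Hypothetical post-processing guard, not part of the template:
    strips markdown code fences and rejects replies that do not look like SQL.
    """
    # Prefer the contents of a ```sql ... ``` fence if the model used one.
    fenced = re.search(r"```(?:sql)?\s*(.*?)```", llm_output,
                       re.DOTALL | re.IGNORECASE)
    candidate = (fenced.group(1) if fenced else llm_output).strip()

    # Accept only statements that start with a read-only SQL keyword.
    if not re.match(r"^(SELECT|WITH)\b", candidate, re.IGNORECASE):
        raise ValueError(f"LLM did not return SQL: {candidate[:80]!r}")
    return candidate
```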

CiaraRichmond added the question label on Dec 23, 2024
@KevinHuSh
Collaborator

Try this out.
[screenshot: the SQL-generation element's message window size set to 2]
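
Conceptually, a minimal sketch of what a message window size of 2 does here, assuming the setting simply trims the chat history passed to the SQL-generation element to the most recent N messages (the actual implementation may differ):

```python
def build_llm_messages(history: list[dict], window_size: int = 2) -> list[dict]:
    """Trim the chat history before it is sent to the SQL-generation LLM.

    Hypothetical sketch: keep the system prompt plus only the most recent
    `window_size` conversation messages, so the model never sees its own
    earlier natural-language answers and is not tempted to imitate them
    instead of emitting SQL.
    """
    system = [m for m in history if m["role"] == "system"]
    turns = [m for m in history if m["role"] != "system"]
    return system + turns[-window_size:]
```

With only the latest user question (and at most one prior message) in view, there are no earlier conversational answers for the model to copy, so it falls back on the SQL-only instructions in its prompt.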

@CiaraRichmond
Author

Ahh, thank you very much, that seems to have worked fine. Is there any context for why 2 is the best message window size? I experimented with other values there during testing and they didn't seem to work.
