First: Thanks for this outstanding tool! I love the UI and general ease of use 💪
Web search is currently done by sending the whole user prompt to the search engine in the q parameter. Other popular tools such as Perplexity or Morphic rewrite the user query to produce better search terms, which could greatly improve result quality.
Generating multiple (e.g. three) search queries that approach the question from different angles could enhance the experience even further.
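Very roughly, something like this is what I have in mind (just a quick Python sketch; the llm() helper and the search endpoint are placeholders, not this project's actual code):

```python
import json
import requests


def llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion backend the app already uses."""
    raise NotImplementedError


def make_search_queries(user_message: str, n: int = 3) -> list[str]:
    # Ask the model to rephrase the raw user message into n focused queries
    # that cover the topic from different angles.
    prompt = (
        f"Rewrite the following question as {n} short, keyword-style web search "
        f"queries, each covering a different angle. Return a JSON array of strings.\n\n"
        f"Question: {user_message}"
    )
    return json.loads(llm(prompt))[:n]


def search(query: str) -> list[dict]:
    # Same q parameter as today, just with a polished query instead of the raw prompt.
    resp = requests.get(
        "https://searx.example/search",          # placeholder endpoint
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])


def multi_angle_search(user_message: str) -> list[dict]:
    results = []
    for q in make_search_queries(user_message):
        results.extend(search(q))
    return results
```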
One question in this context:
Is the LLM prompt fed with the snippets from the search results, or with the actual contents of the found sources?
Another thing that caught my attention is the follow-up questions: in my current build (1.2.3) no follow-up questions are visible.
Web search is currently done by sending the whole user prompt to the search engine in the q parameter. Other popular tools such as Perplexity or Morphic rewrite the user query to produce better search terms, which could greatly improve result quality.
Hey, thanks for the suggestions! Currently, the LLM generates a search query based on follow-up questions. I will change the flow so it generates a polished search query from the first message.
One question in this context:
Is the LLM prompt fed with the snippets from the search results, or with the actual contents of the found sources?
Currently, all search results are appended directly to the prompt. You can turn off the 'Perform Simple Internet Search' option, and it will perform RAG on the search results instead.
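Roughly, the two paths look like this (a simplified Python sketch for illustration; the fetch/embed helpers and function names are placeholders, not the actual implementation):

```python
from typing import Callable


def build_prompt_simple(question: str, results: list[dict]) -> str:
    # 'Perform Simple Internet Search' enabled: the snippets returned by the
    # search engine are appended to the prompt verbatim.
    context = "\n\n".join(f"{r['title']}\n{r['snippet']}" for r in results)
    return f"Context:\n{context}\n\nQuestion: {question}"


def build_prompt_rag(
    question: str,
    results: list[dict],
    fetch: Callable[[str], str],          # downloads a page's text
    embed: Callable[[str], list[float]],  # embeds a piece of text
    top_k: int = 5,
) -> str:
    # Option disabled: fetch the linked pages, chunk them, embed the chunks,
    # and keep only the chunks most similar to the question.
    chunks: list[str] = []
    for r in results:
        page_text = fetch(r["url"])
        chunks += [page_text[i:i + 1000]           # naive fixed-size chunking
                   for i in range(0, len(page_text), 1000)]

    q_vec = embed(question)

    def score(chunk: str) -> float:
        v = embed(chunk)
        return sum(a * b for a, b in zip(q_vec, v))  # dot-product similarity

    best = sorted(chunks, key=score, reverse=True)[:top_k]
    return "Context:\n" + "\n\n".join(best) + f"\n\nQuestion: {question}"
```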