
Generate and display natural-language descriptions of conversations #6894

Open
neubig opened this issue Feb 22, 2025 · 5 comments
Labels
enhancement New feature or request

Comments

@neubig
Contributor

neubig commented Feb 22, 2025

What problem or use case are you trying to solve?

Currently in the conversation panel in the frontend, the conversations are labeled by a hash and the time.

However, in interfaces like ChatGPT, each conversation is labeled in natural language, which makes conversations easy to understand and browse.

Describe the UX of the solution you'd like

Each conversation could be given a short summary that is shown in the browsing window.

Do you have thoughts on the technical implementation?

This could be implemented by using an LLM to summarize the first user message.
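A minimal sketch of the idea, assuming nothing about the actual OpenHands internals: the function name, prompt wording, and injected `complete` callable below are all hypothetical (in practice the callable would wrap the project's LLM client, e.g. a litellm completion call).

```python
# Hypothetical sketch: derive a short conversation title from the first
# user message. All names here are illustrative, not the real implementation.

def generate_conversation_title(first_user_message: str, complete,
                                max_message_chars: int = 1000) -> str:
    """Ask an LLM (via the injected `complete` callable) for a short title.

    `complete` takes a prompt string and returns the model's text reply.
    """
    # Truncate very long messages to keep the summarization call cheap.
    snippet = first_user_message[:max_message_chars]
    prompt = (
        "Write a title of at most 8 words for the following conversation "
        f"opener. Reply with the title only.\n\n{snippet}"
    )
    # Strip surrounding whitespace and any quotation marks the model adds.
    return complete(prompt).strip().strip('"')


# Usage with a stubbed-out model call:
fake_llm = lambda prompt: '"Fix flaky CI test"'
print(generate_conversation_title("My CI test keeps failing randomly...", fake_llm))
# prints: Fix flaky CI test
```

Injecting the completion callable keeps the summarization logic trivially testable without a live model.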

CC @tofarr

@neubig neubig added the enhancement New feature or request label Feb 22, 2025
@enyst
Collaborator

enyst commented Feb 23, 2025

Seems related: #5464

@tofarr
Collaborator

tofarr commented Feb 23, 2025

I wonder if we should introduce settings for this in the global config? An optional LLMConfig instance named conversation_namer, and perhaps a rename every time a MessageAction with a source of user is received?

@enyst
Collaborator

enyst commented Feb 24, 2025

IMHO we don't even need to introduce anything in the global config or llm config, we can use the custom llm configurations feature to just define an llm with our chosen name.

We have done that with draft_editor name, e.g. [llm.draft_editor], fill it in and it will just work, the draft feature will use that llm instead of the main llm. Documented here, maybe not great docs, though: https://docs.all-hands.dev/modules/usage/llms/custom-llm-configs
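For concreteness, the existing draft_editor config mentioned above looks roughly like this in config.toml; the model and key values are placeholders, and the commented-out section is only the hypothetical reserved name being proposed, not something that exists today:

```toml
# Existing, working example: the draft feature picks up this named config
# instead of the main LLM (see the custom-llm-configs docs).
[llm.draft_editor]
model = "gpt-4o-mini"   # illustrative model choice
api_key = "..."         # placeholder

# Hypothetical: a reserved helper name (proposed option 2 below) could be
# configured the same way for conversation naming.
# [llm.openhands_helper]
# model = "gpt-4o-mini"
```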

The important part, it seems to me, is that we would reserve a name this way, so we'd need to document that, and maybe choose some name that is unlikely to be used by someone by accident.

The options seem to be:

  1. [llm.conversation_namer], maybe a little too granular? If we rename also e.g. workspace.zip with a summary I suppose we'd use the same llm. 😄
  2. [llm.helper] or [llm.openhands_helper], to stand for "auxiliary LLM that performs some non-agent small things to enhance the UX"
  3. [llm.summarizer], somewhere in-between 1 and 2, but I don't know, if we will want a summarizer for the agent, we shouldn't, IMHO, get them mixed up. The requirements of agent performance are a different thing.
  4. do nothing yet: we could use the main LLM for this feature, and postpone when we tackle the "helper" LLM.

Personally, I'd vote for 2.

@neubig
Contributor Author

neubig commented Feb 25, 2025

To be honest, I think that issue can be solved separately from this one. We could first implement this issue using the standard LLM, and then, independently of whether we implement the helper LLM, we could switch over to using that instead. Personally, I'm happy using Claude for generating summaries in addition to using it in the agent, because generating summaries will be so much cheaper than running the agent anyway.

@enyst
Collaborator

enyst commented Feb 25, 2025

OK, maybe we can think about it later. Just for clarity, it's already implemented, we just need a one-liner to pick it up. If it seems kinda overkill to do it at all, let's not do it yet. In time we may gather more cases and reassess?
