Experimental: LLM+RAG backbone #164

Open
mastoffel opened this issue Feb 13, 2024 · 0 comments

The idea is to have an option AI=True, which would do the following:

  • drop in an LLM
  • the LLM performs a quick, structured analysis of the data, looking at key aspects relevant for emulation
  • based on this analysis, the LLM chooses either fixed hyperparameters or a hyperparameter search space for each model
  • the LLM returns these parameters as JSON; we then pass the JSON to existing autoemulate functions
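A minimal sketch of the last step, assuming a hypothetical LLM response (the model names and parameters below are illustrative, not autoemulate's actual defaults):

```python
import json

# Hypothetical structured LLM output: a JSON object mapping each model
# to a hyperparameter search space (names/values are illustrative).
llm_response = """
{
  "GaussianProcess": {"kernel": ["RBF", "Matern"], "normalize_y": [true]},
  "RandomForest": {"n_estimators": [100, 500], "max_depth": [5, 10, null]}
}
"""

# Parse the JSON into plain Python dicts; these could then be handed to
# the existing autoemulate hyperparameter search in place of its
# built-in search spaces.
param_spaces = json.loads(llm_response)

for model_name, space in param_spaces.items():
    print(model_name, space)
```

Because the response is constrained to JSON, a parse failure (or a schema mismatch) gives a clear point at which to retry or fall back to autoemulate's default search spaces.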

To do this, we can use RAG. For example, we could include a document detailing each model and its parameters, plus the output format autoemulate needs. Structured JSON output combined with RAG should make this fairly robust.
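The retrieval step could look roughly like this (a sketch only: real RAG would use embedding similarity rather than the naive token overlap below, and the document snippets are made up for illustration):

```python
# Toy corpus: one snippet per model plus one describing the required
# output format (contents are illustrative assumptions).
DOCS = [
    "GaussianProcess supports kernel (RBF, Matern) and normalize_y (bool).",
    "RandomForest supports n_estimators (int) and max_depth (int or null).",
    "Output format: a JSON object mapping model names to search spaces.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive token overlap with the query."""
    q_tokens = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_tokens & set(d.lower().split())))
    return scored[:k]

# Retrieved snippets are prepended to the LLM prompt as grounding context.
context = retrieve("choose a kernel and normalize_y for GaussianProcess", DOCS)
prompt = "Use only this context:\n" + "\n".join(context) + "\n\nReturn JSON."
```

Keeping the model/parameter documentation in retrievable snippets means the prompt only carries the parts relevant to the current dataset, which should help keep the JSON output on-schema.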

@mastoffel mastoffel self-assigned this Feb 13, 2024
@github-project-automation github-project-automation bot moved this to 📋 Product backlog in AutoEmulate Feb 13, 2024