The idea is to have an option AI=True, which would do the following:
- Drop in an LLM.
- The LLM does a quick, structured data analysis to look at key aspects of the data that are relevant for emulation.
- The LLM chooses either hyperparameters for each model or a hyperparameter search space for each model, based on its information about the data.
- The LLM provides these parameters as JSON. We would then take the JSON and run existing autoemulate functions with these parameters.
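A minimal sketch of the JSON hand-off step. The model names, parameter names, and JSON shape below are illustrative assumptions, not autoemulate's actual schema; the point is only that we parse and validate the LLM's output before passing it on:

```python
import json

# Hypothetical example of the structured JSON the LLM might return.
# Model and parameter names here are purely illustrative.
llm_output = """
{
  "RandomForest": {"n_estimators": [100, 500], "max_depth": [5, 20]},
  "GaussianProcess": {"kernel": ["RBF", "Matern"]}
}
"""

def parse_llm_params(raw: str) -> dict:
    """Parse and validate the LLM's JSON hyperparameter suggestions."""
    params = json.loads(raw)
    if not isinstance(params, dict):
        raise ValueError("Expected a JSON object mapping model -> params")
    for model, space in params.items():
        if not isinstance(space, dict):
            raise ValueError(f"Search space for {model!r} must be a JSON object")
    return params

param_spaces = parse_llm_params(llm_output)
# These dicts would then be forwarded to the existing autoemulate
# functions (exact API deliberately not shown here).
print(sorted(param_spaces))
```

Validating the structure up front keeps a malformed LLM response from reaching the emulation code, which is where most of the robustness would come from.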
To do this, we can use RAG. For example, we can include a document detailing each model and parameter, plus details about the output format needed for autoemulate. Structured JSON output + RAG should make this fairly robust.
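The retrieval step could be as simple as pulling the relevant per-model documentation snippets into the prompt alongside the required output format. Everything below (the doc contents, the lookup scheme, the prompt wording) is an illustrative assumption:

```python
# Hypothetical per-model documentation snippets that a RAG step
# would retrieve; in practice these could come from a real document.
MODEL_DOCS = {
    "RandomForest": "n_estimators (int), max_depth (int)",
    "GaussianProcess": "kernel (str: RBF or Matern)",
}

def build_prompt(data_summary: str, models: list) -> str:
    """Assemble an LLM prompt from a data summary, retrieved model
    docs, and the required JSON output format."""
    docs = "\n".join(
        f"{m}: {MODEL_DOCS[m]}" for m in models if m in MODEL_DOCS
    )
    return (
        "Data summary:\n" + data_summary + "\n\n"
        "Model parameter documentation:\n" + docs + "\n\n"
        "Respond only with a JSON object mapping each model name "
        "to a hyperparameter search space."
    )

prompt = build_prompt(
    "1000 samples, 8 input features, smooth scalar output",
    ["RandomForest", "GaussianProcess"],
)
print(len(prompt) > 0)
```

A simple keyword lookup like this stands in for a proper retriever; the structured-output instruction at the end is what the JSON-parsing step downstream would rely on.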