Background:
The goal is to extend Efsity with StructureMap generation capabilities. A Questionnaire Response (QR) is sent to a Large Language Model (LLM), which extracts the target resources, such as Patient and Encounter. The LLM is then instructed to create a StructureMap that converts the QR into those resources.
The process can be extended into a feedback loop: run the generated StructureMap against the QR, compare the result with the expected output, and iterate until the outputs converge or a maximum number of iterations is reached. This combined approach, generating the QR resources and the StructureMap in sequence, keeps the two steps integrated.
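The generate-and-refine loop described above can be sketched as follows. This is a minimal illustration only: `generate_structure_map` and `apply_structure_map` are hypothetical placeholders for the LLM call and the StructureMap execution engine, not Efsity APIs, and the iteration cap of 5 is an arbitrary assumption.

```python
from typing import Optional, Tuple

MAX_ITERATIONS = 5  # assumed cap on the number of refinement loops


def generate_structure_map(qr: dict, expected: dict,
                           feedback: Optional[str]) -> str:
    """Hypothetical stand-in for the LLM call that drafts a StructureMap."""
    return 'map "http://example.org/qr-to-bundle" = QrToBundle'


def apply_structure_map(structure_map: str, qr: dict) -> dict:
    """Hypothetical stand-in for executing the map (e.g. via a FHIR engine)."""
    return {"resourceType": "Bundle", "entry": []}


def refine_structure_map(qr: dict, expected: dict) -> Tuple[str, bool]:
    """Generate, test, and regenerate until convergence or the cap is hit."""
    feedback: Optional[str] = None
    structure_map = ""
    for _ in range(MAX_ITERATIONS):
        structure_map = generate_structure_map(qr, expected, feedback)
        actual = apply_structure_map(structure_map, qr)
        if actual == expected:
            return structure_map, True  # converged: output matches expectation
        # Feed the mismatch back to the LLM on the next attempt
        feedback = f"Output differed from expected: {actual}"
    return structure_map, False  # gave up after MAX_ITERATIONS
```

In a real implementation the feedback string would be appended to the LLM prompt so each retry sees what went wrong on the previous attempt.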
Initial Attempt:
Started with Code Llama, following the setup instructions provided here. Downloaded the 7B model and adapted code similar to example_completion.py for local testing. Ran into an issue: the code requires a GPU, which the installation instructions do not mention explicitly.
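Since the reference example assumes a CUDA device, a preflight check can surface the missing-GPU problem up front instead of failing mid-run. This is a best-effort sketch (probing for `nvidia-smi`), not part of the Code Llama setup:

```python
import shutil
import subprocess


def gpu_available() -> bool:
    """Best-effort check for an NVIDIA GPU before launching a GPU-only example."""
    # nvidia-smi ships with the NVIDIA driver; absence strongly suggests no GPU
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # Exit code 0 means the driver found at least one device
        return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0
    except OSError:
        return False


if not gpu_available():
    print("No NVIDIA GPU detected; this example will not run on CPU as-is.")
```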
Troubleshooting:
Attempted cloud options for the LLM, including the Code Llama Hugging Face model here.
It worked for smaller text inputs but returned errors once the input text exceeded a certain length.
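One way to work around input-length errors is to guard the prompt size before sending it to the hosted model. The sketch below uses a coarse characters-per-token heuristic; both the ratio and the token limit are assumptions for illustration, not documented values for the Hugging Face endpoint:

```python
MAX_INPUT_TOKENS = 4096  # assumed context limit; not a documented value
CHARS_PER_TOKEN = 4      # coarse heuristic for English-like text


def truncate_for_model(text: str, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Trim text to a rough token budget before calling a hosted LLM."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= max_chars else text[:max_chars]
```

For production use, a real tokenizer (e.g. the model's own, via the `transformers` library) would give an exact count; this heuristic only prevents the most obvious over-length requests.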
Extracted from https://github.com/onaio/canopy/issues/3094
Next Steps:
Currently exploring two additional cloud options:
Resources:
Open-source Code LLMs: https://github.com/eugeneyan/open-llms#open-llms-for-code