Howdy! Right now, the openai client is only compatible with language modeling tasks, and the tasks in long_context_tasks.yaml are all question_answering tasks. That being said, we're currently working on supporting the entire gauntlet for the openai client.
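In practice, that distinction shows up in the icl_task_type field of each task entry. Here is a minimal sketch for illustration: the first entry mirrors the stock language-modeling tasks shipped with llm-foundry, while the second entry's label and dataset path are placeholders standing in for the question_answering tasks in long_context_tasks.yaml.

```yaml
icl_tasks:
# Scored via next-token log probabilities -- the kind of task the openai client can run today.
- label: jeopardy
  dataset_uri: eval/local_data/world_knowledge/jeopardy_all.jsonl
  num_fewshot: [10]
  icl_task_type: language_modeling
  continuation_delimiter: "\nAnswer: "
# Requires free-form generation and answer matching -- not yet supported by the openai client.
- label: example_long_context_qa                                # placeholder label
  dataset_uri: eval/local_data/long_context/example_task.jsonl  # placeholder path
  num_fewshot: [0]
  icl_task_type: question_answering
```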
We used this branch. hf_eval.yaml, which I've included below, is an example of how we performed long_context evals. Please note this branch is under active development and might change without warning.
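For reference, hf_eval.yaml in scripts/eval/yamls generally has the shape sketched below. This is an illustrative sketch rather than the exact file from that branch; the model name, sequence length, batch size, and task file are placeholders.

```yaml
seed: 1
max_seq_len: 2048            # placeholder; a long_context run would use a much larger value
precision: amp_bf16

models:
- model_name: mosaicml/mpt-7b                     # placeholder model
  model:
    name: hf_causal_lm
    pretrained_model_name_or_path: mosaicml/mpt-7b
    init_device: mixed
    pretrained: true
  tokenizer:
    name: mosaicml/mpt-7b
    kwargs:
      model_max_length: ${max_seq_len}

device_eval_batch_size: 4
icl_tasks: 'eval/yamls/long_context_tasks.yaml'   # placeholder; point at the task file you want to run
```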
@maxisawesome @bmosaicml Hi, could you please give some advice on this issue? Thanks!
Environment
CentOS
python 3.10.13
llm-foundry==0.7.0
git clone https://github.com/mosaicml/llm-foundry.git
cd llm-foundry
pip install cmake packaging
pip install -e ".[gpu]"
pip install flash-attn==2.5.6
To reproduce
Steps to reproduce the behavior:
1. Configure eval/yamls/openai_eval.yaml to run the long_context tasks.
2. Run: composer eval/eval.py eval/yamls/openai_eval.yaml
3. The evaluation fails with a KeyError.
Expected behavior
Successful evaluation of the long_context_tasks.
Additional context