Prompt and test improvements, cost tracking #237
Conversation
…ary to do it in agents, prompt improvements
Finished benchmarks. Total run time: 11.45 minutes.
/workflows/benchmarks agents/token/test_swap.py,agents/token/test_swap_and_send.py
Finished benchmarks. Total run time: 21.34 minutes.
/workflows/benchmarks agents/token/research
Finished benchmarks. Total run time: 41.17 minutes.
/workflows/benchmarks agents/token/research/test_research_and_swap.py::test_research_and_swap_many_tokens_subjective_simple 10
Finished benchmarks. Total run time: 18.07 minutes.
Co-authored-by: Cesar Brazon <[email protected]>
…lywrap/AutoTx into nerfzael/cost-and-prompt-improvements
/workflows/benchmarks agents/token 1
Finished benchmarks. Total run time: 15.22 minutes.
Closes: #236
Changes:
- `--cache` option (default: False) to cache LLM requests (saves costs when developing)
- `--max-rounds` option to change the maximum number of rounds (default: 100 in the CLI, 50 in tests)
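As a rough illustration of how two options like these could be wired up, here is a minimal sketch using `argparse`. This is not the actual AutoTx implementation; the parser name, defaults, and help strings are assumptions based only on the change list above.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the two new CLI options; the real AutoTx
    # CLI wiring may differ.
    parser = argparse.ArgumentParser(prog="autotx")
    parser.add_argument(
        "--cache",
        action="store_true",
        default=False,
        help="Cache LLM requests (saves costs when developing)",
    )
    parser.add_argument(
        "--max-rounds",
        type=int,
        default=100,
        help="Maximum number of agent rounds (tests use 50)",
    )
    return parser

if __name__ == "__main__":
    # With no flags, the defaults described in the PR apply.
    args = build_parser().parse_args([])
    print(args.cache, args.max_rounds)
```

A flag-style boolean (`store_true`) matches the described default of False: passing `--cache` enables caching, omitting it leaves requests uncached.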