Showing 279 changed files with 144 additions and 59 deletions.
{"pageProps":{"frontmatter":{"title":"About"},"content":"\nLarge Model Systems Organization (LMSYS Org) is an open research organization founded by students and faculty from UC Berkeley in collaboration with UCSD and CMU.\n\nWe aim to make large models accessible to everyone by co-development of open models, datasets, systems, and evaluation tools. Our work encompasses research in both machine learning and systems. We train large language models and make them widely available, while also developing distributed systems to accelerate their training and inference.\n\n### Members\n**Student Team** \n[Lianmin Zheng](https://lmzheng.net/), [Ying Sheng](https://sites.google.com/view/yingsheng/home), [Wei-Lin Chiang](https://infwinston.github.io/), [Lisa Dunlap](https://lisabdunlap.com), [Shiyi Cao](https://shiyicao.com/), [Tianle Li](https://codingwithtim.github.io/), [Christopher Chou](https://github.com/BabyChouSr), [Isaac Ong](https://isaacong.me), [Dacheng Li](https://dachengli1.github.io/), [Zhuohan Li](https://people.eecs.berkeley.edu/~zhuohan/), [Zi Lin](https://zi-lin.com/), [Zhanghao Wu](https://zhanghaowu.me/), [Shuo Yang](https://github.com/andy-yang-1), [Siyuan Zhuang](https://github.com/suquark), [Yonghao Zhuang](https://github.com/ZYHowell)\n\n**Faculty Team** \n[Joseph E. Gonzalez](https://people.eecs.berkeley.edu/~jegonzal/), [Ion Stoica](https://people.eecs.berkeley.edu/~istoica/), [Eric P. Xing](http://www.cs.cmu.edu/~epxing/), [Hao Zhang](https://people.eecs.berkeley.edu/~hao/), [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/)\n\n**Institutions** \nUC Berkeley, UCSD, CMU, MBZUAI\n\n### Contact us\n- Email us at [[email protected]](mailto:[email protected]).\n- Join us on [discord](https://discord.com/invite/HSWAKCrnFx).\n- Follow us on [twitter](https://twitter.com/lmsysorg).\n"},"__N_SSG":true} |
{"pageProps":{"frontmatter":{"title":"RouteLLM: An Open-Source Framework for Cost-Effective LLM Routing","author":"Isaac Ong*, Amjad Almahairi*, Vincent Wu, Wei-Lin Chiang, Tianhao Wu, Joseph E. Gonzalez, M Waleed Kadous, Ion Stoica","date":"July 1, 2024","previewImg":"/images/blog/routellm/cover.png"},"content":"\nLLMs have demonstrated remarkable capabilities across a range of tasks, but there exists wide variation in their costs and capabilities, as seen from the plot of performance against cost in Figure 1. Very broadly, more capable models tend to be more expensive than less capable models. This leads to a dilemma when deploying LLMs in the real-world - routing all queries to the largest, most capable model leads to the highest-quality responses but can be expensive, while routing queries to smaller models can save costs but may result in lower-quality responses.\n\n<img src=\"/images/blog/routellm/main.png\" style=\"display:block; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 50%\"></img>\n\n<p style=\"color:gray; text-align: center;\">Figure 1: Plot of performance against cost of various LLMs. Performance is measured by Elo on Chatbot Arena, and cost per million tokens assuming a 1:1 input / output ratio. Through routing between two models, we ideally achieve a better performance:cost ratio than can be achieved with either model.</p>\n\n*LLM routing* offers a solution to this problem, whereby each query is first processed by a system that decides which LLM to route it to. Ideally, the system should route all queries that can be sufficiently handled by weaker models to such models, and all other queries to stronger models, minimizing cost while maintaining response quality. However, this turns out to be a challenging problem because the routing system has to infer both the characteristics of an incoming query and different models’ capabilities before routing.\n\nTo tackle this, we present **RouteLLM**, a principled framework for LLM routing based on preference data. We formalize the problem of LLM routing and explore augmentation techniques to improve router performance. We trained four different routers using public data from Chatbot Arena and demonstrate that they can significantly reduce costs without compromising quality, with **cost reductions of over 85% on MT Bench, 45% on MMLU, and 35% on GSM8K** as compared to using only GPT-4, while still achieving 95% of GPT-4 performance. We also publicly release all our code and datasets, including a new [open-source framework](https://github.com/lm-sys/RouteLLM) for serving and evaluating LLM routers.\n\n## Routing Setup\n\nIn our routing setup, we focus on the case where there are two models: a stronger, more expensive model, and a weaker but cheaper model. Given this setup, our objective is to minimize costs while achieving high quality by routing between both models.\n\n<img src=\"/images/blog/routellm/metrics.png\" style=\"display:block; margin-top: auto; margin-left: auto; margin-right: auto; margin-bottom: auto; width: 45%\"></img>\n\n\n<p style=\"color:gray; text-align: center;\">Figure 2: Random router performance on MT Bench</p>\n\nThis is best understood through Figure 2, which represents the performance of a router that randomly routes between the two models on MT Bench. Specifically, we route between GPT-4 and Mixtral 8x7B here, with their performance denoted by the red and grey dotted lines respectively. 
To train our routers, we use *preference data*: each sample consists of a prompt and a comparison of the response quality of two models on that prompt, i.e., a win for the first model, a win for the second model, or a tie. Using preference data allows us to learn about the strengths and weaknesses of different models and how they relate to queries, which is effective for training routers. For our base dataset, we utilize [public data](https://huggingface.co/datasets/lmsys/lmsys-arena-human-preference-55k) from [Chatbot Arena](http://chat.lmsys.org). We also investigate *data augmentation* techniques to further improve performance using both golden-label datasets and an LLM judge.

We trained four routers using a mix of Chatbot Arena data and data augmentation:
- A similarity-weighted (SW) ranking router that performs a "weighted Elo calculation" based on similarity
- A matrix factorization model that learns a scoring function for how well a model can answer a prompt
- A BERT classifier that predicts which model can provide the better response
- A causal LLM classifier that also predicts which model can provide the better response
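As a rough illustration of the data these routers learn from (the field names and label encoding below are our assumptions, not the dataset's actual schema), one preference sample and a naive win-rate aggregation over such samples might look like:

```python
# One preference sample: a prompt plus a pairwise comparison of two models.
sample = {
    "prompt": "Explain the difference between a process and a thread.",
    "model_a": "gpt-4-1106-preview",
    "model_b": "mixtral-8x7b-instruct-v0.1",
    "label": "model_a",  # one of "model_a", "model_b", or "tie"
}

def win_rate(samples: list[dict], model: str) -> float:
    """Fraction of non-tie comparisons involving `model` that it wins."""
    wins = contests = 0
    for s in samples:
        if model not in (s["model_a"], s["model_b"]) or s["label"] == "tie":
            continue
        contests += 1
        winner = s[s["label"]]  # resolve "model_a"/"model_b" to a model name
        wins += winner == model
    return wins / contests if contests else 0.0
```

The routers go beyond this per-model aggregate: they condition on the prompt itself, so that the predicted winner can differ from query to query.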
## Results

We evaluated these routers on three popular benchmarks: [MT Bench](https://arxiv.org/abs/2306.05685), [MMLU](https://arxiv.org/abs/2009.03300), and [GSM8K](https://arxiv.org/abs/2110.14168), presenting results for MT Bench and MMLU below. For evaluation, we route between `gpt-4-1106-preview` as our strong model and `mixtral-8x7b-instruct-v0.1` as our weak model, using the random router from before as our baseline.

<figure style="text-align: center">
<img src="/images/blog/routellm/combined-mt-bench.png" style="display:block; margin: auto; width: 90%" />
</figure>

<p style="color:gray; text-align: center;">Figure 3: Router performance on MT Bench: (left) trained only on Arena data; (right) trained on Arena data augmented using an LLM judge.</p>

Figure 3 displays the performance of our routers on MT Bench. For routers trained only on the Arena dataset, we observe strong performance for both matrix factorization and SW ranking. Notably, matrix factorization achieves 95% of GPT-4's performance while routing only 26% of calls to GPT-4, approximately 48% cheaper than the random baseline.

Augmenting the Arena data using an LLM judge leads to significant improvements across all routers. When trained on this augmented dataset, matrix factorization is again the best-performing router, with the number of GPT-4 calls required to achieve 95% of GPT-4's performance further halved to 14% of total calls, 75% cheaper than the random baseline.

<figure style="text-align: center">
<img src="/images/blog/routellm/combined-mmlu.png" style="display:block; margin: auto; width: 90%" />
</figure>

<p style="color:gray; text-align: center;">Figure 4: Router performance on MMLU: (left) trained only on Arena data; (right) trained on Arena data augmented using golden-label data from the MMLU validation split.</p>

Conversely, on MMLU in Figure 4, all routers perform poorly, at a near-random level, when trained only on the Arena dataset, which we attribute to most MMLU questions being out-of-distribution. However, augmenting the training dataset using golden-label data from the MMLU validation split leads to significant performance improvements across all routers, with our best-performing causal LLM router now routing only 54% of calls to GPT-4 to achieve 95% of GPT-4's performance, 14% cheaper than the random baseline. Importantly, this augmented dataset of approximately 1500 samples represents less than 2% of the overall training data, demonstrating the effectiveness of data augmentation even when the number of samples is small.

### Generalizing to Other Models

While we route between GPT-4 and Mixtral in the above evaluations, to demonstrate the generalizability of our framework we also present MT Bench results when routing between a different model pair: Claude 3 Opus and Llama 3 8B. Importantly, we use the same routers *without any retraining*, and responses from Claude 3 Opus and Llama 3 8B are not present in our training data.

<img src="/images/blog/routellm/mt-bench-claude-llama.png" style="display:block; margin: auto; width: 45%" />

<p style="color:gray; text-align: center;">Figure 6: Router performance on MT Bench when routing between Claude 3 Opus and Llama 3 8B.</p>

Even with the model pair replaced, we observe strong results across all routers on MT Bench in Figure 6, with performance comparable to our original model pair. This suggests that our routers have learned common characteristics of problems that distinguish between strong and weak models, and that these generalize to new model pairs without additional training.

### RouteLLM vs Commercial Offerings

<figure style="text-align: center">
<img src="/images/blog/routellm/indep-benchmarks-llama.png" style="display:inline; margin: auto; width: 46%" />
<img src="/images/blog/routellm/indep-benchmarks.png" style="display:inline; margin: auto; width: 45%" />
</figure>

<p style="color:gray; text-align: center;">Figure 7: Comparison of our router against existing routing systems on MT Bench: (left) using gpt-4-turbo-2024-04-09 and llama-2-70b-chat; (right) using gpt-4-turbo-2024-04-09 and mixtral-8x7b-instruct-v0.1.</p>

In Figure 7, we also report the performance of our best-performing routers on MT Bench against [Martian](https://withmartian.com/) and [Unify AI](https://unify.ai/), two commercial LLM routing systems. We use `gpt-4-turbo-2024-04-09` as the strong model and `llama-2-70b-chat` or `mixtral-8x7b-instruct-v0.1` as the weak model, depending on the models available. Our routers demonstrate very competitive results, achieving the same performance as these commercial routers while being up to 40% cheaper.

## Conclusion

These results demonstrate the ability of our routers to achieve significant cost savings while maintaining a high quality of responses. They also highlight the effectiveness of data augmentation in improving routing performance using only a small amount of data, offering a scalable path towards improving routing performance for real-world use cases.

Based on our learnings from this research, we have created an open-source framework for serving and evaluating routers on [GitHub](https://github.com/lm-sys/RouteLLM). We are also releasing all our routers and datasets on [HuggingFace](https://huggingface.co/routellm) for public use.
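As a sketch of how a trained router slots into application code (the `router.score` method, `llm_client.complete` call, and threshold value below are hypothetical stand-ins, not the framework's actual API; see the GitHub repository for that), thresholding the router's score is all the serving path needs:

```python
# Hypothetical glue code: route each query by thresholding a router score.
STRONG = "gpt-4-1106-preview"        # strong, expensive model
WEAK = "mixtral-8x7b-instruct-v0.1"  # weak, cheap model
THRESHOLD = 0.3  # tuned on held-out data for the desired cost budget

def answer(prompt: str, router, llm_client) -> str:
    """Send the prompt to the strong model only when the router deems it necessary."""
    p_strong_needed = router.score(prompt)  # hypothetical: P(weak model loses)
    model = STRONG if p_strong_needed >= THRESHOLD else WEAK
    return llm_client.complete(model=model, prompt=prompt)
```

The threshold is the single knob: raising it saves cost, lowering it recovers quality, exactly the trade-off the curves above quantify.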
We are excited to see what you build on top of this! Please let us know if you face any issues or have any suggestions. For the full details, please refer to our [arXiv](https://arxiv.org/abs/2406.18665) paper.

## Acknowledgements

We are grateful to Tyler Griggs for his valuable feedback on this post.

## Citations

```
@misc{ong2024routellmlearningroutellms,
  title={RouteLLM: Learning to Route LLMs with Preference Data},
  author={Isaac Ong and Amjad Almahairi and Vincent Wu and Wei-Lin Chiang and Tianhao Wu and Joseph E. Gonzalez and M Waleed Kadous and Ion Stoica},
  year={2024},
  eprint={2406.18665},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2406.18665},
}

@misc{chiang2024chatbot,
  title={Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference},
  author={Wei-Lin Chiang and Lianmin Zheng and Ying Sheng and Anastasios Nikolas Angelopoulos and Tianle Li and Dacheng Li and Hao Zhang and Banghua Zhu and Michael Jordan and Joseph E. Gonzalez and Ion Stoica},
  year={2024},
  eprint={2403.04132},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
}

@misc{ding2024hybridllmcostefficientqualityaware,
  title={Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing},
  author={Dujian Ding and Ankur Mallick and Chi Wang and Robert Sim and Subhabrata Mukherjee and Victor Ruhle and Laks V. S. Lakshmanan and Ahmed Hassan Awadallah},
  year={2024},
  eprint={2404.14618},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2404.14618},
}
```
Renamed with 2 changes (1 addition, 1 deletion): ...c/e3zFBpj5RhuZsdTLsefDY/_buildManifest.js → ...c/QR8Ie-cIm8I8VX1A2KzTJ/_buildManifest.js