✨SuperAGI v0.0.14✨
🚀 Enhanced Local LLM Support with Multi-GPU 🎉
New Feature Highlights 🌟
⚙️ Local Large Language Model (LLM) Integration:
- SuperAGI now supports the use of local large language models, allowing users to leverage their own models seamlessly within the SuperAGI framework.
- Easily configure and integrate your preferred LLMs for enhanced customization and control over your AI agents.
⚡️ Multi-GPU Support:
- SuperAGI now provides multi-GPU support for improved performance and scalability.
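In a Docker Compose setup, multi-GPU access is typically granted through a device reservation on the GPU-enabled services. The stanza below is a minimal sketch using the standard Compose specification syntax; the service names and the exact shape of SuperAGI's own `docker-compose-gpu.yml` may differ.

```yaml
# Hypothetical sketch: standard Compose syntax for exposing host GPUs
# to a service. SuperAGI's actual docker-compose-gpu.yml may differ.
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all          # or an integer to pin a specific number of GPUs
              capabilities: [gpu]
```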
How to Use
To enable a local Large Language Model (LLM) with multi-GPU support, follow these steps:
- LLM Integration:
  - Add your model path to the `celery` and `backend` volumes in the `docker-compose-gpu.yml` file (a sketch of what this can look like follows this list).
  - Run the command: `docker compose -f docker-compose-gpu.yml up --build`
  - Open `localhost:3000` in your browser.
  - Add a local LLM model from the model section.
  - Use the added model for running your agents.
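As a rough illustration of the volume step above, the mounts might look like the following. Both `/path/to/your/models` and the container-side mount point are placeholders and assumptions, not SuperAGI's documented layout:

```yaml
# Hypothetical sketch of the volume mounts described above.
# /path/to/your/models is a placeholder for your local model directory;
# the container-side path is an assumption, not SuperAGI's documented layout.
services:
  backend:
    volumes:
      - /path/to/your/models:/app/local_model_path
  celery:
    volumes:
      - /path/to/your/models:/app/local_model_path
```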
What’s Changed
- Local LLM Integration with Multi-GPU Support by @rounak610 in #1391, #1351, #1306