Nexent is an open-source agent platform that turns process-level natural language into complete multimodal agents — no diagrams, no wiring. Built on the MCP tool ecosystem, Nexent provides model integration, data processing, knowledge-base management, and zero-code agent development. Our goal is simple: to bring data, models, and tools together in one smart hub, making daily workflows smarter and more connected.
One prompt. Endless reach.
- 🌐 Visit our official website to learn more
- 🚀 Try it now to experience the power of Nexent
🎬 Demo video: Nexent.Demo.mp4
If you want to go fast, go alone; if you want to go far, go together.
We have released Nexent v1, and the platform is now relatively stable. However, there may still be some bugs, and we are continuously improving and adding new features. Stay tuned: we will announce v2.0 soon!
- 🗺️ Check our Feature Map to explore current and upcoming features.
- 🔍 Try the current build and leave ideas or bugs in the Issues tab.
Rome wasn't built in a day.
If our vision speaks to you, jump in via the Contribution Guide and shape Nexent with us.
Early contributors won't go unnoticed: from special badges and swag to other tangible rewards, we're committed to thanking the pioneers who help bring Nexent to life.
Most of all, we need visibility. Star ⭐ and watch the repo, share it with friends, and help more developers discover Nexent — your click brings new hands to the project and keeps the momentum growing.
| Resource | Minimum |
|---|---|
| CPU | 2 cores |
| RAM | 6 GiB |
| Software | Docker & Docker Compose installed |
```shell
git clone https://github.com/ModelEngine-Group/nexent.git
cd nexent/docker
cp .env.example .env  # fill in only the necessary configs
bash deploy.sh
```
When the containers are running, open http://localhost:3000 in your browser and follow the setup wizard.
We recommend the following model providers:
| Model Type | Provider | Notes |
|---|---|---|
| LLM & VLLM | Silicon Flow | Free tier available |
| LLM & VLLM | Alibaba Bailian | Free tier available |
| Embedding | Jina | Free tier available |
| TTS & STT | Volcengine Voice | Free for personal use |
| Search | EXA | Free tier available |
You'll need to input the following information in the model configuration page:
- Base URL
- API Key
- Model Name
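If you want to sanity-check these three values before entering them, the sketch below shows how they map onto a request, assuming your provider exposes an OpenAI-compatible endpoint (the Base URL and Model Name shown are illustrative placeholders, not Nexent defaults):

```python
# Hedged sketch: assembles (but does not send) a chat-completion request
# from the three values entered on the model configuration page.
# BASE_URL, API_KEY, and MODEL_NAME below are illustrative assumptions.
import json
import urllib.request

BASE_URL = "https://api.siliconflow.cn/v1"      # Base URL (example)
API_KEY = "sk-your-api-key"                     # API Key (placeholder)
MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"         # Model Name (example)

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Map the three configured values onto an OpenAI-compatible call."""
    body = json.dumps({
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("ping")
print(req.full_url)  # the endpoint the configured Base URL resolves to
```

Sending the request (e.g. via `urllib.request.urlopen(req)`) with a valid key should return a JSON completion; a 401 means the API Key is wrong, a 404 usually means the Base URL or Model Name is.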
The following configurations need to be added to your .env file (we'll make these configurable through the frontend soon):
- TTS and STT related configurations
- EXA search API Key
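A .env fragment for these might look like the sketch below. The variable names here are illustrative assumptions, not the authoritative keys; check the comments in `.env.example` under `nexent/docker` for the exact names your build expects.

```shell
# Illustrative .env fragment (variable names are assumptions; see .env.example)

# TTS / STT (Volcengine Voice)
VOICE_APP_ID=your-volcengine-app-id
VOICE_ACCESS_TOKEN=your-volcengine-access-token

# EXA web search
EXA_API_KEY=your-exa-api-key
```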
ℹ️ While core feature development is ongoing, only the Jina embedding model is supported; other embedding models will be added in future releases. For Jina API key setup, please refer to our FAQ.
- Browse the FAQ for common install issues.
- Drop questions in our Discord community.
- File bugs or feature ideas in GitHub Issues.
Want to build from source or add new features? Check the Contribution Guide for step-by-step instructions.
Prefer to run Nexent from source code? Follow our Developer Guide for detailed setup instructions and customization options.
1. **Smart agent prompt generation**: Turn plain language into runnable prompts. Nexent automatically chooses the right tools and plans the best action path for every request.
2. **Scalable data processing engine**: Process 20+ data formats with fast OCR and table-structure extraction, scaling smoothly from a single process to large-batch pipelines.
3. **Personal-grade knowledge base**: Import files in real time, auto-summarize them, and let agents instantly access both personal and global knowledge, while also knowing what each knowledge base can provide.
4. **Internet knowledge search**: Connect to 5+ web search providers so agents can mix fresh internet facts with your private data.
5. **Knowledge-level traceability**: Serve answers with precise citations from web and knowledge-base sources, making every fact verifiable.
6. **Multimodal understanding & dialogue**: Speak, type, upload files, or show images. Nexent understands voice, text, and pictures, and can even generate new images on demand.
7. **MCP tool ecosystem**: Drop in or build Python plug-ins that follow the MCP spec; swap models, tools, and chains without touching core code.
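To make the plug-in idea concrete, here is a hedged, stdlib-only sketch of the shape of an MCP-style tool: a handler function plus the JSON-Schema descriptor an MCP server advertises to clients. The tool name and fields below are hypothetical examples, not part of Nexent's actual plug-in API; a real plug-in would register the tool through an MCP server SDK.

```python
# Hypothetical MCP-style tool sketch (not Nexent's real plug-in API):
# a handler plus the descriptor an MCP server would list for clients.
import json

def get_weather(city: str) -> str:
    """Toy handler; a real plug-in would call an actual service."""
    return f"Sunny in {city}"  # placeholder result

# Tool descriptor, following the name/description/inputSchema shape
# that the MCP spec uses for tool listings.
TOOL_DESCRIPTOR = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

print(json.dumps(TOOL_DESCRIPTOR, indent=2))
print(get_weather("Paris"))
```

Because tools are described by schema rather than hard-wired into the agent, swapping or adding one is a matter of registering a new descriptor, which is what lets Nexent change tools without touching core code.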
1. 📝 Code Output May Be Misinterpreted as Executable
In Nexent conversations, if the model outputs code-like text, it may sometimes be misinterpreted as something that should be executed. We will fix this as soon as possible.
We welcome all kinds of contributions! Whether you're fixing bugs, adding features, or improving documentation, your help makes Nexent better for everyone.
If you are an external developer and want to contribute to this project, please follow these steps:
1. **Fork the repository**: Click the "Fork" button at the top right of the repository page to create your own copy.
2. **Clone your fork**: Run `git clone https://github.com/your-username/your-forked-repo.git` to clone your fork to your local machine.
3. **Commit and push your changes**: Make your changes, then run `git add .`, `git commit -m "Your message"`, and `git push origin` to push them to your fork.
4. **Open a Pull Request**: Go to your forked repository on GitHub, click the "Contribute" button, and select "Open Pull Request" to propose merging your changes into the main repository.
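The clone, commit, and push steps can be rehearsed end to end without touching GitHub. The sketch below uses a temporary local bare repository as a stand-in for your fork (the file name and commit message are placeholders):

```shell
# Dry run of the clone -> commit -> push workflow against a local
# bare repository standing in for your GitHub fork (no network needed).
set -e
tmp=$(mktemp -d)
git init --bare --quiet "$tmp/fork.git"              # stand-in for your fork
git clone --quiet "$tmp/fork.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email "you@example.com"              # local identity for the demo
git config user.name "Your Name"
echo "my fix" > fix.txt                              # placeholder change
git add .
git commit --quiet -m "Your message"
git push --quiet origin HEAD                         # the "fork" now has the commit
git -C "$tmp/fork.git" log -1 --format=%s            # shows the pushed commit subject
```

With a real fork, the only differences are the remote URL and the Pull Request step at the end.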
Please make sure your PR follows the project's contribution guidelines and passes all required checks.
- 📖 Read our Contribution Guide to get started
- 🐛 Report bugs or suggest features in GitHub Issues
- 💬 Join our Discord community to discuss ideas
Join our Discord community to chat with other developers and get help!
Nexent is licensed under the MIT License with additional conditions. Please read the LICENSE file for details.