An intelligent AI support assistant built with the Julep platform to help developers understand and use Julep effectively. This project demonstrates best practices for building AI applications with Julep, including web crawling, document indexing, and conversational AI capabilities.
- Intelligent Documentation Crawling: Automatically crawls and indexes Julep documentation
- RAG-powered Conversations: Uses Retrieval-Augmented Generation for accurate, context-aware responses
- Interactive Chat Interface: Built with Chainlit for a smooth user experience
- Feedback System: Collects and validates user feedback for continuous improvement
- Workflow Examples: Demonstrates complex Julep workflows including web crawling and document processing
julep-assistant/
├── agent.yaml # Agent configuration (name, instructions, model)
├── task/ # Julep task definitions
│ ├── main.yaml # Main workflow task
│ ├── crawl.yaml # Web crawling sub-task
│ └── full_task.yaml # Complete task with all steps
├── scripts/ # Utility scripts
│ ├── crawler.py # Standalone web crawler
│ └── indexer.py # Document indexing utility
├── chainlit-ui/ # Web interface
│ ├── app.py # Main Chainlit application
│ ├── feedback/ # Feedback handling system
│ └── requirements.txt # Python dependencies
└── julep-assistant-notebook.ipynb # Interactive notebook demo
- Python 3.8+
- Julep API key (get one at platform.julep.ai)
- Spider API key for web crawling
- Clone the repository:
git clone https://github.com/yourusername/julep-assistant.git
cd julep-assistant
- Install dependencies:
pip install -r chainlit-ui/requirements.txt
- Set up environment variables:
# Create a .env file in the project root
cp .env.example .env
# Edit .env and add your API keys:
# JULEP_API_KEY=your_julep_api_key_here
# AGENT_UUID=ce7be83e-db8b-4ba9-808e-7cade6812e98 # Or create your own
# SPIDER_API_KEY=your_spider_api_key_here # Optional, for web crawling
cd chainlit-ui
chainlit run app.py
This will start the web interface at http://localhost:8000
Open julep-assistant-notebook.ipynb to explore:
- Creating and configuring Julep agents
- Defining and executing tasks
- Setting up RAG-powered conversations
- Monitoring task executions
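For a quick sense of what the notebook walks through, the snippet below is a minimal sketch of the same flow using the Julep Python SDK. The model string, task input, and polling loop are illustrative, and it assumes the task/ layout shown above:

```python
import os
import time

import yaml
from julep import Julep  # pip install julep

client = Julep(api_key=os.environ["JULEP_API_KEY"])

# Create the assistant agent (agent.yaml holds the equivalent configuration)
agent = client.agents.create(
    name="Julep Assistant",
    model="claude-3.5-sonnet",  # illustrative model string
    about="Helps developers understand and use Julep.",
)

# Register a task definition for this agent, e.g. task/main.yaml
with open("task/main.yaml") as f:
    task_def = yaml.safe_load(f)
task = client.tasks.create(agent_id=agent.id, **task_def)

# Start an execution and poll until it finishes
execution = client.executions.create(task_id=task.id, input={"topic": "sessions"})
while (status := client.executions.get(execution.id).status) not in ("succeeded", "failed", "cancelled"):
    time.sleep(2)
print(status)
```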
Web Crawler:
python scripts/crawler.py --url https://docs.julep.ai --max-pages 100
Document Indexer:
python scripts/indexer.py
- Agent Configuration: The assistant is configured with specific instructions and knowledge about Julep
- Document Indexing: Crawls and indexes Julep documentation for RAG
- Hybrid Search: Uses both vector and text search for optimal retrieval
- Contextual Responses: Generates accurate answers based on retrieved documentation
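As a rough sketch of that pipeline in code (method names follow the Julep Python SDK's agents.docs and sessions interfaces; the document text and question are placeholders):

```python
import os

from julep import Julep  # pip install julep

client = Julep(api_key=os.environ["JULEP_API_KEY"])
agent_id = os.environ["AGENT_UUID"]

# Index a crawled documentation page as an agent document, so it can be
# recalled during conversations
client.agents.docs.create(
    agent_id=agent_id,
    title="Julep sessions overview",
    content="Sessions are stateful conversations between a user and an agent...",
)

# Open a session and ask a question; the platform retrieves relevant docs
# and grounds the answer in them
session = client.sessions.create(agent=agent_id)
response = client.sessions.chat(
    session_id=session.id,
    messages=[{"role": "user", "content": "How do I create a session in Julep?"}],
)
print(response.choices[0].message.content)
```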
- Defines the assistant's personality, capabilities, and instructions
- Uses the Claude 3.5 Sonnet model for high-quality responses
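A small sketch of registering that configuration programmatically, assuming agent.yaml holds the fields listed above (name, instructions, model) and reusing the AGENT_UUID from the environment so repeated runs update the same agent rather than creating duplicates:

```python
import os

import yaml
from julep import Julep

client = Julep(api_key=os.environ["JULEP_API_KEY"])

# agent.yaml carries the agent's name, model, and instructions
with open("agent.yaml") as f:
    agent_def = yaml.safe_load(f)

# A fixed UUID makes the call idempotent across reruns
agent = client.agents.create_or_update(
    agent_id=os.environ["AGENT_UUID"],
    **agent_def,
)
print(agent.id, agent.model)
```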
- main.yaml: Entry point workflow
- crawl.yaml: Web crawling sub-task with Spider integration
- full_task.yaml: Complete workflow including crawling and indexing
- Interactive chat interface
- Session management
- Feedback collection and validation
- Real-time streaming responses
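The snippet below is a minimal sketch of how these pieces fit together, not the actual app.py: it assumes the Julep sessions interface used above and leaves out streaming and the feedback flow.

```python
import os

import chainlit as cl
from julep import Julep

client = Julep(api_key=os.environ["JULEP_API_KEY"])
AGENT_UUID = os.environ.get("AGENT_UUID")


@cl.on_chat_start
async def start():
    # One Julep session per Chainlit chat keeps conversation state on the platform
    session = client.sessions.create(agent=AGENT_UUID)
    cl.user_session.set("session_id", session.id)


@cl.on_message
async def on_message(message: cl.Message):
    session_id = cl.user_session.get("session_id")
    response = client.sessions.chat(
        session_id=session_id,
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=response.choices[0].message.content).send()
```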
| Variable | Description | Required |
|---|---|---|
| JULEP_API_KEY | Your Julep API key | Yes |
| AGENT_UUID | UUID for the Julep agent | No (uses default) |
| JULEP_ENV | Julep environment (production/development) | No (defaults to production) |
| SPIDER_API_KEY | Spider API key for web crawling | No (only for crawling tasks) |
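In the Python code these variables are read after loading the .env file created during setup, for example (a small sketch assuming python-dotenv is installed):

```python
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # reads the .env file in the project root

julep_api_key = os.environ["JULEP_API_KEY"]        # required
agent_uuid = os.getenv("AGENT_UUID")               # optional; a default agent is used if unset
julep_env = os.getenv("JULEP_ENV", "production")   # optional; defaults to production
spider_api_key = os.getenv("SPIDER_API_KEY")       # optional; only needed for crawling tasks
```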
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Julep AI - The platform for building stateful AI applications
- UI powered by Chainlit
- Web crawling by Spider
For questions about this project, please open an issue on GitHub.
For Julep-specific questions, visit the Julep Documentation or join the Julep Discord.