A REST API service that processes incoming webhook requests using an LLM (Large Language Model) and sends the generated responses to specified callback endpoints.
- Asynchronous webhook processing with Redis message broker
- OpenAI GPT integration for text generation
- Conversation history tracking with PostgreSQL
- Rate limiting for LLM API calls
- Docker containerization
- OpenAPI documentation
- Comprehensive test coverage
- Python 3.12
- Litestar (FastAPI-like web framework)
- PostgreSQL (conversation storage)
- Redis (message broker)
- OpenAI API (LLM provider)
- Docker & Docker Compose
- Poetry (dependency management)
- Alembic (database migrations)
- Pytest (testing)
- Docker and Docker Compose
- Python 3.12+
- Poetry
- OpenAI API key
- Clone the repository:
git clone https://github.com/ACK1D/webhook-llm.git
cd webhook-llm
- Install dependencies:
poetry install
- Create a .env file with the following variables:
OPENAI_API_KEY=<your_openai_api_key>
DATABASE_URL=<your_database_url>
REDIS_URL=<your_redis_url>
RPM_LIMIT=<your_rpm_limit>
or use the .env.example file as a template and fill in the missing values:
cp .env.example .env
- Start the services:
docker compose up -d
GET / Response:
{"status": "ok", "service": "LLM Webhook Service", "version": "0.1.0"}
POST /webhook Request:
{
"message": "Hello, how are you?",
"callback_url": "https://example.com/callback"
}
Response:
{
"status": "accepted",
"message": "Request placed in queue",
"conversation_id": "904d270d-69ba-4057-b712-b349d4ff0d5a"
}
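The request above can also be sent from Python. The payload fields follow the POST /webhook example in this README; the base URL (localhost:8000) is an assumption for a local Docker Compose deployment and should be adjusted to wherever the API service is exposed:

```python
import json
import urllib.request

# Payload matching the POST /webhook schema shown above.
payload = {
    "message": "Hello, how are you?",
    "callback_url": "https://example.com/callback",
}

# The base URL is an assumption: point it at your running API service.
req = urllib.request.Request(
    "http://localhost:8000/webhook",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment against a running service
```

On success the service replies with "status": "accepted" and a conversation_id, and the generated answer arrives later at the callback_url.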
- GET /schema (ReDoc)
- GET /schema/swagger (Swagger)
- GET /schema/elements (Elements)
- GET /schema/rapidoc (RapiDoc)
The service consists of three main components:
- API Service: Handles incoming webhook requests and queues them in Redis
- Worker Service: Processes queued messages, calls OpenAI API, and sends responses
- Database: Stores conversation history and messages
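The worker's core loop can be sketched as below. The queue name, message shape, and helper names (generate, deliver) are illustrative assumptions, not the project's actual identifiers:

```python
import json

def process_message(raw: str, generate, deliver) -> dict:
    """Handle one queued webhook job.

    raw      -- JSON string taken from the Redis queue
    generate -- callable(message) -> LLM reply text (e.g. an OpenAI call)
    deliver  -- callable(callback_url, payload) -> None (HTTP POST to the callback)
    """
    job = json.loads(raw)
    reply = generate(job["message"])        # call the LLM provider
    payload = {
        "conversation_id": job["conversation_id"],
        "response": reply,
    }
    deliver(job["callback_url"], payload)   # send the result to the callback URL
    return payload

# In the real service the loop would block on Redis, roughly:
#   _, raw = redis.blpop("webhook_queue")
#   process_message(raw, call_openai, post_callback)
```

Injecting generate and deliver as callables keeps the queue-handling logic testable without Redis, OpenAI, or a network.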
- Run tests:
poetry run pytest
- Run API locally:
poetry run python -m app.main
- Run Worker locally:
poetry run python -m app.worker
The service can be configured using environment variables:
- OPENAI_API_KEY: OpenAI API key
- OPENAI_MODEL: Model to use (default: gpt-3.5-turbo)
- RPM_LIMIT: Rate limit for OpenAI API calls (requests per minute)
- REDIS_URL: Redis connection URL
- DATABASE_URL: PostgreSQL connection URL
- DEBUG: Enable debug mode
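The RPM_LIMIT semantics can be illustrated with a minimal in-process sliding-window limiter. This is a sketch only; the service's actual limiter may work differently (for example, Redis-backed to coordinate across workers):

```python
import time
from collections import deque

class RpmLimiter:
    """Minimal sliding-window limiter for RPM_LIMIT (illustrative only)."""

    def __init__(self, rpm_limit: int):
        self.rpm_limit = rpm_limit
        self.calls: deque = deque()  # timestamps of recent calls

    def allow(self, now: float = None) -> bool:
        """Return True and record the call if under the per-minute limit."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that fell out of the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) < self.rpm_limit:
            self.calls.append(now)
            return True
        return False
```

A worker would check allow() before each OpenAI call and requeue or delay the job when it returns False.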
- app/: Application code
- tests/: Test code
- docker/: Dockerfiles
- compose.yml: Docker Compose file
- alembic.ini: Alembic configuration
- app/migrations/: Database migrations
- app/config.py: Configuration settings
- app/models/: Database models
- app/services/: Service code
- app/main.py: Main application entry point
- app/worker.py: Worker entry point