Disclaimer: This project is pre-alpha and not ready for use. We are currently in the process of building out the core functionality. Please check back soon!
Welcome to DungeonDriver, your indispensable ally for creating truly immersive tabletop gaming experiences! Whether you're weaving intricate narratives, managing epic encounters, or guiding a band of adventurers through a world of your creation, our tool stands ready to help. Our goal is to provide a dynamic set of tools that allows you to focus on storytelling and player engagement, while we handle the logistical details—all from the comfort of your own hardware.
Are you ready to revolutionize your role as a Dungeon Master? Gather your party, embark on tales of suspense and excitement, and let DungeonDriver be your trusted co-pilot in managing the game. Get ready to elevate your Dungeon Master experience like never before!
- Cloud-Based Inference
- Local Inference (GPU & CPU)
- Game Manual Q&A
- RPG-Focused Stable Diffusion Prompt Generation
- LLM Chat with Session History
- Dynamic Campaign Creation: Utilize our intelligent assistant to generate and manage intricate and immersive campaigns tailored to your party's preferences.
- Adaptive Encounters: Experience the ease of running thrilling battles and challenges that adapt in real-time based on player actions and decisions.
- Automated World-Building: Leverage the Dungeon Master assistant to build rich, dynamic worlds filled with diverse locations, NPCs, and events.
- Player Engagement Tools: Enhance your games with tools designed to engage players, track character progression, and share memorable moments.
- Real-Time Assistance: Get real-time suggestions for game scenarios, rules references, and narrative prompts, allowing you to focus more on storytelling and less on logistics.
- Community Connection: Share your unique campaigns, learn from other Dungeon Masters, and celebrate your victories within our DM assistant community.
This project is split into two main components:
- DungeonDriver: An interface containing tabletop RPG logic that communicates with the AI Driver inference server.
- AI Driver: A FastAPI inference server that handles all of the details of running inference on the models.
These components are currently set up to run locally through docker-compose but can easily be split into separate containers for deployment.
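For orientation, a compose file along the following lines could wire the two services together. This is a minimal sketch only; the service names, ports, and port mappings are illustrative assumptions, not the repository's actual docker-compose.yml.

```yaml
# Hypothetical sketch only: service names and ports are assumptions,
# not the project's actual docker-compose.yml.
version: "3.8"
services:
  ai_driver:
    image: fearnworks/aidriver:main
    ports:
      - "8000:8000"        # FastAPI inference server (assumed port)
  dungeondriver:
    image: fearnworks/dungeondriver:main
    ports:
      - "7860:7860"        # Gradio UI (Gradio's default port)
    depends_on:
      - ai_driver
```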
AI Driver Technologies:
- Server/API: FastAPI w/gunicorn
- Database: Sqlite/SQLAlchemy/Alembic
- Agents: Langchain/Langsmith
- Vectorstores: FAISS, Pinecone
- CPU Inference: Ctransformers (GGML)
- Cache: Redis

DungeonDriver Technologies:
- UI: Gradio
```mermaid
graph TB
    subgraph AIDriver
        API[Server/API : FastAPI w/gunicorn]
        DB[Database : Sqlite/SQLAlchemy/Alembic]
        AG[Agents : Langchain/Langsmith]
        VS[Vectorstores : FAISS, Pinecone]
        CI[CPU Inference : Ctransformers]
        API --> DB
        API --> AG
        API --> VS
        API --> CI
    end
    subgraph DungeonDriver
        UI[UI : Gradio]
    end
    UI --> API
```
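As a rough illustration of the UI → API hop in the diagram above, the Gradio front end could call the FastAPI server with a plain HTTP request. The endpoint path, payload shape, and port below are assumptions for illustration, not the actual AI Driver API.

```python
# Hypothetical sketch: how the Gradio UI might call the AI Driver server.
# The /chat endpoint, payload fields, and port are assumptions, not the real API.
import requests

AI_DRIVER_URL = "http://ai_driver:8000"  # service name as resolved inside docker-compose


def ask_dungeon_master(prompt: str) -> str:
    """Send a prompt to the inference server and return the generated text."""
    response = requests.post(f"{AI_DRIVER_URL}/chat", json={"prompt": prompt}, timeout=120)
    response.raise_for_status()
    return response.json()["response"]
```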
DungeonDriver and its companion inference server AI Driver are meant to be run inside containers for easy deployment in any environment. Docker and Docker Compose must be installed in your environment. CUDA drivers are required for GPU support and should be capable of running CUDA 12.x.
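Before building, you can sanity-check the prerequisites; `nvidia-smi` only succeeds once the NVIDIA drivers are installed:

```bash
docker --version            # Docker Engine
docker-compose --version    # Docker Compose
nvidia-smi                  # shows the installed driver and supported CUDA version (GPU setups only)
```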
To pull the latest release images:

```bash
docker pull fearnworks/aidriver:main
docker pull fearnworks/dungeondriver:main
```

Docker Hub links: AI Driver, DungeonDriver
Currently this process is only tested on Ubuntu 22.04.

Clone the repo:

```bash
git clone https://github.com/fearnworks/dungeondriver
cd dungeondriver
```
Install dependencies for the build chain and Python virtual environment.

Dependencies (easy install):

```bash
chmod +x ./scripts/initial_server_setup_with_deps.sh
./scripts/initial_server_setup_with_deps.sh
cp .envtemplate .env
```
Set up the NVIDIA drivers:

```bash
chmod +x ./scripts/nvidia_driver_setup.sh
sudo ./scripts/nvidia_driver_setup.sh
```

Reboot your machine to ensure the drivers are loaded.
Run the AI Driver server:

```bash
sudo docker-compose up --build
```
Download the models and place them in the top-level artifacts folder. There is a helper script here: download_model. We'll be adding more mature support for model downloads in the future.
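As a quick smoke test that a downloaded GGML model is usable, something like the following should load it with ctransformers. The file name under `artifacts/` is a placeholder for whichever model you actually downloaded.

```python
# Hypothetical smoke test for a downloaded GGML model.
# The file name under artifacts/ is a placeholder, not a file shipped with the repo.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "./artifacts/llama-2-7b-chat.ggmlv3.q4_0.bin",  # placeholder path
    model_type="llama",
)
print(llm("The tavern door creaks open and", max_new_tokens=32))
```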
We welcome contributions to DungeonDriver! Follow the Contributing Guidelines to get started.
If you receive a `Failed to create llm 'llama'` error, delete your llama model in the artifacts folder and try again. This is an issue with the download from Hugging Face.

To do so, run the following commands from the root of the project:

```bash
rm -rf ./artifacts/* # or the specific model folder
source .server-venv/bin/activate
python ai_driver/ai_driver/scripts/download_model.py
```
Got questions or feedback? Feel free to reach out to us here on GitHub!