
🚂🤖🪄 Conductor

License: MIT

  • Deployable interface (GUI/API) that runs locally and orchestrates multiple remote and/or local Language Models (see the sketch below)
  • Python code is executed and tested against assertions after validation by the configurable SecurityManager class
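
A minimal sketch of the orchestration pattern described above, using the openai client against two OpenAI-compatible endpoints. This is illustrative only: the model aliases, base URLs, and exit criterion are placeholders, not Conductor's actual configuration (that lives in app.py and the .env file).

# Illustrative sketch only: sequential multi-LM orchestration with a simple exit criterion.
# Model aliases, base URLs, and the stop condition are placeholders, not Conductor's real config.
from openai import OpenAI

clients = {
    "model-a": OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed"),
    "model-b": OpenAI(base_url="http://localhost:1235/v1", api_key="not-needed"),
}

def orchestrate(prompt, max_turns=6):
    history = [{"role": "user", "content": prompt}]
    for turn in range(max_turns):
        alias = "model-a" if turn % 2 == 0 else "model-b"  # alternate between the two aliases
        reply = clients[alias].chat.completions.create(model=alias, messages=history)
        content = reply.choices[0].message.content
        history.append({"role": "assistant", "content": content})
        if "ALL TESTS PASSED" in content:  # placeholder exit criterion
            break
    return history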

Try it yourself in *nix or WSL:

# Install system packages, create and activate a virtual environment,
# install the Python dependencies, then clone/update the repo and launch the app
sudo apt-get update && sudo apt-get install -y git python3 python3-venv && \
if [ ! -d "conductor_venv" ]; then python3 -m venv conductor_venv; fi && \
source conductor_venv/bin/activate && \
pip install --upgrade pip gradio openai GitPython && \
if [ -d "conductor" ]; then (cd conductor && git pull); else git clone https://github.com/rabbidave/conductor.git; fi && \
cd conductor && python app.py

Features

  • Multi-LM Interaction: Engage with multiple LMs sequentially for mob-style programming, with configurable exit criteria
  • Localized Code Execution: Code blocks with language specifiers (e.g. ```python) are executed wherever you deploy the app
  • Automated Test Execution: Tests are automatically detected and executed from code blocks tagged e.g. ```python-test (see the sketch after this list)
  • Test-Driven Generation: Generation stops once a configurable number of tests have passed and the exit criteria are met
  • Configurable IDs, APIs, and Parameters: Easily switch between different URLs/LMs and generation settings
  • Dynamic Environment Configuration: Modify model IDs, API URLs, maximum tokens, temperature, and top-p via the UI or a .env file
  • Detailed Logging: Comprehensive, configurable logs are written to the logs/ directory for debugging
  • Model Aliases: Set up the interaction between two distinct models or two instances of the same model
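
The code-block handling above can be pictured roughly as follows. This is a simplified sketch, not the detection logic actually used in app.py; it only assumes that responses contain fenced ```python and ```python-test blocks as described.

import re
import subprocess
import sys
import tempfile

# Matches fenced blocks such as ```python ... ``` and ```python-test ... ```
BLOCK_RE = re.compile(r"```(python(?:-test)?)\n(.*?)```", re.DOTALL)

def extract_blocks(response):
    """Return (language_specifier, code) pairs found in a model response."""
    return BLOCK_RE.findall(response)

def run_block(code):
    """Run a block in a fresh interpreter; a zero exit code counts as success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.returncode == 0

# A stop-after-n-tests criterion can then be driven by counting passing test blocks
response = "```python\nprint('hello')\n```\n```python-test\nassert 1 + 1 == 2\n```"
passed = sum(run_block(code) for lang, code in extract_blocks(response) if lang == "python-test")
print(f"{passed} test block(s) passed")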

Prerequisites

  • Python 3.7+: Ensure you have Python 3.7 or a newer version installed

Usage

  1. Run the App:

python app.py

  2. Access the UI:

    • Open http://localhost:31337 in your browser (for programmatic access, see the sketch after this list)
  3. Interact with the LMs:

    • Enter your prompt in the "Input Message" textbox
    • Code blocks in the responses are executed automatically
    • Generation continues until the configured number of tests have passed
    • Use "Stop Generation" to halt manually
    • Use "Clear Conversation" to start fresh
    • Use "Show Last Code" and "Show Last Output" to review recent executions

Example

  1. Prompt:
Create a function that calculates the square of a number and test it.
  2. Model Response:
def square(x):
    return x * x

print(square(5))  # Test output
assert square(0) == 0, "Square of 0 should be 0"
assert square(5) == 25, "Square of 5 should be 25"
assert square(-2) == 4, "Square of -2 should be 4"
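
Following the block conventions from the Features section, the same response could also arrive as a separate implementation block plus a ```python-test block, which is what triggers automated test execution. The formatting below is purely illustrative; whether definitions carry over between blocks depends on app.py.

```python
def square(x):
    return x * x
```

```python-test
assert square(0) == 0, "Square of 0 should be 0"
assert square(5) == 25, "Square of 5 should be 25"
assert square(-2) == 4, "Square of -2 should be 4"
```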

Security Note

  • Always use caution with code generated by an LLM
  • Code execution is sandboxed with configurable allow/block lists (see the illustrative sketch below)
  • Do not execute code from untrusted sources
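
The allow/block behaviour referenced above is provided by the configurable SecurityManager class. The sketch below is purely illustrative: the patterns, defaults, and function signature are assumptions, not the class's actual interface (see app.py for that).

import re

# Hypothetical block-list; the real SecurityManager in app.py defines its own
# configurable rules and method names.
BLOCKED_PATTERNS = [
    r"\bos\.system\b",
    r"\bsubprocess\b",
    r"\bshutil\.rmtree\b",
    r"\beval\s*\(",
]

def is_allowed(code):
    """Return False if the code matches any blocked pattern."""
    return not any(re.search(p, code) for p in BLOCKED_PATTERNS)

print(is_allowed("print('hello world')"))               # True: nothing matched
print(is_allowed("import os; os.system('rm -rf /')"))   # False: blocked before execution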

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Contributions welcome! Please submit issues and pull requests on GitHub.
