Add "Capabilities" endpoint/API #274
Here's a proposed response to `GET /api/v1/capabilities` (the endpoint described in the issue text below). The idea is to introduce the concept of an "LLM Provider" that supports one or more models. Whether a tool or model is "enabled" would, for now, depend on whether the requisite environment variables are set. The goal is to let UIs consuming the OpenGPTs backend toggle LLMs and tools on and off based on how a given backend is configured.

```json
{
"capabilities": {
"llms": [
{
"provider": "OpenAI",
"models": [
{
"id": "openai_gpt3_turbo",
"title": "OpenAI GPT 3.5 Turbo",
"supports_tools": true,
"supports_streaming": true,
"enabled": true
},
{
"id": "openai_gpt4_turbo",
"title": "OpenAI GPT 4 Turbo",
"supports_tools": true,
"supports_streaming": true,
"enabled": true
}
]
},
{
"provider": "Anthropic",
"models": [
{
"id": "anthropic_claude_2",
"title": "Claude 2",
"supports_tools": true,
"supports_streaming": true,
"enabled": true
}
]
},
{
"provider": "Amazon Bedrock",
"models": [
{
"id": "amazon_bedrock_claude_2",
"title": "Claude 2",
"supports_tools": true,
"supports_streaming": true,
"enabled": true
}
]
},
{
"provider": "Azure",
"models": [
{
"id": "azure_gpt4_turbo",
"title": "GPT 4 Turbo",
"supports_tools": true,
"supports_streaming": true,
"enabled": false
}
]
},
{
"provider": "Google",
"models": [
{
"id": "google_gemini",
"title": "Gemini",
"supports_tools": true,
"supports_streaming": true,
"enabled": false
}
]
},
{
"privider": "Ollama",
"models": [
{
"id": "ollama_llama2",
"title": "Ollma - Llama2",
"supports_tools": false,
"supports_streaming": true,
"enabled": true
},
{
"id": "ollama_mistral",
"title": "Ollma - Mistral",
"supports_tools": false,
"supports_streaming": true,
"enabled": true
},
{
"id": "ollama_openchat",
"title": "Ollma - Openchat",
"supports_tools": false,
"supports_streaming": true,
"enabled": true
},
{
"id": "ollama_orca2",
"title": "Ollma - Orca2",
"supports_tools": false,
"supports_streaming": true,
"enabled": true
}
]
}
]
},
"tools": [
{
"id": "action_server_by_robocorp",
"title": "Action Server by robocorp",
"description": "Run AI actions with [Robocorp Action Server](https://github.com/robocorp/robocorp).",
"enabled": true
},
{
"id": "ai_action_runner_by_connery",
"title": "AI Action Runner by Connery",
"description": "Connect OpenGPTs to the real world with [Connery](https://github.com/connery-io/connery).",
"enabled": true
},
{
"id": "ddg_search",
"title": "DuckDuckGo Search",
"description": "Search the web with [DuckDuckGo](https://pypi.org/project/duckduckgo-search/).",
"enabled": true
},
{
"id": "arxiv_search",
"title": "ArXiv Search",
"description": "Searches [Arxiv](https://arxiv.org/).",
"enabled": false
},
{
"id": "you_search",
"title": "You.com Search",
"description": "Uses [You.com](https://you.com/) search, optimized responses for LLMs.",
"enabled": true
},
{
"id": "sec_filings_kai_ai",
"title": "SEC Filings (Kay.ai)",
"description": "Searches through SEC filings using [Kay.ai](https://www.kay.ai/).",
"enabled": true
},
{
"id": "wikipedia",
"title": "Wikipedia",
"description": "Searches [Wikipedia](https://pypi.org/project/wikipedia/).",
"enabled": true
}
]
}
```
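Whether each entry reports `enabled: true` would then just be a function of which environment variables are present. A minimal sketch of that check (the mapping below is illustrative, not the actual OpenGPTs configuration):

```python
import os

# Illustrative mapping from model id to the env vars that model requires;
# the real keys would mirror however the backend configures each provider.
REQUIRED_ENV_VARS = {
    "openai_gpt4_turbo": ["OPENAI_API_KEY"],
    "anthropic_claude_2": ["ANTHROPIC_API_KEY"],
    "azure_gpt4_turbo": ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_DEPLOYMENT_NAME"],
}

def is_enabled(model_id: str) -> bool:
    # Enabled only when every required env var is set and non-empty
    return all(os.environ.get(v) for v in REQUIRED_ENV_VARS.get(model_id, []))
```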
I added this functionality myself: a tool that points to an API endpoint (which I exposed in the frontend) listing all tools found in `api/tools.py`, and another that lists all public GPTs. I created a "building assistant" GPT that uses these tools to figure out whether the functionality a user requests already exists and, if not, tells the user how to build it with the available tools. I actually want to make this "building assistant" the entry point/first thing a user sees when they come to my self-hosted version of OpenGPTs. Something like the following (reassembled from my snippets; the body of `get_all_tools_info` is only a sketch):

```python
import json

from app.agent import Tool as AVAILABLE_AGENT_TOOLS  # Union of the available tool types
from app.server import app  # assuming the existing FastAPI instance lives here

def get_all_tools_info(tool_union):
    # Sketch: assumes each member of the Union is a pydantic model
    return [json.loads(t.schema_json()) for t in tool_union.__args__]

@app.get("/list_tools", description="Gets the tools available")
def list_tools():
    return get_all_tools_info(AVAILABLE_AGENT_TOOLS)
```

I did the same for GPTs, creating a tool that searches the existing GPTs/assistants.
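That companion endpoint might look roughly like this; the `storage.list_public_assistants` call and the record shape are assumptions about the storage layer, not its actual API:

```python
from app import storage  # OpenGPTs storage layer; the call below is an assumption
from app.server import app  # assuming the existing FastAPI instance lives here

@app.get("/list_gpts", description="Gets the public GPTs/assistants")
async def list_gpts():
    # Assumed storage call returning public assistants with id/name fields
    assistants = await storage.list_public_assistants()
    return [{"id": a["assistant_id"], "name": a["name"]} for a in assistants]
```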
What?
Add a REST API endpoint such as `/api/v1/capabilities` that returns a nested structure describing which LLMs and Tools the given OpenGPTs API instance supports. This would enable UIs to dynamically show/hide OpenGPTs options like LLMs and Tools. A hypothetical response to `GET /api/v1/capabilities` might look something like the proposed response at the top of this thread.
Why?
Currently, the OpenGPTs UI lets you select Models and Tools that may not be configured. The UI will happily let you create assistants with models that aren't configured; when such an assistant is used, the backend spits out a stack trace and the frontend does nothing.
Implementing this endpoint would enable API clients (e.g. UIs) to only present options that the OpenGPTs instance is actually configured to support.
Implementation Considerations
With the current state of the codebase and dependency stack, the path of least resistance is likely to implement such a feature by checking for the existence of LLM/Tool-specific environment variables.
A longer-term, more scalable solution might involve moving away from environment-variable-driven configuration to something like a configuration file.
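Under the env-var approach, the endpoint itself could stay fairly thin. A hypothetical FastAPI route sketch (truncated to one provider and one tool for brevity; nothing here reflects the actual codebase):

```python
import os

from fastapi import APIRouter

router = APIRouter()  # would be mounted on the existing OpenGPTs app

@router.get("/api/v1/capabilities")
def get_capabilities():
    # Derive `enabled` from the presence of the relevant env vars
    return {
        "capabilities": {
            "llms": [
                {
                    "provider": "OpenAI",
                    "models": [
                        {
                            "id": "openai_gpt4_turbo",
                            "title": "OpenAI GPT 4 Turbo",
                            "supports_tools": True,
                            "supports_streaming": True,
                            "enabled": bool(os.environ.get("OPENAI_API_KEY")),
                        }
                    ],
                }
            ]
        },
        "tools": [
            {
                "id": "ddg_search",
                "title": "DuckDuckGo Search",
                "enabled": True,  # DuckDuckGo search requires no API key
            }
        ],
    }
```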