Array-Based Configuration for LLM Providers with Target Flag #74

Open
kardolus opened this issue Oct 2, 2024 · 0 comments
Comments

kardolus commented Oct 2, 2024

Problem

Currently, the ChatGPT CLI configuration holds a single LLM provider and model at a time. This works well if you only ever use one LLM, but it becomes cumbersome when switching between providers (OpenAI, Perplexity, Llama, etc.) or models.

For example, the current configuration looks like this:

name: openai
api_key: 
model: gpt-4o
max_tokens: 4096
context_window: 8192
...

When switching between providers or models, users have to either edit the configuration file or rely on environment variables, which makes moving between multiple setups slow and error-prone.

Proposed Solution

  1. Introduce Array-Based Configuration for LLMs:
    Convert the current configuration from a single setup to an array-based format where multiple configurations for different LLMs and models can be stored.

    Example:

    providers:
      - name: openai
        api_key: 
        model: gpt-4o
        max_tokens: 4096
        context_window: 8192
        ...
      - name: llama
        api_key: 
        model: llama-2-13b-chat
        max_tokens: 4096
        context_window: 8192
        ...
      - name: perplexity
        api_key: 
        model: llama-3.1-sonar
        max_tokens: 4096
        context_window: 8192
        ...
  2. Add a --target Flag to Dynamically Select a Configuration:
    Add a --target flag that allows users to select which configuration (provider and model) to use for a specific command.

    Example:

    chatgpt --target openai "Who is Max Verstappen?"
    chatgpt --target llama "Tell me a joke"
    chatgpt --target perplexity "Summarize this article"

    This way, users can quickly switch between configurations without having to edit the config.yaml file or rely on environment variables.
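The lookup behind the --target flag could be sketched roughly as follows. This is a minimal Python sketch, not the CLI's actual (Go) implementation: the select_provider helper, the fall-back-to-first-entry behavior, and the in-memory providers list (standing in for a parsed config.yaml) are all assumptions for illustration.

```python
import argparse

# Stand-in for the providers array after config.yaml has been parsed;
# names and fields follow the example above.
providers = [
    {"name": "openai", "model": "gpt-4o", "max_tokens": 4096, "context_window": 8192},
    {"name": "llama", "model": "llama-2-13b-chat", "max_tokens": 4096, "context_window": 8192},
    {"name": "perplexity", "model": "llama-3.1-sonar", "max_tokens": 4096, "context_window": 8192},
]

def select_provider(providers, target=None):
    """Return the provider entry whose name matches --target.

    Falls back to the first entry when no target is given, and fails
    loudly on an unknown name so a typo doesn't silently pick a default.
    (Hypothetical helper; the real CLI may resolve this differently.)
    """
    if target is None:
        return providers[0]
    for provider in providers:
        if provider["name"] == target:
            return provider
    known = ", ".join(p["name"] for p in providers)
    raise ValueError(f"unknown target {target!r}; known targets: {known}")

def main(argv=None):
    parser = argparse.ArgumentParser(prog="chatgpt")
    parser.add_argument("--target", help="provider name from config.yaml")
    parser.add_argument("prompt")
    args = parser.parse_args(argv)
    provider = select_provider(providers, args.target)
    # Here the real CLI would dispatch the prompt to the chosen provider.
    print(f"{provider['name']}/{provider['model']}: {args.prompt}")

if __name__ == "__main__":
    main()
```

With this shape, `chatgpt --target llama "Tell me a joke"` resolves to the llama entry, while omitting --target keeps today's single-config behavior by using the first entry.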

Benefits

  • Ease of Use: With array-based configurations and the --target flag, users can easily switch between LLM providers and models.
  • Cleaner Configuration: Avoids the need for multiple environment variables or manual configuration file edits for each LLM.
  • Better Flexibility: Supports different LLM providers (OpenAI, Llama, Perplexity, etc.) and models without requiring reconfiguration.