
feat: add o3-mini model support and fix cost calculations #102

Open
cmann50 wants to merge 1 commit into main

Conversation


@cmann50 cmann50 commented Feb 1, 2025

Added support for o3-mini model including:

  • Added o3-mini model to model_details.tmpl with correct token rates and specs:

    • Input: $1.10/1M tokens ($0.0000011 per token)
    • Output: $4.40/1M tokens ($0.0000044 per token)
    • Context length: 200k tokens
    • Support for function calling, vision, and parallel function calling
  • Added temperature handling for o3/o1 models (see the request-building sketch after this list)

    • Skip temperature parameter for o3/o1 models since they don't support it
    • Maintain temperature parameter for other models (e.g., gpt-4)
  • Added support for reasoning_effort parameter

    • Added configuration in codai-config.yml
    • Added environment variable support (CHAT_REASONING_EFFORT)
    • Added command line flag support
  • Code improvements:

    • Better cost calculation formatting (see the cost sketch after this list)
    • Cleaned up unused imports
    • Improved code organization

@meysamhadeli meysamhadeli (Owner) commented

Hi, thanks for the contribution :)
I had some comments, please take a look :)
