Use gpt-4o-mini, which is cheaper and better
platisd committed Jul 19, 2024
1 parent ae92349 commit 5ed310d
Showing 3 changed files with 4 additions and 4 deletions.
README.md: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ jobs:
| `github_token` | The GitHub token to use for the Action | Yes | |
| `openai_api_key` | The [OpenAI API key] to use, keep it hidden | Yes | |
| `pull_request_id` | The ID of the pull request to use | No | Extracted from metadata |
- | `openai_model` | The [OpenAI model] to use | No | `gpt-3.5-turbo` |
+ | `openai_model` | The [OpenAI model] to use | No | `gpt-4o-mini` |
| `max_tokens` | The maximum number of **prompt tokens** to use | No | `1000` |
| `temperature` | Higher values will make the model more creative (0-2) | No | `0.6` |
| `sample_prompt` | The prompt to use for giving context to the model | No | See `SAMPLE_PROMPT` |
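Taken together, the inputs in the table above can be wired into a workflow step roughly like this (the step name, the secret names, and the `platisd/openai-pr-description@master` action ref are illustrative assumptions, not taken from this commit):

```yaml
# Illustrative usage; the action ref and secret names are assumptions.
- name: Autofill PR description
  uses: platisd/openai-pr-description@master
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
    openai_model: gpt-4o-mini  # the default after this commit; shown for clarity
    temperature: 0.6
```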
action.yml: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ inputs:
  openai_model:
    description: 'OpenAI model to use, needs to be compatible with the chat/completions endpoint'
    required: false
-    default: 'gpt-3.5-turbo'
+    default: 'gpt-4o-mini'
  max_tokens:
    description: 'Maximum number of prompt tokens to use'
    required: false
autofill_description.py: 2 additions & 2 deletions
@@ -96,7 +96,7 @@ def main():
    allowed_users = os.environ.get("INPUT_ALLOWED_USERS", "")
    if allowed_users:
        allowed_users = allowed_users.split(",")
-    open_ai_model = os.environ.get("INPUT_OPENAI_MODEL", "gpt-3.5-turbo")
+    open_ai_model = os.environ.get("INPUT_OPENAI_MODEL", "gpt-4o-mini")
    max_prompt_tokens = int(os.environ.get("INPUT_MAX_TOKENS", "1000"))
    model_temperature = float(os.environ.get("INPUT_TEMPERATURE", "0.6"))
    model_sample_prompt = os.environ.get("INPUT_MODEL_SAMPLE_PROMPT", SAMPLE_PROMPT)
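As a standalone sketch of the hunk above: GitHub Actions exposes each `with:` input to the container as an `INPUT_*` environment variable, so the new `gpt-4o-mini` default only applies when `INPUT_OPENAI_MODEL` is unset. The values below mirror the diff; nothing new is introduced.

```python
import os

# Sketch of the input parsing shown in the diff: each Action input
# arrives as an INPUT_* environment variable, with a fallback default.
allowed_users = os.environ.get("INPUT_ALLOWED_USERS", "")
if allowed_users:
    # A comma-separated list becomes a Python list of usernames
    allowed_users = allowed_users.split(",")
open_ai_model = os.environ.get("INPUT_OPENAI_MODEL", "gpt-4o-mini")
max_prompt_tokens = int(os.environ.get("INPUT_MAX_TOKENS", "1000"))
model_temperature = float(os.environ.get("INPUT_TEMPERATURE", "0.6"))
```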
@@ -170,7 +170,7 @@ def main():
        patch = pull_request_file["patch"]
        completion_prompt += f"Changes in file (unknown): {patch}\n"

-    max_allowed_tokens = 2048  # 4096 is the maximum allowed by OpenAI for GPT-3.5
+    max_allowed_tokens = 8000
    characters_per_token = 4  # The average number of characters per token
    max_allowed_characters = max_allowed_tokens * characters_per_token
    if len(completion_prompt) > max_allowed_characters:
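The hunk above raises the prompt budget from 2048 tokens (half of GPT-3.5's 4096-token window) to 8000. A minimal sketch of the resulting character cap, using the same rough 4-characters-per-token heuristic from the diff (the oversized prompt is an illustrative placeholder):

```python
# Character-budget math after the bump from 2048 to 8000 tokens.
max_allowed_tokens = 8000
characters_per_token = 4  # rough heuristic, as in the diff
max_allowed_characters = max_allowed_tokens * characters_per_token

completion_prompt = "x" * 40_000  # illustrative oversized prompt
if len(completion_prompt) > max_allowed_characters:
    # Truncate rather than fail: keep the first 32000 characters
    completion_prompt = completion_prompt[:max_allowed_characters]
```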