
ValueError: No valid completion model args passed in - {'model': 'gpt-4-32k' #57

Open
natea opened this issue Nov 15, 2024 · 5 comments

natea commented Nov 15, 2024

When I run the main.py file and pass in gpt-4-32k as the model, I'm getting this error:

 % python main.py --model gpt-4-32k --targetlang nodejs
Is your source project a Python project? [y/N]: y
◐ Reading Python project from directory '/Users/nateaune/Documents/code/gpt-migrate/benchmarks/flask-nodejs/source', with entrypoint 'app.py'.
◑ Outputting nodejs project to directory '/Users/nateaune/Documents/code/gpt-migrate/benchmarks/flask-nodejs/target'.
Source directory structure:

        ├── db.py
        ├── requirements.txt
        ├── storage/
        │   └── items.json
        ├── .gitignore
        └── app.py

Traceback (most recent call last):

  File "/Users/nateaune/Documents/code/gpt-migrate/gpt_migrate/main.py", line 127, in <module>
    app()

  File "/Users/nateaune/Documents/code/gpt-migrate/gpt_migrate/main.py", line 87, in main
    create_environment(globals)

  File "/Users/nateaune/Documents/code/gpt-migrate/gpt_migrate/steps/setup.py", line 15, in create_environment
    llm_write_file(prompt,

  File "/Users/nateaune/Documents/code/gpt-migrate/gpt_migrate/utils.py", line 52, in llm_write_file
    file_name,language,file_content = globals.ai.write_code(prompt)[0]

  File "/Users/nateaune/Documents/code/gpt-migrate/gpt_migrate/ai.py", line 23, in write_code
    response = completion(

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/utils.py", line 98, in wrapper
    raise e

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/utils.py", line 89, in wrapper
    result = original_function(*args, **kwargs)

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/timeout.py", line 44, in wrapper
    result = future.result(timeout=local_timeout_duration)

  File "/Users/nateaune/.pyenv/versions/3.10.10/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()

  File "/Users/nateaune/.pyenv/versions/3.10.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/timeout.py", line 35, in async_func
    return func(*args, **kwargs)

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/main.py", line 248, in completion
    raise exception_type(model=model, original_exception=e)

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/utils.py", line 273, in exception_type
    raise original_exception # base case - return the original exception

  File "/Users/nateaune/.pyenv/versions/gpt-migrate/lib/python3.10/site-packages/litellm/main.py", line 242, in completion
    raise ValueError(f"No valid completion model args passed in - {args}")

ValueError: No valid completion model args passed in - {'model': 'gpt-4-32k', 'messages': [{'role': 'user', 'content': 'The following prompt is a composition of prompt sections, each with different preference levels. Higher preference levels override lower preference levels. The lowest preference level, PREFERENCE LEVEL 1, will likely be a broad guideline of some sort. Prompt sections with higher preference levels are likely to be more specific instructions that would override sections with lower preference levels. Each prompt section will start with "PREFERENCE LEVEL (level)".\\n\\n PREFERENCE LEVEL 1\n\nHere are the guidelines for this prompt:\n\n1. Follow the output instructions precisely and do not make any assumptions. Your output will not be read by a human; it will be directly input into a computer for literal processing. Adding anything else or deviating from the instructions will cause the output to fail.\n2. Think through the answer to each prompt step by step to ensure that the output is perfect; there is no room for error.\n3. Do not use any libraries, frameworks, or projects that are not well-known and well-documented, unless they are explicitly mentioned in the instructions or in the prompt.\n4. In general, use comments in code only sparingly.\\n\\n PREFERENCE LEVEL 2\n\nYou are a pragmatic principal engineer at Google. You are about to get instructions for code to write. This code must be as simple and easy to understand, while still fully expressing the functionality required. Please note that the code should be complete and fully functional. No placeholders. However, only write what you are asked to write. For instance, if you\'re asked to write a function, only write the function; DO NOT include import statements. We will do those separately.\n\nPlease strictly follow this styling guideline with no deviations. Variables will always be snake_case; either capital or lowercase. Functions will always be camelCase. Classes will always be PascalCase. 
Please follow this guideline even if the source code does not follow it.\n\nFinally, please follow these guidelines: \\n\\n PREFERENCE LEVEL 3\n\nPlease create a Dockerfile for a generic app in the following framework: nodejs. This new app is being transpiled to nodejs from Python, where the entrypoint file name is app.py. Please use the same file name besides the extension if in a different language, unless nodejs requires a certain naming convention for the entrypoint file, such as main.ext etc. Be sure to include a dependencies installation step with your choice of file name. No need to write any comments. Exposed port should be 8080.\\n\\n PREFERENCE LEVEL 4\n\nWe will be using the output you provide as-is to create new files, so please be precise and do not include any other text. Your output needs to be ONE file; if your output contains multiple files, it will break the system. Your output should consist ONLY of the file name, language, and code, in the following format:\n\nfile_name.ext\n```language\nCODE\n```'}], 'functions': [], 'function_call': '', 'temperature': 0.0, 'top_p': 1, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': 10000, 'presence_penalty': 0, 'frequency_penalty': 0, 'logit_bias': {}, 'user': '', 'force_timeout': 60, 'azure': False, 'logger_fn': None, 'verbose': False, 'optional_params': {'temperature': 0.0, 'max_tokens': 10000}}
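For context, older litellm releases validated the model name against a built-in list and raised this ValueError for anything unrecognized. One possible workaround, sketched below with a hypothetical helper (`completionWithFallback` and the `fallback="gpt-4"` default are illustrative, not part of gpt-migrate or litellm), is to retry with a model name the installed litellm version does accept:

```python
# Hypothetical workaround sketch, not from the gpt-migrate codebase:
# retry the completion call with a known-accepted model name when the
# preferred one is rejected with ValueError.

def completionWithFallback(completion_fn, model, messages, fallback="gpt-4"):
    """Call completion_fn; on ValueError, retry once with `fallback`."""
    try:
        return completion_fn(model=model, messages=messages)
    except ValueError:
        if model == fallback:
            raise  # the fallback was also rejected; surface the error
        return completion_fn(model=fallback, messages=messages)
```

With `litellm.completion` passed in as `completion_fn`, this would let the run proceed on a supported model instead of crashing, at the cost of silently using a different model than the one requested.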
tomascharad commented

+1

tomascharad commented

It seems that the model gpt-4-32k is no longer available; you have to use OpenRouter to access it.

CrackAndDie commented

The same problem, but with gpt-4o-mini.

Silvio-evasys commented

Same problem using OpenRouter:

ValueError: No valid completion model args passed in - {'model': 'openrouter/openai/gpt-4-32k'

Silvio-evasys commented

Is there any solution for this?
