feat(vertex-ai): function calling: convert markdown 'samples' to python #12811
base: main
Conversation
Here is the summary of changes. You are about to add 9 region tags.
This comment is generated by snippet-bot.
@@ -39,7 +39,7 @@ def generate_function_call() -> GenerationResponse:
     vertexai.init(project=PROJECT_ID, location="us-central1")

     # Initialize Gemini model
-    model = GenerativeModel("gemini-1.5-flash-002")
+    model = GenerativeModel("gemini-1.5-pro-002")
Gemini Flash is not working anymore with this code sample (see the log):
File "/workspace/generative_ai/function_calling/test_function_calling.py", line 31, in test_function_calling
response = basic_example.generate_function_call()
File "/workspace/generative_ai/function_calling/basic_example.py", line 75, in generate_function_call
function_call = response.candidates[0].function_calls[0]
IndexError: list index out of range
The actual response from the model is now:
candidates {
content {
role: "model"
parts {
text: "I cannot answer this question. The available function `get_current_weather` has no implementation."
}
}
avg_logprobs: -0.17872051000595093
finish_reason: STOP
}
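Independent of the model choice, a small guard before indexing `function_calls` avoids this `IndexError` whenever the model replies with plain text instead of a call. A minimal sketch, using hypothetical `Fake*` stand-ins for the Vertex AI response types (not the SDK itself):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FakeCandidate:
    """Stand-in for a Vertex AI candidate: either function calls or text."""
    function_calls: List[str] = field(default_factory=list)
    text: str = ""


@dataclass
class FakeResponse:
    """Stand-in for a Vertex AI GenerationResponse."""
    candidates: List[FakeCandidate] = field(default_factory=list)


def first_function_call(response: FakeResponse) -> Optional[str]:
    """Return the first function call, or None if the model answered in text."""
    candidate = response.candidates[0]
    if not candidate.function_calls:
        return None  # text-only reply: no call to extract
    return candidate.function_calls[0]


# A text-only reply (like the Flash response above) no longer raises IndexError.
text_only = FakeResponse(candidates=[FakeCandidate(text="I cannot answer this.")])
assert first_function_call(text_only) is None
```

The same check translated to the real response objects would let the sample fail with a readable assertion message instead of an `IndexError` deep inside list indexing.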
Created a general issue to initiate the discussion about the current CI process.
Thanks for reporting this, Valeriy-Burlaka. The Gemini Flash model appears to have been deprecated for this sample. The change to gemini-1.5-pro-002 in the pull request reflects the current, supported model. The issue you created is the appropriate place to track the broader CI implications.
def function_declaration_from_func():
    # [START generativeaionvertexai_gemini_function_calling_declare_from_function]
    # Define a function. Could be a local function, or you can import the requests library to call an API
    def multiply_numbers(numbers: List[int]) -> int:
This is almost a copy of the original function declaration.
I added type annotations to fix the 400 error that was lurking in this doc sample:
...
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Unable to submit request because one or more function parameters didn't specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling"
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.75.10:443 {created_time:"2024-12-02T15:20:19.676923+01:00", grpc_status:3, grpc_message:"Unable to submit request because one or more function parameters didn\'t specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling"}"
>
...
google.api_core.exceptions.InvalidArgument: 400 Unable to submit request because one or more function parameters didn't specify the schema type field. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling
A unit test that sends a chat message is necessary to verify the correctness of the function declaration.
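To see why the annotations matter: automatic declaration (in the spirit of `FunctionDeclaration.from_func`) reads the Python type hints to fill each parameter's schema `type` field, so an unannotated parameter leaves nothing to map. An illustrative, self-contained sketch — not the SDK's actual implementation:

```python
from typing import List, get_type_hints


def multiply_numbers_untyped(numbers):
    """No annotations: a schema generator finds no parameter types."""


def multiply_numbers(numbers: List[int]) -> int:
    """Annotated: the parameter schema type is recoverable from the hints."""
    result = 1
    for number in numbers:
        result *= number
    return result


# With hints, a generator can emit an "array of integers" schema for `numbers`;
# without them, the generated schema lacks the required `type` field -> HTTP 400.
assert get_type_hints(multiply_numbers)["numbers"] == List[int]
assert get_type_hints(multiply_numbers_untyped) == {}
assert multiply_numbers([1, 2, 3]) == 6
```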
@@ -52,12 +55,14 @@ def test_function_calling_advanced_function_selection() -> None:
    )


@pytest.mark.skip(reason="Blocked on b/... ")
Both samples that use the OpenAI client are no longer working (log):
File "/workspace/generative_ai/function_calling/test_function_calling.py", line 59, in test_function_calling_basic
response = chat_function_calling_basic.generate_text()
File "/workspace/generative_ai/function_calling/chat_function_calling_basic.py", line 39, in generate_text
client = openai.OpenAI(
File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_client.py", line 122, in __init__
super().__init__(
File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_base_client.py", line 825, in __init__
self._client = http_client or SyncHttpxClientWrapper(
File "/workspace/generative_ai/function_calling/.nox/py-3-9/lib/python3.9/site-packages/openai/_base_client.py", line 723, in __init__
super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'proxies'
- generated xml file: /workspace/generative_ai/function_calling/sponge_log.xml -
=========================== short test summary info ============================
FAILED test_function_calling.py::test_function_calling_basic - TypeError: __i...
FAILED test_function_calling.py::test_function_calling_config - TypeError: __...
========================= 2 failed, 5 passed in 14.85s =========================
Link to the doc section with OpenAI samples: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/function-calling#python-openai
Created a general issue to initiate the discussion about the current CI process.
The error `TypeError: __init__() got an unexpected keyword argument 'proxies'` in the OpenAI client indicates a version incompatibility between the `openai` library and its HTTP client dependency: recent `httpx` releases removed the `proxies` keyword argument, while older `openai` versions still pass it during client construction. To resolve this, you should:

- Check the `openai` library version: ensure you're using a version compatible with your current environment. Check your `requirements.txt` and `pyproject.toml` (if used) for the specified version. If it's outdated, update it to a release that no longer passes the removed `proxies` argument.
- Review proxy configuration: if you are using proxies, double-check your proxy settings in your environment variables or configuration files, since incorrectly formatted proxy settings can also cause errors. If you are not using proxies, remove any proxy settings from your code or environment.
- Virtual environment consistency: keep your testing environment consistent with your development environment. Use the same virtual environment or container for both to avoid version conflicts.
- Dependency resolution: try running `pip install --upgrade pip` and then `pip install -r requirements.txt` (or your equivalent package manager command) to ensure all dependencies are correctly resolved and updated.

The issue you've created is a good first step to track this problem. Consider adding more details about your environment (Python version, OpenAI library version, proxy configuration if applicable) to help diagnose the root cause.
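One way to unblock the tests is to pin mutually compatible versions. The version bounds below are assumptions based on this error signature; verify them against the `openai` and `httpx` changelogs before committing a pin:

```shell
# Hypothetical pin: recent httpx releases dropped the `proxies` kwarg,
# so either upgrade openai to a release that no longer passes it...
pip install --upgrade "openai"
# ...or, alternatively, hold the HTTP client back to a version that accepts it:
pip install "httpx<0.28"
```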
def function_declaration_from_dict() -> object:
    # [START generativeaionvertexai_gemini_function_calling_declare_from_dict1]
@msampathkumar, I know we want only one pair of region tags per sample file, but just look at this doc section — we would have to add 8 new micro-files to convert all the markdown used there 🤯 . Given that all the markdown samples for the "Step 1-5" sub-sections could basically compose into one coherent sample with different region tags, maybe we could make an exception to the "1 region tag" rule here? 🤔
Sure! We can do that. Let's catch up on this today.
@msampathkumar , please check the updated sample.
Following the idea of the "How to ..." section (creating an example app block by block), I compiled it into one code sample and added tests. As it runs together, it now lives together, in one file!
@backoff.on_exception(backoff.expo, ResourceExhausted, max_time=10)
def test_function_calling_basic() -> None:
    response = chat_function_calling_basic.generate_text()
    assert "get_current_weather" in response.choices[0].message.tool_calls[0].id


@pytest.mark.skip(reason="Blocked on b/... ")
I will need help with creating a real ticket in Buganizer.
response = chat_session.send_message("What will be 1 multiplied by 2?")
assert response is not None

extract_sales_prompt = """
I am adding a test for each toolbox function created by this sample for the following reason: the toolbox is lazily evaluated, so simply creating a function declaration is not enough to check that it works.
Only when a targeted prompt triggers a "function call" response from the model does the API validate the declaration and potentially throw an error.
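The failure mode can be sketched with a hypothetical stand-in (not the real toolbox): validation is deferred until a call is actually triggered, so a declaration-only test passes even when the schema is broken:

```python
from typing import Dict, List


class LazyToolbox:
    """Hypothetical stand-in mimicking lazily evaluated function declarations."""

    def __init__(self) -> None:
        self._declarations: List[Dict] = []

    def declare(self, schema: Dict) -> None:
        # Nothing is validated at declaration time -- mirrors the lazy behavior.
        self._declarations.append(schema)

    def call(self, name: str) -> str:
        # Validation only runs once a function call is actually triggered.
        for schema in self._declarations:
            if "type" not in schema.get("parameters", {}):
                raise ValueError(f"{schema['name']}: parameter schema missing 'type'")
        return f"called {name}"


toolbox = LazyToolbox()
toolbox.declare({"name": "multiply", "parameters": {}})  # accepted silently
try:
    toolbox.call("multiply")  # the missing 'type' only surfaces here
    raised = False
except ValueError:
    raised = True
assert raised  # a declaration-only test would have passed
```

This is why each test sends a prompt designed to trigger its specific function call, rather than only constructing the declaration.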
PROJECT_ID = os.getenv("GOOGLE_CLOUD_PROJECT")


def create_app() -> object:
Consolidates the code samples for the https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#how-works doc section into a single, testable script.
Each sub-step of that section presents a discrete code snippet (block) of the final application, demonstrating one function calling concept at a time. This compilation creates a complete, maintainable implementation for DevRel purposes, reducing the number of separate files we'd have if we kept each snippet in its own file.
The original snippets and their documentation order are preserved through separate region tags, with minor fixes applied.
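As an illustration of how several region tags can live in one file while the docs still embed each step separately, here is a small extractor in the spirit of the region-tag tooling (the tag names and the helper itself are made up for the example):

```python
import re
import textwrap


def extract_region(source: str, tag: str) -> str:
    """Return the code between # [START tag] and # [END tag] markers."""
    pattern = rf"#\s*\[START {re.escape(tag)}\]\n(.*?)#\s*\[END {re.escape(tag)}\]"
    match = re.search(pattern, source, re.DOTALL)
    if match is None:
        raise KeyError(f"region tag not found: {tag}")
    return textwrap.dedent(match.group(1))


# A single file can hold one tagged snippet per doc sub-step:
sample = """\
# [START example_step1]
model = "step 1 code"
# [END example_step1]
# [START example_step2]
chat = "step 2 code"
# [END example_step2]
"""

assert extract_region(sample, "example_step1").strip() == 'model = "step 1 code"'
```

Each sub-step of the doc page can then pull just its own region, while the sample file stays a single runnable, testable script.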
# [START generativeaionvertexai_function_calling_example_syntax]
from vertexai.generative_models import FunctionDeclaration, GenerationConfig, GenerativeModel, Tool

gemini_model = GenerativeModel(
I tried to preserve the "spirit" of the original .md sample (i.e., a very short snippet that only shows how to declare a model with a toolbox) while making it actually copyable, testable, and working. That's why I moved the `init` call outside of the sample but added all the imports necessary for it to function properly.
Description
Convert Markdown 'samples' for Function Calling Guides & API Reference pages:
Checklist
nox -s py-3.9 && nox -s py-3.12
nox -s lint