
feat: support basic function call for gemini (google-generativeai) #17696

Open
wants to merge 6 commits into base: main

Conversation

ex0ns (Contributor) commented Feb 2, 2025

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Version Bump?

Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

  • Yes
  • No

Type of Change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Your pull-request will likely not be merged unless it is covered by some form of impactful unit testing.

  • I added new unit tests to cover this change
  • I believe this change is already covered by existing unit tests

Suggested Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added Google Colab support for the newly added notebooks.
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I ran make format; make lint to appease the lint gods

I tried to make the changes as backward compatible as possible.
I did not introduce any wrapper type for the function call structure returned by Gemini.
The goal is to provide easy access through the .additional_kwargs["function_calls"] accessor (or through the raw attribute of MessageResponse). This also implements a workaround for an existing issue in the google-generativeai lib that makes it impossible to use function calls with the llama-index wrapper (as .text is always accessed).
We could release this as a minor version bump instead of a patch.
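
For readers of the thread, a minimal sketch of the intended access pattern (the model id, the example tool, and the exact spot where additional_kwargs lives are my assumptions, not part of this PR):

from llama_index.core.tools import FunctionTool
from llama_index.llms.gemini import Gemini


def get_weather(city: str) -> str:
    """Toy tool used only for the example."""
    return f"Sunny in {city}"


llm = Gemini(model="models/gemini-1.5-flash")  # model id is illustrative
tool = FunctionTool.from_defaults(fn=get_weather)

response = llm.chat_with_tools(tools=[tool], user_msg="What is the weather in Paris?")

# No wrapper type: the raw Gemini function calls are exposed directly.
for call in response.message.additional_kwargs.get("function_calls", []):
    print(call.name, dict(call.args))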

@dosubot dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files.) on Feb 2, 2025
@@ -303,3 +311,70 @@ async def gen() -> ChatResponseAsyncGen:
)

return gen()

def chat_with_tools(

Collaborator:

I'm not sure this implementation is correct? We shouldn't need to implement chat_with_tools, only the following (see the sketch after this list):

  • _prepare_chat_with_tools()
  • get_tool_calls_from_response()
  • _validate_chat_with_tools_response() (optional)
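
For context, a rough skeleton of how those hooks fit together (a sketch based on the FunctionCallingLLM interface in llama-index-core, not the code in this PR; the class name is made up and signatures are abridged):

from typing import Any, Dict, List, Optional, Sequence

from llama_index.core.base.llms.types import ChatMessage, ChatResponse
from llama_index.core.tools.types import BaseTool


class GeminiToolCallingSketch:
    def _prepare_chat_with_tools(
        self,
        tools: Sequence[BaseTool],
        user_msg: Optional[str] = None,
        chat_history: Optional[List[ChatMessage]] = None,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        # Convert the llama-index tools into Gemini tool declarations and build
        # the message list; the base class then feeds this dict into chat()/achat().
        ...

    def get_tool_calls_from_response(
        self,
        response: ChatResponse,
        error_on_no_tool_call: bool = True,
        **kwargs: Any,
    ) -> List[Any]:
        # In the real interface this returns List[ToolSelection]: read the
        # function calls stored on the response (e.g. in additional_kwargs)
        # and convert them for the agent layer.
        ...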

Collaborator:

I don't see get_tool_calls_from_response() implemented, so the typical usage will not work:

resp = llm.chat_with_tools(...)
tool_calls = llm.get_tool_calls_from_response(resp)

Contributor Author:

Thanks for spotting this. I copied the implementation of chat_with_tools in my first version because I didn't plan to add the exception handler in the handle-response method, but it's indeed not needed.

I will implement get_tool_calls_from_response; I didn't spot this method in the FunctionCallingLLM class, thanks!

return tool_selections

@llm_completion_callback()
async def astream_complete(

Contributor Author:


I'm not sure about this method; CustomLLM implements it this way:

async def astream_complete(
    self, prompt: str, formatted: bool = False, **kwargs: Any
) -> CompletionResponseAsyncGen:
    async def gen() -> CompletionResponseAsyncGen:
        for message in self.stream_complete(prompt, formatted=formatted, **kwargs):
            yield message

    # NOTE: convert generator to async generator
    return gen()

But the OpenAI implementation does it exactly the way I did here, and I don't really know which one to pick.
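
For comparison, this is roughly what a natively async version could look like (a sketch only: the _model attribute and the google-generativeai streaming call are my assumptions, not a quote of the OpenAI or PR code):

from typing import Any

from llama_index.core.base.llms.types import (
    CompletionResponse,
    CompletionResponseAsyncGen,
)
from llama_index.core.llms.callbacks import llm_completion_callback


class NativeAsyncStreamingSketch:
    @llm_completion_callback()
    async def astream_complete(
        self, prompt: str, formatted: bool = False, **kwargs: Any
    ) -> CompletionResponseAsyncGen:
        async def gen() -> CompletionResponseAsyncGen:
            text = ""
            # Assumes the wrapped google-generativeai model exposes an async
            # streaming call whose chunks carry incremental .text.
            response = await self._model.generate_content_async(prompt, stream=True)
            async for chunk in response:
                text += chunk.text
                yield CompletionResponse(text=text, delta=chunk.text, raw=chunk)

        return gen()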

ex0ns (Contributor Author) commented Feb 3, 2025

About the coverage: I think it's because the tests are skipped, as the environment variable is not set/propagated when running the tests.
After a small tweak to pants.toml:

[test]
use_coverage = true
extra_env_vars = ["GOOGLE_API_KEY"]

I ran the following command:

pants --level=error --no-local-cache test --test-use-coverage --coverage-py-filter="['llama-index-integrations/llms/llama-index-llms-gemini/llama_index']" ./::

✓ llama-index-integrations/llms/llama-index-llms-gemini/tests/test_llms_gemini.py succeeded in 8.59s (memoized).

Name                                                                                        Stmts   Miss  Cover
---------------------------------------------------------------------------------------------------------------
llama-index-integrations/llms/llama-index-llms-gemini/llama_index/llms/gemini/__init__.py       2      0   100%
llama-index-integrations/llms/llama-index-llms-gemini/llama_index/llms/gemini/base.py         153     52    66%
llama-index-integrations/llms/llama-index-llms-gemini/llama_index/llms/gemini/utils.py         50     13    74%
---------------------------------------------------------------------------------------------------------------
TOTAL                                                                                         205     65    68%


Wrote html coverage report to `dist/coverage/python`

Wrote xml coverage report to `dist/coverage/python`

I have explicitly set the model in the tests, otherwise it defaults to the first one in the list, which is an experimental model (with a low RPM limit).
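
Concretely, something along these lines in the test module (the model id and test name are illustrative, not the actual test):

from llama_index.llms.gemini import Gemini


def test_uses_pinned_model() -> None:
    # Pin a stable model instead of taking the first entry of the model list,
    # which can be an experimental one with a very low requests-per-minute quota.
    llm = Gemini(model="models/gemini-1.5-flash")
    response = llm.complete("Say hello")
    assert response.text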
