
Enable tool calling for OpenAI/Azure OpenAI endpoints #1418

Open
harisyammnv opened this issue Aug 20, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@harisyammnv
Contributor

Describe your feature request

We are using a version of HuggingChat as a PoC in our company. We have made adjustments to use our hosted LangServe endpoints for RAG with our vector DBs, but our users are also interested in uploading a file ad hoc and asking questions about the PDF.
I see that this feature is already implemented when using Cohere models. I would like to know what would be needed to do the same for OpenAI endpoints, because it was not clear to me from the current implementation.

Is this even possible? @nsarrazin, I would need your input here.

@harisyammnv harisyammnv added the enhancement New feature or request label Aug 20, 2024
@nsarrazin
Collaborator

Hi! So this is possible using the tool-calling functionality. I'm not sure about the status of tool calling on the OpenAI endpoints; I will need to take a look, but overall, if it's supported there, then it should also work.

We use a tool that accepts PDF inputs and returns their content as plain text. See it here in the HuggingChat prod config:

chat-ui/chart/env/prod.yaml

Lines 347 to 374 in 27adf0d

{
  "_id": "000000000000000000000002",
  "displayName": "Document Parser",
  "description": "Use this tool to parse any document and get its content in markdown format.",
  "color": "yellow",
  "icon": "cloud",
  "baseUrl": "huggingchat/document-parser",
  "name": "document_parser",
  "endpoint": "/predict",
  "inputs": [
    {
      "name": "document",
      "description": "Filename of the document to parse",
      "paramType": "required",
      "type": "file",
      "mimeTypes": "application/*"
    },
    {
      "name": "filename",
      "paramType": "fixed",
      "value": "document.pdf",
      "type": "str"
    }
  ],
  "outputComponent": "textbox",
  "outputComponentIdx": 0,
  "showOutput": false
},
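For the tool to be usable by an OpenAI/Azure OpenAI model, a definition like the one above would have to be translated into OpenAI's public function-calling schema (a `tools` array of `{ type: "function", function: { name, description, parameters } }` objects). The sketch below is a hypothetical illustration of that mapping, not the actual chat-ui internals; the interface and function names are assumptions.

```typescript
// Hypothetical sketch: mapping a chat-ui-style tool config onto the
// OpenAI function-calling schema. Not the real chat-ui code.
interface ToolInput {
  name: string;
  description?: string;
  paramType: "required" | "optional" | "fixed";
  type: string;
  value?: string;
}

interface ToolConfig {
  name: string;
  description: string;
  inputs: ToolInput[];
}

function toOpenAITool(tool: ToolConfig) {
  const properties: Record<string, { type: string; description?: string }> = {};
  const required: string[] = [];
  for (const input of tool.inputs) {
    // "fixed" params (like the hard-coded filename above) are filled in
    // server-side, so the model never needs to see them.
    if (input.paramType === "fixed") continue;
    properties[input.name] = {
      // Files are referenced by filename, so both "file" and "str" map to a
      // JSON-schema string here.
      type: "string",
      description: input.description,
    };
    if (input.paramType === "required") required.push(input.name);
  }
  return {
    type: "function" as const,
    function: {
      name: tool.name,
      description: tool.description,
      parameters: { type: "object", properties, required },
    },
  };
}
```

Applied to the `document_parser` config above, this would expose a single required `document` string parameter and hide the fixed `filename` parameter from the model.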

@harisyammnv
Contributor Author

harisyammnv commented Aug 26, 2024

@nsarrazin: I tried the config and enabled `"tools": true`, but I get the following error when I upload the document and ask a question. I copied the error message from the network console:
{"type":"status","status":"error","message":"Input buffer contains unsupported image format"}
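(For what it's worth, that message text matches the error the sharp image library raises when it is handed a buffer it cannot decode, which suggests the uploaded PDF is being routed into an image-processing path. A minimal sketch of a MIME-type guard that would avoid that, with an illustrative function name that is not from the chat-ui codebase:)

```typescript
// Hypothetical sketch: route an upload by MIME type so non-image files
// (e.g. application/pdf) never reach an image decoder that would reject
// them with "Input buffer contains unsupported image format".
function routeUpload(mime: string): "image-pipeline" | "tool-pipeline" {
  return mime.startsWith("image/") ? "image-pipeline" : "tool-pipeline";
}
```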

This is my current model config in the `.env` file:

{
  "id": "gpt-4o",
  "name": "gpt-4o",
  "displayName": "gpt-4o",
  "logoUrl": "https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/microsoft-logo.png",
  "modelUrl": "https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models",
  "description": "This is the default bot that you can use if you want to have generic answers",
  "tools": true,
  "parameters": {
      "temperature": 0.0,
      "max_new_tokens": 4096
  },
  "promptExamples": [],
  "endpoints": [
      {
          "type": "openai",
          "baseURL": "https://<base>.openai.azure.com/openai/deployments/gpt-4o",
          "defaultHeaders": {
              "api-key": "...."
          },
          "defaultQuery": {
              "api-version": "2024-05-01-preview"
          }
      }
  ]
},

Can you give me more pointers if you can?

Thanks!

@harisyammnv
Contributor Author

OK, I think I see the problem now: the OpenAI endpoints file does not have the tool-calling parameters like the ones in the Cohere endpoints file. I will dig deeper and check what I can do here.
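(The missing piece would presumably be forwarding tool definitions in the chat completions request body. The field names below, `tools` and `tool_choice`, follow OpenAI's public chat completions API; the helper function itself is an illustrative sketch, not code from either endpoints file.)

```typescript
// Hypothetical sketch: include OpenAI-format tool definitions in a chat
// completions request body only when tools are actually configured.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatCompletionBody(
  model: string,
  messages: ChatMessage[],
  tools?: object[],
) {
  return {
    model,
    messages,
    // Omit the tool fields entirely when no tools are configured, since
    // some deployments reject an empty tools array.
    ...(tools && tools.length > 0 ? { tools, tool_choice: "auto" } : {}),
  };
}
```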
