Bad request: Task not found for this model #1415

Open

NITHISH-Projects opened this issue Aug 18, 2024 · 1 comment
Labels
support A request for help setting things up

Comments

@NITHISH-Projects
Hi all,
I am facing the following issue when using HuggingFaceEndpoint with my custom fine-tuned model "Nithish-2001/RAG-29520hd0-1-chat-finetune" (a public repository) in a Gradio app.

llm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/Nithish-2001/RAG-29520hd0-1-chat-finetune

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 763, in predict
    output = await route_utils.call_process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 288, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1931, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1516, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "", line 90, in conversation
    response = qa_chain.invoke({"question": message, "chat_history": formatted_chat_history})
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/conversational_retrieval/base.py", line 169, in _call
    answer = self.combine_docs_chain.run(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 603, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
    return self.invoke(
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py", line 138, in _call
    output, extra_return_dict = self.combine_docs(
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/stuff.py", line 257, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 316, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py", line 170, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 381, in __call__
    return self.invoke(
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 164, in invoke
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 154, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 126, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 138, in generate
    return self.llm.generate_prompt(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 750, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 944, in generate
    output = self._generate_helper(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 787, in _generate_helper
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 774, in _generate_helper
    self._generate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py", line 1508, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/llms/huggingface_endpoint.py", line 265, in _call
    response = self.client.post(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/inference/_client.py", line 273, in post
    hf_raise_for_status(response)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: JoV91bXHMHzsDi4vyFyXj)

Bad request:
Task not found for this model.
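
For context, the endpoint is constructed along these lines (a minimal sketch reconstructed from the traceback; only the repo_id is taken from the logs, everything else is an illustrative assumption):

from langchain_community.llms import HuggingFaceEndpoint

# The chain's _call ultimately POSTs to
# https://api-inference.huggingface.co/models/<repo_id>,
# which is the request that returns the 400 above.
llm = HuggingFaceEndpoint(
    repo_id="Nithish-2001/RAG-29520hd0-1-chat-finetune",
    max_new_tokens=512,                 # illustrative value
    huggingfacehub_api_token="hf_...",  # redacted
)

llm.invoke("Hello")  # -> BadRequestError: Task not found for this model.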

Kindly help me with this issue.
Thank you.

@NITHISH-Projects added the support label Aug 18, 2024
@nsarrazin
Collaborator

I believe this is most likely a config issue. Please share your full config (with any secrets redacted) for chat-ui and any inference endpoint you may be using so we can take a look!
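
For reference, a chat-ui model entry lives in the MODELS variable of .env.local; a minimal sketch pointing at a self-hosted text-generation-inference endpoint looks roughly like this (the name and URL are placeholders, and the exact fields accepted depend on your chat-ui version):

MODELS=`[
  {
    "name": "Nithish-2001/RAG-29520hd0-1-chat-finetune",
    "endpoints": [
      {
        "type": "tgi",
        "url": "http://127.0.0.1:8080"
      }
    ]
  }
]`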
