Evaluation Script Fails Due to Invalid JSON Response from Authenticated Endpoint #97
Comments
Yeah, an authenticated endpoint is tricky. I know one developer got an approach working here: I can try to make that an officially supported thing here.
Thank you - I was able to reach the authenticated endpoint with the approach you linked.
+1. Same request here.
Matt just updated the login docs for azure-search-openai-demo to share another way to get a token: That only works if you disable built-in auth, however, and use the MSAL SDK only. We haven't determined why it's not working with built-in auth yet; we're chatting with the App Service team.
Thanks for the heads up, @pamelafox!
I tried the newly documented approach to get a token (after upgrading to the latest commit and setting AZURE_DISABLE_APP_SERVICES_AUTHENTICATION to true).
I tried to run az login as instructed in the error message, but then I get: So I tried to remove access_as_user (I'm not sure it should be there). Now I get a different warning:
That one also hints to do az login, which I do. Now I get a different error during az login: For reference, my server and client IDs are: I'm running out of time to troubleshoot now, but I still wanted to share my progress. I will continue later today.
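For reference, here is a minimal sketch of the kind of token request being attempted in this thread. It assumes the backend's server app registration exposes an access_as_user scope (as the error messages above suggest) and uses AzureDeveloperCliCredential from azure-identity as just one possible way to obtain the token; <AZURE_SERVER_APP_ID> is a placeholder, not a confirmed value.

```python
# Sketch only, not the project's official method: obtain a bearer token for the
# backend's server app registration. The scope format api://<server app id>/access_as_user
# is an assumption based on the comments above.
from azure.identity import AzureDeveloperCliCredential

server_app_id = "<AZURE_SERVER_APP_ID>"  # placeholder for the server app's client ID
credential = AzureDeveloperCliCredential()
access_token = credential.get_token(f"api://{server_app_id}/access_as_user")
print(access_token.token[:20], "...")  # bearer token to send in the Authorization header
```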
I got the furthest using this method to get a token:
This is what I get:
I'm getting an empty reply (effectively the same error as OP).
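One way to see what the endpoint is actually returning (and why the evaluator reports an empty reply or invalid JSON) is to call it directly with the token attached. This is only a sketch: it reuses the access_token from the sketch above, and the request body shown assumes a chat protocol that accepts a messages list, which may differ from the exact payload the evaluator sends.

```python
# Sketch: probe the protected /chat endpoint directly with a bearer token.
# The body format is an assumption about the chat protocol, not the evaluator's exact payload.
import requests

target_url = "https://MYBACKEND.azurewebsites.net/chat"
headers = {"Authorization": f"Bearer {access_token.token}"}
body = {"messages": [{"role": "user", "content": "What does a Product Manager do?"}]}

response = requests.post(target_url, headers=headers, json=body, timeout=60)
print(response.status_code, response.headers.get("Content-Type"))
print(response.text[:500])  # an empty body or HTML login page here explains the JSON error
```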
Thanks for tagging me, I'll try to take a look.
@mattgotteiner do you plan on working on this in the short term? It would be extremely useful to be able to do evaluation runs against a securely deployed application. Or do you have any advice on the changes required to make it work?
This issue is for a: (mark with an x)

Minimal steps to reproduce
Run the evaluation script (python -m scripts evaluate --config=config.json) from the ai-rag-chat-evaluator repository against a deployed backend with authentication enabled.

Any log messages given by the failure
@Myname ➜ /workspaces/ai-rag-chat-evaluator (main) $ python -m scripts evaluate --config=config.json
[17:57:29] INFO Running evaluation from config /workspaces/ai-rag-chat-evaluator/config.json
INFO Replaced results_dir in config with timestamp
INFO Using Azure OpenAI Service with Azure Developer CLI Credential
INFO Running evaluation using data from /workspaces/ai-rag-chat-evaluator/example_input/qa.jsonl
INFO Sending a test question to the target to ensure it is running...
ERROR Failed to send a test question to the target due to error:
Response from target https://MYBACKEND.azurewebsites.net/chat is not valid JSON:
Make sure that your configuration points at a chat endpoint that returns a single JSON object.
ERROR Evaluation was terminated early due to an error ⬆
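The failure mode in this log is consistent with App Service built-in authentication intercepting the request: the target then returns a login redirect, HTML page, or empty body rather than JSON, so the evaluator's test-question check fails. Below is a hedged reconstruction of that kind of pre-flight check, not the actual evaluator code; the request body is an assumption about the chat protocol.

```python
# Hedged reconstruction of the kind of pre-flight check the log reflects;
# this is not the actual evaluator implementation.
import requests

def send_test_question(target_url: str, headers: dict | None = None) -> dict:
    resp = requests.post(
        target_url,
        headers=headers,
        json={"messages": [{"role": "user", "content": "test question"}]},
        timeout=60,
    )
    try:
        result = resp.json()
    except ValueError as exc:  # JSONDecodeError subclasses ValueError
        # Auth redirects, HTML login pages, or empty bodies end up here and
        # produce the "not valid JSON" error shown above.
        raise ValueError(
            f"Response from target {target_url} is not valid JSON: {resp.text[:200]!r}"
        ) from exc
    if not isinstance(result, dict):
        raise ValueError("The chat endpoint must return a single JSON object")
    return result
```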
Expected/desired behavior
The evaluation script should successfully communicate with the chat endpoint, and the evaluation should proceed without errors.
OS and Version?
Versions
Mention any other details that might be useful
The target_url in my config.json points to https://MYBACKEND.azurewebsites.net/chat (see the example config below).
The chat endpoint should return a single JSON object, but the response does not appear to be in the expected format.
The application is deployed as per the instructions in the repository documentation.
The app at https://MYBACKEND.azurewebsites.net has authentication enabled, which might be affecting the evaluation script.
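For context, here is a hedged example of what the config.json described above might look like. Only target_url (and the existence of results_dir) is confirmed by this issue; the testdata_path key name and the exact values are assumptions about the evaluator's config schema, not confirmed settings.

```json
{
    "testdata_path": "example_input/qa.jsonl",
    "results_dir": "example_results/experiment<TIMESTAMP>",
    "target_url": "https://MYBACKEND.azurewebsites.net/chat"
}
```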