How to view and debug logs - no LLM response shown and test case failure #2004
Replies: 3 comments 8 replies
-
Hey @kevinmessiaen - do you have an idea of what could be wrong?
-
Hello @ClarkKentIsSuperman, It seems that your LLM client (the one used for generating adversarial inputs) has returned an output in the wrong JSON format - it contains a phrase after the dict.
This normally happens when the LLM used is not a powerful or recent model. Could you share which LLM client and model you are using as the default one? By the way, we recommend using GPT-4o whenever possible.
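As a generic workaround for the failure mode described above (valid JSON followed by a stray phrase), Python's standard-library `json.JSONDecoder.raw_decode` parses the leading JSON value and reports where it ends, so the trailing text can be split off. This is a general-purpose sketch, not part of Giskard's API; the helper name `extract_leading_json` is hypothetical:

```python
import json

def extract_leading_json(raw: str):
    """Parse the JSON object at the start of `raw`, separating any
    trailing phrase the LLM may have appended after the dict."""
    decoder = json.JSONDecoder()
    stripped = raw.lstrip()
    # raw_decode returns (parsed_object, index_of_first_unparsed_char)
    obj, end = decoder.raw_decode(stripped)
    trailing = stripped[end:].strip()
    return obj, trailing

# Example: a model response with a phrase after the JSON dict
response = '{"inputs": ["q1", "q2"]} Hope this helps!'
obj, trailing = extract_leading_json(response)
# obj is the usable dict; `trailing` holds the stray phrase
```

Whether stripping the trailing phrase is acceptable depends on your use case; switching to a stronger model, as suggested above, avoids the problem at the source.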
-
Also, I'm fine using Mistral - would I add the
-
I need help with how to view and debug logs - where would the log be for this type of error?
In my results.html (I'm running just via the command line on a Mac, not a notebook) it's showing failures, but it looks like there is sometimes just no response from the LLM:
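When running from the command line, one way to surface more detail than results.html shows is to raise Python's logging verbosity before invoking the scan, so internal log records (including failed LLM calls) print to stderr. A minimal sketch; the assumption that the package's loggers live under the "giskard" namespace follows the common `logging.getLogger(__name__)` convention and may need adjusting for your version:

```python
import logging

# Send DEBUG-level records to stderr with timestamps so that LLM call
# failures are visible in the terminal, not only in results.html.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# Assumption: the library's loggers are named under "giskard",
# as is conventional for packages using logging.getLogger(__name__).
logging.getLogger("giskard").setLevel(logging.DEBUG)
```

You can also capture the output to a file for later inspection, e.g. `python my_scan.py 2> scan.log` (script name hypothetical).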