Rendered prompt does not show equivalent JSON payload when calling OpenAI API directly #6817
Hi, is the rendered prompt being 'translated' to the acceptable JSON payload before it is passed to OpenAI?

Handlebars template and resulting rendered prompt:

```yaml
name: transcribeScreenshot
description: Extract text from image
template: |
  <message role='user'>
    <text>what is this image</text>
    <image>https://i.imgur.com/1RP78PA.jpeg</image>
  </message>
template_format: handlebars
```

Output:
OpenAI API direct call with Postman

Request:

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "<message role='user'><text>what is this image</text><image>https://i.imgur.com/1RP78PA.jpeg</image></message>"
    }
  ]
}
```

Output:

```json
{
  "id": "chatcmpl-xxxxx",
  "object": "chat.completion",
  "created": 1718774484,
  "model": "gpt-4o-2024-05-13",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "I'm currently unable to view images directly. However, you can describe the image to me, and I can help provide information or context based on your description!"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 40,
    "completion_tokens": 31,
    "total_tokens": 71
  },
  "system_fingerprint": "fxxxxx"
}
```
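For comparison, the direct Postman call above sends the entire `<message>` XML as a single plain-text string, so the model never actually receives the image. If the `<text>`/`<image>` tags were parsed into structured chat content, the request body would look roughly like the following (a sketch of the public OpenAI chat-completions vision format, not a payload captured from Semantic Kernel):

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "what is this image" },
        { "type": "image_url", "image_url": { "url": "https://i.imgur.com/1RP78PA.jpeg" } }
      ]
    }
  ]
}
```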
@vicperdana That's correct.

You can define a `DelegatingHandler`, inject it into an `HttpClient`, and pass that `HttpClient` when you register the Azure/OpenAI chat completion service.

Examples of logging handlers:
semantic-kernel/dotnet/src/InternalUtilities/samples/InternalUtilities/BaseTest.cs
Line 103 in 745e64a
semantic-kernel/dotnet/src/Plugins/Plugins.MsGraph/Connectors/Client/MsGraphC…
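For illustration, here is a minimal sketch of the handler injection described above, used to see the raw request made to OpenAI. The class name and configuration values are illustrative, and the registration call assumes the Semantic Kernel .NET `IKernelBuilder` API with the `AddOpenAIChatCompletion` overload that accepts an `HttpClient`:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Logs every outgoing request body before forwarding it, so the raw JSON that
// Semantic Kernel sends to OpenAI can be inspected.
public sealed class LoggingHandler : DelegatingHandler
{
    public LoggingHandler(HttpMessageHandler innerHandler) : base(innerHandler) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (request.Content is not null)
        {
            // Raw JSON payload as serialized by the OpenAI connector.
            Console.WriteLine(await request.Content.ReadAsStringAsync(cancellationToken));
        }

        return await base.SendAsync(request, cancellationToken);
    }
}

public static class Program
{
    public static void Main()
    {
        // Wrap the default handler and hand the HttpClient to the connector
        // when registering the chat completion service.
        using var httpClient = new HttpClient(new LoggingHandler(new HttpClientHandler()));

        Kernel kernel = Kernel.CreateBuilder()
            .AddOpenAIChatCompletion(
                modelId: "gpt-4o",
                apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
                httpClient: httpClient)
            .Build();
    }
}
```

Once this is in place, invoking the prompt through the kernel prints the exact JSON body, so you can confirm whether the `<image>` tag was translated into an `image_url` content part or sent as plain text.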