I haven't looked into token-usage efficiency in my current implementation of content generation for listicles. Since using more tokens costs more money, we should look into how to reduce it.

Currently, three fairly long prompts are sent to OpenAI:
Since the provided listicle can vary in formatting, this prompt passes along the above context plus a new instruction (stating we want JSON) so we get a JSON response we can programmatically parse in our image-processing step.
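As a rough sketch of that flow, something like the following could assemble the JSON-requesting prompt and parse the reply. The function names, message layout, and `{"items": [...]}` schema here are all hypothetical illustrations, not the actual prompts from the implementation:

```python
import json

def build_json_prompt(listicle_text: str) -> list:
    """Assemble chat messages: the listicle context plus an explicit
    instruction asking for JSON, so the reply is machine-parseable.
    (Hypothetical layout -- the real prompts live in the implementation.)"""
    return [
        {"role": "system",
         "content": "You extract listicle items and reply only with JSON."},
        {"role": "user",
         "content": (
             f"Listicle:\n{listicle_text}\n\n"
             'Return a JSON object of the form '
             '{"items": [{"rank": 1, "title": "..."}]}.'
         )},
    ]

def parse_listicle_response(raw: str) -> list:
    """Parse the model's JSON reply into the item list consumed by the
    image-processing step; raises if the reply is malformed JSON."""
    return json.loads(raw)["items"]

# Example with a canned reply (no API call made here):
reply = '{"items": [{"rank": 1, "title": "Grand Canyon"}]}'
items = parse_listicle_response(reply)
print(items[0]["title"])  # Grand Canyon
```

On the token-cost question, note that newer chat-completions models accept `response_format={"type": "json_object"}`, which enforces JSON output and could let the "reply with JSON" portion of the prompt be shortened.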