Investigate making prompts more efficient (token-usage) #11

Open
inFocus7 opened this issue Jan 1, 2024 · 0 comments

Comments

inFocus7 (Owner) commented Jan 1, 2024

I didn't consider token-usage efficiency when building the current implementation of the listicle content generation. Since using more tokens costs more money, we should look into how to reduce usage.
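
A first step could be quantifying what each prompt actually costs. Here's a minimal sketch using OpenAI's `tiktoken` tokenizer library; the model name and prompt text are placeholders, not our real templates:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Return how many tokens `text` occupies for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

# Hypothetical prompt text; substitute the real templates to compare sizes.
listicle_prompt = "Write a listicle of 5 fantasy characters, each with a name and a short description."
print(count_tokens(listicle_prompt))
```

Running this against each of the three prompt templates below would give a baseline to measure any reduction against.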

As of now, three somewhat long prompts are passed to OpenAI (sketched in the code after the list):

  1. Listicle Creation (ChatGPT)
    • This is where we prompt ChatGPT using the fields from the web UI to generate the listicle content.
  2. JSON-ify Listicle (ChatGPT)
    • Since the generated listicle can vary in formatting, we send a follow-up prompt that passes along the above context plus a new instruction (stating we want JSON), so we get a JSON response we can programmatically parse in our image-processing step.
  3. Image Generation (DALL-E)
    • At the end, we loop through the JSON and prompt DALL-E to generate a portrait image from each item's description.
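
For reference, the whole chain looks roughly like the sketch below. The prompt strings, model names, and JSON shape are illustrative placeholders, not the actual implementation:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical prompt assembled from the web UI fields.
listicle_prompt = "Write a listicle of 5 fantasy characters, each with a short description."

# 1. Listicle creation (ChatGPT).
listicle = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": listicle_prompt}],
).choices[0].message.content

# 2. JSON-ify (ChatGPT): the full step-1 exchange is re-sent as context,
#    so all of those tokens are billed again as input here.
listicle_json = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": listicle_prompt},
        {"role": "assistant", "content": listicle},
        {"role": "user", "content": 'Return the above listicle as JSON with the shape {"items": [{"name": ..., "description": ...}]}.'},
    ],
).choices[0].message.content

# 3. Image generation (DALL-E): one call per item description.
for item in json.loads(listicle_json)["items"]:
    client.images.generate(model="dall-e-2", prompt=item["description"])
```

Because step 2 replays the entire step-1 exchange as input, the listicle's tokens are effectively paid for twice. Asking for JSON directly in the first prompt (or via the API's JSON response format) could be one avenue worth investigating.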