
Commit

a bit more formatting
mertyg committed Jun 12, 2024
1 parent eee2f9e commit 4305fe3
9 changes: 5 additions & 4 deletions README.md
@@ -25,7 +25,7 @@ If you know PyTorch, you know 80% of TextGrad.
Let's walk through the key components with a simple example. Say we want to use GPT-4o to generate a punchline for TextGrad.
```python
import textgrad as tg
-# Step 1: Get an initial response from an LLM
+# Step 1: Get an initial response from an LLM.
model = tg.BlackboxLLM("gpt-4o")
punchline = model(tg.Variable("write a punchline for my github package about optimizing compound AI systems", role_description="prompt", requires_grad=False))
punchline.set_role_description("a concise punchline that must hook everyone")
@@ -34,23 +34,24 @@ punchline.set_role_description("a concise punchline that must hook everyone")
Initial `punchline` from the model:
> Supercharge your AI synergy with our optimization toolkit – where compound intelligence meets peak performance!
-Not bad, but we (gpt-4o, i guess) can do better! Let's optimize the punchline using TextGrad.
+Not bad, but maybe GPT-4o can do better! Let's optimize the punchline using TextGrad. In this case `punchline` would be the variable we want to optimize and improve.
```python
# Step 2: Define the loss function and the optimizer, just like in PyTorch!
loss_fn = tg.TextLoss("We want to have a super smart and funny punchline. Is the current one concise and addictive? Is the punch fun, makes sense, and subtle enough?")
optimizer = tg.TGD(parameters=[punchline])
```

```python
-# Step 3: Do the loss computation, backward pass, and update the punchline
+# Step 3: Do the loss computation, backward pass, and update the punchline.
loss = loss_fn(punchline)
loss.backward()
optimizer.step()
```

Optimized punchline:
> Boost your AI with our toolkit – because even robots need a tune-up!
-Okay this model isn’t really ready for a comedy show yet (and maybe a bit cringy) but it is clearly trying. But who gets to maxima in one step?
+Okay this model isn’t really ready for a comedy show yet but it is clearly trying. But who gets to maxima in one step?

<br>
We have many more examples around how TextGrad can optimize all kinds of variables -- code, solutions to problems, molecules, prompts, and all that!
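The README section under edit above deliberately mirrors the PyTorch training loop (compute a loss, call `backward()`, call `optimizer.step()`). As a rough illustration of that pattern, here is a self-contained toy mock — not the real textgrad API, and all classes below are simplified stand-ins that need no LLM or API key — showing how textual feedback can drive an update:

```python
# Toy mock of the PyTorch-like TextGrad loop -- NOT the real textgrad API,
# just an illustration of the Variable / loss / optimizer pattern.

class Variable:
    def __init__(self, value, role_description="", requires_grad=True):
        self.value = value
        self.role_description = role_description
        self.requires_grad = requires_grad
        self.gradients = []  # textual feedback accumulates here

class TextLoss:
    """Stand-in for an LLM-based critique of a variable."""
    def __init__(self, instruction):
        self.instruction = instruction

    def __call__(self, variable):
        # A real implementation would ask an LLM to critique `variable.value`
        # against `self.instruction`; here we just wrap it and remember the target.
        loss = Variable(f"Critique of: {variable.value!r}", requires_grad=False)
        loss.target = variable
        return loss

class TGD:
    """Textual 'gradient descent': rewrite parameters using their feedback."""
    def __init__(self, parameters):
        self.parameters = parameters

    def step(self):
        for p in self.parameters:
            if p.requires_grad and p.gradients:
                # A real optimizer would ask an LLM to rewrite p.value
                # using the accumulated feedback.
                p.value = p.value + " (revised per feedback)"
                p.gradients.clear()

punchline = Variable("Optimizing compound AI systems, one gradient at a time")
loss_fn = TextLoss("Is the punchline concise and catchy?")
loss = loss_fn(punchline)
loss.target.gradients.append("make it punchier")  # stands in for loss.backward()
TGD(parameters=[punchline]).step()
print(punchline.value)
```

In the real library, `loss.backward()` asks an LLM for a critique and `optimizer.step()` asks an LLM to rewrite the variable; the mock only mimics that control flow so the loop structure is visible at a glance.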
