

update example
vinid committed Jun 12, 2024
1 parent c41bbae commit 581b5e8
25 changes: 17 additions & 8 deletions README.md
@@ -38,7 +38,9 @@ question_string = ("If it takes 1 hour to dry 25 shirts under the sun, "
"how long will it take to dry 30 shirts under the sun? "
"Reason step by step")

-question = tg.Variable(question_string, role_description="question to the LLM", requires_grad=False)
+question = tg.Variable(question_string,
+                       role_description="question to the LLM",
+                       requires_grad=False)

answer = model(question)
```
@@ -56,19 +58,26 @@ As you can see, **the model's answer is incorrect.** We can optimize the answer
answer.set_role_description("concise and accurate answer to the question")

# Step 2: Define the loss function and the optimizer, just like in PyTorch!
-# Here, we don't have SGD, but we have TGD (Textual Gradient Descent) that works with "textual gradients".
+# Here, we don't have SGD, but we have TGD (Textual Gradient Descent)
+# that works with "textual gradients".
optimizer = tg.TGD(parameters=[answer])
-evaluation_instruction = f"Here's a question: {question_string}. Evaluate any given answer to this question, be smart, logical, and very critical. Just provide concise feedback."
+evaluation_instruction = (f"Here's a question: {question_string}. "
+                          "Evaluate any given answer to this question, "
+                          "be smart, logical, and very critical. "
+                          "Just provide concise feedback.")
 
-# TextLoss is a natural-language specified loss function that describes how we want to evaluate the reasoning.
+
+# TextLoss is a natural-language specified loss function that describes
+# how we want to evaluate the reasoning.
loss_fn = tg.TextLoss(evaluation_instruction)
```
> :brain: loss: [...] Your step-by-step reasoning is clear and logical,
-> but it contains a critical flaw in the assumption that drying time is directly proportional
-> to the number of shirts. [...]
+> but it contains a critical flaw in the assumption that drying time is
+> directly proportional to the number of shirts. [...]
```python
-# Step 3: Do the loss computation, backward pass, and update the punchline. Exact same syntax as PyTorch!
+# Step 3: Do the loss computation, backward pass, and update the punchline.
+# Exact same syntax as PyTorch!
loss = loss_fn(answer)
loss.backward()
optimizer.step()
