From c7ee7353af093a2fa771a9a949e9d5a59f5d8a7d Mon Sep 17 00:00:00 2001
From: Erik Dunteman <44653944+erik-dunteman@users.noreply.github.com>
Date: Fri, 8 Nov 2024 13:45:30 -0800
Subject: [PATCH] Update
 recipes/3p_integrations/modal/many-llamas-human-eval/README.md

Co-authored-by: Hamid Shojanazeri
---
 recipes/3p_integrations/modal/many-llamas-human-eval/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes/3p_integrations/modal/many-llamas-human-eval/README.md b/recipes/3p_integrations/modal/many-llamas-human-eval/README.md
index 5daa6f223..1c3c1b661 100644
--- a/recipes/3p_integrations/modal/many-llamas-human-eval/README.md
+++ b/recipes/3p_integrations/modal/many-llamas-human-eval/README.md
@@ -10,7 +10,7 @@ It seeks to increase model performance not through scaling parameters, but by sc
 
 This experiment built by the team at [Modal](https://modal.com), and is described in the following blog post:
 
-[Beat GPT-4o at Python by searching with 100 dumb LLaMAs](https://modal.com/blog/llama-human-eval)
+[Beat GPT-4o at Python by searching with 100 small Llamas](https://modal.com/blog/llama-human-eval)
 
 The experiment has since been upgraded to use the [Llama 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) model, and runnable end-to-end using the Modal serverless platform.
 