
Commit

Merge pull request #4498 from kyo-takano/patch-1
Fix a typo in `Gemma_Distributed_Fine_tuning_on_TPU.ipynb`
sagelywizard authored Apr 11, 2024
2 parents 5372a7f + 56bc7ce commit b844a14
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions in notebooks/Gemma_Distributed_Fine_tuning_on_TPU.ipynb
@@ -48,7 +48,7 @@
 "source": [
 "## Overview\n",
 "\n",
-"Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. Gemma can be further finetuned to suit specific needs. But Large Language Models, such as Gemma, can be very large in size and some of them may not fit on a sing accelerator for finetuning. In this case there are two general approaches for finetuning them:\n",
+"Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. Gemma can be further finetuned to suit specific needs. But Large Language Models, such as Gemma, can be very large in size and some of them may not fit on a single accelerator for finetuning. In this case there are two general approaches for finetuning them:\n",
 "1. Parameter Efficient Fine-Tuning (PEFT), which seeks to shrink the effective model size by sacrificing some fidelity. LoRA falls in this category and the [Fine-tune Gemma models in Keras using LoRA](https://ai.google.dev/gemma/docs/lora_tuning) tutorial demonstrates how to finetune the Gemma 7B model `gemma_instruct_7b_en` with LoRA using KerasNLP on a single GPU.\n",
 "2. Full parameter finetuning with model parallelism. Model parallelism distributes a single model's weights across multiple devices and enables horizontal scaling. You can find out more about distributed training in this [Keras guide](https://keras.io/guides/distribution/).\n",
 "\n",
@@ -4232,4 +4232,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}
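The paragraph edited in the first hunk points readers to the Keras distribution guide for the model-parallel approach that this notebook uses on TPU. For context, here is a minimal sketch of sharding a Gemma model across accelerators with the Keras 3 distribution API and KerasNLP; the layout regexes, the `gemma_2b_en` preset, and the `train_ds` dataset are illustrative assumptions, not the notebook's exact configuration.

```python
import keras
import keras_nlp

# Build a 1 x N device mesh over the available accelerators:
# a "batch" axis for data and a "model" axis for weight sharding.
devices = keras.distribution.list_devices()
mesh = keras.distribution.DeviceMesh(
    shape=(1, len(devices)),
    axis_names=("batch", "model"),
    devices=devices,
)

# Map Gemma weight names (matched by regex) to shard layouts on the mesh.
# These rules are illustrative; the notebook's actual layout map may differ.
layout_map = keras.distribution.LayoutMap(mesh)
layout_map["token_embedding/embeddings"] = (None, "model")
layout_map["decoder_block.*attention.*(query|key|value).*kernel"] = (None, "model", None)
layout_map["decoder_block.*attention_output.*kernel"] = (None, None, "model")
layout_map["decoder_block.*ffw_gating.*kernel"] = ("model", None)
layout_map["decoder_block.*ffw_linear.*kernel"] = (None, "model")

# Activate model parallelism globally, then load the model: its weights are
# created already sharded across devices according to the layout map.
keras.distribution.set_distribution(
    keras.distribution.ModelParallel(mesh, layout_map, batch_dim_name="batch")
)
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")  # assumed preset

# Full-parameter fine-tuning; train_ds is a hypothetical preprocessed
# tf.data.Dataset of prompt/response strings.
gemma_lm.fit(train_ds, epochs=1)
```

With the distribution set before `from_preset`, no single device ever has to hold the full set of weights, which is the point of the full-parameter, model-parallel approach described in the changed paragraph.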
