Commit 1d2d197

models: Add notebook for finetuning Google CodeGemma model (#103)
Co-authored-by: Anjali Shah <[email protected]>
1 parent 68d7988 commit 1d2d197

File tree

3 files changed: +592 -2 lines changed

models/Codegemma/README.md

Lines changed: 63 additions & 0 deletions
# CodeGemma

[CodeGemma](https://ai.google.dev/codegemma/docs) is a family of decoder-only, text-to-text large language models for programming, built from the same research and technology used to create the [Gemini models](https://blog.google/technology/ai/google-gemini-ai/). CodeGemma models have open weights and offer pre-trained and instruction-tuned variants. These models are well-suited for a variety of code generation tasks. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

For more details, refer to the [CodeGemma model card](https://ai.google.dev/codegemma/docs/model_card) released by Google.

## Customizing CodeGemma with NeMo Framework

CodeGemma models are compatible with [NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/index.html). In this repository we have two notebooks that cover different ways of customizing CodeGemma.

### Parameter-Efficient Fine-Tuning with LoRA

[LoRA tuning](https://arxiv.org/abs/2106.09685) is a parameter-efficient method for fine-tuning models, in which we freeze the base model parameters and update an auxiliary "adapter" with far fewer weights. At inference time, the adapter weights are combined with the base model weights to produce a new model, customized for a particular use case or dataset. Because the adapter is so much smaller than the base model, it can be trained with far fewer resources than it would take to fine-tune the entire model. In this example, we'll show you how to LoRA-tune small models like the CodeGemma models on a single GPU.
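
To give a sense of what the notebook runs, here is a minimal single-GPU launch sketch. The script path, override names, and data file names are assumptions based on typical NeMo 1.x releases and vary between versions; treat them as placeholders and follow the notebook for the exact invocation.

```bash
# Illustrative sketch only: the script path and override names below are
# assumptions (they differ across NeMo releases); lora.ipynb has the real command.
python /opt/NeMo/examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py \
    trainer.devices=1 \
    trainer.max_steps=500 \
    model.restore_from_path=codegemma_2b_base.nemo \
    model.peft.peft_scheme="lora" \
    model.data.train_ds.file_names=["train.jsonl"] \
    model.data.validation_ds.file_names=["val.jsonl"]
```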

[Get Started Here](./lora.ipynb)

### Supervised Fine-Tuning for Instruction Following (SFT)

Supervised Fine-Tuning (SFT) is the process of fine-tuning all of a model’s parameters on supervised data of inputs and outputs. It teaches the model how to follow user-specified instructions and is typically done after model pre-training. This example will describe the steps involved in fine-tuning CodeGemma for instruction following. CodeGemma was released with a checkpoint already fine-tuned for instruction following, but here we'll learn how we can tune our own model starting from the pre-trained checkpoint to achieve a similar outcome.

Full fine-tuning is more resource-intensive than low-rank adaptation, so for SFT we'll need multiple GPUs, as opposed to the single GPU used for LoRA.
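
For orientation, an SFT launch differs from the LoRA sketch above mainly in disabling the PEFT adapter and spreading the model across several GPUs. As before, the script path and override names are assumptions that vary by NeMo release; the notebook contains the exact command.

```bash
# Illustrative sketch only: full-parameter SFT across 8 GPUs; option names are
# assumptions and release-dependent -- follow the notebook for the real command.
python /opt/NeMo/examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py \
    trainer.devices=8 \
    model.restore_from_path=codegemma_7b_base.nemo \
    model.peft.peft_scheme=null \
    model.tensor_model_parallel_size=4 \
    model.data.train_ds.file_names=["train.jsonl"]
```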

[Get Started Here](./)

## Download the base model

For all of our customization and deployment processes, we'll need to start off with a pre-trained version of CodeGemma in the `.nemo` format. You can download the base model in `.nemo` format from the NVIDIA GPU Cloud, or convert checkpoints from another framework into a `.nemo` file. You can choose to use the 2B parameter or 7B parameter CodeGemma models for this notebook -- the 2B model will be faster to customize, but the 7B model will be more capable.

You can download either model from the NVIDIA NGC Catalog, using the NGC CLI. The instructions to install and configure the NGC CLI can be found [here](https://ngc.nvidia.com/setup/installers/cli).

To download the model, execute one of the following commands, based on which model you want to use:

```bash
ngc registry model download-version "nvidia/nemo/codegemma_2b_base:1.0"
```

or

```bash
ngc registry model download-version "nvidia/nemo/codegemma_7b_base:1.0"
```
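
Either command downloads into a versioned directory under your current working folder, following the NGC CLI's usual `<model>_v<version>` naming. The exact `.nemo` file name inside is release-dependent, so it's worth checking before pointing the notebooks at it:

```bash
# Directory name follows NGC's <model>_v<version> convention; the .nemo file
# name inside may vary by release.
ls codegemma_2b_base_v1.0/
```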

## Getting NeMo Framework

NVIDIA NeMo Framework is a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MMs), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by leveraging existing code and pretrained models.

You can pull a container that includes the version of NeMo Framework and all dependencies needed for these notebooks with the following:

```bash
docker pull nvcr.io/nvidia/nemo:24.03.codegemma
```

The best way to run these notebooks is from within the container. You can do that by launching the container with the following command:

```bash
docker run -it --rm --gpus all --ipc host --network host -v $(pwd):/workspace nvcr.io/nvidia/nemo:24.03.codegemma
```

Then, from within the container, start the Jupyter server with:

```bash
jupyter lab --no-browser --port=8080 --allow-root --ip 0.0.0.0
```
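
Because the container shares the host network, the server is reachable at `http://localhost:8080` on the host (use the tokenized URL that Jupyter prints). If the container runs on a remote machine, you can forward the port first; `user` and `remote-host` below are placeholders for your own values:

```bash
# Placeholders: substitute your own login and host name.
ssh -L 8080:localhost:8080 user@remote-host
```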
