Commit 70d3a70
No public description
PiperOrigin-RevId: 631083668
goldvitaly authored and colaboratory-team committed May 6, 2024
1 parent b844a14 commit 70d3a70
Showing 2 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion google/colab/_dataframe_summarizer.py
@@ -72,7 +72,7 @@ def _summarize_columns(df: pd.DataFrame, n_samples: int = 3):
         properties["dtype"] = "string"
       except TypeError:
         properties["dtype"] = str(dtype)
-    elif pd.api.types.is_categorical_dtype(column):
+    elif isinstance(column, pd.CategoricalDtype):
       properties["dtype"] = "category"
     elif pd.api.types.is_datetime64_any_dtype(column):
       properties["dtype"] = "date"
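For context on the Python change above: `pd.api.types.is_categorical_dtype` is deprecated in pandas 2.x in favor of an `isinstance` check against `pd.CategoricalDtype`. The sketch below is a minimal, hypothetical illustration of the same dtype-dispatch pattern, not the summarizer's actual code: `describe_dtype` is an invented helper, and it applies the check to `series.dtype` explicitly, since the excerpt alone doesn't show whether `column` is a Series or a dtype object.

```python
import pandas as pd


def describe_dtype(series: pd.Series) -> str:
  """Classifies a column roughly the way the summarizer's elif chain does."""
  # Deprecated in pandas 2.x: pd.api.types.is_categorical_dtype(series).
  # The recommended replacement is an isinstance check on the dtype object.
  if isinstance(series.dtype, pd.CategoricalDtype):
    return "category"
  elif pd.api.types.is_datetime64_any_dtype(series):
    return "date"
  return str(series.dtype)


df = pd.DataFrame({
    "color": pd.Categorical(["red", "green", "red"]),
    "when": pd.to_datetime(["2024-05-06", "2024-05-07", "2024-05-08"]),
})
print(describe_dtype(df["color"]))  # category
print(describe_dtype(df["when"]))   # date
```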
4 changes: 2 additions & 2 deletions notebooks/Gemma_Distributed_Fine_tuning_on_TPU.ipynb
@@ -48,7 +48,7 @@
"source": [
"## Overview\n",
"\n",
"Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. Gemma can be further finetuned to suit specific needs. But Large Language Models, such as Gemma, can be very large in size and some of them may not fit on a single accelerator for finetuning. In this case there are two general approaches for finetuning them:\n",
"Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. Gemma can be further finetuned to suit specific needs. But Large Language Models, such as Gemma, can be very large in size and some of them may not fit on a sing accelerator for finetuning. In this case there are two general approaches for finetuning them:\n",
"1. Parameter Efficient Fine-Tuning (PEFT), which seeks to shrink the effective model size by sacrificing some fidelity. LoRA falls in this category and the [Fine-tune Gemma models in Keras using LoRA](https://ai.google.dev/gemma/docs/lora_tuning) tutorial demonstrates how to finetune the Gemma 7B model `gemma_instruct_7b_en` with LoRA using KerasNLP on a single GPU.\n",
"2. Full parameter finetuning with model parallelism. Model parallelism distributes a single model's weights across multiple devices and enables horizontal scaling. You can find out more about distributed training in this [Keras guide](https://keras.io/guides/distribution/).\n",
"\n",
@@ -4232,4 +4232,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}
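The notebook excerpt above contrasts parameter-efficient finetuning (LoRA) with full-parameter finetuning under model parallelism. As a rough illustration of the second approach, here is a minimal sketch using Keras 3's `keras.distribution` API. Everything below is an assumption rather than the notebook's actual cells: it presumes the JAX backend, eight local accelerators (e.g. a TPU v3-8), and illustrative weight-path regexes, and the `ModelParallel` signature has shifted across Keras releases.

```python
# Sketch only: assumes Keras 3 on the JAX backend, KerasNLP installed, and
# 8 local accelerator devices. Layout rules are illustrative, not the
# notebook's exact configuration.
import keras
import keras_nlp

# 1 x 8 mesh: no data parallelism, shard model weights across 8 devices.
device_mesh = keras.distribution.DeviceMesh(
    shape=(1, 8),
    axis_names=["batch", "model"],
    devices=keras.distribution.list_devices(),
)

# Map weight paths (by regex) to shardings over the "model" mesh axis.
layout_map = keras.distribution.LayoutMap(device_mesh)
layout_map["token_embedding/embeddings"] = ("model", None)
layout_map["decoder_block.*attention.*(query|key|value).*kernel"] = (
    "model", None, None)
layout_map["decoder_block.*ffw_gating.*kernel"] = (None, "model")

# Activate model parallelism before building the model, so that weights are
# created already sharded across the mesh.
keras.distribution.set_distribution(
    keras.distribution.ModelParallel(
        device_mesh=device_mesh, layout_map=layout_map, batch_dim_name="batch"
    )
)

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_instruct_7b_en")
gemma_lm.summary()
```

By contrast, the LoRA route linked in the first approach keeps the base weights frozen and, per that tutorial, enables adapters on the same model with `gemma_lm.backbone.enable_lora(rank=4)` on a single accelerator.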
