From b4b706a1be2a91879e8d9a38d6658e72fbae7a58 Mon Sep 17 00:00:00 2001 From: Katie Wetstone <46792169+klwetstone@users.noreply.github.com> Date: Tue, 26 Oct 2021 12:51:46 -0700 Subject: [PATCH] Final review for docs formatting (#159) * update models page link, limit CLI demo vid width * highlight correct lines * remove spaces from chimp&see * general densepose page edits * standard capitalization DensePose * densepose typos * remove megadetectorlite config default None * add note about pulling in model defaults * config formatting fixes * typos * correct link * copy editing * Update docs/docs/models/densepose.md * Update docs/docs/yaml-config.md Co-authored-by: Emily Miller --- HISTORY.md | 2 +- docs/docs/configurations.md | 18 +++++++++-------- docs/docs/contribute/index.md | 10 +++++----- docs/docs/extra-options.md | 6 +++--- .../docs/models/{denspose.md => densepose.md} | 20 +++++++++---------- docs/docs/models/species-detection.md | 5 ++--- docs/docs/predict-tutorial.md | 10 +++++----- docs/docs/quickstart.md | 8 ++++---- docs/docs/train-tutorial.md | 10 +++++----- docs/docs/yaml-config.md | 4 ++-- docs/mkdocs.yml | 2 +- 11 files changed, 48 insertions(+), 47 deletions(-) rename docs/docs/models/{denspose.md => densepose.md} (72%) diff --git a/HISTORY.md b/HISTORY.md index 9912d643..984ff62d 100644 --- a/HISTORY.md +++ b/HISTORY.md @@ -12,7 +12,7 @@ The core algorithm in `zamba` v1 was a [stacked ensemble](https://en.wikipedia.o learning models, whose individual predictions were combined in the second level of the stack to form the final prediction. -In v2, the stacked ensemble algorithm from v1 is replaced with three more powerful [single-model options](../models/index.md): `time_distributed`, `slowfast`, and `european`. The new models utilize state-of-the-art image and video classification architectures, and are able to outperform the much more computationally intensive stacked ensemble model. +In v2, the stacked ensemble algorithm from v1 is replaced with three more powerful [single-model options](models/species-detection.md): `time_distributed`, `slowfast`, and `european`. The new models utilize state-of-the-art image and video classification architectures, and are able to outperform the much more computationally intensive stacked ensemble model. ### New geographies and species diff --git a/docs/docs/configurations.md b/docs/docs/configurations.md index 10caefa1..6ca2ab3a 100644 --- a/docs/docs/configurations.md +++ b/docs/docs/configurations.md @@ -16,7 +16,9 @@ Here's a helpful diagram which shows how everything is related. The [`VideoLoaderConfig` class](api-reference/data-video.md#zamba.data.video.VideoLoaderConfig) defines all of the optional parameters that can be specified for how videos are loaded before either inference or training. This includes selecting which frames to use from each video. -All video loading arguments can be specified either in a [YAML file](yaml-config.md) or when instantiating the [`VideoLoaderConfig`](configurations.md#video-loading-arguments) class in Python. Some can also be specified directly in the command line. +All video loading arguments can be specified either in a [YAML file](yaml-config.md) or when instantiating the [`VideoLoaderConfig` class](api-reference/data-video.md#zamba.data.video.VideoLoaderConfig) in Python. Some can also be specified directly in the command line. + +Each model comes with a default video loading configuration. 
If no user-specified video loading configuration is passed - either through a YAML file or the Python `VideoLoaderConfig` class - all video loading arguments will be set based on the defaults for the given model. === "YAML file" ```yaml @@ -87,7 +89,7 @@ Only load frames that correspond to [scene changes](http://www.ffmpeg.org/ffmpeg #### `megadetector_lite_config (MegadetectorLiteYoloXConfig, optional)` -The `megadetector_lite_config` is used to specify any parameters that should be passed to the [MegadetectorLite model](models/index.md#megadetectorlite) for frame selection. For all possible options, see the [`MegadetectorLiteYoloXConfig` class](api-reference/models-megadetector_lite_yolox.md#zamba.models.megadetector_lite_yolox.MegadetectorLiteYoloXConfig). If `megadetector_lite_config` is `None` (the default), the MegadetectorLite model will not be used to select frames. +The `megadetector_lite_config` is used to specify any parameters that should be passed to the [MegadetectorLite model](models/species-detection.md#megadetectorlite) for frame selection. For all possible options, see the [`MegadetectorLiteYoloXConfig` class](api-reference/models-megadetector_lite_yolox.md#zamba.models.megadetector_lite_yolox.MegadetectorLiteYoloXConfig). If `megadetector_lite_config` is `None` (the default), the MegadetectorLite model will not be used to select frames. #### `frame_selection_height (int, optional), frame_selection_width (int, optional)` @@ -182,7 +184,7 @@ Path to a model checkpoint to load and use for inference. The default is `None`, #### `model_name (time_distributed|slowfast|european, optional)` -Name of the model to use for inference. The three model options that ship with `zamba` are `time_distributed`, `slowfast`, and `european`. See the [Available Models](models/index.md) page for details. Defaults to `time_distributed` +Name of the model to use for inference. The three model options that ship with `zamba` are `time_distributed`, `slowfast`, and `european`. See the [Available Models](models/species-detection.md) page for details. Defaults to `time_distributed` #### `gpus (int, optional)` @@ -233,7 +235,7 @@ By default, before kicking off inference `zamba` will iterate through all of the #### `model_cache_dir (Path, optional)` -Cache directory where downloaded model weights will be saved. If None and the MODEL_CACHE_DIR environment variable is not set, will use your default cache directory (e.g. `~/.cache`). Defaults to `None` +Cache directory where downloaded model weights will be saved. If None and the `MODEL_CACHE_DIR` environment variable is not set, will use your default cache directory (e.g. `~/.cache`). Defaults to `None` @@ -291,11 +293,11 @@ Path to a model checkpoint to load and resume training from. The default is `Non #### `scheduler_config (zamba.models.config.SchedulerConfig, optional)` -A [PyTorch learning rate schedule](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) to adjust the learning rate based on the number of epochs. Scheduler can either be `default` (the default), `None`, or a [`torch.optim.lr_scheduler`](https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py). If `default`, +A [PyTorch learning rate schedule](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) to adjust the learning rate based on the number of epochs. Scheduler can either be `default` (the default), `None`, or a [`torch.optim.lr_scheduler`](https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py). 
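As a rough sketch of how a custom scheduler might be passed in when training from Python (the `SchedulerConfig` field names `scheduler` and `scheduler_params` below are assumptions rather than confirmed API, and the values are placeholders; check the API reference for the exact signature):

```python
# A hedged sketch: assumes SchedulerConfig takes a torch scheduler name plus kwargs.
from zamba.models.config import SchedulerConfig, TrainConfig
from zamba.models.model_manager import train_model

scheduler_config = SchedulerConfig(
    scheduler="MultiStepLR",  # assumed: name of a torch.optim.lr_scheduler class
    scheduler_params={"milestones": [3], "gamma": 0.5},  # assumed: kwargs for that scheduler
)

train_config = TrainConfig(
    data_dir="example_vids/",
    labels="example_labels.csv",
    scheduler_config=scheduler_config,
)

train_model(train_config=train_config)
```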
#### `model_name (time_distributed|slowfast|european, optional)` -Name of the model to use for inference. The three model options that ship with `zamba` are `time_distributed`, `slowfast`, and `european`. See the [Available Models](models/index.md) page for details. Defaults to `time_distributed` +Name of the model to use for inference. The three model options that ship with `zamba` are `time_distributed`, `slowfast`, and `european`. See the [Available Models](models/species-detection.md) page for details. Defaults to `time_distributed` #### `dry_run (bool, optional)` @@ -307,7 +309,7 @@ The batch size to use for training. Defaults to `2` #### `auto_lr_find (bool, optional)` -Whether to run a [learning rate finder algorithm](https://arxiv.org/abs/1506.01186) when calling `pytorch_lightning.trainer.tune()` to try to find an optimal initial learning rate. The learning rate finder is not guaranteed to find a good learning rate; depending on the dataset, it can select a learning rate that leads to poor model training. Use with caution. See the PyTorch Lightning [docs](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#auto-lr-find) for more details. Defaults to `False`. +Whether to run a [learning rate finder algorithm](https://arxiv.org/abs/1506.01186) when calling `pytorch_lightning.trainer.tune()` to try to find an optimal initial learning rate. The learning rate finder is not guaranteed to find a good learning rate; depending on the dataset, it can select a learning rate that leads to poor model training. Use with caution. See the PyTorch Lightning [docs](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#auto-lr-find) for more details. Defaults to `False` #### `backbone_finetune_config (zamba.models.config.BackboneFinetuneConfig, optional)` @@ -343,7 +345,7 @@ Directory in which to save model checkpoint and configuration file. If not speci #### `overwrite (bool, optional)` - If `True`, will save outputs in `save_dir` and overwrite the directory if it exists. If False, will create an auto-incremented `version_n` folder within `save_dir` with model outputs. Defaults to `False`. + If `True`, will save outputs in `save_dir` and overwrite the directory if it exists. If False, will create an auto-incremented `version_n` folder within `save_dir` with model outputs. Defaults to `False` #### `skip_load_validation (bool, optional)` diff --git a/docs/docs/contribute/index.md b/docs/docs/contribute/index.md index a5ae9b76..554b3877 100644 --- a/docs/docs/contribute/index.md +++ b/docs/docs/contribute/index.md @@ -2,12 +2,12 @@ `zamba` is an open source project, which means _you_ can help make it better! -## Develop the github repository +## Develop the GitHub repository -To get involved, check out the Github [code repository](https://github.com/drivendataorg/zamba). +To get involved, check out the GitHub [code repository](https://github.com/drivendataorg/zamba). There you can find [open issues](https://github.com/drivendataorg/zamba/issues) with comments and links to help you along. -`zamba` uses continuous integration and test-driven-development to ensure that we always have a working project. So what are you waiting for? `git` going! +`zamba` uses continuous integration and test-driven development to ensure that we always have a working project. So what are you waiting for? `git` going! 
## Installation for development

$ pip install -r requirements-dev.txt
```

## Running the `zamba` test suite

-The included `Makefile` contains code that uses pytest to run all tests in `zamba/tests`.
+The included [`Makefile`](https://github.com/drivendataorg/zamba/blob/master/Makefile) contains code that uses pytest to run all tests in `zamba/tests`.

-The command is (from the project root),
+The command is (from the project root):

```console
$ make tests

diff --git a/docs/docs/extra-options.md b/docs/docs/extra-options.md
index 8772844e..f93960b9 100644
--- a/docs/docs/extra-options.md
+++ b/docs/docs/extra-options.md
@@ -45,7 +45,7 @@ Say that you have a large number of videos, and you are more concerned with dete
 === "Python"
     In Python, video resizing can be specified when `VideoLoaderConfig` is instantiated:
-    ```python hl_lines="6 7 8"
+    ```python hl_lines="7 8 9"
     from zamba.data.video import VideoLoaderConfig
     from zamba.models.config import PredictConfig
     from zamba.models.model_manager import predict_model
@@ -111,7 +111,7 @@ A simple option is to sample frames that are evenly distributed throughout a vid
### MegadetectorLite

-You can use a pretrained object detection model called [MegadetectorLite](models/index.md#megadetectorlite) to select only the frames that are mostly likely to contain an animal. This is the default strategy for all three pretrained models. The parameter `megadetector_lite_config` is used to specify any arguments that should be passed to the MegadetectorLite model. If `megadetector_lite_config` is None, the MegadetectorLite model will not be used.
+You can use a pretrained object detection model called [MegadetectorLite](models/species-detection.md#megadetectorlite) to select only the frames that are most likely to contain an animal. This is the default strategy for all three pretrained models. The parameter `megadetector_lite_config` is used to specify any arguments that should be passed to the MegadetectorLite model. If `megadetector_lite_config` is `None`, the MegadetectorLite model will not be used.

For example, to take the 16 frames with the highest probability of detection:

@@ -144,7 +144,7 @@ For example, to take the 16 frames with the highest probability of detection:
     train_model(video_loader_config=video_loader_config, train_config=train_config)
     ```

-If you are using the [MegadetectorLite](models/index.md#megadetectorlite) for frame selection, there are two ways that you can specify frame resizing:
+If you are using the [MegadetectorLite](models/species-detection.md#megadetectorlite) for frame selection, there are two ways that you can specify frame resizing:

- `frame_selection_width` and `frame_selection_height` resize images *before* they are input to the frame selection method. If both are `None`, the full size images will be used during frame selection. Using full size images for selection is recommended for better detection of smaller species, but will slow down training and inference.
- `model_input_height` and `model_input_width` resize images *after* frame selection. These specify the image size that is passed to the actual model.
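A minimal sketch of how these pieces might fit together in Python, using the class names referenced above (the specific values are placeholders rather than recommended settings, and the `confidence` and `n_frames` fields of `MegadetectorLiteYoloXConfig` are assumptions; see the class documentation for the exact options):

```python
# A sketch only: values are placeholders, and some field names are assumptions.
from zamba.data.video import VideoLoaderConfig
from zamba.models.config import PredictConfig
from zamba.models.megadetector_lite_yolox import MegadetectorLiteYoloXConfig
from zamba.models.model_manager import predict_model

video_loader_config = VideoLoaderConfig(
    # frame selection sees full-size frames (better for small species, but slower)
    frame_selection_height=None,
    frame_selection_width=None,
    # frames that survive selection are resized before entering the classifier
    model_input_height=240,
    model_input_width=426,
    # keep the 16 frames with the highest detection probability
    megadetector_lite_config=MegadetectorLiteYoloXConfig(confidence=0.25, n_frames=16),
    total_frames=16,
)

predict_config = PredictConfig(data_dir="example_vids/")
predict_model(video_loader_config=video_loader_config, predict_config=predict_config)
```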
diff --git a/docs/docs/models/denspose.md b/docs/docs/models/densepose.md
similarity index 72%
rename from docs/docs/models/denspose.md
rename to docs/docs/models/densepose.md
index 9325e787..b068c7f2 100644
--- a/docs/docs/models/denspose.md
+++ b/docs/docs/models/densepose.md
@@ -1,12 +1,12 @@
-# Densepose
+# DensePose

## Background

-Facebook AI Research has published a model, DensePose ([Neverova et al, 2021](https://arxiv.org/abs/2011.12438v1)), which can be used to get segmentations for animals that appear in videos. This was trained on the following animals, but often works for other species as well: sheep, zebra, horse, giraffe, elephant, cow, ear, cat, dog. Here's an example of the segmentation output for a frame:
+DensePose ([Neverova et al, 2021](https://arxiv.org/abs/2011.12438v1)) is a model published by Facebook AI Research that can be used to get segmentations for animals that appear in videos. The model was trained on the following animals, but often works for other species as well: bear, cat, cow, dog, elephant, giraffe, horse, sheep, zebra. Here's an example of the segmentation output for a frame:

![segmentation of duiker](../media/seg_out.jpg)

-Additionally, the model provides mapping of the segmentation output to specific anatomy for chimpanzees. This can be helpful for determining the orientation of chimpanzees in videos and for their behaviors. Here is an example of what that output looks like:
+Additionally, the model provides mapping of the segmentation output to specific anatomy for chimpanzees. This can be helpful for determining the orientation of chimpanzees in videos and for understanding their behaviors. Here is an example of what that output looks like:

![chimpanzee texture output](../media/texture_out.png)

For more information on the algorithms and outputs of the DensePose model, see t

## Outputs

-The Zamba package supports running Densepose on videos to generate three types of outputs:
+The Zamba package supports running DensePose on videos to generate three types of outputs:

 - A `.json` file with details of segmentations per video frame.
- - A `.mp4` file where the original video has the segmentation rendered on top of animal so that the output can be vsiually inspected.
- - A `.csv` (when `--output-type chimp_anatomy`) that contains the height and width of the bounding box around each chimpanzee, the frame number and timestamp of the observation, and the percentage of pixels in the bounding box that correspond with each anatomical part.
+ - A `.mp4` file where the original video has the segmentation rendered on top of the animal so that the output can be visually inspected.
+ - A `.csv` that contains the height and width of the bounding box around each chimpanzee, the frame number and timestamp of the observation, and the percentage of pixels in the bounding box that correspond with each anatomical part. This is specified by adding `--output-type chimp_anatomy`.

-Generally, running the densepose model is computationally intensive. It is recommended to run the model at a relatively low framerate (e.g., 1 frame per second) to generate outputs for a video. Another caveat is that because the output JSON output contains the full embedding, these files can be quite large. These are not written out by default.
+Running the DensePose model is fairly computationally intensive. It is recommended to run the model at a relatively low framerate (e.g., 1 frame per second) to generate outputs for a video.
JSON output files can also be quite large because they contain the full embedding. These are not written out by default. -In order to use the densepose model, you must have PyTorch already installed on your system, and then you must install the `densepose` extra: +In order to use the DensePose model, you must have [PyTorch](https://pytorch.org/get-started/locally/) already installed on your system. Then you must install the `densepose` extra: ```bash -pip install torch # see https://pytorch.org/get-started/locally/ +pip install torch pip install "zamba[densepose]" ``` @@ -47,7 +47,7 @@ Once that is done, here's how to run the DensePose model: ## Getting help diff --git a/docs/docs/models/species-detection.md b/docs/docs/models/species-detection.md index 43b14719..13529358 100644 --- a/docs/docs/models/species-detection.md +++ b/docs/docs/models/species-detection.md @@ -99,9 +99,7 @@ The `time_distributed` model was built by re-training a well-known image classif ### Training data -`time_distributed` was trained using data collected and annotated by partners at [The Max Planck Institute for -Evolutionary Anthropology](https://www.eva.mpg.de/index.html) and [Chimp & -See](https://www.chimpandsee.org/). +`time_distributed` was trained using data collected and annotated by partners at [The Max Planck Institute for Evolutionary Anthropology](https://www.eva.mpg.de/index.html) and [Chimp&See](https://www.chimpandsee.org/). The data included camera trap videos from: @@ -266,6 +264,7 @@ video_loader_config: ``` You can choose different frame selection methods and vary the size of the images that are used by passing in a custom [YAML configuration file](../yaml-config.md). The two requirements for the `slowfast` model are that: + - the video loader must return 32 frames - videos inputted into the model must be at least 200 x 200 pixels diff --git a/docs/docs/predict-tutorial.md b/docs/docs/predict-tutorial.md index f788c862..607480ca 100644 --- a/docs/docs/predict-tutorial.md +++ b/docs/docs/predict-tutorial.md @@ -6,7 +6,7 @@ This tutorial goes over the steps for using `zamba` if: * You already have `zamba` installed (for details see the [Installation](install.md) page) * You have unlabeled videos that you want to generate labels for -* The possible class species labels for your videos are included in the list of possible [zamba labels](models/index.md#species-classes). If your species are not included in this list, you can [retrain a model](train-tutorial.md) using your own labeled data and then run inference. +* The possible class species labels for your videos are included in the list of possible [zamba labels](models/species-detection.md#species-classes). If your species are not included in this list, you can [retrain a model](train-tutorial.md) using your own labeled data and then run inference. ## Basic usage: command line interface @@ -25,7 +25,7 @@ To run `zamba predict` in the command line, you must specify `--data-dir` and/or * **`--data-dir PATH`:** Path to the folder containing your videos. * **`--filepaths PATH`:** Path to a CSV file with a column for the filepath to each video you want to classify. The CSV must have a column for `filepath`. Filepaths can be absolute or relative to the data directory. -All other flags are optional. To choose a model, either `--model` or `--checkpoint` must be specified. Use `--model` to specify one of the three [pretrained models](models/index.md) that ship with `zamba`. Use `--checkpoint` to run inference with a locally saved model. 
`--model` defaults to `time_distributed`.

## Basic usage: Python package

For detailed explanations of all possible configuration arguments, see [All Opti

## Default behavior

-By default, the [`time_distributed`](models/index.md#time-distributed) model will be used. `zamba` will output a `.csv` file with rows labeled by each video filename and columns for each class (ie. species). The default prediction will store all class probabilities, so that cell (i,j) can be interpreted as *the probability that animal j is present in video i.*
+By default, the [`time_distributed`](models/species-detection.md#time-distributed) model will be used. `zamba` will output a `.csv` file with rows labeled by each video filename and columns for each class (i.e. species). The default prediction will store all class probabilities, so that cell (i,j) can be interpreted as *the probability that animal j is present in video i.*

By default, predictions will be saved to `zamba_predictions.csv` in your working directory. You can save predictions to a custom directory using the `--save-dir` argument.

@@ -96,9 +96,9 @@ Add the path to your video folder. For example, if your videos are in a folder c
### 2. Choose a model for prediction

-If your camera videos contain species common to Central or West Africa, use either the [`time_distributed` model](models/index.md#time-distributed) or [`slowfast` model](models/index.md#slowfast) model. `slowfast` is better for blank and small species detection. `time_distributed` performs better if you have many different species of interest, or are focused on duikers, chimpanzees, and/or gorillas.
+If your camera videos contain species common to Central or West Africa, use either the [`time_distributed` model](models/species-detection.md#time-distributed) or the [`slowfast` model](models/species-detection.md#slowfast). `slowfast` is better for blank and small species detection. `time_distributed` performs better if you have many different species of interest, or are focused on duikers, chimpanzees, and/or gorillas.

-If your videos contain species common to Europe, use the [`european` model](models/index.md#european).
+If your videos contain species common to Europe, use the [`european` model](models/species-detection.md#european).

Add the model name to your command. The `time_distributed` model will be used if no model is specified. For example, if you want to use the `slowfast` model to classify the videos in `example_vids`:

diff --git a/docs/docs/quickstart.md b/docs/docs/quickstart.md
index 5fce19d0..76846358 100644
--- a/docs/docs/quickstart.md
+++ b/docs/docs/quickstart.md
@@ -1,6 +1,6 @@
 # Quickstart
-
+
 This section assumes you have successfully installed `zamba` and are ready to train a model or identify species in your videos!

@@ -79,7 +79,7 @@ eleph.mp4,elephant
leopard.mp4,leopard
```

-There are three pretrained models that ship with `zamba`: `time_distributed`, `slowfast`, and `european`. Which model you should use depends on your priorities and geography (see the [Available Models](models/index.md) page for more details). By default `zamba` will use the `time_distributed` model. Add the `--model` argument to specify one of other options:
+There are three pretrained models that ship with `zamba`: `time_distributed`, `slowfast`, and `european`. Which model you should use depends on your priorities and geography (see the [Available Models](models/species-detection.md) page for more details). By default `zamba` will use the `time_distributed` model. Add the `--model` argument to specify one of the other options:

```console
$ zamba predict --data-dir example_vids/ --model slowfast
```

## Training a model

-You can continue training one of the [models](models/index.md) that ships with `zamba` by either:
+You can continue training one of the [models](models/species-detection.md) that ships with `zamba` by either:

-* Finetuning with additional labeled videos where the species are included in the list of [`zamba` class labels](models/index.md#species-classes)
+* Finetuning with additional labeled videos where the species are included in the list of [`zamba` class labels](models/species-detection.md#species-classes)
 * Finetuning with labeled videos that include new species

In either case, the commands for training are the same. Say that we have labels for the videos in the `example_vids` folder saved in `example_labels.csv`. To train a model, run:

diff --git a/docs/docs/train-tutorial.md b/docs/docs/train-tutorial.md
index dfdd0dbd..d58f525a 100644
--- a/docs/docs/train-tutorial.md
+++ b/docs/docs/train-tutorial.md
@@ -1,4 +1,4 @@
-# User tutorial: Training a model on labaled videos
+# User tutorial: Training a model on labeled videos

This section walks through how to train a model using `zamba`. If you are new to `zamba` and just want to classify some videos as soon as possible, see the [Quickstart](quickstart.md) guide.

@@ -9,7 +9,7 @@ This tutorial goes over the steps for using `zamba` if:
`zamba` can run two types of model training:

-* Finetuning a model with labels that are a subset of the possible [zamba labels](models/index.md#species-classes)
+* Finetuning a model with labels that are a subset of the possible [zamba labels](models/species-detection.md#species-classes)
 * Finetuning a model to predict an entirely new set of labels

The process is the same for both cases.

@@ -71,7 +71,7 @@ For detailed explanations of all possible configuration arguments, see [All Conf
## Default behavior

-By default, the [`time_distributed`](models/index.md#time-distributed) model will be used as a starting point. You can specify where the outputs should be saved with `--save-dir`. If no save directory is specified, `zamba` will write out incremental `version_n` folders to your current working directory. For example, a model finetuned from the provided `time_distributed` model (the default) will be saved in `version_0`.
+By default, the [`time_distributed`](models/species-detection.md#time-distributed) model will be used as a starting point. You can specify where the outputs should be saved with `--save-dir`. If no save directory is specified, `zamba` will write out incremental `version_n` folders to your current working directory. For example, a model finetuned from the provided `time_distributed` model (the default) will be saved in `version_0`.

`version_0` contains:

@@ -148,7 +148,7 @@ Add the path to your labels with `--labels`.
For example, if your videos are in #### Labels `zamba` has seen before -Your labels may be included in the list of [`zamba` class labels](models/index.md#species-classes) that the provided models are trained to predict. If so, the relevant model that ships with `zamba` will essentially be used as a checkpoint, and model training will resume from that checkpoint. +Your labels may be included in the list of [`zamba` class labels](models/species-detection.md#species-classes) that the provided models are trained to predict. If so, the relevant model that ships with `zamba` will essentially be used as a checkpoint, and model training will resume from that checkpoint. #### Completely new labels @@ -156,7 +156,7 @@ You can also train a model to predict completely new labels - the world is your ### 3. Choose a model for training -Any of the models that ship with `zamba` can be trained. If you're training on entirely new species or new ecologies, we recommend starting with the [`time_distributed` model](models/index.md#time-distributed) as this model is less computationally intensive than the [`slowfast` model](models/index.md#slowfast). +Any of the models that ship with `zamba` can be trained. If you're training on entirely new species or new ecologies, we recommend starting with the [`time_distributed` model](models/species-detection.md#time-distributed) as this model is less computationally intensive than the [`slowfast` model](models/species-detection.md#slowfast). However, if you're tuning a model to a subset of species (e.g. a `european_beaver` or `blank` model), use the model that was trained on data that is most similar to your new data. diff --git a/docs/docs/yaml-config.md b/docs/docs/yaml-config.md index f965073d..6baffabc 100644 --- a/docs/docs/yaml-config.md +++ b/docs/docs/yaml-config.md @@ -85,7 +85,7 @@ In our user tutorials, we refer to `train_model` and `predict_model` functions. In the command line, the default configuration for each model is passed in using a specified YAML file that ships with `zamba`. You can see the default configuration YAML files on [Github](https://github.com/drivendataorg/zamba/tree/master/zamba/models/official_models) in the `config.yaml` file within each model's folder. -For example, the default configuration for the [`time_distributed` model](models/index.md#time-distributed) is: +For example, the default configuration for the [`time_distributed` model](models/species-detection.md#time-distributed) is: ```yaml train_config: @@ -124,4 +124,4 @@ public_checkpoint: time_distributed_9e710aa8c92d25190a64b3b04b9122bdcb456982.ckp ## Templates -To make modifying existing mod easier, we've set up the official models as templates in the [`templates` folder](https://github.com/drivendataorg/zamba/tree/master/templates). Just fill in your data directory and labels, make any desired tweaks to the model config, and then kick off some [training](train_tutorial.md). Happy modeling! +To make modifying the existing defaults easier, we've set up the official models as templates in the [`templates` folder](https://github.com/drivendataorg/zamba/tree/master/templates). Just fill in your data directory and labels, make any desired tweaks to the model config, and then kick off some [training](train-tutorial.md). Happy modeling! 
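If it is more convenient to stay in Python, the same values can be read from a template and passed to the config classes used throughout these docs. This is a sketch under assumptions: the template filename below is hypothetical, and it relies on the `train_config` and `video_loader_config` keys shown in the default `config.yaml` above:

```python
# A sketch: the template path is hypothetical; adjust it to the file you copied.
import yaml

from zamba.data.video import VideoLoaderConfig
from zamba.models.config import TrainConfig
from zamba.models.model_manager import train_model

with open("templates/time_distributed_train_config.yaml") as f:
    config = yaml.safe_load(f)

# Mirror the YAML sections in the equivalent Python config classes.
video_loader_config = VideoLoaderConfig(**config["video_loader_config"])
train_config = TrainConfig(
    data_dir="example_vids/",      # fill in your data directory
    labels="example_labels.csv",   # fill in your labels file
    model_name=config["train_config"]["model_name"],
)

train_model(video_loader_config=video_loader_config, train_config=train_config)
```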
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 61fcf87b..a2973618 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -24,7 +24,7 @@ nav: - Guide to common optional parameters: "extra-options.md" - "Available Models": - Species detection: "models/species-detection.md" - - DensePose: "models/denspose.md" + - DensePose: "models/densepose.md" - "Advanced Options": - All configuration options: "configurations.md" - Using YAML configuration files: "yaml-config.md"