Animal behavior is closely linked to an animal's internal state and external environment. Quantifying animal behavior is a fundamental step in ecology, neuroscience, psychology, and various other fields. However, enduring challenges still impede multi-animal tracking from advancing towards higher accuracy, larger scale, and more complex scenarios, especially the similar appearance and frequent interactions of animals of the same species.
Growing demands in quantitative ethology have motivated concerted efforts to develop high-accuracy and generalized tracking methods. Here, we present UDMT, the first unsupervised multi-animal tracking method that achieves state-of-the-art performance without requiring any human annotations. Users only need to click on the animals in the first frame to specify the individuals they want to track.
We demonstrate the state-of-the-art performance of UDMT on five different kinds of model animals, including mice, rats, Drosophila, C. elegans, and Betta splendens. Combined with a head-mounted miniaturized microscope, we recorded the calcium transients synchronized with mouse locomotion to decipher the correlations between animal locomotion and neural activity.
For more details, please see the companion paper where the method first appeared: "Unsupervised multi-animal tracking for quantitative ethology".
- Ubuntu 20.04 (or newer)
- Python 3.8
- PyTorch 1.7.1
- NVIDIA GPU (GeForce RTX 3090) + CUDA (11.7)
- Create a virtual environment and install PyTorch. In the 3rd step, please select the correct PyTorch version that matches your CUDA version from https://pytorch.org/get-started/previous-versions/. An optional sanity check for the finished installation is sketched after these steps.
$ conda create -n udmt python=3.8
$ conda activate udmt
$ conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
$ sudo apt-get install ninja-build
$ sudo apt-get install libturbojpeg
- We have made an installable pip release of UDMT [pypi]. You can install it by entering the following command:
$ pip install udmt-pip
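If you want to confirm the installation before moving on, the short script below is a minimal sanity check (run it inside the udmt environment; the expected version strings simply mirror the commands above, and the module name udmt is assumed from the pip package and the GUI launch command later in this README):
import torch
import udmt  # module name assumed from the pip package / GUI launch command

print("PyTorch:", torch.__version__)               # expected: 1.7.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))   # e.g. GeForce RTX 3090
print("UDMT imported from:", udmt.__file__)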
- Windows 10
- Python 3.8
- PyTorch 1.7.1
- NVIDIA GPU (GeForce RTX 3090) + CUDA (11.0)
- Create a virtual environment and install PyTorch. In the 3rd step, please select the correct PyTorch version that matches your CUDA version from https://pytorch.org/get-started/previous-versions/.
$ conda create -n udmt python=3.8
$ conda activate udmt
$ pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
- We have made an installable pip release of UDMT [pypi]. You can install it by entering the following command:
$ pip install udmt-pip
- Install Precise RoI Pooling: If your environment is the same as ours, directly copy <UDMT_install_path>\udmt\env_file\prroi_pool.pyd to <Anaconda_install_path>\anaconda3\envs\udmt\Lib\site-packages. Otherwise, build the prroi_pool.pyd file with Visual Studio following the tutorial.
- Install libjpeg-turbo: Download the installer from the official libjpeg-turbo SourceForge repository, install it, and copy <libjpeg-turbo_install_path>\libjpeg-turbo64\bin\turbojpeg.dll to a directory on the system PATH, e.g. C:\Windows\System32. A quick check for both dependencies is sketched after this list.
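The snippet below is a minimal sketch for verifying the two Windows-specific dependencies from inside the udmt environment. It assumes the .pyd file is importable under the module name prroi_pool (taken from its file name) and that turbojpeg.dll can be resolved through the system PATH:
import ctypes
import importlib.util

# turbojpeg.dll should be resolvable through the system PATH after copying it
ctypes.CDLL("turbojpeg.dll")
print("turbojpeg.dll loaded")

# prroi_pool.pyd should be importable once it sits in site-packages
spec = importlib.util.find_spec("prroi_pool")
print("prroi_pool found at:", spec.origin if spec else "not found")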
We have released the Python source code and a user-friendly GUI of UDMT to make it an easily accessible tool for quantitative ethology and neuroethology.
- Once you have UDMT installed, start by opening a terminal. Activate the environment and download the code with:
$ conda activate udmt
$ git clone https://github.com/cabooster/UDMT.git
$ cd UDMT/
- To launch the GUI, simply enter the following command in the terminal:
$ python -m udmt.gui.launch_script
If you would like to try the GUI with a smaller dataset first, we provide two demo videos (a 5-mice video and a 7-mice video) together with the corresponding pre-trained models.
- When creating a project, you can select the folder containing the demo video to import it.
- If you want to skip the Network Training process, place the downloaded model into the your_project_path/models folder before running the Analyze Video step, as sketched below.
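As a sketch of that placement step, the snippet below copies a downloaded model file into the expected folder (the project path and model file name are placeholders; substitute the paths on your machine):
from pathlib import Path
import shutil

project = Path("your_project_path")        # the project folder created by the GUI
models_dir = project / "models"
models_dir.mkdir(parents=True, exist_ok=True)

# "udmt_5mice_model.pth" is a placeholder name for the downloaded pre-trained model
shutil.copy("udmt_5mice_model.pth", models_dir)
print("Model placed in:", models_dir)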
Below is the tutorial video for the GUI. For detailed instructions on installing and using the GUI, please visit our website.
More demo videos are presented on our website.
If you use this code, please cite the companion paper where the original method appeared:
- Yixin Li, Xinyang Li, Qi Zhang, et al. Unsupervised multi-animal tracking for quantitative ethology. bioRxiv (2025). https://doi.org/10.1101/2025.01.23.634625
@article {Li2025.01.23.634625,
title = {Unsupervised multi-animal tracking for quantitative ethology},
author = {Li, Yixin and Li, Xinyang and Zhang, Qi and Zhang, Yuanlong and Fan, Jiaqi and Lu, Zhi and Li, Ziwei and Wu, Jiamin and Dai, Qionghai},
journal = {bioRxiv},
year = {2025},
publisher = {Cold Spring Harbor Laboratory},
doi = {10.1101/2025.01.23.634625}
}