Analysis: extraction

An example of a bad-quality recording can be found here and a good-quality recording here.

Let's say you have a folder of raw data (untar it first if you made a tarball during acquisition),

.
├── depth.dat
├── depth_ts.txt
├── metadata.json
├── proc
├── rgb.mp4
└── rgb_ts.txt
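
Before extracting, it can help to sanity-check the session. A minimal Python sketch (assuming metadata.json is plain JSON and depth_ts.txt holds one timestamp per line, with the timestamp in the first column),

import json

# Pretty-print the session metadata (key names vary by acquisition setup)
with open('metadata.json') as f:
    print(json.dumps(json.load(f), indent=2))

# Count recorded depth frames -- assuming one timestamp per line
with open('depth_ts.txt') as f:
    timestamps = [float(line.split()[0]) for line in f if line.strip()]
print(f'{len(timestamps)} depth frames recorded')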

Extraction and alignment of the mouse from raw depth videos is done using the MoSeq2-extract library. Most usage patterns can be discovered by passing --help on the command line. To see the available commands,

moseq2-extract --help

Then to see the options available for each command, e.g.,

moseq2-extract extract --help

Extracting data (interactive)

To extract data, simply point moseq2-extract extract at any depth.dat file,

moseq2-extract extract ~/my_awesome_data/depth.dat

This automatically selects an ROI and extracts data to the proc folder where depth.dat is located. When the extraction completes, the results are stored in proc/results_00.h5, and a movie of the extraction is stored in proc/results_00.mp4. You will likely want to use a flip classifier, which corrects for any 180 degree ambiguities in the angle detection. To download one of the pre-trained classifiers, use this command,

moseq2-extract download-flip-file

This will present a menu of classifiers you can download to ~/moseq2. After downloading, use a flip classifier with the --flip-classifier option,

moseq2-extract extract depth.dat --flip-classifier ~/moseq2/new_flip_classifier.pkl
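
For intuition, all the flip classifier does is flag frames where the mouse is facing the wrong way so they can be rotated 180 degrees in-plane. A toy numpy sketch of that correction step (not MoSeq2's actual implementation),

import numpy as np

def apply_flips(frames, flips):
    # frames: (nframes, height, width) cropped depth frames
    # flips: boolean array, True where the mouse is facing the wrong way
    out = frames.copy()
    out[flips] = np.rot90(out[flips], k=2, axes=(1, 2))
    return out

frames = np.random.rand(10, 80, 80)  # stand-in for real cropped frames
flips = np.zeros(10, dtype=bool)
flips[3] = True                      # pretend frame 3 is backwards
corrected = apply_flips(frames, flips)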

When the extraction completes, your folder should look like this,

.
├── depth.dat
├── depth_ts.txt
├── metadata.json
├── proc
│   ├── bground.tiff
│   ├── first_frame.tiff
│   ├── results_00.h5
│   ├── results_00.mp4
│   ├── results_00.yaml
│   └── roi_00.tiff
├── rgb.mp4
└── rgb_ts.txt

If everything worked, you should see an extraction that looks (within reason) like the good-quality example linked at the top of this page.
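
If you want to inspect the extraction output programmatically, here is a minimal sketch using h5py (this just lists whatever the file actually contains; no dataset names are assumed),

import h5py

# Print the path of every group and dataset in the extraction output
with h5py.File('proc/results_00.h5', 'r') as f:
    f.visit(print)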

Extracting data (batch)

For batch extractions, you will need moseq2-batch installed. First, you need a configuration file to use as the basis for the batch job. One with the defaults can be generated by issuing the following,

moseq2-extract generate-config

This produces a file, config.yaml, that specifies the options for your batch. You will need to copy the path to your flip classifier to the key flip_classifier. As an example,

crop_size:
- 80
- 80
bg_roi_dilate:
- 10
- 10
bg_roi_shape: ellipse
bg_roi_index: 0
bg_roi_weights:
- 1
- 0.1
- 1
min_height: 10
max_height: 100
fps: 30
flip_classifier: /home/jm447/moseq2/flip_classifier_k2_largemicewithfiber.pkl
flip_classifier_smoothing: 51
use_tracking_model: true
tracking_model_ll_threshold: -100
tracking_model_mask_threshold: -16
tracking_model_ll_clip: -100
tracking_model_segment: true
cable_filter_iters: 0
cable_filter_shape: rectangle
cable_filter_size:
- 5
- 5
tail_filter_iters: 1
tail_filter_size:
- 9
- 9
tail_filter_shape: ellipse
spatial_filter_size:
- 3
temporal_filter_size:
- 0
chunk_size: 1000
chunk_overlap: 0
output_dir:
write_movie: true
use_plane_bground: false
config_file:
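
If you would rather set keys programmatically than hand-edit the file, a minimal sketch using PyYAML (the classifier path below is just an example),

import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

# Point the batch at your downloaded flip classifier
config['flip_classifier'] = '/home/jm447/moseq2/flip_classifier_k2_largemicewithfiber.pkl'

with open('config.yaml', 'w') as f:
    yaml.safe_dump(config, f, sort_keys=False)  # keep the original key order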

Next, run the following command to format the extraction commands for a Slurm cluster and pipe them to a file, batch_command.sh, for later execution.

moseq2-batch extract-batch -c config.yaml > batch_command.sh
chmod a+x batch_command.sh
./batch_command.sh

The chmod line gives the file execute permissions, and the final line runs it. Slurm should report that jobs have been submitted.

Flip classification

A Jupyter notebook detailing how to train a flip classifier can be found here.
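
For a rough idea of what that training looks like: a flip classifier is just a binary classifier that distinguishes correctly-oriented frames from 180-degree-rotated ones. A toy scikit-learn sketch on synthetic data (the notebook has the real recipe; this only illustrates the shape of the problem),

import numpy as np
from sklearn.ensemble import RandomForestClassifier

frames = np.random.rand(200, 80, 80)          # stand-in for labeled, correctly-oriented frames
flipped = np.rot90(frames, k=2, axes=(1, 2))  # 180-degree-rotated copies

X = np.concatenate([frames, flipped]).reshape(400, -1)  # flatten pixels to features
y = np.concatenate([np.zeros(200), np.ones(200)])       # 0 = upright, 1 = flipped

clf = RandomForestClassifier(n_estimators=100).fit(X, y)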