
Commit

Merge pull request #33 from MMathisLab/master
Usability enhancements
Pavol Bauer authored Mar 15, 2021
2 parents 3ab929f + d8febd8 commit d8558bc
Showing 12 changed files with 538 additions and 364 deletions.
106 changes: 106 additions & 0 deletions .gitignore
@@ -0,0 +1,106 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
#lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
*.ckpt
snapshot-*
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
10 changes: 5 additions & 5 deletions README.md
@@ -10,10 +10,10 @@ The workflow of VAME consists of 5 steps and we explain them in detail [here](ht

## Installation
To get started we recommend using [Anaconda](https://www.anaconda.com/distribution/) with Python 3.6 or higher.
Here, you can create a [virtual environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) to store all the dependencies necessary for VAME.
Here, you can create a [virtual environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) to store all the dependencies necessary for VAME. (You can also use the VAME.yaml file supplied here: simply open a terminal, run `git clone https://github.com/LINCellularNeuroscience/VAME.git`, then `cd VAME`, then `conda env create -f VAME.yaml`.)

* Install the current stable Pytorch release using the OS-dependent instructions from the [Pytorch website](https://pytorch.org/get-started/locally/). Currently, VAME is tested on PyTorch 1.5.
* Go to the locally cloned VAME directory and run `python setup.py install` in order to install VAME in your active Python environment.
* Go to the locally cloned VAME directory and run `python setup.py install` in order to install VAME in your active conda environment.
* Install the current stable PyTorch release using the OS-dependent instructions from the [Pytorch website](https://pytorch.org/get-started/locally/). Currently, VAME is tested on PyTorch 1.5. (Note: if you use the conda file we supply, PyTorch is already installed and you can skip this step.)

## Getting Started
First, you should make sure that you have a GPU powerful enough to train deep learning networks. In our paper, we used a single Nvidia GTX 1080 Ti to train our network. A hardware guide can be found [here](https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/). Once you have your hardware ready, try VAME following the [workflow guide](https://github.com/LINCellularNeuroscience/VAME/wiki/1.-VAME-Workflow).
@@ -26,9 +26,9 @@ First, you should make sure that you have a GPU powerful enough to train deep le
### Authors and Code Contributors
VAME was developed by Kevin Luxem and Pavol Bauer.

The development of VAME is heavily inspired by [DeepLabCut](https://github.com/AlexEMG/DeepLabCut/).
The development of VAME is heavily inspired by [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut/).
As such, the VAME project management codebase has been adapted from the DeepLabCut codebase.
The DeepLabCut 2.0 toolbox is © A. & M. Mathis Labs [www.deeplabcut.org](www.deeplabcut.org), released under LGPL v3.0.
The DeepLabCut 2.0 toolbox is © A. & M.W. Mathis Labs [deeplabcut.org](http://deeplabcut.org), released under LGPL v3.0.

### References
VAME preprint: [Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion](https://www.biorxiv.org/content/10.1101/2020.05.14.095430v1)
26 changes: 26 additions & 0 deletions VAME.yaml
@@ -0,0 +1,26 @@
# VAME.yaml
#
# install: conda env create -f VAME.yaml
# update: conda env update -f VAME.yaml
name: VAME
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.7
  - pip
  - torchvision
  - jupyter
  - nb_conda
  - pip:
      - pytest-shutil
      - scipy
      - numpy
      - matplotlib
      - pathlib
      - pandas
      - ruamel.yaml
      - sklearn
      - pyyaml
      - opencv-python-headless
      - h5py
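
Once the environment is created (`conda env create -f VAME.yaml`) and activated (`conda activate VAME`), a quick way to confirm the dependencies resolved is to import them — a minimal sanity-check sketch, assuming a clean install:

```python
# Minimal sanity check for the VAME.yaml environment
# (assumes "conda activate VAME" has been run).
import torch            # installed from the pytorch channel via torchvision
import numpy, scipy, pandas, sklearn
import cv2              # provided by opencv-python-headless
print("PyTorch:", torch.__version__)
```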
4 changes: 3 additions & 1 deletion vame/__init__.py
@@ -10,11 +10,13 @@
"""
import sys
sys.dont_write_bytecode = True

from vame.initialize_project import init_new_project
from vame.model import create_trainset
from vame.model import rnn_model
from vame.model import evaluate_model
from vame.analysis import behavior_segmentation
from vame.analysis import behavior_quantification
from vame.analysis import motif_videos
from vame.util.csv_to_npy import csv_to_numpy
from vame.util import auxiliary
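
Taken together, these imports define VAME's user-facing API. A hypothetical end-to-end session might look like the sketch below; apart from `behavior_quantification`, whose signature appears later in this commit, the argument names are assumptions rather than confirmed signatures — see the workflow guide on the wiki for the actual interface.

```python
# Hypothetical VAME session. Except for behavior_quantification (whose
# signature is shown in this commit), parameter names are assumptions.
import vame

config = vame.init_new_project(project='my_project',
                               videos=['video1.mp4'])       # assumed parameters
vame.create_trainset(config)
vame.rnn_model(config, model_name='vame_model')             # assumed parameters
vame.evaluate_model(config, model_name='vame_model')
vame.behavior_segmentation(config, model_name='vame_model')
vame.behavior_quantification(config, model_name='vame_model',
                             cluster_method='kmeans', n_cluster=30)
vame.motif_videos(config, model_name='vame_model')
```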
61 changes: 26 additions & 35 deletions vame/analysis/behavior_structure.py
@@ -25,7 +25,7 @@ def get_adjacency_matrix(labels, n_cluster):
    adjacency_matrix = np.zeros((n_cluster,n_cluster), dtype=np.float64)
    cntMat = np.zeros((n_cluster))
    steps = len(labels)

    for i in range(n_cluster):
        for k in range(steps-1):
            idx = labels[k]
@@ -37,17 +37,17 @@ def get_adjacency_matrix(labels, n_cluster):
                    cntMat[idx2] = cntMat[idx2] +1
        temp_matrix[i] = cntMat
        cntMat = np.zeros((n_cluster))

    for k in range(steps-1):
        idx = labels[k]
        idx2 = labels[k+1]
        if idx == idx2:
            continue
        adjacency_matrix[idx,idx2] = 1
        adjacency_matrix[idx2,idx] = 1

    transition_matrix = get_transition_matrix(temp_matrix)

    return adjacency_matrix, transition_matrix


@@ -59,63 +59,63 @@ def get_transition_matrix(adjacency_matrix, threshold = 0.0):
    transition_matrix=np.nan_to_num(transition_matrix)
    return transition_matrix


def consecutive(data, stepsize=1):
    data = data[:]
    return np.split(data, np.where(np.diff(data) != stepsize)[0]+1)


def get_network(path_to_file, file, cluster_method, n_cluster):
    if cluster_method == 'kmeans':
        labels = np.load(path_to_file + '/'+str(n_cluster)+'_km_label_'+file+'.npy')
    else:
        labels = np.load(path_to_file + '/'+str(n_cluster)+'_gmm_label_'+file+'.npy')

    adj_mat, transition_matrix = get_adjacency_matrix(labels, n_cluster=n_cluster)
    motif_usage = np.unique(labels, return_counts=True)
    cons = consecutive(motif_usage[0])
    if len(cons) != 1:
        used_motifs = list(motif_usage[0])
        usage_list = list(motif_usage[1])

        for i in range(n_cluster):
            if i not in used_motifs:
                used_motifs.insert(i, i)
                usage_list.insert(i,0)

#        for i in range(len(cons)):
#            index = cons[i][-1]+1
#            usage_list.insert(index,0)
#            if index != cons[i+1][-1]+1:
#                usage_list.insert(index+1,0)

        usage = np.array(usage_list)
        motif_usage = usage
    else:
        motif_usage = motif_usage[1]

    np.save(path_to_file+'/behavior_quantification/adjacency_matrix.npy', adj_mat)
    np.save(path_to_file+'/behavior_quantification/transition_matrix.npy', transition_matrix)
    np.save(path_to_file+'/behavior_quantification/motif_usage.npy', motif_usage)



def behavior_quantification(config, model_name, cluster_method='kmeans', n_cluster=30):
    config_file = Path(config).resolve()
    cfg = read_config(config_file)

    files = []
    if cfg['all_data'] == 'No':
        all_flag = input("Do you want to quantify your entire dataset? \n"
                         "If you only want to use a specific dataset type filename: \n"
                         "yes/no/filename ")
    else:
        all_flag = 'yes'

    if all_flag == 'yes' or all_flag == 'Yes':
        for file in cfg['video_sets']:
            files.append(file)

    elif all_flag == 'no' or all_flag == 'No':
        for file in cfg['video_sets']:
            use_file = input("Do you want to quantify " + file + "? yes/no: ")
@@ -125,22 +125,13 @@ def behavior_quantification(config, model_name, cluster_method='kmeans', n_clust
                continue
    else:
        files.append(all_flag)

    for file in files:
        path_to_file=cfg['project_path']+'results/'+file+'/'+model_name+'/'+cluster_method+'-'+str(n_cluster)

        if not os.path.exists(path_to_file+'/behavior_quantification/'):
            os.mkdir(path_to_file+'/behavior_quantification/')

        get_network(path_to_file, file, cluster_method, n_cluster)

    for file in files:
        path_to_file=os.path.join(cfg['project_path'],"results",file,"",model_name,"",cluster_method+'-'+str(n_cluster))

        if not os.path.exists(os.path.join(path_to_file,"behavior_quantification")):
            os.mkdir(os.path.join(path_to_file,"behavior_quantification"))

        get_network(path_to_file, file, cluster_method, n_cluster)
        print("data saved! You can proceed to running vame.motif_videos...")
