From c8da4554e30fb40a5753ce452f12abfce3021e95 Mon Sep 17 00:00:00 2001
From: jbwallace123 <41006280+jbwallace123@users.noreply.github.com>
Date: Mon, 22 Jan 2024 14:41:33 -0500
Subject: [PATCH] update docs
---
.github/workflows/docker_test.yml | 53 -------------------------------
docs/source/How To.rst | 31 ++++++++++--------
2 files changed, 17 insertions(+), 67 deletions(-)
delete mode 100644 .github/workflows/docker_test.yml
diff --git a/.github/workflows/docker_test.yml b/.github/workflows/docker_test.yml
deleted file mode 100644
index cd28285..0000000
--- a/.github/workflows/docker_test.yml
+++ /dev/null
@@ -1,53 +0,0 @@
-name: build_docker
-
-on:
- push:
- branches:
- - main
- - dev
- #pull_request:
- # branches:
- # - main
- # - dev
- workflow_call:
- secrets:
- DOCKERENV:
- required: true
- description: 'Access to AWS RDS server'
-
-jobs:
- build_docker:
- env:
- JHUB_VER: 1.4.2
- PY_VER: 3.9
- DIST: debian
- WORKFLOW_VERSION: 0.1.0
- REPO_OWNER: bernardosabatinilab
- REPO_NAME: sabatini-datajoint-pipeline
- CONTAINER_USER: anaconda
- DJ_HOST: sabatini-dj-prd01.cluster-cjvmzxer50q5.us-east-1.rds.amazonaws.com
- DJ_USER: jbw25
- DJ_PASS: ${{ secrets.DOCKERENV }}
- DATABASE_PREFIX: sabatini_dj_
- RAW_ROOT_DATA_DIR: /home/${CONTAINER_USER}/inbox
- PROCESSED_ROOT_DATA_DIR: /home/${CONTAINER_USER}/outbox
- runs-on: ubuntu-20.04
-
- steps:
- - name: Checkout code
- uses: actions/checkout@v2
-
- - name: Set up Docker Compose
- run: |
- sudo apt-get update
- sudo apt-get install -y docker-compose
-
- - name: Build and run Docker Compose
- run: |
- docker-compose -f ./docker/standard_worker/dist/debian/docker-compose-standard_worker.yaml -p sabatini-datajoint-pipeline_standard build
-
- - name: Clean up Docker Compose
- run: |
- docker-compose -f ./docker/standard_worker/dist/debian/docker-compose-standard_worker.yaml down
-
-
diff --git a/docs/source/How To.rst b/docs/source/How To.rst
index f1372e9..5221818 100644
--- a/docs/source/How To.rst
+++ b/docs/source/How To.rst
@@ -8,7 +8,7 @@ If you are new to DataJoint, we recommend getting started by learning about the
More information can be found in the `DataJoint documentation `_.
We can run the workflow using the provided docker containers (for more information :doc:`WorkerDeployment`). Or, we can
-run locally using the `provided jupyter notebooks `_.
+run locally using the `provided jupyter notebooks `_.
These notebooks provide a good starting point and can be modified to fit your needs, just remember to check that your kernel is set
to the ``sabatini-datajoint`` kernel.
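+
+If you run the notebooks locally, the cells expect a working DataJoint connection. A minimal sketch of the settings is shown
+below; the host, user, and password values are placeholders (not real credentials), and the ``sabatini_dj_`` prefix follows
+the convention used by the workers:
+
+.. code-block:: python
+
+    import datajoint as dj
+
+    # placeholder credentials -- fill in the values for your own database
+    dj.config["database.host"] = "your-database-host"
+    dj.config["database.user"] = "your-username"
+    dj.config["database.password"] = "your-password"
+    dj.config["custom"] = {"database.prefix": "sabatini_dj_"}
+
+    dj.conn()  # verify the connection before running the notebook cells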
@@ -336,19 +336,25 @@ You can also run the pipeline manually by running the following:
Ephys pipeline
##############
The ephys pipeline is designed to process neuropixel data acquired with SpikeGLX. It will run through Kilosort2.5 and use
-`ecephys `_ for post-processing.
-The ``/Outbox`` directory will be automatically populated with the processed data.
+`ecephys `_ for post-processing. Currently, we provide two workflows for processing the data:
+a docker container or a manual pipeline run through the provided jupyter notebook.
Input data
----------
You will need all of the output files from SpikeGLX: ``.ap.bin``, ``.lf.bin``, ``.ap.meta``, and ``.lf.meta``. You can also use data that you have pre-processed through CatGT.
-Running the ephys pipeline
---------------------------
+Running the ephys pipeline through the docker container
+-------------------------------------------------------
Once you have inserted the ``Subject``, ``Session``, and ``SessionDirectory`` tables and you have the appropriate files in place,
you can then proceed with running the ephys pipeline by simply upping the spike_sorting_local_worker docker container detailed in :doc:`WorkerDeployment`.
+It will automatically detect new data, process it, and populate the ``EphysRecording``, ``CuratedClustering``, ``WaveformSet``, and ``LFP`` tables.
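+
+A minimal sketch of those upstream inserts is shown below; the import path and key values are illustrative, so adjust them to
+the schemas defined in this pipeline:
+
+.. code-block:: python
+
+    # hypothetical import path -- use the modules provided by this pipeline
+    from workflow.pipeline import subject, session
+
+    subject.Subject.insert1(
+        dict(subject="subject01", sex="M", subject_birth_date="2023-01-01"),
+        skip_duplicates=True,
+    )
+    session.Session.insert1(
+        dict(subject="subject01", session_datetime="2024-01-22 12:00:00"),
+        skip_duplicates=True,
+    )
+    session.SessionDirectory.insert1(
+        dict(
+            subject="subject01",
+            session_datetime="2024-01-22 12:00:00",
+            session_dir="subject01/session01",
+        ),
+        skip_duplicates=True,
+    )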
+
+Running the ephys pipeline manually
+-----------------------------------
+We have provided an ephys jupyter notebook that will guide you through the pipeline. Importantly, you will have to configure your spike sorter
+of choice and the paths to your data in the notebook.
-Using the docker container is the recommended way to run the pipeline. If you must run the pipeline manually, please contact the database manager.
+`Ephys jupyter notebook `_.
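+
+If you prefer to run the populate steps yourself, they follow the usual DataJoint pattern. The module name and ordering below
+are a sketch (the notebook sets up the clustering tasks and parameter sets first); the table names match those listed above:
+
+.. code-block:: python
+
+    # hypothetical import path -- adjust to this pipeline's layout
+    from workflow.pipeline import ephys
+
+    # each populate() processes sessions that have not yet been filled in
+    ephys.EphysRecording.populate(display_progress=True)
+    ephys.CuratedClustering.populate(display_progress=True)
+    ephys.WaveformSet.populate(display_progress=True)
+    ephys.LFP.populate(display_progress=True)
+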
Table organization
------------------
@@ -380,25 +386,22 @@ The calcium imaging processing pipeline will populate the ``imaging`` table.
DeepLabCut pipeline
###################
-The DeepLabCut pipeline is designed to process videos through DeepLabCut. It will automatically populate the ``/Outbox`` directory with the processed data.
-
-**Important Note**: This pipeline assumes that you have already created a DeepLabCut project and have a trained network. If you have not done this, please
-refer to the `DeepLabCut documentation `_.
+The DeepLabCut pipeline is designed to process and annotate videos through DeepLabCut. We have updated the workflow so that you can run DeepLabCut from
+beginning to end through the provided jupyter notebook.
Input data
----------
-You will need a pretrained network organized in the following format: ``/Inbox/dlc_projects/PROJECT_PATH``. You will also need to have the videos you would like to process
+Once you have created your DeepLabCut ``project_folder``, place it in ``/Inbox/dlc_projects/PROJECT_PATH``. You will also need to have the videos you would like to process
organized in the following format: ``/Inbox/Subject/dlc_behavior_videos/*.avi``.
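+
+A quick way to sanity-check this layout before opening the notebook (the ``/Inbox`` root below is literal; substitute the
+inbox path configured for your deployment):
+
+.. code-block:: python
+
+    from pathlib import Path
+
+    inbox = Path("/Inbox")  # substitute your configured inbox root
+    project = inbox / "dlc_projects" / "PROJECT_PATH"    # your DeepLabCut project folder
+    videos = inbox / "Subject" / "dlc_behavior_videos"   # videos for one subject
+
+    print("project folder found:", project.is_dir())
+    print("videos found:", sorted(videos.glob("*.avi")))
+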
Running the DeepLabCut pipeline
-------------------------------
-This is a manual pipeline. You will need to run the provided `DeepLabCut jupyter notebook `_.
+This is a manual pipeline. You will need to run the provided `DeepLabCut jupyter notebook `_.
You will need to edit all of the relevant information and paths in the notebook.
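+
+The notebook walks through training and pose estimation; the underlying populate calls follow the usual DataJoint pattern.
+The module names below mirror the tables listed in the next section, but the import path is illustrative:
+
+.. code-block:: python
+
+    # hypothetical import path -- use the modules provided by this pipeline
+    from workflow.pipeline import train, model
+
+    train.ModelTraining.populate(display_progress=True)   # train (or refine) the network
+    model.PoseEstimation.populate(display_progress=True)  # annotate the behavior videos
+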
Table organization
------------------
-The DeepLabCut processing pipeline will populate the ``model`` table.
-
+The DeepLabCut processing pipeline will populate the ``model`` and ``train`` tables.