This repository contains the setup and admin scripts for managing the Video Face Recognition (VFR) system. The VFR system was created as an independent study for my Master of Science program at the University of Florida. The target is an NVIDIA Jetson TX2 running Ubuntu 16.04.
This repository is now archived; no pull requests will be considered.
- Core - The daemon that runs the facial recognition and training
- Dashboard - The web interface where users can check status and manage the VFR system
- Persistence - A simple repository containing the database schema and initialization scripts
- `./install-arm64-deps.sh` (don't use `sudo`, as it installs stuff in the user folder; it will prompt for a `sudo` password) - This script installs the Docker CE and Docker Compose dependencies necessary for running the VFR system. Since it targets the TX2 board, it installs Docker CE for the `arm64` architecture.
- Add the following lines to your `~/.bashrc` file. Change as you see fit:

  ```sh
  export DLIB_INCLUDE_DIR=/usr/local/include
  export DLIB_LIB_DIR=/usr/local/lib
  export CUDA_LIB_DIR=/usr/local/cuda/lib64
  export CUDNN_LIB_DIR=/usr/lib/aarch64-linux-gnu
  export OPENBLAS_LIB_DIR=/usr/local/lib
  export OPENCV4NODEJS_DISABLE_AUTOBUILD=1
  export OPENCV_LIB_DIR=/usr/lib
  export OPENCV_INCLUDE_DIR=/usr/include
  ```
- Run `exec bash` to make sure you load the environment variables we added in the prior step. `./update.sh` will then `git pull`, `yarn`, and `yarn build`.
- `sudo ./initialize-and-start.sh` - This script runs `docker-compose up -d`, which will create the containers (see the sanity-check sketch after this list).
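A minimal sanity check after finishing the steps above, assuming Docker CE, Docker Compose, and the containers were set up as described, is to run the following from this directory:

```sh
# Confirm Docker CE and Docker Compose are installed and on the PATH
docker --version
docker-compose --version

# Confirm the ~/.bashrc exports are loaded in the current shell
echo "$DLIB_INCLUDE_DIR" "$CUDA_LIB_DIR"

# Confirm the VFR containers were created and are running
sudo docker-compose ps
```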
I had issues with NVIDIA's SDK Manager installing the appropriate libraries. It downloaded them into my `~/Downloads/sdkm_downloads` folder on the host machine; I then used `scp` to transfer them manually to the TX2 and installed the ones I wanted by navigating to the appropriate directory and running the commands below:
```sh
sudo dpkg -i cuda-repo-l4t-10-0-local-10.0.166_1.0-1_arm64.deb
sudo dpkg -i /var/cuda-repo-10-0-local-10.0.166/*.deb
sudo dpkg -i libcudnn7_7.3.1.28-1+cuda10.0_arm64.deb
sudo dpkg -i libcudnn7-dev_7.3.1.28-1+cuda10.0_arm64.deb
sudo dpkg -i libopencv_3.3.1-2-g31ccdfe11_arm64.deb
sudo dpkg -i libopencv-dev_3.3.1-2-g31ccdfe11_arm64.deb
```
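A quick way to confirm the packages registered with `dpkg` (the grep pattern is just a filter sketch based on the package names above):

```sh
# List installed CUDA, cuDNN, and OpenCV packages
dpkg -l | grep -E 'cuda|cudnn|opencv'
```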
The `install-arm64-deps.sh` script does not add the user to the `docker` user group for security reasons. Most of these scripts will need to be executed with `sudo` because of this. You can read about how non-root users can manage Docker here.
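If you're comfortable with the security trade-off and would rather not prefix every command with `sudo`, the standard Docker approach (not performed by `install-arm64-deps.sh`) is:

```sh
# Add the current user to the docker group; log out and back in
# (or run `newgrp docker`) for the change to take effect
sudo usermod -aG docker "$USER"
```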
The `initialize-and-start.sh` and `destroy.sh` scripts will run the `docker-compose up -d` and `docker-compose down` commands respectively. These will create and destroy containers. While the services write any persistent data to disk in the `.data` folder and should seem unaffected by a `down`/`destroy`, keep in mind that these commands will destroy the Docker containers. You may want to use the `stop`/`start`/`restart` functionality for your specific use case (though at the time of writing, it shouldn't really matter).
The `start.sh`, `stop.sh`, and `restart.sh` scripts in this directory are just wrappers around `docker-compose start|stop|restart`.
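In other words, the two commands below should be equivalent, assuming the wrapper scripts add nothing beyond the `docker-compose` call:

```sh
sudo ./restart.sh
# ...is effectively the same as:
sudo docker-compose restart
```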
Adminer, a SQL admin webapp, is available by navigating to http://localhost:9002. The default credentials are as follows:
- System: `PostgreSQL`
- Server: `vfr-persistence:5432`
- Username: `postgres`
- Password: `postgres`
- Database: `vfr`
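If you prefer a command line over Adminer, the same credentials should work with `psql` inside the database container. The service name `vfr-persistence` below is assumed from the Adminer server value above; confirm it with `sudo docker-compose ps` if it differs:

```sh
# Open an interactive psql session against the vfr database (run from this directory)
sudo docker-compose exec vfr-persistence psql -U postgres -d vfr
```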
There is also a Docker volume for the database contents which will survive a `./destroy.sh`. You can see where the volume is located using `docker inspect pg_data`.
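For example, to print just the host path backing the volume (assuming it is indeed named `pg_data`):

```sh
# Print the mount point of the pg_data volume on the host
sudo docker volume inspect pg_data --format '{{ .Mountpoint }}'
```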
If you need to export/import persistent data for any reason (regular backups, hardware issues/upgrades, etc.), there are two helper scripts for that as well.
In both cases, you should stop the VFR system.
Before exporting you should stop the VFR system.
Running `./export.sh` will create a compressed tarball of the `.data` directory following the `vfr-export-YYYYmmdd-HHMMss.tar.gz` pattern. You can then transfer this file wherever you'd like to store it.
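For example, to copy the export off the TX2 over SSH (the destination host and path are placeholders, choose your own):

```sh
# Copy the export tarball to a backup machine
scp vfr-export-*.tar.gz user@backup-host:/path/to/backups/
```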
Unfortunately, I wasn't able to get the database files to be easily exported. The `export.sh` script will only export the training images and facial recognition model. To export the database, you can access Adminer and click the Export link on the left sidebar.
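If you'd rather script the database export than click through Adminer, a `pg_dump` through Docker Compose is one possible sketch; the `vfr-persistence` service name and `postgres`/`vfr` values are assumed from the Adminer defaults above:

```sh
# Dump the vfr database to a SQL file on the host (run from this directory)
sudo docker-compose exec -T vfr-persistence pg_dump -U postgres vfr > vfr-db-backup.sql
```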
Before importing you should stop the VFR system.
Copy the tarball that you had previously exported to this directory. Run `sudo ./import.sh <vfr-export-YYYYmmdd-HHMMss.tar.gz>`.
For safekeeping, `import.sh` will conduct an export before importing to prevent data loss, since importing will completely overwrite the `.data` directory. You can disable the preliminary export (though this will delete your `.data` directory upon import!) by adding the `--no-backup` argument: `sudo ./import.sh --no-backup <vfr-export-YYYYmmdd-HHMMss.tar.gz>`
Unfortunately, I wasn't able to get the database files to be easily imported. The `import.sh` script will only import the training images and facial recognition model. To import the database, you can access Adminer and click the Import link on the left sidebar.
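As with exporting, a scripted alternative to Adminer for restoring a SQL dump could look like the following, again assuming the `vfr-persistence` service name and default credentials:

```sh
# Restore a previously dumped SQL file into the vfr database (run from this directory)
sudo docker-compose exec -T vfr-persistence psql -U postgres -d vfr < vfr-db-backup.sql
```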
You can clean the persistent data by running `sudo rm -r .data` from this directory.
To develop on the VFR system, I suggest the following steps:
- `git clone https://github.com/SeesePlusPlus/video-face-recognition.git vfr`
- `cd vfr`
- `yarn` - This downloads any missing dependencies
- `yarn build` - This builds the TypeScript sources of VFR
Of course, change the Git URLs appropriately if you have forked the repositories, though keep the folder structure as the `docker-compose.yml` expects it for the build steps.
If the repo was updated externally (i.e. not on your target device), `./update.sh` will run `git pull`, `yarn clean:build`, `yarn`, and `yarn build` for you.
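If you'd rather run those steps by hand (for example, to watch each stage for errors), the sequence described above is simply:

```sh
git pull
yarn clean:build
yarn
yarn build
```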
There are two commands in the base of the monorepo for cleaning your development environment:
- `yarn clean:build` only cleans the build artifacts (i.e. `packages/*/dist/*.js`); it will not clean your dependencies, since installing those can take a while on the target hardware
- `yarn clean:all` (:warning: this will not prompt you if you'd like to clean stuff) will clean all your dependencies and build artifacts; you will need to run `yarn` before running `yarn build` to download/build the NodeJS dependencies
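For example, a full clean rebuild after `yarn clean:all` looks like:

```sh
# clean:all removes dependencies too, so reinstall them before building
yarn clean:all
yarn
yarn build
```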