This fork has been heavily modified and runs with Python 3.6 on the Nvidia Jetson series (JetPack 4.3 / L4T 32.3.x, CUDA compute capabilities 5.3, 6.2 and 7.2).

Home Surveillance with Facial Recognition

Smart security is the future, and with the help of the open-source community and the technology available today, an affordable intelligent video analytics system is within reach. This application is a low-cost, adaptive and extensible surveillance system focused on identifying potential home intruders and alerting the user. It can integrate into an existing alarm system and provides customizable alerts. It can process several IP cameras and can distinguish between someone who is in the face database and someone who is not (a potential intruder).

System overview

Installation and Usage

Docker

The Docker image is currently broken; a manual build should work (see the "Build instructions" section below). The image is based on nvcr.io/nvidia/l4t-base:r32.3.1 and was tested on a Jetson Nano, but should also run on the TX1, TX2, Xavier AGX and Xavier NX.


  1. Clone the repo
git clone https://github.com/domcross/home_surveillance.git
  2. Pull the Docker image
docker pull domcross/home_surveillance_jetson
  3. Run the Docker image, making sure you mount your home directory (on Ubuntu) as a volume so you can access your local files
docker run -v /home/:/host -p 5000:5000 -t -i domcross/home_surveillance_jetson /bin/bash
  • Navigate to the home_surveillance project inside the volume within your Docker container
  • Move into the system directory
cd home_surveillance/system
  4. Run WebApp.py
python WebApp.py
  • Visit localhost:5000 (a quick reachability check is sketched after this list)
  • Log in with username admin and password admin
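
Once the container is up and WebApp.py is running, a quick check from the host confirms the dashboard is reachable. This is a minimal sketch: it only assumes the root page responds on port 5000 and uses the third-party requests library; no other routes of WebApp.py are assumed.

    import requests

    # Quick reachability check for the surveillance dashboard.
    # Assumes only that WebApp.py is serving on port 5000; no other routes are assumed.
    resp = requests.get("http://localhost:5000", timeout=5)
    print("Dashboard responded with HTTP", resp.status_code)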

Notes and Features

Camera Settings

  • To add your own IP camera, simply enter the camera's URL in the field on the camera panel and choose one of the five processing settings and your preferred face detection method.
  • The camera configuration is automatically saved to config.json; a hypothetical inspection sketch follows this list.
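
The schema of config.json is not documented here, so the following is only a hypothetical sketch of how the saved camera entries could be inspected with Python; the key names ("cameras", "url", "detection") are placeholders, not the actual fields written by the application.

    import json

    # Hypothetical sketch: list the saved camera entries.
    # The key names ("cameras", "url", "detection") are assumptions about the
    # schema, not the actual fields written by the application.
    with open("config.json") as f:
        config = json.load(f)

    for cam in config.get("cameras", []):
        print(cam.get("url"), cam.get("detection"))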

Customizable Alerts

  • The Dashboard allows you to configure your alerts. Edit SurveillanceSystem.py to set the Apprise service and the Mycroft host. Email and RPi alarm-trigger alerts are deprecated and will be removed in a future version.
  • The alerts panel allows you to set up events, such as the recognition of a particular person or motion detection, so that you receive an alert when the event occurs. The confidence slider sets the accuracy that you would like to use for recognition events. By default, you'll receive a notification if a person is recognised with a confidence greater than 50% (a minimal notification sketch follows this list).
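
The sketch below shows, in isolation, how a notification for such a recognition event could be dispatched with the Apprise library; the service URL and the person/confidence values are placeholders rather than the actual configuration in SurveillanceSystem.py.

    import apprise

    # Minimal sketch of an Apprise notification for a recognition event.
    # The service URL and the person/confidence values are placeholders,
    # not the configuration actually used in SurveillanceSystem.py.
    notifier = apprise.Apprise()
    notifier.add("mailto://user:password@example.com")  # any Apprise-supported service URL

    person, confidence = "John", 0.72
    if confidence > 0.5:  # default alert threshold of 50%
        notifier.notify(
            title="Home Surveillance Alert",
            body=f"{person} recognised with {confidence:.0%} confidence",
        )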

Face Recognition and the Face Database

  • Faces that are detected are shown in the faces detected panel on the Dashboard.
  • There is currently no formal database setup, and the faces are stored in the aligned-images & training-images directories.
  • To add faces to the database, add a folder of images named after the person to the training-images directory and retrain the classifier by selecting "retrain database" on the client dashboard (see the sketch after this list). Images can also be added through the dashboard, but currently only one at a time.
  • For accurate face recognition, twenty or more face images per person should be used. Furthermore, images taken in the surveillance environment (i.e. using the IP cameras to capture face images, which can be done with the face_capture option in the SurveillanceSystem script and creating your own face directory) produce better results than images taken elsewhere.
  • A person is classified as unknown if they are recognised with a confidence lower than 20% or are predicted as unknown by the classifier.
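
The folder-per-person layout described above can also be prepared outside the dashboard. The following is a small sketch using only the standard library; the source directory "~/photos/jane" and the person name are placeholders.

    import shutil
    from pathlib import Path

    # Sketch: add a new person to the face database by copying their images
    # into training-images/<person_name>/, then retrain from the dashboard.
    # "~/photos/jane" and the person name are placeholders.
    person = "jane"
    source = Path("~/photos/jane").expanduser()
    target = Path("training-images") / person
    target.mkdir(parents=True, exist_ok=True)

    for image in source.glob("*.jpg"):
        shutil.copy(image, target / image.name)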

Security

  • Unfortunately, the only security implemented so far is basic session management and hard-coded authentication, with each user presented with a login page. Data and password encryption is a feature for future development.

Some Issues and Limitations

  • Occasionally Flask disconnects, which breaks the video streaming. Fixing this may involve using another web framework; however, it could also be addressed by a camera reload function, which will be added shortly.
  • Currently, the tracking algorithm is highly dependent upon the accuracy of the background model generated by the MotionDetector object. The tracking algorithm is based on a background subtraction approach, and if the camera is placed in an outdoor environment with moving trees, large changes in lighting, etc., it may not work reliably.
  • Both Dlib's and OpenCV's face detection methods produce false positives now and again. The system mitigates these false detections by using more rigorous parameters and by using background subtraction to ignore any detections that occur outside the region of interest.
  • The more people and face images you have in the database, the longer it takes to train the classifier - it may take up to several minutes. This is where the GPU of the Jetson Nano helps to improve training speed. To speed things up, you may want to reduce image size before uploading; a maximum width of 1200 pixels (for landscape) or height of 900 pixels (for portrait) is recommended (see the sketch after this list).
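
One practical way to stay within those dimensions is to shrink the training images before uploading them. The sketch below uses Pillow, which is an assumption rather than a stated project dependency; Image.thumbnail resizes in place while preserving the aspect ratio.

    from pathlib import Path

    from PIL import Image  # Pillow is assumed to be installed; it is not a stated dependency

    # Sketch: shrink training images so they fit within 1200x900 before
    # uploading, which shortens classifier training time.
    for path in Path("training-images").rglob("*.jpg"):
        with Image.open(path) as img:
            img.thumbnail((1200, 900))  # preserves aspect ratio, only shrinks
            img.save(path)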

Ideas for Future Development

  • Database Implementation

  • Improved Security

  • Open set recognition for accurate identification of unknown people

  • Behaviour recognition using neural networks

  • Optimising motion detection and tracking algorithms

  • Integration with third party services such as Facebook to recognise your friends

  • The addition of home automation control features

and many more...

Build instructions

You need to install a number of dependencies (in most cases this means downloading and building them manually); the Dockerfile does not build successfully, but it gives you an idea of what is necessary...

Beware - building and installing all dependencies takes 12+ hours on a Jetson Nano!

License


Copyright 2020, domcross (GitHub) and Copyright 2016, Brandon Joffe. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
