This is a TensorFlow implementation of the face recognizer described in the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering".
The code is tested with tensorflow 1.7 (CPU mode) or tensorflow-gpu 1.7 (GPU mode) under Windows 10 with Python 3.6.13 (Anaconda environment).
You can create the Anaconda environment for this project with the following commands:
$ conda create -n facenet python=3.6 && conda activate facenet
$ cd facenet
$ pip install -r requirements.txt
$ pip uninstall -y tensorflow
$ pip install tensorflow-gpu==1.7.0
$ pip install xlrd==1.2.0
Some packages require a specific version for the program to work properly.
Note that the training and prediction environments need to be consistent to predict face identity successfully, so don't update or delete packages arbitrarily!
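Because training and prediction must run against identical package versions, it helps to verify the environment before predicting. Below is a minimal sketch using the standard library; the pinned versions shown are illustrative only, not the project's full requirements (on Python 3.6 you would use `pkg_resources` instead of `importlib.metadata`):

```python
from importlib.metadata import version, PackageNotFoundError

# Illustrative pins -- requirements.txt is the authoritative list.
PINNED = {"xlrd": "1.2.0"}

def find_mismatches(pinned):
    """Return {package: (expected, found)} for packages that are missing
    or whose installed version differs from the pinned one."""
    mismatches = {}
    for pkg, expected in pinned.items():
        try:
            found = version(pkg)
        except PackageNotFoundError:
            found = None
        if found != expected:
            mismatches[pkg] = (expected, found)
    return mismatches

if __name__ == "__main__":
    for pkg, (expected, found) in find_mismatches(PINNED).items():
        print(f"{pkg}: expected {expected}, found {found}")
```

Running this in both the training and the prediction environment and getting no output is a quick sanity check that the two are consistent.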
The GPU environment configuration is Visual Studio 2017 + Python 3.6.12 + tensorflow-gpu 1.7.0 + CUDA 9.0 + cuDNN 7.0.5 + the facenet site-packages.
Date | Update |
---|---|
2021-04-16 | Completed deployment on an Azure GPU VM configured with 24 vCPUs (Xeon(R) CPU), 448 GB memory, and 4 NVIDIA Tesla V100 cards. |
2021-04-16 | Added a similar-face search function to the face recognition model, with corresponding APIs and HTML page. |
2021-04-13 | Added batch prediction mode to the server, with corresponding APIs and HTML page. |
2021-03-23 | Added single prediction mode to the server, with corresponding APIs and HTML page. |
2021-03-20 | Completed training of the face recognition classifier model on a dedicated face dataset. |
Click here to see API specification.
Model name | LFW accuracy | Training dataset | Architecture |
---|---|---|---|
20180402-114759 | 0.9965 | VGGFace2 | Inception ResNet v1 |
NOTE: If you use any of the models, please do not forget to give proper credit to those providing the training dataset as well.
Training a classifier model on your own face dataset.
The code of main server is heavily inspired by the facenet implementation and facenet-realtime-face-recognition implementation.
The code of the video server is heavily inspired by the facial-recognition-video-facenet implementation (video recognition), the video_streaming_with_flask_example implementation (online video streaming), and the flask-opencv-streaming implementation (online video streaming).
Facenet's standard operating procedure, plus some automation steps, must be carried out to produce the classifier model that the back-end server uses for prediction.
Install FFmpeg
$ python src/crop.py C:/Users/PC/Desktop/kol_video C:/Users/PC/Desktop/kol_crop
Copy all files (det1.npy, det2.npy, det3.npy) under src/align/ in facenet to the folder src/align/, except align_dataset_mtcnn.py and detect_face.py.
$ python src/align/align_dataset_mtcnn.py datasets/kol_crop datasets/kol_160 --image_size 160 --margin 32
$ python src/align/align_dataset_mtcnn.py datasets/kol_crop datasets/kol_160 --image_size 160 --margin 32 --gpu_memory_fraction 0.5 # If there is not enough memory in the GPU
Manually clean the dataset kol_160: in each subfolder, delete pictures that do not belong to that KOL, pictures with facial occlusion (hand occlusion, object occlusion, text/GIF occlusion, etc.), and non-face images.
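After manual cleaning, it is worth confirming that every identity folder still has enough images to train a usable classifier. A short stdlib-only sketch that counts images per subfolder; the minimum count and the extension set are assumptions, not values from the project:

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}  # assumed extensions

def count_images_per_identity(dataset_dir):
    """Return {identity_folder_name: image_count} for a dataset laid out
    as dataset_dir/<identity>/<image files>."""
    counts = {}
    for sub in sorted(Path(dataset_dir).iterdir()):
        if sub.is_dir():
            counts[sub.name] = sum(
                1 for f in sub.iterdir() if f.suffix.lower() in IMAGE_EXTS
            )
    return counts

def underpopulated(counts, minimum=5):
    """Identities with fewer than `minimum` images (threshold is a guess)."""
    return [name for name, n in counts.items() if n < minimum]
```

For example, running `count_images_per_identity("datasets/kol_160")` before the TRAIN step below flags identities that were cleaned down to too few samples.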
$ python src/classifier.py TRAIN datasets/kol_160 models/20180402-114759/20180402-114759.pb models/kol.pkl
$ python src/classifier.py CLASSIFY datasets/kol_160 models/20180402-114759/20180402-114759.pb models/kol.pkl
$ python src/predict.py datasets/kol_160/01/01_0001.jpg models/20180402-114759 models/kol.pkl
$ python src/predict.py datasets/kol_160/01/01_0001.jpg models/20180402-114759 models/kol.pkl --gpu_memory_fraction 0.5 # If there is not enough memory in the GPU
$ python src/classifier.py TRAIN datasets/training_data_aligned models/20180402-114759/20180402-114759.pb models/newglint_classifier.pkl
$ python src/video_recognize.py
Simply execute the automated processing in the following order via the web interface.
1. {Main Server} /trainModel
2. Select:
- upload image zip
- upload video zip
3. {Process Server} /
4. {Process Server} /processingProgress
5. {Process Server} /align
6. {Process Server} /clean
7. {Process Server} /import_clean
8. {Process Server} /import_result
9. {Process Server} /train_model
Notice:
- In {Process Server} /clean, the threshold in `if dist < 1.06:` is the distance between two face embeddings; when the distance is smaller than 1.06, the faces can be regarded as the same person.
- If there is not enough GPU memory, you can run on the CPU instead by installing `pip install tensorflow==1.7.0`.
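The comparison behind the /clean step can be sketched in a few lines: compute the Euclidean (L2) distance between two embedding vectors and compare it against the 1.06 threshold. The helper names below are illustrative, and the 3-d vectors stand in for real FaceNet embeddings:

```python
import math

SAME_PERSON_THRESHOLD = 1.06  # threshold used in the /clean step

def euclidean_distance(emb_a, emb_b):
    """L2 distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def is_same_person(emb_a, emb_b, threshold=SAME_PERSON_THRESHOLD):
    return euclidean_distance(emb_a, emb_b) < threshold

# Toy 3-d embeddings (real FaceNet embeddings are higher-dimensional).
close = is_same_person([0.1, 0.2, 0.3], [0.15, 0.18, 0.33])
far = is_same_person([0.1, 0.2, 0.3], [0.9, -0.7, 0.8])
print(close, far)  # -> True False
```

Lowering the threshold makes the cleaning stricter (fewer images kept per identity); raising it is more permissive.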
Please edit config.ini, which is located in the root directory, to change the values of shared variables.
Click here to see how to edit the config.ini file.
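Shared values such as the server port can be read with Python's standard configparser module. A minimal sketch; the section and key names here are assumptions, so match them to your actual config.ini:

```python
import configparser

# Inline sample standing in for the real config.ini file.
SAMPLE = """
[main]
server.port = 5000
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)  # in the project, use config.read("config.ini")
port = config.getint("main", "server.port")
print(port)  # -> 5000
```

Reading the port this way keeps server.py and the HTML pages pointed at the same address when the value in config.ini changes.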
Firstly, quickly start the server from the command:
$ conda activate facenet
$ python server.py
Secondly, open a web browser at http://127.0.0.1:{main.server.port} (details in config.ini).
Copy all files (det1.npy, det2.npy, det3.npy) under src/align/ in facenet to the folder other-server/process/align, except align_dataset_mtcnn.py and detect_face.py.
$ cd other-server/process
$ python server.py
Copy all files (det1.npy, det2.npy, det3.npy) under src/align/ in facenet to the folder other-server/video/align, except align_dataset_mtcnn.py and detect_face.py.
$ cd other-server/video
$ python server.py
# you can compare app.py and server.py to observe the server performance
Notice:
- The conda environment used to run server.py must be the one in which the .pkl model was trained.
- Before running the server, copy the datasets folder and the models folder (which contains 20180402-114759) to the root directory.
- To display images properly on the HTML page, each image path must be relative to the root directory of its project (main or sub).
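Converting an absolute image path into the project-relative form the HTML pages expect can be done with os.path.relpath; a small sketch (the paths are illustrative, and the sample output assumes a POSIX host):

```python
import os

def to_relative(image_path, project_root):
    """Return image_path relative to project_root, with forward slashes
    so the result is safe to embed in an HTML src attribute."""
    rel = os.path.relpath(image_path, start=project_root)
    return rel.replace(os.sep, "/")

print(to_relative("/srv/facenet/datasets/kol_160/01/01_0001.jpg", "/srv/facenet"))
# -> datasets/kol_160/01/01_0001.jpg
```

Each server (main or sub) would pass its own root directory as project_root, which is why the note above stresses per-project relative paths.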
Contributions are always welcome! Feel free to dive in!
Please read the contribution guideline first, then open an issue or submit a PR.
This repository follows the Contributor Covenant Code of Conduct.
This project exists thanks to all the people who contribute.
MIT © Elaine Zhong