trainer.py
was used to train our model initially
inference.py
was used to test the model
test_gesture.py
is used to gather landmark coordinates for sequences of images using the MediaPipe Hand Landmarker (see https://developers.google.com/mediapipe/solutions/vision/hand_landmarker for more details)
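
A minimal sketch of pulling landmarks from one frame with the MediaPipe Hand Landmarker task API (the hand_landmarker.task model file and the image path are placeholders):

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the Hand Landmarker task model (hand_landmarker.task is downloaded
# separately; see the MediaPipe docs linked above).
base_options = python.BaseOptions(model_asset_path='hand_landmarker.task')
options = vision.HandLandmarkerOptions(base_options=base_options, num_hands=1)
detector = vision.HandLandmarker.create_from_options(options)

# Detect landmarks in a single frame of an image sequence.
image = mp.Image.create_from_file('frame_0001.jpg')
result = detector.detect(image)

# Each detected hand yields 21 normalized (x, y, z) landmarks.
if result.hand_landmarks:
    coords = [(lm.x, lm.y, lm.z) for lm in result.hand_landmarks[0]]
    print(coords)
```
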
prepare_data.py
is used to convert the Jester dataset image sequences into the landmark representation returned by test_gesture.py
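
A sketch of what that conversion might look like, assuming each Jester sequence is a folder of JPEG frames and reusing the detector built in the snippet above (the zero-padding for frames with no detected hand is an assumption):

```python
import os

import numpy as np
import mediapipe as mp

def sequence_to_landmarks(seq_dir, detector):
    """Convert one Jester image sequence into a (num_frames, 63) array
    of flattened hand landmarks (21 landmarks x 3 coordinates)."""
    frames = []
    for name in sorted(os.listdir(seq_dir)):
        image = mp.Image.create_from_file(os.path.join(seq_dir, name))
        result = detector.detect(image)
        if result.hand_landmarks:
            hand = result.hand_landmarks[0]
            frames.append([c for lm in hand for c in (lm.x, lm.y, lm.z)])
        else:
            frames.append([0.0] * 63)  # no hand detected: pad with zeros
    return np.array(frames, dtype=np.float32)
```
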
model.py
is used to train an LSTM model on the prepared data
- the model is saved to a checkpoint every 5 epochs
- the model history is saved to trainHistoryDict
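
A minimal Keras sketch of this setup; the sequence length, feature size, class count, data file names, and checkpoint path are all assumptions, and the every-5-epochs checkpointing is done with a small custom callback:

```python
import os
import pickle

import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_CLASSES = 30, 63, 27  # assumed shapes

# Hypothetical arrays produced by prepare_data.py.
x_train = np.load('x_train.npy')
y_train = np.load('y_train.npy')

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

class CheckpointEveryN(tf.keras.callbacks.Callback):
    """Save the model to a checkpoint every n epochs."""
    def __init__(self, n=5):
        super().__init__()
        self.n = n

    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % self.n == 0:
            os.makedirs('checkpoints', exist_ok=True)
            self.model.save(f'checkpoints/epoch_{epoch + 1:03d}.h5')

history = model.fit(x_train, y_train, epochs=50,
                    callbacks=[CheckpointEveryN(5)])

# Persist the training history for Evaluate_models.ipynb.
with open('trainHistoryDict', 'wb') as f:
    pickle.dump(history.history, f)
```
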
Evaluate_models.ipynb
is used to load the model history and visualize/benchmark the performance
- bella_model.h5 is saved after training is complete
- pruned_and_quantized.tflite is then created with a signature specified to enable use in the TFLite runtime
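
One way to produce such a file, sketched with post-training quantization and an explicit named signature (the input shape and the 'landmarks'/'probabilities' names are assumptions, not the notebook's exact code; the pruning step itself would use the tensorflow_model_optimization toolkit and is omitted here):

```python
import tensorflow as tf

SEQ_LEN, N_FEATURES = 30, 63  # assumed input shape
model = tf.keras.models.load_model('bella_model.h5')

# Wrap inference in a tf.function with a fixed input signature so the
# converted model exposes a named signature to the TFLite runtime.
@tf.function(input_signature=[
    tf.TensorSpec([1, SEQ_LEN, N_FEATURES], tf.float32, name='landmarks')])
def classify(landmarks):
    return {'probabilities': model(landmarks)}

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [classify.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
with open('pruned_and_quantized.tflite', 'wb') as f:
    f.write(converter.convert())
```
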
test_tflite.py
is used to test the accuracy of the TensorFlow Lite model on the Jester test data
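
A sketch of that evaluation through the model's signature (the test arrays and the 'landmarks'/'probabilities' names carry over the assumptions from the conversion sketch above):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='pruned_and_quantized.tflite')
classify = interpreter.get_signature_runner()  # the model's single signature

x_test = np.load('x_test.npy')  # hypothetical prepared test split
y_test = np.load('y_test.npy')

correct = 0
for x, y in zip(x_test, y_test):
    probs = classify(landmarks=x[np.newaxis].astype(np.float32))['probabilities']
    correct += int(np.argmax(probs) == y)
print(f'TFLite accuracy: {correct / len(y_test):.3f}')
```
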
single_gesture_classifier.py
is used to collect a single 3-second gesture and then perform classification
real_time_classifier.py
is used to perform continuous gesture classification in real time
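
Both classifiers follow the same basic pattern; a rough outline of the continuous version, assuming OpenCV for capture and a sliding window over the most recent frames (names and shapes carried over from the sketches above):

```python
from collections import deque

import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

SEQ_LEN = 30  # assumed window length, matching training

detector = vision.HandLandmarker.create_from_options(
    vision.HandLandmarkerOptions(
        base_options=python.BaseOptions(model_asset_path='hand_landmarker.task'),
        num_hands=1))
interpreter = tf.lite.Interpreter(model_path='pruned_and_quantized.tflite')
classify = interpreter.get_signature_runner()

window = deque(maxlen=SEQ_LEN)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector.detect(mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))
    if result.hand_landmarks:
        hand = result.hand_landmarks[0]
        window.append([c for lm in hand for c in (lm.x, lm.y, lm.z)])
    if len(window) == SEQ_LEN:  # classify once the window is full
        x = np.array(window, np.float32)[np.newaxis]
        probs = classify(landmarks=x)['probabilities']
        print('gesture:', int(np.argmax(probs)))
    cv2.imshow('camera', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```
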
Performing asynchronous gesture classification using test_tflite.py:
swipe_down.mp4
Performing real-time gesture classification using test_gesture.py:
PXL_20231215_005556192.3.mp4
IMG_2368.online-video-cutter.com.1.mov
Contributions:
Shivam Sharma: trainer.py, inference.py
Isabella Feeney: test_gesture.py, prepare_data.py, model.py, Evaluate_models.ipynb, test_tflite.py, single_gesture_classifier.py, real_time_classifier.py