American Sign Language Alphabet Recognition using Pretrained CNN Models: Comparison Between Fine-tuning and Feature Extraction
CSC413 Final Project
American Sign Language (ASL) has provided many deaf and speech-impaired individuals in Canada and the US with a much-needed way to express their thoughts and feelings. However, for people who have never learned ASL, it is difficult to understand without an interpreter. Given the lack of a large, centralized dataset on ASL and the abundance of pretrained Convolutional Neural Network (CNN) models, one of the simplest approaches is to apply transfer learning to these pretrained models, re-purposing their learned representations for this new classification problem. The goal of this work is to compare two of the most widely used transfer learning methods, fine-tuning and feature extraction, across a variety of pretrained CNN models, and thereby provide guidelines on the advantages and disadvantages of each.
5.1 Report
Yijing Chen, Ming Liu, Wenfei Wang
5.2 Data preparation and Augmentation
Ming Liu
5.3 Building Experiment Model
Ming Liu, Wenfei Wang
5.4 Experiment Testing
Ming Liu, Wenfei Wang