Automated-Speech-Recognition

An autonomous speechreading algorithm that helps the deaf or hard of hearing by translating visual lip movements into coherent sentences in real time, using deep learning, computer vision, and natural language processing models.

Note: a new and improved version of my project can be found here: https://github.com/allenye66/Computer-Vision-Lip-Reading-2.0

Read the full paper here: https://docs.google.com/document/d/1EMjR0lDjjZqXpzbsRqz87UugfXYeVfhSny7nkTfzvEY/edit

More than 13% of U.S. adults suffer from hearing loss; causes include exposure to loud noises, head injuries, and presbycusis. We propose an autonomous speechreading algorithm to help the deaf or hard of hearing by translating visual lip movements into coherent sentences in real time. We accomplish this with a supervised ensemble deep learning model that classifies lip movements into phonemes and then stitches the phonemes back into words.

Our dataset consists of images of segmented mouths, each labeled with a phoneme. We first downsize the images to 64 by 64 pixels to speed up training and reduce memory usage. We then apply Gaussian blurring to blur edges, reduce contrast, and smooth sharp curves, and we perform data augmentation to make the model less prone to overfitting.

Our first computer vision model is a 1-D CNN (convolutional neural network) that imitates the VGG architecture; we use a similar architecture for a 2-D CNN. We then perform ensemble learning using the voting technique. The 1-D and 2-D CNNs achieve balanced accuracies of 31.7% and 17.3%, respectively, and the ensemble raises the balanced accuracy to 33.29%. We report balanced accuracy because our dataset is unbalanced. Human experts achieve only about 30% accuracy after years of training, which our models match after a few minutes of training.
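
The preprocessing steps described above (downsizing to 64 by 64, Gaussian blurring, and augmentation) can be sketched as follows. This is a minimal illustration rather than the exact pipeline from the paper; the file path, blur kernel size, and augmentation parameters are assumptions.

```python
# Minimal preprocessing sketch: downsize to 64x64, Gaussian blur, then augment.
# Blur kernel and augmentation parameters are illustrative assumptions.
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess(path):
    img = cv2.imread(path)                  # segmented mouth image
    img = cv2.resize(img, (64, 64))         # shrink to 64x64 to cut memory and training time
    img = cv2.GaussianBlur(img, (5, 5), 0)  # blur edges and smooth sharp curves
    return img.astype(np.float32) / 255.0   # scale pixels to [0, 1]

# Data augmentation to make the model less prone to overfitting
# (the specific ranges below are assumptions).
augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1)
```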
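
A VGG-style 2-D CNN for phoneme classification could look roughly like the sketch below, written with Keras. The exact layer counts, filter sizes, and the number of phoneme classes (`NUM_PHONEMES`) are assumptions, not the architecture from the paper.

```python
# Sketch of a small VGG-style 2-D CNN for phoneme classification.
# Layer sizes and NUM_PHONEMES are assumptions for illustration.
from tensorflow.keras import layers, models

NUM_PHONEMES = 39  # assumed; replace with the actual number of phoneme labels

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", padding="same", input_shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_PHONEMES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```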
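
The voting ensemble and the balanced-accuracy metric could be combined as in the sketch below. It assumes soft voting (averaging the two models' softmax outputs); the model names, inputs, and labels are hypothetical placeholders.

```python
# Sketch of soft voting over the 1-D and 2-D CNNs, scored with balanced accuracy.
# model_1d, model_2d, x_1d, x_2d, and y_true are assumed placeholders.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def ensemble_predict(model_1d, model_2d, x_1d, x_2d):
    # Average the softmax outputs of both models, then take the arg-max class.
    probs = (model_1d.predict(x_1d) + model_2d.predict(x_2d)) / 2.0
    return np.argmax(probs, axis=1)

# y_true holds the integer phoneme labels of the test set:
# print(balanced_accuracy_score(y_true, ensemble_predict(model_1d, model_2d, x_1d, x_2d)))
```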
