Sign Language Recognition Using Deep Learning
DOI: https://doi.org/10.47750/pnr.2022.13.S03.070

Keywords: Artificial Neural Network, Computer Vision, Deep Learning, MediaPipe, Sign Language

Abstract
Sign language is the primary language of people with speech and hearing impairments. Hearing-impaired people use sign language to express themselves, participate in conversations, learn, and live as normal a life as possible. A problem arises when deaf or mute persons try to converse in sign language with people who are not familiar with it. This is where modern technology can step in. Although many existing projects have proposed methods to alleviate this problem, most of them rely on external sensors or on algorithms that perform poorly under certain conditions, such as variation in skin color, the inclusion of facial data, and similarity between certain signs and gestures. In our project we use MediaPipe, Google's highly accurate hand-tracking framework, to solve the problem of skin-color variation. We have also created three custom datasets to train three different deep learning models; two of these models are used specifically to predict particular groups of letters that resemble one another, which resolves the issue that arises when similar signs are encountered. The proposed prototype can be used to help elementary school children learn to fingerspell the American Sign Language alphabet, string the letters into basic words, and learn them in a pictorial manner. Ultimately, our prototype should be able to detect sign-language letters in live video, print them on screen, and display an image of the word formed from those letters.
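To make the described pipeline concrete, the sketch below shows how MediaPipe hand landmarks could feed the kind of two-stage classification the abstract outlines: a main model predicts a letter from the 21 tracked landmarks, and when that letter falls into a group of visually similar signs, a dedicated group model refines the prediction. This is a minimal illustration under stated assumptions, not the authors' implementation: the model file names, the letter groupings ("MNST" and "KV" here), and the 26-letter output space are placeholders, since the paper does not specify them.

import cv2
import mediapipe as mp
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical model files standing in for the paper's three trained models.
main_model = load_model("main_letters.h5")
group_models = {
    "mnst": load_model("group_mnst.h5"),
    "kv": load_model("group_kv.h5"),
}
# Placeholder groupings of visually similar signs; the paper does not list its actual groups.
SIMILAR_GROUPS = {"mnst": "MNST", "kv": "KV"}
LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumes 26 output classes

def predict_letter(features):
    """Stage 1: main classifier. Stage 2: group-specific refinement for similar signs."""
    probs = main_model.predict(features[None, :], verbose=0)[0]
    letter = LETTERS[int(np.argmax(probs))]
    for name, group in SIMILAR_GROUPS.items():
        if letter in group:
            sub = group_models[name].predict(features[None, :], verbose=0)[0]
            letter = group[int(np.argmax(sub))]
            break
    return letter

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # 21 landmarks x (x, y, z) -> a 63-dim feature vector, independent of skin color.
        lm = result.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32).flatten()
        cv2.putText(frame, predict_letter(features), (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("ASL fingerspelling", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
hands.close()

Note that the classifiers here operate on normalized landmark coordinates rather than raw pixels, so skin-color variation never reaches the models; this is the motivation the abstract gives for adopting MediaPipe.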