Manuscript Title:

DESIGN AND IMPLEMENTATION OF CNN FOR SIGN LANGUAGE RECOGNITION

Author:

IMRAN KHAN, SOHAIL RANA, NADIA MUSTAQIM ANSARI, RIZWAN IQBAL, TALHA TARIQ, MAQSOOD UR REHMAN AWAN, ADNAN WAQAR

DOI Number:

DOI:10.17605/OSF.IO/NKHJT

Published: 2022-11-10

About the author(s)

1. IMRAN KHAN - Department of Telecommunication Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
2. SOHAIL RANA - Department of Electronic Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
3. NADIA MUSTAQIM ANSARI - Department of Electronic Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
4. RIZWAN IQBAL - Department of Telecommunication Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
5. TALHA TARIQ - Department of Electronic Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
6. MAQSOOD UR REHMAN AWAN - Department of Electronic Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.
7. ADNAN WAQAR - Department of Electronic Engineering, Dawood University of Engineering & Technology, Karachi, Pakistan.


Abstract

Every day, we encounter many deaf, mute, and blind people who have a hard time interacting with others. Sign language uses hand movements and gestures to communicate, mainly with people who are deaf or hard of hearing. This paper proposes a system that recognizes hand gestures using Python libraries and deep learning algorithms to process images and predict gestures. A web camera captures images of the gestures used as input, and the system recognizes hand gestures for the digits 1-10 as well as "OK" and "Salaam". The proposed system includes modules for preprocessing and feature extraction, model training, testing, and sign-to-text translation. Several CNN architectures and preprocessing techniques, such as greyscale conversion and thresholding, were built and evaluated on our dataset to improve recognition accuracy. The proposed convolutional neural network (CNN) achieves an average accuracy of 98.76 percent for real-time hand gesture identification on a dataset of nine hand gestures with 500 images per gesture.
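The preprocessing step described above (greyscale conversion followed by thresholding before the image reaches the CNN) can be sketched as below. This is a minimal illustration, not the paper's implementation: the 64x64 frame size, the threshold value of 127, and the `preprocess` function name are assumptions for demonstration.

```python
import numpy as np

def preprocess(frame: np.ndarray, threshold: int = 127) -> np.ndarray:
    """Convert an RGB frame to greyscale, then binarise it.

    `threshold=127` is an illustrative value, not the paper's setting.
    """
    # Luminance-weighted greyscale conversion (ITU-R BT.601 weights)
    grey = frame[..., 0] * 0.299 + frame[..., 1] * 0.587 + frame[..., 2] * 0.114
    # Binary thresholding: pixels above the threshold become 1.0, others 0.0
    binary = (grey > threshold).astype(np.float32)
    # Add a channel axis so the array matches a CNN's expected input shape
    return binary[..., np.newaxis]

# Example with a synthetic 64x64 RGB "frame" containing one bright region
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[16:48, 16:48] = 200           # bright square standing in for the hand
x = preprocess(frame)
print(x.shape)                      # (64, 64, 1)
print(int(x.sum()))                 # 1024 bright pixels (32 * 32)
```

The binarised single-channel output isolates the hand silhouette from the background, which is what makes the subsequent CNN feature extraction robust to lighting and skin-tone variation.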


Keywords

Sign Language Recognition, MLP, Deep Learning, Feature Extraction, Neural Network.