Deep Learning for Sign Language Recognition: Exploring VGG16 and ResNet50 Capabilities
Authors: Jatin Sharma, Kanwarpartap Singh Gill, Mukesh Kumar and Ruchira Rawat
Publication Date: 09-11-2024
ISBN: 978-81-955020-9-7
Abstract
Sign language recognition (SLR) is essential for enabling deaf and hard-of-hearing individuals to communicate with one another. This study examines two widely used deep learning models, VGG16 and ResNet50, for SLR tasks. The two architectures achieved recognition accuracies of 99.92% and 99.95%, respectively, on sign language gestures. Both models proved effective at accurately interpreting hand shapes and gesture motions, substantially easing communication through sign language. By applying state-of-the-art deep learning techniques, this study advances SLR systems that can support more accessible and inclusive communication.
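The abstract names VGG16 and ResNet50 as the recognition backbones; the following is a minimal sketch of how such a model could be assembled via Keras-style transfer learning. The class count, input size, frozen base, and classification head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch for sign language gesture classification.
# NUM_CLASSES and all hyperparameters are illustrative assumptions.
from tensorflow.keras.applications import VGG16  # ResNet50 is a drop-in swap
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # hypothetical: one class per static alphabet gesture

# Load ImageNet-pretrained convolutional features without the top classifier.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for initial training

# Attach a small classification head for the gesture classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Swapping `VGG16` for `tensorflow.keras.applications.ResNet50` with the same arguments yields the second architecture the study compares.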
Keywords
Sign language recognition, Deep learning, VGG16, ResNet50, Gesture recognition.
Cite as
Jatin Sharma, Kanwarpartap Singh Gill, Mukesh Kumar and Ruchira Rawat, "Deep Learning for Sign Language Recognition: Exploring VGG16 and ResNet50 Capabilities", In: Mukesh Saraswat and Rajani Kumari (eds), Applied Intelligence and Computing, SCRS, India, 2024, pp. 115-124. https://doi.org/10.56155/978-81-955020-9-7-13