Artificial Intelligence (AI) has been used to develop various kinds of translation models that improve communication among users and break down language barriers across regions. Companies like Google and Facebook use AI to build advanced translation models for their services. Now, a third-year engineering student from India has created an AI model that can detect American Sign Language (ASL) gestures and translate them into English in real time.

Priyanjali Gupta, a student at the Vellore Institute of Technology (VIT), shared a video on her LinkedIn profile showcasing a demo of the AI-based ASL detector in action. Although the model can detect signs and translate them into English in real time, it currently supports only a handful of words and phrases: Hello, Please, Thanks, I Love You, Yes, and No.

Although Gupta’s post on LinkedIn drew numerous positive responses and appreciation from the community, an AI-vision engineer pointed out that the transfer-learning method used in her model relies on networks “trained by other experts” and is the “easiest thing to do in AI.” Gupta acknowledged the point and wrote that building “a deep learning model solely for sign detection is a really hard problem but not impossible.”
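For readers unfamiliar with the term, transfer learning means reusing a model that someone else has already trained and fitting only a small new piece on top of it for your own task. The toy sketch below illustrates that idea in plain NumPy: a frozen "pretrained" feature extractor stands in for a large vision backbone, and only a lightweight classifier head is trained on new labels. All names, data, and dimensions here are invented for illustration; this is not Gupta's actual code or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed projection whose
# weights are FROZEN, i.e. never updated during fine-tuning.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Pass inputs through the frozen layers (ReLU activation).
    return np.maximum(x @ W_frozen, 0.0)

# Toy dataset: 200 samples with two hypothetical "sign" classes
# that are linearly separable in the frozen feature space.
X = rng.normal(size=(200, 64))
feats = extract_features(X)
w_true = rng.normal(size=16)
scores = feats @ w_true
y = (scores > np.median(scores)).astype(int)

# Trainable head: logistic regression fit by gradient descent.
# This small head is the ONLY part that learns on the new task.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted prob.
    grad = p - y                                # logistic gradient
    w -= 0.1 * (feats.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = ((feats @ w + b > 0).astype(int) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

In a real sign-detection system the frozen part would be a large pretrained vision network rather than a random matrix, which is why the approach is considered an accessible starting point: most of the hard learning has already been done by others, and only the final layers are trained on the new sign data.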