Created by students at the University of Houston, MyVoice is a device that aims to translate sign language into sound and vice versa.
Just as Stanford University undergraduate Adam Duran recently developed Braille-writing software that lets blind people communicate visually through a standard touchscreen, a new innovation we’ve spotted does a similar thing for those with hearing disabilities. While sign language has long proven an effective way for deaf and hearing people to communicate, it takes time to learn, and many hearing people lack the skill, relying on an interpreter instead.

Recently awarded first place at the American Society of Engineering Education Gulf Southwest Annual Conference, MyVoice is small enough to fit in the hand yet contains a built-in microphone, speaker, soundboard, video camera and monitor. Much as the Microsoft Kinect detects individual gestures, the device recognizes hand movements and translates them into speech using a synthesized voice. It also works in reverse, recognizing spoken words and displaying the corresponding sign language symbols on its screen. Jeffrey Seto, one of the students working on the project, told the university’s website: “The biggest difficulty was sampling together a database of images of the sign languages. It involved 200-300 images per sign.”

Having created a prototype, the group is now looking to develop the device further, increase its capabilities and find a backer to bring it to market. Investors – one to get involved in?
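For the curious, the recognition step described above – matching a captured hand gesture against a database of sample images per sign – can be sketched in its simplest form as nearest-neighbour matching. This is a hypothetical illustration, not MyVoice’s actual algorithm; the feature vectors, labels and function names below are invented stand-ins for the 200-300 training images per sign mentioned in the article.

```python
import math

# Toy template database: sign label -> list of feature vectors.
# In a real system each vector would be extracted from a training
# image of the sign; here the numbers are made up for illustration.
SIGN_DATABASE = {
    "hello":     [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "thank you": [[0.1, 0.9, 0.7], [0.2, 0.8, 0.8]],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features):
    """Return the sign label whose templates lie closest to `features`."""
    best_label, best_dist = None, float("inf")
    for label, templates in SIGN_DATABASE.items():
        for template in templates:
            d = distance(features, template)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

# A captured gesture whose features sit near the "hello" templates:
print(recognize([0.85, 0.15, 0.15]))  # prints "hello"
```

The recognized label would then be handed to a text-to-speech engine to produce the synthesized voice output the article describes; the reverse direction (speech to on-screen sign symbols) would pair a speech recognizer with a lookup from words to sign imagery.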