The technology simplifies and speeds up data input using natural hand gestures
Spotted: For many computer systems to work correctly, we must train them to recognise objects from video or image data. But the way we do this is not only inefficient, it is also complicated work best left to experts. A pair of researchers from the Interactive Intelligent Systems Lab at the University of Tokyo decided to change this, creating software that allows anyone to train a machine learning system to recognise objects using natural hand gestures. Unlike previous methods, their simplified approach also cuts unnecessary background data out of the process, making for far more accurate object recognition.
Zhongyi Zhou and Koji Yatani’s model, called LookHere, builds on approaches that capture images of objects held up by hand. Where the LookHere software goes a step further is in incorporating the user’s hand gestures into the processing stage. This way, objects can be accentuated over background imagery, much as a person might gesture to point out an important object to someone else.
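The idea of using the hand to accentuate an object over its background can be illustrated with a toy sketch. The function below, `emphasize_near_hand`, is a hypothetical name and a deliberately simplified stand-in: it merely down-weights pixels far from an assumed hand location with a Gaussian map, and does not reproduce LookHere’s actual gesture-aware pipeline.

```python
import numpy as np

def emphasize_near_hand(image: np.ndarray, hand_centroid: tuple,
                        sigma: float = 20.0) -> np.ndarray:
    """Down-weight pixels far from the hand so the presented object stays salient.

    Toy illustration only -- an assumption, not LookHere's real method, which
    uses learned hand-gesture cues rather than a fixed Gaussian window.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    cy, cx = hand_centroid
    # Gaussian emphasis map centred on the (assumed) hand location
    weights = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    # Broadcast over colour channels if present
    return image * weights[..., None] if image.ndim == 3 else image * weights

# Usage: emphasise the centre of a toy 64x64 grayscale "frame"
frame = np.ones((64, 64))
out = emphasize_near_hand(frame, hand_centroid=(32, 32), sigma=10.0)
```

In this sketch the pixel nearest the hand keeps its full value while distant background pixels fade towards zero, mimicking how pointing draws attention to one object in a scene.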
Talking about the inspiration for LookHere, Zhongyi Zhou explains: “In a typical object training scenario, people can hold an object up to a camera and move it around so a computer can analyse it from all angles to build up a model. However, machines lack our evolved ability to isolate objects from their environments.” LookHere, by contrast, is intuitive, so people spend less time teaching and machines learn more effectively.
This system, which the authors claim can build models up to 14 times faster than rival approaches, has been published on GitHub under the Creative Commons Zero (CC0) 1.0 Universal public-domain dedication.
Written By: Georgia King