Man and machine without boundaries

Jawad A Anjum
Staff Writer

Jawad A Anjum examines gesture recognition technology as the world moves towards seamless interaction between humanity and technology.

Gesture recognition technology is here. We have seen it in the multi-touch functionality of our smartphones, but it is the 3D implementation that is creating real possibilities. From the Wii to the Xbox Kinect, we have experienced the technology at first hand, yet we are nowhere near its full potential. That would be something closer to the Steven Spielberg film Minority Report, set in 2054, a stunning depiction of, among other things, a spectacular array of future technologies.

Some have already been realised since the film’s release in 2002. The design house Oblong is responsible for the G-Speak system, in which users wear gloves to navigate through multiple screens with six degrees of freedom in the most intuitive fashion possible. You can rotate, zoom, spin, swipe or drag, and the gestures are just as you would expect. Hand gestures, however, are merely the tip of a very big iceberg. Movements of the head, such as nodding, may also be incorporated alongside facial gestures; crude versions of facial-gesture recognition software are already available to download free online.
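How does the software decide that a movement was, say, a swipe rather than a wobble? At its simplest, by tracking the hand’s position over time and measuring its net travel. The toy Python sketch below, with an invented threshold and invented names rather than anything from a real system, classifies a motion in exactly that naive way:

    # Toy gesture classifier: given hand positions (x, y) sampled while
    # the hand moved, in normalised screen units, decide whether the
    # motion was a horizontal swipe. Real systems fuse depth data, many
    # tracked joints and learned models; this only shows the basic idea.

    SWIPE_THRESHOLD = 0.3  # minimum net horizontal travel (invented value)

    def classify_gesture(positions):
        if len(positions) < 2:
            return "none"
        dx = positions[-1][0] - positions[0][0]
        dy = positions[-1][1] - positions[0][1]
        # A swipe is mostly-horizontal travel beyond the threshold.
        if abs(dx) > SWIPE_THRESHOLD and abs(dx) > 2 * abs(dy):
            return "swipe right" if dx > 0 else "swipe left"
        return "none"

    # A hand drifting steadily rightwards across the frame:
    track = [(0.10, 0.50), (0.25, 0.51), (0.40, 0.52), (0.55, 0.52)]
    print(classify_gesture(track))  # prints: swipe right

Multiply that by every gesture in the vocabulary, every user and every lighting condition, and the scale of the real engineering problem becomes clear.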


I won’t even get into “affective computing”, which proposes to interpret human emotions and is already showing positive results; this article is nowhere near long enough to cover the possibilities that technology opens up. YouTube offers some great examples from labs around the world developing hundreds of different applications. Among the more significant are communicating with a device in sign language, classroom education delivered through a hyper-interactive environment, and medical diagnostics using life-size models with all available data presented in any form desired.

All of this is made possible by hardware such as stereoscopic, depth-aware cameras and wireless input devices ranging from gloves to glasses: a whole array of sensors that work hard to read you. The information they gather then passes through several million man-hours’ worth of code to give you a visible display. As with most futuristic technology, gesture recognition is as yet prohibitively expensive, quite glitchy and designed only for specific situations, not to mention the problems with recognition accuracy, much like those of the patchy speech-recognition software on your laptop right now. It is firmly on the horizon, but to give an expected date, or even an expected year, would be foolish, as advances like these are quite unpredictable.

The ultimate goal is a “natural user interface”: a seamless interaction between humanity and technology, a conglomeration of developments with which you would never have to learn how to use the technology; instead, it would learn how to read you. Nothing you did or said would be extraneous to your normal routine, because the technology would feel as though it were not even there. In other words, it would feel natural. I cannot wait.
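For the technically inclined, one last sketch shows why those depth-aware cameras earn their keep: the user’s hand is usually the nearest thing in the frame, so even a single depth cutoff can pluck it out of the background. Everything below, the cutoff, the frame and the function names, is invented for illustration and borrowed from no real device.

    # Toy depth-based hand segmentation: pixels nearer than a cutoff are
    # assumed to belong to the hand. A depth frame here is a nested list
    # of distances in millimetres; a real camera streams these per frame.

    HAND_CUTOFF_MM = 600  # anything nearer than ~60 cm counts as the hand

    def segment_hand(depth_frame):
        # True where a pixel is close enough to be part of the hand
        # (a reading of 0 usually means "no data", so it is excluded).
        return [[0 < d < HAND_CUTOFF_MM for d in row] for row in depth_frame]

    def hand_centroid(mask):
        # Average position of hand pixels; tracked across frames, this is
        # exactly the (x, y) stream the earlier swipe classifier consumed.
        points = [(x, y) for y, row in enumerate(mask)
                         for x, hit in enumerate(row) if hit]
        if not points:
            return None
        n = len(points)
        return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

    # Toy 3x4 frame: two near pixels (the "hand") against a far background.
    frame = [[1500, 1500,  550, 1500],
             [1500,  560, 1500, 1500],
             [1500, 1500, 1500, 1500]]
    print(hand_centroid(segment_hand(frame)))  # prints: (1.5, 0.5)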

Note: For a quick and easy sample of hand gesture recognition software, try the app Flutter to control iTunes and Spotify; it is a free download at flutterapp.com.