This ‘brave new world’ of man-machine interfaces has been steadily evolving over the last decade or so, and is grounded in research into augmented reality, which overlays the real world with generated material in order to enhance our capabilities.
In its most basic form, even a printed map or the audible commentary in a museum is augmented reality, but the term is usually reserved for visual overlays on the real world.
There have been impressive advances. A very good example is the class of applications that overlay map data onto the screen of a smartphone, allowing you to look “through” the phone and see a view of the real world enhanced with data, such as the example below: a useful app showing the closest cycle-hire facility in London.
However, that very success can be used to illustrate the three key usability and technology challenges that face truly “worn” augmented systems.
Identifying real-world objects. Object recognition is still in its infancy, and the only two proven ways of letting the system know what it is looking at are (a) covering objects with special markers or QR codes, or (b) making the object so large that GPS can identify it (as with cycle-hire sites). The chances of a system finding your car keys remain remote for the medium term.
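The GPS route is simple precisely because it sidesteps vision altogether: the app just compares the user's fix against a list of known site coordinates. A minimal sketch of that idea, using great-circle distance and entirely made-up station coordinates (the names and positions below are illustrative, not real cycle-hire data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical docking stations (lat, lon) and a user's GPS fix:
stations = {
    "Station A": (51.5074, -0.1278),
    "Station B": (51.5100, -0.1340),
}
user = (51.5080, -0.1300)

# The "recognition" step is just a nearest-neighbour lookup.
nearest = min(stations, key=lambda s: haversine_m(*user, *stations[s]))
print(nearest)  # Station A
```

No camera, no computer vision: the object is identified purely because it is big, fixed, and already in a database, which is exactly why this approach does not generalise to car keys.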
Overlaying reality. Goggles with accurately positioned enhancements would currently rely on a system that tracks eye position. If you need an example, move your head from side to side while you keep reading this; now imagine a pair of goggles that could move the text or image to match what your eyes are doing. It is not easy. It is far easier to require the user to look at a generated version of reality, which is what happens when you look at the world through a smartphone.
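The smartphone shortcut works because the overlay is drawn on a camera image, not on the world itself: the app only has to project a known 3-D point into pixel coordinates, with no eye tracking involved. A minimal sketch of that projection, assuming an idealised pinhole camera with an illustrative 500-pixel focal length and a 640x480 screen:

```python
def project_point(point_cam, focal_px, cx, cy):
    """Project a 3-D point in camera coordinates (x right, y down,
    z forward, metres) onto the image plane of a pinhole camera (pixels)."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: nothing to draw
    u = cx + focal_px * x / z
    v = cy + focal_px * y / z
    return (u, v)

# A landmark 2 m ahead and 0.5 m to the right of the phone's camera,
# drawn on a 640x480 screen (principal point at its centre):
print(project_point((0.5, 0.0, 2.0), 500, 320, 240))  # (445.0, 240.0)
```

Because screen and camera share one coordinate frame, the label lands in the right place however the user holds the phone; goggles break that assumption, which is why they need to know where the eyes are looking.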
Communicating with the system. Voice and gestural interfaces work to a certain extent, but even in the orchestrated demonstration by Motorola, it would have been far quicker for the user to move and click a mouse. Take any of these systems into the real world, where not all conversation is aimed at the computer and where gestures are made for all sorts of other reasons, and the chances of a worn computer separating genuine inputs from the surrounding noise are small.
Don’t get me wrong. This is an exciting area of research with numerous potential applications in the medical space. But I have found through hard-won experience that the reality is a long way from the aspiration.