Goggles are becoming a platform for several reasons:
- They are a physical platform. Once goggles are securely positioned on the head, they provide a physical base for attaching additional sensors and peripherals: cameras, trackers, depth sensors and more.
- Portable computing is becoming ever more powerful, creating an incentive to process and analyze sensory data on the goggles rather than transmitting large amounts of information to some computing base. Furthermore, a key part of the value of goggles is their portability, so the ability to process 'locally' - on the goggle - helps realize that value.
- As goggles become increasingly immersive, the value of sensors increases: they tie the experience to physical objects in the immediate surroundings, and connect the actions and context of the user to what is happening 'inside' the display.
One could look at the following diagram - courtesy of Sensics - as a good illustration of what these sensors might be:
One could imagine several types of sensors feeding into the goggle:
- Orientation and position sensors - whether for the head, limbs or objects such as a gaming sword
- Cameras that might provide visible, IR or depth map information
- Location sensors such as GPS or indoor positioning systems
- Eye tracking sensors to understand gaze direction and potentially serve as a user interface
- Biometric sensors such as heart rate, perspiration and blood pressure monitors. An eye tracker can also provide pupil size, which is another biometric input.
- and more
One would then need to turn this sensor data into useful information: for instance, turn the position and orientation of various body parts into recognized gestures; turn a depth map into detection of nearby obstacles; detect faces and markers in a video image.
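To make the "data into information" step concrete, here is a minimal sketch of one of the examples above - turning a raw depth map into nearby-obstacle detections. The function name, the grid representation and the threshold are all illustrative assumptions, not any particular product's API:

```python
# Hypothetical sketch: a depth map as a 2D grid of distances in meters.
# Raw data (per-pixel distances) becomes information (obstacle locations).

def nearby_obstacles(depth_map, threshold_m=0.5):
    """Return (row, col, distance) for every pixel closer than threshold_m."""
    hits = []
    for r, row in enumerate(depth_map):
        for c, dist in enumerate(row):
            if dist < threshold_m:
                hits.append((r, c, dist))
    return hits

depth_map = [
    [2.0, 1.8, 0.4],
    [2.1, 0.3, 1.9],
]
print(nearby_obstacles(depth_map))  # two pixels are closer than 0.5 m
```

A real implementation would of course run on full-resolution frames and cluster neighboring pixels into objects, but the shape of the transformation is the same.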
As we have discussed before, a virtual reality abstraction layer will speed up this process: it frees those who turn data into information from worrying about the particular formats and features of individual sensors, letting them target a category of sensors instead.
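The idea can be sketched in a few lines. In this illustration (all class and device names are invented for the example), application code is written against the category "orientation sensor" and always receives a quaternion, while per-device adapters hide whether the hardware natively reports Euler angles or quaternions:

```python
import math

class OrientationSensor:
    """Category-level interface: applications only ever see quaternions."""
    def read_quaternion(self):
        raise NotImplementedError

def euler_to_quaternion(yaw, pitch, roll):
    """Convert Euler angles in degrees to a (w, x, y, z) unit quaternion."""
    y, p, r = (math.radians(a) / 2 for a in (yaw, pitch, roll))
    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    return (cr * cp * cy + sr * sp * sy,
            sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy)

class ImaginaryEulerTracker(OrientationSensor):
    """Hypothetical device that natively reports Euler angles in degrees."""
    def __init__(self, yaw, pitch, roll):
        self._native = (yaw, pitch, roll)
    def read_quaternion(self):
        return euler_to_quaternion(*self._native)

class ImaginaryQuatTracker(OrientationSensor):
    """Hypothetical device that already reports quaternions."""
    def __init__(self, q):
        self._q = q
    def read_quaternion(self):
        return self._q

# Application code is identical regardless of the underlying device:
for sensor in (ImaginaryEulerTracker(0, 0, 0), ImaginaryQuatTracker((1.0, 0.0, 0.0, 0.0))):
    print(sensor.read_quaternion())
```

Swapping in a new tracker then means writing one adapter, not touching every application.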
There are several places where this transformation could happen: on the goggle, with the help of an embedded processor (as shown in the above diagram); near the goggle, on a device such as a tablet or powerful smartphone; or at a stationary device such as a desktop PC or gaming console. Performing the transformation in or near the goggle also allows the application software to run in or near the goggle, leading to a truly portable solution.
What is the best place to do so? As the saying - often attributed to Rufus Miles - goes, "Where you stand depends on where you sit". If you are a PC vendor that is married to everything happening on the PC, you shudder at the notion of the goggle being an independent computing platform. If you are a goggle vendor, you might find that this architecture opens up additional opportunities that do not exist when you are physically and logically tied to a stationary platform. If you are a vendor making phones or tablets, this might allow you to position the phone as a next-generation portable gaming platform.
So, beyond innovations in displays and optics, I think we will see lots of interesting sensors and sensor fusion applications in 2014.
What do you think? Let me know.