Today, my company introduced the SmartGoggle, the smart alternative to the common virtual reality goggle.
The issue with today's goggles is obvious: they are mindless, just a monitor on your head. Just as a monitor has a video input and will display almost anything you pipe into it, a traditional goggle simply displays that signal in front of your eyes. If you provide two signals, or side-by-side video, it can display in stereo as well.
But a traditional goggle, like a monitor, requires an external video source. That source can be a computer, in which case you can do something interactive, or it can be your iPod, so you can use the goggle as a media viewer. Useful, but boring.
While all of this was going on, tablets and smartphones were becoming increasingly powerful in their computing and graphics capabilities. So, as we thought about how to make goggles better, it became obvious that putting serious processing power inside the goggles would open up a lot of possibilities. For one thing, you could run applications on the goggles themselves and not have to carry an external computer with you. The goggles, and these applications, could go with you anywhere, which is cool. These could be connected applications that stream something from the Web, or local applications, such as one that uses an on-board camera to drive an augmented reality app. Yes, you should still be able to use an external video source to drive your goggle, but now you can also drive it 'from the inside'. This is a bit like the new crop of smart TVs that are becoming very popular: you can connect them to your cable provider or DVD player, but they can also stream a Netflix movie or surf the Web using an on-board processor. So far, so good.
The problem of user interaction still remains. Sure, you can add a head tracker so that the view changes as you rotate your head, but this is typically not enough for true interaction with the application. You can use an external device - a phone, a joystick, or even a Kinect - as input, but these approaches have limitations: you have to carry the device with you; you have to stand in front of a sensor; you are limited in your tracking area. Good, but not good enough for goggles.
We then started thinking: what if we put a camera on the goggles that can track your hands, and make that information - both in raw form and after gestural analysis - available to the application running on the goggle? That could be powerful, because your hands go with you everywhere, and cameras on your head can usually see your hands regardless of where you are and which direction you are looking. Better yet, if we put an array of cameras on the head, we get depth perception of the hand location as well as a really wide tracking area. We call this 'first-person hand tracking'.
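To give a rough sense of how an array of cameras yields depth, here is a minimal sketch of the classic pinhole-stereo relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between two cameras, and d is the disparity of a hand feature between their images. This is an illustration only, assuming a calibrated and rectified camera pair; the class and method names are hypothetical, not the SmartGoggle API.

```java
// Illustrative only: depth of a tracked hand point from stereo disparity.
// Assumes a calibrated, rectified camera pair; all names are hypothetical.
public final class StereoDepth {
    // Focal length (pixels) and baseline (meters), both from calibration.
    private final double focalLengthPx;
    private final double baselineMeters;

    public StereoDepth(double focalLengthPx, double baselineMeters) {
        this.focalLengthPx = focalLengthPx;
        this.baselineMeters = baselineMeters;
    }

    /**
     * Classic stereo relation Z = f * B / d, where d is the disparity:
     * the horizontal pixel offset of the same hand feature between the
     * left and right images.
     */
    public double depthMeters(double leftX, double rightX) {
        double disparityPx = leftX - rightX;
        if (disparityPx <= 0) {
            throw new IllegalArgumentException("Disparity must be positive");
        }
        return focalLengthPx * baselineMeters / disparityPx;
    }

    public static void main(String[] args) {
        // Example: 700 px focal length, 6 cm baseline, 35 px disparity.
        StereoDepth stereo = new StereoDepth(700.0, 0.06);
        System.out.printf("Hand is ~%.2f m away%n",
                stereo.depthMeters(320.0, 285.0)); // prints ~1.20 m
    }
}
```

The same geometry explains the wide tracking area: with more cameras on the head, any hand feature seen by at least two of them can be triangulated this way.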
Lastly, we realized that many goggles are essentially the same on the inside. Sensics, for instance, makes a commercial goggle and then repackages it into different enclosures for training and simulation applications. Anyone wanting to build a goggle needs to cover the same areas: driving the displays, head tracking, video processing, and more. Given this, it made sense to design a module that encapsulates all of these functions and lets goggle developers focus on the design and styling of the goggles rather than build everything from scratch time and again.
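To make the 'system on a module' idea concrete, here is a sketch of the kind of interface such a module might expose to a goggle developer. Every name below is hypothetical, invented for illustration; this is not the actual Sensics API, only a picture of what encapsulating display, tracking, and video functions behind one surface could look like.

```java
// Hypothetical sketch of a goggle "system on a module" interface.
// Names are invented for illustration; this is not the Sensics API.
public interface GoggleModule {
    /** Push a rendered frame pair to the left and right displays. */
    void presentFrames(byte[] leftFrame, byte[] rightFrame);

    /** Latest head orientation as yaw/pitch/roll in degrees. */
    double[] headOrientation();

    /** Raw hand positions plus the result of gestural analysis. */
    HandState handState();

    /** Switch between an external video source and on-board apps. */
    void selectSource(VideoSource source);

    enum VideoSource { EXTERNAL_VIDEO, ONBOARD_APP }

    record HandState(double[] leftHandXyz, double[] rightHandXyz,
                     String gesture) {}
}
```

The point of such a design is that the module vendor implements this once, and each goggle maker only styles the enclosure and optics around it.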
We've been working on this for a while and are excited about the progress and the initial feedback we are receiving. Combine these three innovations - an on-board Android machine, real-time hand tracking from a first-person perspective, and a 'system on a module' approach that encapsulates most of what's needed to build a goggle - and we think you get something new. A SmartGoggle.
Remember when the iPhone came out and people suddenly realized that it's not just a phone, but much more than that? We think SmartGoggles can be to mindless goggles what smartphones are to flip phones. A major step forward, which we are very excited to take today.