Thursday, June 6, 2013

Understanding the User's Context as it relates to Virtual Reality Goggles

This post is a continuation of an earlier post titled "The barriers to consumer virtual reality may not be what you think".

[Image: "DWIM - Do What I Mean" button :-) #wwdc11, via Twitpic]
Understanding context means understanding what the user is doing or trying to do based on the user's actions. It's the difference between data and information. For instance, it could be determining that the user is aiming a weapon in a video game by analyzing posture and movement. It could be realizing that the user is excited based on perspiration and heart rate, or deducing that the user is walking by combining data from arm and leg sensors. If using a computer, a perfect understanding of context could collapse the entire keyboard into a single "DWIM - Do What I Mean" button.
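To make the data-versus-information distinction concrete, here is a toy sketch of the kind of inference described above. Everything in it is hypothetical: the sensor names, the thresholds, and the rules are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch: turning raw sensor data (numbers) into a
# context label (information). Sensor names and thresholds are invented.

def infer_activity(readings: dict) -> str:
    """Map raw body-sensor readings to a coarse activity label."""
    heart_rate = readings.get("heart_rate_bpm", 0)
    arm_motion = readings.get("arm_motion_hz", 0.0)   # periodic arm swing
    leg_motion = readings.get("leg_motion_hz", 0.0)   # periodic leg swing

    # Rhythmic, matched arm and leg motion suggests walking.
    if leg_motion > 0.5 and abs(arm_motion - leg_motion) < 0.3:
        return "walking"
    # Elevated heart rate with little limb motion suggests excitement.
    if heart_rate > 100 and leg_motion < 0.2:
        return "excited"
    return "idle"

print(infer_activity({"heart_rate_bpm": 70,
                      "arm_motion_hz": 1.8,
                      "leg_motion_hz": 1.9}))  # walking
```

A real context engine would of course use probabilistic models rather than hand-written thresholds, but the shape of the problem is the same: many low-level readings in, one high-level interpretation out.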

When properly implemented, context is a beautiful way to streamline and simplify human-computer (or human-VR goggle) interaction.

My interest in context started in the year 2000 when I co-founded Unwired Express, a company that built a software platform for contextual delivery of information to mobile devices such as (at that time) a Palm Pilot, a Blackberry, or a phone with text messaging capabilities. For instance, by seeing a customer meeting in a salesperson's calendar and looking at traffic reports, the platform could advise the salesperson to leave early because of traffic. On the way, an email from that particular customer would generate an alert to the salesperson's phone - because these emails are probably of elevated importance just before a meeting - whereas other emails might be deferred. These are just sample use cases: the platform was quite generic and could deduce context from a wide range of enterprise software systems, location information, Web services and more. In retrospect, I think the idea was good, but probably much ahead of its time. Today, Google Now seems to be looking at many of the same use cases. The image below shows two tongue-in-cheek advertisements we did to showcase the forward-looking capabilities of the system in understanding what activity you are engaged in and putting it in context.
Unwired Express advertisements for context-aware mobile computing
At Unwired Express, we founded The Context Alliance, which brought together representatives from leading companies such as IBM, Sun Microsystems, Motorola and Navteq to discuss context-related issues. One event was held at the context-aware home at Georgia Tech, a home built on campus to research context-related applications and use cases. If you want to read more about the work at Unwired Express, take a peek at this article on Context Awareness and Mobile Computing or at one of our patent filings.

Context used to be a data collection problem - how do you collect data from multiple systems and devices and deliver it to a central point in a timely, accurate manner? This is less of a problem in a VR goggle scenario, since most of the sensors used to understand context are either on or very near the user. I believe the near-term effort for context-aware VR is (after analyzing the use cases) to construct a good-enough collection of sensors that would provide the data which a context engine would turn into information.
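One way to picture that sensor-collection layer is a small hub that merges the latest reading from each on-body sensor into a timestamped snapshot for the context engine to interpret. This is only a sketch of the idea; the class names and sensor names are my own invention, not part of any actual VR platform.

```python
# Hypothetical sketch: merging readings from several on-body sensors
# into one timestamped snapshot that a context engine would consume.
import time
from dataclasses import dataclass, field


@dataclass
class ContextSnapshot:
    timestamp: float
    readings: dict = field(default_factory=dict)


class SensorHub:
    """Keeps the latest value per sensor; emits combined snapshots."""

    def __init__(self):
        self._latest = {}

    def update(self, sensor_name: str, value):
        # Newer readings simply overwrite older ones for the same sensor.
        self._latest[sensor_name] = value

    def snapshot(self) -> ContextSnapshot:
        return ContextSnapshot(time.time(), dict(self._latest))


hub = SensorHub()
hub.update("head_orientation_deg", (12.0, -3.5, 0.1))
hub.update("heart_rate_bpm", 72)
snap = hub.snapshot()
print(sorted(snap.readings))  # ['head_orientation_deg', 'heart_rate_bpm']
```

Since the goggles and body sensors sit on the user, this aggregation can happen locally with very low latency - which is exactly why the old "get everything to a central server" problem largely disappears here.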

More on that in future posts.

Comment? Question? Correction? Write a comment and I will try to address it.
Have a VR-related topic you'd like me to discuss? Write a comment and I'll consider it for a future post.
