|dwim - Do what I mean button :-) #wwdc11|
When properly implemented, context is a beautiful way to streamline and simplify human-computer (or human-VR goggle) interaction.
My interest in context started in the year 2000 when I co-founded Unwired Express, a company that built a software platform for contextual delivery of information to mobile devices such as (at that time) a Palm Pilot, a Blackberry, or a phone with text messaging capabilities. For instance, by seeing a customer meeting in a salesperson's calendar and looking at traffic reports, the platform could advise the salesperson to leave early because of traffic. On the way, an email from that particular customer would generate an alert to the salesperson's phone - because such emails are probably of elevated importance just before a meeting - whereas other emails might be deferred. These are just sample use cases: the platform was quite generic and could deduce context from a wide range of enterprise software systems, location information, Web services and more. In retrospect, I think the idea was good, but probably well ahead of its time. Today, Google Now seems to be addressing many of the same use cases. The image below shows two tongue-in-cheek advertisements we did to showcase the forward-looking capabilities of the system in understanding what activity you are engaged in and putting it in context.
|Unwired Express advertisements for context-aware mobile computing|
Context used to be a data-collection problem: how do you get data from multiple systems and devices to a central point in a timely, accurate manner? This is less of a problem in a VR goggle scenario, since most of the sensors used to understand context are either on or very near the user. I believe the near-term effort for context-aware VR is (after analyzing the use cases) to construct a good-enough collection of sensors that would provide the data which a context engine would turn into information.
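To make the sensors-to-context-engine pipeline concrete, here is a minimal sketch in Python. Everything in it - the class names, the sensor key, and the speed thresholds - is hypothetical, chosen only to illustrate how raw sensor readings might be fused into a higher-level context label:

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class SensorReading:
    """One raw data point from a device-local sensor."""
    sensor: str
    value: Any


class ContextEngine:
    """Toy engine that turns raw readings into an activity label.

    A real engine would fuse many signals (location, calendar,
    time of day); this sketch uses a single speed reading.
    """

    def __init__(self) -> None:
        self.readings: Dict[str, Any] = {}

    def ingest(self, reading: SensorReading) -> None:
        # Keep only the latest value per sensor.
        self.readings[reading.sensor] = reading.value

    def infer_activity(self) -> str:
        # Trivial rule set on a hypothetical speed sensor (km/h).
        speed = self.readings.get("speed_kmh", 0)
        if speed > 20:
            return "driving"
        if speed > 3:
            return "walking"
        return "stationary"


engine = ContextEngine()
engine.ingest(SensorReading("speed_kmh", 5))
print(engine.infer_activity())  # a 5 km/h reading maps to "walking"
```

The design point is the separation of concerns: sensors only report data, and the engine owns the rules that turn that data into information an application can act on.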
More on that in future posts.