Monday, October 14, 2013

Where are the VR Abstraction Layers?

"A printer waiting for a driver"
Once upon a time, application software included printer drivers. If you wanted to use WordPerfect or Lotus 1-2-3, you had to have a driver for your printer included in that program. Then, operating systems such as Windows and Mac OS came along and included printer drivers that could be used by any application. Amongst many other things, these operating systems provided abstraction layers: as an application developer, you no longer had to know exactly which printer you were printing to, because the OS had a generic descriptor that told you about the printer's capabilities and provided a standard interface for printing.

The same is true for game controllers. The USB HID (Human Interface Device) descriptor tells you how many controls a game controller has and what it can do, so when you write a game, you don't have to worry about specific types of controllers. Similarly, if you make game controllers and conform to the HID specification, existing applications are ready for you because of this abstraction layer.
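As a concrete illustration, here is a minimal sketch of enumerating those descriptors in practice. It uses the open-source hidapi library, which is my choice of API for the example, not something the HID specification itself mandates:

```cpp
// Minimal sketch: enumerate USB HID devices and print the descriptor
// fields that tell an application what each device is. Uses the
// open-source hidapi library (link with -lhidapi).
#include <hidapi/hidapi.h>
#include <cstdio>

int main() {
    if (hid_init() != 0) return 1;

    // 0x0/0x0 matches any vendor and product ID.
    hid_device_info *devs = hid_enumerate(0x0, 0x0);
    for (hid_device_info *d = devs; d != nullptr; d = d->next) {
        // usage_page/usage come from the HID descriptor; a gamepad
        // reports Generic Desktop (0x01) / Game Pad (0x05).
        std::printf("VID=%04hx PID=%04hx usage_page=0x%02hx usage=0x%02hx\n",
                    d->vendor_id, d->product_id, d->usage_page, d->usage);
    }
    hid_free_enumeration(devs);
    hid_exit();
    return 0;
}
```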

Where are the abstraction layers for virtual reality? There are many types of VR goggles, but surely they can be characterized by a reasonably simple descriptor (a hypothetical sketch in code follows the two lists below) that might contain:

  • Horizontal and vertical field of view
  • Number of video inputs: one or two
  • Supported video modes (e.g. side by side, two inputs, etc.)
  • Recommended resolution
  • Audio and microphone capabilities
  • Optical distortion function
  • See-through or immersive configuration
  • etc.
Similarly, motion trackers can be described using:
  • Refresh rate (e.g. 200 Hz)
  • Capabilities: yaw, pitch, roll, linear acceleration
  • Ability to detect magnetic north
  • etc.
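To make the idea concrete, here is what such descriptors might look like in code. Every type and field name below is my own hypothetical sketch, not an existing standard:

```cpp
// Hypothetical descriptors for an HMD and a motion tracker -- a sketch
// of what a standard abstraction layer could expose, not an existing API.
#include <cstdint>

enum class VideoMode : std::uint8_t { SideBySide, TwoInputs, TopBottom };

struct HmdDescriptor {
    float horizontalFovDeg;           // horizontal field of view, degrees
    float verticalFovDeg;             // vertical field of view, degrees
    std::uint8_t videoInputs;         // one or two
    VideoMode supportedModes[4];      // supported stereo video modes
    std::uint32_t recommendedWidth;   // recommended resolution, pixels
    std::uint32_t recommendedHeight;
    bool hasAudio;                    // built-in headphones or speakers
    bool hasMicrophone;
    float distortionCoeffs[4];        // polynomial optical distortion function
    bool seeThrough;                  // see-through (AR) vs. immersive (VR)
};

struct TrackerDescriptor {
    float refreshRateHz;              // e.g. 200 Hz
    bool hasYaw, hasPitch, hasRoll;
    bool hasLinearAcceleration;
    bool detectsMagneticNorth;        // magnetometer present
};
```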
Today, when an application developer wants to make their application compatible with a head-mounted display, they have to understand the specific parameters of these devices. The process of enhancing the application involves two parts:
  1. Generic: change the application so that it supports head tracking; add two view frustums to support 3D; modify the camera viewpoint; understand the role of the eye separation; move the GUI elements to a position where they can be easily seen on the screen; etc. (a sketch of the camera changes follows this list)
  2. HMD-specific: understand the specific intricacies of an HMD and make the application compatible with it
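
To illustrate the generic step, here is a simplified sketch of deriving per-eye cameras from descriptor fields such as field of view and eye separation. It uses the GLM math library and is deliberately simplified: real HMDs typically also need asymmetric frustums and distortion correction.

```cpp
// Sketch of the generic stereo-camera changes: build per-eye view and
// projection matrices from a generic HMD descriptor, using GLM.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct EyeSetup {
    glm::mat4 view;
    glm::mat4 projection;
};

EyeSetup makeEye(const glm::mat4 &headPose,   // pose from the head tracker
                 float verticalFovDeg,        // from the HMD descriptor
                 float aspect,                // width / height per eye
                 float eyeSeparation,         // interpupillary distance, meters
                 bool leftEye) {
    // Offset the camera sideways by half the eye separation.
    float halfSep = eyeSeparation * 0.5f;
    glm::mat4 eyeOffset = glm::translate(
        glm::mat4(1.0f),
        glm::vec3(leftEye ? -halfSep : halfSep, 0.0f, 0.0f));

    EyeSetup eye;
    eye.view = glm::inverse(headPose * eyeOffset);
    eye.projection = glm::perspective(glm::radians(verticalFovDeg),
                                      aspect, 0.1f, 1000.0f);
    return eye;
}
```

The application then renders the scene twice, once with each EyeSetup, into whatever layout the descriptor's video mode calls for (e.g. side by side).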

If these abstraction layers existed widely, the second step would be replaced by supporting the generic HMD driver or head-tracker driver. Once that was done, manufacturers would only need to write a good driver and voilà! Users could start using their gear immediately.

VR application frameworks like Vizard from WorldViz provide an abstraction layer, but they are not as powerful as modern game engines. There are some early efforts to provide middleware, such as I'm in VR, but I think a standard for an abstraction layer has yet to be created and gain serious traction. What's holding the industry back?

UPDATE: Eric Hodgson, of Redirected Walking fame, reminded me of VRPN as an abstraction layer for motion trackers, designed to provide a motion-tracking API to applications either locally or over a network. As Eric notes, VRPN does not apply to display devices, but it does abstract the tracking information (a minimal VRPN client sketch appears after the list below). I think that because VRPN has to run on numerous operating systems, it does not provide particularly good plug-and-play capabilities. Also, its socket-based connectivity is excellent for tracking devices that, at most, send several hundred lightweight messages a second. To be extended to HMDs, several things would need to happen, including:

  • Create a descriptor message for HMD capabilities
  • Add plug-and-play support (which would also be great for motion tracking)
  • The information about HMDs can be transferred over a socket, but if the abstraction layer does anything graphics-related (in the same way OpenGL or DirectX abstract the graphics card), it would need to move away from running over sockets.
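
For reference, consuming tracker data through VRPN's existing abstraction looks roughly like the sketch below. "Tracker0@localhost" is a placeholder device@host name; the callback fields come from VRPN's vrpn_TRACKERCB structure:

```cpp
// Minimal VRPN client: receive pose reports from whatever tracker the
// server exposes under a given name, without knowing the hardware behind it.
#include <vrpn_Tracker.h>
#include <cstdio>

// Called by VRPN whenever a new pose report arrives over the socket.
void VRPN_CALLBACK handlePose(void * /*userData*/, const vrpn_TRACKERCB t) {
    std::printf("sensor %ld: pos=(%.3f, %.3f, %.3f)\n",
                (long)t.sensor, t.pos[0], t.pos[1], t.pos[2]);
}

int main() {
    // "Tracker0@localhost" is a placeholder device@host name.
    vrpn_Tracker_Remote tracker("Tracker0@localhost");
    tracker.register_change_handler(nullptr, handlePose);
    while (true) {
        tracker.mainloop();  // pump the connection and dispatch callbacks
    }
}
```

Note that nothing in this client identifies the physical tracker: the VRPN server maps the device name to a hardware-specific driver, which is exactly the kind of indirection an HMD descriptor would add for displays.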

