*"A printer waiting for a driver"*
The same is true for game controllers. The USB HID (Human Interface Device) descriptor tells you how many controls are in a game controller, and what it can do, so when you write a game, you don't have to worry about specific types of controllers. Similarly, if you make game controllers and conform to the HID specifications, existing applications are ready for you because of this abstraction layer.
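To make the abstraction concrete, here is a hypothetical sketch (not the actual HID report descriptor format) of what such a layer buys an application: game code reads the controller's control counts and ranges from a descriptor instead of hard-coding per-device constants.

```python
from dataclasses import dataclass

# Hypothetical HID-style capability descriptor; field names are illustrative.
@dataclass
class ControllerDescriptor:
    buttons: int    # number of digital buttons reported
    axes: int       # number of analog axes (sticks, triggers)
    axis_min: int   # logical minimum of a raw axis reading
    axis_max: int   # logical maximum of a raw axis reading

def normalize_axis(desc: ControllerDescriptor, raw: int) -> float:
    """Map a raw axis reading into [-1.0, 1.0] using only the descriptor,
    so the game never needs controller-specific constants."""
    span = desc.axis_max - desc.axis_min
    return 2.0 * (raw - desc.axis_min) / span - 1.0

# Any controller that ships such a descriptor "just works" with existing code:
pad = ControllerDescriptor(buttons=12, axes=4, axis_min=0, axis_max=255)
print(normalize_axis(pad, 255))  # full deflection one way -> 1.0
print(normalize_axis(pad, 0))    # full deflection the other way -> -1.0
```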
Where are the abstraction layers for virtual reality? There are many types of VR goggles, but surely they can be characterized by a reasonably simple descriptor that might contain:
- Horizontal and vertical field of view
- Number of video inputs: one or two
- Supported video modes (e.g. side by side, two inputs, etc.)
- Recommended resolution
- Audio and microphone capabilities
- Optical distortion function
- See-through or immersive configuration
- Refresh rate (e.g. 200 Hz)
- Capabilities: yaw, pitch, roll, linear acceleration
- Ability to detect magnetic north
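As a sketch, such a descriptor could be modeled roughly like this; every field name and value below is illustrative, not part of any existing standard:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical HMD descriptor mirroring the fields listed above.
@dataclass
class HMDDescriptor:
    fov_deg: Tuple[float, float]             # horizontal, vertical field of view
    video_inputs: int                        # one or two
    video_modes: List[str]                   # e.g. ["side_by_side", "dual_input"]
    recommended_resolution: Tuple[int, int]  # per-display pixels
    has_audio: bool
    has_microphone: bool
    distortion_coeffs: List[float]           # optical distortion polynomial terms
    see_through: bool                        # False means fully immersive
    refresh_hz: float
    tracking: List[str]                      # e.g. ["yaw", "pitch", "roll"]
    has_magnetometer: bool                   # can detect magnetic north

# An application would read one of these instead of special-casing devices:
example_hmd = HMDDescriptor(
    fov_deg=(90.0, 100.0), video_inputs=1,
    video_modes=["side_by_side"], recommended_resolution=(1280, 800),
    has_audio=False, has_microphone=False,
    distortion_coeffs=[1.0, 0.22, 0.24],  # made-up illustrative values
    see_through=False, refresh_hz=60.0,
    tracking=["yaw", "pitch", "roll"], has_magnetometer=True,
)
```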
Today, adding VR support to an application involves two steps:
- Generic: change the application so that it supports head tracking; add two view frustums to support 3D; modify the camera point; understand the role of the eye separation; move the GUI elements to a position that can be easily seen on the screen; and so on.
- HMD-specific: understand the specific intricacies of a particular HMD and make the application compatible with it.
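The generic step can be sketched as coding against an abstract driver interface that any vendor could implement; the class and method names below are hypothetical:

```python
from abc import ABC, abstractmethod
from typing import Tuple

class HMDDriver(ABC):
    """Hypothetical generic HMD driver interface an application codes against."""

    @abstractmethod
    def head_orientation(self) -> Tuple[float, float, float]:
        """Return (yaw, pitch, roll) in degrees."""

    @abstractmethod
    def eye_separation_m(self) -> float:
        """Inter-pupillary distance in meters, for the two view frustums."""

class FakeHMD(HMDDriver):
    """Stand-in vendor driver, returning fixed values for illustration."""
    def head_orientation(self):
        return (10.0, -5.0, 0.0)
    def eye_separation_m(self):
        return 0.064

def render_frame(driver: HMDDriver):
    # The application uses only the abstract interface: orient the camera
    # from tracking data and offset each eye by half the eye separation.
    yaw, pitch, roll = driver.head_orientation()
    ipd = driver.eye_separation_m()
    left_eye_offset = -ipd / 2
    right_eye_offset = +ipd / 2
    return (yaw, pitch, roll, left_eye_offset, right_eye_offset)

print(render_frame(FakeHMD()))
```

Swapping in a different headset then means swapping the driver, not rewriting the application.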
If these abstraction layers existed widely, the second step would be replaced by supporting the generic HMD driver or head-tracker driver. Once that is done, a manufacturer would just need to write a good driver and, voila, users could start using its gear immediately.
VR application frameworks such as Vizard from WorldViz provide an abstraction layer, but they are not as powerful as modern game engines. There are some early middleware efforts, such as I'm in VR, but I think a standard abstraction layer has yet to be created and gain serious steam. What's holding the industry back?
UPDATE: Eric Hodgson, of Redirected Walking fame, reminded me of VRPN as an abstraction layer for motion trackers, designed to provide a motion-tracking API to applications either locally or over a network. As Eric notes, VRPN does not apply to display devices, but it does abstract the tracking information. I think that, because it is available on numerous operating systems, VRPN does not provide particularly good plug-and-play capabilities. Also, its socket-based connectivity is excellent for tracking devices, which send at most several hundred lightweight messages a second. To extend it to HMDs, several things would need to happen, including:
- Create a descriptor message for HMD capabilities
- Plug and play (which would also be great for the motion tracking)
- Descriptor information about HMDs can be transferred over a socket, but if the abstraction layer does anything graphics-related (in the way OpenGL or DirectX abstract the graphics card), it would need to move away from running over sockets.
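To illustrate why socket transport suits tracking but not graphics, here is a toy tracker report in the spirit of VRPN; the wire layout is invented, not VRPN's actual protocol. A few dozen bytes per report is trivial to stream hundreds of times a second, while per-frame rendered imagery is not.

```python
import struct

# Invented wire format: sensor id (uint16) plus yaw, pitch, roll (float64),
# big-endian with no padding -- 26 bytes per report.
FMT = "!Hddd"

def pack_report(sensor: int, yaw: float, pitch: float, roll: float) -> bytes:
    """Serialize one tracker report for transmission over a socket."""
    return struct.pack(FMT, sensor, yaw, pitch, roll)

def unpack_report(payload: bytes):
    """Deserialize a report on the receiving side."""
    return struct.unpack(FMT, payload)

msg = pack_report(0, 12.5, -3.0, 0.25)
print(len(msg))            # 26 bytes per report
print(unpack_report(msg))  # (0, 12.5, -3.0, 0.25)
```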