Sunday, December 20, 2015

Phone-based VR without the Phone

There are two main VR configurations today: PC-based VR (e.g. OSVR HDK, HTC Vive, etc.) and phone-based VR (e.g. Samsung Gear VR, Google Cardboard, etc.). Each approach carries different advantages: PC-based VR allows using high-power graphics cards; phone-based VR is highly portable and battery-operated.

The rationale behind phone-based VR solutions is two-fold:

  1. If you already have a phone, the incremental investment for a VR experience is low. Gear VR is under $100 now, and Cardboard is much cheaper. For casual VR, popping a phone in a carrier is a convenient solution.
  2. Today's phones have high-resolution screens, integrated cameras, increasingly better motion sensors and many additional capabilities that are very useful for VR. Because they are mass-produced, they are very cost-effective.
However, upon closer investigation, it appears that a user of phone-based VR is paying - both in cost and in weight - for phone components that VR does not truly need. For instance, one could argue that the following components are not truly necessary:
  • Cellular connectivity
  • Phone body. If the phone is permanently integrated into the cradle, one does not need the weight of the body itself.
  • Touch screen
  • and more...
Now that VR is gaining steam, VR devices are going to have modules and components that are designed specifically for VR instead of settling for parts borrowed from other consumer electronics. For instance, we'll see screens made for VR (small size, higher refresh rate, low persistence). Given this, it would not be surprising to start seeing phone vendors provide pre-integrated VR goggles that are essentially a phone without the unnecessary components, coupled with a cradle/head strap. These would be lighter than a separate phone + cradle and probably also less expensive than the combination.

What else would you like to see in 2016?

Monday, December 7, 2015

Complex VR installations and "my word processor only works on Epson printers"

No one likes to buy software that only works on one piece of hardware.

Imagine I sold you this wonderful word processor but that it could print only on Epson printers. You might say "That's ok, I have an Epson printer today and I really like it", but then you'd worry: what happens if I get an HP printer next year? Will I need to buy a new version? Will this wonderful word processor support HP in the future? So maybe you'd be reluctant to buy this word processor from me.

Same in VR.

What if you buy a game and it works only on one HMD? Will you need a new version when you get a new HMD next year? Will the game support that HMD? And what if you buy a shiny new hand sensor next year - will the game work with it?

And what if you are building an experience such as a large-scale public VR project for which you need to take 'best of breed' components from various vendors? Maybe your HMD is from one vendor, your gaming weapon from another and your wide-area tracking system from a third. You'd like to find a way to support all of these, while keeping the ability to upgrade them in the future. What if your wide-area tracking system is on order, or still in development, but you need to develop with another tracking system in the interim?

That's one problem that the OSVR software platform can solve. OSVR supports over 100 devices today, and allows you to mix and match devices as you please. Changing from one device to another is as simple as changing a configuration file, almost like selecting which printer you want to print the document on.
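To illustrate, here is a hedged sketch of what such a configuration swap might look like. The file names and keys below are illustrative rather than the exact OSVR schema; consult the sample configurations that ship with the OSVR server for the real format.

```json
{
    "display": "displays/OSVR_HDK_1_2.json",
    "renderManagerConfig": "sample-configs/renderManager.direct.json"
}
```

Switching to a different HMD would then amount to pointing the display entry at that HMD's display descriptor; the application itself is untouched.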

For instance, we just returned from the I/ITSEC training and simulation show in Orlando. We wanted to show the OSVR HMD (BTW, check out this great video showing multi-user training using the OSVR HDK). We also wanted to show the dSight, a higher-end HMD. These two HMDs are quite different:

  • The OSVR HDK has a single HD1080 input; the dSight has dual inputs.
  • The OSVR HDK has a circular field of view of just over 90 degrees. The dSight has a rectangular field of view of about 120x64 degrees.
  • The two units use different IMUs: an internal IMU (based on a Bosch chip) for the HDK, a YEI tracker for the dSight.
  • They have different distortion functions
  • and so forth
Using OSVR, we just had to change the configuration file of the OSVR server, and could run the same demo without changing a single line of code. In fact, if a new combination of HMD and tracking system came out tomorrow, we would not need to bother the application developer. Just get a plugin or a configuration file for OSVR and run with it.

This concept also works for wide-area systems that have multiple players in them. You might have a centralized tracking system (for instance, from ART), that tracks multiple players and objects. Each OSVR client (e.g. the per-user application) can be configured to receive just the particular players of interest and then use this data to render on the HMD of choice. You might even have different HMDs in the same space.
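A minimal sketch of this per-client filtering idea, in Python with hypothetical path names (this is a conceptual model, not the real OSVR API, which routes data via semantic paths in the configuration):

```python
# A wide-area tracker publishes pose reports for every tracked body;
# each per-user client keeps only the paths it subscribed to, mirroring
# OSVR's path-based routing. Paths and poses below are made up.

def filter_reports(reports, paths_of_interest):
    """Keep only the reports whose semantic path this client subscribed to."""
    return [r for r in reports if r["path"] in paths_of_interest]

reports = [
    {"path": "/tracker/player1/head", "pose": (0.0, 1.7, 0.0)},
    {"path": "/tracker/player2/head", "pose": (2.0, 1.6, 1.0)},
    {"path": "/tracker/prop/weapon1", "pose": (0.3, 1.2, 0.1)},
]

# The client for player 1 sees only its own head pose and its weapon.
mine = filter_reports(reports, {"/tracker/player1/head",
                                "/tracker/prop/weapon1"})
```

Each client then renders only what it cares about, even though the centralized tracker reports everything.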

This flexibility is a win for everyone:
  • Game developers can develop knowing that their games will work on many different hardware platforms.
  • Hardware developers know that as soon as they configure or develop an OSVR plugin, their new hardware will work with pre-existing OSVR-based games.
  • End-users and integrators have the peace of mind knowing that their investments are future-friendly.
Or you can settle for always printing on Epson.

Monday, November 9, 2015

Using VR (and OSVR) in roller coaster rides and other out-of-home applications

Ubisoft Rabbids demo at E3 2015
We've seen a lot of interest in using virtual reality for theme park and other "out of home" applications. To me, this interest is very justified. Virtual reality allows creating new types of experiences. It is much cheaper for the operator of a roller coaster, for instance, to upgrade a ride by having visitors use VR than to build an entirely new track. VR also allows changing the experience from ride to ride, as well as modulating the intensity of the experience.

Similarly, the notion of a VR cafe has also been raised. Just like Internet cafes in the 1990s, when high-speed Internet access was not yet prevalent at home, a VR cafe would allow casual users to experience VR gaming without having to purchase a VR headset and a high-performance computer for home use.

There are several common requirements to all these use cases:

  • VR goggles need to be rugged enough to withstand heavy use, accidental drops or the occasional teenager who tries to take them apart.
  • Nearly-universal fit is needed so that little or no adjustment is required for a good VR experience. Some attractions will have an operator that can offer some assistance, but maintaining the throughput of an attraction is an important requirement.
  • Operators need to be able to quickly clean and sanitize goggles between users. This might involve a disposable face mask, or a way to quickly wipe down the headset.

In addition, some applications also require:

  • Integration of the VR goggles into an existing frame (e.g. racing helmet for a racing simulator)
  • Cable management
  • Integration of other peripherals such as multi-person position tracking

Consumer VR headsets are not immediately suitable for these applications, though they can definitely be used to prototype the initial design. On the other hand, building a completely new HMD and then creating a high-performance rendering infrastructure for it is also an expensive and time-consuming endeavor.

What to do? One option is to use OSVR for either the hardware and/or software portions of this effort.

Because OSVR hardware is open-source and designed to be hacked and changed, it is possible to take existing OSVR components - for instance the display, electronics and optics - and then package them as required to address the particular needs of the attraction.

Similarly, the OSVR software framework provides high-performance rendering and plugins for many game engines across a wide variety of operating systems, HMDs and peripherals. OSVR can also be extended to new types of HMDs if custom hardware is created.

My company can also help with creating semi-custom designs, primarily based on pre-existing building blocks, and with optimizing a software infrastructure to support a particular set of hardware peripherals.

On a personal note, I had a chance to try a pre-production version of the VR ride at Europapark in Germany and it was quite an experience. During my visit, I tried the same ride twice: once with a track being shown in the virtual world (giving some hint of what turn or roll will happen next) and the other without it (making me feel like a pinball in space).

Out-of-home VR is becoming possible and cost effective, and that's an exciting development both for theme park owners as well as for showcasing VR to the public.

I'll be at the IAAPA show later this month in Orlando. If you want to meet and discuss some opportunities, drop me a note.

Monday, October 26, 2015

The Video Processing FPGA inside the OSVR HDK and the Sensics dSight

Now that OSVR hacker developer kits are in the wild, Mark Schramm (@skyworxx) has posted some teardown photos of the 1.2 version of the HDK. He notes the FPGA on the board, and I thought I'd take the opportunity to explain what the FPGA can do in the HDK, and in its higher-end sibling, the dSight.

The FPGA and the DRAM chips next to it can perform real-time video processing, giving hackers the ability to experiment with hardware-level transformations that do not require cooperation from the video-generating device or application.

Some of the options that are currently implemented:
1. Pass-through. This is the simplest mode and it does not involve the DRAMs. 1080x1920 video from the HDMI receiver is essentially passed through to the MIPI interface that drives the OLED screen.

2. Image rotation. This mode allows real-time 90 degree rotation, so that standard landscape mode 1920x1080 video can be presented in the 1080x1920 display. To do this, a full video frame is stored in the DRAM chips while the previous frame is sent to the display. This 90 degree rotation does cost you 1 frame of latency, but can be very useful in some of the following scenarios:

  • Video coming from a 1920x1080 source such as a DVD player or in "replicated desktop" mode.
  • Video coming over a low-latency wireless link. These links primarily support 1920x1080 today and not the native 1080x1920 mode.
3. Conversion of full screen to side-by-side mode. Ever seen a desktop in the HMD and found yourself squinting to see each half at a time? When this conversion mode is enabled, the video signal is converted into two identical copies of the signal that can then be shown to both eyes at the same time. This mode is controlled through a command on the HID feature interface or with a simple utility.
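The 90 degree rotation in mode 2 above can be modeled in a few lines of Python (this is a conceptual model of what the FPGA does, not FPGA logic; dimensions are shrunk for clarity):

```python
# A landscape frame is buffered in DRAM and read back out rotated 90
# degrees clockwise, so 1920x1080 content fits the portrait 1080x1920
# panel. A frame is represented here as a list of pixel rows.

def rotate_90_cw(frame):
    """Rotate a frame (list of rows) 90 degrees clockwise."""
    return [list(col) for col in zip(*frame[::-1])]

# A toy 2-row x 3-column "landscape" frame...
frame = [[1, 2, 3],
         [4, 5, 6]]

# ...becomes a 3-row x 2-column "portrait" frame.
portrait = rotate_90_cw(frame)  # [[4, 1], [5, 2], [6, 3]]
```

Because the whole source frame must be buffered before the rotated read-out can start, this model also shows where the 1-frame latency penalty comes from.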

Additional modes that are not yet implemented but can be implemented by the community:
  • Real-time distortion correction. If you have a non-cooperative video source or just prefer to use your GPU for something else, real-time distortion correction in the FPGA can be useful.
  • Resolution up-scaling: converting from lower resolution into the full resolution of the HDK
  • Color enhancements (e.g. gamma, HSI improvements)
  • Rearranging the video signal. One cool application that we saw from one of our partners is rearranging the 1080x1920 output of a GPU into a non-legible 1920x1080 image, sending that over the low-latency wireless video link, and then using the FPGA to unscramble the image. This allows wireless video transmission without "paying" the 1-frame latency penalty.
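The rearranging trick in the last bullet can be sketched as a round trip (a toy Python model under the assumption that both sides agree on the pixel mapping; here the mapping is a simple reshape of the flat pixel stream, which is illustrative only):

```python
# Any bijective reordering of the pixel stream lets a portrait frame
# travel over a link that only understands landscape timing, as long
# as the receiver applies the inverse map.

def reshape(frame, rows, cols):
    """Reflow a frame's flat pixel stream into a new rows x cols grid."""
    flat = [p for row in frame for p in row]
    assert len(flat) == rows * cols
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

portrait = [[1, 2], [3, 4], [5, 6], [7, 8]]   # stands in for 1080x1920
wire = reshape(portrait, 2, 4)                # "scrambled" 1920x1080-style frame
recovered = reshape(wire, 4, 2)               # receiver inverts the map
```

The scrambled frame is meaningless to look at, but the round trip is lossless, which is the whole point.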
If a manufacturer were very cost-conscious, it probably would not include the FPGA in the design, but for a hacker developer kit, we think it's an excellent exploratory option.

What could you do with it?

Monday, October 5, 2015

Embedded OSVR and what it can do for Phones and Game Consoles

What do you do when you need to add a VR peripheral like an eye tracker or a motion suit to a phone or a game console? Tear your hair out, probably.

There are multiple reasons why this is not easy. Depending on the particular setup, common reasons are:

  • A closed programming environment that prevents you from working with an unknown peripheral without going through a tedious approval process.
  • Not enough processing power to service the peripheral without negatively impacting the performance of the game.
  • No physical connectivity: lack of available USB ports.
  • Lack of device drivers supporting the particular peripheral.
  • No appetite for connecting additional wires to an otherwise portable or wireless system.
The OSVR software framework might be able to help. OSVR has two parts: the "OSVR Server" and the "OSVR Client". The Server processes data as it comes from the various peripherals and sensors and then presents it to the Client in a device-agnostic format. For instance, once an eye tracker interface has been defined, the Server can translate a vendor-specific implementation of eye tracking data (such as for an SMI eye tracker) into universal, device-independent reports. The Client is the part of the game or VR application that uses this data.
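A tiny sketch of the Server's normalization step (hypothetical field names, not the real SMI API or the real OSVR plugin interface):

```python
# Vendor-specific samples are translated into one device-independent
# report format before reaching any Client. The "SMI-style" sample
# shape below is invented for illustration.

def normalize_eye_sample(sample):
    """Translate a vendor-style gaze sample into a generic report."""
    return {
        "interface": "eyetracker",
        "gaze_2d": (sample["por_x"] / sample["screen_w"],
                    sample["por_y"] / sample["screen_h"]),  # normalized 0..1
    }

report = normalize_eye_sample(
    {"por_x": 960, "por_y": 540, "screen_w": 1920, "screen_h": 1080})
# A Client only ever sees the generic report, never the vendor fields.
```

Swapping eye trackers then means swapping the normalization function inside the Server plugin, while every Client keeps reading the same report shape.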

In many cases, the Server and Client will reside on the same CPU (or sometimes even in the same process), and that is the standard deployment for a PC environment. However, the Client and Server can also be connected through a network interface such as WiFi or Bluetooth. In fact, a Client could connect to multiple servers, some residing locally, and some residing elsewhere over a network.

What does this mean? It means that you can connect VR peripherals to one CPU and run the game on another CPU.

For instance:
  • The game (including the OSVR Client) runs on a phone; the peripherals (positional tracking, for example) and the OSVR Server run on a PC. The game receives 60 FPS position updates from the PC via WiFi or Bluetooth. Since the PC does all the heavy lifting, the 60 FPS position updates are very short and low-bandwidth.
  • The game runs on an Xbox; eye tracking runs on an external CPU (such as a super-compact single-board computer from Gumstix or Variscite) which uses the OSVR Server to provide the universal eye tracking interface. By the way, these single-board computers could run OSVR on the operating system of your choice (Linux, Android, Windows Embedded) regardless of which operating system runs the client.
My guess is that we will see additional examples of this over the coming months.

What would you use this for?

Tuesday, September 29, 2015

Advancing VR support for Mac

Given that Oculus/Facebook mothballed support for VR on Mac, the OSVR team at Sensics decided to step up and fill that void.

Once successful, this will provide an active path to continue to work with Oculus (and many other HMDs) on Mac. It will also pave the way to make use of the many capabilities of OSVR including positional and predictive tracking, device-independent integration for cameras, gestures, skeleton, locomotion devices and more.

We are seeking community assistance in making this happen faster. Below are short descriptions of what we know and what we are seeking. Comments, feedback and offers to assist will be most appreciated. 

What we know 

  • OSVR is built to be multi-platform. We have OSVR working on Windows, Android and Linux and have had some reports of success in working on Mac. 
  • We have OSVR building on both Linux (with Clang and GCC) and Windows. All the commits must pass testing on the CI for Linux and Windows. 
  • OSVR supports multiple HMDs including those made by Oculus, OSVR, Sensics and Vuzix. Vive support is coming, as well as better integration with Cardboard. Support includes display parameters and distortion. Asynchronous time warp is supported in the Windows/NVIDIA environment, and we are working to expand this to AMD/Intel as well as other platforms. 
  • OSVR has been integrated into WebVR. A Bugzilla patch has been submitted and reviewed by the Mozilla team. WebVR demos work on top of OSVR-supported HMDs. This means that OSVR work will also help support WebVR on Mac and on other platforms.
  • We know OSVR was successfully built on Mac last December (prior to public release), and all external dependencies have Mac support, so we should have a good idea of the missing pieces.

Status of Facebook/Oculus support in OSVR

Display support for Oculus in OSVR is already entirely independent of the Oculus SDK. It uses distortion parameters from the OSVR Distortionizer, our own distortion shader, and a display descriptor. So, OSVR support for Oculus on Mac comes down to sensor access (and sensor fusion, since in the Oculus case the SDK or runtime does the IMU fusion). 

Currently, the OSVR-Oculus plugin master branch uses a VRPN driver written to access the Rift SDK; it is built against a 0.4.x release and is fully functional and multi-platform. For instance, here is a demo of OSVR running the Rift on Linux. Here is another demo showing the OSVR Palace demo running on Unity over Linux with OSVR/Oculus:

A branch contains a direct OSVR PluginKit driver (no VRPN involved) that builds against a newer Rift SDK, but it still has bugs, so it is unfinished. Anyone looking to handle the Rift on platforms "officially unsupported" by Facebook, including Mac and Linux, will want to develop an OSVR PluginKit driver using an open-source driver stack. 

There are several, though some of them claim a non-copyleft license while incorporating some well-known GPL sensor fusion code, which is problematic. OpenHMD appears intended as a broader effort than just open-source sensor access for the Oculus, but it is BSL 1.0 licensed and includes sensor fusion, so it could serve the purpose. There are almost surely others, and we would be happy to receive referrals.

Contributions/assistance wanted

Starting with the OSVR codebase:
  • Set up homebrew builds for libfunctionality and jsoncpp (if needed) 
  • Potential implementation (libltdl or other) needed in libfunctionality (which is a simple wrapper for loading plugins at runtime) 
  • Set up homebrew build for OSVR-Core (though it won't build fully at first). Known implementation details needed: 
    • usbserialenum - could just use dummy impl right now 
    • plugin search path code (in PluginHost) 
  • Provide input on best open-source library to access Rift sensors 
 Comments, feedback and offers to assist will be most appreciated. Let's do this together!

Monday, September 21, 2015

OSVR Roadmap: Creating an Ecosystem of Interoperable VR Hardware and Software

OSVR (Open Source Virtual Reality) aims to create an open and universal standard for the discovery, configuration, and operation of VR/AR devices. Created by Sensics and Razer, OSVR includes two independent components: 1) the OSVR software platform and 2) the open-source Hacker Development Kit VR headset.

Since the OSVR launch in January this year, nearly 250 organizations including Intel, NVIDIA, Xilinx, Ubisoft, Leap Motion, and many others have joined the OSVR ecosystem. Concurrent with the expansion of the OSVR community, the capabilities of the software platform have grown by leaps and bounds.

The Sensics team architected the OSVR software platform and is its official maintainer. Below, I describe my personal perspective on the road ahead for the OSVR software along several paths: interfaces and devices, game engines, low-latency rendering, operating systems, utilities, and high-level processing.
The OSVR HDK is modular with open-source design plans

The Big Picture

The goal of the OSVR software framework is to make it easy to create compelling, high-performance VR/AR applications that:
  • Work on as many VR/AR displays and peripherals as possible.
  • Support even those devices that were not available at the time the application was created. Just like you don’t need to upgrade your word processor when you buy a new printer, you should not have to upgrade your game when a new HMD becomes available.
  • If desired, can run on a wide range of operating systems and computing platforms.
  • Take advantage of the unique features of individual devices, as opposed to reaching for the ‘lowest common denominator’.
  • Are not locked into a particular device, peripheral, development environment, programming language or app store.
Directory of OSVR projects
The OSVR framework aims to achieve these goals while open-sourcing the vast majority of the software in order to:
  • Encourage participation from the broader VR/AR community.
  • Provide adopters the security and peace of mind that they desire.
  • Accelerate the pace of development by including a wide community of contributors.
  • Allow adopters to customize the platform to their specific needs.

Last, OSVR leverages existing open-source projects (OpenCV, CMake, VRPN) and is designed with a modular, plugin-based architecture so that:
  • Participants can opt to keep modules closed-sourced so as to protect IP.
  • Adopters can choose a small footprint deployment by choosing only the modules they need.
  • Functionality such as support for new devices can be added after the fact.

Interfaces and Devices


In “OSVR speak”, an interface is a pipe for data of a certain type (“interface class”). Devices are said to expose interfaces to the OSVR core, and in turn, to an application. Such interfaces include tracker (provides position, orientation, or full pose), button, analog (axis of a joystick, analog trigger value), eye tracker (gaze direction), gesture, imager (single image or image stream), locomotion (for omnidirectional treadmills), skeleton (bone structure) and display (output device).

A single physical device may provide data in more than one interface class, just like a multi-function printer might look to an operating system as a printer, a scanner, and a fax. For instance, here are some interfaces exposed by popular devices:

Imagine that you are developing a viewer for 360 degree videos and you want to allow users to interact with it using gestures. With OSVR, because you work through a universal, device-independent gesture interface, you can use any device that has an OSVR plugin. In the example above, these devices would be products from Leap Motion, NOD, YEI and Softkinetic. All of these devices would expose gestures to the app in a standardized way. Contrast this approach with the hassle of having to integrate every one of these devices individually. Moreover, when new devices such as the Intel RealSense camera get an OSVR plugin that includes a gesture interface, they immediately work with your app without you having to change a single line of code.
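The gesture example above can be sketched in a few lines of Python. Everything here is hypothetical - the event shapes, the plugin functions, the report format - the point is only the shape of the idea: each plugin maps its vendor format onto one shared report, and the app is written once against that report.

```python
# Two invented vendor plugins normalizing into one gesture report.

def leap_plugin(event):          # hypothetical Leap Motion event shape
    return {"gesture": event["type"], "confidence": event["prob"]}

def nod_plugin(event):           # hypothetical NOD ring event shape
    return {"gesture": event["name"], "confidence": 1.0}

def app_handle(report):
    """The 360-video viewer sees only the standardized report."""
    return "next video" if report["gesture"] == "swipe" else "ignored"

result_a = app_handle(leap_plugin({"type": "swipe", "prob": 0.9}))
result_b = app_handle(nod_plugin({"name": "circle"}))
```

Adding a new gesture device means writing one more small plugin function; `app_handle` never changes.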

Current Status

To date, the OSVR team has primarily focused on creating the various types of interfaces, connecting popular devices that illustrate that these interfaces work, and creating simulation plugins that allow developers to work with simulated data in lieu of using a real device. Aside from native OSVR plugins, OSVR also inherits support for about 100 devices from VRPN, a popular open-source project.

Future Plans

In the coming months, I believe that the OSVR team, OSVR hardware partners, and other OSVR contributors will significantly expand the number of devices supported by OSVR. Specifically, I am aware of plans and work in progress to support Intel Realsense, Tobii eye trackers, NOD devices, Leap Motion camera, HTC Vive and others.

Game Engines


OSVR performs a very important service for both game engines as well as for hardware devices that wish to be supported by these engines.
Figure 1 – Without OSVR: each device needs multiple plugins
The problem is that there are many graphics and game engines (Unity, Unreal, CryEngine, VBS, MonoGame, OpenVR, Unigine, WebVR, etc.) and numerous hardware devices. If each hardware vendor wants to be supported by each engine, a lot of drivers will need to be written and many person-years will be spent in integration and optimization. Moreover, this "many-to-many" connectivity (seen in Figure 1) puts a lot of stress on engine developers, as they need to continuously educate and support hardware developers as well as relentlessly work to keep up with the latest hardware and driver versions.
Figure 2 – With OSVR: a single OSVR plugin makes it easy to support many engines

In contrast, when using OSVR, a game engine needs only a single integration with OSVR to support all OSVR-supported hardware. The OSVR game engine integration provides a harmonized, standardized way to connect to all devices that support the relevant OSVR interfaces: whether tracker, gesture, skeleton, eye tracker or others. The OSVR team and community can work with each engine developer to create an optimized integration, so there is no need for hardware developers to learn and re-learn the intricacies of each engine. If a new hardware device comes on the market, the hardware manufacturer (or an interested member of the OSVR community) can create a device plugin for OSVR, thus automatically achieving connectivity into all OSVR-supported game engines. Because OSVR is open-source and plugin-based, the availability of such a plugin does not depend on the priorities of Sensics or any other company. A developer can simply start from an existing open-source plugin and quickly modify it for the specific API of the new device. The result is illustrated in Figure 2.
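The arithmetic behind Figures 1 and 2 is worth spelling out. With N devices and M engines, direct integration needs roughly N x M drivers, while a hub like OSVR needs only N device plugins plus M engine integrations:

```python
# Integration count: many-to-many versus hub-and-spoke.

def direct_integrations(devices, engines):
    """Every device integrated into every engine separately."""
    return devices * engines

def hub_integrations(devices, engines):
    """One plugin per device plus one integration per engine."""
    return devices + engines

# e.g. 20 devices and 8 engines:
direct = direct_integrations(20, 8)  # 160 separate drivers
hub = hub_integrations(20, 8)        # 28 plugins/integrations
```

The gap widens as the ecosystem grows, which is why the hub model scales and the many-to-many model does not.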

Current Status

OSVR is integrated with Unity, Unreal Engine, and MonoGame.

Future Plans

  • A plugin for OpenVR (Valve) is in beta and should be completed soon. This will allow OSVR-supported displays and devices to be used with OpenVR/SteamVR games, though subject to the limitations of the OpenVR API.
  • An OSVR backend for WebVR in Mozilla Firefox has been submitted to the Mozilla project and we expect it will become part of the Firefox WebVR nightly build very soon.
  • A team of students is working on a Blender Game Engine plugin. As part of this effort, they are creating a Python wrapper for the OSVR Core client functionality, which should allow easy integration into other engines such as WorldViz Vizard.
  • Possible integrations into CryEngine, Bohemia VBS3 and others are being discussed.
Due to conscious design decisions made when developing the client (game/application) API, the availability of multiple language wrappers for the native C OSVR API (C++, .Net and soon Python), and the fact that existing integrations are open-sourced, it is easy to integrate OSVR into additional engines, including internal engines and custom applications.

Low-latency Rendering


The elapsed time from sensing to rendering—sometimes called ‘motion to photon’ latency—has come under intense scrutiny in the effort to create more comfortable and immersive experiences. Latency comes from multiple sources, including: How often do the sensors generate data? How quickly does that data get to the application? Are data points extrapolated into the future to create ‘predictive’ tracking? How quickly can the application render the image through the graphics stack? Can the application make ‘time warping’ corrections at the last instant?

Current Status

OSVR systematically addresses these points as follows:
  • Data rate from sensors: The OSVR HDK provides 100 Hz positional data and 400 Hz “sensor-fused” orientation data.
  • Speed in which data reaches the application: OSVR integrates with ETW (Event Tracing for Windows) which is a powerful tool for latency and performance analysis (See a brief tutorial). ETW helps optimize the full software stack—from the game to the sensors—towards achieving optimal performance.
  • Predictive tracking: OSVR currently includes predictive orientation tracking. When using the OSVR HDK, this is derived from angular velocity reports that are provided at 400Hz to the controlling PC. When using HMDs that do not provide angular velocity, the velocity can be extracted from consecutive yaw/pitch/roll reports. Predictive tracking looks 16 milliseconds into the future and reduces the apparent latency.
  • Direct render: The Sensics/OSVR Render Manager (supported on Windows/NVIDIA platforms and includes a Unity Plugin) provides optimal low-latency rendering on any OSVR-supported device. It includes:
  1. Direct to HMD Mode: Enables an application to treat VR headsets as head-mounted displays that are accessible only to VR applications, bypassing rendering delays typical of Windows displays. Direct to HMD Mode is available for both Direct3D and OpenGL applications.
  2. Front-Buffer Rendering: Renders directly to the front buffer to reduce latency.
  • Time warping: OSVR includes Asynchronous Time Warp as part of the Render Manager. It reduces latency by making just-in-time adjustments to the rendered image based on the latest head orientation after scene rendering but before sending the pixels to the display. It includes texture overfill on all borders for both eyes and supports all translations and rotations, given an approximate depth to apply to objects in the image.
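The predictive tracking bullet above can be illustrated with a first-order extrapolation. The 16 millisecond horizon comes from the text; the single yaw angle and the numbers are illustrative (real trackers predict on quaternions, not a single Euler angle):

```python
# First-order predictive orientation tracking: given the latest yaw and
# its angular velocity (reported at 400 Hz by the HDK, or estimated from
# consecutive reports otherwise), extrapolate ~16 ms into the future.

def predict_yaw(yaw_deg, yaw_velocity_deg_per_s, horizon_s=0.016):
    """Extrapolate yaw linearly over the prediction horizon."""
    return yaw_deg + yaw_velocity_deg_per_s * horizon_s

# Head turning at 200 deg/s, currently at 10 degrees:
predicted = predict_yaw(10.0, 200.0)  # 13.2 degrees
```

Rendering for the predicted pose rather than the last measured one is what makes the displayed image feel like it keeps up with the head.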

Future Plans

The OSVR team is working to expand the range of supported “Render Manager” graphics cards to include AMD and Intel. We are also looking to add Render Manager capabilities on Android and other non-Windows platforms by collaborating with graphics vendors for these platforms.

With regards to the Render Manager itself, in the very near future, the following enhancements will be released:
  • Integrated Distortion Correction: handling the per-color distortion found in some HMDs requires post-rendering distortion. Today, OSVR performs this on the client/application side, but moving distortion correction into the Render Manager provides additional benefits. The same buffer-overfill rendering used in Asynchronous Time Warp will provide additional image regions for rendering.
  • High-Priority Rendering: increasing the priority of the rendering thread associated with the final pixel scan-out ensures that every frame is displayed on time.
  • Time Tracking: indicating to the application what time the future frame will be displayed lets it render the appropriate scene. This also enables the Render Manager to perform predictive tracking when producing the rendering transformations and asynchronous time warp. The system also reports the time taken by previous rendering cycles, informing the application when to simplify the scene to maintain an optimal update rate.
  • Add Render Manager support in additional engines.
  • Use ETW to perform additional engine and plugin optimizations.
  • Design of a plugin-based API for advanced rendering capabilities. This would allow open-source release and public development of a cross-platform Render Manager stack with the ability to load vendor-specific code (which may be closed-source if required by those vendors).

Operating Systems


VR/AR products come in many shapes and forms and reside on several different computing platforms. The obvious examples are PC-based VR, which typically uses Windows, and mobile VR, which typically uses Android phones. The goal of OSVR is to both support a wide range of operating systems as well as provide a consistent programming interface regardless of the particular operating system. In an ideal case, OS-specific details (such as graphics drivers, file system details) as they relate to creating the VR/AR experience are all abstracted by OSVR.

Current Status

OSVR currently supports the following operating systems:
  • Windows, including Windows-specific support for direct rendering.
  • Android, including the ability to use internal sensors, camera and other on-board peripherals. Currently, this uses the CrystaX NDK to build native-code applications. Unity/Android also works on top of OSVR/Android.
  • Linux: The OSVR engine has complete support for Linux and the code is tested on Linux before Windows binaries are released. Unity/Linux should be possible but has not been tested yet.

Future Plans

  • Include Android and possibly Linux binaries in Unity plugin releases.
  • Add OSX support.
  • Add iOS support.
  • Add RenderManager support to other platforms, working with platform vendors and manufacturers of graphics chips as required.
  • Validate correct operation of additional game engines for non-Windows operating systems.
  • Add plugins for OS-specific/platform-specific peripherals and capabilities.

Utilities

OSVR utilities are standalone programs that perform useful functions in support of OSVR development and deployment. True to the OSVR philosophy, OSVR utilities are also open-sourced.

Current Status

The following utilities currently exist:
  • Distortionizer: The distortionizer is an interactive tool that helps determine the RGB optical distortion parameters of a display. Sometimes, these parameters are provided by the optical design team. Other times, they need to be measured. The output of the distortionizer is a set of distortion correction coefficients that automatically feeds into the Render Manager.
  • Latency test: This combines open-source hardware (based on low-cost Arduino components) and open-source software to provide end-to-end latency measurements.
  • Tracker Viewer: a graphical utility to dynamically display position and orientation of several tracked objects.

Future Plans

I am aware of several additional utilities under development:
  • Windows installer for the runtime components of OSVR.
  • Interactive configuration utility to allow configuring eye separation, height and other user-specific parameters.

High-level Processing


High-level processing modules (Analysis Plugins in “OSVR speak”) are software modules that convert data into higher-level information. For instance, a gesture engine plugin can convert a stream of XYZ coordinates into a recognized gesture; an eye tracker plugin can take a live feed from a camera pointed at the eye and provide real-time gaze direction; a sensor-fusion plugin can mesh data from various tracker interfaces into more accurate reports.

Current Status

The OSVR team and community have primarily been focused on building the lower-level infrastructure before adding optional processing layers on top of it. At present, two analysis plugins exist:
  • 1-Euro filter: This is a data smoothing filter that, as currently applied, improves stability of the output from a Razer Hydra.
  • Predictive tracker: Estimates head orientation into the near future as a method to reduce perceived latency.
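For readers unfamiliar with it, the 1-Euro filter is a simple adaptive low-pass filter: the smoothing cutoff rises with the estimated speed of the signal, so slow (jittery) motion is smoothed heavily while fast motion passes through with little lag. A compact Python sketch of the algorithm (parameter values are illustrative; this is not the OSVR plugin's code):

```python
import math

class OneEuroFilter:
    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq              # expected sample rate, Hz
        self.min_cutoff = min_cutoff  # cutoff at zero speed
        self.beta = beta              # how fast cutoff grows with speed
        self.d_cutoff = d_cutoff      # cutoff for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Exponential-smoothing factor for a given cutoff frequency
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Smooth the derivative, then adapt the cutoff to the speed
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

Applied to each axis of a Razer Hydra report, this trades a tunable amount of lag for a much more stable output.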

Future Plans

A unified API for easily developing analysis plugins and allowing their configuration is in progress. Several analysis plugins are under consideration and in various stages of design:
  • Sensor fusion: to combine the output from multiple sensors into more accurate information.
  • Augmented reality: to allow detecting objects in the scene.
  • Eye tracking: to convert pupil camera images into pupil coordinates.

We are working to simplify the process of writing analysis plugins, to provide open-source examples and are very open to community contributions.


OSVR, like VR in general, is a work in progress. I am very pleased with the quality of partners, the breadth of functionality, and the level of community involvement that we have seen in the eight months since launching OSVR. Having said that, there is still plenty of work to be done: whether in supporting new devices, supporting a wider range of engines and development environments, in making devices more useful by creating analysis plugins, or in providing high-performance rendering across multiple platforms. I am excited to have played a role in building OSVR and look forward to the months and years ahead.

Sunday, August 9, 2015

The promise and perils of using Fresnel lenses

1: Cross section of a Fresnel lens
2: Cross section of an equivalent conventional lens
Image source: Wikipedia
The use of Fresnel lenses in the optical systems of VR goggles is not new, but it has attracted additional attention in the past year. What are Fresnel lenses, and what’s good and not-so-good about using them?

Fresnel lenses were invented nearly 200 years ago by Augustin-Jean Fresnel, a French physicist. The idea is ingeniously simple: the degree to which a lens bends a light ray that hits it depends on the material (and hence the index of refraction) from which the lens is made and on the angle of incidence between the light and the surface of the lens. The problem that a Fresnel lens solves is as follows: a classic spherical lens can get very heavy (and expensive) if the curvature and radius are sufficiently large. Since the light bending is essentially determined by the angle at the surface of the lens, could we make a lens that has the same surface curvature at each point of incidence but is not as thick and heavy?
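Snell's law makes the "only the surface matters" point concrete: the bending at a surface depends just on the two indices of refraction and the local angle of incidence, not on the thickness of the glass behind it. A quick sketch (the index of 1.5 is typical for glass and chosen for illustration):

```python
import math

def refraction_angle(theta_incidence_deg, n1=1.0, n2=1.5):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle (degrees) for a ray entering the lens
    material; it depends only on the indices and the incidence angle."""
    t1 = math.radians(theta_incidence_deg)
    s2 = n1 * math.sin(t1) / n2
    return math.degrees(math.asin(s2))
```

A ray hitting typical glass at 30 degrees bends to about 19.5 degrees regardless of how thick the lens is behind that surface, which is exactly why thin segments with the right surface slopes can stand in for a thick lens.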

A Fresnel lens (lens 1 in the figure to the right) achieves this by segmenting the classical lens (lens 2) and bringing together small segments of the right curvature. Notice how pretty much each point of the Fresnel lens has the same curvature as the corresponding point in the classical lens.

The original use for which Mr. Fresnel invented the lens was lighthouses (the tall maritime towers, not the Valve tracking system). Focusing the light from the lighthouse into a beam required a very large lens; using a Fresnel design, this lens could be much thinner, lighter and cheaper than a big chunk of glass. These lenses later also found use as rear-window lenses for minivans (for example, this one) and as large lightweight magnifiers.

Weight, thickness and cost are also important in HMDs, and thus many vendors have experimented with such designs. Sensics, for instance, has rights to a patent that uses Fresnel lenses in wide-field designs such as the piSight shown below.
This model of the Sensics piSight uses Fresnel lenses as part of the optical system
There are two problems with Fresnel lenses. The main problem is what happens when light hits the ridges, those peaks in the lens that do not correspond to actual curvature in the original lens. When light hits these points it is scattered, and scattered light in an optical system reduces contrast. Thus, you will often see that a Fresnel lens produces a more “milky” image with lower contrast. The second problem is a more technical one – it is more difficult to simulate a Fresnel lens in optical design software.

Having said that, these systems can be designed and simulated. Here are three variations of 90+ degree optical systems that were designed by Sensics:

90+ degree design using two aspheres
90+ degree design, 1 Fresnel, 1 asphere
90+ degree design, two Fresnel lenses

The first design (two classical lenses) weighs about 16 g per eye. The third design weighs about 2 g per eye. 

As readers of this blog already heard many times, optical design is a study in tradeoffs. If weight is key, Fresnel may be a great option. If performance is most important, Fresnel lenses might not be the first choice.

For additional VR tutorials on this blog, click here
Expert interviews and tutorials can also be found on the Sensics Insight page here

Monday, August 3, 2015

Using OSVR Software in Professional-Grade Simulators

A few weeks ago, I met with representatives from a large multi-national defense contractor. They are looking for high-performance HMDs for a new simulator and wanted to explore some of the higher-end Sensics products.

After reviewing the HMDs, the conversation turned to software. As is often the case, this company uses a special-purpose engine (as opposed to a game engine like Unity or Unreal). The precise tracking hardware for their simulator is still in flux, and could also change between the development and deployment phase. The HMD hardware could also evolve over time.

How can they mitigate development costs in spite of the changing hardware? The easy answer: use OSVR software.

The OSVR software framework supports many HMDs and dozens and dozens of tracking systems and input devices. These are all presented to the application software under one common software interface. For software that works on top of OSVR, supporting a different hardware configuration is as simple as changing a couple of configuration files. No need to recompile the application. Integrating OSVR into a custom game engine is also fairly simple because OSVR:

  • Includes multiple language bindings
  • Supports asynchronous (callback) as well as synchronous (blocking) function calls
  • Provides open-source examples on how to integrate OSVR into other game engines such as Unity, Unreal, SteamVR and Monogame.
  • Makes it easy to define new devices if required
Because OSVR is built on top of VRPN, a de-facto industry standard for controlling VR devices, it supports a very wide range of common as well as esoteric tracking systems. Being free and open source, you can't beat its price.

We're hoping to deliver HMDs to this exciting simulator, but regardless of what HMD is chosen, I think OSVR software should be strongly considered as a device-independent framework for the simulator.

Sunday, July 19, 2015

Beyond gaming: virtual reality helps people with vision disabilities

Over the past two years, Sensics has been working with our customer Visionize and a group of researchers from the Wilmer Eye Institute at Johns Hopkins University on applying the group's combined expertise towards creating a solution to help people with vision disabilities. The Los Angeles Times published a story today about one of the models in this line. It's a good opportunity to describe this Low Vision project in greater detail and shine a light on non-gaming applications of consumer VR.

Prototype of Visionize low-vision system is used to magnify the image of the boy above the monitor. The monitor - used for illustration purposes only - shows a screencast of what the user of the system sees in each eye
Low vision is a common problem. It is estimated that there are about 2.5 million people in the United States - over 0.75% of the population - who suffer from low vision (defined as best corrected visual acuity less than 20/60 in the better-seeing eye). While low vision is typically associated with aging, there are also a large number of kids who are born with vision disabilities or develop them in their early years. Additionally, hundreds of thousands of new patients enter the low-vision population every year.

The impact of low vision ranges from difficulty in reading to difficulties in recognizing people, places and objects. Disease progression can often be controlled, but the existing damage is permanent. Macular Degeneration, a disease that destroys the area of central vision (the fovea), is the most common low-vision pathology. Because the resolution at the fovea is higher than in the rest of the eye, the overall visual acuity is reduced.

Optical or digital magnifiers are popular with the low vision population and can be effective for static activities such as watching TV or reading. However, they are more challenging for use in dynamic activities such as walking:
A magnifier hides part of the text

  • They might be too large or cumbersome to hold
  • If they magnify the entire visual field, the user loses peripheral vision in a significant way. For instance, if a person using a 5x magnifier has a total field of vision of 100 degrees and the magnifier covers their entire visual field, then only 20 degrees of the real world are mapped into those 100 degrees, preventing effective movement.
  • If just part of the image is magnified, there is part of the scene that is completely hidden underneath the magnification, as is illustrated in the diagram on the right
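The second bullet above is simple arithmetic: a magnifier that fills the user's field of view divides the visible slice of the real world by the magnification factor. As a sketch:

```python
def real_world_fov(display_fov_deg, magnification):
    """An Mx magnifier that fills the view maps only display_fov / M
    degrees of the real world into the full display_fov degrees."""
    return display_fov_deg / magnification
```

With a 100-degree field of vision and a 5x magnifier, only 100 / 5 = 20 degrees of the world remain visible, as noted above.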

To address this problem, we developed a non-linear magnification algorithm that magnifies the image at the point of interest but creates a continuous image so that nothing is lost at the edges. In the model covered in the LA Times, a Samsung Gear VR system is used. The on-board camera provides a live view of the environment and the customized algorithms perform real-time 60 FPS enhancement to present a smartly-magnified image to the user. Parameters such as size and amount of magnification of the "bubble" can be easily controlled. In some cases, these depend on the viewing conditions and in others they can be customized to the particular vision disability of the user.
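One way to sketch such a continuous "bubble" mapping (this is an illustrative radial remap, not the actual Visionize algorithm): for each output pixel inside the bubble, compute the radius in the source image to sample from, so that the center is magnified, the rim is compressed to make room, and the mapping is exactly the identity at the bubble edge.

```python
def bubble_source_radius(r_dst, bubble_radius, magnification):
    """Given a pixel at radius r_dst from the bubble center in the output
    image, return the radius in the source image to sample from.

    Near the center, content is magnified by roughly `magnification`;
    near the rim, content is compressed. At r_dst == bubble_radius the
    mapping equals the identity, so the image stays continuous and
    nothing in the scene is hidden.
    """
    if r_dst >= bubble_radius:
        return r_dst  # outside the bubble: image is unmodified
    m, R = magnification, bubble_radius
    # Quadratic blend: slope 1/m at the center (=> m-times magnification),
    # reaching the identity mapping exactly at the bubble edge.
    return r_dst / m + (1.0 - 1.0 / m) * r_dst * r_dst / R
```

Because the mapping is continuous at the edge, the "squeezed" content near the rim is still visible, unlike the magnifier in the diagram that hides part of the scene.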

More advanced models use different types of HMDs and have been tested with multiple cameras and other powerful additions. More about this, as well as additional vision enhancements, will perhaps be covered at a future opportunity.

An illustration of the magnified bubble can be seen in the diagram below:

and a video illustrating how to operate an ATM with the system (as well as other examples) can be seen on the Visionize site:

These days, gaming gets the majority of the press attention for virtual reality, but other applications exist. For us, the ability to work on a product that truly improves the quality of life for people with vision disabilities is heartwarming.

Wednesday, May 13, 2015

What every VR game engine needs

When game engines are used for VR, they have to include many new capabilities: stereo rendering, higher frame rate, distortion correction, latency control and more. But one topic that is often overlooked is that VR game engines also have to deal with a wide variety of VR peripherals, each with their own API.

A non-VR engine primarily interfaces with a game controller, keyboard and mouse. These are well-understood peripherals with standard software interfaces. New types of keyboards, controllers and mice do appear on the market, but they share essentially identical interfaces to pre-existing devices.

In contrast, there is a much greater variety of VR devices: HMDs, motion and position trackers, hand and finger sensors, eye trackers, body suits, locomotion devices, force feedback devices, augmented reality cameras and more. Furthermore, each class of devices does not offer a common interface: working with a Leap Motion sensor is different from working with a SoftKinetic or an Intel RealSense one, even though the capabilities that they provide are similar.

The result? An endless effort to keep up. Consider the diagram below, showing just a small selection of available VR engines and a small subset of VR devices.

VR devices and VR engines face a constant struggle to support each other
Consider a Sixense STEM motion controller. The Sixense team would probably like to make the STEM available to the widest possible range of software engines so that a developer can use the STEM regardless of their engine of choice. The same goes for every other peripheral. Conversely, an Unreal Engine developer wants to support the maximum reasonable number of devices so that the game can reach the maximum number of users. Every permutation of VR device and game engine may have value. For the device vendor, this means a need to build, test, optimize and deploy plugins for various engines. For the engine vendor, there is constant pressure - from both game developers and hardware makers - to offer device support.

That's why every VR game engine needs a middleware abstraction layer like the OSVR SDK. OSVR factors devices into common interfaces - tracker, skeleton, imager, etc. - and then provides a standard device-independent interface to the game engine. Just like desktop scanners offer a TWAIN interface that applications can use regardless of the scanner vendor, OSVR offers optimized skeleton, eye tracker and other VR interfaces that work regardless of the underlying hardware.

VR Middleware like OSVR dramatically simplifies the device and engine support effort

Consider the advantages. With OSVR:
  • A device vendor writes a single interface to OSVR and immediately gains access to high-performance plugins to a wide range of game engines. In the particular case of OSVR, these plugins are free and open-source.
  • Instead of attempting to work with hundreds of vendors and devices, an engine company can work with the OSVR team to create an optimized interface for each particular type of device. As new devices of the same type enter the market, supporting a new device is as easy as writing an OSVR plugin. Engine companies do not need to go through the effort of teaching yet another hardware vendor how to work well with their engine. The effort can be focused on optimizing the OSVR driver.
  • Game developers are spared the effort of selecting particular vendors. By using the OSVR plugin for their favorite game engines, a wide range of devices is supported out of the box. 
  • As new devices enter the market, games do not need to be recompiled and redistributed. In fact, the game might not even know that it now supports a new hardware vendor.
  • OSVR interfaces are defined collectively by industry leaders that understand the power of the abstraction layer. This is done in a transparent and open-source way.
  • The number of engine and device combination increases, while the number of software interfaces to be written, debugged and optimized significantly decreases.
Game engines could still use vendor-specific function calls to activate some special functions of a peripheral, but the need to do so quickly decreases.
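The arithmetic behind the last bullet is worth spelling out. Without a middleware layer, every device/engine pair needs its own integration; with one, each device and each engine needs a single plugin:

```python
def integrations_without_middleware(num_devices, num_engines):
    # Every device must be integrated separately into every engine
    return num_devices * num_engines

def integrations_with_middleware(num_devices, num_engines):
    # One plugin per device plus one connector per engine
    return num_devices + num_engines
```

For example, 20 devices and 10 engines require 200 direct integrations, but only 30 plugins through a middleware layer, and the gap widens as the ecosystem grows.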

Yet another advantage is the ability to add in-line analysis plugins to the data path. For instance, it is often desirable to convert hand and finger movements into higher-level gestures. The gesture engine could be vendor-specific, but that would limit its adoption in games. It could be game-engine specific, but that would limit market adoption of gestures as a user-interface method. Or, it could be an OSVR plugin that works on top of all existing hand and finger sensors (using their standardized interface) and provides a standard gesture interface to many game engines. Furthermore, this gesture engine can be written by a company or group that is expert in gestures, instead of forcing a hardware vendor or a game engine to master yet another technology.

Whether you make engines, hardware or smart analysis software, using VR middleware such as the OSVR API deserves serious consideration.

Saturday, April 25, 2015

Open Source Augmented Reality?

Illustration from Meta SpaceGlasses
Can the OSVR project be used not just for virtual reality but also for augmented reality? Absolutely! Here's why:

OSVR includes two independent parts: an open-source HMD (the "Hacker Development Kit") and the OSVR framework, a free and open-source software platform that provides an easy and standard way to discover, configure and operate a wide range of virtual reality and augmented reality peripherals.

Examples of such peripherals could be head trackers, hand and finger sensors (like Leap Motion and SoftKinetic), gesture control devices (such as the Myo armband and the Nod ring), cameras, eye trackers and many others. It turns out that most of these devices are used not only in virtual reality applications but also in augmented reality applications and so the services that the OSVR framework provides are just as useful for AR as they are for VR.

What are these services that OSVR provides? There are many different services, but some of them are:

  • Discovery. Determine what peripherals and devices are connected to the system
  • Configuration and autoconfiguration. Configure each peripheral and the interconnection between them. 
  • Operation. Extract data and events from the peripherals and pass it on to the application in a standardized way. Support both a state (synchronous) and an event (asynchronous) programming model or any combination thereof.
  • Store and restore configuration to/from a file or the cloud. 
  • Provide user parameters such as eye separation.
  • Provide optimized connectors to popular engines such as Unity and Unreal.
  • Support of multiple operating systems and hardware platforms.
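The state (synchronous) and event (asynchronous) models in the "Operation" bullet can be sketched as follows. This is an illustrative Python mock of the pattern; the class and method names are invented for this sketch and are not the actual OSVR API:

```python
class TrackerInterface:
    """Toy model of a device interface that supports both access styles."""

    def __init__(self):
        self._state = (0.0, 0.0, 0.0)  # last known position, e.g. XYZ
        self._callbacks = []

    def register_callback(self, fn):
        # Asynchronous (event) model: fn is invoked on every new report
        self._callbacks.append(fn)

    def get_state(self):
        # Synchronous (state) model: poll the latest value at any time
        return self._state

    def _report(self, xyz):
        # Called by the device plugin when the hardware produces data;
        # both models are fed from the same report.
        self._state = xyz
        for fn in self._callbacks:
            fn(xyz)
```

An application can mix both styles freely: poll `get_state()` once per frame for rendering while a registered callback logs or analyzes every report.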
For instance, if you are developing an application that uses the Leap Motion sensor but also want users to operate it with the SoftKinetic camera, you have two options:

Option one: do it yourself. Learn the individual APIs of each of these cameras and create multiple pieces of code to support them. Then, as these APIs evolve over time, you have to continuously upgrade your application. After a while, you want to use yet a third type of camera, say the Intel RealSense, so now you need to add yet another branch of code. Of course, each of these devices may report data differently. They may use different coordinate systems or different units. Your application needs to handle all these configurations.

Option two: use OSVR. You need to learn just one API, which abstracts the various types of cameras. As new cameras are added, the hardware vendor (or the OSVR community) is likely to quickly add a new plugin to OSVR, meaning that your application works the same way with just a download of a new OSVR plugin. If the community does not work fast enough, OSVR is open-source and well documented, so you can write a plugin yourself. Coordinate systems and units are consistent. You are not strongly dependent on one particular piece of hardware.
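To illustrate the coordinate-system and unit differences mentioned in option one, here is a toy normalization layer. Both "SDKs" and their conventions are invented for this sketch; this is the kind of per-device adapter an OSVR plugin hides from the application by always reporting in one canonical form (say, meters, with a fixed handedness):

```python
def from_sdk_a(x_mm, y_mm, z_mm):
    """Hypothetical SDK A reports millimeters in the canonical axes:
    only a unit conversion is needed."""
    return (x_mm / 1000.0, y_mm / 1000.0, z_mm / 1000.0)

def from_sdk_b(x_m, y_m, z_m):
    """Hypothetical SDK B reports meters but with Z pointing the opposite
    way: only an axis flip is needed."""
    return (x_m, y_m, -z_m)
```

With the adapters living inside the device plugins, the application sees identical reports no matter which camera produced them.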

OSVR also provides a growing list of analysis plugins that latch onto the output of device plugins and provide high-value processing, such as a gesture engine (converting motion into gestures), data smoothing, target recognition and more.

Next time you think about your AR software architecture, think OSVR.

Monday, March 23, 2015


Arles (image source: Wikipedia)
I'm heading to the city of Arles in the south of France (someone had to go, right?) to participate in the IEEE VR conference and exhibition. This would probably be my sixth or seventh IEEE VR, but this time the feeling and the goals are different.

What's unique about IEEE VR is that it is first and foremost an academic conference, not a VR exhibition. Hundreds and hundreds of researchers (many of whom are Sensics customers) come to share, learn and discuss their research, and to experience cutting-edge demos that are not yet mature enough to show up at GDC or CES. Because of the renewed interest in VR, I'm sure there will be quite a few corporate visitors who were missing from previous years and wish to pick up trends, technologies and partners.

I am chairing a panel discussion on the resurgence of open-source VR. My co-panelists (Sebastien Kuntz from MiddleVR, Geoffrey Subileau from Dassault Systemes and Bill Sherman from the Desert Research Institute) will seek to answer several questions, including:
  • What’s new (relative to 1-2 years ago) in open-source and closed-source VR software?
  • When should I use closed-source and when should I use open-source?
  • Should I contribute to open-source projects, and if so, why?
  • What’s missing in current open-source VR?
  • Is there an opportunity to combine open-source and closed-source frameworks?
and of course we will take questions from the audience.

Later in the conference, Dr. Ryan Pavlik and I will be presenting a technical overview of OSVR, targeting both industry and academia.

When not in session, we will be demonstrating the OSVR HDK and the Sensics dSight at the exhibit area.

But what I am most interested in doing at the conference is listening. I want to hear about all the great research that is out there. I want to have in-depth conversations with people who might want to become OSVR partners, whether it is to hack the OSVR HDK, write a smart software plugin for the software platform or create some new kind of VR experience.

If you are going, look me up! If you're not, stay tuned on these pages for what I found at the show.

Monday, March 9, 2015

A brief overview of the OSVR source-code repositories

The OSVR team opened up most of the source code repositories to the public this weekend, and a few additional repositories will be opened in the coming days. Because Sensics is a founding contributor to this open-source VR project, many have asked us for a brief overview of the project.

The best place to start is the 'Introduction to OSVR' whitepaper; if you haven't read it, you might want to do so.

There are several github projects under the /osvr organization. They are as follows:

Key projects:

  • OSVR-Core : this is the heart of the project. The OSVR_server executable connects the game to the OSVR hardware and software components.

Utilities:

  • OSVR-Tracker-Viewer is a utility that graphically shows the position and orientation of the head and hand controllers. It is also an OSVR client, and thus an example of how to connect to and extract data from the server.
  • Distortionizer: a utility to estimate distortion correction parameters for various HMDs, and a shader to implement the parameters estimated by the Distortionizer. OSVR has JSON descriptor files for HMDs (and many other objects), and the distortion parameters are part of that JSON file.

Game engine plugins:

  • OSVR-Unity includes a prefab component that can be imported into Unity. 
  • OSVR-Unreal (to be released later this week) is an Unreal Engine plugin

Development tools:

  • OSVR-Boxstarter is a Boxstarter install that helps quickly set up a development environment on a Windows machine
  • OSVR-JSON-Editor is the source code for a tool (deployed version here) that helps create and edit the JSON descriptor files
  • OSVR-JSON-Schemas is a repository for such JSON files

Device plugins:

  • OSVR-Oculus-Rift provides a plugin that allows using the position and orientation data of an Oculus device inside OSVR. 
  • OSVR-Vuzix (to be released later this week) does the same for Vuzix headsets
Additional projects are coming. There is also a wiki page. Issues are currently tracked as part of the Github pages of the projects and we are looking to add an open-source project management tool.

OSVR (licensed under Apache 2.0 license) aims to create a multi-platform framework to detect, configure and operate a wide range of VR devices as well as to allow smart plugins that turn data into useful information. This is a big undertaking and there is a lot more work to be done to fulfill that vision, so we are looking for all the help we can get. I am super encouraged by the support and feedback we are getting from developers all over the world that believe in the open-source concept, and want to make their contributions towards moving VR forward. 

Let me know what you think. What's missing, what your priorities are and how we can get you involved. Welcome to OSVR.

Saturday, February 28, 2015

GDC, OSVR and the Power of the Network

I'll be at the Game Developers Conference next week to talk OSVR with anyone interested in the open-source virtual reality movement. You are likely to find me at the Razer OSVR exhibit (booth 602, South Hall) but you can also contact me to set up a meeting.

Towards the end of the week we will open up the OSVR software source code to everyone on Github on the /osvr organization. We can't wait to help the VR community - a community of innovators, of dreamers, of artists and artisans - dive into OSVR and, together, make it even better.

The power of OSVR is the power of the network. If only two people in the world had an email address, email would be pretty useless. Now that "everyone" has an email address, it is invaluable. The more companies that connect their hardware into the OSVR software framework, the more compelling it will be for game developers to create games on top of OSVR. The more games are created, the more valuable it is to add hardware to it. The value of the framework grows exponentially with the number of products that connect to it.

That's why we are working to make plugging into the OSVR software framework as easy as possible. We do this through example programs, through developer assistance, and through white papers that show how to convert a VR Unity application to Unity over OSVR.

We announced OSVR in January. As I write this, there are about a dozen companies making HMDs that support OSVR. In our experience, it takes between a few hours and a few days to fully integrate an HMD into OSVR. Once integrated, all software that is correctly written with the OSVR framework works on such an HMD. If it's that easy, why wait?

BTW, it seems that there will be new HMDs announced at GDC, and we look forward to supporting them as well. Our goal is that OSVR application developers can truly use their code with pretty much any hardware on the market.

We see multiple eye tracking vendors, multiple 3D audio vendors, multiple hand and finger tracking solutions, all coming into the OSVR network. We've added about 50 partners this year. The more, the merrier, because we want to give consumers the choice of selecting the best hardware and software combination for their experience. There are numerous Android devices out there - different screen sizes, wide range of cameras, wide range of CPUs, an endless selection of unique features. Why should VR be different? Why would you want a 'one size fits all' solution?

Of course, if your type of device has not yet been integrated into OSVR, it's going to be a bit more work, as we would need to work together to define the interface - and define it in such a way that it is not unique to your device but rather covers a wide range of similar devices. If you made the first printer that integrated into Windows, you had to work a little harder, but once a printing service layer was defined for Windows, it was easy to connect other printers. The OSVR team wants to help in defining the interface and will do everything we can from the OSVR side to make such integration straightforward.

Have a new device or software package that you want to integrate into OSVR? Come talk to us.

The power of OSVR is the power of the community. How many people would we need to hire if we wanted to do OSVR completely in-house and get all the features that are truly desired by the community? 10 people? 100 people? 1000 people? Even if we had 1000 employees working on this, would we have features out quickly enough to satisfy everyone's different set of priorities? Probably not.

That's why OSVR is open. Want to connect to a custom sensor development board that OSVR does not support at the moment? Download the example on how to write a tracker plug-in and do it yourself. Want to support a game engine that is not currently supported? Follow the example of a community member that connected OSVR to a new game engine. Think you have a good 'time warp' rendering algorithm or a new SLAM algorithm or can improve system performance? Show us (and the world) how good you are!

We'll be putting up a Wiki page with some suggestions for new features and improvements, but let's drive the platform forward together.

There is power in being open. Aside from open-sourcing the OSVR hardware and software, we are building upon other open-source projects so that OSVR advances as they advance:

  • OSVR builds upon some aspects of VRPN (see the list of supported devices on the main VRPN page), so that we can leverage many of the existing device drivers. As an aside, OSVR adds a descriptor file to each device so that the capabilities of the device are well-understood by the OSVR application, something that was missing from the original VRPN implementation.
  • The OSVR imager class builds upon OpenCV which provides an incredible amount of computer vision and image processing algorithms, some highly optimized for particular GPUs.
  • OSVR uses other libraries such as Eigen and Boost, so that we don't have to reinvent the wheel.

See you at GDC. Come see our demos, but more importantly, come talk to us about how we can work together. See you in San Francisco!

The post "GDC, OSVR and the Power of the Network" first appeared on the VRguy's blog at 

Sunday, February 22, 2015

Connecting a Smartphone to your VR Headset: How and Why?

If you have the right combination of a smartphone, an adapter and a VR headset, you can use the smartphone to drive the headset. Let's look at how it's done and why you might want to do it.

How it's done
Many phones have the ability to output a copy of their screen to an external monitor. Two prevalent standards for doing so are MHL (Mobile High-Definition Link) and SlimPort. Both provide a copy of the phone display on an external HDMI connector. For the purpose of driving a VR headset, both should be fine, and the choice depends on which method your phone supports. For example, for an LG G3 phone, this SlimPort adapter worked well for us.

Now that you have this in HDMI format, typically at 1920x1080, you'll want a VR headset that supports this resolution. This is a bit more tricky than it sounds because several 1080p headsets expect 1080x1920 portrait mode video format and thus are incompatible with the smartphone output. However, headsets such as the OSVR HDK and the professional Sensics dSight and zSight 1920 can accept standard 1080p landscape-mode video and display the phone output right away.
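As a toy illustration of this compatibility check, a few lines of Python (the mode values come from the text, but the mapping itself and the headset labels are illustrative assumptions, not a real database):

```python
# Illustrative native input modes as (width, height). The values are from
# the text; the dictionary itself is a hypothetical example.
HEADSET_INPUT = {
    "OSVR HDK": (1920, 1080),           # accepts landscape 1080p
    "Sensics dSight": (1920, 1080),
    "portrait-panel HMD": (1080, 1920)  # expects portrait-mode video
}

PHONE_OUT = (1920, 1080)  # typical MHL/SlimPort mirrored output

for name, native in HEADSET_INPUT.items():
    verdict = "works directly" if native == PHONE_OUT else "needs rotation"
    print(f"{name}: {verdict}")
```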

This approach is not limited to phones. Tablets such as the Google Nexus 7 also feature a SlimPort output.

Why you might want to do it
Obviously, phones and tablets have access to content that would be interesting to watch using a VR headset. Popular video services, for instance, have plenty of side-by-side movies that are suitable for a VR headset.

Headsets that accept standard 1920x1080 video typically have on-board video processing. This can also be used to turn a regular image into a side-by-side image by replicating it across both sides of the display, which is very useful for experiencing non-3D content or standard applications.
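The replication trick is simple enough to sketch in a few lines of NumPy. This is a software stand-in for what the headset's on-board video processor does in hardware:

```python
import numpy as np

def make_side_by_side(frame):
    """Replicate one landscape frame across both eye views.

    frame: H x W x 3 uint8 array, e.g. 1080 x 1920. Each eye gets a
    horizontally downscaled copy in its half of the display.
    """
    half = frame[:, ::2, :]                       # naive 2x horizontal downscale
    return np.concatenate([half, half], axis=1)   # left eye | right eye

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
sbs = make_side_by_side(frame)
print(sbs.shape)  # same panel size as the input, now split left/right
```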

One could also imagine a game being run on the phone and the headset being used as a display device. A Bluetooth motion tracker could be installed on the headset and communicate with the phone, or if you have the appropriate software installed on the phone (such as the OSVR software framework), you could communicate with the headset's USB tracker.

The alternative, of course, is to wear the phone on your head with the appropriate adapter - whether the Samsung Gear VR, Google Cardboard, Zeiss VR One, or another.

What are the advantages of wearing the phone on your head?
  • No cables required.
  • No need to purchase SlimPort or MHL adapter.
  • Can use phone camera within application.
What are the advantages of using the MHL or SlimPort method?
  • Can be used with any compatible phone or tablet
  • Reduces weight - no need to wear the battery, the phone case or other unnecessary components on the head. This is especially valuable with a tablet.
  • Allows using the phone's touch screen.
  • With the right headset, can also experience non-3D content or regular applications.
  • Can quickly connect and disconnect if required.
It's worth a try, in my opinion.

Saturday, February 7, 2015

Building Wireless VR Goggles

Wireless is good. Most people prefer cordless or mobile phones over corded ones; a wireless network connection over a wired one; wireless game controllers; wireless speakers; wireless charging. You get the point.

Would a wireless goggle - one that does not require cables to connect to a PC - be desirable? I certainly think so. With a wireless goggle, you could:
  • Use the goggle in the living room even if the computer is on your desk
  • Have multiple people use multiple goggles in the same space - such as an arcade - without tripping over each other's wires
  • Avoid the risk of wrapping the goggle's umbilical cord around you as you turn
  • Have greater freedom of movement

While wireless solutions already exist for professional-grade goggles (see this 2012 video of the Sensics zSight), let's examine what it would take to do this for a consumer-grade product such as the OSVR HDK (Hacker Developer Kit).

For a wireless solution, one would need to consider three key components: video transmission (how you get the video to the goggles), wireless tracking (how the PC gets information about where the user is) and power.

Video Transmission

Not all wireless video solutions are created the same, because not all aim to solve the same problem. Here is what's important to look for:
  • Low latency. Streaming a movie or a sports event to your TV does not require tight control over latency. If the movie is shown with a 1-second delay, that is perfectly acceptable. Streaming a game to a goggle with a 1-second delay is completely unacceptable. Sometimes, the need to control latency also dictates whether compression can be used. Compressed video saves on bandwidth but takes time to compress at the transmitter and decompress at the receiver.
  • No line-of-sight requirement. Some wireless video solutions require that the transmitter can "see" the receiver, a requirement typically called 'line of sight'. Those solutions typically don't work for VR goggles, because the whole premise of wireless goggles is to allow the user to move around as well as turn. Such turning will inevitably break the line of sight from transmitter to receiver and thus drop the connection. Other use cases, such as having the PC in one room and using the goggles in another, probably don't have line of sight to begin with. This requirement that the wireless signal go through objects or walls also influences the wireless transmission frequency. Higher-frequency transmission (e.g. 60 GHz) means a shorter wavelength, which in turn has greater difficulty passing through walls or humans. Lower-frequency bands such as 5 GHz do a better job of penetrating obstacles and are thus more suitable for wireless goggles.
  • Support for the right resolution. Wireless video solutions were originally designed with home entertainment in mind, and thus focus on supporting standard home video resolutions of 1080p (1920x1080) or 720p (1280x720). However, many consumer goggles use smartphone displays that have a native resolution of 1080x1920 - portrait mode - as opposed to the traditional 1920x1080 landscape mode, and many wireless video links do not support 1080x1920. One solution, now available as an option with the OSVR HDK, is to use an on-goggle FPGA to perform the video rotation, so that the wireless link still carries the standard 1920x1080 resolution but it is rotated in real time to 1080x1920. This enables using wireless video links with such products (note: see the 'no free lunch' section below).
  • Ability to support multiple independent video links. While this is not required if there is just one video link from the PC to the goggles, it might be required if: 1) there are multiple goggles in the same space and they each want to operate independently; 2) there is a need to carry video, such as from a video camera, from the goggles back to the PC (see this wireless camera); or 3) the goggles require multiple 1080p signals, such as goggles that have dual screens.
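The rotation step described above is easy to sketch in NumPy. The real implementation runs on an FPGA; this is just a software illustration of the operation and of where the one-frame latency penalty comes from:

```python
import numpy as np

# One landscape frame as carried by the wireless link (rows x cols x RGB).
landscape = np.zeros((1080, 1920, 3), dtype=np.uint8)

# Rotate 90 degrees so a portrait-native panel gets the scanlines it expects.
portrait = np.rot90(landscape)

# The rotation needs the whole frame buffered before output can start,
# which is where the roughly one-frame latency at 60 FPS comes from.
frame_latency_ms = 1000 / 60
print(portrait.shape, round(frame_latency_ms, 1))  # (1920, 1080, 3) 16.7
```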

Wireless Tracking
Goggles are interactive, and thus very often require head orientation (and sometimes head position) tracking to be sent back to the controlling application. Cutting the video cable is therefore not enough; one also needs a solution for wireless tracking.

Some wireless transmission technologies - such as WHDI - support an in-band downstream data link. If the transmission of video is considered 'upstream' - from video transmitter to video receiver - a downstream data link sends information from the video receiver (where the goggles are) back to the video transmitter (where the PC is). These data links can often carry USB HID (Human Interface Device) messages, such as those coming from game controllers, and provide a reasonable bandwidth of about 100 kbps. If on-board orientation tracking is formatted into a compatible message, head tracking can be carried to the PC on the same link as the video. This is how tracking is performed in the 2012 zSight video above. The downside of this approach is the update rate: assuming 60 FPS video, WHDI transmits the downstream data during the blanking periods between frames. Thus, USB HID transmission is also limited to 60 Hz, which is sometimes not interactive enough.
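A quick back-of-the-envelope calculation using the figures above (60 FPS video, roughly 100 kbps downstream) shows both the update interval and the per-frame payload budget:

```python
FPS = 60         # one HID packet per video frame (sent during blanking)
LINK_KBPS = 100  # approximate downstream bandwidth from the text

interval_ms = 1000 / FPS                      # time between tracker updates
bytes_per_frame = LINK_KBPS * 1000 / 8 / FPS  # payload budget per frame

print(f"{interval_ms:.1f} ms per update, ~{bytes_per_frame:.0f} bytes/frame")
```

So each tracker sample arrives roughly every 16.7 ms, with around 200 bytes available per frame - plenty of room for an orientation message, but a hard ceiling on update rate.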

Another method is to use or embed a wireless tracker such as the YEI 3-space sensor. Adding an out-of-band data link (not on the same link as the video transmission) to a tracker can provide high-rate tracking information without needing a cable.

Other methods are to use trackers that have some kind of base station, such as the Sixense STEM. In this case, the tracker base is connected to the PC. An advantage of this approach is that beyond the head, other parts of the body can also be tracked, as is also the case with the PrioVR suit.

Power
Cutting the cord also means providing local power to the goggles as well as to other local consumers of energy: the wireless receiver, the wireless tracker, an on-board camera where applicable, and so forth. Including the goggle and wireless receiver, one could expect a power draw of 5-10 Watts (1-2 Amps at +5V). This can typically be satisfied with one or two high-current batteries such as those marketed for charging phones and tablets. These batteries are rated in mAh (milliampere-hours) or Ah (ampere-hours). One ampere-hour means that the battery can theoretically provide 1 ampere of current for an hour. For instance, a 10,000 mAh battery that outputs +5V can theoretically provide 10 Watts (2 Amps x 5V) for 5 hours. A typical AA battery provides 2-3 Ah, a typical external battery for smartphones provides about 10 Ah, and a typical car battery provides about 50 Ah (though it would be quite heavy). Because of many factors - such as the drop in voltage as the battery drains - these figures are somewhat optimistic; in practice, one can often plan for 1-2 hours between charges on an external smartphone battery.
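The battery arithmetic above can be sketched in a few lines (the function name is ours; it computes only the theoretical figure, and as the text notes, real-world runtime will be considerably lower):

```python
def ideal_runtime_hours(capacity_mah, volts, load_watts):
    """Theoretical runtime: stored energy (Wh) divided by load (W)."""
    return capacity_mah / 1000.0 * volts / load_watts

# 10,000 mAh external smartphone battery at +5V driving a 10 W load
# (goggle + wireless receiver):
hours = ideal_runtime_hours(10_000, 5.0, 10.0)
print(hours)  # 5.0 - in practice, plan for 1-2 hours between charges
```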

There's no free lunch

What are the downsides of a wireless VR solution?
  • It costs more. One would need to factor in the price of the wireless video link, a potential increase in tracking costs, and the cost of a battery.
  • There may be a price to pay in video latency. If you use an on-board FPGA to rotate the image from 1920x1080 to 1080x1920, you add one frame's worth of latency, or about 16 ms at 60 FPS. This is because you have to store the entire image in memory before you can start outputting the rotated image.
  • You are often limited to 60 FPS video. Because most wireless video solutions were designed for home entertainment, they provide 1080p @ 60 FPS. Newer solutions that aim to provide higher resolutions might be able to address this issue.
  • You have to carry the battery and the receiver. This can be done in a beltpack or small backpack.
  • You have to recharge or change the battery from time to time.

What about using a phone for your HMD?

One way to completely overcome this issue is not to use a PC at all. Phone-based HMDs such as Google Cardboard perform all the processing and display on the local phone, thus eliminating the need for wireless video transmission. However, one can probably safely assume that a powerful PC will always have more computing power, more graphics power and access to a wider range of peripherals than a phone, and thus some PC-based experiences could not be replicated on a phone.

Should you do it?

Having discussed the advantages and disadvantages, should you use a wireless HMD? At the very least, you will want to experience it. Having no cables can be quite liberating and is certainly worth a try. Let me know what you think!

The post "Building Wireless VR Goggles" first appeared on the VRguy's blog at