Monday, November 9, 2015

Using VR (and OSVR) in roller coaster rides and other out-of-home applications

Ubisoft Rabbids demo at E3 2015
We've seen a lot of interest in using virtual reality for theme parks and other "out of home" applications. To me, this interest is very justified. Virtual reality allows creating new types of experiences. It is much cheaper for the operator of a roller coaster, for instance, to upgrade a ride with VR than to build an entirely new track. VR also allows changing the experience from ride to ride and modulating the intensity of the experience.

Similarly, the notion of a VR cafe has also been raised. Just like Internet cafes in the 1990s, when high-speed Internet access was not yet prevalent, a VR cafe would allow casual users to experience VR gaming without having to purchase a VR headset and a high-performance computer for home use.

There are several common requirements to all these use cases:

  • VR goggles need to be rugged enough to withstand heavy use, accidental drops, or the occasional teenager who tries to take them apart.
  • A nearly universal fit is needed so that little or no adjustment is required for a good VR experience. Some attractions will have an operator that can offer assistance, but maintaining the throughput of an attraction is an important requirement.
  • Operators need to be able to quickly clean and sanitize goggles between users. This might mean a disposable face mask, or a way to quickly wipe down the headset.

In addition, some applications also require:

  • Integration of the VR goggles into an existing frame (e.g. racing helmet for a racing simulator)
  • Cable management
  • Integration of other peripherals such as multi-person position tracking

Consumer VR headsets are not immediately suitable for these applications, though they can definitely be used to prototype the initial design. On the other hand, building a completely new HMD and then creating a high-performance rendering infrastructure for it is an expensive and time-consuming endeavor.

What to do? One option is to use OSVR for the hardware portion, the software portion, or both.

Because OSVR hardware is open-source and designed to be hacked and changed, it is possible to take existing OSVR components - for instance the display, electronics and optics - and then package them as required to address the particular needs of the attraction.

Similarly, the OSVR software framework provides high-performance rendering and plugins for many game engines across a wide variety of operating systems, HMDs and peripherals. OSVR can also be extended to new types of HMDs if custom hardware is created.

My company can also help with creating semi-custom designs based primarily on pre-existing building blocks, and with optimizing a software infrastructure to support a particular set of hardware peripherals.

On a personal note, I had a chance to try a pre-production version of the VR ride at Europapark in Germany and it was quite an experience. During my visit, I tried the same ride twice: once with a track being shown in the virtual world (giving some hint of what turn or roll will happen next) and the other without it (making me feel like a pinball in space).

Out-of-home VR is becoming possible and cost effective, and that's an exciting development both for theme park owners as well as for showcasing VR to the public.

I'll be at the IAAPA show later this month in Orlando. If you want to meet and discuss some opportunities, drop me a note.

Monday, October 26, 2015

The Video Processing FPGA inside the OSVR HDK and the Sensics dSight

Now that OSVR hacker developer kits are in the wild, Mark Schramm (@skyworxx) has posted some teardown photos of the 1.2 version of the HDK. He notes the FPGA on the board, and I thought I'd take the opportunity to explain what the FPGA can do in the HDK, and in its high-end brother, the dSight.

The FPGA and the DRAM chips next to it can perform real-time video processing, giving hackers the ability to experiment with hardware-level transformations that do not require cooperation from the video-generating device or application.

Some of the options that are currently implemented:
1. Pass-through. This is the simplest mode and it does not involve the DRAMs. 1080x1920 video from the HDMI receiver is essentially passed through to the MIPI interface that drives the OLED screen.

2. Image rotation. This mode allows real-time 90 degree rotation, so that standard landscape mode 1920x1080 video can be presented in the 1080x1920 display. To do this, a full video frame is stored in the DRAM chips while the previous frame is sent to the display. This 90 degree rotation does cost you 1 frame of latency, but can be very useful in some of the following scenarios:

  • Video coming from a 1920x1080 source such as a DVD player or in "replicated desktop" mode.
  • Video coming over a low-latency wireless link. These links primarily support 1920x1080 today, not the native 1080x1920 mode.

3. Conversion of full screen to side-by-side mode. Ever seen a desktop in the HMD and found yourself squinting to see each half at a time? When this conversion mode is enabled, the video signal is converted into two identical copies, which can then be shown to both eyes at the same time. This mode is controlled via a command on the HID feature interface or with a simple utility.
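To make these transformations concrete, here is a rough software model in Python. This is purely illustrative: the real logic runs in the FPGA on streaming pixels rather than on full frames, and the exact downscaling method used for side-by-side mode is an assumption.

```python
import numpy as np

def rotate_landscape_to_portrait(frame):
    """Mode 2: rotate a 1080x1920 landscape frame 90 degrees so it can
    drive a 1920x1080 portrait-mounted panel. In hardware this costs one
    frame of latency because a full frame is buffered in DRAM first.
    The rotation direction here is an assumption."""
    return np.rot90(frame)

def full_screen_to_side_by_side(frame):
    """Mode 3: convert the signal into two identical copies, one per eye.
    Here we crudely drop every other column to halve the width; the
    actual scaling method in the FPGA may differ."""
    half = frame[:, ::2]            # naive horizontal downscale
    return np.hstack([half, half])  # same image shown to both eyes
```

For example, a 1080x1920 input frame comes out of the rotation as 1920x1080, and out of the side-by-side conversion as two identical 1080x960 halves packed into one 1080x1920 frame.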

Additional modes that are not yet implemented but can be implemented by the community:
  • Real-time distortion correction. If you have a non-cooperative video source or just prefer to use your GPU for something else, real-time distortion correction in the FPGA can be useful.
  • Resolution up-scaling: converting from lower resolution into the full resolution of the HDK
  • Color enhancements (e.g. gamma, HSI improvements)
  • Rearranging the video signal. One cool application that we saw from one of our partners is taking the 1080x1920 output of a GPU, reformatting it into a non-legible 1920x1080 image, sending that over the low-latency wireless video link, and then using the FPGA to unscramble the image. This allows wireless video transmission without "paying" the 1-frame latency penalty.

If a manufacturer were very cost-conscious, they would probably not include the FPGA in the design, but for a hacker developer kit, we think it's an excellent exploratory option.

What could you do with it?

Monday, October 5, 2015

Embedded OSVR and what it can do for Phones and Game Consoles

What do you do when you need to add a VR peripheral like an eye tracker or a motion suit to a phone or a game console? Tear your hair out, probably.

There are multiple reasons why this is not easy. Depending on the particular setup, common reasons are:

  • Closed programming environment that prevents you from working with an unknown peripheral, without going through a tedious approval process.
  • Not enough processing power to service the peripheral without negatively impacting the performance of the game.
  • No physical connectivity: lack of available USB ports.
  • Lack of device drivers supporting the particular peripheral.
  • No appetite for connecting additional wires to an otherwise portable or wireless system.

The OSVR software framework might be able to help. OSVR has two parts: the "OSVR Server" and the "OSVR Client". The Server processes data as it comes from the various peripherals and sensors and then presents it to the Client in a device-agnostic format. For instance, once an eye tracker interface has been defined, the server can translate a vendor-specific implementation of eye tracking data (such as from an SMI eye tracker) into universal, device-independent reports. The Client is the part of the game or VR application that uses this data.
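As a toy sketch of the Server's translation role, consider the following. All names here are hypothetical stand-ins, not the actual OSVR or SMI APIs; the point is only the shape of the translation from vendor units to a device-independent report.

```python
from dataclasses import dataclass

@dataclass
class GazeReport:
    """Universal, device-independent eye tracking report."""
    x: float  # normalized 0..1 horizontal gaze position
    y: float  # normalized 0..1 vertical gaze position

class SMILikeTracker:
    """Stands in for a vendor SDK that reports gaze in screen pixels."""
    def read_raw(self):
        return {"gaze_px": (960, 540), "screen": (1920, 1080)}

def translate(raw):
    """What a server-side device plugin does: vendor units -> universal report."""
    (px, py), (w, h) = raw["gaze_px"], raw["screen"]
    return GazeReport(px / w, py / h)
```

A Client consuming `GazeReport` never needs to know which vendor produced the data.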

In many cases, the Server and Client will reside on the same CPU (or sometimes even in the same process), and that is the standard deployment for a PC environment. However, the Client and Server can also be connected through a network interface such as WiFi or Bluetooth. In fact, a Client could connect to multiple servers, some residing locally, and some residing elsewhere over a network.

What does this mean? It means that you can connect VR peripherals to one CPU and run the game on another CPU.

For instance:
  • Game (including the OSVR Client) runs on a phone. Peripherals (positional tracking, for example), along with the OSVR Server, run on a PC. The game receives 60 FPS updates from the PC via WiFi or Bluetooth. Since the PC does all the heavy lifting, the 60 FPS position updates are small and low-bandwidth.
  • Game runs on an Xbox; eye tracking runs on an external CPU (such as the super-compact single-board computers from Gumstix or Variscite), which uses the OSVR Server to provide the universal eye tracking interface. By the way, these single-board computers could run OSVR using the operating system of your choice (Linux, Android, Windows Embedded), regardless of what operating system runs the client.
My guess is that we will see additional examples of this over the coming months.

What would you use this for?

Tuesday, September 29, 2015

Advancing VR support for Mac

Given that Oculus/Facebook mothballed support for VR on Mac, the OSVR team at Sensics decided to step up and fill that void.

Once successful, this will provide an active path to continue to work with Oculus (and many other HMDs) on Mac. It will also pave the way to make use of the many capabilities of OSVR including positional and predictive tracking, device-independent integration for cameras, gestures, skeleton, locomotion devices and more.

We are seeking community assistance in making this happen faster. Below are short descriptions of what we know and what we are seeking. Comments, feedback and offers to assist will be most appreciated. 

What we know 

  • OSVR is built to be multi-platform. We have OSVR working on Windows, Android and Linux and have had some reports of success in working on Mac. 
  • We have OSVR building on both Linux (with Clang and GCC) and Windows. All the commits must pass testing on the CI for Linux and Windows. 
  • OSVR supports multiple HMDs, including those made by Oculus, OSVR, Sensics, and Vuzix. Vive support is coming, as well as better integration with Cardboard. Support includes display parameters and distortion correction. Asynchronous time warp is supported in the Windows/NVIDIA environment, and we are working to expand this to AMD/Intel as well as to other platforms.
  • OSVR has been integrated into WebVR. A Bugzilla patch has been submitted and reviewed by the Mozilla team. WebVR demos work on top of OSVR-supported HMDs. This means that the OSVR work will also help support WebVR on Mac and on other platforms.
  • We know OSVR was successfully built on Mac last December (prior to public release), and all external dependencies have Mac support, so we should have a good idea of the missing pieces.

Status of Facebook/Oculus support in OSVR

Display support for Oculus in OSVR is already entirely independent of the Oculus SDK. It uses distortion parameters from the OSVR Distortionizer, our own distortion shader, and a display descriptor. So, OSVR support for Oculus on Mac comes down to sensor access (and sensor fusion, since in the Oculus case the SDK or runtime performs the IMU fusion).

Currently, the OSVR-Oculus plugin master branch uses a VRPN driver written to access the Rift SDK. It is built against a 0.4.x release and is fully functional and multi-platform. For instance, here is a demo of OSVR running the Rift on Linux. Here is another demo showing the OSVR Palace demo running on Unity over Linux with OSVR/Oculus:

A branch contains a direct OSVR PluginKit driver (no VRPN involved) that builds against a newer Rift SDK, but it has bugs and is unfinished. Anyone looking to handle the Rift on platforms that are "officially unsupported" by Facebook, including Mac and Linux, will want to develop an OSVR PluginKit driver using an open-source driver stack.

There are several, though some of them claim a non-copyleft license while incorporating some well-known GPL sensor fusion code, which is problematic. OpenHMD appears intended as a broader approach than just open-source sensor access for Oculus, but it is BSL-1.0 licensed and includes sensor fusion, so it could serve the purpose. There are almost surely others, and we would be happy to receive referrals.

Contributions/assistance wanted

Starting with the OSVR codebase:
  • Set up homebrew builds for libfunctionality and jsoncpp (if needed) 
  • Potential implementation (libltdl or other) needed in libfunctionality (which is a simple wrapper for loading plugins at runtime) 
  • Set up homebrew build for OSVR-Core (though it won't build fully at first). Known implementation details needed: 
    • usbserialenum - could just use dummy impl right now 
    • plugin search path code (in PluginHost) 
  • Provide input on best open-source library to access Rift sensors 
 Comments, feedback and offers to assist will be most appreciated. Let's do this together!

Monday, September 21, 2015

OSVR Roadmap: Creating an Ecosystem of Interoperable VR Hardware and Software

OSVR (Open Source Virtual Reality) aims to create an open and universal standard for the discovery, configuration, and operation of VR/AR devices. Created by Sensics and Razer, OSVR includes two independent components: 1) the OSVR software platform and 2) the open-source Hacker Development Kit VR headset.

Since the OSVR launch in January this year, nearly 250 organizations including Intel, NVIDIA, Xilinx, Ubisoft, Leap Motion, and many others have joined the OSVR ecosystem. Concurrent with the expansion of the OSVR community, the capabilities of the software platform have grown by leaps and bounds.

The Sensics team architected the OSVR software platform and is its official maintainer. Below, I describe my personal perspective on the road ahead for the OSVR software along several paths: interfaces and devices, game engines, low-latency rendering, operating systems, utilities, and high-level processing.
The OSVR HDK is modular with open-source design plans

The Big Picture

The goal of the OSVR software framework is to make it easy to create compelling, high-performance VR/AR applications that:
  • Work on as many VR/AR displays and peripherals as possible.
  • Support even those devices that were not available at the time the application was created. Just like you don’t need to upgrade your word processor when you buy a new printer, you should not have to upgrade your game when a new HMD becomes available.
  • If desired, can run on a wide range of operating systems and computing platforms.
  • Take advantage of the unique features of individual devices, as opposed to reaching for the ‘lowest common denominator’.
  • Are not locked into a particular device, peripheral, development environment, programming language or app store.
Directory of OSVR projects
The OSVR framework aims to achieve these goals while open-sourcing the vast majority of the software in order to:
  • Encourage participation from the broader VR/AR community.
  • Provide adopters the security and peace of mind that they desire.
  • Accelerate the pace of development by including a wide community of contributors.
  • Allow adopters to customize the platform to their specific needs.

Last, OSVR leverages existing open-source projects (OpenCV, CMake, VRPN) and is designed with a modular, plugin-based architecture so that:
  • Participants can opt to keep modules closed-sourced so as to protect IP.
  • Adopters can choose a small footprint deployment by choosing only the modules they need.
  • Functionality such as support for new devices can be added after the fact.

Interfaces and Devices


In “OSVR speak”, an interface is a pipe for data of a certain type (“interface class”). Devices are said to expose interfaces to the OSVR core, and in turn, to an application. Such interfaces include tracker (provides position, orientation, or full pose), button, analog (axis of a joystick, analog trigger value), eye tracker (gaze direction), gesture, imager (single image or image stream), locomotion (for omnidirectional treadmills), skeleton (bone structure) and display (output device).

A single physical device may provide data in more than one interface class, just like a multi-function printer might look to an operating system as a printer, a scanner, and a fax. For instance, here are some interfaces exposed by popular devices:

Imagine that you are developing a viewer for 360-degree videos and you want to allow users to interact with it using gestures. With OSVR, because you work through a universal, device-independent gesture interface, you can use any device that has an OSVR plugin. In the above example, these devices would be products from Leap Motion, NOD, YEI and Softkinetic. All these devices would expose gestures to the app in a standardized way. Contrast this approach with the hassle of having to integrate every one of these devices individually. Moreover, when new devices such as the Intel RealSense camera get an OSVR plugin that includes a gesture interface, they immediately work with your app without you having to change a single line of code.
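The pattern can be sketched in a few lines of Python (the names are illustrative, not the OSVR API): the application registers a single callback against a gesture interface, and any number of device plugins can feed that same interface.

```python
class GestureInterface:
    """Device-independent pipe for gesture data, in the spirit of an
    OSVR interface class. Names here are hypothetical."""
    def __init__(self):
        self._callbacks = []

    def register(self, cb):
        """The application's single integration point."""
        self._callbacks.append(cb)

    def report(self, gesture_name):
        """Called by any device plugin that recognizes a gesture."""
        for cb in self._callbacks:
            cb(gesture_name)

gestures = GestureInterface()
seen = []
gestures.register(seen.append)   # app code: one callback, no device knowledge

# Two different "devices" feed the same interface; the app code is unchanged.
gestures.report("swipe_left")    # e.g. from a Leap Motion plugin
gestures.report("pinch")         # e.g. from a RealSense plugin
```

Adding a new device means writing one more plugin that calls `report()`; the application is untouched.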

Current Status

To date, the OSVR team has primarily focused on creating the various types of interfaces, connecting popular devices that demonstrate that these interfaces work, and creating simulation plugins that allow developers to work with simulated data in lieu of a real device. Aside from native OSVR plugins, OSVR also inherits support for about 100 devices from VRPN, a popular open-source project.

Future Plans

In the coming months, I believe that the OSVR team, OSVR hardware partners, and other OSVR contributors will significantly expand the number of devices supported by OSVR. Specifically, I am aware of plans and work in progress to support Intel Realsense, Tobii eye trackers, NOD devices, Leap Motion camera, HTC Vive and others.

Game Engines


OSVR performs a very important service for both game engines as well as for hardware devices that wish to be supported by these engines.
Figure 1 – Without OSVR: each device needs multiple plugins
The problem is that there are many graphics and game engines (Unity, Unreal, CryEngine, VBS, Monogame, OpenVR, Unigine, WebVR, etc.) and numerous hardware devices. If each hardware vendor wants to be supported by each engine, many drivers will need to be written and many person-years will be spent on integration and optimization. Moreover, this "many-to-many" connectivity (seen in Figure 1) puts a lot of stress on engine developers, as they need to continuously educate and support hardware developers as well as relentlessly work to keep up with the latest hardware and driver versions.
Figure 2 – With OSVR: a single OSVR plugin makes it easy to support many engines

In contrast, when using OSVR, a game engine needs just a single integration with OSVR to support all OSVR-supported hardware. The OSVR game engine integration provides a harmonized, standardized way to connect to all devices that support the relevant OSVR interfaces, whether tracker, gesture, skeleton, eye tracker or others. The OSVR team and community can work with each engine developer to create an optimized integration, so there is no need for hardware developers to learn and re-learn the intricacies of each engine. If a new hardware device comes on the market, the hardware manufacturer (or an interested member of the OSVR community) can create a device plugin for OSVR, thus automatically achieving connectivity to all OSVR-supported game engines. Because OSVR is open-source and plugin-based, the availability of such a plugin does not depend on the priorities of Sensics or any other company. A developer can simply start from an existing open-source plugin and quickly modify it for the specific API of the new device. The result is illustrated in Figure 2.

Current Status

OSVR is integrated with Unity, Unreal Engine, and Monogame.

Future Plans

  • A plugin for OpenVR (Valve) is in beta and should be completed soon. This will allow OSVR-supported displays and devices to be used with OpenVR/SteamVR games, though subject to the limitations of the OpenVR API.
  • An OSVR backend for WebVR in Mozilla Firefox has been submitted to the Mozilla project and we expect it will become part of the Firefox WebVR nightly build very soon.
  • A team of students is working on a Blender Game Engine plugin. As part of this effort, they are creating a Python wrapper for the OSVR Core client functionality, which should allow easy integration into other engines such as WorldViz Vizard.
  • Possible integrations into CryEngine, Bohemia VBS3 and others are being discussed.
Thanks to conscious design decisions made when developing the client (game/application) API, the multiple language wrappers available for the native C OSVR API (C++, .Net and soon Python), and the fact that existing integrations are open-sourced, it is easy to integrate OSVR into additional engines, including internal engines and custom applications.

Low-latency Rendering


The elapsed time from sensing to rendering—sometimes called ‘motion to photon’ latency—has been the subject of intense scrutiny in the effort to create more comfortable and immersive experiences. Latency comes from multiple sources: How often do the sensors generate data? How quickly does that data get to the application? Are data points extrapolated into the future to create ‘predictive’ tracking? How quickly can the application render the image through the graphics stack? Can the application make ‘time warping’ corrections at the last instant?
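A back-of-the-envelope budget shows how these sources add up. All the numbers below are illustrative assumptions, not measurements of any particular system.

```python
# Illustrative motion-to-photon budget, in milliseconds.
# Every value here is an assumed example, not a measurement.
budget = {
    "sensor sampling (worst case at 400 Hz)": 1000 / 400,  # 2.5 ms
    "USB transport + OS delivery":            1.0,
    "application + render":                   11.0,
    "scan-out to display":                    5.0,
}
total = sum(budget.values())      # raw motion-to-photon latency

# Predictive tracking can hide part of the total by extrapolating
# head pose into the future (16 ms is a commonly used horizon).
prediction_ms = 16.0
apparent = max(total - prediction_ms, 0.0)
```

With these example numbers, the raw latency is 19.5 ms and prediction reduces the apparent latency to 3.5 ms, which is why each of the techniques below matters.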

Current Status

OSVR systematically addresses these points as follows:
  • Data rate from sensors: The OSVR HDK provides 100 Hz positional data and 400 Hz “sensor-fused” orientation data.
  • Speed in which data reaches the application: OSVR integrates with ETW (Event Tracing for Windows) which is a powerful tool for latency and performance analysis (See a brief tutorial). ETW helps optimize the full software stack—from the game to the sensors—towards achieving optimal performance.
  • Predictive tracking: OSVR currently includes predictive orientation tracking. When using the OSVR HDK, this is derived from angular velocity reports that are provided at 400Hz to the controlling PC. When using HMDs that do not provide angular velocity, the velocity can be extracted from consecutive yaw/pitch/roll reports. Predictive tracking looks 16 milliseconds into the future and reduces the apparent latency.
  • Direct render: The Sensics/OSVR Render Manager (supported on Windows/NVIDIA platforms and includes a Unity Plugin) provides optimal low-latency rendering on any OSVR-supported device. It includes:
  1. Direct to HMD Mode: Enable an application to treat VR Headsets as head mounted displays that are accessible only to VR applications, bypassing rendering delays typical for Windows displays. Direct to HMD Mode is available for both Direct3D and OpenGL applications.
  2. Front-Buffer Rendering: Renders directly to the front buffer to reduce latency.
  • Time warping: OSVR includes Asynchronous Time Warp as part of the Render Manager. It reduces latency by making just-in-time adjustments to the rendered image based on the latest head orientation after scene rendering but before sending the pixels to the display. It includes texture overfill on all borders for both eyes and supports all translations and rotations, given an approximate depth to apply to objects in the image.
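The predictive tracking idea above can be sketched with generic quaternion math. This is the textbook algorithm, not OSVR's actual implementation; quaternions are written (w, x, y, z), and the angular velocity is assumed to be in the body frame in radians per second.

```python
import math

def quat_multiply(a, b):
    """Hamilton product of two (w, x, y, z) quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def predict(orientation, angular_velocity, dt=0.016):
    """Extrapolate orientation dt seconds ahead using the latest
    angular-velocity report (16 ms lookahead by default)."""
    wx, wy, wz = angular_velocity
    speed = math.sqrt(wx*wx + wy*wy + wz*wz)
    if speed < 1e-9:
        return orientation          # not rotating; nothing to predict
    half = 0.5 * speed * dt
    s = math.sin(half) / speed
    delta = (math.cos(half), wx*s, wy*s, wz*s)  # rotation over dt
    return quat_multiply(orientation, delta)    # body-frame update
```

For instance, an identity orientation spinning about z at pi rad/s, predicted 0.5 s ahead, yields a 90-degree rotation about z.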

Future Plans

The OSVR team is working to expand the range of supported “Render Manager” graphics cards to include AMD and Intel. We are also looking to add Render Manager capabilities on Android and other non-Windows platforms by collaborating with graphics vendors for these platforms.

With regards to the Render Manager itself, in the very near future, the following enhancements will be released:
  • Integrated Distortion Correction: handling the per-color distortion found in some HMDs requires post-rendering distortion. Today, OSVR performs this on the client/application side, but moving distortion correction into the Render Manager provides additional benefits. The same buffer-overfill rendering used in Asynchronous Time Warp will provide additional image regions for rendering.
  • High-Priority Rendering: increasing the priority of the rendering thread associated with the final pixel scan-out ensures that every frame is displayed on time.
  • Time Tracking: indicating to the application what time the future frame will be displayed lets it render the appropriate scene. This also enables the Render Manager to apply predictive tracking when producing the rendering transformations and asynchronous time warp. The system also reports the time taken by previous rendering cycles, informing the application when it should simplify the scene to maintain an optimal update rate.
  • Add Render Manager support in additional engines.
  • Use ETW to perform additional engine and plugin optimizations.
  • Design of a plugin-based API for advanced rendering capabilities. This would allow open-source release and public development of a cross-platform Render Manager stack with the ability to load vendor-specific code (which may be closed-source if required by those vendors).

Operating Systems


VR/AR products come in many shapes and forms and reside on several different computing platforms. The obvious examples are PC-based VR, which typically uses Windows, and mobile VR, which typically uses Android phones. The goal of OSVR is to both support a wide range of operating systems as well as provide a consistent programming interface regardless of the particular operating system. In an ideal case, OS-specific details (such as graphics drivers, file system details) as they relate to creating the VR/AR experience are all abstracted by OSVR.

Current Status

OSVR currently supports the following operating systems:
  • Windows, including Windows-specific support for direct rendering.
  • Android, including the ability to use internal sensors, camera and other on-board peripherals. Currently, this uses the CrystaX NDK to build native-code applications. Unity/Android also works on top of OSVR/Android.
  • Linux: The OSVR engine has complete support for Linux and the code is tested on Linux before Windows binaries are released. Unity/Linux should be possible but has not been tested yet.

Future Plans

  • Include Android and possibly Linux binaries in Unity plugin releases.
  • Add OSX support.
  • Add iOS support.
  • Add RenderManager support to other platforms, working with platform vendors and manufacturers of graphics chips as required.
  • Validate correct operation of additional game engines for non-Windows operating systems.
  • Add plugins for OS-specific/platform-specific peripherals and capabilities.



Utilities


OSVR utilities are standalone programs that perform useful functions in support of OSVR development and deployment. True to the OSVR philosophy, OSVR utilities are also open-sourced.

Current Status

The following utilities currently exist:
  • Distortionizer: The distortionizer is an interactive tool that helps determine the RGB optical distortion parameters of a display. Sometimes, these parameters are provided by the optical design team. Other times, they need to be measured. The output of the distortionizer is a set of distortion correction coefficients that automatically feeds into the Render Manager.
  • Latency test: This combines open-source hardware (based on low-cost Arduino components) and open-source software to provide end-to-end latency measurements.
  • Tracker Viewer: a graphical utility to dynamically display position and orientation of several tracked objects.

Future Plans

I am aware of several additional utilities under development:
  • Windows installer for the runtime components of OSVR.
  • Interactive configuration utility to allow configuring eye separation, height and other user-specific parameters.

High-level Processing


High-level processing modules (Analysis Plugins in “OSVR speak”) are software modules that convert data into higher-level information. For instance, a gesture engine plugin can convert a stream of XYZ coordinates into a recognized gesture; an eye tracker plugin can take a live feed from a camera pointed at the eye and provide real-time gaze direction; a sensor-fusion plugin can mesh data from various tracker interfaces into more accurate reports.

Current Status

The OSVR team and community have primarily been focused on building the lower-level infrastructure before adding optional processing layers on top of them. At present, two analysis plugins exist:
  • 1-Euro filter: This is a data smoothing filter that, as currently applied, improves stability of the output from a Razer Hydra.
  • Predictive tracker: Estimates head orientation into the near future as a method to reduce perceived latency.
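For reference, here is a self-contained sketch of the 1-Euro filter (Casiez et al.), the smoothing algorithm behind the first plugin above. The parameter defaults are illustrative, not OSVR's actual tuning.

```python
import math

class OneEuroFilter:
    """1-Euro filter: adaptive low-pass smoothing. The cutoff frequency
    rises with signal speed, so slow motion gets heavy smoothing (less
    jitter) while fast motion passes through (less lag)."""

    def __init__(self, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    @staticmethod
    def _alpha(cutoff, dt):
        # Smoothing factor for an exponential filter at a given cutoff (Hz).
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, x, dt):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # Estimate and smooth the derivative of the signal.
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)
        self.dx_prev = a_d * dx + (1 - a_d) * self.dx_prev
        # Speed-dependent cutoff: faster motion -> higher cutoff -> less lag.
        cutoff = self.min_cutoff + self.beta * abs(self.dx_prev)
        a = self._alpha(cutoff, dt)
        self.x_prev = a * x + (1 - a) * self.x_prev
        return self.x_prev
```

A constant input passes through unchanged, while a step input is eased toward the new value rather than jumping, which is exactly the jitter-reduction behavior wanted from a noisy tracker.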

Future Plans

A unified API for easily developing analysis plugins and allowing their configuration is in progress. Several analysis plugins are under consideration and in various stages of design:
  • Sensor fusion: to combine the output from multiple sensors into more accurate information.
  • Augmented reality: to allow detecting objects in the scene.
  • Eye tracking: to convert pupil camera images into pupil coordinates.

We are working to simplify the process of writing analysis plugins, to provide open-source examples and are very open to community contributions.


OSVR, like VR in general, is a work in progress. I am very pleased with the quality of partners, the breadth of functionality, and the level of community involvement that we have seen in the eight months since launching OSVR. Having said that, there is still plenty of work to be done: whether in supporting new devices, supporting a wider range of engines and development environments, in making devices more useful by creating analysis plugins, or in providing high-performance rendering across multiple platforms. I am excited to have played a role in building OSVR and look forward to the months and years ahead.

Sunday, August 9, 2015

The promise and perils of using Fresnel lenses

1. Cross section of a Fresnel lens
2. Cross section of an equivalent conventional lens
Image source: Wikipedia
The use of Fresnel lenses in the optical systems of VR goggles is not new, but it has attracted additional attention in the past year. What are Fresnel lenses, and what’s good and not-so-good about using them?

Fresnel lenses were invented nearly 200 years ago by Augustin-Jean Fresnel, a French physicist. The idea is ingeniously simple: the degree to which a lens bends a light ray that hits it depends on the material from which the lens is made (and hence its index of refraction) and on the angle of incidence between the light and the surface of the lens. The problem that a Fresnel lens solves is as follows: a classic spherical lens can get very heavy (and expensive) if its curvature and radius are sufficiently large. Since the light bending is essentially determined by the angle at the surface of the lens, could we make a lens that has the same surface curvature at each point of incidence but is not as thick and heavy?
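The "bending is determined at the surface" point is just Snell's law, n1·sin(θ1) = n2·sin(θ2): the refracted angle depends only on the two indices of refraction and the local angle of incidence, which is why a thin lens with the same local surface slopes bends light like a much thicker one. A quick worked example (the acrylic index of 1.49 is a typical assumed value for a molded Fresnel lens):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Angle of the refracted ray, in degrees, for light passing from a
    medium of index n1 into a medium of index n2 (Snell's law)."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n = 1.0) into acrylic (n ~ 1.49): a ray hitting the surface at 30
# degrees is bent toward the normal, to roughly 19.6 degrees.
bent = refraction_angle(1.0, 1.49, 30.0)
```

Since only the surface slope and the material matter, each thin segment of a Fresnel lens can reproduce the bending of the corresponding region of the full lens.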

A Fresnel lens (lens 1 in the figure to the right) achieves this by segmenting the classical lens (lens 2) and bringing together small segments of the right curvature. Notice how pretty much each point of the Fresnel lens has the same curvature as the corresponding point in the classical lens.

The original use for which Mr. Fresnel invented the lens was lighthouses (the tall maritime tower, not the Valve tracking system). Focusing the light from the lighthouse into a beam required a very large lens, and with a Fresnel design, this lens could be much thinner, lighter and cheaper than a big chunk of glass. These lenses later also found use as rear-window lenses for minivans (for example, this one) and as large lightweight magnifiers.

Weight, thickness and cost are also important in HMDs, and thus many vendors have experimented with such designs. Sensics, for instance, has rights to a patent that uses Fresnel lenses in wide-field designs such as the piSight shown below.
This model of the Sensics piSight uses Fresnel lenses as part of the optical system
There are two problems with Fresnel lenses. The main problem is what happens when light hits the ridges, those peaks in the lens that do not correspond to actual curvature in the original lens. When light hits these points it is scattered, and scattered light in an optical system reduces contrast. Thus, you will often see that a Fresnel lens produces a more “milky” image with lower contrast. The second problem is a more technical one: it is more difficult to simulate a Fresnel lens in optical design software.

Having said that, these systems can be designed and simulated. Here are three variations of 90+ degree optical systems that were designed by Sensics:

90+ degree design using two aspheres
90+ degree design, 1 Fresnel, 1 asphere
90+ degree design, two Fresnel lenses

The first design (two classical lenses) weighs about 16 g per eye. The third design weighs about 2 g per eye. 

As readers of this blog already heard many times, optical design is a study in tradeoffs. If weight is key, Fresnel may be a great option. If performance is most important, Fresnel lenses might not be the first choice.

Monday, August 3, 2015

Using OSVR Software in Professional-Grade Simulators

A few weeks ago, I met with representatives from a large multi-national defense contractor. They are looking for high-performance HMDs for a new simulator and wanted to explore some of the higher-end Sensics products.

After reviewing the HMDs, the conversation turned to software. As is often the case, this company uses a special-purpose engine (as opposed to a game engine like Unity or Unreal). The precise tracking hardware for their simulator is still in flux, and could also change between the development and deployment phase. The HMD hardware could also evolve over time.

How can they mitigate development costs in spite of the changing hardware? The easy answer: use OSVR software.

The OSVR software framework supports many HMDs and dozens of tracking systems and input devices. These are all presented to the application software under one common software interface. For software that works on top of OSVR, supporting a different hardware configuration is as simple as changing a couple of configuration files; there is no need to recompile the application. Integrating OSVR into a custom game engine is also fairly simple because OSVR:

  • Includes multiple language bindings
  • Supports asynchronous (callback) as well as synchronous (blocking) function calls
  • Provides open-source examples on how to integrate OSVR into other game engines such as Unity, Unreal, SteamVR and Monogame.
  • Makes it easy to define new devices if required
Because OSVR is built on top of VRPN, a de-facto industry standard for controlling VR devices, it supports a very wide range of common as well as esoteric tracking systems. Being free and open source, you can't beat its price.
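The "swap hardware by editing a config file" idea can be modeled in a few lines. The file keys and driver names below are hypothetical illustrations, not OSVR's actual configuration schema; the point is that the application asks for a semantic path and the configuration, not the code, decides which driver serves it.

```python
import json

# Two illustrative configurations: same semantic path, different drivers.
# Keys and driver names are made up for this sketch.
config_a = json.loads('{"/me/head": "vendorA_tracker"}')
config_b = json.loads('{"/me/head": "vendorB_tracker"}')

def resolve(config, path):
    """What a server does at startup: bind a semantic path to a driver.
    The application only ever refers to the path."""
    return config[path]
```

Swapping the tracking hardware between development and deployment then means shipping a different config file, while the simulator binary that asks for "/me/head" stays untouched.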

We're hoping to deliver HMDs to this exciting simulator, but regardless of what HMD is chosen, I think OSVR software should be strongly considered as a device-independent framework for the simulator.