Tuesday, September 3, 2013

Overcoming Optical Distortion

In a previous post, we discussed what optical distortion is and why it is important. In this post, we will discuss ways to correct or overcome distortion.
There are four main options:

1. Do nothing and let users live with the distortion. In our experience, geometrical distortion of less than 5-7% is acceptable for mainstream professional applications. One of our competitors in the professional market made a nice living for several years selling an HMD that had close to 15% distortion; enough customers felt that it was good enough and that the HMD had other things going for it, such as high contrast and low power consumption. For gaming, even larger distortion may be acceptable.

2. Improve the optical design. Lower distortion can certainly be a design goal. However, the TANSTAAFL principle holds ("There ain't no such thing as a free lunch", as popularized by Robert Heinlein): to get lower distortion, you typically have to relax other requirements such as weight, material selection, eye relief, number of elements, cost, or transmissivity. Even for a standard eMagin SXGA display, my company has found that different customers seek different sets of requirements, which is why we offer two different OEM modules for this display: one design with lower weight, the other with lower distortion and generally higher performance.

3. Fix it in the GPU. The Graphics Processing Unit (GPU) of a modern graphics card, or the GPU embedded in many ARM chips, can remap the rendered geometry to compensate for the geometrical distortion of the optics, in a process called texture mapping. The upside of this approach is that it does not increase the direct system cost. The downside is that it requires modifying the program generating the content. If the content comes from a source over which you have less control (such as a camera, or a game you previously purchased), you cannot correct for the distortion.

4. Correct it in the goggle electronics. One could construct high-speed electronics (such as these) that perform real-time distortion correction for a known distortion function. This adds cost and complexity to the system, but it works with any content regardless of its source. Done correctly, it need not add significant latency to the video signal.
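Options 3 and 4 boil down to the same per-pixel remap; the difference is where it runs. Here is a minimal sketch in Python, with NumPy standing in for a shader or dedicated hardware. The one-coefficient radial model and the k1 value are illustrative assumptions, not a real lens calibration:

```python
import numpy as np

def distort_uv(u, v, k1=-0.15):
    """The remap as a fragment shader would compute it: given the
    on-screen texture coordinate (u, v) in [0, 1], return where to
    sample the rendered frame. Simple one-coefficient radial model;
    real lenses need a calibrated polynomial."""
    x, y = u - 0.5, v - 0.5              # center on the optical axis
    scale = 1 + k1 * (x * x + y * y)     # radial scaling
    return x * scale + 0.5, y * scale + 0.5

def build_lut(w, h, k1=-0.15):
    """The electronics variant: bake the same remap into a per-pixel
    lookup table once, for a known distortion function."""
    v, u = np.mgrid[0:h, 0:w]
    us, vs = distort_uv((u + 0.5) / w, (v + 0.5) / h, k1)
    xs = np.clip((us * w).astype(np.intp), 0, w - 1)
    ys = np.clip((vs * h).astype(np.intp), 0, h - 1)
    return ys, xs

def correct_frame(frame, lut):
    """Apply the precomputed table to an incoming frame: one gather
    per pixel, so the added latency is small and fixed."""
    ys, xs = lut
    return frame[ys, xs]
```

A GPU would evaluate something like distort_uv per fragment at render time, while correction electronics would store the table from build_lut in memory and apply it to every frame, whatever the source.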

Additional options, often application-dependent, also exist. For instance, we have several customers who use the goggles to present visual stimuli to their subjects. If the stimuli are simple, such as a moving dot on the screen, the program generating them can take the distortion function into account while generating the stimuli, thus correcting for the distortion without help from the GPU.
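For that simple-stimulus case, the generating program only needs to pre-shift each dot by the inverse of the distortion before drawing it. A toy sketch, assuming a first-order radial model (k1 here is a hypothetical correction coefficient, roughly the negation of the lens's own):

```python
def predistort_point(x, y, w, h, k1=0.15):
    """Shift a stimulus point before drawing so that the lens's
    distortion lands it on the intended (x, y). First-order inverse
    of an assumed radial model; a real system would use a measured
    distortion function."""
    xn = (x - w / 2) / (w / 2)   # normalize to [-1, 1]
    yn = (y - h / 2) / (h / 2)
    scale = 1 + k1 * (xn * xn + yn * yn)
    return xn * scale * (w / 2) + w / 2, yn * scale * (h / 2) + h / 2
```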



For additional VR tutorials on this blog, click here.
Expert interviews and tutorials can also be found on the Sensics Insight page here.

2 comments:

Unknown said...

It occurs to me that doing GPU distortion correction by warping the framebuffer (assuming it is the same resolution as the display) will yield a significant loss in definition. The moment a pixel center shifts and is subsequently interpolated over 4 adjacent pixels, the image becomes noticeably blurrier and quite degraded. This could be improved by rendering to a higher-resolution framebuffer, applying the distortion, and then downsampling to the resolution of the final display. It could also be improved by doing the distortion directly in the vertex shader, though this may be very complicated for certain applications.

VRGuy said...

The interpolation could be over 4 adjacent pixels, as you describe. This is called bilinear interpolation. You could also do a bicubic interpolation, which takes 16 adjacent pixels and produces better results, but at the expense of processing power.
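For reference, the four-pixel blend is just two weighted averages; a minimal sketch:

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at a fractional (x, y) by blending the 4
    surrounding pixels, weighted by proximity."""
    x0, y0 = int(x), int(y)                  # top-left neighbor
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0                  # fractional offsets
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```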

I agree that quality can be improved by rendering at a higher resolution, though that requires more memory and more GPU resources.

As we discussed in the post, better optics can reduce the distortion to levels low enough not to need GPU-based distortion correction, but such a solution is often more expensive to produce.

So, as usual, we have a study in tradeoffs. No free lunch. One can decide to throw everything at the GPU and live with "cheap" optics, or invest more money in the optics, or invest in processing electronics. Or you can ask the marketing group to convince the customers that distortion at the edges of the visual field is not that important for a particular application.