The short answer is: GPUs can overcome some, but not all aberrations. Let's look deeper into this question.
Optical aberrations are the result of imperfect optical systems. Every optical system is imperfect, though of course some imperfections are more noticeable than others. There are several key types of aberrations in HMD optics which take an image from a screen and pass it through viewing optics:
- Geometric distortion, which we covered in a previous post, causes a square image to appear curved. The most common variants are pincushion distortion and barrel distortion.
- Chromatic (color) aberration. Optical systems bend different colors by different amounts, as can be seen in a rainbow or when light passes through a prism. This results in color breakup: a white dot on the original screen separates into its primary colors when viewed through the optical system.
- Spot size (also referred to as astigmatism), which shows how a tiny dot on the original screen appears through the optical system. Beyond the theoretical limits (diffraction limit), imperfect optical systems cause this tiny dot to appear as a blurred circle or ellipse. In essence, the optical system is unable to perfectly focus each point from the source screen. When the spot size becomes large enough, it blurs the distinction between adjacent pixels and can make viewing the image increasingly difficult.
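To make the spot-size idea concrete, here is a minimal numpy sketch that simulates an optical point-spread function by blurring an image with a Gaussian kernel. The function name and the sigma value standing in for the spot size are illustrative assumptions, not measured optics:

```python
import numpy as np

def blur_with_psf(image, sigma=1.5):
    """Simulate spot size: convolve a grayscale image with a Gaussian
    point-spread function (sigma is a hypothetical blur radius in pixels)."""
    radius = int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    # 2D Gaussian kernel, normalized so total light energy is preserved
    psf = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    psf /= psf.sum()
    h, w = image.shape
    padded = np.pad(image, radius, mode='edge')
    out = np.zeros_like(image, dtype=float)
    # Accumulate shifted copies of the image weighted by the PSF
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += psf[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

Feeding this a single lit pixel shows the effect directly: the point spreads into a blurred disk, and once sigma approaches the pixel pitch, adjacent pixels start to bleed into each other.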
Which of these issues can be corrected by a GPU, assuming no practical limits on processing power?
Geometric distortion can be corrected in most cases. One approach is for the GPU to remap the image generated by the software so that it compensates for the known optical distortion. For instance, if the image through the optical system appears as if the corners of a square are pulled inwards, the GPU would morph that part of the image by pushing those corners outwards. Another approach is to render the image with the distortion taken into account from the start, such as the algorithm covered in this article about an Intel researcher.
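The first approach can be sketched with a one-coefficient radial distortion model. The function, the coefficient k1, and the nearest-neighbor sampling are all simplifying assumptions; a real pipeline would use measured lens calibration data and filtered sampling:

```python
import numpy as np

def predistort(image, k1=-0.15):
    """Pre-warp a grayscale image with a radial model r' = r * (1 + k1 * r^2)
    so that the lens's opposite distortion cancels it. Inverse mapping:
    each output pixel samples the source pixel the lens will move here."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized coordinates centered on the optical axis, roughly [-1, 1]
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    scale = 1 + k1 * r2  # simple one-coefficient radial model
    src_x = np.clip(x * scale * (w / 2) + w / 2, 0, w - 1).astype(int)
    src_y = np.clip(y * scale * (h / 2) + h / 2, 0, h - 1).astype(int)
    return image[src_y, src_x]  # nearest-neighbor lookup
```

With a negative k1, samples near the edge are pulled from closer to the center, so the rendered content is pushed outwards, which is the barrel-shaped pre-warp that cancels a pincushion lens.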
Chromatic aberration may also be addressed, though it is more complex. In theory, the GPU can model not only the generic distortion function of a given optical system but also its color-specific behavior, and remap the color components of each pixel accordingly. This requires understanding not only the optical system but also the primary colors used in a particular display; not all "greens", for instance, are identical.
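A hedged sketch of the idea, assuming lateral chromatic aberration can be approximated by a slightly different radial magnification per color channel. The scale factors below are made up; real values would come from calibrating the specific lens and display primaries:

```python
import numpy as np

def correct_chromatic(image, scale_r=1.01, scale_b=0.99):
    """Pre-scale the red and blue channels of an RGB image radially by
    hypothetical per-color factors, so the lens's wavelength-dependent
    magnification recombines the channels at the eye."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = w / 2, h / 2
    out = image.copy()
    # Green is left untouched as the reference wavelength
    for ch, s in ((0, scale_r), (2, scale_b)):
        sx = np.clip((xs - cx) * s + cx, 0, w - 1).astype(int)
        sy = np.clip((ys - cy) * s + cy, 0, h - 1).astype(int)
        out[..., ch] = image[sy, sx, ch]
    return out
```

This is the same remapping trick as the geometric case, just run three times with per-channel parameters, which is why chromatic correction is a natural extension of distortion correction on a GPU.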
Where the GPU fails is in correcting astigmatism. If the optical system causes some parts of the image to be defocused, the GPU cannot generate an image that will 're-focus' the system. In simpler optics, this phenomenon is particularly noticeable away from the center of the image.
One might argue that some defocus at the edge of an image is not an issue, since a person's central vision is much better than their peripheral vision, but this argument does not take into account the rotation of the eye and the desire to see details away from the center.
Another discussion is the cost-effectiveness of improving optics, or the "how good is good enough" debate. Better optics often cost more and may weigh more, and not everyone needs the improved performance or is willing to pay for it. Obviously, less distortion is better than more distortion, but at what price?
Higher-performance GPUs might cost more, or might require more power. This might prove to be important in portable systems such as smartphones or goggles with on-board processors (such as the SmartGoggles), so fixing imperfections on the GPU is not as 'free' as it might appear at first glance.
HMD design is a study in tradeoffs. Modern GPUs are able to help overcome some imperfections in low-cost optical systems, but they are not the solution to all the important issues.
Expert interviews and tutorials can also be found on the Sensics Insight page here.