Wednesday, May 29, 2013

Linear Acceleration and Motion Trackers, a follow-up

After the post What you should know about head trackers, I was asked to comment on how motion trackers handle linear movement.

Since inertial motion trackers include an accelerometer, they can measure the magnitude and direction of linear acceleration. Most of us who took high-school physics remember that velocity is the derivative of position and that acceleration is the derivative of velocity. If so, can we integrate (sum up) linear acceleration readings to obtain velocity and then integrate velocity to obtain the X/Y/Z position of the head?

Theoretically yes, but in practice that is not possible because of the accumulation of error. Every linear acceleration reading is slightly inaccurate, whether because of sensor error or because of insufficient precision in reporting the reading. Inaccurate acceleration gives us inaccurate velocity, and inaccurate velocity gives us inaccurate position; since the readings are integrated, these errors accumulate over time. In real life, we will see two problems in the position reading: drift and repeatability.
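To get a feel for how quickly integration error grows, here is a minimal simulation sketch in Python. It is an illustration only: the 100 Hz sample rate and 0.05 m/s² noise level are assumptions, not figures from any particular sensor. The simulated tracker is perfectly still, so any reported position is pure drift.

```python
import random

SAMPLE_RATE_HZ = 100           # assumed update rate
NOISE_STD_MPS2 = 0.05          # assumed accelerometer noise (1-sigma, m/s^2)
DT = 1.0 / SAMPLE_RATE_HZ

velocity = 0.0                 # m/s, starts at rest
position = 0.0                 # m, starts at the origin

for _ in range(SAMPLE_RATE_HZ * 60):             # simulate 60 seconds
    true_accel = 0.0                             # the tracker is perfectly still
    measured_accel = true_accel + random.gauss(0.0, NOISE_STD_MPS2)
    velocity += measured_accel * DT              # first integration: velocity
    position += velocity * DT                    # second integration: position

print(f"Reported position after 60 s of standing still: {position:.2f} m")
```

Even though the true position never changes, the doubly-integrated noise typically wanders on the order of a meter within a minute under these assumptions; that wandering is exactly the drift discussed below.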

The drift problem will present itself as a changing position reading even when the motion tracker is perfectly still. The repeatability problem will manifest itself as a changed reading when returning to the same position. For instance, assume you are standing with an HMD and the motion tracker is 1.8 m (about 6 feet) off the ground. Now crouch and stand up several times, returning to the same position each time. Most likely, after doing this many times, the X/Y/Z reading of the motion tracker will be substantially different than when you started.

Is measuring linear acceleration in the motion tracker of any use other than for sensor fusion? Of course. Linear acceleration can give you a good sense of what the user is doing in the short term: jump, duck, juke left/right, lunge forward/back. All of these can be very useful for gaming or other interactive experiences, as long as you do not assume that linear acceleration gives you accurate position information.
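As an illustration, here is a minimal sketch of that kind of short-term detection. It assumes the tracker reports a vertical acceleration value with gravity already removed, and the 3 m/s² threshold is an arbitrary value chosen for the example, not a recommendation.

```python
def classify_vertical_motion(vertical_accel_mps2, threshold_mps2=3.0):
    """Rough short-term classification from one gravity-compensated
    vertical acceleration sample; thresholds are illustrative only."""
    if vertical_accel_mps2 > threshold_mps2:
        return "jump"      # strong upward push
    if vertical_accel_mps2 < -threshold_mps2:
        return "duck"      # strong downward movement
    return "steady"

# Example: a burst of samples recorded during a jump
for sample in [0.2, 0.5, 4.1, 6.3, 2.0, -0.4]:
    print(sample, "->", classify_vertical_motion(sample))
```

A real implementation would look at a short window of samples rather than a single reading, but the point stands: short-term motion patterns are reliable even though long-term position is not.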


Comment? Question? Correction? Write a comment and I will try to address it.
Have a VR-related topic you'd like me to discuss? Write a comment and I'll consider it for a future post.


For additional VR tutorials on this blog, click here
Expert interviews and tutorials can also be found on the Sensics Insight page here

Saturday, May 25, 2013

What you should know about Head Trackers

In this post, I will cover useful information about motion trackers in the context of head-mounted displays: how they work, what features are important, and what you want to think about when integrating them.

It used to be that you'd have to pay $2,000 or more for an accurate head tracker, but these days good trackers come included inside many professional HMDs such as the Sensics zSight and even in some of the recent consumer goggle entries.

Head tracker basics

All head trackers measure rotational orientation - yaw, pitch and roll.
Head Yaw, Pitch and Roll (from www.resourceonbalance.com)
Yaw is the side-to-side movement, as in looking left and right. Pitch is the motion when you look up or down. Roll is tilting the head from side to side.

This post deals with inertial head trackers as opposed to head-tracking systems that use markers, cameras, ultrasound or other methods. Inertial head trackers are the most popular type of tracker in HMDs.

Most inertial head trackers include three sensors: accelerometer, gyroscope and magnetometer. The accelerometer measures the linear acceleration along three axes, including the earth's gravitational force, thus providing information regarding both the magnitude and direction of acceleration. The gyroscope measures angular velocity. The magnetometer measures the strength and direction of a magnetic field, most notably the earth's magnetic field. When all three sensors are used, and each provides measurement in three directions, the resulting head tracker is typically referred to as a '9-axis' or '9-sensor' module. Vendors sometimes refer to this as a '9 degree of freedom' module but this is often confusing and inaccurate.

What role do the vendors of head tracker modules play?

xSight HMD with InertialLabs head tracker
Companies that make head trackers - Intersense, InertialLabs, Hillcrest Labs, just to name a few - don't make the sensors themselves. The accelerometers, gyroscopes and magnetometers come from other vendors such as Panasonic, Robert Bosch, InvenSense, Kionix, Seiko Epson, STMicroelectronics, Honeywell and Analog Devices. What the head tracker companies do with these sensors is several things:
  • Sensor Fusion. This is arguably the most important function, involving combining the data from the various sensors into output that is more accurate than any single sensor could provide. For example, the accelerometer senses gravity and thus is very helpful in determining what the vertical axis (up/down) is. However, if the module is in motion, other non-gravity accelerations can cause error in the axis determination. The module can query the gyroscope to see if there is angular velocity and determine the direction of gravity only when the module is not rotating. Similarly, the magnetometer can help calibrate the reading of the gyroscope and align yaw readings with magnetic north. Module vendors often have their own "sensor fusion" algorithms, and these are a key part of their intellectual property; a simplified fusion sketch appears after this list.
  • Calibration. The individual sensors might not be linear or might output a non-zero reading when they are completely at rest. Calibration parameters can differ from one sensor vendor to another and between two sensors from the same vendor. Calibration parameters can also be temperature-dependent, so calibration ends up being an important part of combining sensors into a head tracking module.
  • Data smoothing and filtering. Sensor output is noisy. If the heading from a motion tracker module is used to change the viewpoint in a virtual world, sensor noise gets translated into jitter or other undesired image artifacts. Data smoothing could be as simple as averaging samples over a sliding window or as sophisticated as Kalman Filter algorithms. Data smoothing can also be contextual - when the sensor is approaching rest, one kind of data smoothing may be used whereas when the sensor is moving quickly, different data smoothing might be applicable. More on data smoothing below as we discuss performance.
  • Predictive tracking. In an effort to reduce the apparent response time of a tracker (more on that below), some trackers include algorithms that try to predict the orientation of the head a short time into the future. For instance, the algorithm could simplistically say: "I see that the head rotated left at a constant rate of 10 degrees/sec for the last couple of seconds, so I can estimate where the head will be 20-30 msec into the future".
  • Format translation. Orientation data can be consumed in various formats by the application: Euler angles, quaternions, rotation matrices and more. An on-board processor on the module often provides the host computer with this data in the most convenient format (a minimal Euler-to-quaternion conversion sketch appears after this list). Aside from the actual content data, modules can offer the data over various physical layers and low-level protocols such as USB, RS-232/RS-485, SPI and more.
  • Additional processing. Some modules go as far as including an on-board gesture engine that can analyze movements and report gestures when recognized. Those modules that perform this kind of additional processing on-board help reduce computational load on the host computers.
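To make the sensor fusion, data smoothing and predictive tracking items above more concrete, here is a minimal single-axis sketch combining a complementary filter, exponential smoothing and linear prediction. This is an illustration only, not any vendor's algorithm; the blend factor, smoothing weight and prediction horizon are arbitrary assumptions.

```python
import math

ALPHA = 0.98        # assumed gyro/accelerometer blend factor
SMOOTHING = 0.2     # assumed exponential-smoothing weight for new samples
PREDICT_S = 0.025   # assumed prediction horizon (~25 msec)

class PitchEstimator:
    """Single-axis (pitch) estimator: complementary filter + smoothing + prediction."""

    def __init__(self):
        self.pitch_deg = 0.0
        self.smoothed_deg = 0.0
        self.rate_dps = 0.0

    def update(self, gyro_pitch_rate_dps, accel_x_g, accel_z_g, dt_s):
        # Accelerometer gives an absolute (but noisy) pitch from the gravity direction.
        accel_pitch_deg = math.degrees(math.atan2(accel_x_g, accel_z_g))

        # Complementary filter: trust the gyro short-term, the accelerometer long-term.
        gyro_pitch_deg = self.pitch_deg + gyro_pitch_rate_dps * dt_s
        self.pitch_deg = ALPHA * gyro_pitch_deg + (1.0 - ALPHA) * accel_pitch_deg

        # Exponential smoothing to reduce jitter in the reported value.
        self.smoothed_deg += SMOOTHING * (self.pitch_deg - self.smoothed_deg)
        self.rate_dps = gyro_pitch_rate_dps
        return self.smoothed_deg

    def predicted(self):
        # Simplistic predictive tracking: extrapolate along the current angular rate.
        return self.smoothed_deg + self.rate_dps * PREDICT_S
```

A real module would run this in three dimensions (typically with quaternions), use the magnetometer to stabilize yaw, and adapt the constants to the motion instead of hard-coding them.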
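And here is a minimal sketch of the format translation item: converting yaw/pitch/roll Euler angles into a quaternion. It assumes the common Z-Y-X (yaw, then pitch, then roll) rotation order; actual modules may use different conventions.

```python
import math

def euler_to_quaternion(yaw_deg, pitch_deg, roll_deg):
    """Convert Z-Y-X (yaw, pitch, roll) Euler angles in degrees to (w, x, y, z)."""
    y = math.radians(yaw_deg) / 2.0
    p = math.radians(pitch_deg) / 2.0
    r = math.radians(roll_deg) / 2.0

    cy, sy = math.cos(y), math.sin(y)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)

    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    yq = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, yq, z)

# Example: 30 degrees of yaw only
print(euler_to_quaternion(30.0, 0.0, 0.0))   # ~ (0.966, 0, 0, 0.259)
```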

Key performance parameters

There are several key performance parameters that differentiate between the various tracker modules.
  • Refresh rate. The rate - the number of times per second - at which a tracker can report new orientation data has come into the spotlight in the last year with the increased focus on virtual reality gaming. The quicker the game is able to sense the motion of the user, the lower the delay will be between motion and screen updates. However, one must pay attention to what data is received in each cycle - is it raw or smoothed/filtered? The sensors on a tracker module can report raw data very quickly, but if the tracker module does not provide smoothing and filtering, the host computer will have to do that. For instance, a tracker module might provide 1000 updates per second of raw data, but 250 updates per second of processed, smoothed data. The choice of the physical interface between the tracker module and the host can also impact the refresh rate. For instance, using an RS-232-based protocol would likely limit the number of messages that can be received per second as a result of the baud rate and message length (see the worked example after this list).
  • Resolution. A low-resolution tracker module creates a noticeable distraction in the visual experience, especially when moving the head at low speed. As an example, if a tracker only reported yaw readings at 1-degree resolution, rotating the head slowly in a head-tracked environment would result in a 'ratchet' effect where the image stays static in spite of head movement and then makes a large, noticeable jump once the tracker senses a different position. This is particularly noticeable in images that simulate a narrow field of view, such as when looking through a rifle sight. A good tracker provides resolution better than 0.1 degrees in all directions.
  • Accuracy. Resolution aside, how well does the tracker reading correlate with actual yaw/pitch/roll? Accuracy matters when trying to match tracker orientation with a real-world object. For instance, a completely inaccurate yaw heading would make navigation practically impossible. Another example is augmented reality applications, where the application needs to place a graphical overlay on a particular point in real space. Accuracy also matters when using two or more orientation trackers in the same application. For instance, if a soldier is wearing a tracked HMD and is aiming a simulated weapon that is also tracked, matching the orientation of the HMD with the weapon is very important for training effectiveness.
  • Repeatability. This shows whether returning to the same orientation heading in the physical world also equates to obtaining the same readings from the tracker module.
Compass Interference
  • Magnetic field compensation. Have you ever seen the compass interference sign on an iPhone navigation app? This happens because metal objects in the area interfere with the built-in compass/magnetometer, resulting in an unstable heading reading. In an HMD context, this could be a problem in certain usage scenarios. I remember doing a demo at USC a few years ago where we set up the demo right underneath a large metal sculpture hanging from the ceiling. It took us a while to understand why tracking was completely off! While in our case we could move away from the sculpture, a soldier who uses an HMD to train inside a tank cannot move away from the tank. In this case, some vendors can ignore the magnetometer, which prevents the tracker module from knowing where 'true north' is but provides a reasonably stable, though slowly drifting, reading. Other, more sophisticated solutions include manual or automatic algorithms to sense and compensate for the surrounding metal objects.
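As a worked example of the interface limit mentioned under refresh rate, here is the arithmetic for a hypothetical RS-232 link; the 115,200 baud rate and 16-byte message length are assumptions chosen for illustration, not specifications of any particular tracker.

```python
BAUD_RATE = 115200      # bits per second (assumed link speed)
BITS_PER_BYTE = 10      # 8 data bits + start bit + stop bit
MESSAGE_BYTES = 16      # assumed length of one orientation report

bytes_per_second = BAUD_RATE / BITS_PER_BYTE
max_messages_per_second = bytes_per_second / MESSAGE_BYTES
print(max_messages_per_second)   # 720.0 -- the serial link caps the refresh rate
```

In this hypothetical case the wire allows at most 720 messages per second, no matter how fast the sensors themselves can run.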

Tracker modules for head tracking vs. tracker modules for other uses

Orientation trackers can be used for many applications other than HMDs: remote control devices, smart phones and more. Even though the underlying technology is the same, HMD applications often need tweaking of tracker parameters such as smoothing. Several years ago, my company started using a tracker that was originally made for a 3D pointer. When such a pointer is placed on a desk, it is not being used for pointing, and the original firmware used this state to reset various tracker parameters and turn the tracker off to save energy. However, when an HMD is moving very slowly, it may be that the user is trying to focus on a particular feature in the virtual world. Resetting tracker parameters - thus causing a yaw/pitch/roll correction - is the wrong thing to do. We worked with the tracker vendor to correct this in a special firmware build, but the lesson is that one needs to investigate the tracker performance settings and make sure that they are optimized for head-mounted use.

UPDATE: see also our follow-on post Linear acceleration and motion trackers

Comment? Question? Correction? Write a comment and I will try to address it.
Have a VR-related topic you'd like me to discuss? Write a comment and I'll consider it for a future post.



For additional VR tutorials on this blog, click here
Expert interviews and tutorials can also be found on the Sensics Insight page here


Friday, May 10, 2013

What is Binocular Overlap and why should you care?

Binocular Overlap

Binocular overlap refers to the visible overlapping portion between the two eyes of a stereoscopic vision system. In other words, it describes how much of the viewed scene can be seen by both eyes as opposed to by just one of the eyes.
Binocular overlap.
Source: David Johnson, University of Utah

The visual field varies from person to person, but typically extends 60 degrees inward (toward the nose) and 100 degrees outward, and about 60 degrees above and 75 degrees below the horizontal meridian. As such, the binocular overlap region is 120 degrees horizontally (-60 to +60 degrees for each eye). Since each eye can see about 160 degrees horizontally, the binocular overlap is 120/160 = 75%.


Binocular overlap is particularly important for depth perception. When the brain sees an object with both eyes, the relative angles at which this object is visible give an estimate of how far away the object is located. If the object is very far away, the angle at which it is seen by both eyes is practically the same. If the object is very close, the angles are quite different.

Binocular overlap in goggles

Manufacturers of goggles and HMDs have a decision to make with regard to how much binocular overlap to incorporate in their products. Let's examine a standard professional HMD with eyepieces that each have a 60-degree diagonal field of view and a 4:3 aspect ratio. Using the "diagonal field of view and aspect ratio conversion table", 60 degrees diagonal translates into 48 degrees horizontal and 36 degrees vertical. As you can see in the drawing below, there is a left eye and a right eye:

Independent images of the left and right eyepieces, each at 60 degree diagonal field of view

Note: the fill colors in the left and right eyes are just to separate them graphically and are not intended to convey anything else. If the eyepieces were installed to have 100% overlap between them, the binocular (combined) field of view would also be 48 degrees horizontal and 36 degrees vertical, resulting in a binocular field of view of 60 degrees diagonal. This is shown below. If you are looking at an object that is far away (and thus the distance between the eyes is not relevant), everything that can be seen in the left eye can also be seen in the right eye.

100% overlap, 60 degree binocular  field of view
Now let's assume that the manufacturer decided to install the eyepieces with 75% horizontal overlap, meaning not with full overlap between the eyepieces but with partial overlap. The result is shown below. 75% overlap means that the overlapping region between the eyepieces is 75% of 48 degrees, which is 36 degrees. Thus, the binocular horizontal field of view is 60 degrees: 12 degrees seen only in the left eyepiece, 36 overlapping degrees, and 12 degrees seen only in the right eyepiece. The diagonal field of view of the combined image is then sqrt(60*60 + 36*36) ≈ 70 degrees, larger than what we get with 100% overlap:

Partial (75%) overlap, 70 degree total diagonal field of view made of individual 60 degree eyepieces
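To double-check these numbers, here is a small sketch of the same calculation. It uses the same simplifying assumption as the text, namely that angular fields of view can be added and combined like lengths:

```python
import math

def binocular_fov(h_deg, v_deg, overlap_fraction):
    """Combined horizontal and diagonal FOV for two identical eyepieces,
    given the horizontal overlap fraction (simplified linear-angle model)."""
    overlap_deg = h_deg * overlap_fraction
    combined_h = 2 * h_deg - overlap_deg
    combined_diag = math.hypot(combined_h, v_deg)
    return combined_h, combined_diag

print(binocular_fov(48, 36, 1.00))   # (48.0, 60.0)  -- full overlap
print(binocular_fov(48, 36, 0.75))   # (60.0, ~70.0) -- 75% overlap
```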

Advantages of partial overlap

  • Wider field of view, and thus greater immersion. In the example above, we created an HMD with a 70-degree field of view using 60-degree eyepieces.
  • Improved aspect ratio. The aspect ratio of the eyepieces in our example was 4:3 = 1.333. The aspect ratio when using 75% overlap becomes 60:36, or 5:3 ≈ 1.667. This aspect ratio is between the 16:9 HD1080 (1920x1080) standard and the 16:10 WUXGA (1920x1200) standard, making the result more suitable for viewing wide-screen content, assuming the wide-screen content is correctly divided between the two screens (more on that below).

Disadvantages of partial overlap

  • Binocular rivalry. Consider a green circle that is shown in the eyepieces below. Because of its location, the circle is fully shown in the left eyepiece but cut off in the right eyepiece. When a person looks through both eyepieces at the same time, that person might notice the left edge of the right eyepiece's image, and this might look unusual or distracting. The image in the binocular view continues further to the left, but the right eyepiece no longer shows the object. Some people may find this distracting because of binocular rivalry: instead of seeing a summation of the two images, our perception switches from one image to the other. If the field of view is larger than in our example, say 100 degrees in each eye, this is less of a problem because the discontinuity of the image is outside the central vision area.

Binocular rivalry caused by a partial-overlapping visual system


  • Compatibility challenges with non-3D content. One of the nice things about a fully-overlapped system is that you can view standard content - a computer desktop, Microsoft Word, a YouTube video or live video from a webcam - without much effort. The same content is presented to both eyes. The application does not need to know that it is being viewed in a goggle as opposed to on a computer monitor. With partial overlap, that is not the case. If the exact same image is presented to both eyes, eye strain will result because the eyes will try to merge the two images even though they are shown at different angles. In most cases, applications need to be aware that they are being viewed in a partial-overlap system. The exception is hardware that automatically splits a wide-screen image into left and right images (see the description of the zSight electronics below), but that is often not available.

What's the exact math?

Check out this post for the exact math and useful reference tables.

Can you have it both ways (a bit promotional)?

What if you don't want to make the trade-off between immersion/field of view and compatibility/binocular rivalry? Can you have it both ways?

The zSight HMD has an innovative mechanism (which is patent pending) that allows users to switch between full and partial overlap. A lever that is located between the eyepieces can be moved from left to right. In the left position, the eyepieces are arranged for full overlap. In the right position, the eyepieces are tilted so that they are partially-overlapped. Moreover, the zSight electronics identifies when it is presented with a wide screen video signal and automatically creates two different images, one for each eye.

Left: zSight in full overlap mode; Right: zSight in partial-overlap mode


For additional VR tutorials on this blog, click here

Expert interviews and tutorials can also be found on the Sensics Insight page here