Since inertial motion trackers include an accelerometer, they can measure the magnitude and direction of linear acceleration. Most people who took high-school physics remember that velocity is the derivative of position and that acceleration is the derivative of velocity. If so, can we integrate (sum up) linear acceleration readings to obtain velocity and then integrate velocity to obtain the X/Y/Z position of the head?
Theoretically yes, but in practice this does not work because errors accumulate. Every linear acceleration reading is slightly inaccurate, whether because of sensor error or insufficient precision in reporting the reading. Inaccurate acceleration yields inaccurate velocity, and inaccurate velocity in turn yields inaccurate position. In real life, we will see two problems in the position reading: drift and repeatability.
The drift problem presents itself as a changing position reading even when the motion tracker is perfectly still. The repeatability problem manifests itself as a changed reading upon returning to the same position. For instance, suppose you are standing while wearing an HMD and its motion tracker is 1.8 m (6 feet) off the ground. Now crouch and stand up several times, returning to the same position each time. Most likely, after many repetitions, the X/Y/Z reading of the motion tracker will be substantially different from when you started.
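The error accumulation described above is easy to see in a simulation. The sketch below, a minimal 1-D example with made-up bias and noise figures (the 100 Hz rate, 0.01 m/s² bias, and noise level are all assumptions, not real sensor specs), naively double-integrates the readings of a tracker that is actually standing perfectly still:

```python
import random

def integrate_imu(accels, dt):
    """Naively double-integrate 1-D acceleration samples into a position."""
    v = p = 0.0
    for a in accels:
        v += a * dt   # velocity is the integral of acceleration
        p += v * dt   # position is the integral of velocity
    return p

dt = 0.01        # 100 Hz sample rate (assumed)
n = 100 * 60     # one minute of samples
random.seed(0)

# A perfectly still tracker should read 0 m/s^2, but suppose each sample
# carries a small constant bias plus random noise (hypothetical values).
readings = [0.01 + random.gauss(0.0, 0.01) for _ in range(n)]

drift = integrate_imu(readings, dt)
print(f"apparent position drift after one minute: {drift:.2f} m")
```

Even the tiny constant bias alone grows quadratically with time (½·a·t² ≈ 18 m after one minute at 0.01 m/s²), which is why integrated position drifts away so quickly while the device sits motionless.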
Is measuring linear acceleration in the motion tracker of any use other than for sensor fusion? Of course. Linear acceleration can give you a good sense of what the user is doing in the short term: jump, duck, juke left/right, lunge forward/back. All of these can be very useful for gaming or other interactive experiences, as long as you do not assume that linear acceleration gives you accurate position information.
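One way to use such short-term information is a simple threshold detector on the acceleration spike itself, without ever integrating. The following is a crude, hypothetical sketch (the axis convention, threshold value, and the assumption that gravity has already been subtracted are all illustrative choices, not a real API):

```python
def classify_motion(ax, ay, az, threshold=3.0):
    """Map a strong linear-acceleration spike to a short-term action.
    Hypothetical convention: x = right, y = up, z = forward, units m/s^2,
    gravity already removed from the readings."""
    if ay > threshold:
        return "jump"
    if ay < -threshold:
        return "duck"
    if abs(ax) > threshold:
        return "juke right" if ax > 0 else "juke left"
    if abs(az) > threshold:
        return "lunge forward" if az > 0 else "lunge back"
    return "idle"

print(classify_motion(0.0, 8.5, 0.0))   # strong upward spike -> "jump"
```

A real detector would filter the signal and look at short windows of samples rather than single readings, but the principle is the same: react to the acceleration event itself instead of trusting an integrated position.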