Photo Credit: Saad Faruque via Compfight
- Change the viewpoint of the user to reflect actions such as jumping, ducking or leaning forward.
- Show hands and other objects in the displayed image. A common complaint of users in virtual reality is that they can't see their hands.
- Connect the physical and virtual world. For instance, by detecting hand position, a software program can implement the option to move virtual objects by touching them.
- Detect gestures. By analyzing position over time, a gesture can be detected. For instance, a user might draw the number "8" in air and have the software detect it.
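To make the gesture-detection idea above concrete, here is a minimal sketch of recognizing a simple gesture from hand positions sampled over time. The gesture, thresholds, and function name are my own illustrative choices, not from any particular tracking SDK: it flags a "swipe right" when the hand's net horizontal travel exceeds a threshold while vertical wobble stays small. A real system (e.g., for recognizing an "8" drawn in air) would use trajectory matching or a trained classifier, but the principle of analyzing position over time is the same.

```python
def detect_swipe_right(positions, min_dx=0.3, max_dy=0.1):
    """Detect a 'swipe right' gesture from (x, y) hand positions in meters.

    positions: time-ordered samples from the hand tracker.
    min_dx: minimum net rightward travel to count as a swipe.
    max_dy: maximum allowed vertical wobble during the motion.
    (Threshold values are illustrative assumptions.)
    """
    if len(positions) < 2:
        return False
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    net_dx = xs[-1] - xs[0]       # net horizontal displacement
    dy_span = max(ys) - min(ys)   # total vertical variation
    return net_dx >= min_dx and dy_span <= max_dy


# Example: a mostly-horizontal motion of 40 cm registers as a swipe,
# while a small diagonal jitter does not.
swipe = [(0.0, 0.00), (0.10, 0.01), (0.25, 0.02), (0.40, 0.00)]
jitter = [(0.0, 0.00), (0.05, 0.20)]
print(detect_swipe_right(swipe))   # True
print(detect_swipe_right(jitter))  # False
```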
There are several methods of tracking position, and I felt it is worthwhile to describe some of them. This post focuses on tracking for virtual reality applications, so we will not look at vehicle tracking, tracking of firefighters in buildings, and so forth. In no particular order, popular tracking methods include magnetic, inertial, optical and acoustic tracking, as well as hybrid tracking that combines multiple methods.
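As a rough illustration of why hybrid tracking is attractive, here is a sketch of a complementary filter, one common way to fuse a fast-but-drifting inertial rate with a slower drift-free measurement such as an optical fix. The 1-D model, the `alpha` blend factor, and the numbers are my own simplified assumptions, not a description of any specific product:

```python
def complementary_filter(angle, gyro_rate, optical_angle, dt, alpha=0.98):
    """Fuse one inertial and one optical measurement for a 1-D orientation angle.

    angle: current fused estimate (radians)
    gyro_rate: angular rate from the inertial sensor (rad/s), may be biased
    optical_angle: absolute angle from the optical tracker (radians)
    dt: time step (seconds); alpha near 1 trusts the gyro short-term.
    """
    predicted = angle + gyro_rate * dt  # fast inertial prediction (drifts over time)
    # Blend in the optical reading to pull the estimate back toward truth.
    return alpha * predicted + (1.0 - alpha) * optical_angle


# Toy demo: the true angle is 0, but the gyro has a constant 0.5 rad/s bias.
# Pure integration drifts without bound; the fused estimate stays bounded.
fused, pure = 0.0, 0.0
for _ in range(1000):                                  # 10 seconds at 100 Hz
    fused = complementary_filter(fused, 0.5, 0.0, 0.01)
    pure += 0.5 * 0.01                                  # gyro-only dead reckoning
print(f"gyro-only drift: {pure:.2f} rad, fused error: {fused:.2f} rad")
```

The same idea generalizes to full 3-DOF or 6-DOF tracking, where the blend is typically done with a Kalman filter rather than a fixed weight.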
I've described these tracking methods as well as others such as depth map in a guest blog post at RoadToVR. Please click here to read that post on the RoadToVR site.
For additional VR tutorials on this blog, click here.
Expert interviews and tutorials can also be found on the Sensics Insight page here.