tag:blogger.com,1999:blog-75950968119010965712024-02-18T22:08:03.664-05:00The VRguy's BlogYuval Boger (VRGuy) has been doing VR since 2006. He shares his experience and views on HMDs and VR technologies.
<br>
<br>
Also, check out the <a href="http://sensics.com/our-podcasts/">VRguy podcasts</a> where I host industry experts for deeper conversations about VR and AR.VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.comBlogger171125tag:blogger.com,1999:blog-7595096811901096571.post-42924860181752770682017-11-20T18:42:00.000-05:002018-02-28T19:00:15.568-05:00 VRguy podcast: Dr. Daniel Laby discusses Sports and Performance Vision<br />
<div style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
<img height="200" src="https://i1.wp.com/sensics.com/wp-content/uploads/2017/11/DanielLaby.jpg?resize=336%2C400" width="168" /></div>
<br />
My guest today is Dr. Daniel Laby, of Sports Vision. Dr Laby began his work in sports vision more than two decades ago with the MLB’s Los Angeles Dodgers. He has also been responsible for the visual performance of the New York Mets and St Louis Cardinals, and currently works with the Boston Red Sox, Cleveland Indians, Houston Astros, Tampa Bay Rays and Chicago Cubs. Dr. Laby spent three seasons working with the NBA’s Boston Celtics as well as the NHL’s Los Angeles Kings. He also worked with the US Olympic team prior to the Beijing Olympic Games in 2008 and attended the games with the team.<br />
<br />
Dr Laby has been fortunate to have contributed to an MLB American League Championship team as well as 4 World Series Championship teams.<br />
<div>
<br /></div>
<div>
<a href="http://sensics.com/vrguy-podcast-episode-29-dr-daniel-laby-discusses-sports-performance-vision/" target="_blank">Listen in here</a> or subscribe on iTunes.</div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-6643171770789976372017-11-11T09:40:00.001-05:002017-11-11T09:40:26.096-05:00VRGuy podcast episode: Thomas Wagner on VR Roller Coasters<div class="separator" style="clear: both; text-align: center;">
<a href="https://i1.wp.com/sensics.com/wp-content/uploads/2017/11/ThomasWagner-e1510187438597.jpg?resize=400%2C306" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="306" data-original-width="400" height="244" src="https://i1.wp.com/sensics.com/wp-content/uploads/2017/11/ThomasWagner-e1510187438597.jpg?resize=400%2C306" width="320" /></a></div>
Thomas Wagner is the CEO of VR Coaster. VR Coaster’s technology helps park operators convert traditional roller coasters into VR Coasters. Thomas and I discuss lessons learned from dozens of VR rides installed around the world.<div>
<br /></div>
<div>
<a href="http://sensics.com/vrguy-podcast-episode-28-thomas-wagner-vr-roller-coasters/" target="_blank">Listen to the podcast or read the transcript here</a></div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-870833254598516362017-09-26T09:41:00.000-04:002017-11-11T09:41:38.222-05:00VR's Dirty Secret<br /><br />I traveled a lot in the past few weeks. All the hotels I stayed in had something in common: they had clean towels and fresh bed sheets.<br /><br />But what if they didn’t? If the towels were dirty or the sheets looked like they were slept in, I would had complained.<br /><br />But what if the hotel said: you are right: these are the old sheets, but we vacuumed the bed; these are the towels that the previous guest used, but we sprayed some Fabreze on them. Would I be happy? Of course not.<br /><br />How about VR? Using a VR headset in an arcade after a stranger sweated in it for 30 minutes does not sound so appealing either. I’m not particularly interested in sampling the sweat, makeup, lice or whatever else the previous user had.<br /><br />Hygiene in VR, and particularly in public installations, is an important issue that needs to be addressed. To me, that is VR’s dirty secret.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-66308816565033335612017-09-12T09:50:00.000-04:002017-09-12T09:50:00.169-04:00Mozart meets Virtual Reality<div class="separator" style="clear: both; text-align: center;">
<a href="https://i1.wp.com/sensics.com/wp-content/uploads/2015/08/withBSO.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="800" data-original-width="600" height="320" src="https://i1.wp.com/sensics.com/wp-content/uploads/2015/08/withBSO.jpg" width="240" /></a></div>
I'm a VR professional. I'm also an amateur violinist. Not too shabby, and getting better with practice.<br />
Once a year, I take a week off to play with the wonderful musicians of the Baltimore Symphony Orchestra. Alongside the pros and other amateurs, we practice and perform great classical music works.<br />
<br />
That week got me thinking about VR can help various aspects of the performing arts.<br />
One key area is <strong>audience engagement</strong>. The performance feels much different on stage than off it. VR can put the audience in places that money can't buy. Just in front of the conductor. In the middle of the violin section. At the back of the stage where the percussion players are. The audience can experience the excitement of music making from within.<br />
<br />
Indeed, several major orchestras are already experimenting with VR. The Los Angeles Philharmonic recording Beethoven's Fifth Symphony in 360-degrees. <a href="http://www.laphil.com/orchestravr" rel="noopener" target="_blank">That recording is free to download</a>. The Philharmonia Orchestra of London <a href="http://www.philharmonia.co.uk/vr" rel="noopener" target="_blank">has made similar recordings</a>. The Baltimore Symphony Orchestra <a href="https://www.facebook.com/BSOAcademy/photos/a.466249776718680.109313.139072719436389/1465186686824979/?type=3&theater" rel="noopener" target="_blank">shared an open rehearsal</a>.<br />
<br />
Critics are applauding this inside view into the music. Audiences are getting a unique immersive experience. Most major orchestras as struggling with balancing their budgets. Engaging young audiences with a VR experience can sell tickets and attract new followers. One day, one could imagine a completely virtual experience. A music lover in Iowa could attend a Berlin Philharmonic concert without international travel.<br />
<br />
After all, Movies evolved from just filming a stage play to many cameras with movement. Why should attending a concert stay the same for hundreds of years?<br />
<br />
Another area where VR can be useful is performance anxiety. Musicians get nervous in performances, just like some grade school students. If a musician cannot perform on stage at the same level that she performed in a rehearsal, that is a problem.<br />
<br />
There are many techniques to battle <strong>performance anxiety</strong>. Books such as "The inner game of tennis" help overcome self-doubt and nervousness. Presenters like Noa Kageyama of "<a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjR7NC35YnWAhUJ3IMKHaS4D9IQFggoMAA&url=http%3A%2F%2Fbulletproofmusician.com%2F&usg=AFQjCNEzuw1ON3hOTHu51Tq17ZlCg6XFCQ" rel="noopener" target="_blank">The Bulletproof Musician</a>" teach other methods. Some musicians medicate themselves before a high-stakes performance. Virtual reality is already used today to help overcome fear of public speaking. It is easy to envision extending this to performance anxiety. Just like VR can place you on stage in a large conference, it can place you on the virtual Carnegie Hall stage.<br />
<br />
One critical performer that gets the least practice time is the <strong>orchestra's conductor</strong>. This is particularly true for young conductors. Without a permanent position with an orchestra, "podium time" is scarce. Conductors end up conducting their CD players or TV sets in preparation for a real orchestra. Imagine using a VR headset with a hand gesture sensor for conducting practice. It could be the next best alternative to the real experience.<br />
<br />
This might not be the most popular use of VR, but certainly one that I can't wait to try myself.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-27214895870335793332017-09-06T20:58:00.002-04:002017-09-06T20:58:40.870-04:00VRguy podcast episode: Kevin Williams discussing trends in out-of-home VR<div class="separator" style="clear: both; text-align: center;">
<a href="https://i0.wp.com/sensics.com/wp-content/uploads/2017/09/Kevin-vert-2.jpg?resize=400%2C267" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="267" data-original-width="400" height="213" src="https://i0.wp.com/sensics.com/wp-content/uploads/2017/09/Kevin-vert-2.jpg?resize=400%2C267" width="320" /></a></div>
Kevin Williams of KWP consulting and I spoke about out-of-home VR, including:<br />
<br />
<ul>
<li>Longevity of VR cafe model</li>
<li>Single-experience vs configurable-experience sites</li>
<li>How much money would people pay for VR experience</li>
<li>and more</li>
</ul>
<div>
<a href="http://sensics.com/vrguy-podcast-episode-26-kevin-williams-discussing-updates-home-vr/" target="_blank">Listen to the Podcast or read the transcript here</a></div>
<div>
<br /></div>
<div>
you can also compare this to what <a href="http://sensics.com/vrguy-podcast-episode-2-kevin-williams/" target="_blank">he said in late 2015</a></div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-85386743705081701452017-08-28T12:45:00.000-04:002017-08-28T12:45:14.645-04:00VRguy Podcast Episode: Jason Jerald, Principal Consultant at NextGen Interactions<div class="separator" style="clear: both; text-align: center;">
<a href="https://i1.wp.com/sensics.com/wp-content/uploads/2017/08/jjerald.headshot.500x500.jpg?resize=400%2C400" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="400" data-original-width="400" height="200" src="https://i1.wp.com/sensics.com/wp-content/uploads/2017/08/jjerald.headshot.500x500.jpg?resize=400%2C400" width="200" /></a></div>
Jason and I talk about VR sickness, precision input, and design tradeoffs in VR interactions. We have a particularly interesting discussion about pen input in VR<div>
<br /></div>
<div>
Listen to the Podcast or read the transcript <a href="http://sensics.com/vrguy-podcast-episode-25-jason-jerald-principal-consultant-nextgen-interactions/" target="_blank">at this link</a> </div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-80816333125822147752017-08-25T22:21:00.000-04:002017-08-25T22:21:11.939-04:00Four myths are blocking real, needed VR standards<br /><br />Neil Trevett, VP Developer Ecosystem at NVIDIA and President of the Khronos Group collaborated with me on a<a href="https://venturebeat.com/2017/08/25/4-myths-getting-in-the-way-of-necessary-vr-standardization/" target="_blank"> VentureBeat article</a> discussing the four myths blocking real, needed VR standards.<br /><br />The four myths we discuss are:<br /><ol>
<li>It’s too early for standards.</li>
<li>Standards stifle innovation.</li>
<li>Consumers won’t be impacted if standards are not enacted.</li>
<li>There are too many cooks developing standards.</li>
</ol>
You can <a href="https://venturebeat.com/2017/08/25/4-myths-getting-in-the-way-of-necessary-vr-standardization/" target="_blank">read the full article here.</a><span id="goog_877940876"></span><a href="https://www.blogger.com/"></a><span id="goog_877940877"></span><div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-50076011215896360432017-08-21T14:28:00.003-04:002017-08-21T14:28:53.375-04:00New VRguy podcast episode: Nick Whiting, technical director of AR/VR/XR at Epic Games<a href="http://sensics.com/vrguy-podcast-episode-24-nick-whiting-technical-director-vrar-epic-games/" target="_blank">Listen to my conversation with Nick or read the transcript here</a>. <div class="separator" style="clear: both; text-align: center;">
<a href="https://i1.wp.com/sensics.com/wp-content/uploads/2017/08/NickWhiting_Epic.jpg?resize=300%2C400" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="400" data-original-width="300" height="200" src="https://i1.wp.com/sensics.com/wp-content/uploads/2017/08/NickWhiting_Epic.jpg?resize=300%2C400" width="150" /></a></div>
We talk about Robo Recall, the unique challenges of VR and AR for game engines, what Epic learned from enterprise customers and much more.<div>
<br /></div>
<div>
Other recent interesting episodes:</div>
<div>
<br /></div>
<div>
<a href="http://sensics.com/vrguy-podcast-episode-23-neil-trevett-president-khronos-group/" target="_blank">Neil Trevett, VP at NVIDIA and President of the Khronos Group</a></div>
<div>
<a href="http://sensics.com/vrguy-podcast-episode-22-wanda-meloni-ceo-principal-analyst-m2advisory/" target="_blank">Wanda Meloni, CEO and Principal Analyst at M2 Advisory</a></div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-52581396337564267112017-06-11T12:30:00.001-04:002017-06-11T12:30:06.007-04:00How does eye tracking work?<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9WMlS4zT8EGX_IfM40NsTPyRjcgF7lGd3g4FCTkRPGI1c9dWeQDEm9iLeUo4rV_XHHewLchGMPMV7x5GxpR3DIjp1kxQFrgL4RgEO84UUHiY9xgJyiwa6zKOpei3f6a0vsmqcX2fvRX0/s1600/eyeTrackerSampleSmall.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="214" data-original-width="284" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9WMlS4zT8EGX_IfM40NsTPyRjcgF7lGd3g4FCTkRPGI1c9dWeQDEm9iLeUo4rV_XHHewLchGMPMV7x5GxpR3DIjp1kxQFrgL4RgEO84UUHiY9xgJyiwa6zKOpei3f6a0vsmqcX2fvRX0/s1600/eyeTrackerSampleSmall.jpg" /></a></div>
Eye tracking could become a standard peripheral in VR/AR headsets. Tracking gaze direction can deliver many benefits. Foveated rendering, for instance, optimizes GPU resources by using eye tracking data. Higher-resolution images at shown at the central vision area and lower-resolution outside it. Understanding gaze direction can lead to more natural interaction. Additionally, People with certain disabilities can use their eyes instead of their hands. Eye tracking can detect concussions in athletes and can even help people see better. Eye tracking can help advertisers understand what interests customers.<br /><br />Eye tracking is complex. Scientists and vendors have spent many year perfecting algorithms and techniques. <br /><br />But how does it work? Let's look at a high-level overview.<br /><br />Most eye tracking systems use a camera pointing at the eye and infrared (IR) light. IR illuminates the eye and a camera sensitive to IR analyzes the reflections. The wavelength of the light is often 850 nanometers. It is just outside the visible spectrum of 390 to 700 nanometers. The eye can't detect the illumination but the camera can.<br /><br />We see the world when our retinal detects light entering through the pupil. IR light also enters the eye through this pupil. Outside the pupil area, light does not enter the eye. Instead, it reflects back towards the camera. Thus, the camera sees the pupil as a dark area - no reflection - whereas the rest of the eye is brighter. This is "dark pupil eye tracking". If the IR light source is near the optical axis, it can reflect from the back of the eye. In this case, the pupil appears bright. This is "bright pupil eye tracking". It is like the "red eye" effect when using flash photography. Whether we use dark or bright pupil, the key point is that the pupil looks different than the rest of the eye.<br /><br />The image captured by the camera is then processed to determine the location of the pupil. This allows estimating the direction of gaze from the observed eye. Processing is sometimes done on a PC, phone or other connected processor. Other vendors developed special-purpose chips that offload the processing from the main CPU. If eye tracking cameras observe both eyes, one can combine the gaze readings from both eyes. This allows estimating of the fixation point of the user in real or virtual 3D space. <br /><br />There are other eye tracking approaches that are less popular. For instance, some have tried to detect movements of the eye muscles. This method provides high-speed data but is less accurate than camera-based tracking.<br /><br />How often should we calculate the gaze direction? The eyes have several types of movements. Saccadic movements are fast and happen when we need to shift gaze from one area to another. Vergence shifts are small movements the help in depth perception. They aim to get the image of an object to appear on corresponding spots on both retinas. Smooth pursuit is how we move when we track a moving object. To track saccadic movements, one needs to track the eye hundreds of time per second. But, saccadic movements do not provide gaze direction. Thus, they are interesting to research applications but not to mass-market eye tracking. Vergence and smooth pursuit movements are slower. Tens of samples per second are often enough. Since Many VR applications want to have the freshest data, there is a trend to track the eyes at the VR frame rate. <br /><br />Eye tracking systems need to compensate for movements of the camera relative to the eye. For instance, a head-mounted display can slide and shift relative to the eyes. One popular technique is to use reflections of the light source from the cornea. These reflections are called Purkinje reflections. They change little during eye rotation and can serve as an anchor for the algorithm. Other algorithms try to identify the corners of the eye as an anchor point.<br /><br />There are other variables that an algorithm needs to compensate for. The eye is not a perfect sphere. Some people have bulging eyes and others have inset eyes. The location of the eye relative to the camera is not constant between users. These and other variables are often addressed during a calibration procedure. Simple calibration presents a cross on the screen at a known location and asks the user to fixate on it. By repeating this for a few locations, the algorithm calibrates the tracker to a user.<br /><br />Beyond the algorithm, the optical system of the tracker presents extra challenges. It aims to be lightweight. It tries to avoid needs constraints on the optics used to present the actual VR/AR image to the user. It needs to work with a wide range of facial structures. For a discussion on optical configurations for eye tracking, <a href="http://sensics.com/the-three-and-a-half-configurations-of-eye-trackers/" target="_blank">please see here</a>.<br /><br />Eye trackers used to be expensive. This was not the result of expensive components, but rather of a limited market. When only researchers bought eye trackers, companies charged more to cover their R&D expenses. As eye trackers move into mainstream, eye trackers will become inexpensive.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-1529213479230859812017-05-22T14:05:00.001-04:002017-05-22T14:05:57.670-04:00Understanding Relative IlluminationRelative illumination in the context of optical design is the phenomena of image roll-off (e.g. reduction) towards the edge of an eyepiece. This manifests in an image that is brighter at the center of eyepiece relative to the edge of the eyepiece.<br />
<br />
Relative illumination is usually shown in a graph such as the one below<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdBmjrcNpYIgfhwrBR66pKBaRVsDn7uLmqtFEu8veY8VPD6xvKIoY_AUDtXgSBcbBK-BKNluQ0iPfTwarln_Xg0Hv1yfMHaaleaJu-IVUK_3lKzUx6kS8X08XVfarUTNtnTrs6jK68hWc/s1600/relativeIllumination.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="376" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdBmjrcNpYIgfhwrBR66pKBaRVsDn7uLmqtFEu8veY8VPD6xvKIoY_AUDtXgSBcbBK-BKNluQ0iPfTwarln_Xg0Hv1yfMHaaleaJu-IVUK_3lKzUx6kS8X08XVfarUTNtnTrs6jK68hWc/s640/relativeIllumination.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
This particular graph is from an eyepiece with 60-degree horizontal field of view <a href="http://sensics.com/portfolio-posts/oem-optics/" target="_blank">designed by Sensics</a>. The graph shows how the illumination changes from the center of the lens, e.g. 0, to the edge of the lens, e.g. 30 degrees. The Y axis shows the relative illumination where the center illumination is defined as "1". In this particular eyepiece, the illumination at the edge is just over 70% of the illumination at the center.<br />
<br />
This effect can also be viewed in simulations. The first image below shows a simulated image through this eyepiece when ignoring the impact of relative illumination:<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDIxCzYt9nfr6TaF3-Wz2gb4BdXtOtoOb6Ir8fjn1BO_QVs-llEqLe2D85U9zLPjOfnNXmNnqMTibZ0Ub0W45iYLnyFDLInHCQkgtMFkoXSx3Hxd0_dhTdwsc2BMtB_j9j8bFMO5XhIcg/s1600/Normalized.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDIxCzYt9nfr6TaF3-Wz2gb4BdXtOtoOb6Ir8fjn1BO_QVs-llEqLe2D85U9zLPjOfnNXmNnqMTibZ0Ub0W45iYLnyFDLInHCQkgtMFkoXSx3Hxd0_dhTdwsc2BMtB_j9j8bFMO5XhIcg/s320/Normalized.JPG" width="288" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Simulated image while ignoring the effect of relative illumination</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
The second image shows the impact of relative illumination which can be seen at the edges</div>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqZzBr_KXEd845C6-JsfJu-3OgPAFza49WeRYo_Epe5n0g7B4XHEBEijip68ypE16yX_yQI4vN2A3-XAfNLR1l4WYGbXPRl03WbPqwDSimnNW1XYazJlZUOHrLu3nCBdSSNJog6YiOwwA/s1600/ShowingRelativeIllumination.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqZzBr_KXEd845C6-JsfJu-3OgPAFza49WeRYo_Epe5n0g7B4XHEBEijip68ypE16yX_yQI4vN2A3-XAfNLR1l4WYGbXPRl03WbPqwDSimnNW1XYazJlZUOHrLu3nCBdSSNJog6YiOwwA/s320/ShowingRelativeIllumination.JPG" width="289" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Simulated image with relative illumination</td></tr>
</tbody></table>
Relative illumination is perfectly normal and to be expected. It exists in practically every eyepiece and every sensor. It is often the result of vignetting - some light rays coming from the display through the eyepiece to the eye that are blocked by some mechanical feature of the eyepiece. This can be an internal mechanical structure or simply the edge of a particular lens. Light rays from the edge of the display are easier to block and thus typically suffer more vignetting.<br />
<br />
When we look at an optical design, we look to see that the relative illumination graph is monotonic, e.g. always decreasing. A non-monotonic curve (e.g. a sudden increase followed by a decrease) would manifest itself as a bright ring in the image, and this is usually not desired.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-89691340183715677952017-05-02T13:48:00.002-04:002017-05-02T13:48:18.851-04:00A Visit to the IMAX VR CenterA few weeks ago, I was in Los Angeles and decided to visit the newly-opened IMAX VR center. I went there as a regular paying customer - not some "behind the scene tour" - to see and learn. I've experienced Zero Latency, The Void and many others, so could not resist trying IMAX.<br /><br />The lobby of the attraction is reminiscent of a small movie theater lobby. Vertically-oriented monitors on the walls announce the available VR experiences. A small reception area sells $10 tickets for the attractions. A display shows the available time slots for each 10-minute experience. After purchasing the tickets, a friend and I were asked to wait for our scheduled time. When the time came, an attendant escorted us to the VR area.<br /><br />If I remember correctly, there were eight available experiences. Seven of them were based on the HTC VIVE. One - the John Wick Chronicles - was showing on Starbreeze headset. The HTC VIVE experiences did not appear to to be specially-made for this venue. For instance, one experience was <a href="http://store.steampowered.com/app/381940/Trials_on_Tatooine/" target="_blank">Trials on Tatooine</a> which can be freely downloaded from the <a href="http://store.steampowered.com/app/381940/Trials_on_Tatooine/">Steam store</a>. I think people come to movie theaters for an experience that they can't get at home. One would expect VR to be the same.<br /><br />I have an HTC Vive at home (as well as many other headsets) at home. Using them is part of my job. However, most folks don't have easy access to PC-based VR equipment. For now, stock experiences might be just fine to get people exposed to VR.<br /><br />Inside to the VR area, Each headset was in a space separated by low walls, a bit like an open space in an office. Headset cables were tied from the ceiling. HTC VIVE units had a leather face mask which is probably easier to clean. An operator administered each experience - one operator per headset. . Operators were friendly and enthusiastic about the VR equipment. I think their enthusiasm was contagious, which was nice. <br /><br />Speaking of contagious, the operators told me that they wipe the face masks between users. Masks also get replaced every couple of weeks. I was told that visitors did not often complain about wearing a VR goggle that was used by many people before them. <br /><br />I couldn't help but wonder about the economics. 15-minute timeslots: 10 minutes of usage plus some time to get people in and out of the experience. $40 an hour per station. One full-time operator per station. Now add rent, equipment, content fees, ticket sales, credit card fees, etc. Can you make money? Maybe making money is not the goal in this first location. Instead, the goal could be to have a "concept store" towards inclusion at the lobby of regular movie theaters.<br /><br />Since I don't have a Starbreeze headset at home, I opted for the John Wick experience. It's a shooter game that encourages you to move in a space while holding a weapon. As expected, Virtual soldiers try to kill you. The headset was fairly light and the weapon comfortable. The experience was immersive though both the image and graphics quality could have been better. I can see why a person with little VR experience could enjoy these 10 minutes.<br /><br />My friend did not have many VR experiences before this visit. He chose 'Trials on Tatooine" which he seemed to thoroughly like.<br /><br />In all - a nice start to what can be the next big thing in entertainment.<br /><br />Have you tried IMAX VR too? What did you think?<br /><div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-13930609131608384012017-04-09T17:35:00.000-04:002017-04-09T17:35:49.758-04:00The Unique Requirements of Public VR<div class="separator" style="clear: both; text-align: center;">
<a href="http://maxpixel.freegreatpicture.com/static/photo/1x/Fivefold-Looping-Olympia-Looping-Roller-Coaster-1689992.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://maxpixel.freegreatpicture.com/static/photo/1x/Fivefold-Looping-Olympia-Looping-Roller-Coaster-1689992.jpg" height="212" width="320" /></a></div>
A good treadmill for home use costs around $1000. A treadmill for use in a health club could be ten times more expensive. Why would club owners agree to pay so much more? Because they understand that the home equipment would not withstand the heavy use in a club.<br />
<br />
The same is true for VR goggles. Goggles for home use are not suitable for sustained use in arcades and amusement parks. Both VR vendors and attraction operators need to understand and address these differences.<br />
<br />
Durability is one key issue. A goggle in an amusement park is subject to both accidental and intentional abuse. A kid might try to see if a lens can pop out. Someone else might wish to take a component as a souvenir.<br />
<br />
Hygiene is also important. VR experiences can be intense, and parks can be in hot and humid areas. Whether it is sweat, suntan lotion or something else, a guest does not wish to wear a soaked or dirty goggle. This is true not only for the face mask, but for any built-in headphones.<br />
<br />
Sweat and humidity might also cause fogging on the lenses. If a guest has to take off the goggles to defog them, the break in VR immersion degrades the experience.<br />
<br />
Depending on the attraction, visitors can be of all ages. Good physical fit is important, regardless if the visitor is a small kit or a large man. Goggles must accommodate a wide range of head sizes, and eye separation. Eyeglasses also present a design challenge. Visitors prefer to keep them on, so goggle vendors try to make room for them.<br />
<br />
Beyond the customer-facing experience, there are important operational considerations. Attractions make money by providing a unique experience to a large number of visitors. Every minute spent by a guest adjusting straps in a roller-coaster, is a minute lost. Some have reported a 30% decrease in guest throughput after upgrading a roller coaster to VR. A larger park crew assisting guests or maintaining headsets means larger operational costs.<br />
<br />
There are several different types of public VR experiences, each with unique challenge. A free-roam experience (e.g. Zero Latency) needs to address backpack PCs and controllers. Themed experiences such as The Void have accessories that supplant the story. These accessories have many of the same challenges. A small attraction in a shopping mall cannot afford a large operating crew. A VR roller coaster might need a chin strap to keep the goggles on the head. If a VR roller coaster relies on standard phones, they might overheat or need to recharge often.<br />
<br />
Often, the first instinct of those building VR attractions is to do everything. They might try to build their own goggles, create content, or even a tracking system. Over time, they focus on their core competencies, bringing external vendors for everything else.<br />
<br />
The first generation of these solutions show the immense promise of public VR. VR Coaster, for instance, has deployed GearVR-based roller coaster experiences in over 20 parks. These use a special tracking system to determine the position of each car at any time. The Void and Zero Latency use backpack PCs to allow guests to explore unique spaces. Talon Simulations provides flight and driving simulations in malls. IMAX opened a VR center where guests can try a variety of 10-minute VR experiences. Most of these first-generation solutions use consumer-grade hardware. Operators realize that many problems still need solutions. At the same time, consumers learn what they like and dislike in current solutions.<br />
<br />
HMD vendors are also rising to the challenge. ImmersiON-VRelia has developed phone-based goggles that feature a reinforced plastic construction. Sensics released goggles with detachable face mask to address both hygiene and operational efficiency.<br />
<br />
What's missing? Low-latency wireless video solutions to get rid of cables. Faster ways to adjust goggles for guest. Better methods to clean goggles between guests. Phones the don't overheat. Multi-user experiences with a stronger social aspect. The passage of time to see what works and what doesn't. VR standards to help integrate new devices into compelling experiences.<br />
<br />
I am excited what public VR experiences could be. My excitement comes both from a user standpoint - these experiences are fun! - but also from a problem-solver view - the problems are challenging.<br />
<br />
Try these experiences next time you can. Going to a movie theater provides a different experience than watching at home. Going through a good public VR experience is beyond what VR at home provides.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-4090390782100803652017-04-03T13:42:00.004-04:002017-04-03T13:42:31.525-04:00Suffering, Art and VR Standards<br />Think about a great work of art: a classic book, a timeless painting, a symphonic masterpiece. What's common to many of these creations?<br /><br />They were all the result of great suffering.<br /><br />Tolstoy, Van Gogh, Mozart - they did not have easy lives. Many of the greats suffered from oppression, mental or physical illness, or hunger. <br /><br /><br />If you don't have drama in your life, how could you summon drama for your art?<br /><br />People ask me "what made you want to work on VR standards?" My answer: it's the suffering.<br /><br />No, not my personal suffering. I'm no Amaedus or have never considered cutting off my earlobe to express love.<br /><br />But in many years of working with customers on their VR systems, I saw a lot of technical suffering:<br /><br />The suffering of integrators that need to chase the latest API again and again. That don't know if the equipment they design for today will be available to buy in a year. <br /><br />The suffering of device manufacturers that need just one more driver to support them.<br /><br />The suffering of end-users that wonder if today's software will work on tomorrow's devices.<br /><br />That's why we need efforts like <a href="http://osvr.github.io/" target="_blank">OSVR</a> or OpenXR to make it easy for everyone to work together. It wouldn't be as timeless or profound as "War and Peace", but it will help a lot of people.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-54206208377787210392017-04-01T08:35:00.000-04:002017-04-01T08:35:07.988-04:00Unholy AllianceA few weeks ago, we were approached by a VR porn site seeking a partnership.<br />
<br />
It is no secret that adult entertainment is an early adopter of many new technologies and virtual reality is no exception.<br />
<br />
We respectfully declined as we decided a long time ago that we won't participate in this market.<br />
<br />
Besides, I'm no longer as good looking as I used to be.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-27757782901264572442017-03-27T07:46:00.000-04:002017-03-27T16:31:35.849-04:00Virtual Reality Standards: too early or long overdue?<div style="color: #0d0d0d; font-family: Merriweather, Georgia, "Times New Roman", serif; letter-spacing: -0.7px; line-height: 28.8px; margin-bottom: 1.1em; margin-top: 1.1em; text-align: center; widows: 1;">
<span style="font-size: x-small;">This article originally appeared on <a href="http://readwrite.com/2017/03/22/virtual-reality-standards-too-early-or-long-overdue-dl1/" target="_blank">Mar 22nd at ReadWrite</a></span></div>
You bought a new printer for the office. You unpack and connect it to your PC. You install its demonstration software and see the printer works well. Then, an unfortunate surprise: your word processor cannot work with this printer. You'll need to wait until the maker of the word processor releases a new version. When will this version be available? Should you return the printer? Should you change to a different word processor?<br /><br />If this sounds like the 1980's, it's not too far from where VR is today. VR programs are often hard-coded to one set of hardware devices. Only a particular HMD, coupled with its tracking system and controller will work. Every device manufacturer has a different API. If you want to use this VR experience with a different set of hardware, you might not be in luck. At best, you'll need another version. Worst case, you just won't be able to do it. The problem of different vendors having different APIs is often called 'API fragmentation'<br /><br />VR standards can help solve this fragmentation problem. In the PC world there are a few basic device types: keyboard, mouse, printer, scanner, and so forth. Likewise, basic VR device types include an HMD, tracker, controller, and a few others. A standard way for a program to interface with these VR devices would help solve this problem.<br /><br />If software could work across different hardware combinations, almost everyone would benefit:<br /><ul>
<li>Consumers could mix and match devices to their liking. I may have a Dell computer but I don't always want a Dell printer to go with it. Furthermore, consumers would be confident that their investments are future-proof. A 2017 game would likely work with 2018 or 2019 hardware. </li>
<li>Game publishers and other experience creators would have a larger addressable market. Today, they hard-code their game to a particular set of hardware devices. Tomorrow, they would support any device that has a conforming 'driver'. </li>
<li>Manufacturers that support a standard would have their devices work with lots of content. This would allow even small players to enter the market, and promote innovation. </li>
</ul>
Some claim it is too early for a VR standard. VR is new, they say. Let a couple of years pass and then we will know what to standardize. Yet while consumer VR is new, VR has been in academia and industry for decades. Labs and factories deployed HMDs, head and eye trackers, and controllers for many years. Such devices were expensive and perhaps more clunky, but still performed the same functions. Software frameworks like UNC's VRPN (http://www.vrpn.org) provided device independent access for many years.<br /><br />The resistance to a standard sometimes stems from the competitive strategy of a company. A vendor relying on a 'walled garden' approach often wishes to control the entire stack. The ability to swap out hardware, or use a different app store might be not what they had in mind.<br /><br />In VR, there are often two standard interfaces that need to defined. The first is the device interface. This defines how to configure devices of particular type and how to extract data from them. Printers have different capabilities but share the same basic functions. The same is true for VR devices. The second standard interface is the application interface. It describes how an application or a game engine renders its content and get data. Inbetween the applications and devices there is often a middleware layer. That middleware is the software intermediary between applications and devices.<br /><br />One effort that adapts this approach is OSVR. Started by Sensics and Razer, it is an open-source software platform for VR. OSVR implements both a device interface layer as well as an application layer. OSVR supports over 200 devices, and most of the OSVR code is free and open-source.<br /><br />Another effort is OpenVR which is an open API (though not open source) from Valve. Building on the success of SteamVR, OpenVR allows HMDs to work with SteamVR content. There is some compatibility between these efforts. An OSVR plugin for OpenVR, allowing OSVr devices to work with SteamVR content.<br /><br />In January, the Khronos group (known for OpenGL standards) launched a new VR initiative. The initiative, called OpenXR, brings together a wide range of companies. Industry leaders including Google, Oculus, Valve, Sensics and Samsung are part of this effort. OpenXR aims to combine lessons learned from building OSVR, OpenVR and proprietary APIs. It aims to create both a device interface as well as application interface. It is unclear how soon this effort will mature. Khronos standards take an average of 18 months. It is also unclear what capabilities will be part of the first standard. What is clear is that these companies felt enough pain to want to work on standards.<br /><br />I am encouraged that so many participants are coming together to work on a standard. Other interested parties are also invited to contribute. Standards are sometimes boring, but they are important. They will make the consumer experience better and promote innovation. <div style="color: #0d0d0d; font-family: Merriweather, Georgia, "Times New Roman", serif; letter-spacing: -0.7px; line-height: 28.8px; margin-bottom: 1.1em; margin-top: 1.1em; widows: 1;">
<span style="font-size: x-small; letter-spacing: -0.7px; text-align: center;">This article originally appeared on </span><a href="http://readwrite.com/2017/03/22/virtual-reality-standards-too-early-or-long-overdue-dl1/" style="font-size: small; letter-spacing: -0.7px; text-align: center;" target="_blank">Mar 22nd at ReadWrite</a></div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-26053733201400315642016-10-19T13:19:00.000-04:002016-10-19T13:19:31.353-04:00Peeking inside the Sensics Goggles for Public VR<figure class="graf graf--figure" name="2c46" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><div class="graf graf--p" name="cf90">
Earlier this week, we made the Sensics <a class="markup--anchor markup--p-anchor" data-href="http://sensics.com/portfolio-posts/goggles-for-public-vr/" href="http://sensics.com/portfolio-posts/goggles-for-public-vr/" target="_blank">Goggles for Public VR</a> available for purchase on the <a class="markup--anchor markup--p-anchor" data-href="https://osvrstore.com/products/pre-production-goggles-for-public-vr" href="https://osvrstore.com/products/pre-production-goggles-for-public-vr" target="_blank">OSVR Store</a>. This is a limited pre-production run as we gear up for production of larger quantities.</div>
<div class="graf graf--p" name="9ba5">
We designed this product to address the needs of those that operate VR in public places such as theme parks, entertainment venues and shopping malls.</div>
<div class="graf graf--p" name="4f48">
Goggles for public VR have different requirements than goggles for home use just like an exercise treadmill at a gym or health club needs to be different than a treadmill at home. Specifically, goggles for public VR need to be:</div>
<ul class="postList">
<li class="graf graf--li" name="04eb">Durable, so that they withstand use by a large number of people. Unlike users of a VR goggle at home, users of a VR goggle at a public place might care less about handling it carefully.</li>
<li class="graf graf--li" name="c5d8">Easy to clean, so that every user can get a clean, fresh feeling when wearing the goggles, regardless of who wore it before them.</li>
<li class="graf graf--li" name="a19a">Easy to maintain, in case something breaks.</li>
<li class="graf graf--li" name="9934">Designed to allow maximum throughput of guests so as to maximize the number of people that can experience the attraction.</li>
</ul>
<div class="graf graf--p" name="928d">
At the same time, the visual experience needs to be at least as good as goggles for home use because guests typically expect an experience beyond what they can get at home.</div>
<div class="graf graf--p" name="08d4">
To achieve these goals, we used mass-produced 2160x1200 90 Hz OLED screens, high-quality dual-element optics with individual focusing mechanism, accurate 9-axis orientation tracker and incorporated them into a novel, patent-pending design. Below are some of the highlights of this design. To illustrate them, we mostly use the CAD drawings because they make it easy to show internal parts.</div>
<div class="graf graf--p" name="48c0">
Here is CAD model of the entire unit (each part is colored differently in this model to make them stand out) next to the actual unit:</div>
<figure class="graf graf--figure" name="68ca"><img class="graf-image" data-height="325" data-image-id="0*iQf12m2e3BUBXI9o.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*iQf12m2e3BUBXI9o.jpg" /></figure><figure class="graf graf--figure" name="9ecb"><img class="graf-image" data-height="267" data-image-id="0*RvO3-4gGvYtKB7K7.png" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*RvO3-4gGvYtKB7K7.png" /></figure><figure class="graf graf--figure" name="fa66"><img class="graf-image" data-height="325" data-image-id="0*mybajFK4FeUS5Ywz.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*mybajFK4FeUS5Ywz.jpg" /></figure><div class="graf graf--p" name="eaf5">
The back side of the unit has a cable clip to allow easy insertion and removal of cables as required. This ensures that cables don’t get in the way of the user.</div>
<figure class="graf graf--figure" name="918c"><img class="graf-image" data-height="325" data-image-id="0*jobzoAvjs395EXQq.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*jobzoAvjs395EXQq.jpg" /></figure><div class="graf graf--p" name="5205">
The front of the unit includes a window that is transparent to IR. This allows inclusion of a Leap Motion camera inside the unit to facilitate natural interaction with the hands. The fact that the controller is embedded inside the goggle eliminates the need to route cables externally. This approach is superior to external mounting of the controller because when mounted externally, the controller might be easier to detach from the goggles. Note that in the CAD model, the IR window has been removed so that the Leap Motion unit is clearly visible.</div>
<figure class="graf graf--figure" name="f191"><img class="graf-image" data-height="325" data-image-id="0*0VbNDniQv-Ze5Hfd.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*0VbNDniQv-Ze5Hfd.jpg" /></figure><div class="graf graf--p" name="ae0b">
To fit a wide range of users, the goggles were designed with adjustable optics. These allow people that normally wear eyeglasses to take them off and still see an excellent picture. Individual knobs — highlighted by the arrows in the CAD drawing from a bottom view perspective<br />- allow focusing of each eye independently. It is also possible to design optics that have a large enough <a class="markup--anchor markup--p-anchor" data-href="http://sensics.com/key-parameters-optical-designs/" href="http://sensics.com/key-parameters-optical-designs/" target="_blank">eye relief</a> to accommodate glasses but we chose adjustments in this particular design.</div>
<figure class="graf graf--figure" name="844e"><img class="graf-image" data-height="325" data-image-id="0*41hTXXjQARFRLHzt.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*41hTXXjQARFRLHzt.jpg" /></figure><div class="graf graf--p" name="869e">
The face mask — the part that touches the user’s face — is easily removable and replaceable. It is designed with a groove (not shown in the picture) that allows an operator to quickly and accurately replace the mask when needed without requiring any special tools.</div>
<figure class="graf graf--figure" name="4747"><img class="graf-image" data-height="325" data-image-id="0*ww48Pw4M0639hg1v.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*ww48Pw4M0639hg1v.jpg" /></figure><div class="graf graf--p" name="380d">
VR experiences can be very intense. For instance, guests to <a class="markup--anchor markup--p-anchor" data-href="http://sensics.com/sensics-helps-zero-latency-power-the-worlds-first-multiplayer-free-roam-virtual-reality-experience/" href="http://sensics.com/sensics-helps-zero-latency-power-the-worlds-first-multiplayer-free-roam-virtual-reality-experience/" target="_blank">SEGA Joypolis</a> run around in a special warehouse and shoot zombies. It is important to keep these guests cool and dry. That’s why we the public VR goggles include dual silent fans that whisk away humidity and heat.</div>
<div class="graf graf--p" name="c7aa">
The diagram has arrows pointing to an air vent (one in each side) and the holes through which it exits the goggles. An important feature in the Sensics design is the ability to separate the “passive part” of the goggles (facemask and head strap) from the “active part” (electronics, optics, etc.). This feature provides several important benefits:</div>
<ol class="postList">
<li class="graf graf--li" name="1de5">It allows guests to don the passive part while waiting in line. They can adjust the fit to their heads, and make sure the strap is comfortable. While doing so, the front part of the passive unit is completely open so guests can still see the real world, take a selfie with the strap. Only when the activity is about to begin does the operator attach the active part to the passive part.</li>
<li class="graf graf--li" name="e156">It permits various cleaning strategies for the passive part — the part that touches the head. For instance, an attraction operator can have many more passive parts than active parts and then clean the passive parts in batch at the end of the day.</li>
<li class="graf graf--li" name="7c12">Separating the face mask from the active part of the goggles allows for multiple sizes of the face mask to fit kids, different facial structures and so forth.</li>
</ol>
<div class="graf graf--p" name="389e">
The two parts of the goggles — active and passive — are illustrated in the photos below by Sensics team member Yaron Kaufman.</div>
<figure class="graf graf--figure" name="80cb"><img class="graf-image" data-height="300" data-image-id="0*vGrqw__DURgQxMgf.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*vGrqw__DURgQxMgf.jpg" /></figure><figure class="graf graf--figure" name="8a13"><img class="graf-image" data-height="300" data-image-id="0*wqc1qbMWY_UvCCSD.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*wqc1qbMWY_UvCCSD.jpg" /></figure><figure class="graf graf--figure" name="2c46"><img class="graf-image" data-height="325" data-image-id="0*RgQIWhKwrZkAbE1M.jpg" data-width="400" draggable="false" src="https://cdn-images-1.medium.com/max/800/0*RgQIWhKwrZkAbE1M.jpg" /></figure><div class="graf graf--p" name="a8fc">
Detaching the passive part from the active part is done by pressing two button — one in each side of the goggles. The button is shown as yellow highlighted by the arrow in the diagram. The clasp holding the two parts together is made of metal, and thus designed for numerous grab/release cycles.</div>
<div class="graf graf--p" name="3412">
<br /></div>
<div class="graf graf--p" name="3412">
Two additional parts are highlighted in the diagram: configurable buttons on the top right side of the goggles serve as programmable user-interface controls. This could be to increase/decrease volume, to pause the game, select a menu item or any other function. The mechanical design allows for one, two or three buttons per the preferences of the customer.</div>
<div class="graf graf--p" name="ac3a">
Last, an audio output jack appear on the bottom. The goggles can also support a permanent audio solution which attaches where the large yellow ellipse is shown in the diagram to the right. We put a lot of thought into designing this product. We hope you will get a chance to try it and appreciate its suitability to public VR applications.</div>
</figure><div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-38569499132597204982016-08-09T00:22:00.001-04:002016-08-09T00:22:15.626-04:00Why did Sensics launch the OSVR Store?<br /><br />Last week, the <a href="http://www.osvrstore.com/">OSVR Store</a> came on-line. It offers a range of OSVR-related products, services, accessories and components. It also contains useful information, most of it adopted from this blog.<br /><br />But why did the Sensics team launch it?<br /><br />The first answer that comes to mind is “to make money”. That’s an obvious reason, as Sensics is a for-profit company. We invest a lot in developing OSVR and would love to see returns on our investments.<br /><br />But that’s not the only reason, nor perhaps the most important one. Here are some others.<br /><br />We wanted the OSVR Store to be helpful to the VR enthusiast and hacker. That’s why we offer components: optics, tracking boards from various vendors, IR camera. More components are coming. Some will use those to upgrade an existing system, others to build a new one.<br /><br />We wanted a place for hardware developers, a platform to market their innovations. If you make something OSVR-related, we invite you to sell it on the OSVR Store. It can be an OSVR-supported HMD. It can be an accessory or component that can help OSVR users. It can even be OSVR-related services. We strive to offer fair and simple terms. If you can build it, we can help you promote it. Drop us a note at hello@osvrstore.com to get started.<br /><br />To me, OSVR has always been about choice. About democratizing VR. Not forcing users to buy everything from the same vendor. Encouraging applications to run on many devices. Support more than one operating system.<br /><br />The OSVR Store is one more way to give everyone choice. <a href="http://www.osvrstore.com/">Check it out</a>.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-72042157694301159362016-08-01T22:14:00.002-04:002016-08-01T22:14:25.267-04:00OSVR - a Look Ahead<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Introduction</h2>
OSVR is an open source software platform and VR goggle. Sensics and Razer launched OSVR 18 months ago with the intent of democratizing VR. We wanted to provide an open alternative to walled-garden, single-device approaches.<br /><br />It turns out that others share this vision. We saw exponential growth in participation in OSVR. Acer, NVIDIA, Valve, Ubisoft, Leap Motion and many others joined the ecosystem. The OSVR goggle – called the Hacker Development Kit – has seen several major hardware improvements. The founding team and many other contributors expanded the functionality of the OSVR software.<br /><br />I’d like to describe how I hope to see OSVR develop given past and present industry trends.<div style="background-color: white; border: 0px; color: #bfbfbf; font-family: Montserrat; font-size: 14px; line-height: 26px; margin-bottom: 20px; padding: 0px; vertical-align: baseline; word-wrap: break-word;">
<br /></div>
<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Increased Device Diversity leads to more Choices for Customers</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Trends</h3>
An avalanche of new virtual reality devices arrived. We see goggles, motion trackers, haptics, eye trackers, motion chairs and body suits. There is no slowdown in sight: many new devices will launch in the coming months. What is common to all these devices? They need software: game engine plugins, compatible content and software utilities. For device manufacturers, this software is not a core competency but ‘a necessary evil’. Without software, these new devices are almost useless.<br /><br />At the same time, content providers realize it’s best not to limit the their content to one device. The VR market is too small for that. The more devices you support, the largest your addressable market becomes.<br /><br />With such rapid innovation, what was the best VR system six months ago is anything but that today. The dream VR system might be a goggle from one vendor, input devices from another and tracking from a third. Wait another six months and you’ll want something else. Does everything need to come from the same vendor? Maybe not. The lessons of home electronics apply to VR: you don’t need a single vendor to make all your devices.<br /><br />This ‘mix and match’ ability is even more critical for enterprise customers. VR arcades, for instance, might use custom hardware or professional tracking systems. They want a software environment that is flexible and extensible. They want an environment that supports ‘off-the-shelf’ products yet extends for ‘custom’ designs.<div>
<span style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em;"><br /></span></div>
<div>
<span style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em;">OSVR Implications</span><br /><br />OSVR already supports hundreds devices. The up-to-date list is here: <a href="http://osvr.github.io/compatibility/">http://osvr.github.io/compatibility/</a> . Every month, device vendors, VR enthusiasts and the core OSVR team add new devices. Most OSVR plugins (extension modules) are open-sourced. Thus, it is often possible to use an existing plugin as baseline for a new one. With every new device, we come closer towards achieving universal device support.<br /><br />A key OSVR goal is to create abstract device interfaces. This allows applications to work without regards to the particular device or technology choice. For example, head tracking can come from optical trackers or inertial ones. The option of a a “mix and match” approach overcomes the risk of a single vendor lock-in. You don’t change your word processor when you buy a new printer. Likewise, you shouldn’t have to change your applications when you get a new VR device.<br /><br />We try to make it easy to add OSVR support to any device. We worked with several goggle manufactures to create plugins for their products. Others did this work themselves. Once such a plugin is ready, customers instantly gains access to all OSVR content. Many game engines – such as Unity, Unreal and SteamVR- immediately support it.<br /><br />The same is also true for input and output peripherals such as eye trackers and haptic devices. If developers use an API from one peripheral vendor, they need to learn a new API for each new device. If developers use the OSVR API, they don’t need to bother with vendor-specific interfaces.<br /><br />I would love to see more enhancements to the abstract OSVR interfaces. They should reflect new capabilities, support new devices and integrate smart plugins.</div>
<div>
<br /><br /><h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
More People Exposed to more VR Applications in More Places</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Trends</h3>
Just a few years ago, the biggest VR-centric conference of the year had 500 attendees. Most attendees had advanced computer science degrees. My company was one of about 10 presenting vendors. Today, you can experience a VR demo at a Best Buy. You can use a VR device on a roller coaster. With a $10 investment, you can turn your phone into a simple VR device.<br /><br />In the past, to set up a VR system you had to be a geek with plenty of time. Now, ordinary people expect to do it with ease.<br /><br />More than ever, businesses are experimenting with adopting VR. Applications that have always been the subject of dreams of are becoming practical. We see entertainment, therapy, home improvement, tourism, meditation, design and many other applications.<br /><br />These businesses are discovering that different applications have different hardware and software requirements. A treadmill at home is not going to survive the intensive use at a gym. Likewise, a VR device designed for home use is not suitable for use in a high-traffic shopping mall. The computing and packaging requirements for these applications are different from use to use. Some accept a high-end gaming PC, while others prefer inexpensive Android machines. I expect to see the full gamut of hardware platforms and a wide variety of cost and packaging options.</div>
<div>
<br /><h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
OSVR Implications</h3>
“Any customer can have a car painted any color that he wants so long as it is black”, said Henry Ford. I’d like to see a different approach, one that encourages variety and customization.<br /><br />On the hardware side, Sensics is designing many products that use OSVR components. For instance, our “Goggles for public VR” use OSVR parts in an amusement park goggle. We also help other companies use OSVR components inside their own packages. For those that want to design their own hardware, the OSVR goggle is a good reference design.<br /><br />On the software side, I would like to see OSVR expand to support more platforms. I’d like to see better Mac support and more complete coverage of Android and Linux platforms. I’d like to see VR work well on mid-range PCs and not limited to the newest graphics cards. This will lower the barriers to experience good VR and bring more people into the fold. I’d like to see device-specific optimizations to make the most of available capabilities. The OpenCV image processing library has optimizations for many processors. OSVR could follow a similar path.<br /><br />Additionally, it is important to automate or at least simplify the end-user experience. Make it as close to plug-and-play as possible . The task of identifying available devices and configuring them should be quick and simple.<br /><br />Simplicity is not limited to configuration. We’d like to see easier ways to choose, buy and deploy software.<div style="background-color: white; border: 0px; color: #bfbfbf; font-family: Montserrat; font-size: 14px; line-height: 26px; margin-bottom: 20px; padding: 0px; vertical-align: baseline; word-wrap: break-word;">
<br /></div>
<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Reducing Latency is Becoming Complex</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Trends</h3>
Presence in VR requires low latency, and reducing latency is not easy. Low latency is also not the result of one single technique. Instead, many methods work together to achieve the desired result. Asynchronous time warp modifies the image just before sending it to the display. Predictive tracking lowers perceived latency by estimating future orientation. Direct mode bypasses the operating system. Foveated rendering reduces render complexity by understanding eye position. Render masking removes pixels from hidden areas in the image.<br /><br />If this sounds complex, it is just the beginning. One needs to measure optical distortion and correct it in real-time. Frame rates continue to increase, thus lowering the available time to render a frame. Engines can optimize rendering by using similarities between the left- and right-eye images. Techniques that used to be exotic are now becoming mainstream.<br /><br />A handful of companies have the money and people to master all these techniques. Most other organizations prefer to focus on their core competencies. What should they do?</div>
<div>
<br /><h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
OSVR implications</h3>
A key goal of OSVR is to “make hard things easy without making easy things hard”. The OSVR Render Manager examplifies this. OSVR makes these latency-reduction methods available to everyone. We work with graphics vendors to achieve direct mode through their API. We work with game engines to provide native integration of OSVR into their code.<br /><br />I expect the OSVR community to continue to keep track of the state of the art, and improve the code-base. Developers using OSVR can focus away from the plumbing of rendering. OSVR will continue to allow developers to focus on great experiences.</div>
<div>
<br /><br /> <h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
The Peripherals are Coming</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Trends</h3>
A PC is useful with a mouse and keyboard. Likewise, A goggle is useful with a head tracker. A PC is better when adding a printer, a high-quality microphone and a scanner. A goggle is better with an eye tracker, a hand controller and a haptic device. VR peripherals increase immersion and bring more senses into play.<br /><br />In a PC environment, there are many ways to achieve the same task. You select an option using the mouse, the keyboard, by touching the screen, or even with your voice. In VR, you can do this with a hand gesture, with a head nod or by pressing a button. Applications want to focus on what you want to do rather than how you express your wishes.<br /><br />More peripherals mean more configurations. If you are in a car racing experience, you’d love to use a rumble chair if you have it. Even though Rumble chairs are not commonplace, there are several types of them. Applications need to be able to sense what peripherals are available and make use of them.<br /><br />Even a fundamental capability like tracking will have many variants. Maybe you have a wireless goggle that allows you to roam around. Maybe you sit in front of a desk with limited space. Maybe you have room to reach forward with your hands. Maybe you are on a train and can’t do so. Applications can’t assume just one configuration.</div>
<div>
<br /><h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
OSVR implications</h3>
OSVR embeds Virtual Reality Peripheral Network (VRPN), an established open-source library. Supporting many devices and focusing on the what, not the how is in our DNA.<br /><br />I expect OSVR to continue to improve its support for new devices. We might need to enhance the generic eye tracker interface as eye trackers become more common. We will need to look for common characteristics of haptics devices. We might even be able to standardize how vendors specify optical distortion.<br /><br />This is a community effort, not handed down from some elder council in an imperial palace. I would love to see working groups formed to address areas of common interest.<div style="background-color: white; border: 0px; color: #bfbfbf; font-family: Montserrat; font-size: 14px; line-height: 26px; margin-bottom: 20px; padding: 0px; vertical-align: baseline; word-wrap: break-word;">
<br /></div>
<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Turning Data into Information</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Trends</h3>
A stream of XYZ hand coordinates is useful. Knowing that this stream represents a ‘figure 8’ is more useful. Smart software can turn data into higher-level information. Augmented reality tools detect objects in video feeds. Eye tracking software converts eye images into gaze direction. Hand tracking software converts hand position into gestures.<br /><br />Analyzing real-time data gets us closer to understanding emotion and intent. In turn, applications that make use if this information can become more compelling. A game can use gaze direction to improve the quality of interaction with a virtual character. Monitoring body vitals can help achieve the desire level of relaxation or excitement.<br /><br />As users experience this enhanced interaction, they will demand more of it.</div>
<div>
<br /><h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
OSVR Implications</h3>
Desktop applications don’t have code to detect a mouse double-click. They rely on the operating system to convert mouse data into the double-click event. OSVR needs to provide applications with both low-level data and high-level information.<br /><br />In “OSVR speak”, an analysis plugin is the software that converts data into information. While early OSVR work focused on lower-level tasks, several analysis plugins are already available. For example, DAQRI integrated a plugin that detects objects in a video stream.<br /><br />I expect many more plugins will become available. The open OSVR architecture opens plugin development to everyone. If you are an eye tracking expert, you can add an eye tracking plugin. If you have code that detects gestures, it is easy to connect it to OSVR. One might also expect a plugin marketplace, like an asset store, to help find and deploy plugins.<div style="background-color: white; border: 0px; color: #bfbfbf; font-family: Montserrat; font-size: 14px; line-height: 26px; margin-bottom: 20px; padding: 0px; vertical-align: baseline; word-wrap: break-word;">
<br /></div>
<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Augmenting Reality</h2>
<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
Market trends</h3>
Most existing consumer-level devices are virtual reality devices. Google Glass has not been as successful as hoped. Magic Leap is not commercial yet. Microsoft Hololens kits are shipping to developers, but are not priced for consumers yet.<br /><br />With time, augmented-reality headsets will become consumer products. AR products share many of the needs of their VR cousins. They need abstract interfaces. They need to turn data into information. They need high-performance rendering and flexible sensing.<h3 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 30px; line-height: 1.2em; margin: 0px 0px 30px; word-wrap: break-word;">
OSVR Implications</h3>
The OSVR architecture supports AR just as it supports VR. Because AR and VR have so much in common, many components are already in place.<br /><br />AR devices are less likely to tether to a Windows PC. The multi-platform and multi-OS capabilities of OSVR will be an advantage. Wherever possible, I hope to continue and see a consistent cross-platform API for OSVR. This will allow developers to tailor deployment options to the customer needs.<div style="background-color: white; border: 0px; color: #bfbfbf; font-family: Montserrat; font-size: 14px; line-height: 26px; margin-bottom: 20px; padding: 0px; vertical-align: baseline; word-wrap: break-word;">
<br /></div>
<h2 style="background-color: white; color: #222222; font-family: Montserrat; font-size: 38px; line-height: 1.2em; margin: 0px 0px 38px; word-wrap: break-word;">
Summary</h2>
<br /><br />We designed OSVR to provide universal connectivity between engines and devices. OSVR makes hard things easy so developers can focus on fantastic experiences, not plumbing. It is open so that the rate of innovation is not constrained by a single company. I expect it to be invaluable for many years to come. Please join the OSVR team and myself for this exciting journey.<br /><br />To learn more about our work in OSVR, please <a href="http://sensics.com/portfolio-posts/osvr-open-source-virtual-reality/">visit this page</a><br /><br /> <br /><br /> <br /><br /><div style="text-align: center;">
This post was written by Yuval Boger, CEO of Sensics and co-founder of <a href="http://www.osvr.org/">OSVR</a>. Yuval and his team designed the OSVR software platform and built key parts of the OSVR offering.</div>
</div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-10143422176293249592016-07-21T14:37:00.000-04:002016-07-21T14:37:20.664-04:00Key Parameters for Optical DesignsAt <a href="http://www.sensics.com/" target="_blank">Sensics</a>, we completed many optical designs for VR over the years, and are busy these days with new ones to accommodate new displays and new sets of requirements. For those thinking about optics, here is a collection of some important parameters to consider, when focusing in optical systems for VR.<br />
<br />
<span style="text-decoration: underline;"><strong>Field of View</strong></span>: typically measured in degrees, the field of view defines what is the horizontal, vertical and diagonal extent that can be viewed at any given point. This is often specified as a monocular (single eye) field of view, but it is also customary to specify the binocular field of view and thus the <a href="http://sensics.com/what-is-binocular-overlap-and-why-should-you-care/">binocular overlap</a><br />
<br />
<div style="text-align: left;">
<span style="text-decoration: underline;"><strong>Eye relief</strong></span>: typically measured in millimeters, the eye relief indicates the distance between the eye and the closest optical element as seen in the illustration below. </div>
<div style="text-align: left;">
<br /></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/eyeRelief.jpg" style="margin-left: auto; margin-right: auto;"><img alt="Eye Relief" class="wp-image-26787" src="http://sensics.com/wp-content/uploads/2016/07/eyeRelief-300x280.jpg" height="420" width="450" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small;">Illustration of eye relief</span></td></tr>
</tbody></table>
<div style="text-align: center;">
<div style="text-align: left;">
<br /></div>
</div>
<div style="text-align: left;">
Regular eyeglasses have an eye relief of about 12mm.
Advantages of larger eye relief:
</div>
<ul>
<li>If the optics are too close to the eye, they generate discomfort such as when the eyelashes touch the optics.</li>
<li>If the eye relief is large enough, the system might be able to accommodate people wearing glasses without the need to provide a focusing mechanism to compensate for not having glasses</li>
</ul>
Disadvantages of larger eye relief:
<br />
<ul>
<li>The total depth of the optical system (distance from eye to screen) becomes larger and the overall system potentially more cumbersome.</li>
<li>The minimal diameter first optical element is dictated by a combination of the desired field of view and eye relief. Larger eye relief requires the lens to be wider and thus likely heavier.</li>
</ul>
<span style="text-decoration: underline;"><strong>Eye box</strong></span>: often specified in millimeters, the eye box determines how much the eye can move up/down/left/right from the optimal position without significant degradation in the image quality. Some optical systems such as rifle scopes have very narrow eye box because they want to 'force' the eye to be in the optimal position. Other optical systems, such as HMDs used in soldier training, might desire larger eye boxes to allow the trainee to see a good image even as the HMD moves on the head while the trainee is running. The image quality at the optimal position is most always best, but if the eye box is too narrow, the user will not obtain a good image without tedious adjustments.
For instance, the diagram below shows the simulation results of an optical design at the nominal eye position (left) and at 4 mm away from the optimal position:<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/eyeboxDemo.jpg" style="margin-left: auto; margin-right: auto;"><img alt="Eye box simulation" class="wp-image-26788 size-full" src="http://sensics.com/wp-content/uploads/2016/07/eyeboxDemo.jpg" height="405" width="682" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Comparing optical quality at a distance away from the optimal eye position</span></td></tr>
</tbody></table>
<br />
<strong><span style="text-decoration: underline;">Material and type of lens</span>: </strong>a lens is typically made from optical-grade plastic or from glass. There are hundreds of different optical-grade glass types but only about a dozen optical-grade plastic material. Different material provide different light bending properties (e.g. <a href="https://en.wikipedia.org/wiki/Refractive_index" target="_blank"><em>index of refraction</em></a>) so it is quite common that multi-element optical systems are made with more than one material. Glass is typically heavier, more expensive to mold, but has greater variety, provides better surface quality and is often physically harder (e.g. more resistant to scratches). Plastic is cheaper and lighter. Additional lens types and non-linear optical elements such as <a href="http://sensics.com/the-promise-and-perils-of-using-fresnel-lenses-2/">Fresnel Lenses</a> and <a href="https://en.wikipedia.org/wiki/Polarizer" target="_blank">polarizers</a> are also available.<br />
<br />
<strong><span style="text-decoration: underline;">Distortion</span>:</strong> <a href="http://sensics.com/what-is-geometric-distortion-and-why-should-you-care/">optical distortion</a> is one type of imperfection in an optical design. Distortion causes straight lines not being seen as straight lines when viewed through the optics. An example of this is shown below.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/03/IMG_14651.jpg" style="margin-left: auto; margin-right: auto;"><img alt="Optical distortion" class="wp-image-25788 size-medium" src="http://sensics.com/wp-content/uploads/2016/03/IMG_14651-225x300.jpg" height="300" width="225" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: start;">Optical distortion</span></td></tr>
</tbody></table>
<div style="text-align: left;">
Distortion is reported in percentage units. If a pixel is placed at a distance of 100 pixels (or mm or degrees or inches or whichever unit you prefer) and appears as if it at a distance of 110, the distortion at that particular point is (110-100)/100 = 10%. During the process of optical design, distortion graphs are commonly viewed during the iterations of the design. For instance, consider the distortion graph below for a design with 96 degrees field of view (2 x 48): </div>
<div style="text-align: left;">
<br /></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/sampleDistortionCurve.jpg" style="margin-left: auto; margin-right: auto;"><img alt="Sample Distortion Curve" class="wp-image-26795 size-full" src="http://sensics.com/wp-content/uploads/2016/07/sampleDistortionCurve.jpg" height="292" width="289" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Distortion graph</span></td></tr>
</tbody></table>
<div style="text-align: center;">
<div style="text-align: left;">
<br /></div>
</div>
<div style="text-align: left;">
The graph shows, for instance, that at 30 degrees away from the center, distortion is still about 2-3%, but at 40 degrees away from the center it increases to about 8 percent. The effect of distortion is sometimes shown in a distortion grid shown below. If the optical design was perfect and had no distortion, each blue cross would line up perfectly at the grid intersection points. </div>
<div style="text-align: left;">
<br /></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/DistortionGrid.jpg" style="margin-left: auto; margin-right: auto;"><img alt="Distortion Grid" class="wp-image-26796 size-medium" src="http://sensics.com/wp-content/uploads/2016/07/DistortionGrid-272x300.jpg" height="300" width="272" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Distortion Grid</span></td></tr>
</tbody></table>
<div style="text-align: left;">
<span style="text-align: left;">Sometimes, distortion is monotonic, meaning that it gradually increases as one moves towards the edge. Non-monotonic distortion can cause the appearance of a 'bubble' if not corrected. </span></div>
<div style="text-align: left;">
<span style="text-align: left; text-decoration: underline;"><strong><br /></strong></span></div>
<div style="text-align: left;">
<span style="text-align: left;"></span><span style="text-align: left; text-decoration: underline;"><strong>Chromatic aberration</strong></span><span style="text-align: left;">: Just like white light breaks into various colors when passing through a prism, an optical system might behave differently for different wavelengths/colors. This could cause color breakup. It is useful to explore how much the system is 'color corrected' so as to minimize this color breakup. The image below shows a nice picture at the center of the optical system but fairly significant color breakup at the edges. </span></div>
<div style="text-align: left;">
<span style="text-align: left;"><br /></span></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/colorBreakup.jpg" style="margin-left: auto; margin-right: auto; text-align: left;"><img alt="Color breakup" class="wp-image-26798 size-full" src="http://sensics.com/wp-content/uploads/2016/07/colorBreakup.jpg" height="353" width="450" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><span style="font-size: small; text-align: left;">Color breakup</span></td></tr>
</tbody></table>
<div style="text-align: left;">
<strong style="text-align: left;"><span style="text-decoration: underline;">Relative illumination</span>: </strong><span style="text-align: left;">the ability of an optical system to collect light can change throughout the image. Consider a uniformly-lit surface that is viewed through an optical system. Often, the perceived brightness at the center of the optics is the highest and it drops is one moves towards the edges. This is numerically expressed as relative illumination such as the graph below. While the human eye has amazing dynamic range, non-monotonic illumination can cause the appearance of dark or bright 'rings' in the image. </span><br />
<div style="text-align: start;">
<span style="text-align: left;"><br /></span></div>
<div style="text-align: start;">
<span style="text-align: left;"><br /></span></div>
<div style="text-align: center;">
<div style="text-align: left;">
<br /></div>
</div>
<div style="text-align: center;">
<div style="text-align: left;">
<br /></div>
</div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/RelativeIllumination.jpg" style="margin-left: auto; margin-right: auto; text-align: left;"><img alt="Relative Illumination" class="wp-image-26800" src="http://sensics.com/wp-content/uploads/2016/07/RelativeIllumination-300x168.jpg" height="252" width="450" /></a></td></tr>
<tr><td class="tr-caption" style="font-size: 12.8px;"><span style="font-size: small; text-align: left;">Relative Illumination</span></td></tr>
</tbody></table>
<div style="text-align: center;">
<div style="text-align: left;">
<br /></div>
</div>
<div style="text-align: justify;">
<span style="text-align: left;"></span><span style="text-align: left; text-decoration: underline;"><strong>Spot size</strong></span><span style="text-align: left;">: imagine a screen with a pattern of tiny dots. In a perfect world, all dots would appear with the same size and no smear when looking through the optical system. In reality, the dot size typically increases as one moves away from the center. The numerical measurement of this is the spot size and diagrams indicating the spot size at different points through the optics often look something like this:</span></div>
<div style="text-align: center;">
<span style="text-align: left;"><br /></span></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://sensics.com/wp-content/uploads/2016/07/spotSize.jpg" style="margin-left: auto; margin-right: auto; text-align: left;"><img alt="Spot size" class="wp-image-26802 size-full" src="http://sensics.com/wp-content/uploads/2016/07/spotSize.jpg" height="352" width="466" /></a></td></tr>
<tr><td class="tr-caption" style="font-size: 12.8px;"><span style="font-size: small;">Spot Size</span></td></tr>
</tbody></table>
<div style="text-align: left;">
<strong style="text-align: left;"><span style="text-decoration: underline;">Other characteristics</span>:</strong><span style="text-align: left;"> depending on the desired use case, there are often size, weight and cost limitations that need to be considered to narrow the range of acceptable solutions to the specifications. Just like it is easier to fit a higher-degree polynomial to a set of data points because more terms provide additional degrees of freedom, it is easier to achieve the a set of desired optical parameters with additional lenses (or more precisely with additional surfaces), but extra lenses often add cost, size and weight. </span></div>
<div style="text-align: center;">
<strong style="text-align: left;"><span style="text-decoration: underline;"><br /></span></strong></div>
<div style="text-align: left;">
<span style="text-align: left;"></span><strong style="text-align: left;"><span style="text-decoration: underline;">Putting it all together</span></strong><span style="text-align: left;">: it is practically impossible to find a car that is inexpensive, has amazing fuel efficiency, offers fantastic acceleration, seats 7 people and is very pleasing to the eye. Similarly, it is difficult to design an optical system that has no distortion, provides wide field of view, large eye box, costs next to nothing and is very thin. When contracting the design of an optical system, it is useful to define all desired characteristics but specify which parameters are key and which parameters are less important.</span></div>
</div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com2tag:blogger.com,1999:blog-7595096811901096571.post-42129165410384520832016-06-22T09:00:00.000-04:002016-06-22T09:00:10.930-04:00The Three (and a half) Configurations of Eye TrackersEye tracking could become a critical sensor in HMDs. In previous posts such as <a href="http://sensics.com/understanding-foveated-rendering/">here</a>, <a href="http://sensics.com/vrguy-podcast-episode-13-dr-christian-lange-cso-of-ergoneers-on-eye-tracking/">here</a> and <a href="http://sensics.com/how-eye-tracking-can-impact-head-tracking/">here</a> we discussed some of the ways that eye trackers could be useful as input devices, as ways to reduce rendering load and more.
But how are eye trackers installed inside an HMD? An appropriate placement of the eye tracking camera gives a quality image of the eye regardless of the gaze direction. If the eye image is bad, the tracking quality will be bad. It's truly a 'garbage in, garbage out' situation.
The three typical ways to install a camera are:
<br />
<ol>
<li>Underneath the optics</li>
<li>Combined with the optics via a hot mirror (or an internal reflection)</li>
<li>Inside the optics.</li>
</ol>
In this post, we describe these configurations.
<br />
<h4>
Underneath the optics</h4>
<a href="http://sensics.com/wp-content/uploads/2015/07/ErgoneersIntegration.jpg"><img alt="Eye tracking" class="alignright wp-image-24580" src="http://sensics.com/wp-content/uploads/2015/07/ErgoneersIntegration-300x149.jpg" height="297" width="600" /></a>
This configuration is illustrated in the image on the right, which shows the Sensics zSight HMD with an integrated Ergoneers eye tracker. The tracker is the small camera that is visible underneath the left eyepiece.
The angle in which the camera is installed is important. A camera that is perpendicular - practically looking into the eye - will typically get an excellent image. If the camera angle is steep, the anatomy of the eye - eyelids, eyelashes, inset eyes - gets in the way of getting a good image.
If the eye relief (distance from cornea to first element of the optics) is small, the camera will need to be placed at a steeper angle than if the eye relief was large. If the diameter of the optics are large, the camera would need to be placed lower and thus at a steeper angle than if the diameter of the optics is smaller.
If the user wears glasses, an eye tracker that is placed underneath the optics might "see" the frame of the glasses instead of the eye.
Having said that, the advantage of this approach is that it does not place many constraints on the optics. Eye tracker cameras could usually be added below optics that were not designed to accommodate eye tracking.
<br />
<h4>
Eye tracker that is combined with the optics</h4>
<a href="http://sensics.com/wp-content/uploads/2016/06/HotMirrorWithRays.jpg"><img alt="HotMirrorWithRays" class="alignright wp-image-26678 size-full" src="http://sensics.com/wp-content/uploads/2016/06/HotMirrorWithRays.jpg" height="414" width="697" /></a>Eye tracking cameras are often infra-red cameras that look at IR light that is reflected off the eye. As such, eye tracking cameras don't need visible light. This allows using what is called a hot mirror: a mirror that reflects IR light yet passes visible light.
Consider the optical system shown to the right (copyright Sensics). Light from the screen (right side) passes through a lens, a hot mirror and another lens and reaches the eye. In contrast, if the eye is lit by an IR light source, IR light coming back from the eye is reflected off the hot mirror towards the upper part of the optical system. If a camera is placed there, it can have an excellent view of the eye without interfering with the optical quality.
This configuration also gives more flexibility with regards to the camera being used. For instance, a larger camera (perhaps with very high frame rate) would not be feasible if placed under the optics. However, when placed separately from the optical system such as above the mirror, it might fit.
The downside of this configuration, other than the need to add the hot mirror, is that the optical system needs to leave enough room for the hot mirror and this introduces a mechanical constraint that limits the options of the optical designer.
<a href="http://sensics.com/wp-content/uploads/2016/06/ReflectionWithRays.jpg"><img alt="ReflectionWithRays" class="alignright wp-image-26679 size-full" src="http://sensics.com/wp-content/uploads/2016/06/ReflectionWithRays.jpg" height="415" width="694" /></a>A variation on this design (what I referred in the title as "the half" configuration is having the IR light reflect off one of the optical surfaces, assuming this surface is coated with an IR-reflective coating. You can see this in the configuration on the right (also copyright Sensics). An optical element is curved and the IR light reflects off it into the camera. The image received by the camera might be somewhat distorted, but since that image is processed by an algorithm, that algorithm could compensate for the image distortion.
This solution removes the need for a hot mirror but does require that there is a lens that is shaped in a way to reflect the IR light into the camera. It also requires the additional expense of an IR coating.
<br />
<h4>
Eye tracker integrated with the optics</h4>
<a href="http://sensics.com/wp-content/uploads/2015/07/dSightErgoneers.jpg"><img alt="dSight with Ergoneers eye tracker" class="alignright wp-image-25284" src="http://sensics.com/wp-content/uploads/2015/07/dSightErgoneers-300x169.jpg" height="338" width="600" /></a>The third configuration is even simpler. A miniature camera is used. A small hole is drilled through the optics and the camera is placed through it. The angle and location of the camera is balanced between getting an optical image of the eye and the need to not introduce a significant visual distraction.
This is shown on the right as part of the eye tracking option of the Sensics dSight. This configuration gives excellent flexibility with regards to camera placement, but does introduce some visual distraction and requires careful drilling of a hole through the optics.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com3tag:blogger.com,1999:blog-7595096811901096571.post-77408446767469254232016-06-16T09:00:00.000-04:002016-06-16T09:00:01.797-04:00Notes from the Zero Latency Free-Roam VR Gameplay<div class="separator" style="clear: both; text-align: center;">
<a href="http://sensics.com/wp-content/uploads/2016/06/ZeroLatencyZombie-768x432.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://sensics.com/wp-content/uploads/2016/06/ZeroLatencyZombie-768x432.png" height="225" width="400" /></a></div>
I spent this past weekend in Australia working with Sensics customer Zero Latency towards their <a href="http://sensics.com/sensics-helps-zero-latency-power-the-worlds-first-multiplayer-free-roam-virtual-reality-experience/">upcoming VR deployment at SEGA's Joypolis </a>park in Tokyo. As part of the visit, I had the chance to go through the Zero Latency "Zombie Outbreak" experience and I thought I would share some notes from it.
Zero Latency has been running this experience for quite some time and have had nearly 10,000 paying customers do it. The experience is about 1-hour long including about 10 minutes of pre-game briefing and equipment setup, 45 minutes of play and 5 minutes to take the equipment off and get the space ready for the next group. There are 6 customer slots per hour and everyone plays together in the same space at the same time. To date, Zero Latency has opened this to customers for about 29 hours a week - mostly on weekends - but will now be adding weeknights for a total of 40 game hours per week. A ticket costs 88 Australian dollars (about 75 US dollars) and there is typically a 6-week waiting list to get in.
The experience is located in the Zero Latency office, a converted warehouse in the north side of Melbourne Australia. Most of the warehouse is taken up by the rectangular game space, about 15 x 25 meters (50 x 80 feet), or 375 m² (4000 sq ft) to be precise. The rest of the warehouse is used for two floors engineering and administrative offices. One can peak through the office windows at the customers playing and during the day you can constantly hear the shouts of excitement, squeals of joy and screams of horror coming from the game space.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://sensics.com/wp-content/uploads/2016/06/12919768_597359477080355_6730711490669403606_n1-768x421.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://sensics.com/wp-content/uploads/2016/06/12919768_597359477080355_6730711490669403606_n1-768x421.jpg" height="218" width="400" /></a></div>
I had a chance to go through the game twice: once with a group of Zero Latency employees before the space was opened to customers, and once as the 6th man of a 5-person group of paying customers late night.
Once customers come in they are greeted by a 'game master' that provides a pre-mission briefing, explains the rules and provides explanation on the gaming gun. The gun can switch between a semi-automatic rifle and a shotgun. It has a trigger, a button to switch modes, a reload button and a pump to load bullets into the shotgun and load grenades when in rifle mode. I found the gun to be comfortable and balanced, and it seems that is has undergone many iterations before arriving in the current form. Players wear a backpack that includes a lightweight Alienware portable computer, a battery and a control box. The HMD and the gun have lighted spheres on them - reminiscent of the PlayStation Move - that are used to track the players and the weapons throughout the space. Players also wear Razer headsets that provide two-way audio so that players can easily communicate with each other as well as hear instructions from the game master.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://sensics.com/wp-content/uploads/2016/06/13428570_622895444526758_8599332009795572409_n1-768x513.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://sensics.com/wp-content/uploads/2016/06/13428570_622895444526758_8599332009795572409_n1-768x513.jpg" height="266" width="400" /></a></div>
The game starts with a few minutes of acclimation where players walk across the space to virtual shooting range and spend a couple of minutes getting comfortable with operating their weapons. The game then starts. It is essentially a simple game - players fight their way through the space while shooting zombies and other menacing characters, some of which shoot back at you. Every few minutes, players switch scenes by going through an elevator or teleportation waypoints, circles on the ground where each of the six players has to stand before the next scene can be reached. Sometimes you fight in an urban setting, sometimes on a rooftop, inside a cafeteria and so forth. Zombies can be killed by a direct shot to the head or multiple shots to the body. The players can also be killed, but then return to the game after about 10 seconds of appearing as a 'ghost'. Game 'power ups' are sometimes found through the space. For instance, during my gameplay I found an AK-47 assault rifle and later a heavy machine gun. At the end of the game, each player is shown their score and ranking, where the score is calculated based on the number of kills and the number of player deaths. That score sheet is emailed to players and is available for later viewing on the Web.
The graphics are fine and an attacking zombie is quite compelling when it is right in your face, but the things I truly found compelling in the game are not so much the graphics and gameplay but rather a few other things:
<br />
<ul>
<li>Free-roam VR is great. The large space offers fantastic freedom of movement. You can see players move throughout the space, duck to take cover, turn around quickly with no hesitation at all. This generates an excellent feeling of immersion. You can truly feel that you could hide behind corners or walk anywhere with no apparent limitations. Of course, every space has physical limitations and Zero Latency has implemented a system where if you get too close to a player or a wall, something like a radar appears on your screen showing you at the center and the obstacles (players, walls) on it so that you know how to avoid them. If you get too close, the game pauses until you are farther away. This felt very natural. Throughout nearly two hours of active gameplay I think I brushed once or twice against another player but no more than that, even though players were in close proximity. Immersion is such that players don't notice people that are not players around them. In the current Zero Latency office, the bathroom for the office (the "Loo" in "Australian") is right across from the playing space so to get there you can either take a detour walking alongside the walls or go straight through the playing area where the players couldn't care less because they don't even know that you are walking by.</li>
</ul>
<ul>
<li>The social aspect is very compelling. This game is not about 6 individuals playing separately in a space. It is about 6 players acting as a team within the space. You can definitely hear "you take the right corridor and I'll take the left", or "watch your back" or "I need some help here!" shouts from one player to another. Players that work individually have little chance to stop the zombie invasion coming from all directions, but playing together gives you that chance.</li>
<li>Tracking - for both the head and the weapon - are very smooth to the point where you don't think about it. Because multiple players are tracked in the space you can see their avatars around you (sometimes with name tags). The graphics of players walking in the game need some work in my opinion, but you can clearly see where everyone is and what they are doing.</li>
<li>45 minutes of game play go by very quickly and the game masters control the pace very well. As you can imagine, some groups take longer than others to get to the next waypoints, and the game uses waiting for elevators or helicopters as a way to condense or extend the total time. For instance, once you arrive in the cafeteria a sign shows up that the elevator will arrive in 100 seconds. I would imagine that if a group arrived earlier, they would have to wait longer for the elevator or if a group took more time, they would wait less.</li>
</ul>
The space itself is essentially empty save for the overhead tracking cameras. Thus, the same space the tracking system can be used for many different experiences. Unfortunately, we had to work from time to time and I did not have a chance to try some of the newer experiences that Zero Latency is working on, especially since the space was occupied by customers most of the time. I'm certainly looking forward to coming there again and continue to save the world.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-14844184383154975542016-06-06T10:00:00.000-04:002016-06-06T10:00:01.036-04:00Understanding Pixel Density and Eye-Limiting ResolutionIf the human eye was a digital camera, it's "data sheet" would say that it has a of 60 pixels/degree at the fovea (the part of the retina where the visual acuity is highest). This is called eye-limiting resolution.<br />
<br />
This means that if there an image with 3600 pixels (60 x 60) and that image fell on a 1° x 1° area of the fovea, a person would not be able to tell it apart from an image with 8100 pixels (90 x 90) that fell on a 1° x 1° area of the fovea.<br />
<br />
<em>Note 1: 60 pixels per degree figure is sometimes expressed as "1 arc-minute per pixel". Not surprisingly, an arc-minute is an angular measurement defined as 1/60th of a degree.</em><br />
<br />
<em>Note 2: this kind of calculation is the basis for what Apple refers to as a "retina display", a screen that when held at the right distance would generate this kind of pixel density on the retina.</em><br />
<br />
If you have a VR goggle, you can calculate the <em>pixel density</em> - how many pixels per degree if presents the eye - by dividing the number of pixels in a horizontal display line by the horizontal field of view provided by the eyepiece. For instance, the Oculus DK1 (yes, I know that was quite a while ago) had 1280 x 800 pixels across both eyes, so 640 x 800 pixels per eye, and with a monocular horizontal field of view of about 90 degrees, it had a pixel density of 640 / 90 so just over 7 pixels/degree.<br />
<br />
Not to pile on the DK1 (it had many good things, though resolution was not one of them), 7 pixels/degree is the linear pixel density. When you think about it in terms of pixel density per surface area, is it not just 8.5 times worse than the human eye (60 / 7 = 8.5) but actually a lot worse (8.5 * 8.5 which is over 70).
The following table compares pixel densities for some popular consumer and professional HMDs:<br />
<br />
<table style="width: 100%px;">
<thead>
<tr>
<th style="text-align: left;">Product</th>
<th style="text-align: center;">Horizontal pixels
per eye</th>
<th style="text-align: center;">Approximate Horizontal Field of View
(degrees per eye)</th>
<th style="text-align: center;">Approximate Pixel Density
(pixels/degree)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Oculus DK1</td>
<td style="text-align: right;">640</td>
<td style="text-align: center;">90</td>
<td style="text-align: right;">7.1</td>
</tr>
<tr>
<td style="text-align: left;">OSVR HDK</td>
<td style="text-align: right;">960</td>
<td style="text-align: center;">90</td>
<td style="text-align: right;">10.7</td>
</tr>
<tr>
<td style="text-align: left;">HTC VIVE</td>
<td style="text-align: right;">1080</td>
<td style="text-align: center;">90</td>
<td style="text-align: right;">12.0</td>
</tr>
<tr>
<td style="text-align: left;">Sensics dSight</td>
<td style="text-align: right;">1920</td>
<td style="text-align: center;">95</td>
<td style="text-align: right;">20.2</td>
</tr>
<tr>
<td style="text-align: left;">Sensics zSight</td>
<td style="text-align: right;"> 1280</td>
<td style="text-align: center;"> 48</td>
<td style="text-align: right;">26.6</td>
</tr>
<tr>
<td style="text-align: left;">Sensics zSight 1920</td>
<td style="text-align: right;"> 1920</td>
<td style="text-align: center;"> 60</td>
<td style="text-align: right;"> 32.0</td>
</tr>
<tr>
<td style="text-align: left;">Human fovea</td>
<td></td>
<td></td>
<td style="text-align: right;"> 60.0</td>
</tr>
</tbody>
</table>
<br />
Higher pixel density allows you to see finer details - read text; see the grain of the leather on a car's dashboard; spot a target at a greater distance - and in general contribute to an increasingly realistic image.<br />
<br />
Historically, one of the things that separated professional-grade HMDs from consumer HMDs was that the professional HMDs had higher pixel density.
Let's simulate this using the following four images. Let's assume that the first image, taken from Unreal Engine's <em>Showdown</em> demo, is shown at full 60 pixels/degree density. We can then re-sample it at half the pixel density - simulating 30 pixels/degree - and then half again (resulting in 15 pixels/degree) and half again (7,5 pixels/degree). Notice the stark differences as we go to lower and lower pixel densities.<br />
<a href="http://sensics.com/wp-content/uploads/2016/05/ShowdownFull.jpg"><img alt="ShowdownFull" class="aligncenter wp-image-26494 size-full" src="http://sensics.com/wp-content/uploads/2016/05/ShowdownFull.jpg" height="360" width="640" /></a><br />
<div style="text-align: center;">
Full resolution (simulating 60 pixels/degree)</div>
<div style="text-align: center;">
<br /></div>
<a href="http://sensics.com/wp-content/uploads/2016/05/ShowdownHalf-1.jpg"><img alt="ShowdownHalf-1" class="aligncenter size-full wp-image-26495" src="http://sensics.com/wp-content/uploads/2016/05/ShowdownHalf-1.jpg" height="360" width="640" /></a>
<br />
<div style="text-align: center;">
Half resolution (simulating 30 pixels/degree)</div>
<a href="http://sensics.com/wp-content/uploads/2016/05/ShowdownQuarter-1.jpg"><img alt="ShowdownQuarter-1" class="aligncenter size-full wp-image-26496" src="http://sensics.com/wp-content/uploads/2016/05/ShowdownQuarter-1.jpg" height="360" width="640" /></a>
<br />
<div style="text-align: center;">
Simulating 15 pixels/degree</div>
<div style="text-align: center;">
<br /></div>
<a href="http://sensics.com/wp-content/uploads/2016/05/ShowdownEight-1.jpg"><img alt="ShowdownEight-1" class="aligncenter size-full wp-image-26493" src="http://sensics.com/wp-content/uploads/2016/05/ShowdownEight-1.jpg" height="362" width="640" /></a>
<br />
<div style="text-align: center;">
Simulating 7.5 pixels/degree</div>
<div style="text-align: center;">
<br /></div>
Higher pixel density for the visual system is not the same as higher pixel density for the screen because pixels on the screen are magnified through the optics. The same screen could be magnified differently with two different optical systems resulting in different pixel densities presented to the eye. It is true, though, that given the same optical system, higher pixel density of pixels on the screen does translate to higher pixel density presented to the eye.
As screens get better and better, we will get increasingly closer to eye-limiting resolution in the HMD and thus to essentially photo-realistic experiences.<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com1tag:blogger.com,1999:blog-7595096811901096571.post-55287646791910923342016-05-31T09:30:00.000-04:002016-05-31T09:30:29.768-04:00How binocular overlap impacts horizontal field of viewIn a previous post, we discussed <a href="http://sensics.com/what-is-binocular-overlap-and-why-should-you-care/">binocular overlap</a> which increases overall horizontal (and diagonal) field of view. HMD manufacturers sometimes create partially overlapped systems (e.g. overlap less than 100%) to increase the overall horizontal field of view.<br />
<br />
For example, imagine an eyepiece that provides a 90 degree horizontal field of view that subtends from 45° to the left to 45° to the right. If both left and right eyepieces point at the same angle, the overall horizontal field of view of the goggles is also from 45° to the left to 45° to the right, so a total of 90 degrees. When both eyepieces cover the same angles, as in this example, we call this 100% overlap.<br />
<br />
But now lets assume that the left eyepiece is rotated a bit to the left so that it subtends from 50° to the left and 40° to the right. The monocular field of view is unchanged at 90°. If the right eye is symmetrically moved, it now covers from 40° to the left to 50° to the right. In this case, the binocular (overall) horizontal field of view is 100°, so a bit larger than in the 100% case, and the overlap is 80° (40° to the left to 40° to the right) or 80/90=88.8%<br />
<br />
The following tables provide a useful reference to see how to percent of binocular overlap impacts the horizontal (and thus also the diagonal) field of view. We provide two tables, one for displays with a 16:9 aspect ratio (such as 2560x1440 or 1920x1080) and the other for 9:10 aspect ratio (such as the 1080x1200 display in the HTC VIVE). Click on them to see a larger version.<br />
<br />
For instance, if we look at the 16:9 table we can read through an example of a 90° diagonal field of view, which would translate into 82.1° horizontal and 52.2° vertical if the entire screen was visible. Going down the table we can see that at 100% overlap, the binocular horizontal field of view remains the same, e.g. 82.1° and the diagonal also remains the same. However, if we chose 80% binocular overlap, the binocular horizontal field of view grows to 98.6°, vertical stays the same and diagonal grows to 103.2°<br />
<a href="http://sensics.com/wp-content/uploads/2016/05/overlap-for-16-9-aspect-ratio.jpg"><img alt="overlap for 16-9 aspect ratio" class="aligncenter wp-image-26487 size-full" src="http://sensics.com/wp-content/uploads/2016/05/overlap-for-16-9-aspect-ratio.jpg" height="274" width="640" /></a><br />
<br />
<a href="http://sensics.com/wp-content/uploads/2016/05/overlap-for-9-10-aspect-ratio.jpg"><img alt="overlap for 9-10 aspect ratio" class="aligncenter size-full wp-image-26486" src="http://sensics.com/wp-content/uploads/2016/05/overlap-for-9-10-aspect-ratio.jpg" height="274" width="640" /></a><br />
<br />
For those interested, the exact math is below:
<img alt="overlap equations" class="aligncenter wp-image-26488 size-full" src="http://sensics.com/wp-content/uploads/2016/05/overlap-equations.jpg" height="490" width="550" /><div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-22869425972753256882016-05-08T19:28:00.001-04:002016-05-08T19:28:42.893-04:00Understanding Predictive Tracking<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMuqwh9rGyT6KrE2GGrTLC6deEMMWumK86NH7uQfHR1Sj64pxAA325Ia0o2_dIu7vvL-4JwLjwKK36JEpWVmfEM7rsuN3LggcjiLac4_mG7rFZoO4OfFb7cL9mM6qAa0p4dAdozULwjvk/s1600/abfilter.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="192" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMuqwh9rGyT6KrE2GGrTLC6deEMMWumK86NH7uQfHR1Sj64pxAA325Ia0o2_dIu7vvL-4JwLjwKK36JEpWVmfEM7rsuN3LggcjiLac4_mG7rFZoO4OfFb7cL9mM6qAa0p4dAdozULwjvk/s320/abfilter.png" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Image source: Adrian Boeing blog</td></tr>
</tbody></table>
In the context of AR and VR systems, predictive tracking refers to the process of predicting the future orientation and/or position of an object or body part. For instance, one might want to predict the orientation of the head or the position of the hand.<br />
<br />
<h4>
Why is predictive tracking useful?</h4>
One common use of predictive tracking is to reduce the apparent "motion to photon" latency, meaning the time between movement and when that movement is reflected in the drawn scene. Since there is some delay between movement and an updated display (more on the sources of that delay below), using an estimated future orientation and position as the data used in updating the display, could shorten that perceived latency.<br />
<br />
While a lot of attention has been focused on predictive tracking in virtual reality applications, it is also very important in augmented reality. For instance, if you are displaying a graphical overlay to appear on top of a physical object that you see with an augmented reality goggles, it is important that the overlay stays on the object even when you rotate your head. The object might be recognized with a camera, but it takes time for the camera to capture the frame, for a processor to determine where the object is in the frame and for a graphics chip to render the new overlay. By using predictive tracking, you can get better apparent registration between the overlay and the physical object.<br />
<br />
<h4>
How does it work? </h4>
If you saw a car travelling at a constant speed and you wanted to predict where that car will be one second in the future, you could probably make a fairly accurate prediction. You know the current position of the car, you might know (or can estimate) the current velocity, and thus you can extrapolate the position into the near future.<br />
<br />
Of course if you compare your prediction with where the car actually is in one second, your prediction is unlikely to be 100% accurate every time: the car might change direction or speed during that time. The farther out you are trying to predict, the less accurate your prediction will be: predicting where the car will be in one second is likely much more accurate than predicting where it will be in one minute.<br />
<br />
The more you know about the car and its behavior, the better chance you have of making an accurate prediction. For instance, if you were able to measure not only the velocity but also the acceleration, you can make a more accurate prediction.<br />
<br />
If you have additional information about the behavior of the tracked body, this can also improve prediction accuracy. For instance, when doing head tracking, understand how fast the head can possibly rotate and what are common rotation speeds, can improve the tracking model. Similarly, if you are doing eye tracking, you can use the eye tracking information to anticipate head movements as discussed <a href="http://vrguy.blogspot.com/2014/11/how-eye-tracking-can-impact-head.html" target="_blank">in this post</a><br />
<br />
<h4>
Sources of latency</h4>
<div>
The desired to perform predictive tracking comes from having some latency between actual movement and displaying an image that reflects that movement. Latency can come from multiple sources, such as:</div>
<div>
<ul>
<li><b>Sensing delays</b>. The sensors (e.g. gyroscope) may be bandwidth-limited and do not instantaneously report orientation or position changes. Similarly, camera-based sensors may exhibit delay between when the pixel on the camera sensor receives light from the tracked object to that frame being ready to be sent to the host processor.</li>
<li><b>Processing delays</b>. Sensors are often combined using some kind of sensor fusion algorithm, and executing this algorithm can add latency.</li>
<li><b>Data smoothing.</b> Sensor data is sometimes noisy and to avoid erroneous jitter, software or hardware-based low pass algorithms are executed.</li>
<li><b>Transmission delays.</b> For example, if orientation sensing is done using a USB-connected device, there is some non-zero time between the data available to be ready by the host processor and the time data transfer over USB is completed.</li>
<li><b>Rendering delays.</b> When rendering a non-trivial scene, it takes some time to have the image ready to be sent to the display device.</li>
<li><b>Frame rate delays</b>. If a display is operating at 100 Hz, for instance, there is a 10 mSec time between successive frames. Information that is not precisely current to when a particular pixel is drawn may need to wait until the next time that pixel is drawn on the display.</li>
</ul>
<div>
Some of these delays are very small, but unfortunately all of them add up and predictive tracking, along with other techniques such as <a href="http://vrguy.blogspot.com/2016/04/time-warp-explained.html" target="_blank">time warping</a>, are helpful in reducing the apparent latency.</div>
</div>
<div>
<br /></div>
<h4>
How much to track into the future?</h4>
<div>
In two words: it depends. You will want to estimate the end-the-end latency of your system as a starting point and then optimize them to your liking.</div>
<div>
<br /></div>
<div>
It may be that you will need to predict several timepoints into the future at any given time. Here are some examples why this may be required:</div>
<div>
<ul>
<li>There are objects with different end-to-end delays. For instance, a hand tracked with a camera may be have different latency than a head tracker, but both need to be drawn in sync in the same scene, so predictive tracking with different 'look ahead' times will be used.</li>
<li>In configurations where a single screen - such as a cell phone screen - is used to provide imagery to both eyes, it is often the case that the image for one eye appears with a delay of half a frame (e.g. half of 1/60 seconds, or approx 8 mSec) relative to the other eye. In this case, it is best to use predictive tracking that looks ahead 8 mSec more for that delayed half of the screen.</li>
</ul>
</div>
<h4>
Common prediction algorithms</h4>
<div>
Here is some sampling of predictive tracking algorithms:</div>
<div>
<ul>
<li><b>Dead reckoning</b>. This is a very simple algorithm: if the position and velocity (or angular position and angular velocity) is known at a given time, the predicted position assumes that the last know position and velocity are correct and the velocity remains the same. For instance, if the last known position is 100 units and the last known velocity is 10 units/sec, then the predicted position 10 mSec (0.01 seconds) into the future is 100 + 10 x 0.01 = 100.1. While this is very simple to compute, it assumes that the last position and velocity are accurate (e.g. not subject to any measurement noise) and that the velocity is constant. Both these assumptions are often incorrect.</li>
<li><b>Kalman predictor</b>. This is based on a popular Kalman filter that is used to reduce sensor noise in systems where there exists a mathematical model of the system's operation. See <a href="https://en.wikipedia.org/wiki/Kalman_filter" target="_blank">here</a> for more detailed explanation of the Kalman filter.</li>
<li><b>Alpha-beta-gamma</b>. The ABG predictor is closely related to the Kalman predictor, but is less general and has simpler math, which we can explain here at a high level. ABG tries to continuously estimate both velocity and acceleration and use them in prediction. Because the estimates take into account actual data, they provide some measurement noise reduction. Configuring the parameters (alpha, beta and gamma) provide the ability to emphasize responsiveness as opposed to noise reduction. If you'd like to follow the math, here it goes:</li>
</ul>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZL72DsWPYVIxz1zLOzfOTr47qTzA68C7IQkNVDKjBw26lg4DsaCfD0nRxNtt4IY3x-etyyUS71VtNQbOGqt68TSu4JZU7UYHJNOrULjWIau7ReymOToF2v2cmDgD7QARhxHCs3pTpKWg/s1600/alpha-beta-gamma+formulas.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="456" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZL72DsWPYVIxz1zLOzfOTr47qTzA68C7IQkNVDKjBw26lg4DsaCfD0nRxNtt4IY3x-etyyUS71VtNQbOGqt68TSu4JZU7UYHJNOrULjWIau7ReymOToF2v2cmDgD7QARhxHCs3pTpKWg/s640/alpha-beta-gamma+formulas.JPG" width="640" /></a></div>
<div>
<br /></div>
<h4>
Summary</h4>
<div>
Predictive tracking is a useful and commonly-used technique for reducing apparent latency. It offers simple or sophisticated implementations, requires some thought and analysis, but it is well worth it.</div>
<br /><div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0tag:blogger.com,1999:blog-7595096811901096571.post-12888972213128379692016-04-30T16:12:00.002-04:002016-04-30T16:12:48.260-04:00VR and AR in 12 variations<span style="font-family: inherit;">I've been thinking about how to classify VR and AR headsets and am starting to look at them along three dimensions (no pun intended):</span><br />
<ol>
<li><span style="font-family: inherit;"><b>VR </b>vs <b>AR</b></span></li>
<li><span style="font-family: inherit;"><b>PC-powered</b> vs. <b>Phone-powered</b> vs. <b>Self-powered</b>. This looks at where the processing and video generation is coming from. Is it connected to a PC? Is it using a standard phone? Or does it embed processing inside the headset </span></li>
<li><span style="font-family: inherit;"><b>Wide field</b> <b>of view </b>vs. <b>Narrow FOV</b></span></li>
</ol>
<div>
<span style="font-family: inherit;">This generates a total of 2 x 3 x 2 = 12 options as follows</span><br />
<span style="font-family: inherit;"><br /></span>
<br />
<table border="0" cellpadding="0" cellspacing="0" class="MsoNormalTable" style="background: white; border-collapse: collapse; mso-padding-alt: 0in 0in 0in 0in; mso-yfti-tbllook: 1184; width: 660px;">
<tbody>
<tr style="height: 15.0pt; mso-yfti-firstrow: yes; mso-yfti-irow: 0;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 184.5pt;" width="246">
<table border="0" cellpadding="0" cellspacing="0" class="MsoNormalTable" style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; border-collapse: collapse; width: 660px;">
<tbody>
<tr style="height: 15.0pt; mso-yfti-firstrow: yes; mso-yfti-irow: 0;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><b>Configuration<o:p></o:p></b></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><b><br /></b></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal" style="text-align: center;">
<span style="color: #222222;"><span style="font-family: inherit;"><b>Example and typical use</b><o:p></o:p></span></span></div>
<div class="MsoNormal" style="text-align: center;">
<span style="color: #222222;"><span style="font-family: inherit;"><b><br /></b></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 1;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">1: VR, PC-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Examples: Oculus, HTC Vive, Sensics dSight, OSVR HDK. This immersive VR configuration is used in many applications, though the most popular one is gaming. One attribute that separated consumer-grade goggles like the HTC Vive from professional-grade goggles such as the <a href="http://sensics.com/portfolio-posts/dsight/" target="_blank">Sensics dSight</a> is pixel density: the number of pixels per degree. You can think about this as the diffence between watching a movie on a 50 inch standard-definition TV as opposed to a 50 inch HDTV.</span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td></tr>
<tr style="height: 15.0pt; mso-yfti-irow: 2;"><td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252"><div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">2: VR, PC-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Example: Sensics <a href="http://sensics.com/portfolio-posts/zsight-1920/" target="_blank">zSight 1920</a>. With a given number of pixels per eye, narrow-field systems allow for much higher pixel density, which allows observing fine details or very small objects. For instance, imagine that you are training to land a UAV. The first step in landing a UAV is spotting it in the sky. The higher the pixel density is, the farther out you can spot an object of a given size. The zSight 1920 has about 32 pixels/degree whereas a modern consumer goggle like the HTC Vive has less than half that.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 3;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">3: VR, Phone-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Examples: Samsung Gear VR, Google Cardboard, Zeiss VROne.. This configuration where the phone is inserted into some kind of holster is used for general-purpose mobile VR. The advantages of this configuration is its portability as well as its low cost - assuming you already own a compatible phone. The downside of this configuration is that the processing power of a phone is inferior to a high-end PC and thus the experience is more limited in terms of frame rate and scene complexity. Today's phones were not fully designed with VR in mind, so there are sometimes concerns about overheating and battery life.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 4;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">4: VR, Phone-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Example: LG 369 VR. In this configuration, the phone is not carried on the head but rather connected via a thin wire to a smaller unit on the head. The advantage of this configuration is that it can be very lightweight and compact. Also, the phone could potentially be used as an input pad. The downside is that the phone is connected via a cable. Another downside is often the cost. Because this configuration does not use the phone screens, it needs to include its own screens that might add to the cost. Another advantage is that the phone camera can not be used for video see-through or for sensing.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 5;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">5: VR, Self-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Examples: Gameface Labs, Pico Neo. These configurations aim for standalone, mobile VR without requiring the mobile phone. They potentially save weight by not using unnecessary phone components such as the casing and touch screen, but would typically be more expensive than phone based VR for those users that already own the phone. They might have additional flexibility with regards to which sensors to include, camera placement and battery location. They are more difficult to upgrade relative to a phone-based VR solution, but the fact that the phone cannot be taken out might be an advantage for applications such as public VR where a fully-integrated system that cannot be easily taken apart is a plus.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 6;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">6: VR, Self-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Example: <a href="http://smartgoggles.net/" target="_blank">Sensics SmartGoggles.</a> These configurations are less popular today. Even the Sensics SmartGoggles which included on-board Android processor as well as wide-area hand sensors was built with relatively narrow field of view (60 degrees) because of the components available at the time.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 7;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">7: AR, PC-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Example: Meta 2. In many augmented reality applications, people ask for wide field so that, for instance, a virtual object that appears overlaid on the real world does not disappear when the user looks to the side. This configuration may end up being transient because in many cases the value of augmented reality is in being able to interact with the real world, and the user's motion when tethered to a PC is more limited. However, one might see uses in applications such as engineering workstation.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 8;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">8: AR, PC-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;">I am not aware of good examples of this configuration. It combines the limit of narrow-field AR with the tether to the PC.</span></div>
<div class="MsoNormal">
<span style="color: #222222;"><br /></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 9;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">9: AR, Phone-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">This could become one of the standard AR configuration just like phone-powered, wide-field VR is becoming a mainstream configuration. To get there, the processing power and optics/display technology catch up with the requirements.<br /></td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 10;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">10: AR, Phone-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">Example: Seebright. In this configuration, a phone is worn on the head and its screen becomes the display for the goggles. Semi-transparent optics combine phone-generated imagery with the real world. I believe this is primarily a transient configuration until wide-field models appear.<o:p></o:p></span></span></div>
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;"><br /></span></span></div>
</td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 11;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">11: AR, Self-powered, Wide-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">I am unaware of current examples of this configuration though one would assume it could be very attractive because of the mobility on one hand and the ability to interact in a wide field of view.<br /></td>
</tr>
<tr style="height: 15.0pt; mso-yfti-irow: 12; mso-yfti-lastrow: yes;">
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 189.0pt;" width="252">
<div class="MsoNormal">
<span style="color: #222222;"><span style="font-family: inherit;">12: AR, Self-powered, Narrow-field<o:p></o:p></span></span></div>
</td>
<td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 4.25in;" width="408">
<div class="MsoNormal">
<span style="color: #222222; font-family: inherit;">Examples: Microsoft Hololens, Google glass, Vuzix M300. There are two types of devices here: one is an 'information appliance' like Google Glass, designed to provide contextually-relevant information without taking over the field of view. These configurations are very attractive in industrial settings for applications like field technicians, workers in a warehouse or even customer service representatives needing a mobile, wearable terminal often to connect with a cloud-based database. The second type of device, </span><span style="color: #222222;">exemplified</span><span style="color: #222222; font-family: inherit;"> by the Hololens seeks to augment the reality by placing virtual objects locked in space. I am sure the Hololens would like to be a wide-field model and it is narrow field at the moment because of the limitations of its current display technology</span></div>
</td>
</tr>
</tbody></table>
<div class="MsoNormal">
</div>
<div class="MsoNormal">
<br /></div>
</td><td style="height: 15.0pt; padding: 0in 0in 0in 0in; width: 310.5pt;" width="414"><br /></td></tr>
</tbody></table>
<table border="0" cellpadding="0" cellspacing="0" class="MsoNormalTable" style="background: white; border-collapse: collapse; mso-padding-alt: 0in 0in 0in 0in; mso-yfti-tbllook: 1184; width: 660px;"><tbody>
</tbody></table>
<table border="0" cellpadding="0" cellspacing="0" class="MsoNormalTable" style="background: white; border-collapse: collapse; mso-padding-alt: 0in 0in 0in 0in; mso-yfti-tbllook: 1184; width: 678px;"><tbody>
</tbody></table>
Looking forward to feedback and comments.<br />
<div class="MsoNormal">
<br /></div>
</div>
<div class="blogger-post-footer"><script src="http://www.google-analytics.com/urchin.js" type="text/javascript">
</script>
<script type="text/javascript">
_uacct = "UA-726889-1";
urchinTracker();
</script></div>VRGuyhttp://www.blogger.com/profile/02351122537042235636noreply@blogger.com0