Prof. Eric Hodgson
Eric, thank you for speaking with me. For those who do not know you, please tell us who you are and what you do.
I'm a psychologist and professor at Miami University (the original one in Ohio, not that *other* Miami). I split my time between the Psychology department -- where I use virtual environments to study spatial perception, memory, and navigation -- and the interdisciplinary Interactive Media Studies program, where I teach courses in 3D modeling, data visualization, and virtual reality development to students from all across the university. I help oversee two sister facilities at Miami, the HIVE (Huge Immersive Virtual Environment) and the SIVC (Smale Interactive Visualization Center). The HIVE is an HMD-based facility with an 1,100-square-meter tracking area. The SIVC houses a four-walled CAVE along with several other 3D projection systems, immersive desktops, development labs, and several motion-capture systems. The HIVE has been funded mostly by the National Science Foundation, the Army Research Office, and the Ohio Board of Regents. The Smale Center was started with a $1.75M gift from the late John Smale, a former CEO of Procter & Gamble, a company that uses CAVEs and other visualization systems in its R&D cycle.

You are the director of the Smale Interactive Visualization Center. What kind of work is being performed at the Center?
It's a multi-disciplinary center, and a large part of my job is to enable students and faculty from across the university to leverage VR for their work, especially if they don't have the skillset to do it themselves. We also work with regional, national, and international industry partners on specific projects. The work can vary widely, which I find interesting and encouraging -- VR is becoming a very general-purpose tool rather than a niche for a few narrow fields. One of our first projects was building an immersive, 3D mandala for the Dalai Lama for his visit to campus. We've also done motion capture of proper violin-playing arm motion for the music department, developed medical training simulations for the nursing program, developed experiments to study postural sway with psychology, done interactive virtual walk-throughs of student-designed architectural projects, supported immersive game development, and done work on developing next-generation motion sensing devices and navigation interfaces. Not to mention a 3D visualization of 18th-century poetry, which was a collaboration between the Center, the English department, Computer Science, and Graphic Design. I love my job. I also do a lot of tours, field trips, and workshops. When you have a CAVE, a zSpace, a pile of HMDs, and lots of other fun toys (um... I mean tools), you end up being a must-see stop on the campus tour.
A good portion of your research seems to be in the area of redirected walking. Can you explain, in layperson's terms, what redirected walking is?
In layman's terms, Redirected Walking is a way of getting people to walk in circles without realizing it, while visually it looks like they are walking in a straight line. Virtual environments and game levels can be very big; tracking spaces in a lab are typically pretty small. Redirected walking lets immersed users double back into the same physical space while traveling through a much larger virtual space. There are other techniques that can come into play, such as magnifying or compressing turns, or stretching distances somewhat, but the basic techniques are all aimed at getting people to double back into open physical space so they can keep walking in the virtual environment. It's a bit like the original holodeck on Star Trek... users go into a small room, it turns into an alternate reality, and suddenly they can walk for miles without hitting the walls.

What made you become interested in this area?
Necessity, mostly. I'm a psychologist, studying human spatial cognition and navigation. My colleagues and I use a lot of virtual environments and motion tracking to do our research. VEs allow us to have complete control over the spaces people are navigating, and we can do cool things like moving landmarks, de-coupling visual and physical sensory information, and creating geometrically impossible spaces for people to navigate through. Our old lab was a 10m x 10m room, with a slightly smaller tracking area. As a result, we were stuck studying, essentially, room-sized navigation. There are a lot of interesting questions we could address, though, if we could let people navigate through larger spaces. So, we outfitted a gymnasium (and later a bigger gymnasium) with motion tracking and called it the HIVE, for Huge Immersive Virtual Environment. We built a completely wearable rendering system with battery-powered HMDs, and voilà... we could study, um, big-room-sized navigation. Since that still wasn't enough, we started exploring Redirected Walking as a way to study truly large-scale navigation in VEs with natural walking.

It seems that one of the keys to successful redirection is providing visual cues that are imperceptible. Can you give us an example of the type of cues and their magnitude?
Some of the recent work we've done uses a virtual grocery store, so I'll use that as an example. Let's say you're immersed, and trying to walk down an aisle to get to the milk cooler. I can rotate the entire store around an axis that's centered on your head, slowly, so that you'll end up veering in the direction I rotate the store (really we manipulate the virtual camera, but the end result is the same as rotating the store). The magnitude of the rotation scales up with movement speed in our algorithm, so if you walk faster -- and thus create more optic flow -- I can inject a little bit more course correction. The rotations tend to be on the order of 8-10 degrees per second. By comparison, when you turn and look around in an HMD, you can easily move a few hundred degrees per second. You could detect these kinds of rotations easily if you were standing still, but while walking or turning there's enough optic flow, head bob, and jarring from foot impact that the adjustments get lost in all the movement. Our non-visual spatial senses (e.g., inertial sensing by the inner ear, kinesthetic senses from your legs, etc.) have just enough noise in them that the visuals still appear to match.
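To make that concrete, here is a minimal sketch of what speed-scaled rotation injection might look like in code. The constants and function names are illustrative assumptions, not the HIVE's actual implementation:

```python
# Illustrative constants -- assumed, not the lab's actual tuning values.
BASE_RATE_DEG_PER_S = 6.0   # rotation injected even at low walking speeds
SPEED_GAIN = 3.0            # extra degrees/second per m/s of walking speed

def redirection_yaw_step(walk_speed_mps: float, steer_sign: float, dt: float) -> float:
    """Yaw offset (in degrees) to add to the virtual camera this frame.
    steer_sign is +1.0 or -1.0 and picks which way to bend the user's path."""
    rate = BASE_RATE_DEG_PER_S + SPEED_GAIN * walk_speed_mps
    return steer_sign * rate * dt

# At a 1.2 m/s walk this injects ~9.6 deg/s -- inside the 8-10 deg/s range
# quoted above. Each frame's offset rotates the scene about the user's head.
yaw = 0.0
for _ in range(60):                       # one second at 60 Hz
    yaw += redirection_yaw_step(1.2, +1.0, dt=1.0 / 60.0)
print(f"injected over one second: {yaw:.1f} degrees")
```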
Are all the cues visual, or are there auditory or other cues that can be used?

Right now the cues we use are primarily visual, and to a lesser extent auditory, but it's easy to add in other senses if you have the capability. Since 3D audio emanates from a particular location in the virtual world, not the physical world, rotating the visuals brings all of the audio sources along with them. An active haptics display could work the same way, or a scent generator. Redirected walking essentially diverts the virtual camera, so any multisensory display that makes calculations based on your position in the virtual world will still work as expected. Adding more sensory feedback just reinforces what your eyes are seeing and should strengthen the effect.
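A hedged sketch of why this works: if redirection is applied as a single rotation about the user's head, every subsystem that queries virtual-world positions through that same transform sees a consistent world. The function below is illustrative, not the lab's code:

```python
import math

def redirected_position(virtual_xz, head_xz, yaw_deg):
    """Rotate a virtual-world point about the user's head by the accumulated
    redirection yaw. If the renderer, the 3D audio engine, and any haptic or
    scent display all query positions through this one transform, they stay
    consistent with one another automatically."""
    s = math.sin(math.radians(yaw_deg))
    c = math.cos(math.radians(yaw_deg))
    dx = virtual_xz[0] - head_xz[0]
    dz = virtual_xz[1] - head_xz[1]
    return (head_xz[0] + c * dx - s * dz,
            head_xz[1] + s * dx + c * dz)

# The milk cooler's image and its compressor hum share one virtual location,
# so after redirection they still line up for both eyes and ears.
print(redirected_position((0.0, 5.0), (0.0, 0.0), yaw_deg=10.0))
```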
What are practical applications of redirected walking? Is there a case study of someone using redirected walking outside an academic environment?

A gymnasium is about the smallest space you can work with and still curve people around without them noticing, so this is never going to run in your living room. We do have a portable, wearable version of our system with accurate foot-based position tracking that can be taken out of the lab and used in, say, a park or a soccer field. It's a bit tricky, though, since the user is essentially running around the great outdoors blindfolded. If you're a liability lawyer for a VR goggle manufacturer, that's the kind of use case that gives you nightmares, but redirected walking could actually work in the gaming market with the right safety protocols. For example, we have safety mechanisms built into our own systems, which usually include a sighted escort and an automated warning system when users approach a pre-defined boundary. This could work in a controlled theme park or arcade-type setting, or with home users who use some common sense. I can also see this technique being useful in industry and military applications. For example, the portable backpack system could easily be used for mission rehearsal in some remote corner of the globe. A squad of soldiers could each wear their own simulation rig and have their own ad-hoc tracking area to move around in. Likewise, some industry simulations incorporate large spaces and can benefit from physical movement. One scenario that came up a few years ago related to training repair technicians for large oil refineries, which can cover a square mile or more. Standing in a CAVE and pushing the forward button on a joystick just doesn't give you the same experience as actually having to walk a thousand meters across the facility while carrying heavy equipment, and then making a mission-critical repair under time pressure. Redirected walking would increase the realism of the training simulation without requiring a mile-square tracking area. Finally, I can see this benefiting the K-12 education system. Doing a virtual field trip in the gym would be pretty cool, and a responsible teacher or two could be present to watch out for the kids' safety.
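As an illustration of the kind of automated boundary warning mentioned above -- the distances and responses here are assumptions, not the lab's actual safety protocol:

```python
# Hypothetical boundary-warning check; thresholds are assumed.
WARN_DISTANCE_M = 2.0   # begin warning this far from the tracked boundary
STOP_DISTANCE_M = 0.5   # hard-stop margin

def boundary_status(user_xz, bounds_min, bounds_max):
    """Classify the user's proximity to the nearest edge of a rectangular
    tracked area: 'ok', 'warn' (e.g., fade in a virtual fence), or 'stop'
    (freeze the simulation and alert the sighted escort)."""
    dist_to_edge = min(
        user_xz[0] - bounds_min[0], bounds_max[0] - user_xz[0],
        user_xz[1] - bounds_min[1], bounds_max[1] - user_xz[1],
    )
    if dist_to_edge <= STOP_DISTANCE_M:
        return "stop"
    if dist_to_edge <= WARN_DISTANCE_M:
        return "warn"
    return "ok"

print(boundary_status((11.0, 3.0), (0.0, 0.0), (12.0, 12.0)))  # -> warn
```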
Can redirected walking be applicable to augmented reality scenarios, or just to immersive virtual reality?

It really doesn't make sense with augmented reality, in which you want the real and virtual worlds to be as closely aligned as possible. With redirected walking, the relationship between the real and virtual diverges pretty quickly. If you're doing large-scale navigation in AR, such as overlaying underground geological formations on a drill site, you'll want to actually navigate across the corresponding real-world space. It could make sense in some AR game situations, but it would be hard to make any continual, subtle adjustments to the virtual graphics without making them move perceptibly relative to the real-world surroundings.

Is this technique applicable also to multi-person scenarios?
Definitely, and that's something we're actively exploring now. As long as you're redirecting people, and effectively steering where they go in the real world, there's no reason not to take several immersed people in the same tracking space and weave them in and around each other. Or, as I mentioned above with our portable system, if you can reliably contain people to a certain physical space with redirection, you can spread people out across a field and let everyone have their own little region while traveling through extremely large VEs. Adding multiple users does add some unexpected complexities, however. Under normal conditions, for example, when two immersed users come face to face in the VE, they would also be face to face in the physical world, and they could talk to each other normally, or reach out and touch each other, or share tools, etc. With redirected walking, those same users could be tens or hundreds of meters apart in the real world, requiring some sort of VOIP solution. By the same token, someone who is a mile away virtually might actually be very close to you, and you could hear them talking but not be able to see them, leading to an Uncanny Valley scenario.

How large or how small can a physical space be to implement successful redirected walking? Can this be used in a typical living room?
The HIVE is about 25m across in its narrowest dimension, which is about as small as you'd want to go. This is definitely not living-room material, which is where devices like the Omni will thrive instead. A lot of the literature recommends a space with a minimum radius of 30m+, which I think is about right. We have to stop people occasionally who are on a collision course with one of the lab's walls. A slightly larger space would let us catch and correct those trajectories automatically instead of manually stopping and redirecting the user. One thing to note is that the required tracking space interacts a lot with how much you turn up the redirection -- higher levels of steering constrain people to a smaller space, but they also become more noticeable. The type of VE and the user's task can also play a role. It seems like close-in environments like our virtual store make redirection more perceptible than open, visually ambiguous VEs like a virtual forest.
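A back-of-envelope way to see the space requirement: a user walking at a constant speed under a constant injected rotation traces a circle of radius r = v / ω. The numbers below are illustrative:

```python
import math

def ideal_orbit_radius_m(walk_speed_mps: float, rotation_deg_per_s: float) -> float:
    """Radius of the circle a user traces when walking at a constant speed
    while a constant rotation is injected: r = v / omega."""
    return walk_speed_mps / math.radians(rotation_deg_per_s)

# A brisk 1.4 m/s walk under 10 deg/s of injected rotation:
print(f"{ideal_orbit_radius_m(1.4, 10.0):.1f} m radius")   # ~8.0 m
```

That ideal orbit is much tighter than the 25-30m figures above, which makes sense: full-rate rotation can only be injected while the user is moving briskly, and real users stop, turn, and stray from the ideal path, so practical installations budget considerably more room.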
How immersive does an experience need to be for redirected walking to be successful?

High levels of immersion definitely help, but I'm not sure there's a certain threshold for success or failure here. Redirection relies on getting people to focus on their location in the virtual world while ignoring where they are in the room, and to innately accept their virtual movement as being accurate, even though it's not. Anytime you're in a decent HMD with 6-DOF tracking, the immersion level is going to be fairly high anyway, so this turns out to be a fairly easy task. As long as redirection is kept at reasonably low levels, it has been shown to work without being noticed, without increasing simulator sickness, and without warping people's spatial perception or mental map of the space.

Can you elaborate a bit on plans for future research in this area?
Right now the focus in our lab is on implementing multi-user redirection and on improving the steering algorithms we use. We're also looking at behavioral prediction and virtual environment structure to try to predict where people might go next, or where they can't go next. For example, if I know you're in a hallway and can't turn for the next 10m, I can let you walk parallel to a physical wall without fear that you'll turn suddenly and hit it. There's a lot of other research going on right now in other labs that explores the perceptual limits of the effect and alternative methods of redirecting people. For example, it's possible to use an effect called "change blindness" to essentially restructure any part of the environment that's out of a person's view. So, if I'm looking at something on a virtual desk, the door behind me might move from one wall to another, causing me to alter my course by 90 degrees when I move to a different area. There's also a lot of work that's been done on catching potential wall collisions and gracefully resetting the user without breaking immersion too much.
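For a flavor of what a higher-level steering strategy looks like, here is a minimal, hypothetical sketch of the steer-to-center decision (one of the strategies compared in the lab's work cited below); the coordinate conventions and helper are assumptions, not the lab's implementation:

```python
import math

def steer_to_center_sign(user_xz, heading_deg, center_xz=(0.0, 0.0)):
    """Pick the rotation direction for a steer-to-center strategy: bend the
    user's path back toward the middle of the tracked space. Headings are
    compass-style here: 0 along +z, positive toward +x."""
    bearing = math.degrees(math.atan2(center_xz[0] - user_xz[0],
                                      center_xz[1] - user_xz[1]))
    error = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # signed, (-180, 180]
    return math.copysign(1.0, error)

# A user at (5, 0) walking along +z has the center 90 degrees to one side;
# the sign says which way to inject rotation to curve them back toward it.
print(steer_to_center_sign((5.0, 0.0), heading_deg=0.0))  # -1.0
```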
For those who want to learn more about redirected walking, what other material would you recommend?

I'd really recommend reading Sharif Razzaque's early work on the topic, much of which he published with Mary Whitton out of UNC Chapel Hill. (http://scholar.google.com/scholar?q=razzaque+redirected+walking&btnG=&hl=en&as_sdt=0%2C36)
I'd also recommend reading some of Frank Steinicke's recent work on the techniques and perceptible limits of redirection (http://img.uni-wuerzburg.de/personen/prof_dr_frank_steinicke/), or some of our lab's work comparing higher-level redirection strategies, such as steering people towards a central point versus steering people onto an ideal orbit around the room (http://www.users.miamioh.edu/hodgsoep/publications.php).
Finally, there's a good book that just came out, *Human Walking in Virtual Environments*, that contains several good chapters on redirection as well as a broader look at the challenges of navigating in VEs and the perceptual factors involved. (http://www.amazon.com/Human-Walking-Virtual-Environments-Applications/dp/1441984313)