
Bernhard E. Riecke


ber1@sfu.ca

Books

2003
B E Riecke (2003)  How far can we get with just visual information? : Path integration and spatial updating studies in virtual reality   Berlin: Logos, vol. 8
Abstract: How do we find our way in everyday life? In real world situations, it typically takes a considerable amount of time to get completely lost. In most Virtual Reality (VR) applications, however, users are quickly lost after only a few simulated turns. This happens even though many recent VR applications are already quite compelling and look convincing at first glance. So what is missing in those simulated spaces? Why is spatial orientation there not as easy as in the real world? In other words, what sensory information is essential for accurate, effortless and robust spatial orientation? How are the different information sources combined and processed? In this thesis, these and related questions were approached by performing a series of spatial orientation experiments in various VR setups as well as in the real world. Modeling of the underlying spatial orientation processes finally led to a comprehensive framework based on logical propositions, which was applied to both our experiments and selected experiments from the literature.

Journal articles

2012
Klaus Gramann, Shawn Wing, Tzyy-Ping Jung, Erik Viirre, Bernhard E Riecke (2012)  Switching spatial reference frames for yaw and pitch navigation   Spatial Cognition and Computation 12: 2-3. 159-194 04  
Abstract: Humans demonstrate preferences to use egocentric or allocentric reference frames in navigation tasks that lack embodied (vestibular and/or proprioceptive) cues. Here, we investigated how reference frame proclivities affect spatial navigation in horizontal versus vertical planes. Point-to-origin performance after visually displayed vertical trajectories was surprisingly accurate and almost matched yaw performance for both egocentric and allocentric strategies. For vertical direction changes, 39% of participants unexpectedly switched to their non-preferred (allocentric) reference frame. This might be explained by vertical (25°–90° up/downward pitched) trajectories having lower ecological validity and creating more pronounced visuo-vestibular conflicts, emphasizing individual differences in processing idiothetic, embodied sensory information.
L McIntosh, B E Riecke, S DiPaola (2012)  Efficiently Simulating the Bokeh of Polygonal Apertures in a Post-Process Depth of Field Shader   Computer Graphics Forum
Abstract: The effect of aperture shape on an image, known in photography as ‘bokeh’, is an important characteristic of depth of field in real-world cameras. However, most real-time depth of field techniques produce Gaussian bokeh rather than the circular or polygonal bokeh that is almost universal in real-world cameras. ‘Scattering’ (i.e. point-splatting) techniques provide a flexible way to model any aperture shape, but tend to have prohibitively slow performance, and require geometry-shaders or significant engine changes to implement. This paper shows that simple post-process ‘gathering’ depth of field shaders can be easily extended to simulate certain bokeh effects. Specifically we show that it is possible to efficiently model the bokeh effects of square, hexagonal and octagonal apertures using a novel separable filtering approach. Performance data from a video game engine test demonstrates that our shaders attain much better frame rates than a naive non-separable approach.
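The separable-filtering idea in the abstract above can be sketched outside a shader: one 1-D box blur followed by a second 1-D pass along a skewed direction composes into a parallelogram-shaped point-spread function, the building block the paper combines into square, hexagonal, and octagonal bokeh. Below is a minimal NumPy illustration of that building block only; it applies a uniform, full-frame blur, whereas a real depth-of-field shader would additionally scale the radius per pixel by the circle of confusion from the depth buffer. All names and the 60° skew are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def directional_box_blur(img, direction, radius):
    """Average 2*radius+1 samples along `direction` = (dy, dx), clamping
    at the image border. One 1-D pass costs O(radius) per pixel; two
    passes along different directions compose into a parallelogram-shaped
    point-spread function (the bokeh shape)."""
    h, w = img.shape[:2]
    acc = np.zeros(img.shape, dtype=float)
    dy, dx = direction
    for t in range(-radius, radius + 1):
        ys = np.clip(np.arange(h) + int(round(t * dy)), 0, h - 1)
        xs = np.clip(np.arange(w) + int(round(t * dx)), 0, w - 1)
        acc += img[np.ix_(ys, xs)]          # image shifted by t * direction
    return acc / (2 * radius + 1)

def parallelogram_bokeh(img, radius):
    """Horizontal pass, then a pass skewed by 60 degrees: the result is a
    parallelogram bokeh in roughly the cost of two box blurs, versus the
    O(radius^2) per pixel of a naive 2-D gather."""
    horizontal = directional_box_blur(img, (0.0, 1.0), radius)
    return directional_box_blur(horizontal, (np.sin(np.pi / 3), np.cos(np.pi / 3)), radius)
```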
2010
Wataru Teramoto, Bernhard E Riecke (2010)  Dynamic visual information facilitates object recognition from novel viewpoints   Journal of Vision 10: 13. 1-13 11  
Abstract: Normally, people have difficulties recognizing objects from novel as compared to learned views, resulting in increased reaction times and errors. Recent studies showed, however, that this “view-dependency” can be reduced or even completely eliminated when novel views result from observer's movements instead of object movements. This observer movement benefit was previously attributed to extra-retinal (physical motion) cues. In two experiments, we demonstrate that dynamic visual information (that would normally accompany observer's movements) can provide a similar benefit and thus a potential alternative explanation. Participants performed sequential matching tasks for Shepard–Metzler-like objects presented via head-mounted display. As predicted by the literature, object recognition performance improved when view changes (45° or 90°) resulted from active observer movements around the object instead of object movements. Unexpectedly, however, merely providing dynamic visual information depicting the viewpoint change showed an equal benefit, despite the lack of any extra-retinal/physical self-motion cues. Moreover, visually simulated rotations of the table and hidden target object (table movement condition) yielded similar performance benefits as simulated viewpoint changes (scene movement condition). These findings challenge the prevailing notion that extra-retinal (physical motion) cues are required for facilitating object recognition from novel viewpoints, and highlight the importance of dynamic visual cues, which have previously received little attention.
2009
Bernhard E Riecke, Daniel Feuereissen, John J Rieser (2009)  Auditory self-motion simulation is facilitated by haptic and vibrational cues suggesting the possibility of actual motion   ACM Transactions on Applied Perception (TAP) 6: 20:1–20:22
Abstract: Sound fields rotating around stationary blindfolded listeners sometimes elicit auditory circular vection, the illusion that the listener is physically rotating. Experiment 1 investigated whether auditory circular vection depends on participants' situational awareness of “movability,” that is, whether they sense/know that actual motion is possible or not. While previous studies often seated participants on movable chairs to suspend the disbelief of self-motion, it has never been investigated whether this does, in fact, facilitate auditory vection. To this end, 23 blindfolded participants were seated on a hammock chair with their feet either on solid ground (“movement impossible”) or suspended (“movement possible”) while listening to individualized binaural recordings of two sound sources rotating synchronously at 60°/s. Although participants never physically moved, situational awareness of movability facilitated auditory vection. Moreover, adding slight vibrations like the ones resulting from actual chair rotation increased the frequency and intensity of vection. Experiment 2 extended these findings and showed that nonindividualized binaural recordings were as effective in inducing auditory circular vection as individualized recordings. These results have important implications both for our theoretical understanding of self-motion perception and for the applied field of self-motion simulations, where vibrations, nonindividualized binaural sound, and the cognitive/perceptual framework of movability can typically be provided at minimal cost and effort.
Notes: http://doi.acm.org/10.1145/1577755.1577763
Bernhard E Riecke, Aleksander Väljamäe, Jörg Schulte-Pelkum (2009)  Moving sounds enhance the visually-induced self-motion illusion (circular vection) in virtual reality   ACM Transactions on Applied Perception (TAP) 6: 7:1–7:27 03
Abstract: While rotating visual and auditory stimuli have long been known to elicit self-motion illusions (“circular vection”), audiovisual interactions have hardly been investigated. Here, two experiments investigated whether visually induced circular vection can be enhanced by concurrently rotating auditory cues that match visual landmarks (e.g., a fountain sound). Participants sat behind a curved projection screen displaying rotating panoramic renderings of a market place. Apart from a no-sound condition, headphone-based auditory stimuli consisted of mono sound, ambient sound, or low-/high-spatial resolution auralizations using generic head-related transfer functions (HRTFs). While merely adding nonrotating (mono or ambient) sound showed no effects, moving sound stimuli facilitated both vection and presence in the virtual environment. This spatialization benefit was maximal for a medium (20° × 15°) FOV, reduced for a larger (54° × 45°) FOV and unexpectedly absent for the smallest (10° × 7.5°) FOV. Increasing auralization spatial fidelity (from low, comparable to five-channel home theatre systems, to high, 5° resolution) provided no further benefit, suggesting a ceiling effect. In conclusion, both self-motion perception and presence can benefit from adding moving auditory stimuli. This has important implications both for multimodal cue integration theories and the applied challenge of building affordable yet effective motion simulators.
Notes: http://doi.acm.org/10.1145/1498700.1498701
B E Riecke (2009)  Cognitive and higher-level contributions to illusory self-motion perception (“vection”) : does the possibility of actual motion affect vection?   Japanese Journal of Psychonomic Science 28: 1. 135-139  
Abstract: Large-field moving visual stimuli have long been known to be capable of inducing compelling illusions of self-motion (“vection”) in stationary observers. Traditionally, the origin of such visually induced self-motion illusions has been attributed to low-level, bottom-up perceptual processes without much cognitive/higher-level contribution. In the last years, however, this view has been challenged, and an increasing number of studies has investigated potential higher-level/cognitive contributions. This paper aims at providing a concise review and discussion of one of these aspects: Does the cognitive framework of whether or not actual movement is possible affect illusory self-motion? Despite a variety of different approaches, there is growing evidence that both cognitive and perceptual information indicating movability can facilitate self-motion perception, especially when combined. This has important implications for our understanding of cognitive/perceptual contributions to self-motion perception as well as the growing field of self-motion simulations and virtual reality, where the need for physical motion of the observer could be reduced by intelligent usage of cognitive/perceptual frameworks of movability.
B E Riecke, D Feuereissen, J J Rieser (2009)  Rotating sound fields can facilitate biomechanical self-motion illusion ("circular vection")   Journal of Vision 9: 8. 714-714  
Abstract: While both biomechanical and moving auditory cues have been shown to elicit self-motion illusions ("circular vection"), their combined influence has never been investigated before. Here, we tested the influence of biomechanical vection (participants were seated stationary above a platform rotating at 60°/s and stepped along) and auditory vection (binaural recordings of two sound sources rotating at 60°/s) both in isolation and together. All participants reported biomechanical vection after a mean onset latency of 33.5s. Interestingly, even though auditory cues by themselves proved insufficient to induce vection in all but one participant, adding rotating sounds significantly enhanced biomechanical vection in all dependent measures: Vection onset times were decreased by 35%, vection intensity was increased by 32%, and participants had a stronger sensation of really rotating in the actual lab (28% increase). In fact, participants were able to update their orientation in the lab in all but the pure auditory condition, suggesting that their mental representation was directly affected by the biomechanical and auditory cues - although perceived self-rotation velocities were typically below the stimulus velocities. Apart from its theoretical relevance, the current findings have important implications for applications in, e.g., entertainment and motion simulation: While spatialized sound seems not by itself sufficient to induce compelling self-motion illusions, it can clearly support and facilitate biomechanical vection and has earlier been shown to also facilitate visually induced circular vection (Riecke et al., 2005, 2008) and thus support information from other modalities. Furthermore, high-fidelity, headphone-based sound simulation is not only reliable and affordable, but also offers an amount of realism that is yet unachievable for visual simulations: While even the best existing visual display setups will hardly be confused with "seeing the real thing", headphone-based auralization can be virtually indistinguishable from listening to the real sound and thus can provide a true "virtual reality". Support: NIMH Grant 2-R01-MH57868, NSF Grant 0705863, Vanderbilt University, Max Planck Society, Simon Fraser University.
Bobby Bodenheimer, Daniel Feuereissen, Betsy Williams, Peng Peng, Timothy McNamara, Bernhard Riecke (2009)  Locomotion for navigation in virtual environments : Walking, turning, and joystick modalities compared   Journal of Vision 9: 8. 1126-1126  
Abstract: Considerable evidence shows that people have difficulty maintaining orientation in virtual environments. This difficulty is usually attributed to poor idiothetic cues, such as the absence of proprioception. The absence of proprioceptive cues makes a strong argument against the use of a joystick interface for locomotion. The importance of full physical movement for navigation has also recently been confirmed (Ruddle and Lessels, 2006), where subjects performed a navigational task better when they walked freely rather than when they could only physically rotate or only move virtually. Our experiment replicates the experiment of Ruddle and Lessels but under different conditions. Here all conditions are conducted using a head-mounted display, whereas Ruddle and Lessels mixed display types. Our environment contains no environmental cues to geometry, as all landmarks are either randomly placed and oriented, or absent, whereas the Ruddle and Lessels environment included a simulated rectangular room that was always visible. People are sensitive to environmental geometry, but the effect on navigation is an active area of research (Kelly et al., 2008), thus our environment omitted them. In this experiment, subjects (N=12) locomoted through an environment in one of three ways: they walked, they used the joystick to translate while physically rotating their bodies to change orientation, or they used a joystick to both translate and rotate with no physical movement occurring. A within-subjects design found that subjects were marginally better in the walking condition than in other conditions (F(1,11) = 2.88, p=.07). Subjects were significantly slower in the joystick condition than in other conditions (F(1,1)=5.44, p=.01). Subjects traveled significantly less distance in completing the task in the walking condition than in other conditions (F(1,11)=4.28, p=.03). In general, we conclude that walking seems a better method for locomotion in virtual environments than locomoting with a joystick.
2008
J W Kelly, B E Riecke, J M Loomis, A C Beall (2008)  Visual control of posture in real and virtual environments   Perception & Psychophysics 70: 1. 158-165  
Abstract: In two experiments, we investigated the stabilizing influence of vision on human upright posture in real and virtual environments. Visual stabilization was assessed by comparing eyes-open with eyes-closed conditions while participants attempted to maintain balance in the presence of a stable visual scene. Visual stabilization in the virtual display was reduced, as compared with real-world viewing. This difference was partially accounted for by the reduced field of view in the virtual display. When the retinal flow in the virtual display was removed by using dynamic random-dot stereograms with single-frame lifetimes (cyclopean stimuli), vision did not stabilize posture. There was also an overall larger stabilizing influence of vision when more unstable stances were adopted (e.g., one-foot, as compared with side-by-side, stance). Reducing the graphics latency of the virtual display by 63% did not increase visual stabilization in the virtual display. Other visual and psychological differences between real and virtual environments are discussed.
Bernhard E Riecke (2008)  Consistent Left-Right Reversals for Visual Path Integration in Virtual Reality : More than a Failure to Update One's Heading?   Presence: Teleoperators and Virtual Environments 17: 2. 143-175  
Abstract: Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In VR applications and psychological experiments, such disorientation is often compensated for by extensive training. Here, two experimental series investigated participants' sense of direction by means of a rapid point-to-origin paradigm without any performance feedback or training. This paradigm allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion “as accurately and quickly as possible.” Despite using an immersive, high-quality video projection with an 84° × 63° field of view, participants' overall performance was rather poor. Moreover, about 40% of the participants exhibited striking qualitative errors, namely left-right reversals—despite not misinterpreting the visually simulated turning direction. Even when turning angles were announced in advance to obviate encoding errors due to misperceived turning angles, many participants still produced surprisingly large systematic and random errors, and perceived task difficulty and response times were unexpectedly high. Careful analysis suggests that some, but not all, of the left-right inversions can be explained by a failure to update visually displayed heading changes. Taken together, this study shows that even an immersive, high-quality video projection system is not necessarily sufficient for enabling natural and intuitive spatial orientation or automatic spatial updating in VR, even when advance information about turning angles was provided. We posit that investigating qualitative errors for basic spatial orientation tasks using, for example, rapid point-to-origin paradigms can be a powerful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance. We provide some guidelines for VR system designers.
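The geometry behind such point-to-origin responses is easy to make concrete. The following sketch (not code from the paper) computes the response a perfectly updating participant should give after a two-segment excursion; the qualitative left-right reversals described above correspond to responses landing in the mirror-image hemisphere of this angle.

```python
import math

def point_to_origin(s1, turn_deg, s2):
    """Walk s1, turn by turn_deg (positive = left), walk s2, then return
    the egocentric bearing of the starting point relative to the final
    heading (0 = straight ahead, positive = to the left)."""
    theta = math.radians(turn_deg)
    hx, hy = -math.sin(theta), math.cos(theta)   # heading after the turn
    px = -s2 * math.sin(theta)                   # end position; the path
    py = s1 + s2 * math.cos(theta)               # starts at the origin facing +y
    vx, vy = -px, -py                            # vector pointing back home
    return math.degrees(math.atan2(hx * vy - hy * vx,   # signed angle between
                                   hx * vx + hy * vy))  # heading and home vector

# Equal segments and a 90-degree left turn put the origin 135 degrees to
# the left; a participant who fails to update the turn (keeping the
# pre-turn heading) mirrors this response to -135 degrees.
print(point_to_origin(10, 90, 10))   # -> 135.0
```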
2007
Ahmet Oğuz Akyüz, Roland Fleming, Bernhard E Riecke, Erik Reinhard, Heinrich H Bülthoff (2007)  Do HDR displays support LDR content? : a psychophysical evaluation   ACM Transactions on Graphics (TOG) 38.1-38.7
Abstract: The development of high dynamic range (HDR) imagery has brought us to the verge of arguably the largest change in image display technologies since the transition from black-and-white to color television. Novel capture and display hardware will soon enable consumers to enjoy the HDR experience in their own homes. The question remains, however, of what to do with existing images and movies, which are intrinsically low dynamic range (LDR). Can this enormous volume of legacy content also be displayed effectively on HDR displays? We have carried out a series of rigorous psychophysical investigations to determine how LDR images are best displayed on a state-of-the-art HDR monitor, and to identify which stages of the HDR imaging pipeline are perceptually most critical. Our main findings are: (1) As expected, HDR displays outperform LDR ones. (2) Surprisingly, HDR images that are tone-mapped for display on standard monitors are often no better than the best single LDR exposure from a bracketed sequence. (3) Most importantly of all, LDR data does not necessarily require sophisticated treatment to produce a compelling HDR experience. Simply boosting the range of an LDR image linearly to fit the HDR display can equal or even surpass the appearance of a true HDR image. Thus the potentially tricky process of inverse tone mapping can be largely circumvented.
Notes: http://doi.acm.org/10.1145/1275808.1276425
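The paper's most striking finding, that linearly boosting LDR content can rival true HDR, corresponds to a one-line operation. A minimal sketch, assuming display-referred input in [0, 1]; the gamma and peak-luminance values are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np

def linear_boost(ldr, peak_luminance=1000.0, gamma=2.2):
    """Naive inverse tone mapping: undo the display gamma to approximate
    linear light, then scale to the HDR display's peak. The psychophysical
    result above suggests this simple treatment can equal or surpass more
    sophisticated inverse-tone-mapping pipelines for much legacy content."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # approximate linear light
    return linear * peak_luminance             # stretch to display peak
```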
B E Riecke, D W Cunningham, H H Bülthoff (2007)  Spatial updating in virtual reality : the sufficiency of visual information   Psychological Research 71: 3. 298-313 05
Abstract: Robust and effortless spatial orientation critically relies on "automatic and obligatory spatial updating", a largely automatized and reflex-like process that transforms our mental egocentric representation of the immediate surroundings during ego-motions. A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform. Visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions. This challenges the prevailing notion that visual cues alone are insufficient for enabling such spatial updating of rotations, and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic spatial updating, but could not be entirely ignored either. Interestingly, additional physical motion cues hardly improved performance, and were insufficient for affording automatic spatial updating. The results are discussed in the context of the mental transformation hypothesis and the sensorimotor interference hypothesis, which associates difficulties in imagined perspective switches to interference between the sensorimotor and cognitive (to-be-imagined) perspective.
Notes: http://dx.doi.org/10.1007/s00426-006-0085-z
2006
Bernhard E Riecke, Jörg Schulte-Pelkum, Marios N Avraamides, Markus von der Heyde, Heinrich H Bülthoff (2006)  Cognitive factors can influence self-motion perception (vection) in virtual reality   ACM Transactions on Applied Perception (TAP) 3: 194-216 07
Abstract: Research on self-motion perception and simulation has traditionally focused on the contribution of physical stimulus properties (“bottom-up factors”) using abstract stimuli. Here, we demonstrate that cognitive (“top-down”) mechanisms like ecological relevance and presence evoked by a virtual environment can also enhance visually induced self-motion illusions (vection). In two experiments, naive observers were asked to rate presence and the onset, intensity, and convincingness of circular vection induced by different rotating visual stimuli presented on a curved projection screen (FOV: 54° × 45°). Globally consistent stimuli depicting a natural 3D scene proved more effective in inducing vection and presence than inconsistent (scrambled) or unnatural (upside-down) stimuli with similar physical stimulus properties. Correlation analyses suggest a direct relationship between spatial presence and vection. We propose that the coherent pictorial depth cues and the spatial reference frame evoked by the naturalistic environment increased the believability of the visual stimulus, such that it was more easily accepted as a stable “scene” with respect to which visual motion is more likely to be judged as self-motion than object motion. This work extends our understanding of mechanisms underlying self-motion perception and might thus help to improve the effectiveness and believability of virtual reality applications.
2005
Bernhard E Riecke, Markus von der Heyde, Heinrich H Bülthoff (2005)  Visual cues can be sufficient for triggering automatic, reflexlike spatial updating   ACM Transactions on Applied Perception (TAP) 2: 183-215 07
Abstract: “Spatial updating” refers to the process that automatically updates our egocentric mental representation of our immediate surround during self-motions, which is essential for quick and robust spatial orientation. To investigate the relative contribution of visual and vestibular cues to spatial updating, two experiments were performed in a high-end Virtual Reality system. Participants were seated on a motion platform and saw either the surrounding room or a photorealistic virtual model presented via head-mounted display or projection screen. After upright rotations, participants had to point “as accurately and quickly as possible” to previously learned targets that were outside of the current field of view (FOV). Spatial updating performance, quantified as response time, configuration error, and pointing error, was comparable in the real and virtual reality conditions when the FOV was matched. Two further results challenge the prevailing basic assumptions about spatial updating: First, automatic, reflexlike spatial updating occurred without any physical motion, i.e., visual information from a known scene alone can, indeed, be sufficient, especially for large FOVs. Second, continuous-motion information is not, in fact, mandatory for spatial updating: merely presenting static images of new orientations proved sufficient, which motivated our distinction between continuous and instant-based spatial updating.
Notes: http://doi.acm.org/10.1145/1077399.1077401
2003
J Schulte-Pelkum, B E Riecke, M von der Heyde, H H Bülthoff (2003)  Screen curvature does influence the perception of visually simulated ego-rotations   Journal of Vision 3: 9.
Abstract: In general, the literature suggests that visual information alone is insufficient to control rotational self-motion accurately. Typically, subjects misperceive simulated self-rotations when no vestibular or proprioceptive feedback is available (see Bakker et al., 1999; 2001 - these studies were done with HMDs). On the other hand, Riecke et al. (2002) found nearly perfect turning performance when a curved, half-cylindrical projection screen with a large FOV of 180° was used. So far, no study has systematically looked at the effect of screen curvature on ego-motion perception. To investigate whether screen curvature influences turning performance, we had 14 participants perform visually simulated ego-rotations either using a flat projection screen (FOV 86°×64°) or a curved projection screen (radius 2m) with the same FOV in a within-subject repeated-measures design. Subjects saw a star field of limited lifetime dots without any landmarks, and they used a joystick to control instructed turn angles between 45° and 270° (steps of 45°). No feedback about accuracy was provided. A repeated-measures ANOVA revealed a significant effect of screen curvature, and also an interaction between curvature and turn angle: While target angles were undershot on the curved screen (gain factor 0.84), a surprising overshoot was observed for the flat screen (gain factor 1.12). Subjects' verbal reports indicate that on the curved screen, the simulated self-rotations looked more realistic than on the flat screen. This may have led them to overestimate turns on the curved screen (thus undershoot turn angles) and to underestimate turns on the flat screen (thus overshoot turn angles). A possible explanation is that rotational lamellar flow on the flat screen was misperceived as translational flow rather than as rotational flow. Results indicate that screen curvature is a critical parameter to be considered for ego-motion simulation and vection studies.
Notes: doi: 10.1167/3.9.411
2002
Bernhard E Riecke, Henricus A H C van Veen, Heinrich H Bülthoff (2002)  Visual homing is possible without landmarks : a path integration study in virtual reality   Presence: Teleoperators and Virtual Environments 11: 443-473 10
Abstract: The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual environments in which only visual cues were provided. Participants had to execute turns, reproduce distances, or perform triangle completion tasks. Most experiments were performed in a simulated 3D field of blobs, thus restricting navigation strategies to path integration based on optic flow. For our experimental set-up (half-cylindrical 180 deg. projection screen), optic flow information alone proved to be sufficient for untrained participants to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards the mean response. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance, especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. In summary, visual path integration without any vestibular or kinesthetic cues can be sufficient for elementary navigation tasks like rotations, translations, and triangle completion.
2001
B E Riecke, M von der Heyde, H H Bülthoff (2001)  How Real is Virtual Reality Really? : Comparing Spatial Updating using Pointing Tasks in Real and Virtual Environments   Journal of Vision 1: 3. 321a-321a
Abstract: When moving through space, we continuously update our egocentric mental spatial representation of our surroundings. We call this seemingly effortless, automatic, and obligatory (i.e., hard-to-suppress) process "spatial updating". Our goal here is twofold: (1) to quantify spatial updating; and (2) to investigate the importance and interaction of visual and vestibular cues for spatial updating. In a learning phase (20 min) subjects learned the positions of twelve targets attached to the walls, 2.5m away. Subjects saw either the real environment or a photo-realistic copy presented via a head-mounted display (HMD). A motion platform was used for vestibular stimulation. In the test phase subjects were rotated to different orientations and asked to point "as quickly and accurately as possible" to four targets announced consecutively via headphones. In general, subjects had no problem mentally updating their orientation in space and were as good as for rotations where they were immediately returned to the original orientation. Performance, quantified as response time, absolute pointing error and pointing variability, was best in the real world condition. However, when the field of view was limited via cardboard blinders to match that of the HMD (40×30 deg), performance decreased and was comparable to the HMD condition. Presenting turning information only visually (through the HMD) hardly altered those results. In both the real world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to IGNORE ego-turns (i.e., "point as if not having turned") than to UPDATE them as usual. Speeded pointing tasks proved to be a viable method for quantifying "spatial updating". We conclude that, at least for the limited turning angles used (<60 deg), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real world counterpart, even when only visual information was presented. ACKNOWLEDGEMENTS: This research was funded by the Max-Planck Society and the Deutsche Forschungsgemeinschaft (SFB 550)
2000
H H Bülthoff, B E Riecke, H A H C van Veen (2000)  Do we really need vestibular and proprioceptive cues for homing   Invest. Ophthalmol. Vis. Sci. (ARVO) 41: 4. 225B225
Abstract: Purpose: The literature generally suggests that even for simple navigation tasks, optic flow information is insufficient, and proprioceptive or vestibular cues are required. To test this claim, we conducted triangle completion experiments in a virtual environment providing optic flow information only. Methods: All experiments were performed in a simulated 3D field of blobs providing a convincing feeling of self-motion but no landmarks, thus restricting navigation strategies to optic flow based path integration. Ego-motion was visually simulated on a half-cylindrical 180° projection screen of 7m diameter using mouse buttons for locomotion control. In Exp 1, subjects had to execute turns and reproduce distances using randomized given velocities. Exps. 2 and 3 were triangle completion experiments: Subjects had to return to the starting point after moving outwards along two prescribed segments of the triangle. In Exp 2, five different isosceles triangles for left and right turns were used. In Exp 3, 60 different triangles were used, with length of leg 1, leg 2 and the enclosed angle randomized. Results: In all experiments, we found a linear correlation between executed and correct turns or distances, quantified by the slope of the fit (compression rate) and the signed error. In Exp 1, untrained subjects executed distances and turns with negligible systematic errors, irrespective of movement velocity. Exp 2 revealed no signed error for distance or turn response. Subjects showed a strong tendency towards mean responses for distances (slope 0.58), but not for turns (0.91). In Exp 3, performance was enhanced for distances (0.86), but reduced for turns (0.77). Conclusions: Visual path integration by optic flow proved to be reliable and sufficient for simple navigation tasks. Compared to similar triangle completion experiments using virtual environments (Péruch et al., Perc. 97; Duchon et al., Psychonomics 99) or blind locomotion (Loomis et al., JEP 93), we did not find the typically observed strong compression (slope <0.5) towards mean turn responses. We also found fewer systematic errors. In our experiments, vestibular and proprioceptive cues were not needed for accurate homing.
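The two summary statistics used throughout this abstract, the slope of the executed-versus-correct fit ("compression rate", where 1.0 is veridical and values below 1 indicate regression toward the mean) and the signed error, can be computed as follows. This is an illustrative helper under those definitions, not the authors' analysis code.

```python
import numpy as np

def compression_and_bias(correct, executed):
    """Least-squares slope of executed responses against correct values
    (compression rate) plus the mean signed error."""
    correct = np.asarray(correct, dtype=float)
    executed = np.asarray(executed, dtype=float)
    slope = np.polyfit(correct, executed, 1)[0]   # slope of 1st-degree fit
    signed_error = float(np.mean(executed - correct))
    return slope, signed_error

# Example: turns that increasingly undershoot large targets yield slope < 1.
print(compression_and_bias([45, 90, 180, 270], [50, 88, 150, 210]))
```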

Book chapters

2010
Bernhard Riecke, Bobby Bodenheimer, Timothy McNamara, Betsy Williams, Peng Peng, Daniel Feuereissen (2010)  Do We Need to Walk for Effective Virtual Reality Navigation? : Physical Rotations Alone May Suffice   Edited by: Christoph Hölscher, Thomas Shipley, Marta Olivetti Belardinelli, John Bateman, Nora Newcombe. 234-247 Springer Berlin / Heidelberg
Abstract: Physical rotations and translations are the basic constituents of navigation behavior, yet there is mixed evidence about their relative importance for complex navigation in virtual reality (VR). In the present experiment, 24 participants wore head-mounted displays and performed navigational search tasks with rotations/translations controlled by physical motion or joystick. As expected, physical walking showed performance benefits over joystick navigation. Controlling translations via joystick and rotations via physical rotations led to better performance than joystick navigation, and yielded almost comparable performance to actual walking in terms of search efficiency and time. Walking resulted, however, in increased viewpoint changes and shorter navigation paths, suggesting a rotation/translation tradeoff and different navigation strategies. While previous studies have emphasized the importance of full physical motion via walking (Ruddle & Lessels, 2006, 2009), our data suggests that considerable navigation improvements can already be gained by allowing for full-body rotations, without the considerable cost, space, tracking, and safety requirements of free-space walking setups.

Conference papers

2012
Bernhard E Riecke (2012)  Are left-right hemisphere errors in point-to-origin tasks in VR caused by failure to incorporate heading changes?   1-20  
Abstract: Optic flow displays are frequently used both in spatial cognition/psychology research and VR simulations to avoid the influence of recognizable landmarks. However, optic flow displays not only lead to frequent misperceptions of simulated turns, but also to drastic qualitative errors: When asked to point back to the origin of locomotion after viewing simulated 2-segment excursions in VR, between 40% (Riecke 2008) and 100% (Klatzky et al., 1998) of participants responded as if they failed to update and incorporate the visually simulated turns into their responses. To further investigate such “NonTurner” behaviour, the current study used a wider range of path geometries that allow for clearer disambiguation of underlying strategies and mental processes. 55% of participants showed clear qualitative pointing errors (left-right hemisphere errors), thus confirming the reliability of the effect and the difficulties in properly using optic flow even in high-quality VR displays. Results suggest that these qualitative errors are not caused by left-right mirrored responses, but are indeed based on a failure to properly incorporate visually presented turns into point-to-origin responses. While the majority of these qualitative errors could be attributed to NonTurner behaviour as previously proposed, we identified a novel, modified NonTurner strategy that could reconcile prior findings. Finally, results suggest that Turners (which properly incorporate visually presented turns) use online updating of the homing direction, whereas NonTurners resort to more effortful and cognitively demanding offline strategies. Better understanding these strategies and underlying processes and how they depend on stimulus and display parameters can help to inform the design of more effective VR simulations.
Bernhard E Riecke, Daniel Feuereissen, John J Rieser, Timothy P McNamara (2012)  Self-Motion Illusions (Vection) in VR – Are They Good For Anything?   35-38  
Abstract: When we locomote through real or virtual environments, self-to-object relationships constantly change. Nevertheless, in real environments we effortlessly maintain an ongoing awareness of roughly where we are with respect to our immediate surrounds, even in the absence of any direct perceptual support (e.g., in darkness or with eyes closed). In virtual environments, however, we tend to get lost far more easily. Why is that? Research suggests that physical motion cues are critical in facilitating this “automatic spatial updating” of the self-to-surround relationships during perspective changes. However, allowing for full physical motion in VR is costly and often unfeasible. Here, we demonstrated for the first time that the mere illusion of self-motion (“circular vection”) can provide a similar benefit as actual self-motion: While blindfolded, participants were asked to imagine facing new perspectives in a well-learned room, and point to previously-learned objects. As expected, this task was difficult when participants could not physically rotate to the instructed perspective. Performance was significantly improved, however, when they perceived illusory self-rotation to the novel perspective (even though they did not physically move). This circular vection was induced by a combination of rotating sound fields (“auditory vection”) and biomechanical vection from stepping along a carrousel-like rotating floor platter. In summary, illusory self-motion was shown to indeed facilitate perspective switches and thus spatial orientation. These findings have important implications for both our understanding of human spatial cognition and the design of more effective yet affordable VR simulators. In fact, it might ultimately enable us to relax the need for physical motion in VR by intelligently utilizing self-motion illusions.
Bernhard E Riecke, Daniel Feuereissen (2012)  To Move or Not to Move : Can Active Control and User-Driven Motion Cueing Enhance Self-Motion Perception ("Vection") in Virtual Reality?   1-8 ACM  
Abstract: Can self-motion perception in virtual reality (VR) be enhanced by providing affordable, user-powered minimal motion cueing? To investigate this, we compared the effect of different interaction and motion paradigms on onset latency and intensity of self-motion illusions (“vection”) induced by curvilinear locomotion in projection-based VR. Participants either passively observed the simulation or had to actively follow pre-defined trajectories of different curvature in a simple virtual scene. Visual-only locomotion (either passive or with joystick control) was compared to locomotion controlled by a modified Gyroxus gaming chair, where leaning forwards and sideways (±10cm) controlled simulated translations and rotations, respectively, using a velocity control paradigm similar to a joystick. In the active visual+chair motion condition, participants controlled the chair motion and resulting virtual locomotion themselves, without the need for external actuation. In the passive visual+chair motion condition, the experimenter did this. Self-motion intensity was increased in the visual+chair motion conditions as compared to visual-only motion, corroborating the benefit of simple motion cueing. Surprisingly, however, active control reduced the occurrence of vection and increased vection onset latencies, especially in the chair motion condition. This might be related to the reduced intuitiveness and controllability observed for the active chair motion as compared to the joystick condition. Together, findings suggest that simple user-initiated motion cueing can in principle provide an affordable means of increasing self-motion simulation fidelity in VR. However, usability and controllability issues of the gaming chair used might have counteracted the benefit of such motion cueing, and suggest ways to improve the interaction paradigm.
Jay Vidyarthi, Bernhard E Riecke, Diane Gromala (2012)  Encouraging Meditative Experiences through Respiratory-Musical Interaction   1-4  
Abstract: We have designed and implemented a chamber of complete darkness where users shape a peaceful soundscape using only their respiration. This interactive system was designed to foster a meditative experience by facilitating users’ sense of immersion while following a specific attentional pattern characteristic of mindfulness. The goal of Sonic Cradle is twofold: first, to trigger the proven effects of mindfulness on stress, and second, to help teach and demystify the concept of meditation for users’ long-term benefit. This short research note situates and presents this interaction design concept and its first implementation. We conclude by touching upon ongoing co-design sessions and our long-term plans for mixed methods validation.
Bernhard E Riecke, Salvar Sigurdarson, Andrew P Milne (2012)  Moving through Virtual Reality without moving?   1-7  
Abstract: Virtual Reality (VR) technology is increasingly used in spatial cognition research, as it offers high experimental control and interactivity in naturalistic multi-modal environments, something that is difficult to achieve in real-world settings. Even in the most sophisticated and costly VR systems people do not necessarily perceive and behave as they would in the real world. This might be related to our inability to use embodied (and thus often highly automated and effective) spatial orientation processes in VR. While real-world locomotion affords automatic and obligatory spatial updating of our self-to-surrounding relationships, such that we easily remain oriented during simple perspective changes, the same is not necessarily true in VR. This can lead to striking systematic and qualitative errors such as failures to update rotations (“Nonturner” behaviour). Here, we investigated whether rich naturalistic visual stimuli in immersive VR might be sufficient to compensate for the lack of physical motion. To this end, 24 participants performed point-to-origin tasks after visually simulated excursions along streets of varying curvature in a naturalistic virtual city. Most (21/24) participants properly updated simulated self-motions and showed only moderate regression towards mean pointing responses. 3/24 participants, however, exhibited striking “Nonturner” behaviour in that they pointed as if they did not update the visually simulated turns and their heading had not changed. This suggests that our immersive naturalistic VR stimuli were an improvement over prior optic flow stimuli, but still insufficient in eliciting obligatory spatial updating that supported correct point-to-origin responses in all participants.
Salvar Sigurdarson, Andrew P Milne, Daniel Feuereissen, Bernhard E Riecke (2012)  Can physical motions prevent disorientation in naturalistic VR?   31-34  
Abstract: Most virtual reality simulators have a serious flaw: Users tend to get easily lost and disoriented as they navigate. According to the prevailing opinion, this is because of the lack of actual physical motion to match the visually simulated motion: E.g., using HMD-based VR, Klatzky et al. [1] showed that participants failed to update visually simulated rotations unless they were accompanied by physical rotation of the observer, even if passive. If we use more naturalistic environments (but no salient landmarks) instead of just optic flow, would physical motion cues still be needed to prevent disorientation? To address this question, we used a paradigm inspired by Klatzky et al.: After visually displayed passive movements along curved streets in a city environment, participants were asked to point back to where they started. In half of the trials the visually displayed turns were accompanied by a matching physical rotation. Results showed that adding physical motion cues did not improve pointing performance. This suggests that physical motions might be less important to prevent disorientation if visuals are naturalistic enough. Furthermore, unexpectedly two participants consistently failed to update the visually simulated heading changes, even when they were accompanied by physical rotations. This suggests that physical motion cues do not necessarily improve spatial orientation ability in VR (by inducing obligatory spatial updating). These findings have noteworthy implications for the design of effective motion simulators.
Anna Macaranas, Alissa N Antle, Bernhard E Riecke (2012)  Bridging the gap : attribute and spatial metaphors for tangible interface design   161-168 ACM  
Abstract: If tangible user interfaces (TUIs) are going to move out of research labs and into mainstream use they need to support tasks in abstract as well as spatial domains. Designers need guidelines for TUIs in these domains. Conceptual Metaphor Theory can be used to design the relations between physical objects and abstract representations. In this paper, we use physical attributes and spatial properties of objects as source domains for conceptual metaphors. We present an empirical study where twenty participants matched physical representations of image schemas to metaphorically paired adjectives. Based on our findings, we suggest twenty pairings that are easily identified, suggest groups of image schemas that can serve as source domains for a variety of metaphors, and provide guidelines for structuring physical-abstract mappings in abstract domains. These guidelines can help designers apply metaphor theory to design problems in abstract domains, resulting in effective interaction.
2011
Bernhard E Riecke, Daniel Feuereissen, John J Rieser, Timothy P McNamara (2011)  Spatialized sound enhances biomechanically-induced self-motion illusion (vection)   2799-2802  
Abstract: The use of vection, the illusion of self-movement, has recently been explored as a novel way to immerse observers in mediated environments through illusory yet compelling self-motion without physically moving. This provides advantages over existing systems that employ costly, cumbersome, and potentially hazardous motion platforms, which are often surprisingly inadequate to provide life-like motion experiences. This study investigates whether spatialized sound rotating around the stationary, blindfolded listener can facilitate biomechanical vection, the illusion of self-rotation induced by stepping along a rotating floor plate. For the first time, integrating simple auditory and biomechanical cues for turning in place evoked convincing circular vection. In an auditory baseline condition, participants experienced only spatialized auditory cues. In a purely biomechanical condition, seated participants stepped along sideways on a rotating plate while listening to mono masking sounds. Scores of the bi-modal condition (binaural+biomechanical cues) exceeded the sum of both single cue conditions, which may imply super-additive or synergistic effects.
Notes: http://doi.acm.org/10.1145/1978942.1979356
Jay Vidyarthi, Alissa N Antle, Bernhard E Riecke (2011)  Sympathetic guitar : can a digitally augmented guitar be a social entity?   1819-1824  
Abstract: Previous work suggests that people treat interactive media as if they were social entities. By drawing a parallel between socio-cognitive theory and interface design, we intend to experimentally determine whether deliberate design decisions can have an effect on users' perception of an interactive medium as a social entity. In this progress report, we describe the theoretical underpinnings and motivations which led to the design and implementation of the Sympathetic Guitar: a guitar interface which supplements standard acoustic sound with a spatially-separate audio response based on the user's hand positions and performance dynamics. This prototype will be used for investigating user response to a specific, socially-relevant design decision.
Notes: http://doi.acm.org/10.1145/1979742.1979863
Andrew P Milne, Alissa N Antle, Bernhard E Riecke (2011)  Tangible and body-based interaction with auditory maps   2329-2334  
Abstract: Blind people face a significant challenge navigating through the world, especially in novel environments. Maps, the most common of navigational aids, are of little use to the blind, who could benefit greatly from the information they contain. Recent work in auditory maps has shown the potential for delivering spatial information through sound. Users control their position and orientation on a digitally enhanced map and listen for the location of important landmarks. Orientation control is important because sound localization cues can sometimes be ambiguous, especially for sources in front of and behind a listener. Previous devices have used a tangible interface, in which users manipulate a small motion tracked object, to allow users to control their position and orientation on a map. Motivated by research that has identified the importance of body-based cues, from the joints, muscles and vestibular system in spatial perception, we expanded on previous interfaces by constructing an auditory map prototype that allows users to control their orientation through natural head movements. A pilot study was conducted to compare the head-movement-based interface to a tangible interface.
Notes: http://doi.acm.org/10.1145/1979742.1979874
Jay Vidyarthi, Bernhard E Riecke, Alissa N Antle (2011)  Sympathetic guitar : humans respond socially to interactive technology in an abstract, expressive context   9-16 ACM  
Abstract: There seems to be an inherent sociality of computers which is somehow related to their interactivity. However, existing research on this topic is limited to direct interaction, semantic information, clear goals and the visual modality. The present work replicates and extends a previous study on human politeness toward computer systems using a different interaction paradigm involving indirect remote sensors in the context of expressive musical performance with a guitar. Results suggest that the quality of interactivity of a system contributes to its sociality, demonstrating the relevance of an existing body of literature on social responses to technology to the aesthetic of abstract, expressive systems such as video games, artistic tools, ambient systems, media art installations, and mobile device applications. Secondary findings suggest the possibility of manipulating the inherent social presence of an interface through informed design decisions, but a direct investigation is needed on this issue.
M Lockyer, L Bartram, B E Riecke (2011)  Simple motion textures for ambient affect   89-96 ACM  
Abstract: The communication of emotion and the creation of affect are core to creating immersive and engaging experiences, such as those in performance, games and simulation. They often rely on atmospheric cues that influence how an environment feels. The design of such ambient visual cues for affect is an elusive topic that has been studied by painters, theatre directors, scenic designers, lighting designers, filmmakers, producers, and artists for years. Research shows that simple motions have the capacity to be both perceptually efficient and powerfully evocative, and motion textures -- patterns of ambient motion throughout the scene -- are frequently used to imbue the atmosphere with affect. To date there is little empirical evidence of what properties of motion texture are most influential in this affect. In this paper we report the results of a study of simple, abstract motion textures that show path curvature, speed and texture layout can influence affective impressions such as valence, comfort, urgency and intensity.
Arefe Dalvandi, Bernhard E Riecke, Tom Calvert, Sabine Coquillart, Anthony Steed, Greg Welch (2011)  Panoramic Video Techniques for Improving Presence in Virtual Environments   103-110 Eurographics Association  
Abstract: Photo-realistic techniques that use sequences of images captured from a real environment can be used to create virtual environments (VEs). Unlike 3D modelling techniques, the required human work and computation are independent of the amounts of detail and complexity that exist in the scene, and in addition they provide great visual realism. In this study we created virtual environments using three different photo-realistic techniques: panoramic video, regular video, and a slide show of panoramic still images. While panoramic video offered continuous movement and the ability to interactively change the view, it was the most expensive and time consuming to produce among the three techniques. To assess whether the extra effort needed to create panoramic video is warranted, we analysed how effectively each of these techniques supported a sense of presence in participants. We analysed participants’ subjective sense of presence in the context of a navigation task where they travelled along a route in a VE and tried to learn the relative locations of the landmarks on the route. Participants’ sense of presence was highest in the panoramic video condition. This suggests that the effort in creating panoramic video might be warranted whenever high presence is desired.
Notes:
2010
Mona Erfani, Magy El-Nasr, David Milam, Bardia Aghabeigi, Beth Lameman, Bernhard E Riecke, Hamid Maygoli, Sang Mah, Peter Forbrig, Fabio Paternò, Annelise Mark Pejtersen (2010)  The Effect of Age, Gender, and Previous Gaming Experience on Game Play Performance   293-296 Springer Boston  
Abstract: It is common sense that people don't play games that are too difficult for them. Game developers therefore need to understand the performance abilities of players. Several studies suggest a clear dissimilarity in video game playing abilities between different genders and age groups. In this paper, we report on a study investigating the impact of age, gender, and previous gaming experience on gameplay performance. The study explored the performance of 60 children aged 6-16 within three video games: Rock Band 2, Lego Star Wars and Kameo. The paper outlines a clear impact of age and gender, and a smaller impact of prior gaming experience, on the performance parameters score and game progression.
Notes:
Mona Erfani, Magy El-Nasr, David Milam, Bardia Aghabeigi, Beth Lameman, Bernhard E Riecke, Hamid Maygoli, Sang Mah (2010)  The Effect of Age, Gender, and Previous Gaming Experience on Customization activities within games    
Abstract: Understanding players and their game playing behavior is a growing area of research that is currently being explored by many game companies, including Electronic Arts, Hobbo Entertainment, and XEODesign. In this paper, we report on a study we conducted to understand the influence of age, gender, and previous gaming experience on the in-game customization activities of players in the younger age group. We note that player behavior in such scenarios can be used to improve game design and may lead to new player models. The results show significant differences between genders within our sample in the type of activities and items used within character and level customization. We also found some correlations between previous gaming experience and the strategies users take to customize their levels, as well as the type of customization activities they engage in. These results are discussed as they contribute several design lessons for the design of games or virtual worlds that involve customization.
Notes:
K Seaborn, B E Riecke, A N Antle (2010)  Exploring the interplay of visual and haptic modalities in a pattern-matching task   61-66 Piscataway, NJ  
Abstract: It is not well understood how working memory deals with coupled haptic and visual presentation modes. Present theoretical understandings of human cognition indicate that these modes are processed by the visuo-spatial sketchpad. If this is accurate, then there may be no efficiency in distributing information between the haptic and visual modalities in situations of visual overload [1]. However, this needs to be empirically explored. In this paper, we describe an evaluation of human performance in a pattern-matching task involving a fingertip interface that can present both haptic and visual information. Our purpose was to explore the interplay of visual and haptic processing in working memory, in particular how presentation mode affects performance. We designed a comparative study involving a pattern-matching task. Users were presented with a sequence of two patterns through different modalities using a fingertip interface and asked to differentiate between them. While no significant difference was found between the visual and visual+haptic presentation modes, the results indicate a strong partiality for the coupling of visual and haptic modalities. This suggests that working memory is not hampered by using both visual and haptic channels, and that recall may be strengthened by dual-coding of visual and haptic modes.
Notes:
Bernhard E Riecke, Daniel Feuereissen, John J Rieser (2010)  Spatialized sound influences biomechanical self-motion illusion ("vection")   158-158  
Abstract: Although moving auditory cues have long been known to induce self-motion illusions ("circular vection") in blindfolded participants, little is known about how spatial sound can facilitate or interfere with vection induced by other non-visual modalities like biomechanical cues. To address this issue, biomechanical circular vection was induced in seated, stationary participants by having them step sideways along a rotating floor ("circular treadmill") turning at 60°/s (see Fig. 1, top). Three research hypotheses were tested by comparing four different sound conditions in combination with the same biomechanical vection-inducing stimulus (see Fig. 1, bottom).
Notes: http://doi.acm.org/10.1145/1836248.1836280
Vinu S Rajus, Robert Woodbury, Halil I Erhan, B E Riecke, V Mueller (2010)  Collaboration in Parametric Design : Analyzing User Interaction during Information Sharing   320-326  
Abstract: Designers work in groups. They need to share information either synchronously or asynchronously as they work with parametric modeling software, as with all computer-aided design tools. Receiving information from collaborators while working may intrude on their work and thought processes. Little research exists on how the reception of design updates influences designers in their work, and we know equally little about designer preferences for collaboration. In this paper, we examine how sharing and receiving design updates affects designers' performance and preferences. We present a system prototype to share changes on demand or in continuous mode while performing design tasks. A pilot study measuring the preferences of nine pairs of designers for different combinations of control modes and design tasks shows statistically significant differences between the task types and control modes. The type of task affects users' preferences among control modes. Unexpectedly, the control modes users preferred did not match those that yielded the best task performance times.
Notes:
Adam Hoyle, Etienne Naugle, Anton Brosas, Siamak Arzanpour, Gary Wang, Bernhard E Riecke (2010)  Two-Axis Circular Treadmill for Human Perception and Behaviour Research in Virtual Environments   1-65  
Abstract: This project required the design and implementation of a two-axis circular treadmill for research on human perception and behaviour in virtual environments. A close partnership between the client and the engineering team ensured that the platform was designed to meet current experimental needs while remaining flexible enough to accommodate future ones. An integrated, multidisciplinary design approach was used to create an unrivalled motion platform on which a variety of planned and future experiments can be conducted. This approach combined elements of mechanical, systems, control, and electronics engineering to design and realize the structural platform, drive train, control system, and control electronics. The design has been fabricated and is undergoing testing. The treadmill is expected to be displayed at the CSME Forum 2011.
Notes:
2009
Dinara Moura, B E Riecke (2009)  Is seeing a virtual environment like seeing the real thing?   131-131 ACM  
Abstract: Immersive virtual environments (IVE) are increasingly used both in fundamental research such as experimental psychology and in applications such as training, phobia therapy, or entertainment. Ideally, people should be able to perceive and behave in such IVEs as naturally and effectively as in real environments --- especially if real-world transfer is desired. As humans are an inherently mobile species, enabling natural spatial orientation and cognition in IVEs is essential. Here, we investigated whether seeing a virtual environment has a similar effect on our spatial cognition and mental spatial representation as a comparable real-world stimulus does -- if it does not, how could we assume real-world transfer?
Notes:
J Bizzocchi, B Youssef, B Quan, W Suzuki, M Bagheri, B E Riecke (2009)  Re:Cycle - a Generative Ambient Video Engine   1-7  
Abstract: Re:Cycle is a generative ambient video art piece based on nature imagery captured in the Canadian Rocky Mountains. Ambient video is designed to play in the background of our lives. An ambient video work is difficult to create - it can never require our attention, but must always reward attention when offered. A central aesthetic challenge for this form is that it must also support repeated viewing. Re:Cycle relies on a generative recombinant strategy for ongoing variability, and therefore a higher re-playability factor. It does so through the use of two random-access databases: one database of video clips, and another of video transition effects. The piece will run indefinitely, joining clips and transitions from the two databases in randomly varied combinations. Generative ambient video is an art form that draws upon the continuing proliferation and increased sophistication of technology as a supporting condition. Ambient video benefits from the ongoing distribution of ever-larger and improved video screens. Generative ambient video is more easily realized within a culture where computation, like the large video screen, is also becoming more ubiquitous. A series of related creative decisions gave Re:Cycle its final shape. These decisions all wrestled with variations on a single problem: how to find an appropriate balance between aesthetic control on the one hand, and variability/re-playability on the other. The paper concludes with a description of future work to be done on the project, including the use of metadata to improve video flow and sequencing coherence.
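The recombinant strategy described above reduces to a small loop. The following sketch is purely illustrative (it is not the authors' engine, and the clip and transition names are invented): it draws independently from a clip database and a transition database, so the sequence can run indefinitely without repeating a fixed order.

    import random

    # Hypothetical stand-ins for the two random-access databases.
    clips = ["glacier_pan", "river_close", "ridge_clouds", "forest_light"]
    transitions = ["crossfade", "luma_wipe", "soft_iris", "dissolve"]

    def recycle_player():
        """Endlessly yield randomly chosen clips joined by random transitions."""
        first = True
        while True:
            if not first:
                yield ("transition", random.choice(transitions))
            yield ("clip", random.choice(clips))
            first = False

    # Example: preview the first five steps of the generated sequence.
    player = recycle_player()
    for _ in range(5):
        print(next(player))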
Notes:
B E Riecke, Pooya Amini Behbahani, Chris D Shaw (2009)  Display size does not affect egocentric distance perception of naturalistic stimuli   15-18 ACM  
Abstract: Although people are quite accurate in visually perceiving absolute egocentric distances in real environments up to 20m, they usually underestimate distances in virtual environments presented through head-mounted displays (HMDs). Several previous studies examined different potential factors, but none of these factors could convincingly explain the observed distance compression in HMDs. In this study, we investigated the potential influence of naturalistic stimulus presentation and display size -- a factor largely overlooked in previous studies. To this end, we used an indirect blindfolded walking task to previously-seen targets. Participants viewed photos of targets located at various distances on the ground through different-sized displays (HMD, 24" monitor, and 50" screen) and then walked without vision to where they thought the target was located. Real-world photographs were used to avoid potential artifacts of computer-graphics stimuli. Displays were positioned to provide identical fields of view (32° x 24°). Distance judgments were unexpectedly highly accurate and showed no signs of distance compression for any of the displays. Moreover, display size did not affect distance perception, and performance was virtually identical to a real-world baseline, where real-world targets were viewed through 32° x 24° field-of-view restrictors. A careful analysis of potential underlying factors suggests that the typically-observed distance compression for HMDs might be overcome by using naturalistic real-world stimuli. This might also explain why display size did not affect distance judgments.
Notes:
2008
Bernhard E Riecke, Daniel Feuereissen, John J Rieser (2008)  Auditory self-motion illusions ("circular vection") can be facilitated by vibrations and the potential for actual motion   147-154  
Abstract: It has long been known that sound fields rotating around a stationary, blindfolded observer can elicit self-motion illusions ("circular vection") in 20--60% of participants. Here, we investigated whether auditory circular vection might depend on whether participants sense and know that actual motion is possible or impossible. Although participants in auditory vection studies are often seated on moveable seats to suspend the disbelief of self-motion, it has never been investigated whether this does, in fact, facilitate vection. To this end, participants were seated on a hammock chair with their feet either on solid ground ("movement impossible" condition) or suspended ("movement possible" condition) while listening to individualized binaural recordings of two sound sources rotating synchronously at 60°/s. In addition, hardly noticeable vibrations were applied in half of the trials. Auditory circular vection was elicited in 8/16 participants. For those, adding vibrations enhanced vection in all dependent measures. Not touching solid ground increased the intensity of self-motion and the feeling of actually rotating in the physical room. Vection onset latency and the percentage of trials where vection was elicited were only marginally significantly (p<.10) affected, though. Together, this suggests that auditory self-motion illusions can be stronger when one senses and knows that physical motion might, in fact, be possible (even though participants always remained stationary). Furthermore, there was a benefit both of adding vibrations and having one's feet suspended. These results have important implications both for our theoretical understanding of self-motion perception and for the applied field of self-motion simulations, where both vibrations and the cognitive/perceptual framework that actual motion is possible can typically be provided at minimal cost and effort.
Notes: http://doi.acm.org/10.1145/1394281.1394309
Peng Peng, Bernhard E Riecke, Betsy Williams, Timothy P McNamara, Bobby Bodenheimer (2008)  Navigation modes in virtual environments : walking vs. joystick   192-192  
Abstract: There is considerable evidence that people have difficulty maintaining orientation in virtual environments. This difficulty is usually attributed to poor idiothetic cues, such as the absence of proprioception and other sources of information provided by self-locomotion. The lack of proprioceptive cues presents a strong argument against the use of a joystick interface, and the importance of full physical movement for navigation tasks has also recently been confirmed by Ruddle and Lessels [2006], who showed that subjects performed a navigational task better when allowed to walk freely than when they could only physically rotate themselves or only move virtually. Our study seeks to confirm the results of Ruddle and Lessels.
Notes: http://doi.acm.org/10.1145/1394281.1394321
2007
Wataru Teramoto, Bernhard E Riecke (2007)  Physical self-motion facilitates object recognition, but does not enable view-independence   142-142  
Abstract: It is well known that people have difficulties in recognizing an object from novel views as compared to learned views, resulting in increased response times and/or errors. This so-called view-dependency has been confirmed by many studies. In the natural environment, however, there are two ways of changing views of an object: one is to rotate the object in front of a stationary observer (object-movement); the other is for the observer to move around a stationary object (observer-movement). Note that almost all previous studies are based on the former procedure. Simons et al. [2002] criticized previous studies in this regard and examined the difference between object- and observer-movement directly. They reported the elimination of this view-dependency when novel views resulted from observer-movement instead of object-movement, and suggested a contribution of extra-retinal (vestibular and proprioceptive) information to object recognition. Recently, however, Zhao et al. [2007] reported that the observer's movement from one view to another only decreased view-dependency without fully eliminating it. Furthermore, even this effect vanished for rotations of 90° instead of 50°; larger rotations were not tested. The aim of the present study was to clarify the underlying mechanism of this phenomenon and to investigate larger angles of view change (45-180°, in 45° steps).
Notes: http://doi.acm.org/10.1145/1272582.1272619
B E Riecke, J M Wiener (2007)  Can People Not Tell Left from Right in VR? : Point-to-origin Studies Revealed Qualitative Errors in Visual Path Integration   3-10  
Abstract: Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality video projection with an 84°×63° field of view, participants' overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real-world experiments. Taken together, this study suggests that even an immersive high-quality video projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.
Notes: http://dx.doi.org/10.1109/VR.2007.352457
B E Riecke, J M Wiener (2007)  Consistent Left-Right Errors for Visual Path Integration in Virtual Reality : More Than a Failure to Update One's Heading?   139-139 ACM Press  
Abstract: Optic flow is known to enable humans to estimate heading, translations, and rotations. Here, we investigated whether optic flow simulating self-motions in virtual reality might also enable natural and intuitive spatial orientation, without the need for error-corrective feedback or training. After visually displayed passive excursions along 1- or 2-segment paths, participants had to point toward the starting point "as accurately and quickly as possible". Turning angles were announced in advance to obviate encoding errors due to misperceived turning angles. Nevertheless, many participants still produced surprisingly large systematic and random errors, and perceived task difficulty and response times were unexpectedly high. Moreover, 11 of the 24 participants showed consistent qualitative errors, namely left-right reversals – despite not misinterpreting the visually simulated motion direction. Careful analysis suggests that some, but not all, of the left-right inversions can be explained by a failure to update visually displayed heading changes. Left-right inversion was correlated with reduced mental spatial ability (corroborating earlier results), but not gender. In conclusion, optic flow was clearly insufficient for enabling natural and intuitive spatial orientation or automatic spatial updating, even when advance information about turning angles was provided. We posit that investigating qualitative errors for basic spatial orientation tasks using, e.g., point-to-origin paradigms can be a powerful tool for benchmarking VR setups from a human-centered perspective.
Notes:
2006
B E Riecke (2006)  Simple user-generated motion cueing can enhance self-motion perception (Vection) in virtual reality   104-107 ACM  
Abstract: Despite amazing advances in the visual quality of virtual environments, affordable-yet-effective self-motion simulation still poses a major challenge. Using a standard psychophysical paradigm, the effectiveness of different self-motion simulations was quantified in terms of the onset latency, intensity, and convincingness of the perceived illusory self-motion (vection). Participants were asked to actively follow different pre-defined trajectories through a naturalistic virtual scene presented on a panoramic projection screen using three different input devices: a computer mouse, a joystick, or a modified manual wheelchair. For the wheelchair, participants exerted their own minimal motion cueing using a simple force-feedback and velocity-control paradigm: small translational or rotational motions of the wheelchair (limited to 8cm and 10°, respectively) initiated a corresponding visual motion, with the visual velocity being proportional to the wheelchair deflection (similar to a joystick). All dependent measures showed a clear enhancement of the perceived self-motion when the wheelchair was used instead of the mouse or joystick. Compared to more traditional approaches of enhancing self-motion perception (e.g., motion platforms, free walking areas, or treadmills), the current approach of simple user-generated motion cueing has only minimal requirements in terms of overall costs, required space, safety features, and technical effort and expertise. Thus, the current approach might be promising for a wide range of low-cost applications.
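The velocity-control mapping described above is simple enough to sketch. The fragment below is illustrative only (not the authors' implementation; the deflection limits come from the abstract, while the gain constants are assumptions): wheelchair deflection is clamped to the 8cm / 10° limits and mapped to a proportional visual velocity, joystick-style.

    # Illustrative sketch of the user-generated motion cueing described above.
    # Deflection limits come from the abstract; the gains are hypothetical.
    MAX_TRANS_M = 0.08     # translational deflection limit (8 cm)
    MAX_ROT_DEG = 10.0     # rotational deflection limit (10 degrees)
    GAIN_TRANS = 10.0      # assumed gain: (m/s of visual motion) per m of deflection
    GAIN_ROT = 6.0         # assumed gain: (deg/s of visual rotation) per deg of deflection

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    def visual_velocity(deflection_m, deflection_deg):
        """Map wheelchair deflection to visual (translation, rotation) velocity."""
        t = clamp(deflection_m, -MAX_TRANS_M, MAX_TRANS_M)
        r = clamp(deflection_deg, -MAX_ROT_DEG, MAX_ROT_DEG)
        return GAIN_TRANS * t, GAIN_ROT * r

    # Example: a 4 cm forward push yields 0.4 m/s of simulated forward motion.
    print(visual_velocity(0.04, 0.0))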
Notes:
B E Riecke, J M Wiener (2006)  Point-to-origin experiments in VR revealed novel qualitative errors in visual path integration   156-156  
Abstract: Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality video projection with an 84°×63° field of view, participants' overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real-world experiments. Taken together, this study suggests that even an immersive high-quality video projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.
Notes: http://doi.acm.org/10.1145/1179622.1179840
2005
B E Riecke, D Västfjäll, P Larsson, J Schulte-Pelkum (2005)  Top-Down and Multi-Modal Influences on Self-Motion Perception in Virtual Reality   1-10  
Abstract: INTRODUCTION: Much of the work on self-motion perception and simulation has investigated the contribution of physical stimulus properties (so-called bottom-up factors). This paper provides an overview of recent experiments demonstrating that illusory self-motion perception can also benefit from top-down mechanisms, e.g., expectations, the interpretation and meaning associated with the stimulus, and the resulting spatial presence in the simulated environment. METHODS: Several VR setups were used as a means to independently control different sensory modalities, thus allowing for well-controlled and reproducible psychophysical experiments. Illusory self-motion perception (vection) was induced using rotating visual or binaural auditory stimuli, presented via a curved projection screen (FOV: 54°×40.5°) or headphones, respectively. Additional vibrations, subsonic sound, or cognitive frameworks were applied in some trials. Vection was quantified in terms of onset time, intensity, and convincingness ratings. RESULTS & DISCUSSION: Auditory vection studies showed that sound sources participants associated with stationary acoustic landmarks (e.g., a fountain) can significantly increase the effectiveness of the self-motion illusion, as compared to sound sources that are typically associated with moving objects (like the sound of footsteps). A similar top-down effect was observed in a visual vection experiment: showing a rotating naturalistic scene in VR improved vection considerably compared to scrambled versions of the same scene. Hence, the possibility to interpret the stimulus as a stationary reference frame seems to enhance self-motion perception, which challenges the prevailing opinion that self-motion perception is primarily bottom-up driven. Even the mere knowledge that one might potentially be moved physically increased the convincingness of the self-motion illusion significantly, especially when additional vibrations supported the interpretation that one was really moving. CONCLUSIONS: Various top-down mechanisms were shown to increase the effectiveness of self-motion simulations in VR, even though they have received little attention in the literature up to now. Thus, we posit that a perceptually-oriented approach that combines both bottom-up and top-down factors will ultimately enable us to optimize self-motion simulations in terms of both effectiveness and costs.
Notes:
B E Riecke, J Schulte-Pelkum, F Caniard, H H Bülthoff (2005)  Influence of Auditory Cues on the visually-induced Self-Motion Illusion (Circular Vection) in Virtual Reality   49-57  
Abstract: This study investigated whether the visually induced self-motion illusion ('circular vection') can be enhanced by adding a matching auditory cue (the sound of a fountain that is also visible in the visual stimulus). Twenty observers viewed rotating photorealistic pictures of a market place projected onto a curved projection screen (FOV: 54°×45°). Three conditions were randomized in a repeated measures within-subject design: no sound, mono sound, and spatialized sound using a generic head-related transfer function (HRTF). Adding mono sound increased convincingness ratings marginally, but did not affect any of the other measures of vection or presence. Spatializing the fountain sound, however, improved vection (convincingness and vection buildup time) and presence ratings significantly. Note that facilitation was found even though the visual stimulus was of high quality and realism, and known to be a powerful vection-inducing stimulus. Thus, HRTF-based auralization using headphones can be employed to improve visual VR simulations both in terms of self-motion perception and overall presence. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
Notes:
B E Riecke, J Schulte-Pelkum, H H Bülthoff (2005)  Perceiving Simulated Ego-Motions in Virtual Reality - Comparing Large Screen Displays with HMDs   344-355  
Abstract: In Virtual Reality, considerable systematic spatial orientation problems frequently occur that do not happen in comparable real-world situations. This study investigated possible origins of these problems by examining the influence of visual field of view (FOV) and type of display device (head-mounted display (HMD) vs. projection screens) on basic human spatial orientation behavior. In Experiment 1, participants had to reproduce traveled distances and to turn specified target angles in a simple virtual environment without any landmarks that was projected onto a 180° half-cylindrical projection screen. As expected, distance reproduction performance showed only small systematic errors. Turning performance, however, was unexpectedly almost perfect (gain=0.97), with negligible systematic errors and minimal variability, which is unprecedented in the literature. In Experiment 2, turning performance was compared between a projection screen (FOV 84°×63°), an HMD (40°×30°), and blinders (40°×30°) that restricted the FOV on the screen. Performance was best with the screen (gain 0.77) and worst with the HMD (gain 0.57). We found a significant difference between blinders (gain 0.73) and HMD, which indicates that different display devices can influence ego-motion perception differentially, even if the physical FOVs are equal. We conclude that the type of display device (HMD vs. curved projection screen) seems to be more critical than the FOV for the perception of ego-rotations. Furthermore, large, curved projection screens yielded better performance than HMDs. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
Notes:
Betty J Mohler, William B Thompson, Bernhard Riecke, Heinrich H Bülthoff (2005)  Measuring vection in a large screen virtual environment   103-109  
Abstract: This paper describes the use of a large screen virtual environment to induce the perception of translational and rotational self-motion. We explore two aspects of this problem. Our first study investigates how the level of visual immersion (seeing a reference frame) affects subjective measures of vection. For visual patterns consistent with translation, self-reported subjective measures of self-motion were increased when the floor and ceiling were visible outside of the projection area. When the visual patterns indicated rotation, the strength of the subjective experience of circular vection was unaffected by whether or not the floor and ceiling were visible. We also found that circular vection induced by the large screen display was reported subjectively more compelling than translational vection. The second study we present describes a novel way in which to measure the effects of displays intended to produce a sense of vection. It is known that people unintentionally drift forward if asked to run in place while blindfolded and that adaptations involving perceived linear self-motion can change the rate of drift. We showed for the first time that there is a lateral drift following perceived rotational self-motion and we added to the empirical data associated with the drift effect for translational self-motion by exploring the condition in which the only self-motion cues are visual.
Notes: http://doi.acm.org/10.1145/1080402.1080421
Bernhard E Riecke, Jörg Schulte-Pelkum, Franck Caniard, Heinrich H Bülthoff (2005)  Towards Lean and Elegant Self-Motion Simulation in Virtual Reality   131-138  
Abstract: Despite recent technological advances, convincing self-motion simulation in Virtual Reality (VR) is difficult to achieve, and users often suffer from motion sickness and/or disorientation in the simulated world. Instead of trying to simulate self-motions with physical realism (as is often done for, e.g., driving or flight simulators), we propose in this paper a perceptually oriented approach towards self-motion simulation. Following this paradigm, we performed a series of psychophysical experiments to determine essential visual, auditory, and vestibular/tactile parameters for an effective and perceptually convincing self-motion simulation. These studies are a first step towards our overall goal of achieving lean and elegant self-motion simulation in Virtual Reality (VR) without physically moving the observer. In a series of psychophysical experiments about the self-motion illusion (circular vection), we found that (i) vection as well as presence in the simulated environment is increased by a consistent, naturalistic visual scene when compared to a sliced, inconsistent version of the identical scene, (ii) barely noticeable marks on the projection screen can increase vection as well as presence in an unobtrusive manner, (iii) physical vibrations of the observer's seat can enhance the vection illusion, and (iv) spatialized 3D audio cues embedded in the simulated environment increase the sensation of self-motion and presence. We conclude that providing consistent cues about self-motion to multiple sensory modalities can enhance vection, even if physical motion cues are absent. These results yield important implications for the design of lean and elegant self-motion simulators.
Notes:
Bernhard E Riecke, Jörg Schulte-Pelkum, Marios N Avraamides, Markus von der Heyde, Heinrich H Bülthoff (2005)  Scene consistency and spatial presence increase the sensation of self-motion in virtual reality   111-118  
Abstract: The illusion of self-motion induced by moving visual stimuli ("vection") has typically been attributed to low-level, bottom-up perceptual processes. Therefore, past research has focused primarily on examining how physical parameters of the visual stimulus (contrast, number of vertical edges etc.) affect vection. Here, we investigated whether higher-level cognitive and top-down processes - namely global scene consistency and spatial presence - also contribute to the illusion. These factors were indirectly manipulated by presenting either a natural scene (the Tübingen market place) or various scrambled and thus globally inconsistent versions of the same stimulus. Due to the scene scrambling, the stimulus could no longer be perceived as a consistent 3D scene, which was expected to decrease spatial presence and thus impair vection. Twelve naive observers were asked to indicate the onset, intensity, and convincingness of circular vection induced by rotating visual stimuli presented on a curved projection screen (FOV: 54°×45°). Spatial presence was assessed using presence questionnaires. As predicted, scene scrambling impaired both vection and presence ratings for all dependent measures. Neither type nor severity of scrambling, however, showed any clear effect. The data suggest that higher-level information (the interpretation of the globally consistent stimulus as a 3D scene and stable reference frame) dominated over the low-level (bottom-up) information (more contrast edges in the scrambled stimuli, which are known to facilitate vection). Results suggest a direct relation between spatial presence and self-motion perception. We posit that stimuli depicting globally consistent, naturalistic scenes provide observers with a convincing spatial reference frame for the simulated environment which allows them to feel "spatially present" therein. We propose that this, in turn, increases the believability of the visual stimuli as a stable "scene" with respect to which visual motion is more likely to be judged as self-motion. We propose that not only low-level, bottom-up factors, but also higher-level factors such as the meaning of the stimulus are relevant for self-motion perception and should thus receive more attention. This work has important implications for both our understanding of self-motion perception and motion simulator design and applications.
Notes:
2004
B E Riecke, J Schulte-Pelkum, M N Avraamides, H H Bülthoff (2004)  Enhancing the Visually Induced Self-Motion Illusion (Vection) under Natural Viewing Conditions in Virtual Reality   125-132  
Abstract: The visually induced illusion of ego-motion (vection) is known to be facilitated by both static fixation points [1] and foreground stimuli that are perceived to be stationary in front of a moving background stimulus [2]. In this study, we found that hardly noticeable marks in the periphery of a projection screen can have similar vection-enhancing effects, even without fixating or suppressing the optokinetic reflex (OKR). Furthermore, vection was facilitated even though the marks had no physical depth separation from the screen. Presence ratings correlated positively with vection, and seemed to be mediated by the ego-motion illusion. Interestingly, the involvement/attention aspect of overall presence was more closely related to vection onset times, whereas spatial presence-related aspects were more tightly related to convincingness ratings. This study yields important implications for both presence theory and motion simulator design and applications, where one often wants to achieve convincing ego-motion simulation without restricting eye movements artificially. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
Notes: 10.1.1.122.5636
Bernhard E Riecke, Heinrich H Bülthoff (2004)  Spatial updating in real and virtual environments : contribution and interaction of visual and vestibular cues   9-17  
Abstract: INTRODUCTION: When we move through the environment, the self-to-surround relations constantly change. Nevertheless, we perceive the world as stable. A process that is critical to this perceived stability is "spatial updating", which automatically updates our egocentric mental spatial representation of the surround according to our current self-motion. According to the prevailing opinion, vestibular and proprioceptive cues are absolutely required for spatial updating. Here, we challenge this notion by varying visual and vestibular contributions independently in a high-fidelity VR setup. METHODS: In a learning phase, participants learned the positions of twelve targets attached to the walls of a 5x5m room. In the testing phase, participants saw either the real room or a photo-realistic copy presented via a head-mounted display (HMD). Vestibular cues were applied using a motion platform. Participants' task was to point "as accurately and quickly as possible" to four targets announced consecutively via headphones after rotations around the vertical axis into different positions. RESULTS: Automatic spatial updating was observed whenever useful visual information was available: Participants had no problem mentally updating their orientation in space, irrespective of turning angle. Performance, quantified as response time, configuration error, and pointing error, was best in the real world condition. However, when the field of view was limited via cardboard blinders to match that of the HMD (40°×30°), performance decreased and was comparable to the HMD condition. Presenting turning information only visually (through the HMD) hardly altered those results. In both the real world and HMD conditions, spatial updating was obligatory in the sense that it was significantly more difficult to ignore ego-turns (i.e., "point as if not having turned") than to update them as usual. CONCLUSION: The rapid pointing paradigm proved to be a useful tool for quantifying spatial updating. We conclude that, at least for the limited turning angles used (<60°), the Virtual Reality simulation of ego-rotation was as effective and convincing (i.e., hard to ignore) as its real world counterpart, even when only visual information was presented. This has relevant implications for the design of motion simulators for, e.g., architecture walkthroughs.
Notes:
2002
B E Riecke, M von der Heyde, Heinrich H Bülthoff (2002)  Spatial updating in virtual environments : What are vestibular cues good for   421-421  
Abstract: When we turn ourselves, our sensory inputs somehow turn the "world inside our head" accordingly so as to stay in alignment with the outside world. This "spatial updating" occurs automatically, without conscious effort, and is normally "obligatory" (i.e., cognitively impenetrable and hard to suppress). We pursued two main questions here: 1) Which cues are sufficient to initiate obligatory spatial updating? 2) Under what circumstances do vestibular cues become important? STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented via a curved projection screen (84°×63° FOV). For vestibular stimulation, subjects were seated on a Stewart motion platform. TASK: Subjects were rotated consecutively to random orientations and asked to point "as accurately and quickly as possible" to 4 out of 22 previously-learned targets. Targets were announced consecutively via headphones and chosen to be outside of the current FOV. Photo-realistic visual stimuli from a well-known environment including an abundance of salient landmarks allowed accurate spatial updating (mean absolute pointing error, pointing variability, and response time were 16.5°, 17.0°, and 1.19s, respectively). Moreover, those stimuli triggered spatial updating even when participants were asked to ignore turn cues and "point as if not having turned" (32.9°, 27.5°, 1.67s, respectively). Removing vestibular turn cues did not alter performance significantly. This result conflicts with the prevailing opinion that vestibular cues are required for proper updating of ego-turns. We did find that spatial updating benefitted from vestibular cues when visual turn information was degraded to a mere optic flow pattern. Under all optic flow conditions, however, spatial updating was impaired and no longer obligatory. We conclude that "good" visual landmarks can initiate obligatory spatial updating and overcome the visuo-vestibular cue conflict. SUPPORT: Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550)
Notes:
M von der Heyde, B E Riecke, F R Gouveia (2002)  Embedding presence-related terminology in a logical and functional model   37-52  
Abstract: In this paper, we introduce first steps towards a logically consistent framework describing and relating items concerning the phenomena of spatial presence, spatial orientation, and spatial updating. Spatial presence can be regarded as the consistent feeling of being in a specific spatial context, and intuitively knowing where one is with respect to the immediate surround. The core idea is to try to understand presence-related issues by analyzing their logical and functional relations. This is done by determining necessary and/or sufficient conditions between related items. This eventually leads to a set of necessary prerequisites and sufficient conditions for spatial presence, spatial orientation, and spatial updating. More specifically, the logical structure of our framework allows for novel ways of quantifying spatial presence and spatial updating.
Notes:
2001
M von der Heyde, B E Riecke, D W Cunningham, H H Bülthoff, K Nakayama et al (2001)  No Visual Dominance for Remembered Turns - Psychophysical Experiments on the Integration of Visual and Vestibular Cues in Virtual Reality    
Abstract: In most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked whether: (a) both visual and vestibular information are stored and can be reproduced later; and (b) those modalities are integrated into one coherent percept, or the memory is modality specific. We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation. Hence the path was solely defined by vestibular (and proprioceptive) information. The subjects' task was to continuously adjust the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were independently varied. Subjects learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns. This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since the modality with the bigger gain factor had a dominant effect on the reproduced turns, the integration of visual and vestibular information seems to follow a "max rule", in which the larger signal is responsible for the perceived and memorized heading change. von der Heyde, M., Riecke, B.E., Cunningham, D.W., & Bülthoff, H.H. (2001). No visual dominance for remembered turns - Psychophysical experiments on the integration of visual and vestibular cues in virtual reality [Abstract]. Journal of Vision, 1(3):188, 188a, http://journalofvision.org/1/3/188/, doi:10.1167/1.3.188.
Notes:
M von der Heyde, B E Riecke, D W Cunningham, H H Bülthoff, K Nakayama et al (2001)  Visual-Vestibular Sensor Integration Follows a Max-Rule : Results from Psychophysical Experiments in Virtual Reality   142-142  
Abstract: Perception of ego turns is crucial for navigation and self-localization. Yet in most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked whether: (a) both visual and vestibular information are stored and can be reproduced later; and (b) those modalities are integrated into one coherent percept, or the memory is modality specific. In the following experiment, subjects learned and memorized turns and were able to reproduce them even with different gain factors for the vestibular and visual feedback. We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation. Hence the path was solely defined by vestibular (and proprioceptive) information. One group of subjects continuously adjusted the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. The other group was passively guided through the sequence of heading turns without any roll signal. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were independently varied by a factor of 1/sqrt(2), 1, or sqrt(2). Subjects from both groups learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns. This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since the modality with the bigger gain factor had a dominant effect on the reproduced turns in both groups, the integration of visual and vestibular information seems to follow a "max rule", in which the larger signal is responsible for the perceived and memorized heading change.
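The "max rule" summarized above can be stated compactly. The notation below is ours, not the authors': u denotes the joystick-commanded turn during reproduction, and g_vis and g_vest the independently varied gains that map u onto the visual and vestibular turn signals. The rule says the perceived (and memorized) heading change follows the larger of the two signals:

    \hat{\theta}_{\mathrm{perceived}} = \max(g_{\mathrm{vis}}, g_{\mathrm{vest}}) \cdot u

For example, with g_vis = sqrt(2) and g_vest = 1, the reproduced turn amplitude would be governed by the visual signal alone, consistent with the gain-dominance pattern reported above.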
Notes:

Other

2011
(2011)  Do Virtual and Real Environments Influence Spatial Cognition Similarly?    
Abstract: Given the increasing use of virtual environments in both research and industry, it is important to understand whether virtual reality actually affords spatial perception, cognition, and behavior similar to the real world. Using a judgment of relative direction (JRD) task, Riecke and McNamara (Psychonomics 2007) demonstrated orientation-specific interference between participants' physical orientation in an empty test room and their to-be-imagined orientation in a previously learned room of similar geometry. To investigate whether the same interference can be observed in VR, we replicated the previous procedure but added a "virtual" test condition in which participants performed the same JRD task in a photorealistic virtual replica of the real test room, displayed using an immersive custom-built Wheatstone stereoscope (2560x1600 pixel/eye). While some participants showed the expected effect in the real but not the virtual environment, unexpectedly we also observed the reverse. We are currently running control studies to investigate potential underlying factors.
Notes:
(2011)  Sonic Cradle; Project Exhibition in Chronic Pain : Art & Science Collaborations    
Abstract: Sonic Cradle (project), exhibited in Chronic Pain: Art & Science Collaborations, California NanoSystems Institute (CNSI), UCLA, September 29 – November 30, 2011.
Notes:
2010
(2010)  Brain dynamics associated with navigation in 3-D space    
Abstract: Spatial navigation is a complex task that requires the integration of multisensory information about the navigator's movement in space based on distinct reference frames (e.g., egocentric or allocentric reference frames). Even though different reference frames contribute to successful spatial orienting, several factors influence the use of one or the other system. One factor is an individual proclivity to use distinct reference frames during spatial orienting [1-4]. Most studies on spatial navigation investigated spatial orienting in 2-D space, demonstrating that visual flow information on heading changes is sufficient to update one's position and orientation. In contrast, updating yaw and pitch rotations based on visual flow seems to be severely limited [5]. However, no study has hitherto investigated the influence of individual proclivities in using distinct reference frames during orienting in three-dimensional virtual environments. Here, we investigated the homing performance of subjects preferentially using an egocentric or an allocentric reference frame during passages through three-dimensional space with heading changes in yaw and pitch. The subjects' task was to maintain orientation during passages through star fields with heading changes in the horizontal or the vertical plane, unpredictable on any given trial. At the end of a passage, subjects had to adjust a homing arrow so as to point back to the origin of their passage. The angles of heading changes in yaw and pitch were systematically varied, with angles of 25°, 50°, 75°, or 90° (up/down, left/right). High-density EEG was recorded continuously and analyzed using Independent Component Analysis (ICA) and subsequent clustering of independent components (ICs). Subjects preferentially using an egocentric or an allocentric reference frame revealed comparable homing accuracy for heading changes in pitch and yaw. Importantly, approximately half of the subjects with a proclivity for an egocentric reference frame used the preferred frame for horizontal heading changes only, but switched to an allocentric reference frame for heading changes in the pitch plane. The brain dynamics underlying spatial orienting for horizontal heading changes replicated previous results, with increased activity in a widespread cortical network during the passage [6]. Heading changes in pitch were associated with increased activity within a comparable network, with differences in task-related spectral perturbations as compared to heading changes in yaw. We will discuss common patterns and differences in brain dynamics associated with the use of distinct reference frames for heading changes in pitch and yaw. [1] Gramann, K., El Sharkawy, J., & Deubel, H. (2009). Eye-movements during navigation in a virtual tunnel. Int J Neurosci 119(10), 1755-1778. [2] Gramann, K., Muller, H.J., Eick, E.M., & Schonebeck, B. (2005). Evidence of separable spatial representations in a virtual navigation task. J Exp Psychol Hum Percept Perform 31(6), 1199-1223. [3] Riecke, B.E. (2008). Consistent left-right reversals for visual path integration in virtual reality: More than a failure to update one's heading? Presence-Teleop Virt 17(2), 143-175. [4] Iaria, G., Petrides, M., Dagher, A., Pike, B., & Bohbot, V.D. (2003). Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: variability and change with practice. J Neurosci 23(13), 5945-5952. [5] Vidal, M., Amorim, M.A., & Berthoz, A. (2004). Navigating in a virtual three-dimensional maze: how do egocentric and allocentric reference frames interact? Cognitive Brain Research 19(3), 244-258. [6] Gramann, K. et al. (2009). Human Brain Dynamics Accompanying Use of Egocentric and Allocentric Reference Frames during Navigation. J Cogn Neurosci.
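As a reading aid, the ICA step mentioned above can be sketched in a few lines. The fragment below is illustrative only (not the authors' pipeline; it assumes scikit-learn, and the channel/sample counts and component number are invented): it decomposes multi-channel EEG into independent component time courses plus per-component scalp maps, which could then be clustered across subjects.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Toy stand-in for continuous high-density EEG: 128 channels x 10000 samples.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((128, 10000))

    # FastICA expects (n_samples, n_features), so time points are treated as
    # samples and channels as features; each component gets one time course.
    ica = FastICA(n_components=32, random_state=0, max_iter=500)
    sources = ica.fit_transform(eeg.T)   # (10000, 32) component activations
    maps = ica.mixing_                   # (128, 32) scalp map per component
    # 'sources' and 'maps' would feed the cross-subject IC clustering step.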
Notes:
2009
(2009)  Comparing spatial perception/cognition in real versus immersive virtual environments - it doesn't compare!    
Abstract: Virtual reality (VR) is increasingly used in psychological research and applications, but does VR really afford natural human spatial perception/cognition, which is a prerequisite for effective spatial behavior? Using judgment of relative direction (JRD) tasks, Riecke and McNamara (2007, Abstracts of the Psychonomic Society) demonstrated orientation-specific interference between participants' actual orientation in an empty rectangular room and their to-be-imagined orientation in a previously learned room. To test whether VR yields similar interference, we replicated their study using a modified condition: we used an empty virtual (instead of real) test room presented on a 180°×150° video projection. After learning 15 target objects in a rectangular office, participants performed JRD tasks ("imagine facing X, point to Y") while facing different orientations in the virtual test room. Despite using identical procedures, seeing the virtual environment did not produce the same interference as a comparable real-world stimulus, suggesting qualitative differences in human spatial perception/cognition in real versus computer-simulated worlds.
Notes:
(2009)  Spatial perception and orientation in virtual environments – is virtual reality real enough?    
Abstract: While Virtual Reality (VR) offers many experimental advantages, including stimulus control and interaction in flexible, naturalistic multi-modal environments, there is mixed evidence as to whether humans perceive and behave in computer-simulated environments as they do in real ones. I will present and discuss several recent studies suggesting that care should be taken when using VR for perceptual/behavioral research: while naturalistic visual cues can, in principle, be sufficient to allow for real-world-like spatial orientation performance in VR, there is often a strong influence of both the content displayed (e.g., naturalism and cues available in VR) and the context (e.g., display type, size, and FOV).
Notes:
2007
(2007)  Similarity Between Room Layouts Causes Orientation-Specific Sensorimotor Interference in To-Be-Imagined Perspective Switches    
Abstract: May (2004) suggested that the difficulty of imagined perspective switches is partially caused by interference between the sensorimotor (actual) and to-be-imagined orientation. Here, we demonstrate a similar interference, even if participants are in a remote room and don't know their physical orientation with respect to the to-be-imagined orientation. Participants learned 15 target objects located in an office from one orientation (0°, 120°, or 240°). Participants were blindfolded and disoriented before being wheeled to an empty test room of similar geometry. Participants were seated facing 0°, 120°, or 240°, and were asked to perform judgments of relative direction (e.g., imagine facing "pen", point to "phone"). Performance was facilitated when participants' to-be-imagined orientation in the learning room was aligned with the corresponding orientation in the test room. This suggests that merely being in an empty room of similar geometry can be sufficient to automatically reanchor one's representation and thus produce orientation-specific interference.
Notes:
T Meilinger, B E Riecke, D Berger, H H Bülthoff (2007)  A novel immersive virtual environment setup for behavioural experiments in humans, tested on spatial memory for environmental spaces   http://www.kyb.mpg.de/publication.html?publ=4490  
Abstract: We present a summary of the development of a new virtual reality setup for behavioural experiments in the area of spatial cognition. Most previous virtual reality setups either cannot provide accurate body motion cues when participants move through a virtual environment, or hinder walking participants with the cables of a head-mounted display (HMD). Our new setup solves these issues by providing a large, fully trackable walking space in which a participant wearing an HMD can walk freely, without being tethered by cables. Two experiments on spatial memory, described here, tested this setup. The results suggest that environmental spaces traversed during wayfinding are memorised in a view-dependent way, i.e., in the local orientation in which they were experienced, and not with respect to a global reference direction.
Notes:
(2007)  Spatial Orientation in the Immediate Environment : How Can the Different Theories be Reconciled?    
Abstract: Recently, there has been an increasing interest in theories about human spatial memory and orientation (see, e.g., Burgess, 2006 for a recent review). There is, however, an apparent conflict between many of those theories that has yet to be resolved. Here, we outline a theoretical framework that aims at integrating two current theories of spatial orientation: May (2004) proposed that the difficulty of imagined perspective switches is caused, at least in part, by an interference between the sensorimotor and the to-be-imagined perspectives. Riecke & von der Heyde (2002) developed a theoretical framework that is based on a network of logical propositions (i.e., necessary and sufficient conditions). They proposed that automatic spatial updating can only occur if there is a consistency between the observer's concurrent egocentric reference frames (e.g., mediated by real world perception, virtual reality [VR], or imagined perspectives). We propose that the underlying processes are the same, in the sense that a consistency between egocentric representations (Riecke & von der Heyde, 2002) is equivalent to an absence of interference (May, 2004). Whenever the current egocentric representations of the immediate surroundings are consistent, there should be no interference. According to Riecke & von der Heyde (2002), this state enables automatic spatial updating. We propose that this lack of interference might also be able to explain other important phenomena, such as the relative ease of adopting a new perspective after being disoriented. Conversely, interference (inconsistency) between the primary, embodied egocentric representation and a to-be-imagined (e.g., experimentally instructed) egocentric representation implies the difficulty of adopting a new perspective. We posit that such interference or inconsistency also explains the difficulty people have in ignoring bodily rotations. To avoid the vagueness that purely verbally defined theories sometimes suffer from, we offer a well-defined graphical and structural representation of our framework. Integrating logical and information flow representations in one coherent framework not only provides a unified representation of previously seemingly isolated findings and theories, but also fosters a deeper understanding of the underlying processes and enables clear, testable predictions. [1] Burgess, N. (2006): Trends in Cognitive Sciences 10(12), 551-557 [2] May, M. (2004): Cognitive Psychology 48(2), 163-206 [3] Riecke, B. E. and von der Heyde, M. (2002): TR 100, MPI for Biological Cybernetics. Available: www.kyb.mpg.de/publication.html?publ=2021 NIMH Grant 2-R01-MH57868. TWK 2007
Notes:
(2007)  Long-Term Memory for Environmental Spaces - the Case of Orientation Specificity    
Abstract:
Notes:
2006
B E Riecke, J Schulte-Pelkum (2006)  Using the perceptually oriented approach to optimize spatial presence & ego-motion simulation   http://www.kyb.mpg.de/publication.html?publ=4186  
Abstract: This chapter is concerned with the perception and simulation of ego-motion in virtual environments, and with how spatial presence and other higher cognitive and top-down factors can contribute to improving the illusion of ego-motion in virtual reality (VR). In the real world, we are used to being able to move around freely and interact with our environment in a natural and effortless manner. However, current VR technology does not yet allow for natural, real-life-like interaction between the user and the virtual environment. One crucial shortcoming of current VR is the insufficient and often unconvincing simulation of ego-motion, which frequently causes disorientation, unease, and motion sickness. We posit that a realistic perception of ego-motion in VR is a fundamental constituent of spatial presence and vice versa. Thus, by improving both spatial presence and ego-motion perception in VR, we aim to eventually enable performance levels in VR similar to those in the real world for basic tasks such as spatial orientation and distance perception, which are currently problematic. Users easily get lost while navigating in VR, and simulated distances appear compressed and underestimated compared to the real world (Witmer & Sadowski, 1998; Chance, Gaunet, Beall, & Loomis, 1998; Creem-Regehr, Willemsen, Gooch, and Thompson, 2003; Knapp, 1999; Thompson, Willemsen, Gooch, Creem-Regehr, Loomis, & Beall, 2004; Stanney, 2002).
Notes:
B E Riecke, H G Nusseck, J Schulte-Pelkum (2006)  Selected Technical and Perceptual Aspects of Virtual Reality Displays   http://3t.kyb.tuebingen.mpg.de/publications/attachments/RieckeNusseckSchulte-Pelkum_06_MPIK-TR_154__Selected%20Technical%20and%20Perceptual%20Aspects%20of%20Virtual%20Reality%20Displays_%5B0%5D.pdf  
Abstract: There is an increasing number of presentation techniques available for producing visual Virtual Reality (VR) scenes. The purpose of this chapter is to give a brief, introductory overview of existing VR presentation techniques and to highlight the advantages and disadvantages of each technique, depending on the specific application. This should enable the reader to design and/or improve their VR visualization setup in terms of both the perceptual aspects and the effectiveness for a given task or goal. In this overview, we relate the different types of presentation techniques to aspects of the human physiology of visual perception which have important implications for VR setups. This will by no means be a complete overview of all physiological aspects; for a detailed overview and introduction, see, e.g., Goldstein (2002). The aim of a visual simulation is to achieve a convincing and perceptually realistic presentation of the simulated environment. Ideally, the user should feel present in the virtual environment and not be able to tell whether it is real or simulated. The human visual system uses several cues to form a percept of the surrounding environment. We will have a closer look at some of these cues in the first section, as they are of crucial importance when looking at simulated scenes. The remaining sections are concerned with possible technical implementations and how these relate to the perceptual aspects and effectiveness for a given task.
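As a small worked example of the display geometry this kind of comparison rests on, the sketch below computes the horizontal field of view subtended by a flat screen from its width and viewing distance via the standard relation FOV = 2·atan(w/2d); the numbers are illustrative and not taken from the chapter.

import math

def flat_screen_fov_deg(width_m, distance_m):
    # Horizontal FOV (degrees) of a flat screen viewed head-on from its midline.
    return math.degrees(2 * math.atan2(width_m / 2, distance_m))

# A 2 m wide screen viewed from 1 m subtends about 90 degrees.
print(flat_screen_fov_deg(2.0, 1.0))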
Notes:
(2006)  Visually induced linear vection is enhanced by small physical accelerations    
Abstract: Wong & Frost (1981) showed that the onset latency of visually induced self-rotation illusions (circular vection) can be reduced by concomitant small physical motions (jerks). Here, we tested (a) whether such facilitation also applies to translations, and (b) whether the strength of the jerk (degree of visuo-vestibular cue conflict) matters. 14 naïve observers rated onset, intensity, and convincingness of forward linear vection induced by photorealistic visual stimuli of a street of houses presented on a projection screen (FOV: 75°×58°). For ⅔ of the trials, brief physical forward accelerations (jerks applied using a Stewart motion platform) accompanied the visual motion onset. Adding jerks enhanced vection significantly: onset latency was reduced by 50%, and convincingness and intensity ratings increased by more than 60%. Effect size was independent of visual acceleration (1.2 and 12 m/s²) and jerk size (about 0.8 and 1.6 m/s² at participants' head for 1 and 3 cm displacement, respectively), and showed no interactions. Thus, quantitative matching between the visual and physical acceleration profiles might not be as critical as often believed, as long as they match qualitatively and are temporally synchronized. These findings could be employed for improving the convincingness and effectiveness of low-cost simulators without the need for expensive, large motion platforms. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
Notes:
2005
(2005)  Can auditory cues influence the visually induced self-motion illusion?    
Abstract: It is well known that a moving visual stimulus covering a large part of the visual field can induce compelling illusions of self-motion ('vection'). Lackner (1977 Aviation Space and Environmental Medicine 48 129 - 131) showed that sound sources rotating around a blindfolded person can also induce vection. In the current study, we investigated visuo-auditory interactions for circular vection by testing whether adding an acoustic landmark that moves together with the visual stimulus enhances vection. Twenty observers viewed a photorealistic scene of a market place that was projected onto a curved projection screen (FOV 54°×40°). In each trial, the visual scene rotated at 30°/s around the Earth's vertical axis. Three conditions were randomised in a within-subjects design: no-sound, mono-sound, and spatialised-sound (moving together with the visual scene) played through headphones using a generic head-related transfer function (HRTF). We used sounds of flowing water, which matched the visual depiction of a fountain that was visible in the market scene. Participants indicated vection onset by deflecting the joystick in the direction of perceived self-motion. The convincingness of the illusion was rated on an 11-point scale (0 - 100%). Only the spatialised sound that moved according to the visual stimulus increased vection significantly: convincingness ratings increased from 60.2% for mono-sound to 69.6% for spatialised-sound (t(19) = -2.84, p = 0.01), and the latency from vection onset until saturated vection decreased from 12.5 s for mono-sound to 11.1 s for spatialised-sound (t(19) = 2.69, p = 0.015). In addition, presence ratings assessed by the IPQ presence questionnaire were slightly but significantly increased. Average vection onset times, however, were not affected by the auditory stimuli. We conclude that spatialised sound that moves concordantly with a matching visual stimulus can enhance vection. The effect size was, however, rather small (15%). In a control experiment, we will investigate whether this might be explained by a ceiling effect, since visually induced vection was already quite strong. These results have important implications for our understanding of multi-modal cue integration during self-motion. [Supported by the EU-funded POEMS Project (Perceptually Oriented Ego-Motion Simulation, IST-2001-39223).]
Notes:
(2005)  Auditory cues can facilitate the visually-induced self-motion illusion (circular vection) in Virtual Reality    
Abstract: INTRODUCTION: There is a long tradition of investigating the self-motion illusion induced by rotating visual stimuli ("circular vection"). Recently, Larsson et al. (2004) showed that up to 50% of participants could also get some vection from rotating sound sources while blindfolded, replicating findings from Lackner (1977). Compared to the compelling visual illusion, though, auditory vection is rather weak and much less convincing. METHODS: Here, we tested whether adding an acoustic landmark to a rotating visual photorealistic stimulus of a natural scene can improve vection. Twenty observers viewed rotating stimuli that were projected onto a curved projection screen (FOV: 54°×40.5°). The visual scene rotated around the earth-vertical axis at 30°/s. Three conditions were randomized in a repeated measures within-subject design: No-sound, mono-sound, and 3D-sound using a generic head-related transfer function (HRTF). RESULTS: Adding mono-sound showed only minimal tendencies towards increased vection and did not affect presence ratings at all, as assessed using the Schubert et al. (2001) presence questionnaire. Vection was, however, slightly but significantly improved by adding a rotating 3D-sound source that moved in accordance with the visual scene: Convincingness ratings increased from 60.2% (mono-sound) to 69.6% (3D-sound) (t(19)=-2.84, p=.01), and vection buildup times decreased from 12.5 s (mono-sound) to 11.1 s (3D-sound) (t(19)=2.69, p=.015). Furthermore, overall presence ratings were increased slightly but significantly. Note that vection onset times were not significantly affected (9.6 s vs. 9.9 s, p>.05). CONCLUSIONS: We conclude that adding spatialized 3D-sound that moves concordantly with a visual self-motion simulation not only increases overall presence, but also improves the self-motion sensation itself. The effect size for the vection measures was, however, rather small (about 15%), which might be explained by a ceiling effect, as visually induced vection was already quite strong without the 3D-sound (9.9 s vection onset time). Merely adding non-spatialized (mono) sound did not show any clear effects. These results have important implications for the understanding of multi-modal cue integration in general and self-motion simulations in Virtual Reality in particular. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
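The statistics reported above are paired-samples t-tests across the 20 participants; the sketch below illustrates the form of such a test on made-up data (the study's actual data are not reproduced here).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mono = rng.normal(60, 15, size=20)           # convincingness with mono sound (%), simulated
spatial = mono + rng.normal(9, 12, size=20)  # same 20 participants with 3D sound, simulated

t, p = stats.ttest_rel(mono, spatial)        # paired test, df = 19
print(f"t(19) = {t:.2f}, p = {p:.3f}")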
Notes:
B E Riecke, J Schulte-Pelkum, F Caniard, H H Bülthoff (2005)  Spatialized auditory cues enhance the visually-induced self-motion illusion (circular vection) in Virtual Reality.   http://www.kyb.mpg.de/publication.html?publ=4187  
Abstract: "Circular vection" refers to the illusion of self-motion induced by rotating visual or auditory stimuli. Visually induced vection can be quite compelling, and the illusion has been investigated extensively for over a century. Rotating auditory cues can also induce vection, but only in about 25-60% of blindfolded participants (Lackner, 1977; Larsson et al., 2004). Furthermore, auditory vection is much weaker and far less compelling than visual vection, which can be indistinguishable from real motion. Here, we investigated whether an additional auditory cue (the sound of a fountain that is also visible in the visual stimulus) can be utilized to enhance visually induced self-motion perception. To the best of our knowledge, this is the first study directly addressing audio-visual contributions to vection. Twenty observers viewed rotating photorealistic pictures of a natural scene projected onto a curved projection screen (FOV: 54°×45°). Three conditions were randomized in a repeated measures within-subject design: No sound, mono sound, and spatialized sound using a generic head-related transfer function (HRTF). Adding mono sound to the visual vection stimulus increased convincingness ratings marginally, but did not affect vection onset time, vection buildup time, vection intensity, or rated presence. Spatializing the fountain sound such that it moved in accordance with the fountain in the visual scene, however, improved vection significantly in terms of convincingness, vection buildup time, and presence ratings. The effect size for the vection measures was, however, rather small (<16%). This might be related to a ceiling effect, as visually induced vection was already quite strong without the spatialized sound (10 s vection onset time). Despite the small effect size, this study shows that HRTF-based auralization using headphones can be employed to improve visual VR simulations both in terms of self-motion perception and overall presence. Note that facilitation was found even though the visual stimulus was of high quality and realism, and known to be quite powerful in inducing vection. These findings have important implications both for the understanding of cross-modal cue integration and for optimizing VR simulations.
Notes:
2004
(2004)  Top-down influence on visually induced self-motion perception (vection)    
Abstract: INTRODUCTION: The prevailing notion of visually induced illusory self-motion perception (vection) is that the illusion arises from bottom-up perceptual processes. Therefore, past research has focused primarily on examining how physical parameters of the visual stimulus (contrast, number of vertical edges, etc.) affect vection. In this study, we examined the influence of a top-down process: Spatial presence in the simulated scene. Spatial presence was manipulated by presenting either a photorealistic image of the Tübingen market place or modified versions of the same stimulus. Modified stimuli were created by either slicing the original image horizontally and randomly reassembling it or by scrambling image parts in a mosaic-like manner. We expected scene modification to decrease spatial presence and thus impair vection. METHODS: Ten naive observers viewed stimuli projected onto a curved projection screen subtending a field of view (FOV) of 54°×40.5°. We measured vection onset times and had participants rate the convincingness of the self-motion illusion for each trial using a 0-100% scale. In addition, we assessed spatial presence using standard presence questionnaires. RESULTS: As expected, scene modification led to both reduced presence scores and impaired vection: Modified stimuli yielded longer vection onset times and lower convincingness ratings than the intact market scene (t(9)=-2.36, p=.043 and t(9)=3.39, p=.008, resp.). It should be pointed out that the scrambled conditions had additional high-contrast edges (compared to the sliced or intact stimulus). Previous research has shown that adding vertical high-contrast edges facilitates vection. Therefore, one would predict that the scrambled stimuli should improve vection. The results show, however, a tendency towards reduced vection for the scrambled vs. sliced or intact stimuli. This suggests that the low-level information (more contrast edges in the scrambled stimulus) was dominated by high-level information (a consistent reference frame for the intact market scene). Interestingly, the number of slices or mosaics (2, 8, or 32 per 45° FOV) had no clear influence on either perceived vection or presence; two slices were already enough to impair scene presence. CONCLUSIONS: These results suggest that there might be a direct relation between spatial presence and self-motion perception. We posit that stimuli depicting naturalistic scenes provide observers with a convincing reference frame for the simulated environment which enables them to feel "spatially present" in that scene. This, in turn, facilitates the self-motion illusion. This work not only can shed some light on ego-motion perception, but also has important implications for motion simulator design and application. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
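The slicing and mosaic-like scrambling manipulations can be illustrated with a short NumPy sketch; this is a plausible reconstruction under simplifying assumptions (image held as an array, equal-sized tiles), not the original stimulus-generation code.

import random
import numpy as np

def slice_shuffle(img, n_slices, seed=0):
    # Cut the image into horizontal bands and reassemble them in random order.
    bands = np.array_split(img, n_slices, axis=0)
    random.Random(seed).shuffle(bands)
    return np.concatenate(bands, axis=0)

def mosaic_shuffle(img, tile_h, tile_w, seed=0):
    # Cut the image into equal tiles and reassemble them in random order;
    # height and width must be multiples of the tile size.
    h, w = img.shape[:2]
    tiles = [img[r:r + tile_h, c:c + tile_w]
             for r in range(0, h, tile_h) for c in range(0, w, tile_w)]
    random.Random(seed).shuffle(tiles)
    per_row = w // tile_w
    rows = [np.concatenate(tiles[i:i + per_row], axis=1)
            for i in range(0, len(tiles), per_row)]
    return np.concatenate(rows, axis=0)

# e.g. scrambled = mosaic_shuffle(image_array, 64, 64)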
Notes:
(2004)  Vibrational cues enhance believability of ego-motion simulation    
Abstract: We investigated whether the visually induced perception of illusory self-motion (vection) can be influenced by vibrational cues. Circular vection was induced in 22 observers who viewed a naturalistic scene displayed on a projection screen (FOV 54°×40.5°). Two factors were varied: the velocity profile of the visual stimulus (3 or 12 s to reach 30°/s), and the presence or absence of vibrations. Vibrations were generated by 4 subwoofers mounted below the seat and floor panel. Participants used a joystick to indicate vection onset, and the convincingness of the illusion was rated by magnitude estimation. Data analysis showed that fast accelerations resulted in shorter vection onset times. Convincingness ratings were affected significantly by the vibrations: with vibrations, vection was rated to be more convincing. Vection onset latency, however, was not influenced by vibrations. Interestingly, 3 participants stated that vibrations reduced vection because the vibration amplitudes were not matched to the visual velocity profiles and thus became unrealistic. We conclude that vibrations can influence the convincingness of vection, but that cognition has a moderating effect: if conflicts between visual and vibrational cues are registered, vection seems to be reduced by the cognitive conflict. These results are relevant for the design of ego-motion simulators. SUPPORT: EU grant POEMS-IST-2001-39223 (see www.poems-project.info) and Max Planck Society.
Notes:
2003
(2003)  Circular vection is facilitated by a consistent photorealistic scene    
Abstract: It is well known that large visual stimuli that move in a uniform manner can induce illusory sensations of self-motion in stationary observers. This perceptual phenomenon is commonly referred to as vection. The prevailing notion of vection is that the illusion arises from bottom-up perceptual processes and that it mainly depends on physical parameters of the visual stimulus (e.g., contrast, spatial frequency, etc.). In our study, we investigated whether vection can also be influenced by top-down processes: We tested whether a photorealistic image of a real scene that contains consistent spatial information about pictorial depth and scene layout (e.g., linear perspective, relative size, texture gradients, etc.) can induce vection more easily than a comparable stimulus with the same image statistics where information about relative depth and scene layout has been removed. This was done by randomly shuffling image parts in a mosaic-like manner. The underlying idea is that the consistent photorealistic scene might facilitate vection by providing the observers with a convincing mental reference frame for the simulated environment so that they can feel "spatially present" in that scene. That is, the better observers accept this virtual scene instead of their physical surrounding - i.e., the simulation setup - as the primary reference frame, the less conflict between the two competing reference frames should arise, and therefore spatial presence and ego-motion perception in the virtual scene should be enhanced. In a psychophysical experiment with 18 observers, we measured vection onset times and convincingness ratings of sensed ego-rotations for both visual stimuli. Our results confirm the hypothesis that cognitive top-down processes can influence vection: On average, we found 50% shorter vection onset times and 30% higher convincingness ratings of vection for the consistent scene. This finding suggests that spatial presence and ego-motion perception are closely related to one another. The results are relevant both for the theory of ego-motion perception and for ego-motion simulation applications in Virtual Reality. See also Schulte-Pelkum:08PhD, exp. 1.
Notes:
(2003)  Influence of display parameters on perceiving visually simulated ego-rotations - a systematic investigation    
Abstract: In Virtual Reality, subjects typically misperceive visually simulated turning angles. The literature on this topic reports inconclusive data. This may be partly due to the different display devices and fields of view (FOV) used in the studies. Our study aims to disentangle the specific influence of display devices, FOV, and screen curvature on the perceived turning angle for simulated ego-rotations. In Experiment 1, display devices (HMD vs. curved projection screen) and FOV were manipulated. Subjects were seated in front of the screen and saw a star field of limited-lifetime dots on a dark background. They were instructed to perform simulated ego-rotations between 45° and 225°, and they used a joystick to control the simulated turns. In a within-subject design, performance was compared between a projection screen (FOV 86°×64°), an HMD (40°×30°), and blinders that reduced the FOV on the screen to 40°×30°. Generally, all target angles were undershot. We found gain factors of 0.74 for the projection screen, 0.71 for the blinders, and 0.56 for the HMD. The reduction of the FOV on the screen had no significant effect (p=0.407), whereas the difference between the HMD and blinders with the same FOV was highly significant (p<0.01). In Experiment 2, screen curvature was manipulated. Subjects performed the same task as in Experiment 1, either on a flat projection screen or on a curved screen (radius 2 m, FOV 86°×64° for both). Screen curvature had a significant effect (p<0.001): While subjects turned too far on the flat screen (gain 1.12), they didn't turn far enough on the curved screen (gain 0.84). Subjects' verbal reports indicate that rotational optic flow on the flat screen was misperceived as translational flow. We conclude the following: First, display devices seem to be more critical than FOV for ego-rotations, the projection screen being superior to the HMD. Second, screen curvature is an important parameter to be considered for ego-motion simulation.
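A single gain factor like the 0.74 or 0.56 above can be obtained as the slope of a regression through the origin of executed versus target turning angles; the sketch below shows that computation on made-up data.

import numpy as np

target = np.array([45.0, 90.0, 135.0, 180.0, 225.0])   # instructed turns (deg)
executed = np.array([38.0, 70.0, 98.0, 130.0, 160.0])  # responses (illustrative)

gain = (target @ executed) / (target @ target)         # least-squares slope through 0
print(f"gain = {gain:.2f}")  # values below 1 indicate undershooting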
Notes:
(2003)  Reflex-like spatial updating can be adapted without any sensory conflict    
Abstract: Reflex-like processes are normally recalibrated with a concurrent sensory conflict. Here, we investigated reflex-like (obligatory) spatial updating (online updating of our egocentric spatial reference frame during self-motion, which is largely beyond conscious control). Our objective was to adapt vestibularly induced reflex-like spatial updating using a purely cognitive interpretation of the angle turned, that is, without any concurrent sensory conflict, just by presenting an image with a different orientation after physical turns in complete darkness. The experiments consisted of an identical pre-test and post-test, and an adaptation phase in between. In all three phases, spatial updating was quantified by behavioural measurements of the new post-rotation orientations (rapid pointing to invisible landmarks in a previously learned scene). In the adaptation phase, visual feedback was additionally provided after the turn and pointing task (display of an orientation that differed from the actual turning angle by a factor of 2). The results show that the natural, unadapted gain of perceived versus real turn angle in the pre-test was increased by nearly a factor of 2 in the adaptation phase and remained at this level during the post-test. We emphasise that at no point was simultaneous visual and vestibular stimulation provided. We conclude that vestibularly driven reflex-like spatial updating can be adapted without any concurrent sensory conflict, just by a pure cognitive conflict. That is, the cognitive discrepancy between the vestibularly updated reference frame (which served for the pointing) and the subsequently received static visual feedback was able to recalibrate the interpretation of self-motion. [Supported by Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550).]
Notes:
(2003)  Qualitative modeling of spatial orientation processes using a logical network of necessary and sufficient conditions    
Abstract: INTRODUCTION: Findings from spatial orientation and navigation experiments are typically rather diverse and highly task-dependent. In this paper, we attempted to model the underlying spatial orientation processes by analyzing their logical and functional relations. This eventually led to a network of necessary prerequisites and sufficient conditions for spatial orientation, spatial presence, and spatial updating. LOGICAL MODELING: How does logical modeling work? For example, it is evident that ego-motion perception cannot occur without some kind of motion perception. That is, intact ego-motion perception seems to logically depend on intact motion perception (no motion perception ⇒ no ego-motion perception). Conversely, if we observe intact ego-motion perception, we can conclude that motion perception must also be intact, which can be represented as ego-motion perception ⇒ motion perception (since ¬B ⇒ ¬A is logically equivalent to A ⇒ B). OVERVIEW OF THE MODEL: Spatial behavior and spatial perception are the main components of the perception-action cycle and constitute the top and bottom part of the framework, respectively (see Figure 1). Meaningful spatial behavior is essentially based and logically dependent on spatial perception, and is mediated by several possible spatial orientation processes. At the bottom part of the framework, we distinguish mainly between two branches, a relative motion branch on the left side and an absolute location branch on the right side (see Figure 1). The left "relative motion branch" is based on path integration of perceived motions. It is responsible for generating the perception of ego-motion (e.g., vection) and the continuous updating of the self-location in space. The right "absolute location branch" constitutes an alternative approach to finding one's way around, by using landmarks as reference points. Object/landmark memory is hereby involved in the recognition of salient features in the environment. In addition to the left and right branch, we propose a third pathway that is responsible for robust and automated spatial orientation. That is, if we want to know where we are without having to think much about it, we need some process that allows for quick & intuitive spatial orientation and prevents us from getting lost, even when we do not constantly pay attention. To achieve this, some automated process (called "automatic spatial updating" or just "spatial updating") needs to always update our egocentric mental reference frame of the surround during ego-motions, such that it stays in close alignment with the physical surround. We distinguished between four qualitatively different aspects or properties of spatial orientation processes: adaptable, quick & intuitive, accurate & precise, and abstract strategies. These different aspects of spatial behavior seem to depend logically on different underlying spatial orientation processes and data structures. We categorized those processes into cognition (abstract mental reasoning), piloting (landmark-based navigation), continuous spatial updating, and instantaneous spatial updating. The complete framework is presented in Figure 2 for reference. Instead of trying to explain the whole framework, we would instead like to focus here on the two spatial updating processes that are responsible for robust and effortless spatial orientation. CONTINUOUS VS. 
INSTANTANEOUS SPATIAL UPDATING: "Continuous spatial updating" refers to the largely automated and reflex-like process of updating our mental egocentric reference frame during self-motions based on continuous motion cues. Continuous spatial updating is based on the integration of the perceived ego-motion, whereas instantaneous spatial updating is based on object and scene recognition (see Figure 2). Instantaneous spatial updating occurs, for example, in the moment of waking up after having fallen asleep on a bus: As soon as we look out of the window and recognize the outside scene, we are automatically re-anchored to that reference frame. That is, we immediately know where we are without any conscious effort and without being able to suppress that re-anchoring (instantaneous spatial updating) of our egocentric reference frame. Embedding these two spatial updating processes into a framework of logical connections allows us to clearly disambiguate between them: Either of these processes may enable (i.e., is a logical prerequisite for) quick & intuitive spatial orientation (see Figure 2). Only instantaneous spatial updating, however, allows for accurate & precise spatial orientation, as it is based on the localization and identification of landmarks embedded into a consistent scene. This has specific implications that can be experimentally tested and controlled. As a first test of the model, we performed a series of spatial updating experiments in different virtual environments. For example, we selectively disabled either the relative motion branch or the absolute location branch by either removing all useful landmarks or by eliminating all motion cues in a "teleport" condition, respectively. In the latter teleport experiment, instantaneous spatial updating was able to compensate for the missing motion information and resulting lack of continuous spatial updating without any significant decrease in performance (Riecke, von der Heyde, & Bülthoff, 2002). This confirmed our distinction between continuous and instantaneous spatial updating as two separate processes that can serve as a mutual backup system. CONCLUSIONS: This framework is intended as a working hypothesis that can assist in analyzing spatial situations and experimental results. It provides a coherent representation for the large number of experimental paradigms and results and can thus allow for a unifying big picture that might help to structure and clarify our reasoning and discussions. In particular, it proved helpful in understanding the implications if certain processes related to spatial orientation are impaired or defunct (see, e.g., Riecke, 2003, part IV). Furthermore, the human factors issues involved in all Virtual Reality applications can be tackled by analyzing the relevant simulation and display parameters necessary for quick and effortless spatial orientation: Most importantly, any application that does not enable automatic spatial updating should decrease quick and effortless spatial orientation performance and hence unnecessarily increase cognitive load. Only future research, however, will enable us to rigorously test the proposed logical framework and refine or extend it where appropriate. SUPPORT: Max Planck Society, European Community (IST-2001-39223, FET Proactive Initiative, project "POEMS" (Perceptually Oriented Ego-Motion Simulation, www.poems-project.info), and Deutsche Forschungsgemeinschaft (SFB 550, B2). REFERENCES: Riecke, B. E. (2003). How far can we get with just visual information? 
Path integration and spatial updating studies in Virtual Reality. Ph.D. thesis, Eberhard-Karls-Universität Tübingen, Fakultät für Physik. Available: www.kyb.mpg.de/publication.html?publ=2088. Riecke, B. E., von der Heyde, M., & Bülthoff, H. H. (2002). Teleporting works - Spatial updating experiments in Virtual Tübingen. In OPAM, Talk presented at the 10th annual meeting of OPAM, Kansas City, United States. Available: www.kyb.mpg.de/publication.html?publ=1952.
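The contrapositive reasoning in the LOGICAL MODELING paragraph is mechanical enough to automate; the following sketch (an illustration, not the framework's implementation) propagates a small chain of necessary-condition links in both directions.

# Each process maps to its necessary prerequisites ("A requires B").
requires = {
    "ego-motion perception": {"motion perception"},
    "continuous spatial updating": {"ego-motion perception"},
}

def propagate(observations):
    # Close observed truth values under: A true => each prerequisite B true;
    # B false => A false (the contrapositive).
    facts = dict(observations)
    changed = True
    while changed:
        changed = False
        for a, needs in requires.items():
            for b in needs:
                if facts.get(a) is True and facts.get(b) is not True:
                    facts[b] = True
                    changed = True
                if facts.get(b) is False and facts.get(a) is not False:
                    facts[a] = False
                    changed = True
    return facts

print(propagate({"continuous spatial updating": True}))   # infers both prerequisites
print(propagate({"motion perception": False}))            # falsifies the whole chain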
Notes:
2002
B E Riecke, M von der Heyde (2002)  Qualitative Modeling of Spatial Orientation Processes using Logical Propositions : Interconnecting Spatial Presence, Spatial Updating, Piloting, and Spatial Cognition    
Abstract: In this paper, we introduce first steps towards a logically consistent framework describing and relating items concerning the phenomena of spatial orientation processes, namely spatial presence, spatial updating, piloting, and spatial cognition. Spatial presence can for this purpose be seen as the consistent feeling of being in a specific spatial context, and intuitively knowing where one is with respect to the immediate surround. The core idea of the framework is to model spatial orientation-related issues by analyzing their logical and functional relations. This is done by determining necessary and/or sufficient conditions between related items like spatial presence, spatial orientation, and spatial updating. This eventually leads to a set of necessary prerequisites and sufficient conditions for those items. More specifically, the logical structure of our framework suggests novel ways of quantifying spatial presence and spatial updating. Furthermore, it allows us to disambiguate between two complementary types of automatic spatial updating: on the one hand, the well-known continuous spatial updating induced by continuous movement information; on the other hand, a novel type of discontinuous, teleport-like "instantaneous spatial updating" that allows participants to quickly adopt the reference frame of a new location without any explicit motion cues. ACKNOWLEDGEMENTS: This research was funded by the Max-Planck Society and the Deutsche Forschungsgemeinschaft (SFB 550 Erkennen, Lokalisieren, Handeln: neurokognitive Mechanismen und ihre Flexibilität).
Notes:
(2002)  Spatial updating experiments in Virtual Reality : What makes the world turn around in our head?    
Abstract: During ego-turns, our mental spatial representation of the surround is automatically rotated to stay in alignment with the physical surround. We know that this "spatial updating" process is effortless, automatic, and typically obligatory (i.e., cognitively impenetrable and hard-to-suppress). We were interested in two main questions here: 1) Can visual cues be sufficient to initiate obligatory spatial updating, in contrast to the prevailing opinion that vestibular cues are required? 2) How do vestibular cues, field of view (FOV), display method, turn amplitude and velocity influence spatial updating performance? STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented via a curved projection screen (84x63° FOV or restricted to 40x30°) or a head-mounted display (HMD, 40x30°). A Stewart motion platform was used for vestibular stimulation. TASK: Participants were rotated successively to different orientations and asked to point "as quickly and accurately as possible" to four targets randomly selected from a set of 22 salient landmarks previously learned. Targets were announced consecutively via headphones and selected to be outside of the visible range (i.e., between 42° and 105° left or right from straight ahead). Performance was quantified as absolute pointing error, pointing variability, and response time. In general, participants had no problem mentally updating their orientation in space (UPDATE condition) and spatial updating performance was the same as for rotations where they were immediately returned to the previous orientation (CONTROL condition). Spatial updating was always "obligatory" in the sense that it was significantly more difficult to IGNORE ego-turns (i.e., "point as if not having turned"). We observed this data pattern irrespective of turning velocity, head-mounted display (HMD) or projection screen usage, and amount of vestibular cues accompanying the visual turn. Increasing the visual field of view (from 40x30° FOV to 84x63°) increased UPDATE performance especially for larger turns, i.e., potentially more difficult tasks. IGNORE performance, however, was unaltered. Large turns (>80°) were almost as easy to UPDATE as small turns, but much harder to IGNORE (p<0.05). This suggests that larger turns result in a more obligatory (hard-to-suppress) spatial updating of the world inside our head. We conclude that photo-realistic visual stimuli from well-known environments including an abundance of salient landmarks are sufficient to trigger spatial updating and hence turn the world inside our head, irrespective of vestibular cues. This result conflicts with the prevailing opinion that vestibular cues are required for proper updating of ego-turns. Several factors might explain this difference, primarily the immersiveness of our visualization setup and the abundance of natural landmarks in a well-known environment. SUPPORT: Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550)
Notes:
(2002)  Spatial updating in virtual environments : What are vestibular cues good for   http://journalofvision.org/2/7/421/  
Abstract: When we turn ourselves, our sensory inputs somehow turn the "world inside our head" accordingly so as to stay in alignment with the outside world. This "spatial updating" occurs automatically, without conscious effort, and is normally "obligatory" (i.e., cognitively impenetrable and hard to suppress). We pursued two main questions here: 1) Which cues are sufficient to initiate obligatory spatial updating? 2) Under what circumstances do vestibular cues become important? STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented via a curved projection screen (84x63° FOV). For vestibular stimulation, subjects were seated on a Stewart motion platform. TASK: Subjects were rotated consecutively to random orientations and asked to point "as accurately and quickly as possible" to 4 out of 22 previously-learned targets. Targets were announced consecutively via headphones and chosen to be outside of the current FOV. Photo-realistic visual stimuli from a well-known environment including an abundance of salient landmarks allowed accurate spatial updating (mean absolute pointing error, pointing variability, and response time were 16.5°, 17.0°, and 1.19s, respectively). Moreover, those stimuli triggered spatial updating even when participants were asked to ignore turn cues and "point as if not having turned" (32.9°, 27.5°, 1.67s, respectively). Removing vestibular turn cues did not alter performance significantly. This result conflicts with the prevailing opinion that vestibular cues are required for proper updating of ego-turns. We did find that spatial updating benefitted from vestibular cues when visual turn information was degraded to a mere optic flow pattern. Under all optic flow conditions, however, spatial updating was impaired and no longer obligatory. We conclude that "good" visual landmarks can initiate obligatory spatial updating and overcome the visuo-vestibular cue conflict. SUPPORT: Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550)
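Because pointing responses are angles, error and variability measures like those above are usually computed with circular statistics; the sketch below shows one common formulation (the study's exact definitions may differ).

import numpy as np

def pointing_stats(errors_deg):
    # Mean absolute pointing error and circular SD of signed errors,
    # wrapped so that e.g. +170 and -170 degrees count as close.
    z = np.exp(1j * np.radians(np.asarray(errors_deg, dtype=float)))
    abs_err = np.degrees(np.mean(np.abs(np.angle(z))))
    R = np.abs(np.mean(z))                         # mean resultant length
    circ_sd = np.degrees(np.sqrt(-2 * np.log(R)))  # circular standard deviation
    return abs_err, circ_sd

print(pointing_stats([10, -25, 15, -5, 30]))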
Notes:
(2002)  Spatial updating experiments in Virtual Reality : What makes the world turn around in our head   http://3t.kyb.tuebingen.mpg.de/de/publication.html?publ=632  
Abstract: During ego-turns, our mental spatial representation of the surround is automatically rotated to stay in alignment with the physical surround. We know that this "spatial updating" process is effortless, automatic, and typically obligatory (i.e., cognitively impenetrable and hard-to-suppress). We were interested in two main questions here: 1) Can visual cues be sufficient to initiate obligatory spatial updating, in contrast to the prevailing opinion that vestibular cues are required? 2) How do vestibular cues, field of view (FOV), display method, turn amplitude and velocity influence spatial updating performance? STIMULI: A photo-realistic virtual replica of the Tübingen market place was presented via a curved projection screen (84x63° FOV or restricted to 40x30°) or a head-mounted display (HMD, 40x30°). A Stewart motion platform was used for vestibular stimulation. TASK: Participants were rotated successively to different orientations and asked to point "as quickly and accurately as possible" to four targets randomly selected from a set of 22 salient landmarks previously learned. Targets were announced consecutively via headphones and selected to be outside of the visible range (i.e., between 42° and 105° left or right from straight ahead). Performance was quantified as absolute pointing error, pointing variability, and response time. In general, participants had no problem mentally updating their orientation in space (UPDATE condition) and spatial updating performance was the same as for rotations where they were immediately returned to the previous orientation (CONTROL condition). Spatial updating was always "obligatory" in the sense that it was significantly more difficult to IGNORE ego-turns (i.e., "point as if not having turned"). We observed this data pattern irrespective of turning velocity, head mounted display (HMD) or projection screen usage, and amount of vestibular cues accompanying the visual turn. Increasing the visual field of view (from 40x30° FOV to 84x63°) increased UPDATE performance especially for larger turns, i.e., potentially more difficult tasks. IGNORE performance, however, was unaltered. Large turns (>80°) were almost as easy to UPDATE as small turns, but much harder to IGNORE (p<0.05). This suggests that larger turns result in a more obligatory (hard-to-suppress) spatial updating of the world inside our head. We conclude that photo-realistic visual stimuli from well-known environments including an abundance of salient landmarks are sufficient to trigger spatial updating and hence turn the world inside our head, irrespective of vestibular cues. This result conflicts with the prevailing opinion that vestibular cues are required for proper updating of ego-turns. Several factors might explain this difference, primarily the immersiveness of our visualization setup and the abundance of natural landmarks in a well-known environment. SUPPORT: Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550)
Notes:
(2002)  Contribution and interaction of visual and vestibular cues for spatial updating in real and virtual environments    
Abstract: In a series of experiments, we established a speeded pointing paradigm to investigate the influence and interaction of visual and vestibular stimulus parameters for spatial updating in real and virtual environments. STIMULI: Participants saw either the real surround or a photorealistic virtual replica presented via HMD or projection screen. A Stewart motion platform was used for vestibular stimulation. TASK: After simulated or real ego-turns, participants were asked to quickly point towards different previously-learned target objects. Targets were announced consecutively via headphones and chosen to be outside of the current field of view. Performance in real and virtual environments was comparable. Photorealistic visual stimuli from well-known environments including an abundance of salient landmarks proved sufficient to initiate obligatory spatial updating and hence turn the world inside our head, even against our conscious will and without corresponding vestibular cues. Spatial updating benefitted from vestibular cues only when visual turn information was reduced to optic flow information only. There, however, spatial updating was impaired and no longer obligatory. Apart from the well-known smooth spatial updating induced by continuous movement information, we also found a discontinuous, jump-like spatial updating that allowed participants to quickly adopt a new orientation without any explicit motion cues. SUPPORT: Max Planck Society and Deutsche Forschungsgemeinschaft (SFB 550)
Notes:
2001
M von der Heyde, B E Riecke (2001)  How to cheat in motion simulation - comparing the engineering and fun ride approach to motion cueing    
Abstract: The goal of this working paper is to discuss different motion cueing approaches. They stem either from the engineering field of building flight and driving simulators, or from the modern Virtual Reality fun rides presented in amusement parks all over the world. The principles of motion simulation are summarized together with the technical implementations of vestibular stimulation with limited degrees of freedom. A psychophysical experiment in Virtual Reality is proposed to compare different motion simulation approaches and quantify the results using high-level psychophysical methods as well as traditional evaluation methods. ACKNOWLEDGEMENTS: This research was funded by the Max-Planck Society and the Deutsche Forschungsgemeinschaft (SFB 550 Erkennen, Lokalisieren, Handeln: neurokognitive Mechanismen und ihre Flexibilität).
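The "engineering" approach referred to above classically relies on washout filtering: high-pass filtering the vehicle's acceleration so the platform renders motion onsets and then drifts back within its limited travel. The sketch below shows the generic textbook scheme with arbitrary parameters; it is not the paper's design.

import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                                 # control rate (Hz)
t = np.arange(0, 10, 1 / fs)
vehicle_acc = np.where(t < 2.0, 2.0, 0.0)  # 2 m/s^2 onset, then coasting

# Second-order high-pass "washout" at 0.3 Hz: keep the onset transient.
b, a = butter(2, 0.3, btype="highpass", fs=fs)
platform_acc = lfilter(b, a, vehicle_acc)

dt = 1 / fs
excursion = np.cumsum(np.cumsum(platform_acc)) * dt ** 2  # double integral
print(f"peak platform excursion ~ {np.abs(excursion).max():.2f} m")
# The simulated vehicle covers 36 m over the same 10 s, while the washed-out
# platform motion stays within a bounded travel range.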
Notes:
2000
B E Riecke, H A H C van Veen, H H Bülthoff (2000)  Visual Homing is possible without Landmarks : A Path Integration Study in Virtual Reality    
Abstract: The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual reality where only visual cues were provided. Subjects had to execute turns, reproduce distances or perform triangle completion tasks: After following two prescribed segments of a triangle, subjects had to return directly to the unmarked starting point. Subjects were seated in the center of a half-cylindrical 180° projection screen and controlled the visually simulated ego-motion with mouse buttons. Most experiments were performed in a simulated 3D field of blobs providing a convincing feeling of self-motion (vection) but no landmarks, thus restricting navigation strategies to path integration based on optic flow. Other experimental conditions included salient landmarks or landmarks that were only temporarily available. Optic flow information alone proved to be sufficient for untrained subjects to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards mean responses. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance, especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. Compared to similar experiments using virtual environments (Péruch et al., 1997, Bud, 2000) or blind locomotion (Loomis et al., 1993), we did not find the typically observed distance undershoot and strong regression towards mean turn responses. Using a virtual reality setup with a half-cylindrical 180° projection screen allowed us to demonstrate that visual path integration without any vestibular or kinesthetic cues is sufficient for elementary navigation tasks like rotations, translations, and homing via triangle completion.
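The correct response in a triangle completion trial follows from plane geometry; the sketch below derives the required homing turn and distance from the two outbound legs and the intervening turn (a derivation for illustration, not code from the study).

import numpy as np

def homing_response(s1, s2, turn_deg):
    # Start at the origin facing +y, walk s1, turn by turn_deg
    # (positive = clockwise in this convention), then walk s2.
    # Returns the turn (same convention) and distance that lead back home.
    a = np.radians(turn_deg)
    d = np.array([np.sin(a), np.cos(a)])   # heading after the turn
    pos = np.array([0.0, s1]) + s2 * d     # position after the second leg
    home = -pos
    turn_home = np.degrees(np.arctan2(d[1] * home[0] - d[0] * home[1], d @ home))
    return turn_home, float(np.linalg.norm(home))

# Equilateral check: 10 m legs with a 120-degree turn require turning
# another 120 degrees and walking 10 m home.
print(homing_response(10, 10, 120))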
Notes:
(2000)  Reicht optischer Fluss wirklich nicht zum Heimfinden?    
Abstract: RESEARCH QUESTION: The literature suggests that even for simple orientation and homing tasks the information provided by optic flow is insufficient, and that vestibular and kinesthetic cues are required. To test this claim, we conducted triangle completion experiments in a virtual environment that offered optic flow as the only source of information. METHODS: The simulated self-motion was presented visually on a half-cylindrical 180° projection screen (7 m diameter) and controlled via mouse buttons. To ensure that participants could use only path integration and no landmark information for navigation, the simulated world consisted merely of a 3D cloud of dots. It contained no helpful reference points (landmarks), but conveyed a convincing sensation of self-motion (vection). In Exp. 1, participants were asked to execute turns of specified angles and to reproduce distances, with velocities randomized. Exps. 2 & 3 were triangle completion experiments: participants followed two legs of a triangle and then had to return on their own to the unmarked starting point. Exp. 2 used five different isosceles triangles for left and right turns, whereas Exp. 3 used 60 different triangles with randomized leg lengths and angles. RESULTS: Independent of movement velocity, untrained participants in Exp. 1 executed turns and distances with only minor systematic errors. In Exps. 2 & 3 we generally found a linear correlation between executed and correct values for the measures turn angle and distance traveled. For further analysis we therefore used, for both measures, the slopes of the regression lines ("compression rate") and the deviations from the correct value (signed error). Exp. 2 showed no significant errors (i.e., general over- or underestimation) for turns or distances. Distance responses were strongly shifted towards the mean (compression rate 0.58), but turn responses hardly at all (0.91). For the randomized triangle geometries of Exp. 3, this tendency towards mean responses decreased for distances (0.86) but increased for turns (0.77). CONCLUSIONS: Similar triangle completion experiments restricted to visual information (Virtual Reality: Péruch et al., Perc. '97; Duchon et al., Psychonomics '99) and to proprioceptive cues (blind walking: Loomis et al., JEP '93) showed a strong tendency towards mean turn angles (compression rate < 0.5), which we did not find. The tendency not to turn far enough in pure turning tasks in visual virtual environments (Péruch '97; Bakker, Presence '99) was likewise not observed (Exp. 1). In our experiments, path integration based on optic flow proved sufficient and reliable for orientation and homing tasks; vestibular and kinesthetic information was not required.
Notes:
(2000)  Humans can separately perceive distance, velocity, and acceleration from vestibular stimulation    
Abstract: Purpose: The vestibular system is known to measure changes in linear and angular position in terms of acceleration. Can humans judge these vestibular signals as acceleration and integrate them to reliably derive distance and velocity estimates? Methods: Twelve blindfolded naive volunteers participated in a psychophysical experiment using a Stewart platform motion simulator. The vestibular stimuli consisted of Gaussian-shaped translatory or rotatory velocity profiles with a duration of less than 4 seconds. The full two-factorial design covered 6 peak accelerations above threshold and 5 distances with 4 repetitions. In three separate blocks, the subjects were asked to verbally judge, on a scale from 1 to 100, the distance traveled or the angle turned, maximum velocity, and maximum acceleration. Results: Subjects judged the distance, velocity and acceleration quite consistently, but with systematic errors. The distance estimates showed a linear scaling towards the mean response and were independent of accelerations. The correlation of perceived and real velocity was linear and showed no systematic influence of distances or accelerations. High accelerations were drastically underestimated and accelerations close to threshold were overestimated, showing a logarithmic dependency. Therefore, the judged acceleration was close to the velocity judgment. There was no significant difference between translational and angular movements. Conclusions: Despite the fact that the vestibular system measures acceleration only, one can derive peak velocity and traveled distance from it. Interestingly, even though maximum acceleration was perceived non-linearly, velocity and distance judgments were linear.
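The integration the abstract asks about can be made concrete numerically: given an acceleration signal (here derived from a Gaussian velocity profile like the stimuli, with illustrative parameters), integrating once recovers peak velocity and integrating twice recovers traveled distance.

import numpy as np

dt = 0.001
t = np.arange(0.0, 4.0, dt)                           # ~4 s stimulus
v_true = np.exp(-((t - 2.0) ** 2) / (2 * 0.5 ** 2))   # Gaussian velocity (m/s)
acc = np.gradient(v_true, dt)                         # what the otoliths encode

v_rec = np.cumsum(acc) * dt    # integrate once -> velocity (starts from ~0 m/s)
d_rec = np.sum(v_rec) * dt     # integrate twice -> distance

print(f"recovered peak velocity {v_rec.max():.3f} m/s (true {v_true.max():.3f}),")
print(f"recovered distance {d_rec:.3f} m (true {np.sum(v_true) * dt:.3f})")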
Notes:
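The double role of acceleration in the abstract above can be illustrated numerically: for a Gaussian-shaped velocity profile like the stimuli described, the vestibular system effectively senses the profile's derivative, and integrating that signal recovers peak velocity and traveled distance. A minimal sketch, with all parameter values invented for illustration:

```python
import numpy as np

# Hypothetical Gaussian-shaped velocity profile (parameters invented):
# ~4 s duration, peak velocity 0.1 m/s at t = 2 s.
t = np.linspace(0.0, 4.0, 4001)   # time axis in s
dt = t[1] - t[0]
v = 0.1 * np.exp(-((t - 2.0) ** 2) / (2 * 0.5 ** 2))   # velocity in m/s

# The vestibular organs sense acceleration, i.e. the derivative of velocity.
a = np.gradient(v, dt)

# Integrating the sensed acceleration once recovers the velocity profile
# (v(0) is ~0 here), and integrating again yields the distance traveled.
v_recovered = np.cumsum(a) * dt
distance = np.trapz(v_recovered, dx=dt)

print(f"peak velocity: {v_recovered.max():.3f} m/s")
print(f"distance traveled: {distance:.3f} m")
```

This only captures the physics of the stimulus; the abstract's point is that human judgments follow these integrals roughly linearly even though the acceleration percept itself is compressed logarithmically.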
(2000)  Humans can extract distance and velocity from vestibular perceived acceleration    
Abstract: Purpose: The vestibular system is known to measure accelerations for linear forward movements. Can humans integrate these vestibular signals to reliably derive distance and velocity estimates? Methods: Blindfolded naive volunteers participated in a psychophysical experiment using a Stewart-platform motion simulator. The vestibular stimuli consisted of Gaussian-shaped translatory velocity profiles with a duration of less than 4 seconds. The full two-factorial design covered 6 peak accelerations above threshold and 5 distances up to 25 cm with 4 repetitions. In three separate blocks, the subjects were asked to verbally judge, on a scale from 1 to 100, the traveled distance, maximum velocity and maximum acceleration. Results: Subjects perceived distance, velocity and acceleration quite consistently, but with systematic errors. The distance estimates showed a linear scaling towards the mean and were independent of acceleration. The correlation of perceived and real velocity was linear and showed no systematic influence of distance or acceleration. High accelerations were drastically underestimated and accelerations close to threshold were overestimated, showing a logarithmic dependency. Conclusions: Despite the fact that the vestibular system measures acceleration only, one can derive peak velocity and traveled distance from it. Interestingly, even though maximum acceleration was perceived non-linearly, velocity and distance were judged consistently linearly.
Notes:
1999
(1999)  Visual Homing to a Virtual Home    
Abstract: Purpose: Results from previous studies (e.g. Loomis et al., JEP, 1993) suggest that proprioceptive cues play a major role in human homing behaviour. We conducted triangle completion experiments in virtual environments to measure homing performance based solely on visual cues. Methods: Subjects were seated in the centre of a large half-cylindrical 180° projection screen and steered smoothly through the simulated scene using mouse buttons. Experiments were conducted in two environments: an extended volume filled with random blobs (inducing strong vection), and a photorealistic town containing distinct landmarks. On each trial, subjects had to return to their starting point after moving outwards along two prescribed segments (40 m long, subtending a 30°–150° horizontal angle) of an imaginary triangle. To exclude scene-matching as a homing strategy, the simulated environment was modified to a different but similar one just before the subject started the return movement. Results: We found strong systematic errors in distance travelled but only small deviations in turning angles. After some practice, the variability (standard deviation) of the responses typically dropped to roughly 10 m for distance and 10° for turns (lower variability for the town than for the blobs scene). Omitting the scene modification before the return movement resulted in nearly perfect performance, stressing the dominant role of piloting under natural conditions. Exchanging the mouse interface for a more realistic bicycle interface, thus introducing proprioceptive cues for sideways tilt and pedal resistance, reduced the systematic error in rotation but also increased the overall variability. Conclusion: Path integration using optical information alone is sufficient for accurate homing.
Notes:
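For reference, the correct homing response in such a triangle completion trial follows directly from the outbound geometry. A minimal geometric sketch (not the authors' experiment code; the example values are hypothetical):

```python
import numpy as np

def correct_homing_response(leg1, leg2, turn_deg):
    """Correct home-bound turn (deg) and distance for a triangle
    completion trial: travel leg1, turn by turn_deg, travel leg2,
    then return to the unmarked start. Positive turns are rightward."""
    heading = np.radians(90.0)                       # start facing "north"
    p1 = leg1 * np.array([np.cos(heading), np.sin(heading)])
    heading -= np.radians(turn_deg)                  # turn at first corner
    p2 = p1 + leg2 * np.array([np.cos(heading), np.sin(heading)])

    home = -p2                                       # vector back to origin
    home_distance = np.linalg.norm(home)
    turn_home = np.degrees(np.arctan2(home[1], home[0]) - heading)
    turn_home = (turn_home + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
    return turn_home, home_distance

# Example: two 40 m legs with a 60 deg rightward direction change.
turn, dist = correct_homing_response(40.0, 40.0, 60.0)
print(f"correct turn: {turn:.1f} deg, correct distance: {dist:.1f} m")
```

Comparing participants' executed turns and distances against these correct values yields the systematic distance errors and small angular deviations reported above.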
(1999)  Heimfinden in virtuellen Umgebungen [Homing in virtual environments]    
Abstract: Purpose: Results from earlier studies suggest that proprioceptive cues play a major role in human homing behaviour (e.g. Loomis et al., JEP, 1993). We investigated the influence of visual information, and of optic flow in particular, on homing performance using triangle completion experiments in virtual environments. Methods: Participants had to return to their starting point after moving away from it along two prescribed legs of a triangle. The experimental environment was displayed on a half-cylindrical 180° projection screen, and the simulated self-motion was controlled via mouse buttons. The experiments were conducted in two different scenarios: a cloud of dots that induces a strong degree of vection (sensation of self-motion), and a photorealistic small town with numerous salient landmarks. To exclude landmark navigation, all landmarks were exchanged for the return path ("scene swap" condition). Results: We found strong systematic errors in the distance travelled, but not in the turn angles. In a control experiment, omitting the scene swap resulted in nearly perfect homing performance, suggesting that image matching (where possible) has a dominant influence on homing accuracy. Conclusions: Optic flow in the cloud of dots proved sufficient to solve the homing task and led to homing performance similar to that in the town environment. Using scene swaps in virtual environments made it possible to separate the influence of two essential components of visual spatial orientation: optic flow (path integration) versus landmarks (piloting).
Notes: Poster by Bernhard E. Riecke & H.A.H.C. van Veen, Max-Planck-Institut für biologische Kybernetik, Tübingen; presented at the 2. Tübinger Wahrnehmungskonferenz 1999.
(1999)  Is homing by optic flow possible?    
Abstract:
Notes:
1998