Marc van Wanrooij

beestmeester@hotmail.com

Journal articles

2009
Marc M Van Wanrooij, Andrew H Bell, Douglas P Munoz, A John Van Opstal (2009). The effect of spatial-temporal audiovisual disparities on saccades in a complex scene. Exp Brain Res 198(2-3): 425-437, Sep.
Abstract: In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438-454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originated from the same object, or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV-background. Subjects had to orient quickly and accurately to the visual target, thereby ignoring the auditory distractor. We show that this task belies a dichotomy, as it was quite difficult to produce fast responses (<250 ms) that were not aurally driven. Subjects therefore made many erroneous saccades. Interestingly, for the spatially aligned events the inability to ignore auditory stimuli produced shorter reaction times, but also more accurate responses, than the unisensory target conditions. These findings, which demonstrate effective multisensory integration, are similar to those of the previous study, and the same multisensory integration rules apply (Corneil et al. in J Neurophysiol 88:438-454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down and the subjects' responses became bistable: saccades were directed either to the auditory stimulus (fast responses) or to the visual stimulus (late responses). Interestingly, also in this case, responses were faster and more accurate than those to the respective unisensory stimuli.
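The bistable pattern described above can be made concrete with a small simulation: label each saccade by whether its endpoint lies closer to the auditory or the visual stimulus, then compare latencies between the two groups. This is a minimal sketch on synthetic data, not the authors' analysis code; the latency split at 250 ms is built into the toy data to mirror the abstract.

```python
# Sketch of a bistability analysis on synthetic saccade data (hypothetical
# numbers, not from the paper): fast responses land near the auditory
# distractor, late responses near the visual target.
import numpy as np

rng = np.random.default_rng(0)
n = 200
latency = rng.uniform(150, 400, n)        # saccade reaction times (ms)
to_visual = latency > 250                 # the split reported in the abstract
# Angular endpoint error relative to the auditory and visual stimuli (deg).
err_aud = np.where(to_visual, rng.normal(15, 5, n), rng.normal(2, 2, n))
err_vis = np.where(to_visual, rng.normal(2, 2, n), rng.normal(15, 5, n))

# Classify each saccade by the nearer stimulus and compare latencies.
aud_directed = np.abs(err_aud) < np.abs(err_vis)
print(f"median SRT, auditory-directed: {np.median(latency[aud_directed]):.0f} ms")
print(f"median SRT, visual-directed:   {np.median(latency[~aud_directed]):.0f} ms")
```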
2007
Marc M Van Wanrooij, A John Van Opstal (2007). Sound localization under perturbed binaural hearing. J Neurophysiol 97(1): 715-726, Jan.
Abstract: This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband (0.5-20 kHz; BB) noises, with sound levels between 30 and 60 dB, A-weighted (dBA). To deny listeners any consistent azimuth-related head-shadow cues, stimuli were randomly interleaved. A plug immediately degraded azimuth performance, as evidenced by a sound level-dependent shift ("bias") of responses contralateral to the plug, and a level-dependent change in the slope of the stimulus-response relation ("gain"). Although the azimuth bias and gain were highly correlated, they could not be predicted from the plug's acoustic attenuation. Interestingly, listeners performed best for low-intensity stimuli at their normal-hearing side. These data demonstrate that listeners rely on monaural spectral cues for sound-source azimuth localization as soon as the binaural difference cues break down. The elevation response components were also affected by the plug: elevation gain depended on both stimulus azimuth and sound level, and, as for azimuth, localization was best for low-intensity stimuli at the hearing side. Our results show that the neural computation of elevation incorporates a binaural weighting process that relies on the perceived, rather than the actual, sound-source azimuth. It is our conjecture that sound localization ensues from a weighting of all acoustic cues for both azimuth and elevation, in which the weights may be partially determined, and rapidly updated, by the reliability of the particular cue.
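The "gain" and "bias" terms above refer to a linear fit of response azimuth against stimulus azimuth, a standard analysis in this literature. A minimal sketch with hypothetical data:

```python
# Fit response azimuth = gain * stimulus azimuth + bias (hypothetical data).
import numpy as np

stim_az = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)  # deg
resp_az = np.array([-35, -22, -8, 6, 18, 30, 42], dtype=float)   # deg

gain, bias = np.polyfit(stim_az, resp_az, 1)
print(f"gain = {gain:.2f} (1.0 = veridical), bias = {bias:+.1f} deg")
# A plug would show up as gain < 1 plus a bias toward the hearing side, with
# both parameters varying with sound level, as described in the abstract.
```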
2005
Marc M Van Wanrooij, A John Van Opstal (2005). Relearning sound localization with a new ear. J Neurosci 25(22): 5413-5424, Jun.
Abstract: Human sound localization results primarily from the processing of binaural differences in sound level and arrival time for locations in the horizontal plane (azimuth), and of spectral shape cues generated by the head and pinnae for positions in the vertical plane (elevation). The latter mechanism incorporates two processing stages: a spectral-to-spatial mapping stage and a binaural weighting stage that determines the contribution of each ear to perceived elevation as a function of sound azimuth. We demonstrated recently that binaural pinna molds virtually abolish the ability to localize sound-source elevation, but, after several weeks, subjects regained normal localization performance. It is not clear which processing stage underlies this remarkable plasticity, because the auditory system could have learned the new spectral cues separately for each ear (spatial-mapping adaptation) or for one ear only, while extending its contribution into the contralateral hemifield (binaural-weighting adaptation). To dissociate these possibilities, we applied a long-term monaural spectral perturbation in 13 subjects. Our results show that, in eight experiments, listeners learned to localize accurately with new spectral cues that differed substantially from those provided by their own ears. Interestingly, the five subjects whose spectral cues were not sufficiently perturbed never attained stable localization performance. Our findings indicate that the analysis of spectral cues may involve a correlation process between the sensory input and a stored spectral representation of the subject's ears, and that learning acts predominantly at the spectral-to-spatial mapping level rather than at the level of binaural weighting.
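The proposed correlation process can be sketched as template matching: correlate the incoming spectrum with a stored spectral template for each candidate elevation and select the elevation with the highest correlation. The templates and noise model below are synthetic stand-ins, not measured head-related transfer functions:

```python
# Template matching: correlate the sensory spectrum with stored templates.
import numpy as np

rng = np.random.default_rng(1)
elevations = np.arange(-30, 61, 10)            # candidate elevations (deg)
n_bands = 32                                   # frequency bands above ~3 kHz
templates = rng.normal(size=(elevations.size, n_bands))  # stored "ear" spectra

true_idx = 5                                   # simulate a source at 20 deg
sensory = templates[true_idx] + rng.normal(0, 0.3, n_bands)

# Pearson correlation of the input against every stored template.
r = [np.corrcoef(sensory, t)[0, 1] for t in templates]
best = elevations[int(np.argmax(r))]
print(f"perceived elevation: {best} deg (true: {elevations[true_idx]} deg)")
```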
2004
Marc M Van Wanrooij, A John Van Opstal (2004). Contribution of head shadow and pinna cues to chronic monaural sound localization. J Neurosci 24(17): 4163-4171, Apr.
Abstract: Monaurally deaf people lack the binaural acoustic difference cues in sound level and timing that are needed to encode sound location in the horizontal plane (azimuth). It has been proposed that these people therefore rely on the spectral pinna cues of their normal ear to localize sounds. However, the acoustic head-shadow effect (HSE) might also serve as an azimuth cue, despite its ambiguity when absolute sound levels are unknown. Here, we assess the contribution of each cue to two-dimensional (2D) sound localization in the monaurally deaf. In a localization test with randomly interleaved sound levels, we show that all monaurally deaf listeners relied heavily on the HSE, whereas binaural control listeners ignored this cue. However, some monaural listeners responded partly to the actual sound-source azimuth, regardless of sound level. We show that these listeners extracted azimuth information from their pinna cues. The better monaural listeners were able to localize azimuth on the basis of spectral cues, the better their ability to also localize sound-source elevation. In a subsequent localization experiment with one fixed sound level, monaural listeners rapidly adopted a strategy based on the HSE. We conclude that monaural spectral cues are not sufficient for adequate 2D sound localization under unfamiliar acoustic conditions. Thus, monaural listeners rely strongly on the ambiguous HSE, which may help them cope with familiar acoustic environments.
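The ambiguity of the HSE follows directly from the fact that the level at the hearing ear confounds source azimuth with source level. A toy illustration, with a crude stand-in for the real head-shadow attenuation curve (here, positive azimuth denotes the deaf side):

```python
# Toy head shadow: the far side loses up to ~20 dB relative to the near side.
import numpy as np

def level_at_ear(source_level_db, azimuth_deg):
    """Level at the hearing ear; positive azimuth = deaf side (more shadow)."""
    return source_level_db - 10.0 * (1 + np.sin(np.radians(azimuth_deg)))

# Two different (level, azimuth) pairs produce the same proximal level, so
# with roving sound levels the HSE alone cannot specify azimuth.
print(level_at_ear(50, -90))  # 50 dB source at the hearing side -> 50.0 dB
print(level_at_ear(70, +90))  # 70 dB source at the deaf side    -> 50.0 dB
```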
2002
B D Corneil, M Van Wanrooij, D P Munoz, A J Van Opstal (2002). Auditory-visual interactions subserving goal-directed saccades in a complex scene. J Neurophysiol 88(1): 438-454, Jul.
Abstract: This study addresses the integration of auditory and visual stimuli subserving the generation of saccades in a complex scene. Previous studies have shown that saccadic reaction times (SRTs) to combined auditory-visual stimuli are reduced when compared with SRTs to either stimulus alone. However, these results have typically been obtained with high-intensity stimuli distributed over a limited number of positions in the horizontal plane. It is less clear how auditory-visual interactions influence saccades under more complex but arguably more natural conditions, when low-intensity stimuli are embedded in complex backgrounds and distributed throughout two-dimensional (2-D) space. To study this problem, human subjects made saccades to visual-only (V-saccades), auditory-only (A-saccades), or spatially coincident auditory-visual (AV-saccades) targets. In each trial, the low-intensity target was embedded within a complex auditory-visual background, and subjects were allowed over 3 s to search for and foveate the target at 1 of 24 possible locations within the 2-D oculomotor range. We systematically varied the onset times of the targets and the intensity of the auditory target relative to the background [i.e., the signal-to-noise (S/N) ratio] to examine their effects on both SRT and saccadic accuracy. Subjects were often able to localize the target within one or two saccades, but in about 15% of the trials they generated scanning patterns that consisted of many saccades. The present study reports only the SRT and accuracy of the first saccade in each trial. In all subjects, A-saccades had shorter SRTs than V-saccades, but were less accurate than V-saccades when generated to auditory targets presented at low S/N ratios. AV-saccades were at least as accurate as V-saccades but were generated at SRTs typical of A-saccades. The properties of AV-saccades depended systematically on both the stimulus timing and the S/N ratio of the auditory target. Compared with unimodal A- and V-saccades, the improvements in SRT and accuracy of AV-saccades were greatest when the visual target was synchronous with or leading the auditory target, and when the S/N ratio of the auditory target was lowest. Further, the improvements in saccade accuracy were greater in elevation than in azimuth. A control experiment demonstrated that a portion of the improvements in SRT could be attributed to a warning-cue mechanism, but that the improvements in saccade accuracy depended on the spatial register of the stimuli. These results agree well with earlier electrophysiological results obtained from the midbrain superior colliculus (SC) of anesthetized preparations, and we argue that they demonstrate multisensory integration of auditory and visual signals in a complex, quasi-natural environment. A conceptual model incorporating the SC is presented to explain the observed data.
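One standard way to ask whether such SRT reductions reflect genuine neural integration, rather than a race between independent unimodal processes, is Miller's race-model inequality, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t). The abstract does not state that this test was used, so the sketch below, on synthetic SRT distributions, is only illustrative:

```python
# Race-model inequality test on synthetic SRT distributions (illustrative).
import numpy as np

rng = np.random.default_rng(2)
srt_a = rng.normal(220, 40, 500)    # auditory-only SRTs (ms), hypothetical
srt_v = rng.normal(280, 40, 500)    # visual-only SRTs (ms), hypothetical
srt_av = rng.normal(200, 35, 500)   # audiovisual SRTs (ms), hypothetical

t = np.linspace(120, 400, 50)
def cdf(x):
    return (x[:, None] <= t).mean(axis=0)   # empirical CDF evaluated at each t

# Positive values violate the race model and point to neural integration.
violation = cdf(srt_av) - np.minimum(cdf(srt_a) + cdf(srt_v), 1.0)
print(f"max race-model violation: {violation.max():.3f}")
```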