hosted by publicationslist.org
    
Argiro Vatakis

argiro.vatakis@gmail.com

Journal articles

2008
 
Argiro Vatakis, Charles Spence (2008)  Evaluating the influence of the 'unity assumption' on the temporal perception of realistic audiovisual stimuli.   Acta Psychol (Amst) 127: 1. 12-23 Jan  
Abstract: Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
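Many of the abstracts in this list report performance in terms of the just noticeable difference (JND) and the point of subjective simultaneity (PSS), estimated from temporal order judgments (TOJs) collected with the method of constant stimuli. As an illustrative sketch only (not the authors' actual analysis pipeline), a cumulative-Gaussian psychometric function can be fit to the proportion of "vision first" responses across SOAs; the PSS is then the 50% point and the JND is half the 25%-75% interquartile interval. All data values and parameter ranges below are hypothetical.

```python
import math

def p_vision_first(soa, pss, sigma):
    """Cumulative Gaussian: probability of a 'vision first' response at a
    given stimulus onset asynchrony (SOA, ms; negative = audition led)."""
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, props):
    """Brute-force least-squares grid fit of PSS (the 50% point) and sigma.
    The JND is half the 25%-75% interval of a Gaussian, i.e. 0.6745 * sigma."""
    best = (float("inf"), 0, 1)
    for pss in range(-150, 151):          # candidate PSS values, ms
        for sigma in range(5, 301, 5):    # candidate slopes, ms
            err = sum((p_vision_first(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas, props))
            if err < best[0]:
                best = (err, pss, sigma)
    _, pss, sigma = best
    return pss, 0.6745 * sigma

# Hypothetical response proportions (NOT data from any of the papers above):
soas = [-200, -100, -50, 0, 50, 100, 200]
props = [0.05, 0.20, 0.35, 0.55, 0.75, 0.90, 0.98]
pss, jnd = fit_toj(soas, props)  # pss, jnd now hold the fitted estimates, in ms
```

In practice a maximum-likelihood fit (e.g., via an optimizer rather than a grid search) would be used, but the mapping from the fitted curve to the PSS and JND statistics reported throughout these abstracts is the same.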
 
Argiro Vatakis, Jordi Navarra, Salvador Soto-Faraco, Charles Spence (2008)  Audiovisual temporal adaptation of speech: temporal order versus simultaneity judgments.   Exp Brain Res 185: 3. 521-529 Mar  
Abstract: The temporal perception of simple auditory and visual stimuli can be modulated by exposure to asynchronous audiovisual speech. For instance, research using the temporal order judgment (TOJ) task has shown that exposure to temporally misaligned audiovisual speech signals can induce temporal adaptation that will influence the TOJs of other (simpler) audiovisual events (Navarra et al. (2005) Cognit Brain Res 25:499-507). Given that TOJ and simultaneity judgment (SJ) tasks appear to reflect different underlying mechanisms, we investigated whether adaptation to asynchronous speech inputs would also influence SJ task performance. Participants judged whether a light flash and a noise burst, presented at varying stimulus onset asynchronies, were simultaneous or not, or else they discriminated which of the two sensory events appeared to have occurred first. While performing these tasks, participants monitored a continuous speech stream for target words that were either presented in synchrony, or with the audio channel lagging 300 ms behind the video channel. We found that the sensitivity of participants' TOJ and SJ responses was reduced when the background speech stream was desynchronized. A significant modulation of the point of subjective simultaneity (PSS) was also observed in the SJ task but, interestingly, not in the TOJ task, thus supporting previous claims that TOJ and SJ tasks may tap somewhat different aspects of temporal perception.
2007
 
Argiro Vatakis, Jordi Navarra, Salvador Soto-Faraco, Charles Spence (2007)  Temporal recalibration during asynchronous audiovisual speech perception.   Exp Brain Res 181: 1. 173-181 Jul  
Abstract: We investigated the consequences of monitoring an asynchronous audiovisual speech stream on the temporal perception of simultaneously presented vowel-consonant-vowel (VCV) audiovisual speech video clips. Participants made temporal order judgments (TOJs) regarding whether the speech-sound or the visual-speech gesture occurred first, for video clips presented at various stimulus onset asynchronies. Throughout the experiment, half of the participants also monitored a continuous stream of words presented audiovisually, superimposed over the VCV video clips. The continuous (adapting) speech stream could either be presented in synchrony, or else with the auditory stream lagging by 300 ms. A significant shift (13 ms in the direction of the adapting stimulus in the point of subjective simultaneity) was observed in the TOJ task when participants monitored the asynchronous speech stream. This result suggests that the consequences of adapting to asynchronous speech extend beyond the case of simple audiovisual stimuli (as has recently been demonstrated by Navarra et al. in Cogn Brain Res 25:499-507, 2005) and can even affect the perception of more complex speech stimuli.
Argiro Vatakis, Charles Spence (2007)  How 'special' is the human face? Evidence from an audiovisual temporal order judgment task.   Neuroreport 18: 17. 1807-1811 Nov  
Abstract: Upright and inverted audiovisual video clips of a monkey producing a 'coo' and a human imitating this vocalization were presented at a range of stimulus onset asynchronies. Participants made temporal order judgments regarding which modality stream appeared to have been presented first. The results showed that inverting the dynamic human visual display led to a significant difference in the point of subjective simultaneity, with the inverted human faces requiring more time to be processed compared with the upright displays. No such inversion effect was found for the monkey visual displays. These results demonstrate that the effect of inversion on the temporal perception of audiovisual speech stimuli is driven by the viewing of a human face rather than by the integration of audiovisual speech.
Argiro Vatakis, Linda Bayliss, Massimiliano Zampini, Charles Spence (2007)  The influence of synchronous audiovisual distractors on audiovisual temporal order judgments.   Percept Psychophys 69: 2. 298-309 Feb  
Abstract: Participants made unspeeded temporal order judgments (TOJs) regarding which occurred first, an auditory or a visual target stimulus, when they were presented at a variety of different stimulus onset asynchronies. The target stimuli were presented either in isolation or positioned randomly among a stream of three synchronous audiovisual distractors. The largest just noticeable differences were reported when the targets were presented in the middle of the distractor stream. When the targets were presented at the beginning of the stream, performance was no worse than when the audiovisual targets were presented in isolation. Subsequent experiments revealed that performance improved somewhat when the position of the target was fixed or when the target was made physically distinctive from the distractors. These results show that audiovisual TOJs are impaired by the presence of audiovisual distractors and that this cost can be ameliorated by directing attention to the appropriate temporal position within the stimulus stream.
Argiro Vatakis, Charles Spence (2007)  Crossmodal binding: evaluating the "unity assumption" using audiovisual speech stimuli.   Percept Psychophys 69: 5. 744-756 Jul  
Abstract: We investigated whether the "unity assumption," according to which an observer assumes that two different sensory signals refer to the same underlying multisensory event, influences the multisensory integration of audiovisual speech stimuli. Syllables (Experiments 1, 3, and 4) or words (Experiment 2) were presented to participants at a range of different stimulus onset asynchronies using the method of constant stimuli. Participants made unspeeded temporal order judgments regarding which stream (either auditory or visual) had been presented first. The auditory and visual speech stimuli in Experiments 1-3 were either gender matched (i.e., a female face presented together with a female voice) or else gender mismatched (i.e., a female face presented together with a male voice). In Experiment 4, different utterances from the same female speaker were used to generate the matched and mismatched speech video clips. Measuring performance in terms of the just noticeable difference, the participants in all four experiments found it easier to judge which sensory modality had been presented first when evaluating mismatched stimuli than when evaluating the matched-speech stimuli. These results therefore provide the first empirical support for the "unity assumption" in the domain of the multisensory temporal integration of audiovisual speech stimuli.
2006
 
Argiro Vatakis, Charles Spence (2006)  Audiovisual synchrony perception for speech and music assessed using a temporal order judgment task.   Neurosci Lett 393: 1. 40-44 Jan  
Abstract: This study investigated people's sensitivity to audiovisual asynchrony in briefly presented speech and musical videos. A series of speech (letters and syllables) and guitar and piano music (single and double notes) video clips were presented randomly at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. The accuracy of participants' TOJ performance (measured in terms of the just noticeable difference; JND) was significantly better for the speech than for either the guitar or piano music video clips, suggesting that people are more sensitive to asynchrony for speech than for music stimuli. The visual stream had to lead the auditory stream for the point of subjective simultaneity (PSS) to be achieved in the piano music clips while auditory leads were typically required for the guitar music clips. The PSS values obtained for the speech stimuli varied substantially as a function of the particular speech sound presented. These results provide the first empirical evidence regarding people's sensitivity to audiovisual asynchrony for musical stimuli. Our results also demonstrate that people's sensitivity to asynchrony in speech stimuli is better than has been suggested on the basis of previous research using continuous speech streams as stimuli.
Georgina Lyons, Daniel Sanabria, Argiro Vatakis, Charles Spence (2006)  The modulation of crossmodal integration by unimodal perceptual grouping: a visuotactile apparent motion study.   Exp Brain Res 174: 3. 510-516 Oct  
Abstract: We adapted the crossmodal dynamic capture task to investigate the modulation of visuotactile crossmodal integration by unimodal visual perceptual grouping. The influence of finger posture on this interaction was also explored. Participants were required to judge the direction of a tactile apparent motion stream (moving either to the left or to the right) presented to their crossed or uncrossed index fingers. The participants were instructed to ignore a distracting visual apparent motion stream, comprised of either 2 or 6 lights presented concurrently with the tactile stimuli. More crossmodal dynamic capture of the direction of the tactile apparent motion stream by the visual apparent motion stream was observed in the 2-lights condition than in the 6-lights condition. This interaction was not modulated by finger posture. These results suggest that visual intramodal perceptual grouping constrains the crossmodal binding of visual and tactile apparent motion information, irrespective of finger posture.
Argiro Vatakis, Charles Spence (2006)  Temporal order judgments for audiovisual targets embedded in unimodal and bimodal distractor streams.   Neurosci Lett 408: 1. 5-9 Nov  
Abstract: We investigated whether the presence of unimodal or bimodal (synchronous) distractors would affect temporal order judgments (TOJs) for pairs of asynchronous audiovisual target stimuli. Participants made unspeeded TOJs regarding which of a pair of auditory and visual stimuli, presented at different stimulus onset asynchronies using the method of constant stimuli, occurred first. These asynchronous target stimuli were presented in a fixed position amongst a stream of three (auditory, visual, or audiovisual) distractors in each block of trials. The largest just noticeable differences (JNDs) were reported when the target stimuli were presented in the middle (position 3) of the distractor stream. Importantly, audiovisual distractors were shown to interfere with TOJ performance far more than unimodal (auditory or visual) distractors. The point of subjective simultaneity (PSS) was also influenced by the modality of the distractors, and by the position of the target within the distractor stream. These results confirm the existence of a specifically bimodal crowding effect, with audiovisual TOJs being impaired far more by the presence of audiovisual distractors than by unimodal auditory or visual distractors.
Argiro Vatakis, Charles Spence (2006)  Evaluating the influence of frame rate on the temporal aspects of audiovisual speech perception.   Neurosci Lett 405: 1-2. 132-136 Sep  
Abstract: We investigated whether changing the frame rate at which speech video clips were presented (6-30 frames per second, fps) would affect audiovisual temporal perception. Participants made unspeeded temporal order judgments (TOJs) regarding which signal (auditory or visual) was presented first for video clips presented at a range of different stimulus onset asynchronies (SOAs) using the method of constant stimuli. Temporal discrimination accuracy was unaffected by changes in frame rate, while lower frame rate speech video clips required larger visual-speech leads for the point of subjective simultaneity (PSS) to be achieved than did higher frame rate video clips. The significant effect of frame rate on temporal perception demonstrated here has not been controlled for in previous studies of audiovisual synchrony perception using video stimuli and is potentially important given the rapid increase in the use of audiovisual videos in cognitive neuroscience research in recent years.
Argiro Vatakis, Charles Spence (2006)  Audiovisual synchrony perception for music, speech, and object actions.   Brain Res 1111: 1. 134-142 Sep  
Abstract: We investigated the perception of synchrony for complex audiovisual events. In Experiment 1, a series of music (guitar and piano), speech (sentences), and object action video clips were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. Temporal discrimination accuracy was significantly better for the object actions than for the speech video clips, and both were significantly better than for the music video clips. In order to investigate whether or not these differences in TOJ performance were driven by differences in stimulus familiarity, we conducted a second experiment using brief speech (syllables), music (guitar), and object action video clips of fixed duration together with temporally reversed (i.e., less familiar) versions of the same stimuli. The results showed no main effect of stimulus type on temporal discrimination accuracy. Interestingly, however, reversing the video clips resulted in a significant decrement in temporal discrimination accuracy, as compared to the normally presented versions, for the music and object action clips, but not for the speech stimuli. Overall, our results suggest that cross-modal temporal discrimination performance is better for audiovisual stimuli of lower complexity as compared to stimuli having continuously varying properties (e.g., syllables versus words and/or sentences).
2005
 
Bart Krekelberg, Argiro Vatakis, Zoe Kourtzi (2005)  Implied motion from form in the human visual cortex.   J Neurophysiol 94: 6. 4373-4386 Dec  
Abstract: When cartoonists use speed lines--also called motion streaks--to suggest the speed of a stationary object, they use form to imply motion. The goal of this study was to investigate the mechanisms that mediate the percept of implied motion in the human visual cortex. In an adaptation functional imaging paradigm we presented Glass patterns that, just like speed lines, imply motion but do not on average contain coherent motion energy. We found selective adaptation to these patterns in the human motion complex, the lateral occipital complex (LOC), and earlier visual areas. Glass patterns contain both local orientation features and global structure. To disentangle these aspects we performed a control experiment using Glass patterns with minimal local orientation differences but large global structure differences. This experiment showed that selectivity for Glass patterns arises in part in areas beyond V1 and V2. Interestingly, the selective adaptation transferred from implied motion stimuli to similar real motion patterns in dorsal but not ventral areas. This suggests that the same subpopulations of cells in dorsal areas that are selective for implied motion are also selective for real motion. In other words, these cells are invariant with respect to the cue (implied or real) that generates the motion. We conclude that the human motion complex responds to Glass patterns as if they contain coherent motion. This, presumably, is the reason why these patterns appear to move coherently. The LOC, however, has different cells that respond to the structure of real motion patterns versus implied motion patterns. Such a differential response may allow ventral areas to further analyze the structure of global patterns.
Jordi Navarra, Argiro Vatakis, Massimiliano Zampini, Salvador Soto-Faraco, William Humphreys, Charles Spence (2005)  Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration.   Brain Res Cogn Brain Res 25: 2. 499-507 Oct  
Abstract: We examined whether monitoring asynchronous audiovisual speech induces a general temporal recalibration of auditory and visual sensory processing. Participants monitored a videotape featuring a speaker pronouncing a list of words (Experiments 1 and 3) or a hand playing a musical pattern on a piano (Experiment 2). The auditory and visual channels were either presented in synchrony, or else asynchronously (with the visual signal leading the auditory signal by 300 ms; Experiments 1 and 2). While performing the monitoring task, participants were asked to judge the temporal order of pairs of auditory (white noise bursts) and visual stimuli (flashes) that were presented at varying stimulus onset asynchronies (SOAs) during the session. The results showed that, while monitoring desynchronized speech or music, participants required a longer interval between the auditory and visual stimuli in order to perceive their temporal order correctly, suggesting a widening of the temporal window for audiovisual integration. The fact that no such recalibration occurred when we used a longer asynchrony (1000 ms) that exceeded the temporal window for audiovisual integration (Experiment 3) supports this conclusion.
2004
 
Thomas Z Strybel, Argiro Vatakis (2004)  A comparison of auditory and visual apparent motion presented individually and with crossmodal moving distractors.   Perception 33: 9. 1033-1048  
Abstract: Unimodal auditory and visual apparent motion (AM) and bimodal audiovisual AM were investigated to determine the effects of crossmodal integration on motion perception and direction-of-motion discrimination in each modality. To determine the optimal stimulus onset asynchrony (SOA) ranges for motion perception and direction discrimination, we initially measured unimodal visual and auditory AMs using one of four durations (50, 100, 200, or 400 ms) and ten SOAs (40-450 ms). In the bimodal conditions, auditory and visual AM were measured in the presence of temporally synchronous, spatially displaced distractors that were either congruent (moving in the same direction) or conflicting (moving in the opposite direction) with respect to target motion. Participants reported whether continuous motion was perceived and its direction. With unimodal auditory and visual AM, motion perception was affected differently by stimulus duration and SOA in the two modalities, while the opposite was observed for direction of motion. In the bimodal audiovisual AM condition, discriminating the direction of motion was affected only in the case of an auditory target. The perceived direction of auditory but not visual AM was reduced to chance levels when the crossmodal distractor direction was conflicting. Conversely, motion perception was unaffected by the distractor direction and, in some cases, the mere presence of a distractor facilitated movement perception.

Conference papers

2007

Society for Neuroscience Abstracts. Program No. 301.19, 85.

2004