Jason S Chan


jason.chan@tcd.ie

Journal articles

2010
Jason S Chan, Cristina Simões-Franklin, Hugh Garavan, Fiona N Newell (2010)  Static images of novel, moveable objects learned through touch activate visual area hMT+.   Neuroimage 49: 2. 1708-1716 Jan  
Abstract: Although many studies have found similar cortical areas activated during the recognition of objects encoded through vision or touch, little is known about the cortical areas involved in the crossmodal recognition of dynamic objects. Here, we investigated which cortical areas are involved in the recognition of moving objects and were specifically interested in whether motion areas are involved in the recognition of dynamic objects within and across sensory modalities. Prior to scanning, participants first learned to recognise a set of 12 novel objects, each presented either visually or haptically, and either moving or stationary. We then conducted fMRI whilst participants performed an old-new task with static images of learned or not-learned objects. We found that the fusiform and right inferior frontal gyri were more activated during within-modal visual object recognition than during crossmodal object recognition. Our results also revealed increased activation in area hMT+, LOC and the middle occipital gyrus, in the right hemisphere only, for objects learned as moving compared to objects learned as static, regardless of modality. We propose that the network of cortical areas involved in the recognition of dynamic objects is largely independent of modality; these findings have important implications for understanding the neural substrates of multisensory dynamic object recognition.
2008
Jason S Chan, Fiona N Newell (2008)  Behavioral evidence for task-dependent "what" versus "where" processing within and across modalities.   Percept Psychophys 70: 1. 36-49 Jan  
Abstract: Task-dependent information processing for the purpose of recognition or spatial perception is considered a principle common to all the main sensory modalities. Using a dual-task interference paradigm, we investigated the behavioral effects of independent information processing for shape identification and localization of object features within and across vision and touch. In Experiment 1, we established that color and texture processing (i.e., a "what" task) interfered with both visual and haptic shape-matching tasks and that mirror image and rotation matching (i.e., a "where" task) interfered with a feature-location-matching task in both modalities. In contrast, interference was reduced when a "where" interference task was embedded in a "what" primary task and vice versa. In Experiment 2, we replicated this finding within each modality, using the same interference and primary tasks throughout. In Experiment 3, the interference tasks were always conducted in a modality other than the primary task modality. Here, we found that resources for identification and spatial localization are independent of modality. Our findings further suggest that multisensory resources for shape recognition also involve resources for spatial localization. These results extend recent neuropsychological and neuroimaging findings and have important implications for our understanding of high-level information processing across the human sensory systems.
2007
Jason S Chan, Thorsten Maucher, Johannes Schemmel, Dana Kilroy, Fiona N Newell, Karlheinz Meier (2007)  The virtual haptic display: a device for exploring 2-D virtual shapes in the tactile modality.   Behav Res Methods 39: 4. 802-810 Nov  
Abstract: In order to understand better the processes involved in the perception of shape through touch, some element of control is required over the nature of the shape presented to the hand and the presentation timing. To that end, we have developed a cost-effective, computer-controlled apparatus for presenting haptic stimuli using active touch, known as a virtual haptic display (VHD). The operational principle behind this device is that it translates black and white visual images into topographic, 2-D taxel (tactile pixel) arrays, following the same principle used in Braille lettering. These taxels are either elevated or depressed at any one time, representing the white and black pixels of the visual image, respectively. To feel the taxels, the participant places their fingers onto a carriage that can be moved over the surface of the device to reveal a virtual shape. We conducted two experiments, and the results show that untrained participants are able to recognize different simple and complex shapes using this apparatus. The VHD apparatus is therefore well suited to presenting 2-D shapes through touch alone. Moreover, this device and its supporting software can also be used for presenting computer-controlled stimuli in cross-modal experiments.
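The image-to-taxel mapping described in this abstract can be illustrated with a minimal Python sketch. The grid size, threshold, and function name below are illustrative assumptions, not part of the VHD software: white regions of a grayscale image map to elevated taxels and black regions to depressed taxels.

```python
# Minimal sketch of the image-to-taxel mapping described in the VHD abstract.
# Assumptions (not from the original software): 8-bit grayscale input, a fixed
# mid-range threshold, and block averaging onto a hypothetical 16 x 32 taxel grid.
import numpy as np

def image_to_taxels(gray_image: np.ndarray, taxel_rows: int = 16,
                    taxel_cols: int = 32, threshold: int = 128) -> np.ndarray:
    """Convert a grayscale image (2-D uint8 array) into a binary taxel array.

    True  -> taxel elevated  (white region of the image)
    False -> taxel depressed (black region of the image)
    """
    rows, cols = gray_image.shape
    # Downsample by averaging blocks of pixels onto the coarser taxel grid.
    row_edges = np.linspace(0, rows, taxel_rows + 1, dtype=int)
    col_edges = np.linspace(0, cols, taxel_cols + 1, dtype=int)
    taxels = np.zeros((taxel_rows, taxel_cols), dtype=bool)
    for i in range(taxel_rows):
        for j in range(taxel_cols):
            block = gray_image[row_edges[i]:row_edges[i + 1],
                               col_edges[j]:col_edges[j + 1]]
            taxels[i, j] = block.mean() >= threshold  # white -> elevated
    return taxels

# Example: a white rectangle on a black background becomes a raised patch of taxels.
image = np.zeros((64, 128), dtype=np.uint8)
image[16:48, 32:96] = 255
print(image_to_taxels(image).astype(int))
```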
2004
Daniel Sanabria, Salvador Soto-Faraco, Jason S Chan, Charles Spence (2004)  When does visual perceptual grouping affect multisensory integration?   Cogn Affect Behav Neurosci 4: 2. 218-229 Jun  
Abstract: Several studies have shown that the direction in which a visual apparent motion stream moves can influence the perceived direction of an auditory apparent motion stream (an effect known as cross-modal dynamic capture). However, little is known about the role that intramodal perceptual grouping processes play in the multisensory integration of motion information. The present study was designed to investigate the time course of any modulation of the cross-modal dynamic capture effect by the nature of the perceptual grouping taking place within vision. Participants were required to judge the direction of an auditory apparent motion stream while trying to ignore visual apparent motion streams presented in a variety of different configurations. Our results demonstrate that the cross-modal dynamic capture effect was influenced more by visual perceptual grouping when the conditions for intramodal perceptual grouping were set up prior to the presentation of the audiovisual apparent motion stimuli. However, no such modulation occurred when the visual perceptual grouping manipulation was established at the same time as or after the presentation of the audiovisual stimuli. These results highlight the importance of the unimodal perceptual organization of sensory information to the manifestation of multisensory integration.
2003
Jason S Chan, Charles Spence (2003)  Presenting multiple auditory signals using multiple sound cards in Visual Basic 6.0.   Behav Res Methods Instrum Comput 35: 1. 125-128 Feb  
Abstract: In auditory research, it is often desirable to present more than two auditory stimuli at any one time. Although the technology has been available for some time, the majority of researchers have not utilized it. This article provides a simple means of presenting multiple, concurrent, independent auditory events, using two or more different sound cards installed within a single computer. By enabling the presentation of more auditory events, we can hope to gain a better understanding of the cognitive and attentional processes operating under more complex and realistic scenes, such as those exemplified by the cocktail party effect. The software requirements are Windows 98SR2/Me/NT4/2000/XP, Visual Basic 6.0, and DirectX 7.0 or above. The hardware requirements are a Pentium II, 128 MB RAM, and two or more different sound cards.
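The same idea, playing independent signals on separate sound cards at the same time, can be sketched with a modern analogue. The snippet below is not the authors' Visual Basic 6.0/DirectX implementation; it assumes the third-party python-sounddevice package and that device indices 0 and 1 refer to two separate sound cards (check sounddevice.query_devices() on your machine).

```python
# Modern analogue (assumption: python-sounddevice installed, two output devices at
# indices 0 and 1) of presenting two independent auditory signals concurrently on
# two different sound cards.
import threading
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100  # Hz

def tone(frequency_hz: float, duration_s: float) -> np.ndarray:
    """Generate a mono sine tone as a float32 array."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return (0.2 * np.sin(2 * np.pi * frequency_hz * t)).astype(np.float32)

def play_on_device(device_index: int, samples: np.ndarray) -> None:
    """Open an output stream on one device and play the samples to completion."""
    with sd.OutputStream(samplerate=SAMPLE_RATE, device=device_index,
                         channels=1) as stream:
        stream.write(samples)

# Present a 440 Hz tone on one card and a 660 Hz tone on another, concurrently.
threads = [
    threading.Thread(target=play_on_device, args=(0, tone(440.0, 2.0))),
    threading.Thread(target=play_on_device, args=(1, tone(660.0, 2.0))),
]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Each stream runs in its own thread because OutputStream.write blocks until its buffer is consumed; this keeps the two signals genuinely concurrent and independently timed.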