Abstract: Body change illusions have been of great interest in recent years for understanding how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-body experiences, and even ownership of an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) a first-person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area.
Abstract: We present a method that maps collisions onto a dynamic deformable virtual character, designed for tactile haptic rendering in Immersive Virtual Reality (IVR). Our method computes exact intersections by relying on programmable graphics hardware. Based on interference tests between a deformable mesh (an avatar controlled by a human participant) and a few hundred collider objects, our method gives coherent haptic feedback to the participant. We use GPU textures to map surface regions of the avatar to haptic actuators. We illustrate our approach with a vest composed of vibrators for haptic rendering, and we show that our method achieves collision detection at rates well over 1 kHz on good-quality deformable avatar meshes, which makes it suitable for video games and virtual training applications.
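The core idea of mapping avatar surface regions to haptic actuators via textures can be illustrated with a minimal sketch. This is not the paper's implementation: the 2x4 actuator grid, the UV layout, and the function name `uv_to_actuator` are all illustrative assumptions, standing in for a lookup into an actuator-index texture.

```python
# Hypothetical sketch: route a collision point's texture (UV) coordinates on
# the avatar surface to one actuator of a vibrator vest. The 2x4 grid over
# UV space is an assumed layout, not the layout used in the paper.

ACTUATOR_ROWS, ACTUATOR_COLS = 2, 4  # e.g. a vest with 8 vibrators


def uv_to_actuator(u: float, v: float) -> int:
    """Return the index of the actuator covering texture coordinate (u, v),
    with u and v in [0, 1]. Mimics a nearest-texel lookup into an
    actuator-index texture."""
    col = min(int(u * ACTUATOR_COLS), ACTUATOR_COLS - 1)
    row = min(int(v * ACTUATOR_ROWS), ACTUATOR_ROWS - 1)
    return row * ACTUATOR_COLS + col
```

In a GPU implementation, the same lookup is a texture fetch performed per collision point, so routing cost is independent of mesh deformation.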
Abstract: We describe a system that substitutes a person’s body in virtual reality with a virtual body (or avatar). The avatar is seen from a first-person perspective, moves as the person moves, and the system generates touch on the real person’s body when the avatar is touched. Such replacement of the person’s real body by a virtual body requires a wide field-of-view head-mounted display, real-time whole-body tracking, and tactile feedback. We show how to achieve this with a variety of off-the-shelf hardware and software, as well as custom systems for real-time avatar rendering and collision detection. We present an overview of the system and details of some of its components. We provide examples of how such a system is being used in some of our current experimental studies of embodiment.
Abstract: Modelling, animation and rendering have dominated research in computer graphics, yielding increasingly rich and realistic virtual worlds. The complexity, richness and quality of these virtual worlds are viewed through a single medium: a virtual camera. To properly convey information, whether related to the characters in a scene, the aesthetics of the composition or the emotional impact of the lighting, particular attention must be given to how the camera is positioned and moved. This paper presents an overview of automated camera planning techniques. After analyzing the requirements with respect to shot properties, we review the solution techniques and present a broad classification of existing approaches. We identify the principal shortcomings of existing techniques and propose a set of objectives for research into automated camera planning.
Abstract: In this paper, we present a semantic space partitioning (SSP) approach to the virtual camera composition problem. Virtual camera composition (VCC) consists of positioning a camera in a virtual world such that the resulting image satisfies a set of visual cinematographic properties. Whereas most related works concentrate on numerically computing a unique camera position satisfying the problem, we propose to group equivalent solutions into 3D volumes with respect to their visual properties, and to present them to the user. We introduce the notion of semantic volumes as an extension of visual aspects to characterize, compute and manipulate distinct solution sets. Our approach relies on (1) a space partitioning process derived from a study of possible camera locations with respect to the objects in the scene and (2) local search numerical techniques to compute good representatives of each volume. This work is motivated by the lack of VCC tools in 3D software and the desire to integrate cinematographic semantics in the description, solving and interaction processes. Experimental results illustrate the suitability of our approach for identifying and providing distinct solution sets. Furthermore, the exploitation of the semantic volumes lays the groundwork for natural and efficient user interaction by providing knowledge and reasoning on possible classes of solutions.
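Step (2) above, computing a good representative of a volume by local search, can be sketched generically. This is a toy illustration under stated assumptions, not the paper's algorithm: the random-perturbation hill climb and the single stand-in objective `distance_property` (a preferred camera-to-subject distance) are placeholders for the full set of visual cinematographic properties.

```python
# Illustrative sketch (not the paper's method): refine a camera position by
# simple stochastic hill climbing, maximizing a toy cinematographic objective.
import random


def local_search(start, objective, step=0.5, iters=200, seed=0):
    """Hill-climb from `start` (an (x, y, z) tuple), keeping any random
    perturbation that improves `objective`. Returns (position, score)."""
    rng = random.Random(seed)
    best, best_score = start, objective(start)
    for _ in range(iters):
        cand = tuple(c + rng.uniform(-step, step) for c in best)
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score


def distance_property(pos, target_dist=5.0):
    """Stand-in visual property: prefer cameras about 5 units from a subject
    at the origin. Score is 0 at the ideal distance, negative elsewhere."""
    d = sum(c * c for c in pos) ** 0.5
    return -abs(d - target_dist)


pos, score = local_search((1.0, 1.0, 1.0), distance_property)
```

Running the search from one seed point per volume yields a representative camera for that volume; a real VCC objective would combine several such property scores.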