Ladislav Kunc

Department of Computer Graphics and Interaction, Karlovo náměstí 13, 121 35 Prague 2
kuncladi@fel.cvut.cz
I am a researcher and PhD student at the Department of Computer Graphics and Interaction, Czech Technical University in Prague. Since 2006 I have also been an intern researcher at IBM Research in Prague. I work in the field of embodied conversational agents (ECAs); in particular, I am interested in the use of talking heads in multimodal applications.

Book chapters

2011
Ladislav Kunc, Pavel Slavík (2011)  Corrected Human Vision System and the McGurk Effect   In: HCI International 2011 Posters' Extended Abstracts Edited by:Constantine Stephanidis. 345-349 Springer Berlin Heidelberg  
Abstract: The McGurk effect test is a method for evaluating the articulation of talking heads. Our work addresses the influence of corrected vision on the perception of the McGurk effect, and we conducted an experiment to investigate this influence. The measured data show that participants with corrected vision perceive the effect differently in some cases. The results could help talking head evaluators compare talking head implementations with each other while eliminating the influence of corrected vision.

Conference papers

2009
J Cuřín, J Kleindienst, L Kunc, M Labský (2009)  Voice-driven Jukebox with ECA interface   In: Proceedings of 13th International Conference "Speech and Computer" 146-151  
Abstract: Embodied Conversational Agents (ECAs) have been applied in different types of online environments such as webpages, games, virtual reality, and presentations, and have been evaluated in different contexts such as learning, commerce, and entertainment. In this paper, we discuss a specific application domain of ECAs: a voice-driven dialog application that uses the ECA as the primary user interaction interface. Our motivation is deliberately focused on a narrow application domain, in our case a jukebox application, to debate specific practical issues and thus investigate new potentials for the personification and customization of ECAs. The novelty of our approach lies in the integration of a graphical interface into the dialog-driven application logic. The graphical interface is represented by the ECA and by visual information (including multimedia) displayed in the background. At the same time, we investigate the feasibility of this approach by understanding the criteria of user acceptance from the viewpoint of multimodal conversational systems where voice and the ECA represent the primary input and output channels, respectively. The prototype system has been subjected to initial usability testing in order to obtain qualified and quantified evidence and to understand potential shortcomings of the investigated interface.
L Kunc, P Slavík (2009)  Study on Sensitivity to ECA Behavior Parameters   In: Intelligent Virtual Agents IVA'09 521-522  
Abstract: The behavior of an ECA is a hard research question. Part of an agent's visual behavior is its appearance. Besides the "static" appearance, there is another way to change appearance: considering "dynamic" appearance. This paper discusses a selection of appearance parameters and provides statistical results of a pilot user study on sensitivity to these parameters, conducted to find out the importance of the individual parameters. Personalized settings of these parameters could lead to more enjoyable embodied agents and to better communication with an agent.
2008
L Kunc, J Kleindienst, P Slavík (2008)  Talking Head As Life Blog   In: Proceedings of the 10th international conference on Text, Speech and Dialogue 365-372  
Abstract: The paper describes an experimental presentation system that can automatically generate dynamic ECA-based presentations from structured data including text content, images, music and sounds, videos, etc. The Embodied Conversational Agent thus acts as a moderator in the chosen presentation context, typically personal diaries. Since an ECA represents a rich channel for conveying both verbal and non-verbal messages, we are researching ECAs as facilitators that transpose "dry" data such as diaries and blogs into more lively and dynamic presentations based on ontologies. We built our framework on the existing toolkit ECAF, which supports runtime generation of ECA agents. We describe the extensions of the toolkit and give an overview of the current system architecture. We describe the particular Grandma TV scenario, where a family uses the ECA automatic presentation engine to deliver weekly family news to distant grandparents. Recently conducted usability studies suggest the pros and cons of the presented approach.
L Kunc, P Slavík (2008)  Talking Head-Visualizations & Level of Detail   In: International Conference Visualisation IV'08 129-134  
Abstract: Embodied conversational agents are a fast-growing area of research interest. These agents allow interacting with computers in a natural way. The usage of agents is closely associated with the problem of facial animation, and realistic real-time facial animation is computationally very expensive. This work is dedicated to visualizing the deformation of pseudo-muscles and the head model mesh during speech, using the extended Waters pseudo-muscle head model for animation. These visualizations helped us find a new approach to dynamic level of detail: by reducing the activity of particular groups of muscles, we can achieve better animation speeds. Experiments based on three scenarios showed that these speed-ups are noticeable.
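The muscle-deactivation idea in the abstract above can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: a Waters-style linear pseudo-muscle pulls nearby mesh vertices toward its head (the bone attachment point), with influence falling off with distance from its tail (the skin insertion point). A level-of-detail threshold simply skips muscles whose current activation is negligible, saving the per-vertex deformation work. The function name, parameters, and the linear falloff are assumptions made for illustration.

```python
import numpy as np

def deform(vertices, muscles, activations, lod_threshold=0.0):
    """Apply active pseudo-muscles to a copy of the vertex array.

    vertices      : (N, 3) array of mesh vertex positions
    muscles       : list of (head, tail, radius) tuples, where head is the
                    bone attachment, tail the skin insertion point, and
                    radius the zone of influence around the tail
    activations   : per-muscle contraction values in [0, 1]
    lod_threshold : muscles at or below this activation are skipped
                    (the dynamic level-of-detail idea, simplified)
    """
    out = np.asarray(vertices, dtype=float).copy()
    for (head, tail, radius), a in zip(muscles, activations):
        if a <= lod_threshold:
            # LOD: near-inactive muscle groups contribute no visible
            # deformation, so skip their per-vertex work entirely.
            continue
        dist = np.linalg.norm(out - np.asarray(tail), axis=1)
        # Linear falloff: full pull at the insertion point, none at radius.
        falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
        pull = np.asarray(head) - out  # direction toward the muscle head
        out += a * falloff[:, None] * pull * 0.5
    return out
```

With one muscle whose influence radius covers only the first vertex, an activation of 1.0 moves that vertex toward the muscle head while leaving distant vertices untouched; dropping the activation below the LOD threshold leaves the whole mesh undeformed at reduced cost.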
2007
L Kunc, J Kleindienst (2007)  ECAF : Authoring Language for Embodied Conversational Agents   In: Proceedings of the 10th international conference on Text, Speech and Dialogue 206-213  
Abstract: The Embodied Conversational Agent (ECA) is a user interface metaphor that allows information to be communicated naturally during human-computer interaction in synergic modality dimensions, including voice, gesture, emotion, text, etc. Due to their anthropomorphic representation and ability to express human-like behavior, ECAs are becoming popular interface front-ends for dialog and conversational applications. One important prerequisite for the efficient authoring of such ECA-based applications is the existence of a suitable programming language that exploits the expressive possibilities of multimodally blended messages conveyed to the user. In this paper, we present the architecture and interaction language ECAF, which we used for authoring several ECA-based applications. We also provide feedback from the usability testing we carried out on user acceptance of several multimodal blending strategies.
Powered by PublicationsList.org.