(Paper #70)
In this work, we describe our preliminary efforts on humanlike learning from demonstration (LfD) for creating biologically valid jaw movements, toward building a phonetically and visually synchronized, trainable talking robot mouth. Numerous studies show that perceived naturalness and similarity to humans are important for the acceptance of social robots; for example, it is discomforting when a robot lacks humanlike eye and jaw movements or fails to attend to the task at hand. We also expect that giving robots realistic mouth movements that match the auditory signal not only strengthens the perception that the robot is talking, but can also increase the intelligibility of the robot's speech, especially under acoustically noisy conditions.
Humanlike LfD-based jaw movement creation uses auditory and visual sensory information to acquire perceptual and visuomotor skills from a human demonstrator. For each sound unit, the acquired visuomotor trajectories are stored in a database called the experience library. An initial demonstration of the proposed system indicates that the correlation between the audio and the generated jaw movements is preserved on both the virtual and the hardware platforms.
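The abstract does not give implementation details; the Python sketch below shows one plausible way such an experience library and the audio-jaw correlation check could be realized. All names here (ExperienceLibrary, store, synthesize, audio_jaw_correlation) and the toy data are illustrative assumptions, not the authors' code.

# Hypothetical sketch: an "experience library" mapping each sound unit
# (e.g., a phoneme label) to a demonstrated jaw trajectory, plus retrieval,
# concatenation, and a Pearson correlation check between the audio
# amplitude envelope and the generated jaw signal.
import numpy as np

class ExperienceLibrary:
    """Stores one demonstrated jaw-opening trajectory per sound unit."""

    def __init__(self):
        self._trajectories = {}  # phoneme label -> 1-D array of jaw angles

    def store(self, phoneme, trajectory):
        # Record the visuomotor trajectory acquired for this sound unit.
        self._trajectories[phoneme] = np.asarray(trajectory, dtype=float)

    def synthesize(self, phoneme_sequence):
        # Concatenate the stored trajectories for a phoneme sequence.
        return np.concatenate([self._trajectories[p] for p in phoneme_sequence])

def audio_jaw_correlation(audio_envelope, jaw_trajectory):
    """Pearson correlation between the audio envelope and the jaw angle,
    after resampling both signals to a common length."""
    n = min(len(audio_envelope), len(jaw_trajectory))
    grid = np.linspace(0.0, 1.0, n)
    a = np.interp(grid, np.linspace(0.0, 1.0, len(audio_envelope)), audio_envelope)
    j = np.interp(grid, np.linspace(0.0, 1.0, len(jaw_trajectory)), jaw_trajectory)
    return float(np.corrcoef(a, j)[0, 1])

# Toy usage with made-up demonstration data.
lib = ExperienceLibrary()
lib.store("a", np.sin(np.linspace(0.0, np.pi, 20)))  # open-close arc for /a/
lib.store("m", 0.1 * np.ones(10))                    # near-closed jaw for /m/
jaw = lib.synthesize(["m", "a", "m", "a"])
envelope = jaw + 0.05 * np.random.default_rng(0).standard_normal(len(jaw))
print(f"audio-jaw correlation: {audio_jaw_correlation(envelope, jaw):.2f}")

Plain concatenation is a deliberate simplification here; a real system would presumably need to smooth or blend trajectories across unit boundaries to handle coarticulation.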
Human-Humanoid Interaction
[320x240, 0min08s, 1.1MB, MPEG1]