Abstract
Co-verbal gestures are an important part of human communication, increasing the efficiency with which information is conveyed. A key component of this efficiency gain is the observer's ability to integrate information from the two communication channels, speech and gesture. Whether such integration also occurs when the multi-modal information is produced by a humanoid robot, and whether it is as efficient as for a human communicator, is an open question. Here, we present an experiment which, using a fully within-subjects design, shows that for a range of iconic gestures, speech and gesture integration occurs with similar efficiency for human and robot communicators. The gestures for this study were produced on an Aldebaran Robotics NAO robot platform with a Kinect-based tele-operation system. We also show that our system can produce a range of iconic gestures that are understood by participants in unimodal (gesture-only) communication, as well as being efficiently integrated with speech. Hence, we demonstrate the utility of iconic gestures for robotic communicators.
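
The abstract does not detail the tele-operation pipeline, but a minimal sketch of the general approach (not the authors' implementation) is shown below: operator arm poses estimated from a Kinect skeleton are retargeted to NAO joint angles and streamed to the robot through the NAOqi Python SDK. The `ALMotion` calls are real NAOqi API; the robot address and the Kinect-reading helper `read_kinect_arm_angles` are assumptions for illustration.

```python
# Sketch of Kinect-to-NAO arm tele-operation via the NAOqi Python SDK.
import time

from naoqi import ALProxy

NAO_IP = "192.168.1.10"  # assumed robot address
NAO_PORT = 9559          # default NAOqi port

# NAO's left-arm joint names as defined by the NAOqi motion API.
LEFT_ARM_JOINTS = ["LShoulderPitch", "LShoulderRoll",
                   "LElbowYaw", "LElbowRoll"]


def read_kinect_arm_angles():
    """Hypothetical placeholder: return the operator's left-arm joint
    angles (radians), retargeted from the Kinect skeleton stream."""
    raise NotImplementedError("Kinect skeleton retargeting goes here")


def teleoperate(duration_s=60.0, rate_hz=20.0):
    """Stream operator arm poses to the robot at a fixed rate."""
    motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
    motion.setStiffnesses("LArm", 1.0)  # enable the arm motors
    period = 1.0 / rate_hz
    end = time.time() + duration_s
    while time.time() < end:
        angles = read_kinect_arm_angles()
        # Non-blocking command; the last argument caps joint speed
        # as a fraction of the maximum, smoothing the motion.
        motion.setAngles(LEFT_ARM_JOINTS, angles, 0.3)
        time.sleep(period)


if __name__ == "__main__":
    teleoperate()
```

Streaming angle targets at a modest rate with a capped joint speed, rather than replaying pre-recorded trajectories, is one plausible way to reproduce an operator's iconic gestures live on the robot.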
Original language | English |
---|---|
Title of host publication | Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA) |
Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
Pages | 1999-2006 |
Number of pages | 8 |
Volume | 2015-June |
DOIs | |
Publication status | Published - 29 Jun 2015 |
Research Groups and Themes
- Cognitive Science
- Visual Perception