Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

Paul Bremner, Ute B Leonards

Research output: Contribution to journal › Article (Academic Journal) › peer-review

35 Citations (Scopus)
419 Downloads (Pure)

Abstract

Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have previously been shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Outcomes for robot-performed gestures were compared directly with those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot, when presented alone, are understood by participants almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech equally well for human and robot performances when presented as part of a multi-modal communication.
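The article itself does not reproduce implementation code; as a rough, hypothetical illustration of how tracked operator arm poses and speech might be relayed to a NAO through the standard NAOqi Python SDK, a minimal sketch is given below. The ALMotion/ALTextToSpeech calls and joint names are real NAOqi APIs, but the connection details, example angles, and relay helpers are assumptions for illustration only, not the authors' system.

```python
# Minimal, hypothetical sketch of relaying tracked operator motion and speech
# to a NAO robot via the NAOqi Python SDK. The tracking source, addresses, and
# helper functions are placeholders; only the ALMotion/ALTextToSpeech calls
# and joint names are standard NAOqi APIs.
from naoqi import ALProxy

NAO_IP = "nao.local"   # placeholder address of the robot
NAO_PORT = 9559        # default NAOqi port

motion = ALProxy("ALMotion", NAO_IP, NAO_PORT)
tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)

# Right-arm joints available on the NAO platform.
ARM_JOINTS = ["RShoulderPitch", "RShoulderRoll", "RElbowYaw", "RElbowRoll"]

def relay_arm_pose(angles_rad):
    """Send one frame of tracked joint angles (radians) to the robot.
    A streaming system would call this at the motion-capture frame rate."""
    motion.setAngles(ARM_JOINTS, angles_rad, 0.3)  # 0.3 = fraction of max speed

def relay_speech(text):
    """Speak the operator's utterance on the robot."""
    tts.say(text)

if __name__ == "__main__":
    motion.setStiffnesses("RArm", 1.0)          # enable the arm motors
    relay_arm_pose([0.5, -0.2, 1.0, 0.8])       # example pose, in radians
    relay_speech("The ball was about this big.")
```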
Original language: English
Article number: 183
Number of pages: 14
Journal: Frontiers in Psychology
Volume: 7
DOIs
Publication status: Published - 17 Feb 2016

Research Groups and Themes

  • Visual Perception
  • Cognitive Science

Keywords

  • human-robot interaction
  • gestures
  • humanoid robotics
  • tele-operated robot
  • multi-modal communication
