Abstract
Rehabilitation robotics and physical therapy could
greatly benefit from engaging, motivating robotic caregivers
that respond to patients’ emotional and social
cues. Recent studies indicate that human-machine interactions
are more believable and memorable when a physical entity
is present, provided that the machine behaves realistically.
Face-to-face communication is desirable
because it is the most natural and efficient way of exchanging
information and does not require users to alter their habits.
Towards this end, we describe a process for animating a
robot head based on video input of a human head. We map
the 2D coordinates of tracked feature points into the robot’s servo
space using Partial Least Squares (PLS). The mapping is learned
from a small set of keyframes manually created by an animator.
The method is efficient, robust to tracking errors and
independent of the scale of the face being tracked.
| Translated title of the contribution | Towards Realistic Facial Behaviour - Mapping from Video Footage to a Robot Head |
| --- | --- |
| Original language | English |
| Title of host publication | 10th edition of the International Conference on Rehabilitation Robotics (ICORR) |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Publication status | Published - 2007 |
Bibliographical note
Other identifier: 2000700