Rehabilitation robotics and physical therapy could greatly benefit from engaging, motivating robotic caregivers that respond to patients’ emotional and social cues. Recent studies indicate that human-machine interactions are more believable and memorable when a physical entity is present, provided that the machine behaves in a realistic manner. Face-to-face communication is desirable because it is the most natural and efficient way of exchanging information and does not require users to alter their habits. Towards this end, we describe a process for animating a robot head based on video input of a human head. We map from the 2D coordinates of tracked feature points into the robot’s servo space using Partial Least Squares (PLS). Learning is done using a small set of keyframes manually created by an animator. The method is efficient, robust to tracking errors, and independent of the scale of the face being tracked.
Title of host publication: 10th edition of the International Conference on Rehabilitation Robotics (ICORR)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication status: Published - 2007
Other identifier: 2000700
Peter, J., Campbell, N., & Christopher, M. (2007). Towards Realistic Facial Behaviour - Mapping from Video Footage to a Robot Head. In 10th edition of the International Conference on Rehabilitation Robotics (ICORR). Institute of Electrical and Electronics Engineers (IEEE). http://www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2000700