Recent advances in wearable sensing allow active control of the orientation of a body-mounted camera worn by a remote user. In this paper we consider controlling the active camera from head movements. In the context of teleoperation, these may be the head movements of a remote operator, perhaps acting as the wearer's assistant. The movements are likely to be larger than those in video-conferencing applications, so frontal facial features alone are insufficient. The paper presents a model which incrementally combines a fixed 3D shape model with specific features found on the observed head. Robust methods, including the incorporation of a colour model, are used to mitigate the effect of mismatching; the main contribution is the use of both interest-point and colour features within a single random-sampling framework.
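The paper itself does not give implementation details here, but the random-sampling framework it describes is in the spirit of RANSAC-style robust fitting: repeatedly fit a pose hypothesis from a minimal set of feature correspondences and keep the hypothesis with the most inliers. The following is a hypothetical, simplified 2D sketch (rigid rotation plus translation from point correspondences, synthetic data, standard library only), not the paper's actual head-pose estimator:

```python
import math
import random

def fit_rigid(p1, p2, q1, q2):
    """Rotation angle and translation mapping (p1, p2) onto (q1, q2)."""
    a = (math.atan2(q2[1] - q1[1], q2[0] - q1[0])
         - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(a), math.sin(a)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return a, tx, ty

def transform(a, tx, ty, p):
    """Apply the rigid transform (a, tx, ty) to point p."""
    c, s = math.cos(a), math.sin(a)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_pose(src, dst, iters=200, thresh=0.5, rng=None):
    """Random-sampling consensus: best rigid transform and its inlier count."""
    rng = rng or random.Random(0)
    best, best_inliers = None, -1
    n = len(src)
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)          # minimal sample: 2 pairs
        a, tx, ty = fit_rigid(src[i], src[j], dst[i], dst[j])
        inliers = sum(
            1 for k in range(n)
            if math.dist(transform(a, tx, ty, src[k]), dst[k]) < thresh
        )
        if inliers > best_inliers:
            best, best_inliers = (a, tx, ty), inliers
    return best, best_inliers

# Synthetic demo: 20 true correspondences plus 8 gross outliers.
rng = random.Random(42)
true_a, true_tx, true_ty = 0.3, 2.0, -1.0
src = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(20)]
dst = [transform(true_a, true_tx, true_ty, p) for p in src]
for _ in range(8):                               # mismatched features
    src.append((rng.uniform(-5, 5), rng.uniform(-5, 5)))
    dst.append((rng.uniform(-5, 5), rng.uniform(-5, 5)))

(a, tx, ty), inl = ransac_pose(src, dst, rng=rng)
```

In the paper's setting the "points" would be interest-point and colour features on the head and the model would be a 3D pose rather than a 2D rigid transform, but the consensus logic, rejecting mismatches that no single hypothesis can explain, is the same idea.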
Translated title of the contribution: Head pose estimation for wearable robot control
Title of host publication: Proc. British Machine Vision Conference
Publication status: Published - 2002
Other identifier: 2000864