We present a general method for real-time, vision-only single-camera simultaneous localisation and mapping (SLAM), an algorithm applicable to the localisation of any camera moving through a scene, and study its application to the localisation of a wearable robot with active vision. Starting from very sparse initial scene knowledge, a map of natural point features spanning a section of a room is generated on-the-fly as the motion of the camera is simultaneously estimated in full 3D. Naturally this permits the annotation of the scene with rigidly-registered graphics, but further it permits automatic control of the robot's active camera: for instance, fixation on a particular object can be maintained during extended periods of arbitrary user motion, then shifted at will to another object which has potentially been out of the field of view. This kind of functionality is the key to the understanding or "management" of a workspace which the robot needs to have in order to assist its wearer usefully in tasks. We believe that the techniques and technology developed are of prime importance towards the goal of a fully autonomous wearable assistant, and of particular immediate value in scenarios of remote collaboration, where a remote expert is able to annotate, through the robot, the environment the wearer is working in.
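The core idea of simultaneously estimating camera motion and a feature map can be illustrated with a toy filter. The sketch below is a minimal one-dimensional Kalman predict/update loop over a joint state of camera and landmark positions; it is an assumption-laden illustration of the general SLAM filtering cycle, not the paper's actual 3D, bearing-only formulation (all variable names, noise values, and the linear measurement model are illustrative choices, not taken from the paper).

```python
import numpy as np

# Toy 1-D SLAM filter: joint state x = [camera position, landmark position].
# Illustrates the predict/update cycle behind filter-based visual SLAM;
# the real system estimates full 3-D pose from image features.

def predict(x, P, u, q):
    # Camera moves by commanded displacement u with process-noise
    # variance q; the landmark is static, so it receives no noise.
    F = np.eye(2)
    x = x + np.array([u, 0.0])
    Q = np.diag([q, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, r):
    # Measurement: relative offset (landmark - camera), noise variance r.
    H = np.array([[-1.0, 1.0]])
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + r              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Camera starts at 0 with low uncertainty; landmark prior is vague.
x = np.array([0.0, 5.0])
P = np.diag([1e-4, 100.0])
true_cam, true_lm = 0.0, 4.0
rng = np.random.default_rng(0)
for _ in range(50):
    u = 0.1
    true_cam += u
    x, P = predict(x, P, u, q=1e-4)
    z = (true_lm - true_cam) + rng.normal(0.0, 0.01)
    x, P = update(x, P, np.array([z]), r=1e-4)

print(x)  # landmark estimate converges towards its true position, 4.0
```

Note how the joint covariance `P` couples camera and landmark estimates: each observation of the landmark also tightens the camera estimate, which is the essence of simultaneous localisation and mapping.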
Translated title of the contribution: Real-Time Localisation and Mapping with Wearable Active Vision
Title of host publication: Unknown
Publisher: IEEE Computer Society
Publication status: Published - Oct 2003