Visual Mapping and Multi-modal Localisation for Anywhere AR Authoring

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

This paper presents an Augmented Reality system that combines a range of localisation technologies, including GPS, UWB, user input and Visual SLAM, to enable both retrieval and creation of annotations in most places. The system supports multiple users and enables sharing and visualisation of annotations via a control centre. The process is divided into two main steps: i) global localisation and ii) 6D local mapping. For visual relocalisation, we develop and evaluate a method to rank local maps that improves performance over prior work. We demonstrate the system working over a wide area and across a range of environments.
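The two-step structure described above can be sketched in code. The paper's actual map-ranking criterion is not given in this record, so the snippet below is a hypothetical illustration only: it ranks stored local SLAM maps by proximity to a coarse global fix (e.g. from GPS or UWB), so that a visual relocaliser would try the most likely maps first. The function name, the 2-D map-origin representation, and the distance-based ranking are all assumptions for illustration.

```python
import math

def rank_local_maps(global_fix, local_maps):
    """Rank local map ids by distance from a coarse global fix.

    global_fix -- (x, y) position in a shared world frame (metres),
                  e.g. obtained from GPS or UWB (step i: global localisation)
    local_maps -- dict mapping map id -> (x, y) map-origin position

    Returns map ids sorted nearest-first, so the visual relocaliser
    (step ii: 6D local mapping) can try the most plausible maps first.
    """
    def dist(origin):
        return math.hypot(origin[0] - global_fix[0], origin[1] - global_fix[1])
    return sorted(local_maps, key=lambda m: dist(local_maps[m]))

# Example: a coarse fix near map "b" ranks it first.
maps = {"a": (0.0, 0.0), "b": (50.0, 10.0), "c": (200.0, -30.0)}
print(rank_local_maps((48.0, 12.0), maps))  # ['b', 'a', 'c']
```

In practice the ranking in the paper is evaluated against prior art and may use richer cues than geometric distance; this sketch only conveys how a coarse global estimate can prioritise candidate local maps.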
Translated title of the contribution: Visual Mapping and Multi-modal Localisation for Anywhere AR Authoring
Original language: English
Title of host publication: Proceedings of the ACCV Workshop on Application of Computer Vision for Mixed and Augmented Reality
Publisher: Springer Berlin Heidelberg
Publication status: Published - 2010

Publication series

Name: Lecture Notes in Computer Science

Bibliographical note

Conference Proceedings/Title of Journal: Proceedings of the ACCV Workshop on Application of Computer Vision for Mixed and Augmented Reality
Other identifier: 2001274

