COMBINING ABSOLUTE POSITIONING AND VISION FOR WIDE AREA AUGMENTED REALITY

Thomas Banwell, Andrew Calway

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

1 Citation (Scopus)

Abstract

One of the major limitations of vision-based mapping and localisation is its inability to scale and operate over wide areas. This restricts its use in applications such as Augmented Reality. In this paper we demonstrate that integrating a second, absolute positioning sensor addresses this problem, allowing independent local maps to be combined within a global coordinate frame. This is achieved by aligning trajectories from the two sensors, which enables estimation of the relative position, orientation and scale of each local map. The second sensor also provides the additional benefit of reducing the search space required for efficient relocalisation. Results illustrate the method working in an indoor environment using an ultrasound position sensor, building and combining a large number of local maps and successfully relocalising as users move arbitrarily within the map. To show the generality of the proposed method, we also demonstrate the system building and aligning local maps in an outdoor environment using GPS as the position sensor.
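The trajectory-alignment step described above — estimating the relative position, orientation and scale of each local map from corresponding points on the two sensors' trajectories — amounts to fitting a similarity transform between point sets. A minimal sketch of how this could be done, using the standard Umeyama least-squares method (the function name and NumPy implementation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def align_similarity(src, dst):
    """Fit scale s, rotation R, translation t so that dst ≈ s * R @ src + t
    (Umeyama, 1991). src/dst are (n, d) arrays of corresponding points,
    e.g. camera trajectory positions vs. absolute sensor positions."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n, d = src.shape
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d          # centred point sets
    cov = dc.T @ sc / n                      # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    sign = np.eye(d)                         # guard against reflections
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        sign[-1, -1] = -1.0
    R = U @ sign @ Vt
    var_src = (sc ** 2).sum() / n            # variance of source points
    s = np.trace(np.diag(S) @ sign) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With the transform recovered, every landmark in a local map can be mapped into the global frame as `s * R @ p + t`, which is what allows the independent local maps to be stitched into one coordinate system.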
Original language: English
Title of host publication: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Publication status: Published - 2010

Bibliographical note

Other identifier: 2001229
