Efficient Localisation Using Images and OpenStreetMaps

Mengjie Zhou*, Xieyuanli Chen, Obed N Samano Abonce, Cyrill Stachniss, Andrew Calway

*Corresponding author for this work

Research output: Contribution to conference › Conference Paper › peer-review

Abstract

The ability to localise is key for robot navigation. We describe an efficient method for vision-based localisation, which combines sequential Monte Carlo tracking with matching ground-level images to 2-D cartographic maps such as OpenStreetMaps. The matching is based on a learned embedding-space representation linking images and map tiles, encoding the semantic information common to both and offering potential invariance to changing conditions. Moreover, the compactness of 2-D maps supports scalability. This contrasts with the majority of previous approaches, which match against single-shot geo-referenced images or 3-D reconstructions. We present experiments using the StreetLearn and Oxford RobotCar datasets and demonstrate that the method is highly effective, giving high accuracy and fast convergence.
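For illustration only, the sketch below shows how a pipeline of this general shape can be wired together: a particle filter whose measurement update weights each particle by the similarity between the current image embedding and the embedding of the map tile at the particle's hypothesised position. This is not the authors' implementation; the encoders are faked with fixed random vectors so the example runs end to end, and the grid size, embedding dimension, noise levels, and temperature are all illustrative placeholders. Real use would substitute the trained image and map-tile networks and actual OpenStreetMap tiles.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins for the learned encoders ----------------------
# In the paper these are trained networks mapping a ground-level image and
# a 2-D map tile into a shared embedding space; here we fake them with one
# fixed random unit vector per map cell so the filter is runnable.
GRID = 20                       # toy 20x20 grid of map tiles (assumed)
DIM = 16                        # embedding dimension (assumed)
tile_emb = rng.standard_normal((GRID, GRID, DIM))
tile_emb /= np.linalg.norm(tile_emb, axis=-1, keepdims=True)

def embed_map_tile(x, y):
    """Embedding of the map tile containing position (x, y)."""
    i = int(np.clip(x, 0, GRID - 1))
    j = int(np.clip(y, 0, GRID - 1))
    return tile_emb[i, j]

def embed_image_at(true_pos, noise=0.3):
    """Fake camera embedding: the true tile's embedding plus noise."""
    e = embed_map_tile(*true_pos) + noise * rng.standard_normal(DIM)
    return e / np.linalg.norm(e)

# --- Sequential Monte Carlo localisation ----------------------------------
N = 500
particles = rng.uniform(0, GRID, size=(N, 2))   # globally uniform init
weights = np.full(N, 1.0 / N)

def predict(particles, odom, sigma=0.2):
    """Propagate particles by the odometry increment plus Gaussian noise."""
    return particles + odom + sigma * rng.standard_normal(particles.shape)

def update(particles, weights, img_emb, temp=5.0):
    """Reweight by image/tile embedding similarity (softmax-style)."""
    sims = np.array([embed_map_tile(*p) @ img_emb for p in particles])
    w = weights * np.exp(temp * sims)
    return w / w.sum()

def resample(particles, weights):
    """Multinomial resampling when the effective sample size drops below N/2."""
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# --- Toy run: the vehicle drives diagonally across the map ----------------
true_pos = np.array([2.0, 2.0])
for step in range(30):
    odom = np.array([0.5, 0.5])                 # per-step odometry reading
    true_pos = true_pos + odom
    particles = predict(particles, odom)
    weights = update(particles, weights, embed_image_at(true_pos))
    particles, weights = resample(particles, weights)
    est = weights @ particles                   # weighted-mean pose estimate
    print(f"step {step:2d}  true {true_pos.round(2)}  est {est.round(2)}")
```

The effective-sample-size trigger for resampling is a common particle-filter convention rather than a detail taken from the paper; the globally uniform initialisation mirrors the global-localisation setting in which fast convergence is reported.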
Original language: English
Publication status: Accepted/In press - 2021
Event: IEEE/RSJ International Conference on Intelligent Robots and Systems - Prague, Czech Republic
Duration: 27 Sept 2021 – 1 Oct 2021

Conference

Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems
Country/Territory: Czech Republic
City: Prague
Period: 27/09/21 – 1/10/21
