Abstract
The ability to localise is key for robot navigation. We describe an efficient method for vision-based localisation, which combines sequential Monte Carlo tracking with matching ground-level images to 2-D cartographic maps such as OpenStreetMap. The matching is based on a learned embedding-space representation linking images and map tiles, encoding the common semantic information present in both and offering the potential for invariance to changing conditions. Moreover, the compactness of 2-D maps supports scalability. This contrasts with the majority of previous approaches, which are based on matching against single-shot geo-referenced images or 3-D reconstructions. We present experiments using the StreetLearn and Oxford RobotCar datasets and demonstrate that the method is highly effective, giving high accuracy and fast convergence.
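To illustrate how sequential Monte Carlo tracking can be combined with embedding-based map matching, the following is a minimal Python/NumPy sketch, not the authors' implementation: `embed_image` and `embed_map_tile` are hypothetical placeholders for the learned image and map-tile encoders, and the exponential similarity-to-weight mapping with a `temperature` parameter is an assumed observation model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16   # embedding dimension (illustrative)
N = 500  # number of particles

def embed_image(image):
    """Placeholder for the learned ground-level image encoder.
    A real system would run a trained network and return a unit-norm vector."""
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def embed_map_tile(x, y):
    """Placeholder for the learned 2-D map-tile encoder, e.g. applied to an
    OpenStreetMap tile rendered around position (x, y)."""
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

# Particles: hypothesised 2-D positions, initialised uniformly over the map.
particles = rng.uniform(0.0, 100.0, size=(N, 2))
weights = np.full(N, 1.0 / N)

def motion_update(particles, odom, sigma=0.5):
    """Predict step: propagate every particle with noisy odometry (dx, dy)."""
    noise = rng.normal(0.0, sigma, size=particles.shape)
    return particles + np.asarray(odom) + noise

def measurement_update(particles, weights, image, temperature=0.1):
    """Correct step: reweight particles by the similarity between the image
    embedding and the map-tile embedding at each particle's position."""
    z = embed_image(image)
    sims = np.array([embed_map_tile(x, y) @ z for x, y in particles])
    weights = weights * np.exp(sims / temperature)
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling to counter particle degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One filter iteration: predict with odometry, correct with the camera image.
particles = motion_update(particles, odom=(1.0, 0.0))
weights = measurement_update(particles, weights, image=None)
particles, weights = resample(particles, weights)
estimate = weights @ particles  # posterior mean position estimate
```

The exponential weighting here simply turns embedding similarities into a likelihood surrogate; any monotone mapping would serve the same role, and the paper's actual observation model may differ.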
| Original language | English |
|---|---|
| Publication status | Accepted/In press - 2021 |
| Event | IEEE/RSJ International Conference on Intelligent Robots and Systems - Prague, Czech Republic. Duration: 27 Sept 2021 → 1 Oct 2021 |
Conference

| Conference | IEEE/RSJ International Conference on Intelligent Robots and Systems |
|---|---|
| Country/Territory | Czech Republic |
| City | Prague |
| Period | 27/09/21 → 1/10/21 |