Object-Augmented RGB-D SLAM for Wide-Disparity Relocalisation

Yuhang Ming*, Xingrui Yang, Andrew Calway

*Corresponding author for this work

Research output: Contribution to conference › Conference Paper › peer-review

Abstract

We propose a novel object-augmented RGB-D SLAM system that is capable of constructing a consistent object map and performing relocalisation based on the centroids of the objects in that map. The approach aims to overcome the view dependence of appearance-based relocalisation methods that rely on point features or whole images. During map construction, we use a pre-trained neural network to detect objects and estimate their 6D poses from RGB-D data, and an incremental probabilistic model aggregates these estimates over time to build the object map. Then, during relocalisation, we use the same network to extract objects-of-interest in the `lost' frames. Pairwise geometric matching finds correspondences between map and frame objects, and probabilistic absolute orientation, followed by iterative closest point applied to the dense depth maps and object centroids, yields the relocalised pose. Results of experiments in desktop environments demonstrate very high success rates, even for frames with viewpoints that differ widely from those used to construct the map, significantly outperforming two appearance-based methods.
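The two geometric steps named in the abstract, pairwise geometric matching of object centroids and closed-form absolute orientation, can be illustrated compactly. Below is a minimal Python/NumPy sketch, not the authors' implementation: it brute-forces a distance-consistent assignment between frame and map centroids, then recovers the rigid transform with the standard Horn/Umeyama (Kabsch) solution. The paper's probabilistic weighting, class-label pruning, and dense-depth ICP refinement are omitted, and all function names are hypothetical.

    # Sketch of centroid-based relocalisation: pairwise-distance matching
    # followed by closed-form absolute orientation. Illustrative only.
    import itertools
    import numpy as np

    def pairwise_geometric_match(map_c, frame_c, tol=0.05):
        """Brute-force search for a frame->map centroid assignment whose
        pairwise inter-centroid distances agree within tol metres.
        Feasible only for small object counts; the paper prunes candidates
        (e.g. by object class), which this sketch omits."""
        n_f = len(frame_c)
        for map_idx in itertools.permutations(range(len(map_c)), n_f):
            consistent = all(
                abs(np.linalg.norm(map_c[map_idx[a]] - map_c[map_idx[b]])
                    - np.linalg.norm(frame_c[a] - frame_c[b])) < tol
                for a, b in itertools.combinations(range(n_f), 2))
            if consistent:
                return [(map_idx[j], j) for j in range(n_f)]
        return []

    def absolute_orientation(P, Q):
        """Closed-form rigid alignment (Horn/Umeyama, no scale): find R, t
        with Q ~ R @ P + t, given Nx3 corresponding point sets P (frame
        centroids) and Q (map centroids)."""
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        H = (P - mu_p).T @ (Q - mu_q)                  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ S @ U.T
        t = mu_q - R @ mu_p
        return R, t

    if __name__ == "__main__":
        # Toy usage: a 'lost' frame observes 3 of 4 mapped object centroids.
        rng = np.random.default_rng(0)
        map_c = rng.uniform(0.0, 2.0, size=(4, 3))
        theta = np.pi / 3
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0,            0.0,           1.0]])
        t_true = np.array([0.5, -1.0, 0.2])
        sel = [2, 0, 3]
        frame_c = (map_c[sel] - t_true) @ R_true       # P = R^T (Q - t), row form

        matches = pairwise_geometric_match(map_c, frame_c)
        P = frame_c[[j for _, j in matches]]
        Q = map_c[[i for i, _ in matches]]
        R_est, t_est = absolute_orientation(P, Q)
        print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))

In this toy setup the recovered transform matches the ground truth because three non-collinear centroids determine a rigid pose (the reflection fix in the SVD step excludes the mirrored solution); in practice the paper refines this estimate with ICP on the dense depth maps.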
Original language: English
Publication status: Accepted/In press - 2021
Event: IEEE/RSJ International Conference on Intelligent Robots and Systems - Prague, Czech Republic
Duration: 27 Sept 2021 - 1 Oct 2021

Conference

Conference: IEEE/RSJ International Conference on Intelligent Robots and Systems
Country/Territory: Czech Republic
City: Prague
Period: 27/09/21 - 01/10/21
