Abstract
We use detected objects as basic features in a semantic place recognition system, with the aim of enabling recognition even when views are disparate. This is achieved by constructing a 2D place model of object positions and then using training examples to compute the probability that a pair of views depicts the same place. We also generate an estimate of the relative pose of the two cameras. Results on a dataset of 40 urban locations show good recognition performance and pose estimation, even for highly disparate views.
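To make the idea concrete, the sketch below is a minimal toy illustration, not the paper's learned probabilistic model: it represents each view as a set of labelled 2D object positions and scores how well two object maps agree, assuming the views have already been brought into a rough common frame. The function `match_score`, its greedy matching rule, and the example coordinates are all assumptions introduced here for illustration.

```python
import numpy as np

# Illustrative sketch only: a toy "place model" of labelled 2D object
# positions and a naive same-place score between two views.
# The scoring rule below is an assumption, not the method from the paper,
# which learns the match probability from training examples.

def match_score(objs_a, objs_b, max_dist=2.0):
    """Greedy nearest-neighbour matching between two object maps.

    objs_a, objs_b: lists of (label, x, y) tuples in a common 2D frame.
    Returns a score in [0, 1]: the fraction of objects in the smaller map
    matched to a same-label object within max_dist.
    """
    unmatched = list(objs_b)
    matches = 0
    for label, x, y in objs_a:
        best_i, best_d = None, max_dist
        for i, (lb, xb, yb) in enumerate(unmatched):
            if lb != label:
                continue
            d = np.hypot(x - xb, y - yb)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is not None:
            unmatched.pop(best_i)
            matches += 1
    return matches / max(1, min(len(objs_a), len(objs_b)))

# Hypothetical example: two views of the same street corner, roughly aligned.
view_1 = [("car", 1.0, 4.0), ("tree", -2.0, 6.0), ("door", 0.5, 8.0)]
view_2 = [("car", 1.3, 4.2), ("tree", -1.6, 5.5), ("bin", 3.0, 2.0)]
print(match_score(view_1, view_2))  # ~0.67: two of three objects agree
```

In the paper the correspondence is handled jointly with an estimate of the relative camera pose, whereas this sketch simply assumes a shared frame; it is meant only to show how object identity and 2D layout together constrain whether two views depict the same place.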
Original language | English
---|---
Title of host publication | Proceedings of the British Machine Vision Conference (BMVC)
Publication status | Published - Sept 2013