Visual place recognition using landmark distribution descriptors

Pilailuck Panphattarasap, Andrew Calway

Research output: Working paper and Preprints



Recent work by Suenderhauf et al. [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNNs) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views, which has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing positions and conditions, we recorded average precision of around 70% (at 100% recall), compared with 58% obtained using whole-image CNN features and 50% for the method in [1].
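The core idea of the abstract — a view descriptor that combines per-landmark CNN features with the landmarks' spatial arrangement, so that matching implicitly checks relative landmark positions — can be illustrated with a minimal sketch. This is not the paper's implementation: the inputs (`features`, `centres`), the left-to-right ordering, and the cosine-similarity match are all illustrative assumptions; in the paper the features come from proposal regions passed through a CNN.

```python
import numpy as np

def view_descriptor(features, centres, image_size):
    """Build one descriptor for a view from its landmarks.

    features   : (N, D) array of per-landmark CNN features (assumed given;
                 in practice extracted from proposal regions by a CNN).
    centres    : (N, 2) array of landmark centre coordinates in pixels.
    image_size : (width, height) of the image.

    Landmarks are ordered left-to-right before concatenation, so the
    descriptor also encodes their relative spatial arrangement -- a
    simplification of the paper's idea, assumed here for illustration.
    """
    order = np.argsort(centres[:, 0])               # sort landmarks by x position
    feats = features[order]
    pos = centres[order] / np.asarray(image_size, dtype=float)  # normalise to [0, 1]
    # L2-normalise each landmark feature so no single landmark dominates
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return np.concatenate([feats.ravel(), pos.ravel()])

def match_score(desc_a, desc_b):
    """Cosine similarity between two equal-length view descriptors."""
    return float(desc_a @ desc_b /
                 (np.linalg.norm(desc_a) * np.linalg.norm(desc_b)))
```

Because landmark positions are part of the descriptor, two views containing similar landmarks in a different spatial arrangement score lower than views where both appearance and layout agree, which is the consistency constraint the abstract describes.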
Original language: English
Number of pages: 14
Publication status: Published - 15 Aug 2016

Bibliographical note

13 pages


Keywords

  • place recognition

