Visual place recognition using landmark distribution descriptors

Pilailuck Panphattarasap, Andrew Calway

Research output: Working paper


Abstract

Recent work by Suenderhauf et al. [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNN) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then enforces consistency of the relative positions of landmarks between views. This has a significant impact on performance. For example, in experiments on 10 image-pair datasets, each consisting of 200 urban locations with significant differences in viewing positions and conditions, we recorded average precision of around 70% (at 100% recall), compared with 58% obtained using whole image CNN features and 50% for the method in [1].
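As an illustration of the idea, the minimal sketch below (not taken from the paper) builds a view descriptor by concatenating per-landmark CNN features in left-to-right order of their proposal boxes, so that the descriptor implicitly encodes the spatial distribution of the landmarks; the names landmark_distribution_descriptor, match_score and n_landmarks, and the simple horizontal ordering, are assumptions made for illustration rather than the exact encoding used by the authors.

    # Illustrative sketch only: inputs are assumed to be precomputed CNN
    # features and bounding boxes for the top-ranked proposal regions.
    import numpy as np

    def landmark_distribution_descriptor(features, boxes, n_landmarks=5):
        """features: (N, D) array of CNN features for N proposal regions.
        boxes: (N, 4) array of boxes given as (x, y, w, h).
        Returns a single vector with landmarks ordered by horizontal position,
        so the slot a landmark occupies reflects where it sits in the view."""
        feats = np.asarray(features, dtype=float)[:n_landmarks]
        bxs = np.asarray(boxes, dtype=float)[:n_landmarks]
        # Sort landmarks by the x-coordinate of their box centre; concatenating
        # features in this order ties each one to a relative spatial position.
        order = np.argsort(bxs[:, 0] + bxs[:, 2] / 2.0)
        parts = [feats[i] / (np.linalg.norm(feats[i]) + 1e-8) for i in order]
        desc = np.concatenate(parts)
        return desc / (np.linalg.norm(desc) + 1e-8)

    def match_score(desc_a, desc_b):
        """Cosine similarity between two view descriptors. Because each slot is
        tied to a spatial position, landmarks appearing in inconsistent relative
        positions between the two views lower the score."""
        return float(np.dot(desc_a, desc_b))

Two views would then be matched when match_score exceeds a threshold chosen for the desired precision/recall trade-off.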
Original language: English
Publisher: arXiv.org
Number of pages: 14
Volume: 1608.04274
Publication status: Published - 15 Aug 2016

Bibliographical note

13 pages

Keywords

  • place recognition

Related research output

  • Panphattarasap, P. & Calway, A. (2017). Visual place recognition using landmark distribution descriptors. In Computer Vision - ACCV 2016: 13th Asian Conference on Computer Vision, ACCV 2016, Revised Selected Papers. Springer-Verlag Berlin, Lecture Notes in Computer Science, vol. 10114, pp. 487-502.

    Research output: Chapter in Book/Report/Conference proceeding - Conference Contribution (Conference Proceeding)
