Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation

Oliver Moolan-Feroze, Andrew Calway

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

4 Citations (Scopus)
247 Downloads (Pure)


In this work we present a novel framework that uses deep learning to predict object feature points that are out-of-view in the input image. This system was developed with the application of model-based tracking in mind, particularly in the case of autonomous inspection robots, where only partial views of the object are available. Out-of-view prediction is enabled by applying scaling to the feature point labels during network training. This is combined with a recurrent neural network architecture designed to provide the final prediction layers with rich feature information from across the spatial extent of the input image. To show the versatility of these out-of-view predictions, we describe how to integrate them into both a particle filter tracker and an optimisation-based tracker. To evaluate our work, we compared our framework with one that predicts only points inside the image. We show that as the amount of the object in view decreases, being able to predict outside the image bounds adds robustness to the final pose estimation.
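The label-scaling idea can be illustrated with a minimal sketch: by mapping pixel coordinates into a label space that covers an area larger than the image, feature points lying beyond the image bounds still receive valid training labels. The function name and the choice of scale factor below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def scale_feature_labels(points, img_size, scale=2.0):
    """Map pixel coordinates (possibly outside the image) into [0, 1] label space.

    With a scale factor > 1, points up to `scale` times the image extent from
    the centre remain representable, so out-of-view feature points can still
    be expressed as in-range regression targets during training.
    """
    points = np.asarray(points, dtype=float)
    size = np.asarray(img_size, dtype=float)   # (width, height) in pixels
    centre = size / 2.0
    return (points - centre) / (scale * size) + 0.5

# The image centre maps to the middle of label space: [0.5, 0.5].
print(scale_feature_labels([[320, 240]], (640, 480)))
# A point 100 px beyond the right edge still lands inside [0, 1].
print(scale_feature_labels([[740, 240]], (640, 480)))
```

The inverse mapping (label back to pixel coordinates) follows by rearranging the same expression, which is what a tracker would apply to the network's predictions before using them in pose estimation.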
Original language: English
Title of host publication: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018)
Subtitle of host publication: Proceedings of a meeting held 1-5 October 2018, Madrid, Spain
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 7
ISBN (Electronic): 9781538680940
ISBN (Print): 9781538680933
Publication status: Published - Mar 2019

Publication series

ISSN (Print): 2153-0858
ISSN (Electronic): 2153-0866


Keywords

  • model based 3-D tracking
  • partial views
  • deep learning
  • computer vision
  • robotics


