This paper presents a novel method for estimating visual salience and priority in the human visual system during locomotion, where the visual input contains dynamic content seen from a moving viewpoint. The priority map, which ranks key areas of the image, is built from probabilities of gaze fixations, merging bottom-up features with top-down control of locomotion. Two deep convolutional neural networks (CNNs), inspired by models of the primate visual system, are employed to capture local salience features and compute these probabilities. The first network operates on the foveal and peripheral areas around the eye positions. The second network estimates the importance of fixated points with long durations or multiple visits, i.e., areas that need more processing time, or rechecking, to ensure smooth locomotion. The results show that the proposed method outperforms the state of the art by up to 30%, averaged over four well-known saliency-estimation metrics.
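The fusion step described above (merging a bottom-up salience map with a top-down fixation-importance map into one priority map) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name, the simple weighted average, and the min-max normalisation are all assumptions; in the paper the two maps come from the outputs of the two CNNs.

```python
import numpy as np

def priority_map(salience, importance, w=0.5):
    """Merge a bottom-up salience map with a top-down importance map
    (e.g. derived from fixation duration and revisits) into a single
    priority map. Hypothetical sketch: the paper fuses CNN outputs;
    here we just take a normalised weighted average."""
    def norm(m):
        # Min-max normalise to [0, 1]; guard against constant maps.
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m
    merged = w * norm(salience) + (1.0 - w) * norm(importance)
    return norm(merged)

# Toy 4x4 maps standing in for the two networks' outputs.
s = np.random.rand(4, 4)   # bottom-up local salience features
i = np.random.rand(4, 4)   # importance of long/repeated fixations
p = priority_map(s, i)     # priority map ranking key image areas
```

Pixels with high values in `p` would correspond to areas most likely to receive gaze fixations during locomotion.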
Title of host publication: 2016 IEEE International Conference on Image Processing (ICIP 2016)
Subtitle of host publication: Proceedings of a meeting held 25-28 September 2016, Phoenix, Arizona, USA
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 5
Publication status: Published - Mar 2017
- Cognitive Science
- Visual Perception
- convolutional neural network
- deep learning
Title: Visual salience and priority estimation for locomotion using a deep convolutional neural network