Fast Depth Estimation for View Synthesis

Nantheera Anantrasirichai, Majid Geravand, David Braendler, David R Bull

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

Disparity/depth estimation from sequences of stereo images is an important element of 3D vision. Owing to occlusions, imperfect camera settings and homogeneous luminance, accurate depth estimation remains a challenging problem. Targeting view synthesis, we propose a novel learning-based framework that makes use of dilated convolution, densely connected convolutional modules, a compact decoder and skip connections. The network is shallow but dense, so it is both fast and accurate. Two additional contributions, a non-linear adjustment of the depth resolution and the introduction of a projection loss, reduce the estimation error by up to 20% and 25% respectively. The results show that our network outperforms state-of-the-art methods, improving the accuracy of depth estimation and view synthesis by approximately 45% and 34% on average, respectively. Where our method generates estimated depth of comparable quality, it runs 10 times faster than those methods.
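The abstract does not detail the non-linear adjustment of depth resolution, but a common motivation in stereo work is that disparity is inversely proportional to depth (d = fB/Z), so uniform disparity steps give much finer depth resolution near the camera than far away. The sketch below illustrates this standard inverse-depth relationship; the focal length and baseline values are hypothetical, and this is not the paper's actual formulation.

```python
import numpy as np

# Assumed camera parameters (illustrative only, KITTI-like values).
FOCAL_PX = 700.0    # focal length in pixels
BASELINE_M = 0.54   # stereo baseline in metres

def depth_to_disparity(depth_m):
    """Map metric depth Z to disparity d = f * B / Z (pixels)."""
    return FOCAL_PX * BASELINE_M / np.asarray(depth_m, dtype=float)

def disparity_to_depth(disp_px):
    """Invert the mapping: Z = f * B / d."""
    return FOCAL_PX * BASELINE_M / np.asarray(disp_px, dtype=float)

# Uniform steps in disparity correspond to non-uniform steps in depth:
disp_levels = np.linspace(2.0, 80.0, 40)        # evenly spaced disparities
depth_levels = disparity_to_depth(disp_levels)  # dense near, sparse far
depth_gaps = np.abs(np.diff(depth_levels))      # spacing shrinks with disparity
```

Because view synthesis warps pixels by disparity rather than by metric depth, parameterising the network output non-linearly in this way concentrates representational precision where warping errors are most visible, which is consistent with the reported reduction in estimation error.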
Original language: English
Title of host publication: 28th European Signal Processing Conference (EUSIPCO 2020)
Publication status: Accepted/In press - 29 May 2020
Event: 28th European Signal Processing Conference - Amsterdam, Netherlands
Duration: 18 Jan 2021 – 22 Jan 2021
https://eusipco2020.org/

Conference

Conference: 28th European Signal Processing Conference
Abbreviated title: EUSIPCO 2020
Country: Netherlands
City: Amsterdam
Period: 18/01/21 – 22/01/21
Internet address: https://eusipco2020.org/

Keywords

  • depth estimation
  • disparity estimation
  • deep learning
  • CNN
  • view synthesis

