Abstract
Disparity/depth estimation from sequences of stereo images is an important element of 3D vision. Owing to occlusions, imperfect camera settings and homogeneous luminance, accurate depth estimation remains a challenging problem. Targeting view synthesis, we propose a novel learning-based framework that makes use of dilated convolutions, densely connected convolutional modules, a compact decoder and skip connections. The network is shallow but dense, so it is both fast and accurate. Two additional contributions, a non-linear adjustment of the depth resolution and the introduction of a projection loss, reduce the estimation error by up to 20% and 25%, respectively. The results show that our network outperforms state-of-the-art methods, improving the accuracy of depth estimation and view synthesis by approximately 45% and 34% on average, respectively. Where our method produces estimated depth of comparable quality, it runs 10 times faster than those methods.
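The densely connected modules of dilated convolutions mentioned in the abstract can be sketched roughly as below. This is a minimal PyTorch sketch, not the paper's actual architecture: the class name, growth rate and dilation rates are illustrative assumptions. Each layer receives the concatenation of all earlier feature maps (dense connectivity), and the increasing dilation rates enlarge the receptive field without deepening the network.

```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Hypothetical dense block of dilated 3x3 convolutions.

    Each layer takes the concatenation of the input and all previous
    layer outputs, and adds `growth` new channels; padding equals the
    dilation rate so the spatial resolution is preserved.
    """
    def __init__(self, in_ch, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            ))
            ch += growth  # dense connectivity grows the channel count
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # concatenate all features seen so far along the channel axis
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseDilatedBlock(8)
y = block(torch.randn(1, 8, 32, 32))
print(y.shape)  # → torch.Size([1, 56, 32, 32]): 8 input + 3 × 16 grown channels
```

Because the block concatenates rather than sums, a network built from a few such blocks stays shallow while every layer still sees all earlier features, which is consistent with the abstract's "shallow but dense" description.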
Original language | English |
---|---|
Title of host publication | 28th European Signal Processing Conference (EUSIPCO 2020) |
Publication status | Published - 22 Jan 2021 |
Event | 28th European Signal Processing Conference, Amsterdam, Netherlands, 18 Jan 2021 → 22 Jan 2021, https://eusipco2020.org/ |
Conference
Conference | 28th European Signal Processing Conference |
---|---|
Abbreviated title | EUSIPCO2020 |
Country/Territory | Netherlands |
City | Amsterdam |
Period | 18/01/21 → 22/01/21 |
Internet address | https://eusipco2020.org/ |
Keywords
- depth estimation
- disparity estimation
- deep learning
- CNN
- view synthesis