Direct virtual viewpoint synthesis from multiple viewpoints

Yi Ding, DW Redmill

Research output: Contribution to conference › Conference Abstract



This paper presents a novel approach for synthesizing intermediate or virtual viewpoints (VVs) of a 3D scene based on information from a number of known reference viewpoints (RVs). The proposed approach directly estimates the pixel value (and corresponding depth) for each pixel in the VV. This is in contrast to the more traditional two-stage approach of first building a full 3D or 2.5D model of the scene and then synthesising the desired VV. The potential advantage of this approach is that it works directly with the target virtual view and is hopefully less susceptible to the propagation of errors from the depth estimation stage to the interpolation stage.
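The "direct" idea described above can be illustrated with a deliberately simplified sketch: for each pixel of the virtual view, sweep a set of candidate disparities (i.e. depths), sample the matching positions in two rectified reference views, and keep the candidate where the references agree best, yielding that pixel's colour and depth in one step with no intermediate full depth map. The scanline setup, the window cost, and the even-disparity restriction are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def synthesize_virtual_row(left, right, max_disp=8, radius=2):
    """Toy sketch of direct virtual-view synthesis for one scanline.

    The virtual view is assumed to sit midway between two rectified
    reference views. For each virtual-view pixel we sweep candidate
    (even) disparities, compare small windows sampled from both
    references, and keep the disparity whose windows agree best --
    estimating colour and depth for the target pixel directly.
    Illustrative assumption, not the exact algorithm of the paper.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    n = len(left)
    vv = np.zeros(n)              # synthesized virtual-view colours
    disp = np.full(n, -1)         # per-pixel disparity (proxy for depth)
    for x in range(n):
        best = np.inf
        for d in range(0, max_disp + 1, 2):
            xl, xr = x + d // 2, x - d // 2   # matching reference pixels
            if xl - radius < 0 or xl + radius >= n:
                continue
            if xr - radius < 0 or xr + radius >= n:
                continue
            wl = left[xl - radius: xl + radius + 1]
            wr = right[xr - radius: xr + radius + 1]
            cost = np.abs(wl - wr).sum()      # window matching cost
            if cost < best:
                best = cost
                vv[x] = 0.5 * (left[xl] + right[xr])  # blend both samples
                disp[x] = d
    return vv, disp
```

For example, a bright feature at `left[12]` and `right[8]` (disparity 4) should be reconstructed at virtual-view pixel 10 with the correct colour and disparity, since only the disparity-4 windows agree there.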
Original language: English
Pages: 1045–1048
Publication status: Published - Sep 2005

Bibliographical note

Terms of use: Copyright © 2005 IEEE. Reprinted from IEEE International Conference on Image Processing, 2005 (ICIP 2005).

This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Bristol's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to

By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Name of Conference: International Conference on Image Processing


Keywords

  • depth estimation
  • direct viewpoint synthesis

