Abstract
Images rendered using global illumination algorithms are
considered among the most realistic in 3D computer graphics.
However, this high fidelity comes at significant computational
expense. It has long been a goal of computer graphics to create
realistic images at interactive rates; unfortunately, the
computational cost of these algorithms prohibits this. As a direct
result, the realism of rendered graphics generally decreases as
the synthesis of images approaches real time.
In this thesis we examine the concept of realism as it relates to
computer graphics. By looking at how we, as humans, perceive the
world around us and printed or projected versions of that world,
we can gain useful insight into the requirements of computer
graphics. We present methods, based on human perception, for
reducing the time required to produce rendered images. By
studying which areas of an image are considered important and
which areas go unobserved, it is possible to direct computational
effort only where it is needed. Our approach is based on accepted
psychophysical data.
Although previous methods have used models of human visual
perception to control rendering algorithms, our work takes a novel
approach. By using prior knowledge of scene content and making
efficient use of modern graphics hardware, we interactively create
a map of required pixel quality. This map can then be used to
reduce the complexity of a rendering algorithm at the per-pixel
level. This is the opposite of most previous approaches, which
instead attempt to save time by stopping the rendering calculation
once pixels reach a certain quality threshold. Additionally, we
present a new method in which the GPU and the CPU
are used together to selectively render images.
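To make the idea concrete, a quality map of this kind could drive, for example, the per-pixel sample budget of a ray tracer: salient pixels receive many samples, unobserved regions few. The sketch below is purely illustrative; the function name, the linear mapping, and the sample range are assumptions for demonstration, not the scheme used in the thesis.

```python
import numpy as np

def samples_per_pixel(quality, min_samples=1, max_samples=16):
    """Map a [0, 1] per-pixel quality map to an integer sample budget.

    Illustrative assumption: a simple linear ramp between a floor of
    `min_samples` (unimportant pixels) and `max_samples` (salient pixels).
    """
    q = np.clip(quality, 0.0, 1.0)
    return (min_samples + q * (max_samples - min_samples)).round().astype(int)

# Toy 2x2 quality map: one highly salient pixel, three less important ones.
quality = np.array([[1.0, 0.2],
                    [0.0, 0.6]])
budget = samples_per_pixel(quality)
```

A renderer consulting `budget` would then spend 16 samples on the salient pixel and only 1 on the ignored one, concentrating effort where the perceptual model predicts the viewer will look.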
Translated title of the contribution: Rapid Saliency Identification for Selectively Rendering High Fidelity Graphics
Original language: English
Publication status: Published - 2005