The perception of a virtual environment is heavily influenced by the task the user is currently performing in that environment. The human visual system can therefore be exploited to significantly reduce computation time when rendering high-fidelity images, without compromising the perceived visual quality. This poster considers how an image can be selectively rendered while a user is performing a visual task in an environment. In particular, we investigate the extent to which viewers fail to notice degradations in image quality between non-task-related and task-related areas when quality parameters such as image resolution, edge anti-aliasing, reflections and shadows are altered.
|Translated title of the contribution: Selective Rendering of Task Related Scenes
|Pages: 174 - 174
|Published: Aug 2004