Abstract
A novel system for gaze-contingent image analysis and multisensorial image display is described. The observer's scanpaths are recorded while viewing and analysing 2D or 3D (volumetric) images. A region-of-interest (ROI) centred on the current fixation point is simultaneously subjected to real-time image analysis algorithms that compute various image features, e.g. edges and textures (2D) or surfaces and volumetric texture (3D). This feature information is fed back to the observer over multiple channels: visual (replacing the ROI with a visually modified ROI), auditory (generating an auditory display of a computed feature) and tactile (generating a tactile representation of a computed feature). The observer can thus use several senses to perceive information that may otherwise be hidden from the eye, e.g. targets or patterns that are very difficult or impossible to detect visually. The human brain then fuses the information from the multisensorial display. The moment the eyes make a saccade to a new fixation location, the same process is applied to the new ROI centred on it. In this way the observer receives information from local real-time image analysis around the point of gaze, hence the term gaze-contingent image analysis. The new system is profiled and several example applications are discussed.
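The core loop described above (fixation → ROI extraction → real-time feature analysis → multisensory feedback) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ROI size, the gradient-magnitude edge feature, and the mapping of edge statistics to auditory pitch and tactile intensity are all hypothetical stand-ins for the system's actual analysis algorithms and display mappings.

```python
import numpy as np

def extract_roi(image, fixation, size=32):
    """Crop a square region of interest centred on the fixation point (y, x).

    A 32-pixel ROI is an assumed value; the paper does not specify one.
    """
    y, x = fixation
    half = size // 2
    return image[max(0, y - half):y + half, max(0, x - half):x + half]

def edge_strength(roi):
    """Gradient-magnitude edge map, a simple stand-in for the 2D edge analysis."""
    gy, gx = np.gradient(roi.astype(float))  # np.gradient returns (d/dy, d/dx) for 2D
    return np.hypot(gx, gy)

def multisensory_feedback(roi):
    """Map one computed feature onto three hypothetical display channels:
    the full edge map for the visual channel, and scalar summaries that
    could drive auditory pitch and tactile intensity."""
    edges = edge_strength(roi)
    return {
        "visual": edges,                              # modified ROI shown at fixation
        "auditory_pitch": float(edges.mean()),        # e.g. mean edge strength -> pitch
        "tactile_intensity": float(edges.max()),      # e.g. peak edge strength -> vibration
    }

# Simulate one fixation on a synthetic image containing a vertical edge.
image = np.zeros((128, 128))
image[:, 64:] = 1.0
out = multisensory_feedback(extract_roi(image, fixation=(64, 64)))
```

On a saccade, the same pipeline would simply be re-run with the new fixation coordinates, which is what makes the analysis gaze-contingent.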
| Translated title of the contribution | A system for gaze-contingent image analysis and multi-sensorial image display |
|---|---|
| Original language | English |
| Title of host publication | Sixth International Conference on Information Fusion, Cairns, Qld., Australia |
| Publisher | University of New Mexico |
| Pages | 1292 - 1299 |
| Number of pages | 8 |
| Publication status | Published - 8 Jul 2003 |