Abstract
Gaze-contingent displays are used in this paper for the integrated visualisation of 2-D multi-modality images. In a gaze-contingent display, a window centred on the observer's fixation point is modified as the observer moves their eyes around the display. In the proposed technique this window, in the central part of vision, is taken from one of the input modalities, while the rest of the display, in peripheral vision, comes from the other. The human visual system fuses these two images into a single percept. An SMI EyeLink I eye-tracker provides real-time data about the observer's fixation point while they examine the displayed images. The test data used in this study comprise registered medical images (CT and MR), remote sensing images, partially focused images, and multi-layered geographical maps. In all experiments the observer is presented with a dynamic gaze-contingent display. As the eyes scan the display, information is processed not just from the point of fixation but from a larger area, called the 'useful field of view' or 'functional visual field'. Various display parameters, e.g. the size, shape, border, and colour of the window, affect the perception and combination of the two image types. Images generated using this new approach are presented and qualitatively compared to other commonly used multi-modality image display methods, such as adjacent display, 'chessboard' display, and transparency-weighted display.
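The core compositing step described in the abstract — a window around the fixation point drawn from one modality, the periphery from the other — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the circular window shape, and the hard (unblended) border are assumptions for the sketch; the paper itself studies varying window size, shape, and border.

```python
import numpy as np

def gaze_contingent_composite(central, peripheral, fixation, radius):
    """Composite two registered, same-shape images: pixels within
    `radius` of the fixation point (row, col) come from `central`
    (e.g. MR), all remaining pixels from `peripheral` (e.g. CT)."""
    h, w = central.shape[:2]
    ys, xs = np.ogrid[:h, :w]          # coordinate grids for the mask
    fy, fx = fixation
    # Boolean mask of the circular gaze-contingent window
    mask = (ys - fy) ** 2 + (xs - fx) ** 2 <= radius ** 2
    out = peripheral.copy()
    out[mask] = central[mask]
    return out

# Toy stand-ins for two registered modalities
ct = np.zeros((8, 8), dtype=np.uint8)  # "CT": all zeros
mr = np.ones((8, 8), dtype=np.uint8)   # "MR": all ones
fused = gaze_contingent_composite(mr, ct, fixation=(4, 4), radius=2)
```

In an interactive display this function would be re-run on every gaze sample from the eye-tracker, with `fixation` updated to the latest reported gaze position.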
| Translated title of the contribution | Multi-modality gaze-contingent displays for image fusion |
| --- | --- |
| Original language | English |
| Title of host publication | Fifth International Conference on Information Fusion, Annapolis, MD, USA |
| Publisher | Int. Soc. Inf. Fusion |
| Pages | 1213-1220 |
| Number of pages | 8 |
| Volume | 2 |
| ISBN (Print) | 0972184414 |
| DOIs | |
| Publication status | Published - 8 Jul 2002 |