This paper describes a new framework for the fusion of 2-D images based on their multiscale edges. The method uses the multiscale edge representation of images proposed by Mallat and Hwang (1992): the input images are fused using their multiscale edges only. Two different algorithms are given for fusing the point representations and the chain representations of the multiscale edges (wavelet transform modulus maxima). The chain representation in particular has been found to offer numerous new alternatives for image fusion, since edge-graph fusion techniques can be employed to combine the images. The new framework thus encompasses image fusion at different levels, i.e. the pixel and feature levels, in the wavelet domain.
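The point-representation fusion described in the abstract can be sketched minimally as a per-location "choose-max" selection over the wavelet-transform modulus maxima of the two inputs. This is a simplified stand-in, not the paper's actual algorithm; the function and variable names are illustrative, and the modulus/angle arrays are assumed to have been computed beforehand (e.g. by a dyadic wavelet transform followed by maxima detection):

```python
import numpy as np

def fuse_modulus_maxima(mod_a, mod_b, ang_a, ang_b):
    """Fuse two point representations of multiscale edges.

    At each position (and, if stacked, each scale), keep the maximum
    whose wavelet-transform modulus is larger, carrying its gradient
    angle along with it. Locations with zero modulus in both inputs
    (no maximum present) simply stay zero.
    """
    pick_a = mod_a >= mod_b                     # boolean selection mask
    fused_mod = np.where(pick_a, mod_a, mod_b)  # larger modulus wins
    fused_ang = np.where(pick_a, ang_a, ang_b)  # angle follows the winner
    return fused_mod, fused_ang

# Illustrative 2x2 maxima maps for two hypothetical input images.
mod_a = np.array([[1.0, 0.0], [0.0, 2.0]])
mod_b = np.array([[0.0, 3.0], [1.0, 1.0]])
ang_a = np.zeros((2, 2))
ang_b = np.full((2, 2), np.pi / 2)
fused_mod, fused_ang = fuse_modulus_maxima(mod_a, mod_b, ang_a, ang_b)
```

A fused image would then be reconstructed from the combined maxima, whereas the chain representation instead links maxima into edge curves first, so that whole chains (rather than isolated points) can be matched and merged.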
|Translated title of the contribution
|Fusion of 2-D Images Using Their Multiscale Edges
|Title of host publication
|15th International Conference on Pattern Recognition (ICPR), Barcelona, Catalonia, Spain
|Institute of Electrical and Electronics Engineers (IEEE)
|41-44
|Published - Sept 2000
|15th International Conference on Pattern Recognition - Barcelona, Spain
Duration: 1 Sept 2000 → …
Bibliographical note
Rose publication type: Conference contribution
This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Bristol's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to firstname.lastname@example.org.
By choosing to view this document, you agree to all provisions of the copyright laws protecting it.