Nonlinear spectral unmixing using residual component analysis and a Gamma Markov random field

Yoann Altmann, Marcelo Pereyra, Steve McLaughlin

Research output: Contribution to conference › Conference Paper › peer-review


Abstract

This paper presents a new Bayesian nonlinear unmixing model for hyperspectral images. The proposed model represents pixel reflectances as linear mixtures of endmembers, corrupted by an additional combination of nonlinear terms (with respect to the endmembers) and additive Gaussian noise. A central contribution of this work is the use of a Gamma Markov random field to capture the spatial structure and correlations of the nonlinear terms, thereby significantly improving estimation performance. To perform hyperspectral image unmixing, the Gamma Markov random field is embedded in a hierarchical Bayesian model representing the image observation process and prior knowledge, followed by inference with a Markov chain Monte Carlo algorithm that jointly estimates the model parameters of interest and marginalises latent variables. Simulations conducted with synthetic and real data demonstrate the accuracy of the proposed spectral unmixing (SU) and nonlinearity estimation strategy for the analysis of hyperspectral images.
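The observation model described in the abstract (a linear mixture of endmembers plus a nonlinear residual term and Gaussian noise) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the bilinear form chosen for the residual term, and the noise level are all assumptions made here for concreteness; the paper treats the residual as a generic component whose spatial energy is governed by a Gamma Markov random field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): L spectral bands, R endmembers.
L, R = 50, 3

# Endmember signatures M (L x R) and abundances a (nonnegative, summing to one).
M = rng.uniform(0.0, 1.0, size=(L, R))
a = rng.dirichlet(np.ones(R))

# Nonlinear residual term phi: here a simple bilinear interaction between two
# endmembers, one common choice in nonlinear mixing models; in the paper phi
# is a residual component whose magnitude is regularised by a Gamma MRF prior.
phi = 0.1 * (M[:, 0] * M[:, 1])

# Additive Gaussian noise.
sigma = 0.01
noise = sigma * rng.standard_normal(L)

# Observed pixel reflectance under the residual-component model:
#   y = M a + phi + noise
y = M @ a + phi + noise
```

Inference in the paper then proceeds by placing priors on the abundances, the residual terms, and the noise variance, and sampling the posterior with MCMC; the sketch above only generates data from the forward model.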
Original language: English
Pages: 165–168
Number of pages: 4
DOIs
Publication status: Published - 16 Dec 2015
Event: IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015 - Cancun, Mexico
Duration: 13 May 2016 – 16 May 2016

Conference

Conference: IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015
Country: Mexico
City: Cancun
Period: 13/05/16 – 16/05/16

Keywords

  • Hyperspectral imagery
  • nonlinear spectral unmixing
  • residual component analysis
  • Gamma Markov random field
  • Bayesian estimation
