A Novel Transformer Based Semantic Segmentation Scheme for Fine-Resolution Remote Sensing Images

Libo Wang, Rui Li, Chenxi Duan, Ce Zhang, Xiaoliang Meng, Shenghui Fang*

*Corresponding author for this work

Research output: Contribution to journal › Article (Academic Journal) › peer-review

163 Citations (Scopus)

Abstract

The fully convolutional network (FCN) with an encoder-decoder architecture has been the standard paradigm for semantic segmentation. The encoder captures multilevel feature maps, which the decoder then incorporates into the final prediction. Because context is crucial for precise segmentation, tremendous effort has been devoted to extracting such information intelligently, including employing dilated/atrous convolutions or inserting attention modules. However, these endeavors are all built on the FCN architecture with ResNet or other convolutional backbones, which are conceptually limited in their ability to capture global context. By contrast, we introduce the Swin Transformer as the backbone to extract context information and design a novel decoder, the densely connected feature aggregation module (DCFAM), to restore the resolution and produce the segmentation map. Experimental results on two remotely sensed semantic segmentation datasets demonstrate the effectiveness of the proposed scheme.
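The decoder described above fuses multilevel encoder features with dense connections before restoring resolution. The following is a minimal NumPy sketch of that dense aggregation idea only, not the paper's actual DCFAM: it assumes four feature maps at strides 1, 2, 4, and 8 that all share the same channel count, uses nearest-neighbor upsampling, and sums where the real module would apply learned fusion. The function names `upsample` and `dense_aggregate` are illustrative inventions.

```python
import numpy as np

def upsample(x, factor):
    # Nearest-neighbor upsampling of an (H, W, C) feature map.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def dense_aggregate(features):
    """Simplified sketch of densely connected feature aggregation.

    `features[0]` is the finest map; each subsequent map is half the
    spatial size of the previous one. Every scale receives additive
    contributions from all coarser scales (the "dense" connections),
    and the fused maps are then upsampled to the finest resolution
    and summed into a single output map.
    """
    n = len(features)
    out = np.zeros_like(features[0])
    for i, f in enumerate(features):
        acc = f.copy()
        # Dense connections: add every coarser scale into scale i.
        for j in range(i + 1, n):
            acc = acc + upsample(features[j], 2 ** (j - i))
        # Bring the fused map to the finest resolution and accumulate.
        out = out + upsample(acc, 2 ** i)
    return out

# Toy multilevel features: 32x32 down to 4x4, all with 8 channels.
feats = [np.ones((32 // 2 ** i, 32 // 2 ** i, 8)) for i in range(4)]
segmap_features = dense_aggregate(feats)
print(segmap_features.shape)  # (32, 32, 8)
```

In the real DCFAM the additive fusion would be replaced by learned convolutions and attention, but the sketch shows the connectivity pattern: every decoder scale draws on all coarser scales rather than only its immediate neighbor.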

Original language: English
Article number: 6506105
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 19
DOIs
Publication status: Published - 14 Jan 2022

Bibliographical note

Funding Information:
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 41971352.

Publisher Copyright:
© 2004-2012 IEEE.

Keywords

  • Fine-resolution remote sensing images
  • semantic segmentation
  • transformer
