Abstract
The fully convolutional network (FCN) with an encoder-decoder architecture has been the standard paradigm for semantic segmentation. The encoder-decoder architecture uses an encoder to capture multilevel feature maps, which a decoder then incorporates into the final prediction. Because context is crucial for precise segmentation, tremendous effort has been devoted to extracting such information intelligently, including employing dilated/atrous convolutions or inserting attention modules. However, these endeavors are all built on the FCN architecture with ResNet or other convolutional backbones, which, owing to their limited receptive fields, cannot fully exploit global context. By contrast, we introduce the Swin Transformer as the backbone to extract context information and design a novel decoder, the densely connected feature aggregation module (DCFAM), to restore the resolution and produce the segmentation map. Experimental results on two remotely sensed semantic segmentation datasets demonstrate the effectiveness of the proposed scheme.
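To make the context-extraction trade-off mentioned above concrete, the following minimal NumPy sketch (not code from the paper) implements a naive dilated (atrous) convolution: inserting gaps between kernel taps enlarges the effective receptive field without adding parameters, which is exactly the mechanism FCN-based methods use to capture wider context. The function name `dilated_conv2d` and all shapes are illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Naive 'valid' 2-D cross-correlation with a dilation (atrous) rate.

    A k x k kernel with dilation d covers an effective receptive field of
    (k - 1) * d + 1 pixels per axis, while still using only k * k weights.
    """
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective kernel extent (rows)
    eff_w = (kw - 1) * dilation + 1  # effective kernel extent (cols)
    out_h = x.shape[0] - eff_h + 1
    out_w = x.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Sample the input with stride = dilation inside the window,
            # i.e. skip (dilation - 1) pixels between kernel taps.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))
# dilation=2: a 3x3 kernel sees a 5x5 neighborhood, so output is 3x3.
y = dilated_conv2d(x, k, dilation=2)
```

With `dilation=1` this reduces to an ordinary 3x3 valid convolution; raising the dilation widens the context each output pixel sees, which is the FCN-era workaround the paper contrasts with the Swin Transformer's attention-based global context.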
| Original language | English |
|---|---|
| Article number | 6506105 |
| Number of pages | 5 |
| Journal | IEEE Geoscience and Remote Sensing Letters |
| Volume | 19 |
| DOIs | |
| Publication status | Published - 14 Jan 2022 |
Bibliographical note
Funding Information: This work was supported by the National Natural Science Foundation of China (NSFC) under Grant 41971352.
Publisher Copyright:
© 2004-2012 IEEE.
Keywords
- Fine-resolution remote sensing images
- semantic segmentation
- transformer