Abstract
Assigning geospatial objects to specific categories at the pixel level is a fundamental task in remote sensing image analysis. Along with the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images offers substantial opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To bridge these gaps, in this paper, we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, such that scale variation is handled properly and segmentation quality is increased for both large and small objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, where the negative effects of information loss are removed. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation, which outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrated the effectiveness of the proposed SaNet in cross-resolution segmentation.
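The two modules described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, shapes, and the nearest-neighbour resize and sigmoid-gate choices are illustrative assumptions, standing in for dense multi-scale feature aggregation (DCFFM) and spatial attention-based recalibration (SFRM).

```python
import numpy as np

def resize_nearest(x, size):
    """Nearest-neighbour resize of a (C, H, W) feature map
    (a stand-in for the bilinear upsampling a real network would use)."""
    c, h, w = x.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return x[:, rows][:, :, cols]

def dense_multiscale_fusion(features, out_size):
    """Densely aggregate feature maps from several scales: every scale is
    resized to a common size and concatenated along the channel axis,
    so the fused representation sees context from all resolutions."""
    resized = [resize_nearest(f, out_size) for f in features]
    return np.concatenate(resized, axis=0)

def spatial_recalibration(x):
    """Recalibrate features with a per-pixel spatial gate derived from
    the channel-averaged response (a simple form of spatial attention)."""
    gate = 1.0 / (1.0 + np.exp(-x.mean(axis=0, keepdims=True)))  # (1, H, W)
    return x * gate

# Toy feature maps at three spatial resolutions, 4 channels each
feats = [np.random.rand(4, s, s) for s in (8, 16, 32)]
fused = dense_multiscale_fusion(feats, out_size=32)  # shape (12, 32, 32)
out = spatial_recalibration(fused)                   # shape (12, 32, 32)
print(out.shape)
```

In this toy pipeline, fusion handles scale variation by combining coarse and fine context, while the gate re-weights spatial locations, mirroring (in spirit only) how SaNet combines DCFFM and SFRM.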
Original language | English |
---|---|
Article number | 5015 |
Journal | Remote Sensing |
Volume | 13 |
Issue number | 24 |
DOIs | |
Publication status | Published - 10 Dec 2021 |
Bibliographical note
Funding Information: Acknowledgments: The authors are very grateful to the many people who helped to comment on the article, to the Large-Scale Environment Remote Sensing Platform (Facility No. 16000009, 16000011, 16000012) provided by Wuhan University, and for the support provided by the Surveying and Mapping Institute, Lands and Resource Department of Guangdong Province, Guangzhou. Special thanks go to the editors and reviewers for providing valuable insight into this article.
Funding Information:
Funding: This research was funded by the National Natural Science Foundation of China (NSFC) under grant number 41971352 and the National Key Research and Development Program of China under grant number 2018YFB0505003.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
Keywords
- Deep convolutional neural network
- Multiple spatial resolutions
- Remote sensing
- Scale-aware feature representation
- Semantic segmentation