Land-Cover Semantic Segmentation for Very-High-Resolution Remote Sensing Imagery Using Deep Transfer Learning and Active Contour Loss

Chicchon, M; Trujillo, FJL; Sipiran, I; Madrid, R.

Keywords: training, accuracy, feature extraction, transformers, decoding, convolutional neural networks, semantic segmentation, spatial resolution, transfer learning, computer architecture, land surface, deep transfer learning, land-cover segmentation, semantic segmentation models

Abstract

Accurate land-cover segmentation of very-high-resolution aerial images is essential for a wide range of applications, including urban planning and natural resource management. However, automating this process remains a challenge owing to the complexity of the images, the variability of land surface features, and noise. In this study, we propose a method for training convolutional neural networks and transformers to perform land-cover segmentation of very-high-resolution aerial images in a regional context. We assessed the U-Net-scSE, FT-UNetFormer, and DC-Swin architectures, incorporating transfer learning and active contour loss functions to improve performance on semantic segmentation tasks. Our experiments on the OpenEarthMap dataset, which includes images from 44 countries, demonstrate the superior performance of U-Net-scSE models with EfficientNet-V2-XL and MiT-B4 encoders, which achieve an mIoU above 0.80 on a test set of urban and rural images from Peru.
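The abstract pairs transfer learning with an active contour loss but does not reproduce the loss definition here. As a rough illustration only, the following is a minimal PyTorch sketch of a Chan-Vese-style active contour loss for binary masks; the function name, signature, and lambda_region weighting are assumptions for illustration, not the authors' implementation. For multi-class land-cover maps, such a term would typically be applied per class and combined with a cross-entropy or Dice loss.

    import torch

    def active_contour_loss(pred, target, lambda_region=1.0, eps=1e-8):
        # Hypothetical sketch, not the paper's code.
        # pred:   (N, 1, H, W) predicted foreground probabilities in [0, 1]
        # target: (N, 1, H, W) binary ground-truth masks
        # Length term: total variation of the prediction, which penalizes
        # long or jagged segmentation contours.
        dx = pred[:, :, 1:, :] - pred[:, :, :-1, :]   # vertical gradients
        dy = pred[:, :, :, 1:] - pred[:, :, :, :-1]   # horizontal gradients
        length = torch.mean(torch.sqrt(dx[:, :, :, :-1] ** 2
                                       + dy[:, :, :-1, :] ** 2 + eps))
        # Chan-Vese region terms: mismatch inside (c1 = 1) and outside
        # (c2 = 0) the predicted region.
        region_in = torch.mean(pred * (target - 1.0) ** 2)
        region_out = torch.mean((1.0 - pred) * target ** 2)
        return length + lambda_region * (region_in + region_out)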

More information

Title according to WOS: Land-Cover Semantic Segmentation for Very-High-Resolution Remote Sensing Imagery Using Deep Transfer Learning and Active Contour Loss
Volume: 13
Publication date: 2025
Start page: 59007
End page: 59019
Language: English
DOI: 10.1109/ACCESS.2025.3556632

Notes: ISI