Remote Sensing Image Scene Classification via Label Augmentation and Intra-Class Constraint
Hao Xie, Yushi Chen, and Pedram Ghamisi
Remote sensing images are used in many areas, including object detection, image retrieval, change detection, land-use classification, and environmental monitoring. In recent years, remote sensing images have reached very high spatial resolution, which allows computer vision techniques to be applied to their processing. Remote sensing image scene classification assigns a scene category to an image. Typically, scene classification tasks can be solved efficiently by deep learning methods based on convolutional neural networks (CNNs). However, acquiring labeled samples in remote sensing is challenging, which results in small training sets and leads to model over-fitting. To overcome this problem, data augmentation is often used to expand the training set.
Here, we propose an improved data augmentation method for remote sensing image scene classification. The proposed method considers both the scene category and the image transformation, assigning a joint label to each generated image. Label augmentation provides more precise category information and allows the training samples to be used more effectively. To further improve classification accuracy, we impose a constraint on the intra-class diversity introduced into the training set by label augmentation. To verify the effectiveness of our method, we conduct experiments on three public remote sensing image datasets: UC Merced, AID, and NWPU. We use a ResNet18 model, pre-trained on ImageNet, as the backbone network. The results show that the proposed method surpasses other state-of-the-art methods in classification accuracy. This paper demonstrates that classification performance can be noticeably improved by making full use of the data without complex algorithms.
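The joint-labeling idea described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: it assumes the transformation set consists of the four 90-degree rotations (a common choice in label augmentation), and the function and parameter names are hypothetical.

```python
import numpy as np


def joint_label(class_idx: int, transform_idx: int, num_transforms: int = 4) -> int:
    """Encode (scene class, transformation) as a single joint label.

    With C scene classes and T transformations, the joint label space has
    C * T entries; at inference time, joint predictions can be aggregated
    back to the C original scene classes.
    """
    return class_idx * num_transforms + transform_idx


def augment_with_joint_labels(image: np.ndarray, class_idx: int):
    """Generate rotated copies of an image, each tagged with a joint label.

    Assumes rotations of 0, 90, 180, and 270 degrees as the transformation
    set (an assumption for illustration).
    """
    samples = []
    for k in range(4):
        rotated = np.rot90(image, k=k)
        samples.append((rotated, joint_label(class_idx, k, num_transforms=4)))
    return samples
```

In this scheme each generated image carries information about both its scene category and the transformation applied to it, which is what lets the model distinguish augmented copies instead of treating them as identical examples of one class.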
Remote Sensing, 13, 2566, 2021-06-30.