Transferring CNN with Adaptive Learning for Remote Sensing Scene Classification
Weiquan Wang, Yushi Chen, and Pedram Ghamisi
Accurate classification of remote sensing (RS) images is a perennial topic of interest in the RS community. Recently, transfer learning, especially fine-tuning pretrained convolutional neural networks (CNNs), has been proposed as a feasible strategy for RS scene classification. However, because the target domain (i.e., the RS images) and the source domain (e.g., ImageNet) are quite different, simply using a model pretrained on ImageNet presents difficulties. The RS images and the pretrained models need to be properly adjusted to build a better classification system. In this study, an adaptive learning strategy for transferring a CNN-based model is proposed. First, an adaptive transform adjusts the original size of the RS image to a fixed size that matches the input of the subsequent pretrained model. Then, an adaptive transferring model is proposed to automatically learn which knowledge from the pretrained model should be transferred to the RS scene classification model. Finally, in combination with a label smoothing approach, an adaptive labeling scheme is presented that generates soft labels from the statistics of the classification model's predictions for each category, which is beneficial for learning the relationships between the target and nontarget scene categories. In general, the proposed methods adaptively manage the input, model, and label simultaneously, which leads to better performance for RS scene classification. The proposed methods are tested on three widely used datasets, and the obtained results show that they provide competitive classification accuracy compared to state-of-the-art methods.
IEEE Transactions on Geoscience and Remote Sensing, 60, 2022-07-14.
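The adaptive labeling idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the per-category prediction statistics take the form of a matrix whose row c holds the mean predicted probability vector over training samples of true class c, and it redistributes the label-smoothing mass over non-target classes in proportion to those statistics, so that classes the model frequently confuses with the target receive more soft-label weight.

```python
import numpy as np

def adaptive_soft_labels(pred_stats, smoothing=0.1):
    """Build per-class soft labels from prediction statistics.

    pred_stats : (C, C) array; row c is the mean predicted probability
        vector over training samples whose true class is c (an assumed
        form of the "statistics" mentioned in the abstract).
    Returns a (C, C) array: row c is the soft label for class c, mixing
    the one-hot target with normalized non-target statistics.
    """
    C = pred_stats.shape[0]
    soft = np.zeros_like(pred_stats, dtype=float)
    for c in range(C):
        off = pred_stats[c].astype(float).copy()
        off[c] = 0.0  # smoothing mass goes only to non-target classes
        if off.sum() > 0:
            off /= off.sum()  # weight by how often each class is confused
        else:
            # fall back to uniform smoothing if no statistics are available
            off = np.full(C, 1.0 / (C - 1))
            off[c] = 0.0
        soft[c] = smoothing * off
        soft[c, c] = 1.0 - smoothing  # target class keeps most of the mass
    return soft
```

Unlike uniform label smoothing, which spreads the same mass over all non-target classes, this variant concentrates it on categories the model actually confuses with the target, which is one way the target/non-target relationships mentioned in the abstract could be captured.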