Abstract: Deep learning has recently attracted much attention in the field of hyperspectral image classification, due to its powerful representation and generalization abilities. Most current deep learning models are trained in a supervised manner, which requires large amounts of labeled samples to achieve state-of-the-art performance. Unfortunately, pixel-level labeling in hyperspectral imagery is difficult, time-consuming, and human-dependent. To address this issue, we propose an unsupervised feature learning model using multimodal data, in particular hyperspectral and light detection and ranging (LiDAR) data. It takes advantage of the relationship between hyperspectral and LiDAR data to extract features, without using any label information. After that, we design a dual fine-tuning strategy to transfer the extracted features to hyperspectral image classification with small numbers of training samples. Such a strategy is able to explore not only the semantic information but also the intrinsic structure information of training samples. To test the performance of our proposed model, we conduct comprehensive experiments on three hyperspectral and LiDAR datasets. Experimental results show that our proposed model achieves better performance than several state-of-the-art deep learning models.
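The core idea, learning features from hyperspectral pixels by exploiting their relationship with co-registered LiDAR data and no class labels, can be illustrated with a minimal sketch. This is a hypothetical toy version, not the paper's architecture: it uses synthetic data, a single linear encoder, and a cross-modal regression objective (predict the LiDAR value from hyperspectral features) as the unsupervised pretraining signal; all dimensions and the loss are assumptions.

```python
import numpy as np

# Toy cross-modal pretraining sketch (NOT the paper's model):
# learn an encoder on hyperspectral pixels by regressing the
# co-registered LiDAR elevation, using no class labels.
rng = np.random.default_rng(0)

n, d_hsi, d_feat = 200, 30, 8
X_hsi = rng.normal(size=(n, d_hsi))                  # synthetic hyperspectral pixels
# Synthetic LiDAR channel correlated with the spectra (an assumption
# standing in for real co-registered elevation data).
lidar = X_hsi @ rng.normal(size=d_hsi) * 0.1 + rng.normal(size=n) * 0.01

W_enc = rng.normal(size=(d_hsi, d_feat)) * 0.1       # encoder weights
w_dec = rng.normal(size=d_feat) * 0.1                # LiDAR prediction head

lr = 0.01
losses = []
for _ in range(300):
    Z = X_hsi @ W_enc                                # encoder features
    err = Z @ w_dec - lidar                          # cross-modal residual
    losses.append(float(np.mean(err ** 2)))
    # Gradients of the mean-squared error w.r.t. both linear maps.
    g_dec = 2.0 * Z.T @ err / n
    g_enc = 2.0 * X_hsi.T @ np.outer(err, w_dec) / n
    w_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(f"pretrain loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
# W_enc would then be transferred and fine-tuned on a small
# labeled set for classification (the paper's dual fine-tuning
# stage is not reproduced here).
```

The decreasing reconstruction loss shows the encoder picking up hyperspectral structure that is predictive of the LiDAR modality, which is the kind of label-free supervisory signal the abstract describes.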
External IDs: dblp:journals/tgrs/HangQ022