Edge-Object Co-Driven Learning for Remote Sensing Change Detection

Yangguang Liu, Fang Liu, Jia Liu, Liang Xiao

Published: 01 Jan 2025, Last Modified: 15 Nov 2025 · IEEE Transactions on Geoscience and Remote Sensing · CC BY-SA 4.0
Abstract: Remote sensing change detection (CD) aims to accurately reveal surface changes by comparing two temporally separated images of the same area. In complex environments, however, insufficient recognition of edge detail and limited feature extraction often degrade CD accuracy. To this end, we propose a novel method, the edge-object co-driven learning network (EOCLNet), which combines the pyramid vision transformer (PVT) and the fast segment anything model (FastSAM) as parallel feature extractors to capture rich multilevel features. EOCLNet comprises three key components: the edge extraction module (EEM), the object revelation module (ORM), and edge-object learning (EOL). EEM explicitly captures edge details by combining low-level spatial features with high-level semantic features, providing essential edge knowledge. ORM reveals changed objects by aggregating the two highest levels of semantic features, providing initial change guidance. EOL takes the outputs of both EEM and ORM and implicitly mines edge clues by establishing relationships between edges and changed objects across multiple levels. Furthermore, during training, the uncertainty of the previous level's change map guides learning at the next level, achieving a transition from uncertainty to certainty. The effectiveness of EOCLNet is validated on three public datasets, where it outperforms several state-of-the-art CD methods.
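The uncertainty-to-certainty training idea can be illustrated with a minimal sketch. The formulation below is an assumption for illustration only (the paper's exact equations are not given in the abstract): per-pixel uncertainty is taken to peak where the previous level's change probability is near 0.5, and it is used to up-weight the next level's binary cross-entropy so learning concentrates on regions the coarser change map was unsure about.

```python
import math

def uncertainty(p):
    """Per-pixel uncertainty of a change probability p in [0, 1].
    Highest (1.0) at p = 0.5, lowest (0.0) at p = 0 or 1.
    This formulation is an illustrative assumption, not the paper's equation."""
    return 1.0 - abs(2.0 * p - 1.0)

def uncertainty_weighted_bce(probs_next, targets, probs_prev, eps=1e-7):
    """Binary cross-entropy at the next level, with each pixel weighted by
    the previous level's uncertainty (weights in [1, 2]), so that training
    emphasizes regions the coarser change map found ambiguous."""
    total = 0.0
    for p, t, q in zip(probs_next, targets, probs_prev):
        p = min(max(p, eps), 1.0 - eps)          # numerical stability
        bce = -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
        total += (1.0 + uncertainty(q)) * bce    # uncertain pixels count more
    return total / len(probs_next)
```

Under this sketch, a pixel the previous level scored at 0.5 contributes twice the loss of a pixel it scored confidently, which pushes the next level to resolve exactly the ambiguous regions.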