SGUVE-Net: Semantic-Guided Underwater Video Enhancement Network for Real-Time IoT-Based Marine Monitoring
Abstract: Underwater video enhancement is crucial for marine research and monitoring applications, particularly in the Internet of Things (IoT) context, where autonomous underwater vehicles (AUVs) and sensor networks are deployed for environmental monitoring and marine life tracking. However, the scarcity of undistorted underwater video data, together with distortions such as motion blur and water turbidity, limits the effectiveness of enhancement models. Existing methods typically enhance frames independently and overlook both temporal coherence and computational efficiency. To address these issues, we propose the Semantic-Guided Underwater Video Enhancement Network (SGUVE-Net), which combines a multiscale feature-aware main branch with a semantic branch for localized enhancement. The main branch employs an encoder–decoder architecture that combines spatial group shifting and dual attention mechanisms to fully exploit contextual information for precise alignment. The semantic branch, in turn, enhances key regions of the video frames by incorporating high-level semantic cues, which improves motion-tracking accuracy, mitigates motion blur, and raises video quality for real-time IoT applications. The two branches complement each other to model static background details and dynamic target features separately. Experimental results show that SGUVE-Net outperforms state-of-the-art methods across several metrics, providing an effective solution for underwater video enhancement in IoT systems.
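To make the dual-branch idea concrete, the following is a minimal PyTorch-style sketch of how an encoder–decoder main branch with a dual (channel plus spatial) attention block could be fused with a semantic branch that gates enhancement toward key regions. All module names, layer sizes, and the class count are illustrative assumptions; the paper does not publish this code, and details such as spatial group shifting are omitted.

```python
# Hypothetical sketch of a dual-branch enhancement network (not the authors' code):
# an encoder-decoder main branch with dual attention, fused with a semantic branch
# whose coarse segmentation logits gate the enhancement toward key regions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttention(nn.Module):
    """Channel attention followed by spatial attention (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid()
        )

    def forward(self, x):
        x = x * self.channel_fc(x)        # channel re-weighting
        return x * self.spatial_conv(x)   # spatial re-weighting


class SGUVENetSketch(nn.Module):
    def __init__(self, base=32, num_classes=8):
        super().__init__()
        # Main branch: shallow encoder-decoder over a degraded frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = DualAttention(base * 2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, 3, 4, stride=2, padding=1),
        )
        # Semantic branch: coarse segmentation logits guide key-region enhancement.
        self.semantic_head = nn.Conv2d(base * 2, num_classes, 1)
        self.gate = nn.Conv2d(num_classes, 3, 1)

    def forward(self, frame):
        feats = self.attn(self.encoder(frame))
        enhanced = self.decoder(feats)
        sem = self.semantic_head(feats)                       # low-resolution semantic cues
        sem = F.interpolate(sem, size=frame.shape[-2:], mode="bilinear", align_corners=False)
        gate = torch.sigmoid(self.gate(sem))                  # per-pixel region weights
        return frame + gate * enhanced                        # residual, region-weighted output


# Usage (shapes only): out = SGUVENetSketch()(torch.randn(1, 3, 256, 256))
```

The residual, gated fusion in the last line is one plausible way to realize "differentiated modeling": the semantic gate lets dynamic target regions receive stronger correction while static background pixels stay close to the input.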
External IDs: dblp:journals/iotj/ZhouFLZHJM25