LIQT: Bridging Liquid Neural Dynamics and Human Perceptual Mechanisms for Blind Image Quality Assessment

16 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Image Quality Assessment, Liquid Neural Networks, Transformer
Abstract: Blind Image Quality Assessment (BIQA) seeks to predict perceptual quality in reference-free scenarios, yet conventional methods often fail to capture the human visual system's adaptive spatio-temporal integration of degradation patterns. Inspired by the adaptive temporal dynamics of biological neural circuits, we propose the Liquid Image Quality Transformer (LIQT), a novel BIQA framework that integrates Liquid Neural Networks (LNNs) with Transformer-based architectures. LIQT incorporates Liquid Self-Attention (LSA) equipped with a Closed-Form Continuous-Time Module (CFCTM), which reformulates liquid time-constant neurons into stable closed-form solutions through learnable decay rates and Padé approximation, enabling LIQT to dynamically modulate feature extraction based on local image features. To emulate multi-scale perceptual evaluation, a Multi-Scale Image Quality-Aware Decoder (MIQAD) aggregates multi-scale features from LIQT for comprehensive quality regression. This work pioneers the integration of biomimetic neural mechanisms into BIQA, and experiments on six benchmark datasets spanning diverse distortion types and image content demonstrate the superior performance of LIQT over state-of-the-art methods.
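To make the closed-form continuous-time idea concrete, the sketch below shows one update step of a generic CfC-style liquid neuron in the spirit the abstract describes: rather than numerically integrating an ODE, the hidden state is interpolated between two candidate states by a gate whose learnable branch sets the effective decay rate. This is a minimal illustration of the general CfC mechanism with hypothetical weight names (`Wf`, `Wg`, `Wh`), not the authors' CFCTM implementation, which additionally uses a Padé approximation and is embedded inside self-attention.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_step(x, u, params, dt=1.0):
    """One closed-form continuous-time (CfC) update.

    Instead of solving dx/dt = -x/tau + f(x, u) with an ODE solver,
    the new state is a gated blend of two candidate states; the gate
    sigmoid(-f * dt) plays the role of a learnable, input-dependent
    decay over the time step dt.
    """
    Wf, Wg, Wh, bf, bg, bh = params
    z = np.concatenate([x, u])
    f = np.tanh(Wf @ z + bf)   # branch controlling the decay rate
    g = np.tanh(Wg @ z + bg)   # candidate state near t = 0
    h = np.tanh(Wh @ z + bh)   # candidate state as t grows large
    gate = sigmoid(-f * dt)    # time- and input-dependent interpolation
    return gate * g + (1.0 - gate) * h

# Demo with random weights (hypothetical sizes, for illustration only)
rng = np.random.default_rng(0)
dim, in_dim = 4, 3
def init(rows, cols):
    return 0.1 * rng.standard_normal((rows, cols))
params = (init(dim, dim + in_dim), init(dim, dim + in_dim),
          init(dim, dim + in_dim),
          np.zeros(dim), np.zeros(dim), np.zeros(dim))
x = np.zeros(dim)
u = rng.standard_normal(in_dim)
x = cfc_step(x, u, params)
```

Because both candidates pass through `tanh` and the gate lies in (0, 1), the state stays bounded without an explicit solver, which is the stability property the closed-form reformulation is meant to provide.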
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 7045