Abstract: Demosaicking is a critical process in the digital imaging pipeline, tasked with reconstructing full-color images from subsampled data captured by single-sensor cameras, where each pixel records only one of the R, G, and B channels. The challenge arises because two-thirds of the pixel data are missing, which complicates accurate reconstruction. Recent deep learning-based solutions have yielded considerable advancements in demosaicking performance. However, they are computationally intensive and rely on large model architectures, rendering them unsuitable for deployment on edge devices. This work introduces a novel demosaicking method based on green learning (GL), named green image demosaicking (GID), to address these challenges. GID offers model transparency while significantly reducing model size and computational complexity compared with deep learning methods. Notably, GID does not utilize neural networks. Instead, it is built upon unsupervised representation learning and supervised feature dimension reduction. GID effectively addresses the challenges of big data in vision applications and enhances predictive accuracy during decision-making. GID is engineered for rapid execution with parallel training, making it well-suited for real-time vision tasks on resource-constrained devices.
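To make the problem setting concrete, the sketch below (not the paper's GID method, purely an illustrative assumption) simulates an RGGB Bayer mosaic, where each pixel retains a single color channel, and reconstructs the two missing channels per pixel with bilinear interpolation, the classical baseline that learning-based demosaickers aim to improve upon.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer pattern: each pixel keeps one of R/G/B."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at (even, even)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at (even, odd)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at (odd, even)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at (odd, odd)
    return mosaic

def box_sum(a):
    """Sum over each pixel's 3x3 neighborhood (zero-padded borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def bilinear_demosaic(mosaic):
    """Fill each channel's missing pixels by averaging known neighbors."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    masks = [np.zeros((h, w), bool) for _ in range(3)]
    masks[0][0::2, 0::2] = True                               # R sites
    masks[1][0::2, 1::2] = True
    masks[1][1::2, 0::2] = True                               # G sites
    masks[2][1::2, 1::2] = True                               # B sites
    for c in range(3):
        chan = np.where(masks[c], mosaic, 0.0)
        # Ratio of summed values to count of known neighbors gives the
        # local average over each 3x3 window (a bilinear estimate).
        num = box_sum(chan)
        den = box_sum(masks[c].astype(float))
        out[..., c] = num / np.maximum(den, 1.0)
    return out
```

On a constant-color image this baseline reconstructs the input exactly; on natural images it blurs edges and produces color fringing, which is the accuracy gap that methods such as GID target at a fraction of a deep network's cost.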