Position Paper: Beyond Calibration - Leveraging Controlled Self-Deception for Robust Neural Network Learning
Abstract: Modern neural networks, despite their impressive accuracy, are frequently miscalibrated: their confidence estimates poorly reflect true performance. Traditional calibration approaches rely on post-hoc adjustments, which leave the model's intrinsic learning dynamics untouched. In contrast, this paper proposes a novel framework that integrates a self-deception-inspired mechanism into the training process. Drawing on insights from human cognition, where controlled overconfidence and self-deception facilitate perseverance and exploration, we introduce an auxiliary module that selectively boosts the confidence of a network's predictions during training. This controlled confidence boost smooths the loss landscape by amplifying gradient signals in regions of ambiguity, and it promotes adaptive exploration and robust optimization. Our theoretical analysis shows that, under mild assumptions, the proposed mechanism can improve learning dynamics and calibration without incurring significant computational overhead. By bridging cognitive theory with deep learning, our approach challenges the conventional view that overconfidence is inherently detrimental and paves the way for more resilient and trustworthy AI systems.
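The abstract does not specify how the auxiliary module boosts confidence; the sketch below is only one plausible reading, not the authors' method. It illustrates the general idea with temperature sharpening: predictions whose softmax distribution has high entropy (ambiguous regions) are sharpened with a temperature below 1 before the loss is computed, which increases their gradient signal. The function name, the temperature `tau`, and the `entropy_threshold` are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_boosted_loss(logits, targets, tau=0.7, entropy_threshold=1.0):
    """Hypothetical sketch of a 'controlled confidence boost' (names and
    parameters are illustrative, not taken from the paper): sharpen the
    logits of high-entropy (ambiguous) samples with temperature tau < 1,
    then compute cross-entropy on the boosted distribution."""
    p = softmax(logits)
    entropy = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=-1)
    ambiguous = entropy > entropy_threshold          # boost only uncertain samples
    boosted = np.where(ambiguous[:, None], logits / tau, logits)
    q = softmax(boosted)
    n = logits.shape[0]
    return -np.log(np.clip(q[np.arange(n), targets], 1e-12, None)).mean()

# Example: the first sample is already confident, the second is ambiguous.
logits = np.array([[2.0, 0.1, 0.1],
                   [0.3, 0.2, 0.25]])
targets = np.array([0, 0])
loss = confidence_boosted_loss(logits, targets)
```

Setting `tau = 1.0` recovers plain cross-entropy, so the boost can be annealed away; sharpening an ambiguous prediction whose argmax matches the target lowers its loss and steepens the gradient there, which is one way to read the abstract's claim about amplified gradient signals in regions of ambiguity.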
External IDs: dblp:conf/ijcnn/SethM25