$\texttt{StaR}$ and $\texttt{FLAiR}$: Stabilizing and Enriching Randomized Neural Networks

ICLR 2026 Conference Submission 20437 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Machine Learning, Randomized Neural Networks, Spectral Theory, Representation Stability, Few-Step Weight Adaptation, Closed-Form Solution, Efficient Learning
Abstract: Randomized neural networks (RdNNs) surpass conventional deep models in efficiency by freezing randomly initialized input-to-hidden weights, which permits a closed-form output-layer solution and eliminates the need for backpropagation. However, they often suffer from instability and limited representation quality due to unregulated weight initialization and fixed, non-adaptive hidden mappings. Despite their widespread use, RdNNs still lack principled mechanisms to stabilize random mappings and enrich hidden representations. To address these foundational issues, we introduce two novel, theoretically grounded frameworks, marking the first attempt to stabilize and enrich RdNN representations. First, $\textbf{\texttt{StaR}}$ ($\textbf{\texttt{Sta}}$ble $\textbf{\texttt{R}}$epresentations) regulates the spectrum of the input-to-hidden random weight matrix by constraining its singular values to a bounded interval, yielding well-conditioned hidden mappings that curb noise amplification and feature suppression. Second, $\textbf{\texttt{FLAiR}}$ ($\textbf{\texttt{F}}$ew-step $\textbf{\texttt{L}}$earning for $\textbf{\texttt{A}}$daptive $\textbf{\texttt{I}}$nitialization and $\textbf{\texttt{R}}$epresentation) applies a small, fixed number of gradient steps to the input-to-hidden weights before freezing them, lightly adapting the nonlinear features to task structure without incurring the full cost of backpropagation. We evaluate both frameworks on 146 diverse benchmark datasets spanning binary and multiclass classification tasks, using standard shallow and deep RdNN architectures. Extensive empirical results demonstrate that our methods significantly improve accuracy, stability, and generalization while preserving the efficiency of RdNNs. Furthermore, we provide theoretical guarantees showing that $\textbf{\texttt{StaR}}$ yields bounded spectral norms and well-conditioned hidden-layer transformations, and that $\textbf{\texttt{FLAiR}}$ enhances representation quality through limited adaptation. Code for the baseline and $\textbf{\texttt{StaR}}$/$\textbf{\texttt{FLAiR}}$-enhanced models is provided in the supplementary file.
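The NumPy sketch below is a minimal reading of the two mechanisms as described in the abstract, not the authors' released code: the spectral interval [s_min, s_max], the tanh activation, the step count, the learning rate, and the ridge parameter are all illustrative assumptions.

```python
# Hypothetical sketch of StaR-style spectral clipping and FLAiR-style few-step
# adaptation for an RdNN; all names and hyperparameters are assumptions.
import numpy as np

def star_weights(d_in, d_hidden, s_min=0.5, s_max=2.0, seed=None):
    """Draw a random input-to-hidden matrix and clip its singular values into
    [s_min, s_max], so the hidden mapping stays well-conditioned (StaR-style)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d_in, d_hidden))
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.clip(s, s_min, s_max)) @ Vt

def flair_weights(W, X, Y, steps=3, lr=1e-2, ridge=1e-2):
    """Apply a small, fixed number of gradient steps to W before freezing it
    (FLAiR-style). Each step re-solves a ridge output head in closed form and
    treats it as constant when backpropagating a squared loss to W."""
    for _ in range(steps):
        H = np.tanh(X @ W)                                   # hidden features
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)
        E = H @ beta - Y                                     # residual of the closed-form head
        grad_H = E @ beta.T                                  # dL/dH for squared loss (up to constants)
        grad_W = X.T @ (grad_H * (1.0 - H ** 2))             # chain rule through tanh
        W = W - lr * grad_W
    return W                                                 # frozen afterwards

def fit_output_layer(W, X, Y, ridge=1e-2):
    """Standard RdNN closed-form output solution: ridge regression on hidden features."""
    H = np.tanh(X @ W)
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)
```

In this reading, the two mechanisms compose: a StaR-initialized W can be passed through flair_weights and then used in the usual closed-form output fit.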
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 20437