Abstract: Nonnegative Matrix Factorization (NMF) is a fundamental tool in unsupervised learning, widely used for tasks such as dimensionality reduction, feature extraction, representation learning, and topic modeling. Many algorithms have been developed for NMF, including the well-known Multiplicative Updates (MU) algorithm, which belongs to a broader class of majorization-minimization techniques.
In this work, we introduce a general second-order optimization framework for NMF under both quadratic and $\beta$-divergence loss functions.
This approach, called Second-Order Majorant (SOM), constructs a local quadratic majorant of the loss function by majorizing its elementwise-nonnegative Hessian matrix.
It includes MU as a special case, while enabling faster variants. In particular, we propose mSOM, a new algorithm within this class that leverages a tighter local approximation to accelerate convergence. We provide a convergence analysis, showing linear convergence for individual factor updates and global convergence to a stationary point for the alternating version, AmSOM. Numerical experiments on both synthetic and real datasets demonstrate that mSOM often outperforms state-of-the-art algorithms for NMF.
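To fix ideas, here is a minimal sketch of the special case the abstract mentions: the classic Lee–Seung Multiplicative Updates, which arise from the majorization-minimization view by bounding the elementwise-nonnegative Hessian A = WᵀW with the diagonal matrix diag(Ah / h). This is not the paper's mSOM or its tighter majorant; the function names and the β-divergence variant (the standard MU rule, an exact majorization for β in [1, 2]) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mu_update_H(V, W, H, eps=1e-12):
    # One MU step on H for the quadratic loss 0.5 * ||V - W H||_F^2.
    # For each column h of H the Hessian is A = W.T @ W, which is
    # elementwise nonnegative when W >= 0; D = diag((A @ h) / h)
    # majorizes A for h > 0, and minimizing the resulting quadratic
    # majorant gives this multiplicative rule.
    return H * (W.T @ V) / np.maximum(W.T @ W @ H, eps)

def mu_update_W(V, W, H, eps=1e-12):
    # Symmetric step on W; the per-row Hessian is H @ H.T.
    return W * (V @ H.T) / np.maximum(W @ H @ H.T, eps)

def mu_update_H_beta(V, W, H, beta=1.0, eps=1e-12):
    # Standard MU for the beta-divergence (beta = 2 recovers the
    # quadratic rule above; beta = 1 is the KL case).
    WH = np.maximum(W @ H, eps)
    return H * (W.T @ (V * WH ** (beta - 2))) / np.maximum(W.T @ WH ** (beta - 1), eps)

# Usage: alternate the two updates; each step keeps the factors
# nonnegative and monotonically decreases the loss.
rng = np.random.default_rng(0)
V = rng.random((60, 40))                      # nonnegative data matrix
W, H = rng.random((60, 5)), rng.random((5, 40))
for _ in range(200):
    H = mu_update_H(V, W, H)
    W = mu_update_W(V, W, H)
print(0.5 * np.linalg.norm(V - W @ H) ** 2)   # final quadratic loss
```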
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Kejun_Huang1
Submission Number: 5917