Analysis of Alignment Phenomenon in Simple Teacher-student Networks with Finite Width

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: alignment, finite width network, teacher student model, angular distance function
Abstract: Recent theoretical analysis suggests that ultra-wide neural networks always converge to global minima near the initialization under first-order methods. However, the convergence behavior of neural networks with finite width can be very different. A simple experiment with two-layer teacher-student networks shows that the input weights of the student neurons eventually align with one of the teacher neurons. This suggests a distinct convergence behavior for ``not-too-wide'' neural networks: there might not be any local minima near the initialization. As theoretical justification, we prove that under the most basic settings, all student neurons must align with the teacher neuron at any local minimum. The methodology extends to more general cases, where the proof reduces to analyzing the properties of a special class of functions that we call {\em Angular Distance (AD) functions}. Finally, we demonstrate that these properties can easily be verified numerically.
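The alignment experiment sketched in the abstract is straightforward to reproduce. Below is a minimal sketch, not taken from the paper: the one-neuron ReLU teacher, the student width, learning rate, sample count, and initialization scale are all illustrative assumptions. It trains a two-layer ReLU student on data labeled by a single teacher neuron and reports the angle between each student neuron's input weight and the teacher direction:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 8, 2000            # input dim, student width, sample count (assumed)
lr, steps = 0.05, 5000           # illustrative hyperparameters

# Teacher: a single ReLU neuron with unit-norm input weight w*.
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)

X = rng.standard_normal((n, d))
y = np.maximum(X @ w_star, 0.0)  # teacher labels

# Student: two-layer ReLU net f(x) = sum_j a_j * relu(w_j . x).
W = rng.standard_normal((m, d)) * 0.1
a = np.ones(m) / m

for _ in range(steps):
    pre = X @ W.T                          # (n, m) pre-activations
    act = np.maximum(pre, 0.0)
    resid = act @ a - y                    # prediction error
    # Gradients of 0.5 * mean squared error w.r.t. a and W.
    grad_a = act.T @ resid / n
    grad_W = ((resid[:, None] * (pre > 0)) * a).T @ X / n
    a -= lr * grad_a
    W -= lr * grad_W

# Angle (degrees) between each student neuron's input weight and w*.
cos = W @ w_star / np.linalg.norm(W, axis=1)
print(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Under this assumed setup, the printed angles typically shrink toward zero as training proceeds, consistent with the alignment phenomenon described above.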
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We prove the uniqueness of local minima in a simple student-teacher framework, and conjecture the same conclusion for more general settings.
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=JamhVIew04
