End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training

Published: 31 May 2024, Last Modified: 31 May 2024, Accepted by TMLR
Abstract: End-to-end (E2E) training, which optimizes the entire model through error backpropagation, fundamentally underpins the advancements of deep learning. Despite its high performance, E2E training suffers from high memory consumption, limited parallel computing, and discrepancies with the functionality of the actual brain. Various alternative methods have been proposed to overcome these difficulties; however, none can yet match the performance of E2E training, and they thus fall short in practicality. Furthermore, beyond the performance gap, there is no deep understanding of the differences in the properties of the trained models. In this paper, we reconsider why E2E training achieves superior performance through a comparison with layer-wise training, which shares the fundamental learning principles and architectures of E2E training, the only difference being the granularity of loss evaluation. Based on the observation that E2E training has an advantage in propagating input information, we analyze the information-plane dynamics of intermediate representations using the Hilbert-Schmidt independence criterion (HSIC). Our normalized HSIC analysis reveals that, in addition to propagating information efficiently, E2E training induces different information dynamics across layers. Furthermore, we show that this layer-role differentiation leads to a final representation that follows the information bottleneck principle. Our work not only clarifies the advantages of E2E training in terms of information propagation and the information bottleneck but also suggests the need to consider cooperative interactions between layers, rather than just the final layer, when analyzing the information bottleneck of deep learning.
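For readers who want to reproduce this style of analysis, below is a minimal sketch of a normalized HSIC estimator between two sets of representations (e.g., a layer's activations and the network input or labels). This is an illustrative assumption, not the authors' exact implementation: it uses the biased HSIC estimator with RBF kernels, a median-heuristic bandwidth, and a CKA-style normalization HSIC(X,Z)/sqrt(HSIC(X,X)·HSIC(Z,Z)); the paper's precise kernel and normalization choices may differ.

```python
import numpy as np

def rbf_kernel(X, sigma=None):
    """RBF (Gaussian) kernel matrix; bandwidth via the median heuristic if unset (assumed choice)."""
    sq = np.sum(X ** 2, axis=1, keepdims=True)
    d2 = np.maximum(sq + sq.T - 2.0 * X @ X.T, 0.0)  # pairwise squared distances
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]) / 2.0)  # median heuristic
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased HSIC estimator from precomputed kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def normalized_hsic(X, Z):
    """nHSIC(X, Z) = HSIC(X, Z) / sqrt(HSIC(X, X) * HSIC(Z, Z)), in [0, 1]."""
    K, L = rbf_kernel(X), rbf_kernel(Z)
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L) + 1e-12)

# Toy example (hypothetical data): information-plane coordinates for one layer.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(256, 784))                          # e.g., flattened images
layer = np.tanh(inputs @ rng.normal(size=(784, 64)) * 0.05)   # toy hidden activations
labels = np.eye(10)[rng.integers(0, 10, 256)]                 # one-hot labels
print(normalized_hsic(inputs, layer), normalized_hsic(layer, labels))
```

Tracking normalized_hsic(inputs, layer) against normalized_hsic(layer, labels) for each layer over training would trace the layer's trajectory on the information plane, which is the kind of dynamics the paper compares between E2E and layer-wise training.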
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have deanonymized the submission.
Assigned Action Editor: ~Alexander_A_Alemi1
Submission Number: 2140