TL;DR: We explore the relationship between explicit regularisers, sharpness of the loss landscape, and calibration to better understand notions of flatness and their importance.
Abstract: We probe the relation between flatness, generalisation and calibration in neural networks, using explicit regularisation as a control variable.
Our findings indicate that the flatness metrics surveyed fail to correlate positively with variation in generalisation or calibration.
In fact, the correlation is often the opposite of what has been hypothesised or claimed in prior work: calibrated models typically sit at sharper minima than their baselines, and this relation holds across model classes and dataset complexities.
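To make the two quantities being correlated concrete, below is a minimal illustrative sketch in Python/NumPy of one common calibration measure (Expected Calibration Error) and a simple perturbation-based sharpness proxy. These are generic stand-ins, not necessarily the specific metrics surveyed in the paper; the function names and parameters are hypothetical.

```python
# Illustrative sketch only: generic ECE and a sharpness proxy,
# not necessarily the exact metrics used in this submission.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: bin-weighted gap between accuracy and mean confidence per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

def sharpness_proxy(loss_fn, weights, radius=1e-3, n_samples=20, seed=0):
    """Mean loss increase under random weight perturbations of fixed norm."""
    rng = np.random.default_rng(seed)
    base = loss_fn(weights)
    increases = []
    for _ in range(n_samples):
        delta = rng.normal(size=weights.shape)
        delta *= radius / np.linalg.norm(delta)  # project onto radius ball
        increases.append(loss_fn(weights + delta) - base)
    return float(np.mean(increases))
```

Under this reading, the paper's claim is that models with lower ECE tend to have a *higher* sharpness value than their baselines, contrary to the usual flat-minima hypothesis.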
Style Files: I have used the style files.
Submission Number: 75