A Critical Exploration of "Bayesian Model Selection, Marginal Likelihood, and Generalization in Neural Networks"

Published: 16 Feb 2024, Last Modified: 28 Mar 2024
License: CC BY 4.0
Keywords: bayesian model selection, deep learning, marginal likelihood, cross-validation
Blogpost Url: https://iclr-blogposts.github.io/2024/blog/clml/
Abstract: This blog post critically reviews the ICML 2022 paper "Bayesian Model Selection, the Marginal Likelihood, and Generalization." It analyzes the paper's central thesis, which examines the log marginal likelihood (LML) and its variant, the conditional log marginal likelihood (CLML), across various machine learning settings, and engages with the paper's methodology and findings, particularly scrutinizing the CLML's applicability and effectiveness in deep learning. The review goes beyond summarization to challenge assumptions, compare with existing literature, and examine the evaluation. This deep dive aims to foster a better understanding of Bayesian methods for model evaluation, spotlighting both their strengths and limitations in the context of neural network generalization.
Ref Papers: https://openreview.net/forum?id=9YK9NaFT_q8
Id Of The Authors Of The Papers: ~Micah_Goldblum1, ~Pavel_Izmailov1, ~Sanae_Lotfi1, ~Andrew_Gordon_Wilson1
Conflict Of Interest: None. One of the papers cited in the literature review is from the lab (OATML) where I finished my PhD earlier this year. I am not an author.
Submission Number: 10