A Language Model based Model Manager

ICLR 2025 Conference Submission 12920 Authors

28 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Model Manager, Verbalization, Differentiation
TL;DR: The "Model Manager" framework uses a large language model to clarify differences between machine learning models, enhancing transparency and aiding selection.
Abstract: In the current landscape of machine learning, we face a “model lake” phenomenon: a proliferation of deployed models that often lack adequate documentation. This presents significant challenges for model users attempting to navigate, differentiate, and select appropriate models for their needs. To address the differentiation problem, we introduce Model Manager, a framework that facilitates easy comparison among existing models. Our approach leverages a large language model (LLM) to generate verbalizations of the differences between two models by sampling outputs from both. We propose a novel protocol that makes it possible to quantify the informativeness of these verbalizations. We also assemble an evaluation suite covering a diverse set of commonly used models: Logistic Regression, Decision Trees, and K-Nearest Neighbors. We additionally perform ablation studies on the Model Manager's crucial design decisions. Our analysis yields pronounced results: for a pair of logistic regression models with a 20-25\% performance difference on the blood dataset, the Model Manager verbalizes their differences with up to 80\% accuracy. The Model Manager framework opens new research avenues for improving the transparency and comparability of machine learning models in a post-hoc manner.
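To make the abstract's pipeline concrete, the sketch below illustrates one plausible instantiation of the sample-then-verbalize step, under assumptions of our own: the paper's exact prompting protocol, probe-set construction, and LLM interface are not specified here, so the bundled breast-cancer data stands in for the blood dataset and `llm_verbalize` is a hypothetical placeholder for any chat-completion API.

```python
# Minimal sketch of the Model Manager idea: probe two models on shared
# inputs, serialize their behavior, and ask an LLM to verbalize the
# difference. This is an illustration, not the paper's implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_probe, y_train, _ = train_test_split(X, y, test_size=20, random_state=0)

# Two models of the same family that behave differently (here via
# regularization strength), standing in for the paper's pairs of
# logistic regression models with a 20-25% performance gap.
model_a = LogisticRegression(C=1.0, max_iter=5000).fit(X_train, y_train)
model_b = LogisticRegression(C=1e-4, max_iter=5000).fit(X_train, y_train)

# Sample both models on the shared probe inputs and serialize the
# paired predictions into a plain-text prompt.
rows = [
    f"input {i}: model A predicts {a}, model B predicts {b}"
    for i, (a, b) in enumerate(zip(model_a.predict(X_probe), model_b.predict(X_probe)))
]
prompt = (
    "Two classifiers, A and B, were queried on the same inputs:\n"
    + "\n".join(rows)
    + "\nIn one paragraph, describe how the two models' behavior differs."
)

def llm_verbalize(prompt: str) -> str:
    """Hypothetical LLM call; swap in any chat-completion API."""
    raise NotImplementedError

# verbalization = llm_verbalize(prompt)
```

One natural way to quantify informativeness, in the spirit of the protocol the abstract mentions (the paper's actual protocol may differ), is to check how often a reader, or a second LLM, can match the verbalization back to the correct model of the pair; accuracy on that matching task then serves as the score.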
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12920