Evaluating Diversity of LLM-generated Datasets: A Classification Perspective

ICLR 2025 Conference Submission 659 Authors

14 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Diversity evaluation, LLM-generated dataset, Large language models
Abstract: LLM-generated datasets have recently been leveraged as training data to mitigate data scarcity in specific domains. However, these LLM-generated datasets often limit the performance of models trained on them due to a lack of diversity, which underscores the need for effective diversity evaluation. Despite the growing demand, the diversity evaluation of LLM-generated datasets remains under-explored. To address this gap, we propose a diversity evaluation method for LLM-generated datasets from a classification perspective, namely DCScore. Specifically, DCScore treats diversity evaluation as a sample classification task, accounting for mutual relationships among samples. We further provide theoretical verification of the diversity-related axioms satisfied by DCScore, demonstrating that it is a principled diversity evaluation method. Additionally, we show that existing methods can be incorporated into our proposed method in a unified manner. Meanwhile, DCScore incurs much lower computational costs than existing methods. Finally, we conduct experiments on LLM-generated datasets to validate the effectiveness of DCScore. The experimental results indicate that DCScore correlates better with various diversity pseudo-truths of the evaluated datasets, thereby verifying its effectiveness.
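The abstract's "classification perspective" can be illustrated with a minimal, hedged sketch: it assumes samples are already embedded as vectors, classifies each sample against all samples via a row-wise softmax over pairwise similarities, and sums the self-classification probabilities. The function name `dcscore_sketch`, the temperature parameter `tau`, and the cosine-similarity choice are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dcscore_sketch(embeddings, tau=1.0):
    """Illustrative classification-style diversity score (an assumption,
    not the paper's exact method): each sample is 'classified' among all
    samples via a softmax over pairwise similarities; the summed
    probability of each sample mapping to itself grows as samples
    become more distinct."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # cosine geometry
    sim = X @ X.T / tau                               # pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)             # numerical stability
    P = np.exp(sim)
    P /= P.sum(axis=1, keepdims=True)                 # row-wise softmax
    return float(np.trace(P))                         # sum of self-probabilities
```

Under this sketch, a dataset of identical samples scores 1 (each row of the softmax is uniform), while increasingly distinct samples push the score toward the dataset size n, which matches the intuition that diversity should grow with the number of effectively different samples.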
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 659