AutoCoder: Enhancing Code Large Language Model with AIEV-INSTRUCT

ICLR 2025 Conference Submission 12215 Authors

27 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: LLM, Code generation, data annotation, Agents interaction
TL;DR: We introduce AutoCoder, an open-source code Large Language Model that surpasses GPT-4 Turbo (April 2024) and GPT-4o in pass@1 on the HumanEval benchmark (90.9\% vs. 90.2\%).
Abstract: We introduce AutoCoder, an open-source Large Language Model that surpasses GPT-4 Turbo and GPT-4o in pass@1 on the HumanEval benchmark (90.9\% vs. 90.2\%). In addition, AutoCoder offers a more versatile code interpreter than GPT-4 Turbo and GPT-4o: its code interpreter can install external packages rather than being limited to built-in packages. AutoCoder's training data is a multi-turn dialogue dataset created by a system combining agent interaction with external code execution verification, a method we term AIEV-Instruct (Agent-Interaction Execution-Verified). Compared to previous large-scale code dataset annotation methods, AIEV-Instruct reduces dependence on proprietary large models and provides more accurate code annotation data.
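To make the execution-verification idea concrete, below is a minimal sketch of what such an annotation loop could look like. It is not the authors' implementation; the `programmer` and `questioner` callables are hypothetical stand-ins for the two interacting agents, and the round limit and subprocess-based executor are assumptions for illustration.

```python
import subprocess
import sys
import tempfile


def execute_candidate(code: str, timeout: int = 30) -> tuple[bool, str]:
    """Run a candidate solution (plus its unit tests) in a fresh
    subprocess and report pass/fail along with the captured output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return False, "execution timed out"


def aiev_annotate(task: str, programmer, questioner, max_rounds: int = 5):
    """Hypothetical agent-interaction loop: a 'programmer' agent proposes
    code, the executor verifies it externally, and a 'questioner' agent
    turns any failure log into feedback for the next round. The full
    dialogue becomes one execution-verified multi-turn training example."""
    dialogue = [{"role": "user", "content": task}]
    for _ in range(max_rounds):
        code = programmer(dialogue)               # propose solution + tests
        dialogue.append({"role": "assistant", "content": code})
        passed, log = execute_candidate(code)     # external verification
        if passed:
            return dialogue                       # keep verified example
        feedback = questioner(task, code, log)    # error -> next user turn
        dialogue.append({"role": "user", "content": feedback})
    return None                                   # discard unverifiable tasks
```

In such a setup, only dialogues whose final code actually executes and passes its tests would enter the training set, which is one way the reliance on a proprietary model's judgment could be reduced.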
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12215