CatCode: A Comprehensive Evaluation Framework for LLMs On the Mixture of Code and Text

Published: 10 Jul 2024, Last Modified: 26 Aug 2024 · COLM · CC BY 4.0
Research Area: Evaluation, LMs on diverse modalities and novel applications
Keywords: Category Theory, LLM, Code, Evaluation
TL;DR: A standard, unified, and scalable framework that supports the evaluation of complex coding tasks
Abstract: Large language models (LLMs) such as ChatGPT are increasingly proficient in understanding and generating a mixture of code and text. Evaluation based on such *mixture* can lead to a more comprehensive understanding of the models' abilities in solving coding problems. However, current evaluation methods are either limited in task coverage or lack standardization. To address this issue, we propose applying category theory as a mathematical abstraction for code-related evaluation. Specifically, morphisms within a code category can represent code debugging and transformation, functors between two code categories represent code translation, and functors between a code category and a natural language category represent code generation, explanation, and reproduction. We present an automatic evaluation framework called **CatCode** (**Cat**egory **Code**) that can assess the coding abilities of LLMs, including ChatGPT, Text-Davinci, and CodeGeeX, in a *comprehensive* and *standard* way, and that further supports *composite* task evaluation. The code can be found at https://github.com/scorpio-nova/CatCode.
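To make the abstract's category-theoretic framing concrete, the minimal sketch below models code tasks as morphisms (transformations within one category) and functors (maps between categories). It is illustrative only: the `Category` and `Functor` classes, the toy snippets, and the stubbed `explain` functor are hypothetical assumptions, not code from the CatCode repository; in the actual framework an LLM call would stand where the stubs are.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class Category:
    """Objects are programs (or natural-language texts); morphisms are
    transformations between objects of the same kind."""
    name: str
    objects: Set[str] = field(default_factory=set)
    morphisms: Dict[str, Callable[[str], str]] = field(default_factory=dict)


@dataclass
class Functor:
    """A structure-preserving map between two categories, e.g. code -> NL
    (explanation/generation) or Python -> Java (translation)."""
    source: Category
    target: Category
    on_objects: Callable[[str], str]  # in practice, an LLM query


# --- toy usage (all snippets hypothetical) ---------------------------------
python_cat = Category("Python")
nl_cat = Category("NaturalLanguage")

# A morphism *within* the code category: a (trivial) "debugging" rewrite.
python_cat.morphisms["fix_off_by_one"] = lambda src: src.replace("n - 1", "n")

# A functor *between* categories: a stub standing in for an LLM that
# explains code.
explain = Functor(python_cat, nl_cat,
                  on_objects=lambda src: f"This expression computes {src!r}.")

buggy = "sum(range(n - 1))"
python_cat.objects.add(buggy)
fixed = python_cat.morphisms["fix_off_by_one"](buggy)

print(fixed)                      # -> sum(range(n))
print(explain.on_objects(fixed))  # -> a stand-in "explanation"
```

Under this framing, checking whether a model's debugging edit composes correctly with a subsequent translation is just composing a morphism with a functor, which is what makes *composite* task evaluation natural to express.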
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 551