If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Published: 11 Mar 2024, Last Modified: 15 Mar 2024 · LLMAgents @ ICLR 2024 Poster · CC BY 4.0
Keywords: code-LLM, intelligent agent
TL;DR: This survey aims to explain the widespread adoption of code-specific training in the general large language model (LLM) training paradigm and how code enhances LLMs to act as intelligent agents.
Abstract: The prominent large language models (LLMs) of today differ from past language models not only in scale, but also in being trained on a combination of natural language and code. As a medium between humans and computers, code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity. In this survey, we present an overview of the various benefits of integrating code into LLMs' training data. In addition, we trace how these code-derived capabilities have led to the emergence of LLMs as intelligent agents (IAs). Finally, we present several key challenges and future directions for empowering code-LLMs to serve as IAs.
Submission Number: 13