TL;DR: Through appropriate prompting, GPTs can be triggered to perform the iterative behaviors necessary to execute programs that involve loops, solving problems previously considered challenging for LLMs.
Abstract: We demonstrate that, through appropriate prompting, GPT-3 can be triggered to perform the iterative behaviors necessary to execute (rather than merely write or recall) programs that involve loops, including several popular algorithms found in computer science curricula and software developer interviews. We trigger execution and description of {\bf iterations} by {\bf regimenting self-attention} (IRSA) in one (or a combination) of three ways: 1) appropriately annotating an execution path of a target program for one particular input, 2) prompting with fragments of annotated execution paths, and 3) explicitly forbidding (skipping) self-attention to parts of the generated text. On a dynamic programming task, IRSA leads to larger accuracy gains than replacing the model with the much more powerful GPT-4. Our findings have implications for evaluating LLMs, which is typically framed around in-context learning: we show that prompts that may not even cover one full task example can trigger algorithmic behavior, making it possible to solve problems previously thought to be hard for LLMs, such as logical puzzles. Consequently, prompt design plays an even more critical role in LLM performance than previously recognized.
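As a rough illustration of the first prompting strategy described in the abstract, the sketch below builds an IRSA-style prompt by fully annotating a Bubble Sort execution path for one input and then appending a new problem instance for the model to continue. This is a minimal sketch under assumptions: the function names (`bubble_sort_trace`, `build_irsa_prompt`) and the exact trace wording are hypothetical and only approximate the idea; the paper's actual prompt formats may differ.

```python
# Illustrative IRSA-style prompt construction (not the paper's exact format):
# a fully annotated execution path of Bubble Sort for one input is followed
# by a new instance, so the model's continuation is pushed to reproduce the
# same rigid, iteration-by-iteration pattern instead of guessing the answer.

def bubble_sort_trace(values):
    """Return a step-by-step, state-annotated execution trace of Bubble Sort."""
    a = list(values)
    lines = [f"Problem: sort the list {a}", "Execution trace:"]
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            lines.append(f"Compare a[{i}]={a[i]} and a[{i+1}]={a[i+1]}.")
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
                lines.append(f"Swap. State: {a}")
            else:
                lines.append(f"No swap. State: {a}")
        lines.append(f"End of pass. Any swaps this pass: {swapped}")
    lines.append(f"Final answer: {a}")
    return "\n".join(lines)


def build_irsa_prompt(worked_input, new_input):
    """Concatenate one annotated execution path with a fresh problem instance."""
    return (
        bubble_sort_trace(worked_input)
        + f"\n\nProblem: sort the list {list(new_input)}\nExecution trace:\n"
    )


if __name__ == "__main__":
    # The completion for this prompt would be requested from an LLM;
    # here we only print the prompt itself.
    print(build_irsa_prompt([3, 1, 2], [2, 5, 1, 4]))
```

The repetitive structure of the annotated trace is what "regiments" self-attention in the sense of the abstract: each generated step attends to the same kind of state description as in the worked example, which encourages loop-like behavior over many iterations.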
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Position papers
Languages Studied: English