Keywords: Role-playing, Large Language Models, Knowledge Grounding
TL;DR: We rethink the roles of descriptions and narrations in role-playing and build a system that initializes characters' behavior modes as executable code from descriptions and refines them with narrations.
Abstract: This paper introduces Codified Profiles for role-playing, a novel approach that represents character logic as structured, executable functions for behavioral decision-making. 
Converted from textual profiles by a large language model (LLM), each codified profile defines a set of functions, parse_by_scene(scene), that output multiple logic-grounded assertions according to the scene, using both explicit control structures (e.g., if-then-else) and flexible check_condition(scene, question) functions, where each question is a semantically meaningful prompt about the scene (e.g., "Is the character in danger?") that the role-playing LLM discriminates as true, false, or unknown.
This explicit representation offers three key advantages over traditional prompt-based textual profiles, which append character descriptions directly into text prompts:
(1) Persistence, by enforcing complete and consistent execution of character logic, rather than relying on the model's implicit reasoning;  
(2) Updatability, through systematic inspection and revision of behavioral logic, which is difficult to track or debug in prompt-only approaches;  
(3) Controllable Randomness, by supporting stochastic behavior directly within the logic, enabling fine-grained variability that prompting alone struggles to achieve.  
To validate these advantages, we introduce a new benchmark constructed from 83 characters and 5,141 scenes curated from Fandom, using natural language inference (NLI)-based scoring to compare character responses against ground-truths.  
Our experiments demonstrate the significant benefits of codified profiles in improving persistence, updatability, and behavioral diversity. 
Notably, by offloading a significant portion of reasoning to preprocessing, codified profiles enable even 1B-parameter models to perform high-quality role-playing, providing an efficient, lightweight foundation for local deployment of role-play agents.
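To make the mechanism in the abstract concrete, the following is a minimal, hypothetical sketch of what a codified profile could look like, assuming a Python implementation. The function names parse_by_scene and check_condition come from the abstract; the specific character logic, the probability values, and the stub LLM call are illustrative assumptions, not the authors' released code.

```python
import random

def check_condition(scene: str, question: str) -> str:
    """Ask the role-playing LLM a semantically meaningful question about the scene.
    Returns 'true', 'false', or 'unknown'. Stubbed here so the sketch runs
    without an LLM backend; in practice this would call the model."""
    return "unknown"

def parse_by_scene(scene: str) -> list[str]:
    """Codified profile for a hypothetical cautious character.
    Returns logic-grounded assertions about how the character behaves in `scene`."""
    assertions = []

    # Persistence: explicit control structures execute the character logic
    # completely and consistently, rather than relying on implicit reasoning.
    if check_condition(scene, "Is the character in danger?") == "true":
        assertions.append("The character stays calm and looks for an escape route.")
    elif check_condition(scene, "Is a friend of the character in danger?") == "true":
        assertions.append("The character intervenes to protect their friend.")

    # Controllable randomness: stochastic behavior encoded directly in the logic.
    if check_condition(scene, "Is the character asked about their past?") == "true":
        if random.random() < 0.3:  # illustrative probability
            assertions.append("The character deflects the question with a joke.")
        else:
            assertions.append("The character answers briefly and changes the subject.")

    return assertions

# Example usage: the returned assertions would be supplied to the role-playing LLM.
if __name__ == "__main__":
    print(parse_by_scene("A stranger corners the character in a dark alley."))
```

Updatability then amounts to inspecting and revising these branches directly, which is harder to track in prompt-only profiles.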
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 1593