Beyond Autocomplete: Designing CopilotLens Towards Transparent and Explainable AI Coding Agents

Published: 24 Jul 2025, Last Modified: 04 Oct 2025 · XLLM-Reason-Plan · CC BY 4.0
Keywords: Explainable AI, Human-Computer Interaction, Code Generation, Developer Tools, Software Engineering, Large Language Models, Human-AI Collaboration, Explainability, Calibrated Trust, Mental Models
TL;DR: CopilotLens is a framework that makes AI code generation transparent by revealing the agent's "thought process"—from its sequence of file modifications to the project-specific context used—to foster deeper developer comprehension and trust.
Abstract: AI-powered code assistants are widely used to generate code completions, significantly boosting developer productivity. However, these tools typically present suggestions without explaining their rationale, leaving their decision-making process inscrutable. This opacity hinders developers' ability to critically evaluate the output, form accurate mental models, and build calibrated trust in the system. To address this, we introduce CopilotLens, a novel interactive framework that reframes code completion from a simple suggestion into a transparent, explainable event. CopilotLens operates as an explanation layer that reveals the AI agent's "thought process" through a dynamic two-level interface, surfacing everything from its reconstructed high-level plans to the specific codebase context influencing the code. This paper presents the design and rationale of CopilotLens, offering a concrete framework for building future agentic code assistants that prioritize clarity of reasoning over speed of suggestion, thereby fostering deeper comprehension and more robust human-AI collaboration.
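The two-level explanation described in the abstract — a reconstructed high-level plan plus the project-specific context behind a suggestion — might be modeled as a simple data structure. This is a minimal illustrative sketch, not the paper's implementation; all names (`PlanStep`, `ContextItem`, `CompletionExplanation`) are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class PlanStep:
    """Level 1: one step of the agent's reconstructed high-level plan."""
    description: str
    files_touched: list = field(default_factory=list)


@dataclass
class ContextItem:
    """Level 2: a piece of codebase context that influenced the suggestion."""
    file_path: str
    snippet: str
    reason: str  # why the agent consulted this context


@dataclass
class CompletionExplanation:
    """An explanation attached to a code suggestion, reframing it
    from an opaque completion into a transparent, inspectable event."""
    plan: list        # list[PlanStep]
    context: list     # list[ContextItem]

    def summary(self) -> str:
        return f"{len(self.plan)} plan step(s), {len(self.context)} context item(s)"


# Example: a suggestion explained by one plan step and one context item.
exp = CompletionExplanation(
    plan=[PlanStep("Add null check before dereference", ["src/parser.py"])],
    context=[ContextItem("src/parser.py", "def parse(tok): ...",
                         "existing error-handling convention")],
)
print(exp.summary())  # → 1 plan step(s), 1 context item(s)
```

A UI layer could render `plan` as the top-level view and expand each `ContextItem` on demand, matching the dynamic two-level interface the abstract describes.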
Paper Published: No
Paper Category: Short Paper
Demography: No, I do not identify with any of these affinity groups
Academic: Year 1-2 PhD Student
Submission Number: 11