Track: tiny / short paper (up to 4 pages)
Keywords: AI-Assisted Programming, Large Language Models, Retrieval Augmented Generation, Software Engineering
TL;DR: This paper introduces CAMP, a multi-model AI-assisted programming solution that enhances cloud-based LLMs with local context retrieval, achieving significant gains on tasks such as code completion and error detection.
Abstract: To bridge the strengths of cloud-based Large Language Models (LLMs) in code generation and the adaptability of locally integrated tools, we introduce CAMP, a collaborative multi-model copilot framework for AI-assisted programming. CAMP employs context-aware Retrieval-Augmented Generation (RAG), dynamically retrieving relevant information from local codebases to construct optimized prompts tailored for code generation tasks. This hybrid strategy enhances LLM effectiveness in local coding environments, yielding a 12.5% performance boost over non-contextual generation and a 6.3% gain compared to a baseline RAG implementation. We demonstrate the practical application of CAMP through "Copilot for Xcode," supporting tasks such as code completion, bug detection, and documentation generation. Its success led to integration with GitHub Copilot, underscoring the real-world impact and scalability of our approach in evolving AI-driven software development practices. This work was originally published as a full paper in IEEE CAI 2025. The current version is a concise presentation for this workshop, highlighting the key contributions and encouraging further discussion within the community.
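To make the retrieval step concrete, below is a minimal sketch of how context-aware RAG might splice local codebase snippets into a cloud-LLM prompt. This is an illustrative assumption, not CAMP's implementation: the function names, the `.swift` file filter, and the lexical-overlap scorer are placeholders for the paper's actual retriever and prompt-construction pipeline.

```python
# Illustrative sketch of context-aware retrieval-augmented prompt construction.
# All names (iter_snippets, score, build_prompt) are hypothetical, not CAMP's API.
from pathlib import Path


def iter_snippets(repo_root: str, chunk_lines: int = 30):
    """Yield (file, text) chunks from local source files."""
    for path in Path(repo_root).rglob("*.swift"):
        lines = path.read_text(errors="ignore").splitlines()
        for i in range(0, len(lines), chunk_lines):
            yield str(path), "\n".join(lines[i:i + chunk_lines])


def score(query: str, snippet: str) -> float:
    """Crude lexical-overlap relevance score (stand-in for embedding similarity)."""
    q, s = set(query.split()), set(snippet.split())
    return len(q & s) / (len(q) or 1)


def build_prompt(editing_context: str, repo_root: str, k: int = 3) -> str:
    """Retrieve the top-k local snippets and assemble the LLM prompt."""
    ranked = sorted(
        iter_snippets(repo_root),
        key=lambda fs: score(editing_context, fs[1]),
        reverse=True,
    )
    context_block = "\n\n".join(f"// {f}\n{s}" for f, s in ranked[:k])
    return (
        "You are completing code in a local project.\n"
        f"Relevant project context:\n{context_block}\n\n"
        f"Complete the following code:\n{editing_context}"
    )


if __name__ == "__main__":
    print(build_prompt("func fetchUser(id: Int)", "./MyApp"))
```

In practice the overlap scorer would be replaced by an embedding-based retriever, but the overall flow (chunk the local codebase, rank chunks against the editing context, prepend the winners to the generation prompt) is the hybrid cloud/local strategy the abstract describes.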
Anonymization: This submission has been anonymized for double-blind review by removing identifying information such as names, affiliations, and URLs.
Submission Number: 1