Variational Prefix Tuning for diverse and accurate code summarization using pre-trained language models
Highlights
• First work to enable diverse and accurate code summarization for Large Language Models of Code.
• Propose a novel approach, Variational Prefix Tuning (VPT), to enable this capability without requiring costly full retraining.
• Demonstrate the adaptability of VPT by applying it to several transformer-based pre-trained models.
• Provide an open-source implementation and datasets for our proposed approach.
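The highlights above describe VPT only at a high level. As a rough, hedged illustration of the general idea (not the paper's actual architecture), the sketch below samples a short "prefix" of virtual token embeddings from a learned Gaussian via the reparameterization trick and prepends it to a frozen backbone encoder: different latent samples yield different prefixes (and hence diverse summaries), while only the small prefix network is trained, avoiding full retraining. All module names, dimensions, and the toy encoder are hypothetical stand-ins.

import torch
import torch.nn as nn

class VariationalPrefix(nn.Module):
    """Hypothetical variational prefix module: maps a Gaussian latent
    sample to prefix_len virtual token embeddings."""

    def __init__(self, prefix_len: int, d_model: int, latent_dim: int = 32):
        super().__init__()
        self.prefix_len = prefix_len
        self.d_model = d_model
        # Learned Gaussian over the latent code z.
        self.mu = nn.Parameter(torch.zeros(latent_dim))
        self.log_var = nn.Parameter(torch.zeros(latent_dim))
        # Projects a latent sample to prefix_len * d_model prefix entries.
        self.proj = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.Tanh(),
            nn.Linear(256, prefix_len * d_model),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
        # so sampling stays differentiable for training the prefix network.
        eps = torch.randn(batch_size, self.mu.shape[0])
        z = self.mu + torch.exp(0.5 * self.log_var) * eps
        return self.proj(z).view(batch_size, self.prefix_len, self.d_model)

# Toy usage: a frozen Transformer encoder stands in for the pre-trained model.
d_model, prefix_len, batch, seq = 64, 8, 2, 16
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
for p in encoder.parameters():  # backbone parameters stay frozen
    p.requires_grad_(False)

prefix = VariationalPrefix(prefix_len, d_model)
tokens = torch.randn(batch, seq, d_model)           # stand-in token embeddings
inputs = torch.cat([prefix(batch), tokens], dim=1)  # prepend sampled prefix
out = encoder(inputs)                               # only prefix params train
print(out.shape)  # torch.Size([2, 24, 64])

Re-running the forward pass draws a fresh latent sample, so the same input can produce different encodings, which is the mechanism by which sampling at inference time would yield diverse outputs.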