Keywords: LLMs, PEFT, LoRA, personalization, efficient ML
TL;DR: We investigate the feasibility of creating a personalized email writing assistant trained and run on commodity hardware.
Abstract: The availability of powerful open-source large language models (LLMs) opens exciting use cases, such as using personal data to fine-tune these models to imitate a user's unique writing style. Two key requirements for this functionality are personalization, in the sense that the output should recognizably reflect the user's own writing style, and privacy: users may justifiably be wary of uploading extremely personal data, such as their email archive, to a third-party service. In this paper, we demonstrate the feasibility of training and running such an assistant, which we call Panza, on commodity hardware, for the specific use case of email generation. Panza's personalization features are based on a combination of parameter-efficient fine-tuning using a variant of the Reverse Instructions technique and Retrieval-Augmented Generation (RAG). We demonstrate that this combination allows us to fine-tune an LLM to reflect a user's writing style using limited data, while executing on extremely limited resources, e.g., on a free Google Colab instance. Our key methodological contribution is the first detailed study of evaluation metrics for this task, and of how different choices of system components (the use of RAG and of different fine-tuning approaches) impact the system's performance. Additionally, we demonstrate that very little data (under 100 email samples) suffices to create models that convincingly imitate humans, showcasing a previously unknown attack vector in language models. We are releasing the full Panza code as well as three new email datasets licensed for research use.
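To give an intuition for the RAG component described above, the following is a minimal, illustrative sketch of retrieving a user's most stylistically similar past emails by cosine similarity over embeddings. The toy embeddings and the `retrieve_top_k` helper are hypothetical stand-ins for exposition, not Panza's actual retrieval pipeline or embedding model:

```python
import numpy as np

def retrieve_top_k(query_vec, email_vecs, k=2):
    """Return indices of the k stored email embeddings most similar
    to the query embedding, by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    m = email_vecs / np.linalg.norm(email_vecs, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity of each stored email to the query
    return np.argsort(-sims)[:k]

# Toy 3-dimensional "embeddings" of three archived emails
# (a real system would use a learned sentence-embedding model).
emails = np.array([[1.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(retrieve_top_k(query, emails))  # indices of the two closest emails
```

The retrieved emails would then be placed in the LLM's prompt as style exemplars, complementing the fine-tuned weights at generation time.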
Submission Number: 81