"I think I could probably use Large Language Models to solve my tasks." Detecting Client Motivational Language in Psychotherapy
Abstract: Understanding the client's motivation is crucial for successful therapy. When met with resistance, therapists are advised to soften it first rather than persisting with goal-related actions, which risks rupturing rapport. Motivational Interviewing is one such approach: the client's utterances are coded as being for or against a certain behaviour change, together with their commitment strength. Yet fewer than 200 samples are labelled with strength values. Recently, Large Language Models (LLMs) have shown impressive few-shot learning capabilities. We compare in-context learning (ICL) and instruction fine-tuning (IFT) with varying training sizes. Our experiments show that both approaches can learn in low-resource settings and are sensitive to instruction formatting.
Still, IFT is cheaper, more stable with respect to prompt choice, and yields better performance with more data. However, when the label distribution is so heavily imbalanced that the models are unable to learn, ICL is preferred because it can exploit the LLMs more effectively.
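To make the ICL setup concrete, here is a minimal sketch of few-shot prompting for coding client utterances, assuming a generic text-completion LLM. The label names, demonstrations, instruction wording, and the `llm` callable are hypothetical illustrations, not the paper's actual prompts or data.

```python
# Sketch of the in-context-learning (ICL) setup: client utterances are
# labelled as for ("change talk") or against ("sustain talk") a behaviour
# change, together with a commitment strength. All labels, examples, and
# the `llm` callable below are hypothetical placeholders.
from typing import Callable, List, Tuple

def build_icl_prompt(
    demonstrations: List[Tuple[str, str]],  # (utterance, label) pairs
    query: str,
    instruction: str = (
        "Label the client's utterance as CHANGE or SUSTAIN talk and rate "
        "commitment strength from 1 (low) to 3 (high)."
    ),
) -> str:
    """Format a few-shot prompt: instruction, labelled examples, then the query."""
    lines = [instruction, ""]
    for utterance, label in demonstrations:
        lines.append(f"Utterance: {utterance}\nLabel: {label}\n")
    lines.append(f"Utterance: {query}\nLabel:")
    return "\n".join(lines)

# Usage with any text-completion function `llm: Callable[[str], str]`:
demos = [
    ("I really want to cut down on drinking.", "CHANGE, strength=3"),
    ("I don't see why I should change anything.", "SUSTAIN, strength=2"),
]
prompt = build_icl_prompt(demos, "I think I could probably quit next month.")
# prediction = llm(prompt)  # plug in any LLM client here
print(prompt)
```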
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English