A Critical Survey on LLM Deployment Paradigms: Assessing Usability and Cognitive Behavioral Aspects

ACL ARR 2024 June Submission 4186 Authors

16 Jun 2024 (modified: 17 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Over the last decade, a wide range of training and deployment strategies for Large Language Models (LLMs) has emerged. Among these, the prompting paradigm of Auto-Regressive LLMs (AR-LLMs) has catalyzed a significant surge in adoption. This paper examines the factors underlying the success of AR-LLM prompting. We summarize and focus on six distinct task-oriented channels, such as numeric prefixes and free-form text, across diverse deployment paradigms. By centering the analysis on these channels, we assess the paradigms along crucial dimensions, including task customizability, transparency, and complexity, to gauge LLM usability. The results underscore the importance of free-form contexts as user-directed channels for downstream deployment. Moreover, we examine how free-form verbal inputs and outputs, used as contexts, stimulate diverse cognitive behaviors in LLMs. We detail four common cognitive behaviors to show how AR-LLM prompting successfully imitates human-like behaviors under the free-form modality and channel.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: LLMs, Prompting, Cognitive Behaviors
Contribution Types: Model analysis & interpretability, Position papers, Surveys
Languages Studied: English
Submission Number: 4186