Dissecting similarities in self-consistency: An analysis of the impact of semantic consistency on language model reasoning

ACL ARR 2024 April Submission724 Authors

16 Apr 2024 (modified: 23 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: While large language models (LLMs) have rapidly improved performance on a broad range of tasks, they still often fall short on reasoning tasks. Wang et al. (2023) propose self-consistency, finding that sampling multiple rationales before taking a majority vote reliably improves performance across a wide variety of closed-answer reasoning tasks. Standard self-consistency aggregates only the final numerical outputs of these rationales; our work instead incorporates the content of the rationales to identify consensus responses, re-weighting solutions based on patterns found in the vector embeddings of their output sequences.
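The abstract does not specify the exact re-weighting scheme, so the following is only a minimal sketch of the general idea: standard majority-vote self-consistency alongside a variant that weights each sampled answer by how semantically consistent its rationale is with the others. The `embed` function, variable names, and the mean-cosine-similarity weighting are all illustrative assumptions, not the submission's published method.

```python
# Sketch: self-consistency voting, with and without embedding-based re-weighting.
# `embed` is a hypothetical callable mapping a rationale string to a vector
# (e.g., a sentence-embedding model); it is assumed, not specified by the paper.
from collections import Counter, defaultdict

import numpy as np


def majority_vote(answers):
    """Standard self-consistency (Wang et al., 2023): most frequent answer wins."""
    return Counter(answers).most_common(1)[0][0]


def similarity_weighted_vote(rationales, answers, embed):
    """Assumed variant: weight each sample by the mean cosine similarity of its
    rationale embedding to the other rationales, so answers backed by
    semantically consistent reasoning count for more in the vote."""
    vecs = np.stack([embed(r) for r in rationales])        # (n, d)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)    # unit-normalize rows
    sims = vecs @ vecs.T                                   # pairwise cosine sims
    n = len(rationales)
    # Mean similarity to the *other* rationales (drop the self-similarity of 1).
    weights = (sims.sum(axis=1) - 1.0) / (n - 1)
    scores = defaultdict(float)
    for ans, w in zip(answers, weights):
        scores[ans] += w
    return max(scores, key=scores.get)
```

Under this sketch, a numerically correct answer supported by a rationale that diverges from the consensus reasoning contributes less than one whose rationale clusters with the rest of the samples.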
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Language Modeling, Machine Learning for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 724