Dissecting similarities in self-consistency: An analysis on impact of semantic consistency on language model reasoning

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: While large language models (LLMs) have rapidly improved performance on a broad range of tasks, they still often fall short on reasoning tasks. \citet{wang2023selfconsistency} propose \textit{self-consistency}, finding that sampling multiple rationales before taking a majority vote stably improves performance across a wide variety of closed-answer reasoning tasks. Standard self-consistency aggregates only the final answers of these rationales; our work instead incorporates the content of the rationales to identify consensus responses, re-weighting solutions based on patterns found in the vector embeddings of their sequence outputs. In doing so, we analyze and evaluate the effect of consistent reasoning paths, beyond the traditional focus on final answers, while improving accuracy on common benchmarks by weighting votes toward semantically consistent answers.
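The abstract's voting scheme can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function here is a toy bag-of-words stand-in for whatever sentence embeddings the paper actually uses, and the mean-pairwise-similarity weighting is one plausible reading of "re-weighting solutions based on patterns found in their vector embeddings".

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words embedding (hypothetical stand-in for a learned
    # sentence-embedding model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def weighted_self_consistency(samples):
    """samples: list of (rationale, answer) pairs sampled from the LLM.

    Instead of a flat majority vote over answers, each sample's vote is
    weighted by the mean similarity of its rationale to all other
    rationales, so answers supported by semantically consistent
    reasoning paths count more.
    """
    vecs = [embed(rationale) for rationale, _ in samples]
    n = len(samples)
    scores = {}
    for i, (_, answer) in enumerate(samples):
        weight = sum(cosine(vecs[i], vecs[j])
                     for j in range(n) if j != i) / max(n - 1, 1)
        scores[answer] = scores.get(answer, 0.0) + weight
    return max(scores, key=scores.get)
```

For example, two samples answering "7" with similar rationales outweigh a single "8" whose rationale diverges from the rest, whereas under plain majority voting the margin would rest on answer counts alone.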
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English