Abstract: In-context learning (ICL) with large language models (LLMs) has become the tool of choice for many natural language processing tasks. However, how the text style of in-context examples influences LLM performance remains under-explored. This paper presents a novel and effective approach, named \textbf{AlignedCoT}, that improves the reasoning capability of LLMs by aligning in-context examples with the native style of the LLM, where ``native'' refers to the model's inherent generation style as probed in zero-shot scenarios. We conduct extensive experiments on several mathematical question-answering and commonsense-reasoning benchmarks. The empirical results demonstrate that AlignedCoT significantly outperforms carefully handcrafted demonstrations: with AlignedCoT, we observe an average +3.2\% improvement for \texttt{gpt-3.5-turbo} over handcrafted CoT on multi-step reasoning benchmarks. Furthermore, using AlignedCoT to rewrite the CoT text style of the training set improves the performance of retrieval-augmented generation by 3.6\%.
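The core mechanism described above, probing the model's native reasoning style in a zero-shot setting and reusing the resulting rationales as in-context demonstrations, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the \texttt{openai} Python client (v1.x) and \texttt{gpt-3.5-turbo}, the helper names and seed questions are hypothetical, and any filtering or correction the paper may apply to the probed rationales is omitted.

```python
# Minimal sketch of the AlignedCoT idea: probe the model's native
# zero-shot chain-of-thought style, then reuse those native-style
# rationales as few-shot demonstrations. Helper names and prompts
# are hypothetical placeholders, not the paper's exact procedure.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def zero_shot_cot(question: str) -> str:
    """Elicit the model's native reasoning style via zero-shot CoT."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"{question}\nLet's think step by step.",
        }],
        temperature=0,
    )
    return response.choices[0].message.content

def build_aligned_demos(seed_questions: list[str]) -> str:
    """Collect native-style rationales to serve as in-context examples."""
    demos = []
    for q in seed_questions:
        rationale = zero_shot_cot(q)
        demos.append(f"Q: {q}\nA: {rationale}")
    return "\n\n".join(demos)

def answer_with_aligned_cot(demos: str, question: str) -> str:
    """Answer a new question using the style-aligned demonstrations."""
    prompt = f"{demos}\n\nQ: {question}\nA:"
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
```

The intended contrast with standard few-shot CoT is that the demonstrations here are generated by the model itself rather than handwritten, so their wording matches the distribution the model naturally produces.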
Paper Type: long
Research Area: Question Answering
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English