Federated Prompting and Chain-of-Thought Reasoning for Improving LLMs Answering

Published: 01 Jan 2023, Last Modified: 20 Sept 2024 · KSEM (4) 2023 · CC BY-SA 4.0
Abstract: We investigate how to enhance answer accuracy for frequently asked questions posed by distributed users to cloud-based Large Language Models (LLMs). Our study focuses on a typical situation where users ask similar queries that involve identical mathematical reasoning steps and problem-solving procedures. Because zero-shot prompting of LLMs with standalone questions yields unsatisfactory accuracy, we propose to answer distributed synonymous questions using Self-Consistency (SC) and Chain-of-Thought (CoT) techniques over a crowd-sourced federated question pool. Our methods generate significantly more accurate answers for all user queries without requiring sophisticated model tuning. Through extensive experiments, we demonstrate that our proposed methods significantly enhance answer accuracy by fully exploiting the synonymous nature of the questions and the consistency of the answers.
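To make the abstract's idea concrete, below is a minimal sketch of how federated prompting with SC and CoT might be combined: synonymous questions from the shared pool are each given a CoT trigger, several rationales are sampled per phrasing, and the final answer is chosen by majority vote across all samples. This is an illustrative assumption based only on the abstract, not the authors' implementation; `query_llm`, `extract_final_answer`, the CoT trigger string, and the sampling count are all hypothetical placeholders.

```python
from collections import Counter
from typing import Callable, List

COT_TRIGGER = "Let's think step by step."  # standard zero-shot CoT trigger

def federated_sc_cot_answer(
    synonymous_questions: List[str],
    query_llm: Callable[[str], str],  # hypothetical wrapper around a cloud LLM API
    samples_per_question: int = 3,
) -> str:
    """Answer a pool of synonymous questions with CoT prompting and pick
    the final answer by self-consistency (majority vote over samples)."""
    answers: List[str] = []
    for question in synonymous_questions:
        prompt = f"Q: {question}\nA: {COT_TRIGGER}"
        for _ in range(samples_per_question):
            rationale = query_llm(prompt)  # one sampled chain-of-thought completion
            answers.append(extract_final_answer(rationale))
    # Self-consistency across both resampling and question phrasings:
    # the most frequent final answer wins.
    return Counter(answers).most_common(1)[0][0]

def extract_final_answer(rationale: str) -> str:
    """Naive extraction: treat the last line of the rationale as the answer."""
    return rationale.strip().splitlines()[-1]
```

One design point this sketch highlights: because the pool already contains multiple phrasings of the same underlying problem, the majority vote aggregates consistency signal across users as well as across samples, which is what lets a single voted answer serve every user who asked a synonymous question.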