Chain Of Thought Prompting Under Streaming Batch: A Case Study

01 Mar 2023 (modified: 01 Jun 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: Large Language Model
Abstract: Recently, Large Language Models (LLMs) have demonstrated remarkable capabilities. Chain-of-Thought (CoT) prompting has been proposed as a way of assisting LLMs in performing complex reasoning. However, developing effective prompts can be a challenging and labor-intensive task. Many studies propose ways to automatically construct CoT prompts from test data. Most of them assume that all test data is visible before testing and select only a small subset to generate rationales, which is an unrealistic assumption. In this paper, we present a case study on how to construct and optimize chain-of-thought prompting using batch data in streaming settings.
TL;DR: We present a case study on how to construct and optimize chain-of-thought prompting using batch data in streaming settings.
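The streaming setting described in the abstract can be sketched as follows. This is an illustrative outline only, not the paper's actual method: the `generate_rationale` stub stands in for a real LLM call, and the longest-question selection heuristic is a hypothetical placeholder for whatever selection strategy the case study uses. The key idea shown is that the demonstration pool is updated incrementally per batch rather than built from the full test set up front.

```python
from collections import deque


def generate_rationale(question: str) -> str:
    """Hypothetical stand-in for an LLM call that produces a
    chain-of-thought rationale for `question`."""
    return f"Let's think step by step about: {question}"


class StreamingCoTPrompt:
    """Maintain a bounded pool of CoT demonstrations that is updated
    as test batches arrive, instead of assuming all test data is
    visible before testing."""

    def __init__(self, max_demos: int = 4):
        # Oldest demonstrations are evicted once the pool is full.
        self.demos = deque(maxlen=max_demos)

    def update(self, batch: list[str]) -> None:
        # Placeholder selection heuristic: treat the longest question
        # in the batch as the most informative demonstration.
        candidate = max(batch, key=len)
        self.demos.append((candidate, generate_rationale(candidate)))

    def build_prompt(self, question: str) -> str:
        # Prepend the current demonstrations to the new question.
        demo_text = "\n\n".join(f"Q: {q}\nA: {r}" for q, r in self.demos)
        return f"{demo_text}\n\nQ: {question}\nA:"


# Each incoming batch both updates the demonstration pool and is
# answered with the prompt built from the demonstrations so far.
pool = StreamingCoTPrompt(max_demos=2)
batches = [
    ["What is 2+2?"],
    ["If a train travels 60 mph for 2 hours, how far does it go?"],
]
for batch in batches:
    pool.update(batch)
    prompt = pool.build_prompt(batch[0])
```

Because the pool is a fixed-size deque, the prompt stays bounded in length no matter how many batches stream in, at the cost of forgetting older demonstrations.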