Optimizing Few-Shot Learning: From Static to Adaptive in Qwen2-7B

02 Aug 2024 (modified: 05 Aug 2024) · KDD 2024 Workshop: Amazon KDD Cup Submission · CC BY 4.0
Keywords: Recommender Systems, In-context Learning, KDD Cup
Abstract: Under resource-constrained conditions, our research team focused on two primary algorithms: Qwen2-7B-Few-Shots and Qwen2-7B-Adaptive-Few-Shots. Both were developed against the official development-set benchmarks and our own proprietary benchmarks for Track 1 and Track 2, and evaluated on the KDD Cup leaderboard. The Qwen2-7B-Few-Shots algorithm leveraged in-context learning, specifically analogical few-shot prompting. It progressed from an initial baseline through multi-agent strategies (ToT and AutoReact) to end-to-end Chain-of-Thought (CoT) few-shot learning. Experimental results demonstrated the efficacy of static few-shot prompting. The Qwen2-7B-Adaptive-Few-Shots algorithm focused on adaptive exemplar selection; although time constraints prevented its inclusion on the leaderboard, future directions include batch inference and data collection via a custom web crawler. Overall, the project evolved from an initial knowledge-graph RAG (Father + Child) to end-to-end few-shot learning, and finally to end-to-end CoT few-shot learning. Future work encompasses: 1) fine-tuning to enhance performance; 2) improving the CoT format with SymbCoT; 3) accelerating adaptive RAG through batch RAG and RAG reranking, complemented by other methods such as vector databases.
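The two strategies the abstract contrasts can be sketched as follows: static CoT few-shot prompting fixes the exemplar set in advance, while the adaptive variant retrieves query-relevant exemplars before building the prompt. This is a minimal illustrative sketch, not the submission's implementation; all exemplar data, function names, and the token-overlap similarity (a crude stand-in for the vector-database retrieval the abstract proposes) are assumptions.

```python
# Hypothetical sketch: static CoT few-shot prompt construction plus
# adaptive exemplar selection. Exemplars and scoring are illustrative only.

def build_cot_prompt(exemplars, query):
    """Concatenate (question, reasoning, answer) exemplars, then the query,
    leaving the model to continue from the final 'Reasoning:' cue."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"A: {ex['answer']}\n"
        )
    parts.append(f"Q: {query}\nReasoning:")
    return "\n".join(parts)

def select_adaptive_exemplars(pool, query, k=2):
    """Rank the exemplar pool by token overlap with the query; a real
    system would use embedding similarity against a vector database."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        pool,
        key=lambda ex: len(q_tokens & set(ex["question"].lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical shopping-domain exemplars.
pool = [
    {"question": "Is this phone case compatible with model X?",
     "reasoning": "The listing names model X explicitly.", "answer": "yes"},
    {"question": "What category does a yoga mat belong to?",
     "reasoning": "Yoga mats are exercise equipment.", "answer": "sports"},
    {"question": "Is this charger compatible with model Y?",
     "reasoning": "The listing does not mention model Y.", "answer": "no"},
]

query = "Is this screen protector compatible with model X?"
chosen = select_adaptive_exemplars(pool, query, k=2)
prompt = build_cot_prompt(chosen, query)
```

In the static setting the same `exemplars` list is passed for every query; in the adaptive setting `select_adaptive_exemplars` runs first, which is where the abstract's batch-RAG and reranking speedups would apply.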
Submission Number: 9