Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles

Published: 09 Jul 2025 · Last Modified: 19 Jul 2025
Venue: KDD 2025 Workshop on Prompt Optimization (Poster)
License: CC BY-SA 4.0
Submission Type: Long
Keywords: Prompting Taxonomy, Cognitive Demands, Prompt Optimization, Universal Evaluation
TL;DR: This paper presents the Hierarchical Prompting Taxonomy (HPT), a framework grounded in human cognitive principles that improves LLM performance across diverse tasks and enables cognitively inspired prompt optimization.
Abstract: Assessing how effectively large language models (LLMs) perform different tasks is crucial for understanding their strengths and weaknesses. This paper presents the Hierarchical Prompting Taxonomy (HPT), a framework grounded in human cognitive principles and designed to assess LLMs by examining the cognitive demands of various tasks. The HPT employs the Hierarchical Prompting Framework (HPF), which arranges five distinct prompting strategies in a hierarchy according to the cognitive demands they place on LLMs relative to human mental capabilities. Task complexity is measured with the Hierarchical Prompting Index (HPI), which both reveals the cognitive competencies of LLMs across diverse datasets and quantifies the cognitive demands each dataset places on different LLMs. This approach enables a comprehensive evaluation of an LLM's problem-solving abilities and of a dataset's intricacy, providing a standardized metric for task complexity. Extensive experiments across multiple datasets and LLMs show that HPF improves LLM performance by 2% to 63% over baselines; GSM8k emerges as the most cognitively complex of the reasoning and coding tasks, with an average HPI of 3.20, confirming the effectiveness of HPT.
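
To make the evaluation loop concrete, here is a minimal Python sketch of how an HPF/HPI computation could work as described in the abstract: try the five strategies from least to most cognitively demanding, and record the lowest level at which the model solves each task. The strategy names, the `solve` callable, and the penalty score for tasks unsolved at every level are illustrative assumptions, not details taken from the paper; the abstract states only that five strategies are ordered hierarchically by cognitive demand and that the HPI (e.g., 3.20 for GSM8k) summarizes task complexity.

```python
# Minimal sketch of an HPI computation, assuming (hypothetically) that
# a task solved at level k scores k and unsolved tasks get a penalty.
from typing import Callable, List

# Hypothetical ordering of the five strategies, from lowest to highest
# cognitive demand; the abstract does not name the strategies.
LEVELS: List[str] = [
    "role_prompting",        # level 1
    "zero_shot_cot",         # level 2
    "few_shot_cot",          # level 3
    "least_to_most",         # level 4
    "generated_knowledge",   # level 5
]
PENALTY = len(LEVELS) + 1    # assumed score when no level succeeds


def level_for_task(task: str, solve: Callable[[str, str], bool]) -> int:
    """Return the 1-based index of the least demanding strategy that
    solves `task`, or PENALTY if none does. `solve` is a hypothetical
    callable wrapping an LLM call plus answer checking."""
    for level, strategy in enumerate(LEVELS, start=1):
        if solve(strategy, task):
            return level
    return PENALTY


def hierarchical_prompting_index(tasks: List[str],
                                 solve: Callable[[str, str], bool]) -> float:
    """Average per-task level over a dataset: a lower HPI means the
    dataset places lighter cognitive demands on the model."""
    return sum(level_for_task(t, solve) for t in tasks) / len(tasks)
```

Under these assumptions, a dataset on which most tasks yield to mid-hierarchy strategies would score near 3, consistent in spirit with the GSM8k average of 3.20 reported above.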
Supplementary Material: pdf
Submission Number: 20