Evaluating steering techniques using human similarity judgments

19 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: cognitive science, transformers, large language models, human-AI alignment, human-centered AI, steering, cognitive benchmarking
TL;DR: Using a cognitive science inspired evaluation paradigm, we evaluate the effectiveness of various LLM representation-steering methods for inducing more human-like semantic alignment.
Abstract: Current evaluations of Large Language Model (LLM) steering techniques focus on task-specific performance, overlooking how well steered representations align with human cognition. Using a well-established triadic similarity judgment task, we assessed steered LLMs on their ability to flexibly judge similarity between concepts based on either size or kind. We found that prompt-based steering outperformed other methods in both steering accuracy and model-to-human alignment. We also found that LLMs were biased toward "kind" similarity and struggled to align on "size". This evaluation approach, grounded in human cognition, adds further support for the efficacy of prompt-based steering and reveals privileged representational axes present in LLMs prior to steering.
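The abstract's notion of model-to-human alignment on triadic judgments can be sketched as a simple agreement score. This is a hypothetical illustration, not the paper's actual evaluation code: the triad format, the 0/1 choice encoding, and the function name `triad_alignment` are all assumptions.

```python
# Hypothetical sketch: score how often a model's triadic similarity
# choices match human choices. In a triadic task, each trial presents
# an anchor concept and two options; the judge picks the option more
# similar to the anchor (encoded here as 0 or 1).

def triad_alignment(model_choices, human_choices):
    """Fraction of triads where the model picks the same option as humans."""
    assert len(model_choices) == len(human_choices) and model_choices
    matches = sum(m == h for m, h in zip(model_choices, human_choices))
    return matches / len(model_choices)

# Toy example: model and humans agree on 3 of 4 triads.
model = [0, 1, 1, 0]
human = [0, 1, 0, 0]
print(triad_alignment(model, human))  # → 0.75
```

The same score computed separately for "size"-cued and "kind"-cued triads would expose the kind-over-size bias the abstract reports.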
Supplementary Material: pdf
Primary Area: interpretability and explainable AI
Submission Number: 21578