Cluster-Norm for Unsupervised Probing of Knowledge

Published: 24 Jun 2024, Last Modified: 15 Jul 2024 · ICML 2024 MI Workshop Poster · CC BY 4.0
Keywords: Interpretability, Unsupervised Probing, Eliciting Latent Knowledge, Clustering, Machine Learning
TL;DR: We improve the reliability of unsupervised probes in language models by cluster-normalizing activations, minimizing the impact of distracting salient features and thereby enhancing the accuracy of knowledge extraction.
Abstract: The deployment of language models brings challenges in generating reliable information, especially when these models are fine-tuned using human preferences. To extract encoded knowledge without (potentially) biased human labels, unsupervised probing techniques like Contrast-Consistent Search (CCS) have been developed (Burns et al., 2022). However, salient but unrelated features in a given dataset can mislead these probes (Farquhar et al., 2023). Addressing this, we propose a cluster normalization method to minimize the impact of such features by clustering and normalizing activations of contrast pairs before applying unsupervised probing techniques. While this approach does not address the issue of differentiating between knowledge in general and simulated knowledge—a major issue in the literature of latent knowledge elicitation (Christiano and Xu, 2021)—it significantly improves the ability of unsupervised probes to identify the intended knowledge amidst distractions.
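The abstract's core preprocessing step—cluster activations, then normalize within each cluster before running an unsupervised probe—can be sketched as follows. This is a minimal illustration, not the authors' implementation: the k-means routine, the z-score normalization, and all hyperparameters (`n_clusters`, `n_iters`) are assumptions for the sake of the example.

```python
import numpy as np

def cluster_normalize(acts, n_clusters=2, n_iters=10, seed=0):
    """Cluster activations with a small k-means, then z-normalize each
    cluster independently, so cluster-level (salient but unrelated)
    structure is removed before unsupervised probing such as CCS.

    acts: (n_samples, hidden_dim) array of contrast-pair activations.
    Returns the normalized activations and the cluster labels.
    """
    acts = np.asarray(acts, dtype=float)
    rng = np.random.default_rng(seed)

    # Plain k-means: random initial centroids, alternate assign/update.
    centroids = acts[rng.choice(len(acts), n_clusters, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(acts[:, None, :] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            members = labels == k
            if members.any():
                centroids[k] = acts[members].mean(axis=0)

    # Normalize each cluster to zero mean and unit variance.
    out = acts.copy()
    for k in np.unique(labels):
        members = labels == k
        mu = out[members].mean(axis=0)
        sd = out[members].std(axis=0) + 1e-8  # avoid division by zero
        out[members] = (out[members] - mu) / sd
    return out, labels
```

After this normalization, an unsupervised probe fitted on `out` can no longer latch onto the between-cluster offsets that would otherwise dominate the activation geometry.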
Supplementary Material: zip
Submission Number: 136