Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks

Published: 06 Nov 2024, Last Modified: 06 Jan 2025, NLDL 2025 Poster, CC BY 4.0
Keywords: human-machine alignment, convexity, deep neural networks, representation learning
Abstract: Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems. Motivated by theories of human cognition, this study examines the relationship between convexity in neural network representations and human-machine alignment based on behavioral data. We identify a correlation between these two dimensions in pretrained and fine-tuned vision transformer models. Our findings suggest that the convex regions formed in the latent spaces of neural networks align, to some extent, with human-defined categories and reflect the similarity relations humans use in cognitive tasks. While optimizing for alignment generally enhances convexity, increasing convexity through fine-tuning yields inconsistent effects on alignment, suggesting a complex relationship between the two. This study presents a first step toward understanding the relationship between the convexity of latent representations and human-machine alignment.
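As a rough illustration of the kind of convexity measure the abstract refers to, the sketch below scores how often points sampled on straight segments between same-class latent vectors remain nearest to that class. This is a minimal, hypothetical metric: the paper's exact convexity definition is not stated here, and the function name and parameters are assumptions for illustration.

```python
import numpy as np

def convexity_score(Z, labels, n_pairs=200, n_steps=5, seed=0):
    """Estimate Euclidean convexity of class regions in a latent space Z.

    For random same-class pairs, sample interior points along the straight
    segment between them and check whether each interpolant's nearest
    neighbor in Z carries the same class label. Returns the fraction of
    interpolants for which this holds (1.0 = perfectly convex regions).
    NOTE: illustrative only; not the paper's exact metric.
    """
    rng = np.random.default_rng(seed)
    Z = np.asarray(Z, dtype=float)
    labels = np.asarray(labels)
    hits, total = 0, 0
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        for _ in range(n_pairs):
            i, j = rng.choice(idx, size=2, replace=False)
            # interior points only (exclude the endpoints themselves)
            for t in np.linspace(0.0, 1.0, n_steps + 2)[1:-1]:
                p = (1 - t) * Z[i] + t * Z[j]
                nn = np.argmin(np.linalg.norm(Z - p, axis=1))
                hits += int(labels[nn] == c)
                total += 1
    return hits / total
```

On two well-separated clusters this score approaches 1; entangled, non-convex class regions drive it toward chance level.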
Submission Number: 22