What Comes to Mind? Interpretable Dimensions in Embedding Space Predict Human Ad Hoc Category Construction
Keywords: ad hoc categories, category generation, cognitive modeling, distributional semantics, word embeddings, fastText, interpretability, linear probes, elastic-net GLM, sparse logistic regression, representation similarity analysis, compositionality, leave-one-base-out transfer
Abstract: Humans rapidly construct ad hoc categories—e.g., “vegetables for painting”—by recruiting task-relevant properties and retrieving items that score highly on them. We test whether this behavior can be predicted directly from off-the-shelf word embeddings. Across 20 composite categories, we fit per-category elastic-net binomial GLMs over fastText dimensions and evaluate on item-mention probabilities. A sparse linear readout predicts human behavior with strong aggregate accuracy (r = 0.699 across N = 3458 pairs; Brier = 0.0049) and is well calibrated (ECE = 0.0198, improving slightly with an intercept-only adjustment). Beyond per-category fits, we frame leave-one-base-out (LOBO) as a retrieval transfer test: we learn a single modifier direction from non-target bases and blend it with the held-out base using a mixing weight γ tuned for average precision. This yields coherent semantic shifts and small, mixed retrieval gains (mean ∆AP = 0.0051, median 0.0025; 55.0% of 20 categories improved; mean ∆P@10 = 0.0150). The learned ad hoc axes align with human-rated properties (mRSA up to R² = 0.227, median R² = 0.157), supporting interpretability. Overall, simple, interpretable shifts in embedding space capture key regularities in what comes to mind under situational constraints.
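The LOBO transfer step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `blend_and_rank` and the random toy vectors are hypothetical stand-ins (real use would load fastText embeddings for the base noun, the learned modifier direction, and candidate items), and the convex blend with weight γ is one plausible reading of "blend it with the held-out base using a mixing weight γ".

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 300  # fastText dimensionality

# Toy stand-ins for real embeddings (hypothetical data):
base_vec = rng.normal(size=dim)         # held-out base, e.g. "vegetables"
modifier_dir = rng.normal(size=dim)     # direction learned from non-target bases
item_vecs = rng.normal(size=(50, dim))  # candidate item embeddings

def blend_and_rank(base, direction, items, gamma):
    """Shift the base embedding toward the modifier direction with
    mixing weight gamma, then rank items by cosine similarity to the
    blended query (higher similarity = earlier in the ranking)."""
    query = (1 - gamma) * base + gamma * direction
    query = query / np.linalg.norm(query)
    items_n = items / np.linalg.norm(items, axis=1, keepdims=True)
    scores = items_n @ query
    return np.argsort(-scores)  # item indices, best first

ranking = blend_and_rank(base_vec, modifier_dir, item_vecs, gamma=0.3)
```

In the paper's setup, γ would then be tuned on held-in categories to maximize average precision of this ranking against human item-mention data.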
Submission Number: 115