Exploring Human-AI Perception Alignment in Sensory Experiences: Do LLMs Understand Textile Hand?

ACL ARR 2024 June Submission1627 Authors

14 Jun 2024 (modified: 05 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Aligning LLM behaviour with human intent is critical for future AI. An important yet often overlooked aspect of this alignment is perceptual alignment. Perceptual modalities like touch are more multifaceted and nuanced than other sensory modalities such as vision. This work investigates how well LLMs align with human touch experiences. We designed an interaction in which participants handled two textile samples without seeing them and described the differences between them to the LLM. Using these descriptions, the LLM attempted to identify the target textile by assessing similarity within its high-dimensional embedding space. Our results suggest that a degree of perceptual alignment exists but that it varies significantly across textile samples. Moreover, participants did not feel that the LLM's predictions closely matched their textile experiences. We discuss possible sources of this variance in alignment and how better human-AI perceptual alignment could benefit everyday tasks in the future.
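The identification step described in the abstract, matching a participant's free-text description against candidate textiles in an embedding space, can be sketched roughly as follows. This is a minimal illustration, assuming cosine similarity as the distance measure; the textile names, vectors, and the `identify_target` helper are illustrative placeholders, not the paper's actual data or implementation.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_target(description_emb, candidate_embs):
    # Return the candidate textile whose embedding is most similar
    # to the embedding of the participant's description.
    return max(candidate_embs,
               key=lambda name: cosine_similarity(description_emb,
                                                  candidate_embs[name]))

# Toy 3-dimensional embeddings; real models use hundreds of dimensions.
candidates = {"denim": [0.9, 0.1, 0.2], "silk": [0.1, 0.8, 0.3]}
description = [0.85, 0.15, 0.25]  # hypothetical embedded description
print(identify_target(description, candidates))  # prints "denim"
```

In practice the description and candidate texts would each be passed through the model's embedding endpoint, and the variance the paper reports would show up as some textiles being reliably ranked first while others are not.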
Paper Type: Short
Research Area: Human-Centered NLP
Research Area Keywords: Human-Centered NLP, Ethics, Bias, and Fairness
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1627