Keywords: human-centered alignment, brain-AI representational alignment, human-AI alignment
TL;DR: A short survey seeking consensus across two bodies of work: human-centered studies of LLM-human alignment, and studies of representational alignment between LLMs and human neural activity.
Abstract: Large Language Models (LLMs), Vision Language Models (VLMs), and Multimodal Large Language Models (MLLMs) have shown impressive performance across various Natural Language Understanding and Multimodal Understanding tasks, while approaching or exceeding human performance on physical/spatial intelligence tasks. This work brings together various alignment studies to seek consensus on which alignments and divergences are desirable and which are not. Two broad categories of alignment studies are included: human-AI alignment and human brain-AI representational alignment. Studies in these categories may evaluate alignment or divergence on the basis of specific tasks, applications, evaluations, and hypotheses, across data types such as text, image, audio, and video inputs to LLMs, VLMs, and MLLMs. This survey finds that insights from human brain-AI representational alignment may support better human-centered design and human-AI alignment, and it outlines potential research questions and directions. The key finding, however, is a lack of sufficient consensus on alignment, given disagreements within and across the two categories arising from both undesirable divergences and undesirable alignment.
Submission Number: 95