Towards finding consensus about the similarity of symbolic encodings associated with concepts between LLMs and the human brain
Keywords: cognitive science; symbolic; human cognition; LLMs; similarity measures
TL;DR: I attempt to determine whether there is a consensus about the similarity of symbolic encodings between LLMs and the human brain by drawing on cognitive science studies and LLM-focused works.
Abstract: Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have shown remarkable improvements in performance across a variety of natural language understanding and multimodal understanding tasks. Recent works evaluate representations, alignment, various types of reasoning, and grounding across text, video, and audio inputs on tasks assessing LLMs and MLLMs. Recent LLMs with "reasoning" or "thinking" phases generate reasoning traces (Chains-of-Thought, or CoTs), enacting inference-time decision-making by "thinking" before producing a final response Feng et al. [2025], and have opened new directions inspired by Kahneman [2011]. This approach leverages reinforcement-learning-based fine-tuning with reward signals from variants of reward models while scaling up test-time compute. This paper revisits previously examined individual findings from several works, including Silver and Mitchell [2023], Pavlick [2023], Shani et al. [2025], Geh et al. [2024b], Opedal et al. [2024], and Saparov and He [2023]. It attempts to determine whether there is a consensus about the similarity of symbolic encodings between LLMs and the human brain. Here, symbolic encodings refer to the alignment of symbols (words and sentences) with concepts, conceptual categories, and conceptual structures.
Submission Number: 68