Can “consciousness” be observed from large language model (LLM) internal states? Dissecting LLM representations obtained from Theory of Mind test with Integrated Information Theory and Span Representation analysis

Published: 01 Sept 2025 | Last Modified: 24 Nov 2025 | Natural Language Processing Journal | CC BY-SA 4.0
Highlights:
• We explored LLM representations through the lens of consciousness studies.
• We used a triangulated method to observe "consciousness" in LLM representations.
• Findings stem from multiple LLM layers, diverse stimuli spans, and various ToM tasks.
• No strong evidence of "consciousness", but intriguing patterns via spatio-permutation.