Large language models act as if they are part of a group

Published: 01 Jan 2025 · Last Modified: 08 Jul 2025 · Nat. Comput. Sci. 2025 · CC BY-SA 4.0
Abstract: An extensive audit of large language models reveals that numerous models mirror the ‘us versus them’ thinking seen in human behavior. These social biases are likely learned from biased content in the training data.