“I Am the One and Only, Your Cyber BFF”: Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI
Blogpost Url: https://d2jud02ci9yv69.cloudfront.net/2025-04-28-anthropomorphic-ai-116/blog/anthropomorphic-ai/
Abstract: Many state-of-the-art generative AI (GenAI) systems are increasingly prone to anthropomorphic behaviors, i.e., to generating outputs that are perceived to be human-like. While scholars have increasingly raised concerns about the possible negative impacts such anthropomorphic AI systems can give rise to, anthropomorphism in AI development, deployment, and use remains vastly overlooked, understudied, and underspecified. In this blog post, we argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI, and outline a call to action.
Conflict Of Interest: Below are the papers we cite whose authors are recent collaborators or are at the same institutions as us.
- AnthroScore: A Computational Linguistic Measure of Anthropomorphism. Cheng, M., Gligoric, K., Piccardi, T. and Jurafsky, D., 2024. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 807--825. Association for Computational Linguistics.
- Mirages: On anthropomorphism in dialogue systems. Abercrombie, G., Curry, A.C., Dinkar, T., Rieser, V. and Talat, Z., 2023. arXiv preprint arXiv:2305.09800.
- The ethics of advanced AI assistants. Gabriel, I., Manzini, A., Keeling, G., Hendricks, L.A., Rieser, V., Iqbal, H., Tomasev, N., Ktena, I., Kenton, Z., Rodriguez, M. and others, 2024. arXiv preprint arXiv:2404.16244.
- Social simulacra: Creating populated prototypes for social computing systems. Park, J.S., Popowski, L., Cai, C., Morris, M.R., Liang, P. and Bernstein, M.S., 2022. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pp. 1--18.
- Generative agents: Interactive simulacra of human behavior. Park, J.S., O'Brien, J., Cai, C.J., Morris, M.R., Liang, P. and Bernstein, M.S., 2023. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1--22.
- All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI. Akbulut, C., Weidinger, L., Manzini, A., Gabriel, I. and Rieser, V., 2024. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Vol. 7, pp. 13--26.
- Responsible AI Research Needs Impact Statements Too. Olteanu, A., Ekstrand, M., Castillo, C. and Suh, J., 2023. arXiv preprint arXiv:2311.11776.
- Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets. Blodgett, S.L., Lopez, G., Olteanu, A., Sim, R. and Wallach, H., 2021. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1004--1015. Association for Computational Linguistics. DOI: 10.18653/v1/2021.acl-long.81.
- Fairness and Machine Learning: Limitations and Opportunities. Barocas, S., Hardt, M. and Narayanan, A., 2023. MIT Press.
- The Trouble with Bias. Crawford, K., 2017. NeurIPS Keynote (not a paper).
- Measurement and Fairness. Jacobs, A.Z. and Wallach, H., 2021. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
- How different groups prioritize ethical values for responsible AI. Jakesch, M., Buçinca, Z., Amershi, S. and Olteanu, A., 2022. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 310--323.
- "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust. Kim, S.S.Y., Liao, Q.V., Vorvoreanu, M., Ballard, S. and Vaughan, J.W., 2024. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 822--835.
- Mimetic Models: Ethical Implications of AI That Acts Like You. McIlroy-Young, R., Kleinberg, J., Sen, S., Barocas, S. and Anderson, A., 2022. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pp. 479--490.
- Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models. Zhou, K., Jurafsky, D. and Hashimoto, T., 2023. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5506--5524. Association for Computational Linguistics. DOI: 10.18653/v1/2023.emnlp-main.335.
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S., 2021. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610--623.
Submission Number: 74