Publication highlights detail

Adverse reactions to the use of large language models in social interactions

Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, Urs Fischbacher, Gerelt Tserenjigmid

PNAS Nexus 4 (2025): pgaf112

https://doi.org/10.1093/pnasnexus/pgaf112

Large language models (LLMs) are poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make many human decisions, there is limited understanding of how individuals will respond to its use in social interactions. In particular, it remains unclear how individuals interact with LLMs when the interaction has consequences for other people. Here, we report the results of a large-scale, preregistered online experiment (n = 3,552) showing that human players' fairness, trust, trustworthiness, cooperation, and coordination in economic two-player games decrease when the decision of the interaction partner is taken over by ChatGPT. In contrast, we observe no adverse reactions when individuals are uncertain whether they are interacting with a human or an LLM. At the same time, participants often delegate decisions to the LLM, especially when the model's involvement is not disclosed, and individuals have difficulty distinguishing between decisions made by humans and those made by AI.

© 2025, Attribution 4.0 International (CC BY 4.0)
