Abstract
This study examined individuals' attitudes toward texts generated by large language models (LLMs), such as social networking service posts and news comments. The number of people who encounter LLM-generated texts has increased in recent years. Because LLMs can generate natural texts that are almost indistinguishable from those written by humans, there is concern that such texts may cause problems, such as maliciously influencing public opinion. To evaluate how LLM-generated texts are received, we conducted an experiment based on the hypothesis that knowing a text was generated by an LLM influences how users accept it. In the experiment, participants were shown news comments that included AI-generated comments. We controlled whether participants were aware that a comment had been generated by an LLM and assessed their evaluations from four perspectives: familiarity, reliability, empathy, and informativeness. The results showed that a generated comment imitating the opinion of an expert rose in rank when it was disclosed that the comment had been generated by an LLM. In particular, reliability and informativeness were sensitive to this disclosure, whereas familiarity and empathy were not. These results suggest that expert labeling significantly enhances perceived reliability, raising concerns that news viewers may be implicitly guided toward a particular opinion.
Information
Journal title
International Journal of Activity and Behavior Computing
Volume
2025
Pages
1-13
Date of issue
2025/11/28
DOI
https://doi.org/10.60401/ijabc.122
Citation
Nanase Mogi, Megumi Yasuo, Yutaka Morino, Mitsunori Matsushita. Analysis of the changes in the attitude of the news comments caused by knowing that the comments were generated by a large language model, International Journal of Activity and Behavior Computing, Vol.2025, No.3, pp.1-13, 2025.