Conference Proceedings

Human or LLM? Distinguishing Online Comments by Emotion and Tone

Abstract

This study examines whether people can distinguish online news comments generated by large language models (LLMs) from those written by humans, comparing multiple prompting conditions. LLMs can generate fluent, emotionally expressive text that mimics human writing, making such distinctions increasingly difficult. This raises concerns for online news platforms, where comment sections play a significant role in shaping public opinion and decision-making; the widespread posting of LLM-generated comments therefore poses potential risks to the trustworthiness of these spaces. To examine this issue, we conducted two experiments in which participants attempted to identify LLM-generated comments under different prompting conditions. Participants identified LLM-generated comments correctly at rates of 33.5% and 44.8%, while human-written comments were identified correctly at rates of 62.9% and 68.9%. These findings suggest that participants relied on emotion and tone as key cues: neutral and objective comments were more likely to be perceived as LLM-generated, comments written in a frank tone were often judged as human-written, and comments in a polite tone were more likely to be classified as LLM-generated.

Information

Location

Taipei, Taiwan (National Taiwan Normal University)

Citation

Nanase Mogi, Yutaka Morino, Megumi Yasuo, Mitsunori Matsushita. Human or LLM? Distinguishing Online Comments by Emotion and Tone.