Conference Proceedings

Checklist-Prompted Feature Extraction for Interpretable and Robust Claim Check Worthiness Prediction

Abstract

This study explores the use of Large Language Models (LLMs) for Claim Check Worthiness Prediction (CCWP), a critical first step in fact-checking. Building on prior research, we propose a method that uses structured checklists to break CCWP down into interpretable and manageable subtasks. In our approach, an LLM answers 52 human-crafted questions for each claim, and the resulting responses are used as features for traditional supervised learning models. Experiments across six datasets show that our method consistently improves performance on key evaluation metrics, including accuracy and F1 score, surpassing few-shot prompting baselines on most datasets. Moreover, our method enhances the stability of LLM outputs, reducing sensitivity to prompt design. These findings suggest that LLM-based feature extraction guided by structured checklists offers a promising direction for more reliable and efficient claim prioritization in fact-checking systems.
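To make the pipeline concrete, the sketch below illustrates the general idea rather than the authors' implementation: a hypothetical ask_llm() stands in for the LLM call, three illustrative questions stand in for the paper's 52-question checklist, and scikit-learn's LogisticRegression stands in for the traditional supervised model trained on the answer features.

```python
# Minimal sketch of checklist-prompted feature extraction (illustrative only).
from sklearn.linear_model import LogisticRegression

CHECKLIST = [  # illustrative questions, not the paper's actual 52
    "Does the claim state a verifiable fact?",
    "Does the claim include a specific number or statistic?",
    "Could the claim harm the public if it were false?",
]

def ask_llm(claim: str, question: str) -> str:
    """Stand-in for an LLM API call that answers 'yes' or 'no'.
    A trivial keyword heuristic substitutes for the model here."""
    return "yes" if any(ch.isdigit() for ch in claim) else "no"

def extract_features(claim: str) -> list[int]:
    """Map each checklist answer to a binary feature for the claim."""
    return [
        1 if ask_llm(claim, q).strip().lower().startswith("yes") else 0
        for q in CHECKLIST
    ]

# Toy training data: 1 = check-worthy, 0 = not check-worthy.
claims = ["Unemployment fell 3% last year.", "I love this city."]
labels = [1, 0]
X = [extract_features(c) for c in claims]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([extract_features("GDP grew by 5% in 2024.")]))  # -> [1]
```

Because the classifier operates on the checklist answers rather than raw LLM generations, each prediction can be traced back to which questions the claim satisfied, which is the source of the interpretability and prompt-robustness the paper reports.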

Information

Book title

First Workshop on Misinformation Detection in the Era of LLMs, in Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media

Date of issue

2025/06/06

Date of presentation

2025/06/23

Location

Copenhagen, Denmark

DOI

10.36190/2025.23

Citation

Yuka Teramoto, Takahiro Komamizu, Mitsunori Matsushita, and Kenji Hatano. Checklist-Prompted Feature Extraction for Interpretable and Robust Claim Check Worthiness Prediction. In First Workshop on Misinformation Detection in the Era of LLMs, Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media, No. 1, 2025.