Reasoning for fact verification using language models


Date

2024-02

Publisher

University of New Brunswick

Abstract

In response to the proliferation of misinformation on social media platforms, this thesis introduces the Triple-R framework (Retriever, Ranker, Reasoner) to enhance fact-checking by leveraging the Web for evidence retrieval and generating understandable explanations for its decisions. Unlike existing methods, Triple-R incorporates external sources as evidence and provides explanations for datasets that lack them. By fine-tuning a causal language model, it produces natural language explanations and labels for evidence-claim pairs, aiming for greater transparency and interpretability in fact-checking systems. Evaluated on the widely used LIAR benchmark, Triple-R achieved a state-of-the-art accuracy of 42.72%, outperforming existing automated fact verification methods. These results underscore its effectiveness in integrating web sources with clear, human-readable reasoning, marking a significant step forward in the fight against online misinformation.
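
The abstract describes a three-stage pipeline (Retriever, Ranker, Reasoner). The following is a minimal illustrative sketch of such a pipeline, assuming a web retrieval step, a relevance-ranking step, and a causal language model prompted with claim and evidence; all names, stubs, and the lexical-overlap ranker are hypothetical and stand in for the thesis's actual components.

```python
# Illustrative Retriever -> Ranker -> Reasoner sketch.
# All class and function names here are hypothetical, not the thesis code.

from dataclasses import dataclass


@dataclass
class Evidence:
    text: str
    score: float = 0.0


def retrieve(claim: str, top_k: int = 10) -> list[Evidence]:
    """Retriever: query the Web (e.g. a search API) for candidate evidence.
    Stubbed with a static list for illustration."""
    return [Evidence(text=f"Snippet {i} related to: {claim}") for i in range(top_k)]


def rank(claim: str, candidates: list[Evidence], keep: int = 3) -> list[Evidence]:
    """Ranker: score candidates by relevance to the claim.
    A simple lexical-overlap proxy stands in for a learned ranker."""
    claim_tokens = set(claim.lower().split())
    for ev in candidates:
        overlap = claim_tokens & set(ev.text.lower().split())
        ev.score = len(overlap) / max(len(claim_tokens), 1)
    return sorted(candidates, key=lambda e: e.score, reverse=True)[:keep]


def reason(claim: str, evidence: list[Evidence]) -> str:
    """Reasoner: build a prompt pairing the claim with ranked evidence for a
    fine-tuned causal LM that outputs a label plus an explanation.
    A real system would call the model here; this stub returns the prompt."""
    context = "\n".join(f"- {ev.text}" for ev in evidence)
    return (
        f"Claim: {claim}\n"
        f"Evidence:\n{context}\n"
        "Output: <label> because <explanation>"
    )


if __name__ == "__main__":
    claim = "Example claim to verify"
    print(reason(claim, rank(claim, retrieve(claim))))
```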
