A self-attention mechanism based model for early detection of fake news
University of New Brunswick
Extensive studies have indicated that fake news has become one of the major threats to our social systems (e.g., influencing public opinion, financial markets, journalism, and health systems), and its impact cannot be overstated, particularly in today's socially and digitally connected society. In recent years, this problem has been investigated from different perspectives and across various disciplines, such as computer science, political science, information science, and linguistics. Although these efforts have produced many helpful solutions, detecting fake news in the early phases of its dissemination remains challenging. According to prior studies, early detection is particularly difficult because context-based features are unavailable within the first hours of spreading, and methods that rely solely on content-based features are ineffective. To address this challenge, we propose a new framework for detecting fake news in the early stages of its propagation. The first three components of the proposed framework preprocess each news article's propagation network, extract features, and convert the network into a sequence of nodes. The last module leverages an encoder built on the self-attention mechanism, the core of the well-known Transformer model, which has achieved promising results in many areas, especially in complex tasks such as language translation. This module generates a new representation of the input sequence, which the model's last layer maps to a label for the news article. We evaluated our method on two datasets to demonstrate its effectiveness: the F1 scores achieved by the proposed model on the GossipCop and PolitiFact datasets exceed those of the best baseline model by 9% and 6%, respectively.
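To make the encoder step concrete, the following is a minimal sketch of scaled dot-product self-attention (as defined for the Transformer) applied to a sequence of node embeddings. This is an illustration only, not the paper's implementation: the sequence length, embedding sizes, and random projection matrices are hypothetical, and the full model would stack such layers and add a classification head.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a node sequence.

    X:  (n, d) matrix of node embeddings (one row per node).
    Wq, Wk, Wv: (d, d_k) projection matrices for queries, keys, values.
    Returns an (n, d_k) re-representation of the sequence in which each
    node attends to every other node in the propagation sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (n, n) similarity scores
    # Row-wise softmax turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # weighted mix of values

# Hypothetical sizes: 5 nodes in the propagation sequence, 8-dim features.
rng = np.random.default_rng(0)
n, d, dk = 5, 8, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, dk)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # each node now carries context from the whole sequence
```

In the full framework, the output of such an encoder would be pooled and passed through a final dense layer to produce the fake/real label for the article.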