A self-attention mechanism based model for early detection of fake news

dc.contributor.advisor: Hakak, Saqib
dc.contributor.author: Jamshidi, Bahman
dc.date.accessioned: 2023-11-01T12:36:31Z
dc.date.available: 2023-11-01T12:36:31Z
dc.date.issued: 2023-02
dc.description.abstract: Extensive studies have indicated that fake news has become one of the major threats to our social systems (e.g., influencing public opinion, financial markets, journalism, and health systems), and its impact cannot be overstated, particularly in our socially and digitally connected society. In recent years, this problem has been investigated from different perspectives and across various disciplines, such as computer science, political science, information science, and linguistics. Even though these efforts have produced many helpful solutions, detecting fake news in the early phases of its dissemination remains challenging. According to previously reported studies, detecting fake news shortly after its propagation begins is very difficult because context-based features are unavailable within the first hours of spreading and methods relying solely on content-based features are ineffective. To address this challenge, we propose a new framework for detecting fake news in the early stages of its propagation. The first three components of the proposed framework convert each news article's propagation network into a sequence of nodes after preprocessing and feature extraction. The last module of our framework leverages a self-attention-based encoder. The self-attention technique is the core of the well-known Transformer model, which has achieved promising results in many areas, especially in complex tasks such as language translation. In this module, a new representation of the input sequence is generated, which is mapped to a label for the news article in the proposed model's last layer. We evaluated our method on two datasets to demonstrate its effectiveness. The F1 scores achieved by the proposed model on the GossipCop and PolitiFact datasets exceed those of the best baseline model by 9% and 6%, respectively.
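The encoder described in the abstract is built on scaled dot-product self-attention, the core operation of the Transformer. A minimal NumPy sketch of that operation is below — illustrative only; the function name, projection matrices, and dimensions are assumptions for the example, not the thesis's actual code:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of node embeddings.

    X: (seq_len, d_model) input sequence (e.g., nodes of a propagation network).
    Wq, Wk, Wv: (d_model, d_k) query/key/value projection matrices.
    Returns a (seq_len, d_k) re-representation of the sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise attention logits
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ V                                 # attention-weighted mix of values
```

In a classifier like the one sketched in the abstract, the resulting sequence representation would be pooled and passed through a final layer that maps it to a real/fake label.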
dc.description.copyright: © Bahman Jamshidi, 2023
dc.format.extent: xiv, 79
dc.format.medium: electronic
dc.identifier.uri: https://unbscholar.lib.unb.ca/handle/1882/37515
dc.language.iso: en
dc.publisher: University of New Brunswick
dc.rights: http://purl.org/coar/access_right/c_abf2
dc.subject.discipline: Computer Science
dc.title: A self-attention mechanism based model for early detection of fake news
dc.type: master thesis
oaire.license.condition: other
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of New Brunswick
thesis.degree.level: masters
thesis.degree.name: M.C.S.

Files

Original bundle
Name: Bahman Jamshidi - Thesis.pdf
Size: 1.64 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.13 KB
Format: Item-specific license agreed upon to submission