An adversarial attack framework for deep learning-based NIDS

dc.contributor.advisor: Ghorbani, Ali A.
dc.contributor.advisor: Lashkari, Arash Habibi
dc.contributor.author: Mohammadian, Hesamodin
dc.date.accessioned: 2024-10-28T13:51:58Z
dc.date.available: 2024-10-28T13:51:58Z
dc.date.issued: 2023-12
dc.description.abstract: Intrusion detection systems are essential to any cybersecurity architecture, as they play a critical role in defending networks against a variety of security threats. In recent years, deep neural networks have demonstrated remarkable performance in numerous machine learning tasks, including intrusion detection. However, deep learning models have been shown to be highly susceptible to a wide range of attacks during both the training and testing phases. Such attacks, including model inversion, membership inference, poisoning, and evasion, can compromise the privacy and security of deep learning models. Numerous studies have sought to understand and mitigate these attacks, proposing increasingly efficient techniques with higher success rates across tasks such as image classification, face recognition, network intrusion detection, and healthcare applications. Despite these considerable efforts, such attacks and vulnerabilities have received comparatively little attention in the network domain. This thesis aims to address this gap by proposing a framework for adversarial attacks against network intrusion detection systems (NIDS). The proposed framework covers both poisoning and evasion attacks: for poisoning, we present a label flipping-based attack, and for evasion, we propose two attacks, one FGSM-based and one saliency map-based. These attacks are designed around the distinct characteristics of network data and flows. Furthermore, we introduce an evaluation model for the evasion attacks based on several carefully selected criteria. To assess the effectiveness of the proposed techniques, we use three network datasets that cover a wide range of network attack categories: CIC-IDS2017, CIC-IDS2018, and CIC-UNSW.
Through extensive evaluation and analysis, we demonstrate that the proposed methods are highly effective against deep learning-based NIDS and can significantly degrade their performance. With the proposed evasion attacks, we show that each feature has a different impact on the adversarial sample generation process and that a successful attack is possible even when only a few features are involved. In conclusion, this thesis contributes to network intrusion detection by providing novel and effective approaches for adversarial attacks, shedding light on the vulnerabilities of deep learning-based NIDS, and emphasizing the importance of improving their robustness to such attacks.
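The abstract names FGSM as the basis of one of the evasion attacks. As a minimal illustrative sketch only (not the thesis's implementation), the idea can be shown against a toy logistic-regression flow classifier; the weights, the five "flow features", the label, and the epsilon value are all invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: nudge x by eps in the sign of the loss gradient.

    For a logistic model with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)          # predicted probability of "malicious"
    grad_x = (p - y) * w            # analytic input gradient
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=5)              # toy model over 5 flow features (assumed)
b = 0.0
x = rng.normal(size=5)              # a flow labeled malicious (y = 1)
x_adv = fgsm(x, 1.0, w, b, eps=0.3)

before = sigmoid(w @ x + b)         # score of the original flow
after = sigmoid(w @ x_adv + b)      # score of the perturbed flow
# Maximizing the loss for y = 1 pushes the score down, so the
# adversarial flow looks more benign to the classifier.
```

In the thesis's setting the gradient would come from a trained deep network via backpropagation rather than this closed form, and the perturbation would additionally respect the constraints of valid network flows.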
dc.description.copyright: © Hesamodin Mohammadian, 2023
dc.format.extent: xiv, 145
dc.format.medium: electronic
dc.identifier.uri: https://unbscholar.lib.unb.ca/handle/1882/38173
dc.language.iso: en
dc.publisher: University of New Brunswick
dc.rights: http://purl.org/coar/access_right/c_abf2
dc.subject.discipline: Computer Science
dc.title: An adversarial attack framework for deep learning-based NIDS
dc.type: doctoral thesis
oaire.license.condition: other
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of New Brunswick
thesis.degree.level: doctorate
thesis.degree.name: Ph.D.

Files

Original bundle
Name: Hesamodin Mohammadian - Dissertation.pdf
Size: 1.72 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.13 KB
Description: Item-specific license agreed upon to submission