Title: An adversarial attack framework for deep learning-based NIDS
Author: Mohammadian, Hesamodin
Supervisors: Ghorbani, Ali A.; Lashkari, Arash Habibi
Department: Computer Science
Type: doctoral thesis
Date issued: 2023-12
Date accessioned: 2024-10-28
Date available: 2024-10-28
URI: https://unbscholar.lib.unb.ca/handle/1882/38173
Extent: xiv, 145 pages
Format: electronic
Language: en
Access rights: open access (http://purl.org/coar/access_right/c_abf2)

Abstract:
Intrusion detection systems are essential to any cybersecurity architecture, as they play a critical role in defending networks against various security threats. In recent years, deep neural networks have demonstrated remarkable performance in numerous machine learning tasks, including intrusion detection. However, deep learning models have been shown to be highly susceptible to a wide range of attacks during both the training and testing phases. Attacks such as model inversion, membership inference, poisoning, and evasion can compromise the privacy and security of these models. Numerous studies have sought to understand and mitigate such attacks, and to devise more effective techniques with higher success rates, across tasks that rely on deep learning, including image classification, face recognition, network intrusion detection, and healthcare applications. Despite these considerable efforts, the attacks and the vulnerabilities they exploit have received insufficient attention in the network domain. This thesis aims to address this gap by proposing a framework for adversarial attacks against network intrusion detection systems (NIDS). The proposed framework covers both poisoning and evasion attacks. For poisoning, we present a label flipping-based attack; for evasion, we propose two attacks, one based on the fast gradient sign method (FGSM) and one based on saliency maps. These attacks are designed around the distinct characteristics of network data and flows. Furthermore, we introduce an evaluation model for the evasion attacks based on several carefully selected criteria. To assess the effectiveness of the proposed techniques, we use three network datasets that cover a wide range of network attack categories: CIC-IDS2017, CIC-IDS2018, and CIC-UNSW. Through extensive evaluation and analysis, we demonstrate that the proposed methods are highly effective against deep learning-based NIDS and can significantly degrade their performance. With the proposed evasion attacks, we show that each feature has a varying impact on the adversarial sample generation process, and that a successful attack is possible even when only a few features are involved. In conclusion, this thesis contributes to network intrusion detection by providing novel and effective approaches for adversarial attacks, shedding light on the vulnerabilities of deep learning-based NIDS, and emphasizing the importance of hardening them against such attacks.
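
As a rough illustration of the label flipping-based poisoning idea named in the abstract (a minimal sketch only; the flip_labels helper, binary 0/1 flow labels, and uniform-random victim selection are assumptions, not the thesis's actual procedure):

    import numpy as np

    def flip_labels(y, flip_fraction, seed=0):
        """Poison a training set by flipping a fraction of binary labels."""
        rng = np.random.default_rng(seed)
        y_poisoned = np.asarray(y).copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        # Pick distinct training samples whose labels will be flipped.
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # 0 = benign, 1 = attack
        return y_poisoned

Training the NIDS on y_poisoned instead of y simulates a poisoned training phase; the thesis's attack presumably chooses which labels to flip more carefully than this uniform-random baseline.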
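
The FGSM-based evasion attack builds on the standard fast gradient sign method, which perturbs a flow's feature vector along the sign of the loss gradient. A minimal PyTorch-style sketch, where fgsm_evasion, feature_mask, and the masking of immutable flow features are illustrative assumptions rather than the thesis's implementation:

    import torch

    def fgsm_evasion(model, loss_fn, x, y, epsilon, feature_mask=None):
        """Generate an adversarial flow-feature vector with FGSM."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the detector's loss.
        step = epsilon * x_adv.grad.sign()
        if feature_mask is not None:
            # Restrict the perturbation to mutable features so the
            # modified flow can remain a valid network flow.
            step = step * feature_mask
        return (x_adv + step).detach()

The feature mask reflects the abstract's observation that features impact adversarial sample generation unevenly: restricting the perturbation to a few influential, mutable features can still yield a successful evasion.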