Explainable deep learning for detecting cyber threats

Date

2021

Publisher

University of New Brunswick

Abstract

Cyber threats have imperiled the security and viability of many entities in this rapidly evolving, data-driven world. In response, security specialists are designing new mechanisms to deal with ongoing cyber-attacks, including network intrusions, malware infections, phishing attacks, and website defacements. There has been a growing trend of applying deep learning techniques to image processing, speech recognition, self-driving cars, and even health care. Several deep learning models have been employed to detect cyber threats; nevertheless, they are not explainable to security experts. Security experts need not only to detect an incoming threat but also to know which input features contribute to the security incident. Thus, despite the enormous potential deep learning algorithms have shown, their opacity could be a hurdle to their full-fledged application. Extracting if-then rules from deep neural networks is a powerful explanation method for capturing non-linear local behaviours. However, existing rule extraction methods have several shortcomings when employed in cybersecurity applications. (1) The extracted rules are not sufficiently accurate and comprehensible. (2) The methods are inefficient because of costly pre- and post-processing tasks. (3) They do not scale to large datasets or to deep neural network architectures with many layers and neurons. (4) Some algorithms are not flexible: they extract rules only for non-continuous attributes or conjunctive conditions, or are dedicated solely to binary classification tasks. These deficiencies are among the major reasons why an explainable deep neural network for cyber threat detection is not yet viable. In this thesis, we seek to design and develop an explainable deep neural network that explains how security-related input data are classified. The core idea is to extract if-then rules from a deep neural network to increase transparency and to reflect the input conditions under which the output is true or false. Based on the proposed explainable deep neural network framework, we design and develop two models, namely DeepMACIE and CapsRule, optimized for security applications with distinct characteristics regarding the complexity of the decision boundary, data types and ranges, classification tasks, and dataset size. We evaluate DeepMACIE and CapsRule on the most up-to-date and comprehensive security datasets, including the UCI phishing websites dataset, our own generated Android malware dataset, and the CIC-DDoS2019 dataset. Extensive evaluations show that both models generate accurate, high-fidelity, and comprehensible rules. We verify that the features learned in the rulesets match our domain-specific knowledge. The extracted rules help security experts understand the relationship between a cyber threat detection system's output decision and its input features. They are also suitable for approximating the non-linear relationships in the training data and significantly reduce the number of false positives in attack detection. Furthermore, the rulesets can help uncover flaws in the dataset generation process and erroneous patterns introduced by attack simulators. The comprehensible rulesets are general enough to serve as supplemental material that complements human intelligence in cyber threat detection systems.
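To make the core idea concrete, the sketch below shows a generic surrogate-style rule extraction pipeline: a small neural network is trained on stand-in security data, a shallow decision tree is then fitted to the network's predictions, and its root-to-leaf paths are printed as if-then rules whose fidelity to the network is measured. This is only an illustrative, pedagogical example; it is not the DeepMACIE or CapsRule algorithm, and the dataset, feature names, and parameters are placeholders.

```python
# Illustrative sketch only: generic surrogate rule extraction, NOT DeepMACIE/CapsRule.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for security-related input data (e.g., phishing or flow features).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           random_state=0)

# 1. Train the opaque model: a small feed-forward neural network.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X, y)

# 2. Fit a transparent surrogate on the network's predictions, so the tree
#    mimics the network's decision function rather than the raw labels.
y_net = net.predict(X)
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, y_net)

# 3. Read the surrogate as if-then rules: each root-to-leaf path states the
#    input conditions under which the network's output is positive or negative.
feature_names = [f"f{i}" for i in range(X.shape[1])]
print(export_text(surrogate, feature_names=feature_names))

# Fidelity: how often the extracted ruleset agrees with the network's output.
print("fidelity:", (surrogate.predict(X) == y_net).mean())
```

In this toy setup, fidelity (agreement between the ruleset and the network) plays the same role as the high-fidelity criterion discussed in the abstract, while rule depth and count stand in for comprehensibility.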
