Browsing by Author "Ghorbani, Ali A."
Now showing 1 - 18 of 18
Item A novel transformer-based multi-step approach for predicting common vulnerability severity score (University of New Brunswick, 2024-06) Bahmanisangesari, Saeid; Ghorbani, Ali A.; Isah, Haruna
The timely prediction of Common Vulnerability Severity Scores (CVSS) following the release of Common Vulnerabilities and Exposures (CVE) announcements is crucial for enhancing cybersecurity responsiveness. A delay in acquiring these scores can make it harder to prioritize risks effectively, resulting in misallocated resources and delayed mitigation. Long exposure to untreated vulnerabilities also raises the possibility of exploitative attacks, which could lead to serious security breaches that compromise data integrity and harm users and organizations. This thesis develops a multi-step predictive model that leverages DistilBERT, a distilled version of the BERT architecture, and Artificial Neural Networks (ANNs) to predict CVSS scores prior to their official release. Utilizing a dataset from the National Vulnerability Database (NVD), the research examines the effectiveness of incorporating contextual information from CVE source identifiers and the benefits of incremental learning in improving model accuracy. The models outperformed the best-performing models reported in related work, with an average accuracy of 91.96% in predicting CVSS category scores and an average F1 score of 91.87%. The results demonstrate the model's capability to predict CVSS scores effectively across multiple categories, thereby potentially reducing the response time to cybersecurity threats.

Item A practical and scalable hybrid quantum-based/quantum-safe group key establishment (University of New Brunswick, 2023-07) Aldarwbi, Mohammed Y.; Ghorbani, Ali A.; Lashkari, Arash H.
This Ph.D.
thesis investigates key establishment protocols, focusing on two-member and group key establishment protocols, with subcategories including classical, quantum-safe, and quantum-based solutions. I have identified research gaps such as the lack of quantum resistance in classical solutions, the inefficiency of quantum-safe solutions, and the impracticality of group-based quantum key distribution. To address these gaps, I have proposed several novel protocols and analyses. My first contribution is a novel quantum-safe key management scheme called KeyShield. KeyShield is a scalable and quantum-safe system that offers a high level of security. KeyShield achieves re-keying using a single broadcast message over an open channel, rather than establishing pairwise secure channels. I have also proposed another version of KeyShield, called KeyShield2, obtained by applying a set of countermeasures to the original KeyShield protocol after conducting cryptanalysis and identifying its vulnerabilities. Furthermore, I have introduced two receiver-device-independent quantum key distribution protocols, QKeyShield and DGL22, based on entanglement swapping and quantum teleportation, respectively. Both protocols minimize the attack surface, increase the key rate, and provide additional security enhancements. Security proofs and analyses demonstrate their effectiveness in establishing a secret key between Alice and Bob. I found that group-based quantum key distribution protocols are not effective or practical due to several limitations. I have conducted a thorough literature review and proved the impracticality of these protocols. I have proposed a model that can determine the maximum number of members for which group-based protocols are useful. Finally, I have proposed a scalable and practical hybrid group key establishment scheme. This protocol uses the two-member quantum-based protocol, DGL22, to establish symmetric data encryption keys.
The symmetric keys are then used to distribute the quantum-safe (KeyShield2) keying materials. The quantum-safe protocol, KeyShield2, is used to distribute a secure lock that can be opened by legitimate members using the keying materials to extract the traffic encryption key, TEK. This hybrid protocol enables a smooth transition between quantum-safe and quantum-based solutions, addressing distance limitations and pairwise channel requirements. It also allows for forward and backward compatibility.

Item A query-efficient black-box adversarial attack on text classification Deep Neural Networks (University of New Brunswick, 2022) Yadollahi, Mohammad Mehdi; Ghorbani, Ali A.; Lashkari, Arash Habibi
Recent work has demonstrated that modern text classifiers trained on Deep Neural Networks are vulnerable to adversarial attacks. Compared to the image domain, text data has received insufficient study, a gap that stems from the special challenges of the NLP domain. Despite being extremely effective, most adversarial attacks in the text domain ignore the overhead they induce on the victim model. In this research, we propose a Query-Efficient black-box adversarial attack on text data, named EQFooler, that attacks a textual deep neural network while considering the amount of overhead it may produce. The evaluation of our method shows promising results. We demonstrate the impact of keyword extraction methods in generating query-efficient adversarial attacks. Four variants of the EQFooler model are developed based on different keyword extractors and importance-score strategies. We compare the performance of these variants in terms of four evaluation metrics, namely original accuracy, adversarial accuracy, change rate, and number of queries. All the variants of the proposed attack significantly reduce the accuracy of the targeted models.
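The query-driven word-importance step that query-efficient text attacks like EQFooler rely on can be sketched as follows. The deletion-based scoring and the toy classifier are illustrative assumptions, not the thesis's implementation, but they show why ranking words first keeps the query count low: only the top-ranked words need to be perturbed.

```python
from typing import Callable, List, Tuple

def word_importance(
    words: List[str],
    target_prob: Callable[[str], float],
) -> List[Tuple[str, float]]:
    """Score each word by the drop in the victim model's confidence
    when that word is deleted (one black-box query per word)."""
    base = target_prob(" ".join(words))
    scores = []
    for i, w in enumerate(words):
        masked = " ".join(words[:i] + words[i + 1:])
        scores.append((w, base - target_prob(masked)))
    # Most influential words first; perturbing only these keeps queries low.
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy stand-in for a black-box classifier: confidence grows with
# occurrences of the (hypothetical) trigger word "free".
def toy_spam_prob(text: str) -> float:
    return min(1.0, 0.2 + 0.4 * text.lower().split().count("free"))

ranking = word_importance("win a free prize now".split(), toy_spam_prob)
```

Here the word "free" tops the ranking because deleting it causes the largest confidence drop; a real attack would then substitute only such high-impact words.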
Among those variants, EQFooler-Rake-MS performs best in terms of adversarial accuracy, change rate, and the number of queries needed. Multiple experiments are also designed to compare the proposed method with state-of-the-art adversarial attacks as a baseline. The results show that EQFooler is as powerful as the state-of-the-art adversarial attacks while requiring fewer queries to the victim model. In addition, we study the transferability of the generated adversarial examples: in every transfer setting, at least one of the variants outperforms the baseline.

Item Achieving a generalizable early detection of fake news (University of New Brunswick, 2023-08) Sormeily, Asma; Ghorbani, Ali A.
In the era of widespread social media use, combatting the propagation of fake news is of paramount importance. Traditional methods for detecting fake news often struggle to adapt to evolving formats and require extensive data for early detection. To address these challenges, we propose the Multimodal Early Fake News Detection (MEFaND) approach, which leverages Graph Neural Networks (GNN) and Bidirectional Encoder Representations from Transformers (BERT). This approach enables early fake news detection using only five hours of propagation data and concise news content. MEFaND achieves an impressive F1-score of 0.99 (Politifact) and 0.96 (Gossipcop), outperforming existing methods. We also analyze user characteristics and study temporal and structural patterns in fake news propagation graphs. In addition, we introduce a User Susceptibility Assessment and Prediction model that employs user features to assess and predict their likelihood of spreading false information. Incorporating user actions, historical involvement, and profile traits, our model achieves 0.93 accuracy in user susceptibility assessment.
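The multimodal fusion idea behind approaches like MEFaND, combining a text embedding with a propagation-graph embedding before a classifier head, can be sketched as follows. The dimensions, the mean-pooling readout, and the linear head are illustrative assumptions, not MEFaND's actual architecture.

```python
import numpy as np

def fuse_and_score(text_emb, node_embs, w, b):
    """Concatenate a BERT-style text embedding with a mean-pooled
    node embedding (a crude GNN graph readout), then apply a linear
    + sigmoid head to get a fake-news probability."""
    graph_emb = node_embs.mean(axis=0)      # readout over the propagation graph
    fused = np.concatenate([text_emb, graph_emb])
    return 1.0 / (1.0 + np.exp(-(w @ fused + b)))

rng = np.random.default_rng(0)
text_emb = rng.normal(size=4)               # stand-in for a BERT [CLS] vector
node_embs = rng.normal(size=(5, 3))         # stand-in for GNN node states
w = np.zeros(7)                             # untrained head, so p = 0.5
p = fuse_and_score(text_emb, node_embs, w, b=0.0)
```

In a trained system, `w` and `b` would be learned jointly with both encoders; the point here is only how the two modalities are merged into one score.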
This research addresses early fake news detection and user susceptibility analysis, contributing to effective strategies against misinformation on online social networks.

Item An adversarial attack framework for deep learning-based NIDS (University of New Brunswick, 2023-12) Mohammadian, Hesamodin; Ghorbani, Ali A.; Lashkari, Arash Habibi
Intrusion detection systems are essential to any cybersecurity architecture, as they play a critical role in defending networks against various security threats. In recent years, deep neural networks have demonstrated remarkable performance in numerous machine learning tasks, including intrusion detection. However, deep learning models are highly susceptible to a wide range of attacks during both the training and testing phases. Attacks such as model inversion, membership inference, poisoning, and evasion can compromise the privacy and security of deep learning models. Numerous studies have been conducted to understand and mitigate these attacks and to propose more efficient techniques with higher success rates and accuracy in tasks that utilize deep learning models, such as image classification, face recognition, network intrusion detection, and healthcare applications. Despite the considerable efforts in this area, the network domain still lacks sufficient attention to these attacks and vulnerabilities. This thesis aims to address this gap by proposing a framework for adversarial attacks against network intrusion detection systems. The proposed framework covers both poisoning and evasion attacks. For poisoning, we present a label-flipping-based attack, and for evasion, we propose two attacks: an FGSM-based attack and a saliency-map-based attack. These attacks are designed by considering the distinct characteristics of network data and flows. Furthermore, we introduce an evaluation model for the evasion attack based on several carefully selected criteria.
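An FGSM-style evasion step restricted to features an attacker can actually change in a network flow can be sketched as follows. The linear victim model and the mutable-feature mask are illustrative assumptions; the thesis attacks deep NIDS models, but the mechanics of the signed-gradient step with masking are the same.

```python
import numpy as np

def fgsm_masked(x, w, b, y, eps, mask):
    """One FGSM step on a logistic model: move x in the direction of the
    sign of the loss gradient, but only on features the attacker controls."""
    z = float(w @ x + b)
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of the attack class
    grad = (p - y) * w                # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad) * mask

w = np.array([2.0, -1.0, 0.5])        # illustrative model weights
x = np.array([1.0, 0.5, 0.2])         # illustrative flow features
mask = np.array([1.0, 0.0, 1.0])      # e.g. packet length and flow duration mutable,
                                      # middle feature fixed by the protocol
x_adv = fgsm_masked(x, w, b=0.0, y=1.0, eps=0.1, mask=mask)
```

The mask captures the "distinct characteristics of network data" constraint: unlike image pixels, many flow features cannot be perturbed freely without breaking the traffic.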
To assess the effectiveness of the proposed techniques, we utilize three network datasets that cover a wide range of network attack categories: CIC-IDS2017, CIC-IDS2018, and CIC-UNSW. Through extensive evaluations and analysis, we demonstrate that the proposed methods are highly effective against deep learning-based NIDS and can significantly degrade their performance. With the proposed evasion attacks, we show that each feature has a varying impact on the adversarial sample generation process, and that a successful attack is possible even with only a few features involved. In conclusion, this thesis contributes to network intrusion detection by providing novel and effective approaches for adversarial attacks, shedding light on the vulnerabilities of deep learning-based NIDS, and emphasizing the importance of enhancing their robustness to such attacks.

Item An Effective Approach to Detect Label Noise (University of New Brunswick, 2022-11) Abrishami, Mahdi; Ghorbani, Ali A.
With the increased usage of Internet of Things (IoT) devices in recent years, Machine Learning (ML) methods for attack detection in this domain have also developed rapidly. However, ML models are vulnerable to various classes of adversarial attacks that aim to fool a model into making an incorrect prediction. For instance, label manipulation, or label flipping, is a type of adversarial attack in which the attacker manipulates the labels of training data, causing the trained model to be biased and/or to suffer decreased performance. However, the number of samples that can be flipped in this type of attack can be limited, giving the attacker a limited target selection. Due to the importance of securing ML models against Adversarial Machine Learning (AML) attacks, particularly in the IoT domain, this thesis presents an extensive review of AML in IoT.
Then, a classification of AML attacks is proposed based on the literature, creating a foundation for future research in this domain. Next, this thesis investigates the negative impact of applying malicious label-flipping attacks (intentional label noise) to IoT data. As accurate labels are necessary for ML training, exploring adversarial label noise is an important research topic. However, label noise in datasets is not always adversarial and may arise for several other reasons, such as careless data labelling. Classification is an essential task in machine learning, where the main objective is to predict the categories of unseen data. The existence of label noise in training datasets, whether adversarial or non-adversarial, can negatively impact the performance of supervised classification. Driven by the growing interest in data-centric AI, which aims to improve the quality of training data without increasing model complexity, a range of research has been undertaken to tackle the label noise problem. However, few works have investigated this problem in the IoT network intrusion detection domain. This thesis addresses the issue of label noise in the intrusion detection domain by presenting a framework to detect samples with noisy labels. The proposed framework's main components are the decision tree classification algorithm and active learning. The framework consists of two steps: making a decision tree robust against the label noise in a dataset, and then using this robust model together with active learning and uncertainty sampling to detect noisy samples effectively. In this way, the inherent resiliency of the decision tree algorithm against label noise is utilized to tackle this issue in datasets. Based on the results of our experiments, the proposed framework can detect a considerable number of noisy samples in the training dataset, with up to 98% noise reduction.
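The two-step idea, a robust model whose confident disagreements flag suspect labels, can be shown in miniature. A nearest-centroid model stands in here for the thesis's decision tree, and the margin-based confidence proxy and threshold are illustrative assumptions; uncertainty sampling would then send the least confident flags to a human oracle first.

```python
import numpy as np

def flag_noisy_labels(X, y, confidence=0.8):
    """Flag samples whose given label disagrees with a simple model's
    confident prediction. Nearest-centroid stands in for the (more
    robust) decision tree used in the thesis."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    pred = classes[d.argmin(axis=1)]
    # Confidence proxy: relative margin between the two nearest centroids.
    sorted_d = np.sort(d, axis=1)
    margin = (sorted_d[:, 1] - sorted_d[:, 0]) / (sorted_d[:, 1] + 1e-12)
    return np.where((pred != y) & (margin > 1 - confidence))[0]

# Two well-separated clusters; the last sample's label was flipped.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 0])   # index 5 is the flipped label
suspects = flag_noisy_labels(X, y)
```

Only the flipped sample is flagged: the model confidently places it in the other class, while every clean sample agrees with its label.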
The proposed detection method can also be leveraged as a defense against random label-flipping attacks, where adversarial label manipulation is applied randomly.

Item Causal studies on users' behavioral choices in social networks (University of New Brunswick, 2022-08) Falavarjani, Seyed Amin Mirlohi; Bagheri, Ebrahim; Ghorbani, Ali A.
Causal inference is an essential topic across many domains, such as statistics, computer science, education, and economics, to name a few. The availability of appropriate observational data and the rapidly developing area of Big Data have enabled us to estimate causal effects between phenomena in ways that were not previously possible. Scientists refer to causality as cause and effect, where the cause, which can be an event, process, state, or object, is responsible for producing the effect, which is another event, process, state, or object. Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. This thesis proposes approaches to explore the potential causal effects of users' different offline behaviors, such as exercising, dining, shopping, and traveling, on their alignment with social beliefs and emotions on online platforms. Additionally, this thesis examines whether being aligned with society is contagious. Concretely, this thesis considers the potential causal effects of users' offline activities on their online social behavior. The objective of our work is to understand whether the activities that users are involved with in their real daily life, which place them within or away from social situations, have any direct causal impact on their behavior in online social networks. This work is motivated by the theory of normative social influence, which argues that individuals may show behaviors or express opinions that conform to those of the community for the sake of being accepted or from fear of rejection or isolation.
Our main findings can be summarized as follows: (1) a change in users' offline behavior that affects the level of users' exposure to social situations, e.g., starting to go to the gym or discontinuing frequenting bars, can have a causal impact on users' online topical interests and sentiments; and (2) the causal relations between users' socially situated offline activities and their online social behavior can be used to build effective predictive models of users' online topical interests and sentiments. We further expand the state of the art by exploring the impact of social contagion on users' social alignment, i.e., whether the decision to socially align oneself with the general opinion of the users on the social network is contagious to one's connections on the network. This is an important problem, as it explores whether users will decide to socially align themselves with others depending on whether their social network connections decide to socially align. The novelty of our work includes: (1) unlike earlier work, our work is among the first to explore the contagiousness of the concept of social alignment on social networks; (2) our work adopts an instrumental-variable approach to determine reliable causal relations behind observed social contagion effects on the social network; (3) our work expands beyond the mere presence of contagion in social alignment and also explores the role of population heterogeneity in social alignment contagion. We find that a user's decision to socially align with or distance from social topics and sentiments influences the social alignment decisions of their connections on the social network.

Item Difficulty adjustment algorithms for preventing proof-of-work mining attacks (University of New Brunswick, 2022-08) Azimy, Hamid; Ghorbani, Ali A.; Bagheri, Ebrahim
Bitcoin mining is the process of generating new blocks in the blockchain. This process is vulnerable to different types of attacks.
One of the most famous attacks in this category is selfish mining, introduced by Eyal and Sirer [21] in 2014. Selfish mining is a very well-known attack, and many studies have tried to analyze, mitigate, or extend it. This attack is essentially a strategy that a sufficiently powerful miner can follow to obtain more revenue than its fair share. Put simply, it works by slowing down the network and wasting the hash power of both attackers and honest miners, but wasting the honest miners' hash power more. This attack is not exclusive to Bitcoin: it can be performed on many proof-of-work blockchains and cryptocurrencies (e.g., Ethereum) and has been observed in a few cases on other altcoins (e.g., Monacoin). Selfish mining is effective in Bitcoin because of Bitcoin's difficulty adjustment algorithm: after the difficulty adjusts, the selfish miner benefits from higher relative revenue. This point is not well studied in the literature, and we address it in this thesis. However, the difficulty adjustment algorithm is an essential part of the Bitcoin protocol and cannot be removed. In this thesis, we analyze the profitability of selfish mining with respect to time and the presence of other selfish miners. We also propose a family of alternative difficulty adjustment algorithms, including Zeno's DAA, Zeno's Max DAA, and Zeno's Parametric DAA, that discourage selfish mining while allowing the Bitcoin network to remain scalable (by adjusting the difficulty of the network). We analyze our proposed solutions using two methods: mathematical analysis and simulation analysis. We then present the results and discuss the effectiveness of our proposed solutions. Based on our analysis, our proposed algorithms effectively increase the profitability waiting time for attackers to almost double its original value. For example, for a miner with 40% of the network's hash power, it extends the waiting time from four weeks to more than eleven weeks.
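The baseline Bitcoin retargeting rule that the proposed Zeno variants modify can be sketched as follows. The 2016-block window, 10-minute block target, and 4x clamp are part of the real protocol; clamping the ratio rather than the measured timespan is a simplification, and the Zeno variants themselves are not reproduced here.

```python
def retarget_difficulty(old_difficulty: float,
                        actual_timespan_s: float,
                        target_timespan_s: float = 2016 * 600,
                        clamp: float = 4.0) -> float:
    """Bitcoin's baseline rule: every 2016 blocks, scale difficulty by
    expected/actual timespan, limited to a factor of 4 either way.
    A selfish miner slows the chain, so the actual timespan exceeds the
    target and difficulty drops, which is exactly what raises the
    attacker's relative revenue after retargeting."""
    ratio = target_timespan_s / max(actual_timespan_s, 1e-9)
    ratio = max(1.0 / clamp, min(clamp, ratio))
    return old_difficulty * ratio

# Chain slowed to double the expected block time: difficulty halves.
d = retarget_difficulty(1000.0, actual_timespan_s=2 * 2016 * 600)
```

A DAA that damps the downward step, as the Zeno family does, removes the post-retarget revenue boost that makes the waiting game worthwhile for the attacker.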
This discourages attackers from performing their malicious activities. We also show that our proposed algorithm allows the network to scale while increasing the waiting time.

Item Domain adaptation based machine learning transferability model (University of New Brunswick, 2024-06) Molokwu, Reginald Chukwuka; Ghorbani, Ali A.; Isah, Haruna
In this study, we tackle the pressing issue of accurately identifying IoT devices across a variety of operational environments. With the ever-changing and dynamic nature of IoT ecosystems at the forefront, we introduce an innovative approach to device identification. Our solution hinges on the transferability of machine learning models, enhanced by domain adaptation techniques. These techniques are key to addressing the inconsistencies present across different IoT settings. Our method involves a detailed examination of network packets to extract essential features, which are then utilized by machine learning algorithms to ensure precise device identification despite potential domain shifts and class imbalances. Leveraging this approach, we employ datasets such as IMC'19, CICIoT 2022, and Sentinel to test and validate our module. The outcome is noteworthy: we achieve 98% accuracy in both testing and evaluation phases, demonstrating the effectiveness of our method in the complex landscape of IoT device identification.

Item Efficient and privacy-preserving worker selection in mobile crowdsensing (University of New Brunswick, 2022-05) Zhang, Xichen; Lu, Rongxing; Ghorbani, Ali A.
The rapid growth of the Internet of Things (IoT) has enabled a new sensing paradigm called Mobile Crowdsensing (MCS). A crowd of mobile participants, namely workers, are selected by the MCS platform to outsource their real-time sensing data for specific tasks, such as location recommendation, air quality monitoring, and traffic monitoring. Worker selection is the process of allocating qualified workers to suitable MCS tasks.
In MCS, workers' reliability and the quality of their sensing data play significant roles in service quality. Therefore, worker selection is one of the most fundamental problems in MCS applications. Despite the considerable potential and extensive development, several challenges and issues in MCS services cannot be ignored. i) In worker selection, it is inevitable for workers to share some of their personal sensitive information, which may be exploited by a hostile MCS platform for malicious activities. Therefore, privacy preservation for workers' sensitive information is a major concern in MCS platforms. ii) Evaluating workers' trustability or credibility is one of the most essential issues that the MCS platform needs to solve, but these attributes have often been neglected in previous literature. iii) Worker selection is a dynamic process in which workers can continuously arrive at or leave the platform. However, most existing studies only focus on selecting workers statically. iv) In MCS tasks, workers are heterogeneous, with different computational resources. Therefore, the MCS platform needs to select qualified workers in terms of their various computing characteristics. v) Workers' real-time spatial-temporal information plays a vital role in worker selection and should receive more attention in the design of real-world MCS applications. In this thesis, we focus on efficient and privacy-preserving worker selection in MCS and propose several schemes to address the above challenges. Specifically, the main contributions are as follows. i) We proposed a privacy-preserving worker selection scheme based on probabilistic skyline computation for calculating workers' trustability from their historical reviews. ii) We proposed a privacy-preserving dynamic worker selection scheme based on probabilistic skyline queries over sliding windows, which can continuously and dynamically select qualified workers.
iii) We proposed a novel application scenario, called Federated MCS, that integrates the concept of Federated Learning with MCS. The proposed scheme can select qualified workers based on the group skyline technique and aggregate local model updates for training the global model. iv) We conducted a series of studies on worker selection regarding spatial-temporal matching and evaluation. The proposed privacy-preserving schemes can select qualified workers in terms of their real-time spatial-temporal information. The research findings and experimental results in this thesis should be useful for selecting qualified workers effectively and securely in MCS applications.

Item Electric vehicles and charging infrastructure security (University of New Brunswick, 2023-08) Shirvani, Soheil; Ghorbani, Ali A.
Electric Vehicles (EVs) have gained popularity in recent years due to their economic and environmental benefits. However, their growth has faced numerous technical challenges, with cybersecurity emerging as a critical one. To address security concerns, this thesis first surveys the previous literature to identify possible attacks on and challenges for EVs. It then analyzes the security of all components that transmit data to or are connected to EVs, with the aim of identifying the critical elements in the ecosystem. Utilizing the proposed architecture, this study introduces a comprehensive risk assessment framework that identifies all possible attacks and challenges. These are then ranked based on impact, likelihood, and risk criteria, and countermeasures are subsequently proposed. Furthermore, to demonstrate the applicability of our proposed methods, we identify a gap in the Electric Vehicle-Electric Vehicle Supply Equipment (EV-EVSE) system, specifically in the monitoring of charging sessions.
Consequently, we propose a five-step intelligent monitoring framework for charging sites to prevent fraudulent charging and malicious sessions.

Item Enhancing EV charging station security: A multi-stage approach (University of New Brunswick, 2024-03) Buedi, Emmanuel Dana; Ghorbani, Ali A.
The deployment of Electric Vehicle (EV) charging stations is pivotal to the global shift towards eco-friendly transportation. Nevertheless, as these systems become increasingly integrated into everyday life, they also emerge as prime targets for cybersecurity attacks. The development of cybersecurity solutions encounters challenges due to the deployment methods of EV charging stations, limitations in hardware resources, and the unavailability of attack datasets. Addressing this, our research introduces the creation and publication of a comprehensive dataset, CICEVSE2024, which includes 36 GB of benign and attack samples. Additionally, we propose a multistage anomaly detection framework for identifying host- and network-based attacks on EV Supply Equipment (EVSE). A rule-based model is utilized at the EVSE level for preliminary detection. Subsequently, the Charging Station Monitoring System (CSMS) level employs three anomaly detection models alongside an attack classifier. Our approach ensures operational independence, allowing effective attack detection even when the EVSE operates in standalone mode.

Item Enhancing network intrusion detection of the Internet of Vehicles: Challenges and proposed solutions (University of New Brunswick, 2023-08) Taslimasa, Hamideh; Ghorbani, Ali A.; Dadkhah, Sajjad
While connected vehicles improve the driving experience through the Internet of Vehicles network, this connectivity brings privacy and security risks. Current machine learning-based Intrusion Detection Systems (IDS) face challenges such as preserving users' privacy and providing interpretability.
Current intra-vehicle IDS rely on central servers for data aggregation and training, which consumes bandwidth and jeopardizes user privacy. We introduce ImageFed, an intra-vehicle IDS that employs federated learning with a Convolutional Neural Network to enable distributed learning while preserving users' privacy. Evaluations show an average F1-score of 99.54% and 99.87% accuracy on the CAN-Intrusion dataset, with low detection latency. Inter-vehicle networks demand deep learning frameworks for better generalization over intricate attacks. However, Deep Neural Networks (DNN) lack interpretability, eroding trust among experts. To address this, we introduce a rule extraction framework for DNN-based IDS, enhancing transparency via interpretable rule trees. The framework achieves 94% accuracy and an 88% F1-score on the CICIDS2017 dataset.

Item Reasoning for fact verification using language models (University of New Brunswick, 2024-02) Kanaani, Mohammadamin; Ghorbani, Ali A.
In response to the proliferation of misinformation on social media platforms, this thesis introduces the Triple-R framework (Retriever, Ranker, Reasoner) to enhance fact-checking by leveraging the Web for evidence retrieval and generating understandable explanations for its decisions. Unlike existing methods, Triple-R incorporates external sources for evidence and provides explanations for datasets that lack them. By fine-tuning a causal language model, it produces natural language explanations and labels for evidence-claim pairs, aiming for greater transparency and interpretability in fact-checking systems. Triple-R achieved a state-of-the-art accuracy of 42.72% on the popular LIAR benchmark, outperforming current automated fact verification methods.
This underscores its effectiveness in integrating web sources and offering clear reasons, presenting a significant step forward in the fight against online misinformation.

Item Towards a Formalization of Trust
Carter, Jonathan; Ghorbani, Ali A.
This work focuses on the design and implementation of a new model of trust, based on formalizations of reputation, self-esteem, and similarity within an agent. Our previous work establishes the formalization of reputation within an information-sharing Multiagent System and claims that reputation cannot be universalized. This work universalizes reputation through the use of values within all Multiagent Systems. The following values are shown to be manifested within Multiagent Systems: responsibility, honesty, independence, obedience, ambition, helpfulness, capability, knowledgeability, and cost efficiency. Manifestations of these values result in a more universalized approach to formalizing reputation. Self-esteem is formalized as the reputation an agent has with itself. Lastly, similarity is formalized as the difference in the importance of the values previously mentioned. Combined, the weighted components of self-esteem, similarity, and reputation form a new model of trust. This new model of trust is examined within the context of an e-commerce framework. The multiagent system comprises buyers and sellers that wish to conduct business. Sellers can engage in untrustworthy business behavior at the buyer's expense. It is the job of the model to decide whether a selling agent is trustworthy enough to engage in business. The trust model is analyzed with respect to stability, scalability, accuracy in attaining e-commerce objectives, and general effectiveness in discouraging untrustworthy behavior. Based on the experiments, the model appears to be scalable, depending on the agent population of buyers and sellers.
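The weighted combination of reputation, self-esteem, and similarity described above can be sketched as follows. The particular weights, the [0, 1] scaling, and the decision threshold are illustrative assumptions; the work itself derives these components from value manifestations rather than fixing them by hand.

```python
def trust(reputation: float, self_esteem: float, similarity: float,
          weights=(0.5, 0.2, 0.3)) -> float:
    """Combine the three formalized components (each assumed scaled to
    [0, 1]) into a single trust score. A buying agent would trade with a
    seller only if this score clears its own threshold."""
    w_r, w_e, w_s = weights
    return w_r * reputation + w_e * self_esteem + w_s * similarity

score = trust(reputation=0.9, self_esteem=0.6, similarity=0.8)
should_trade = score > 0.7   # illustrative per-agent threshold
```

Letting each agent choose its own weights and threshold is what allows the same model to produce heterogeneous trading behavior across the buyer population.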
It achieves its primary objective of discouraging untrustworthy behavior, as measured through the acceleration of Gross Domestic Product growth over time. Within the simulator, a high degree of random outcomes is possible. Stability is used to examine the predictability of the model (on average) given a fixed set of data about the simulations. Based on the simulations, the model appears to be quite stable.

Item Transferability of machine learning model for IoT device identification and vulnerability assessment (University of New Brunswick, 2022-12) Danso, Priscilla Kyei; Ghorbani, Ali A.; Dadkhah, Sajjad
The lack of appropriate cyber security measures deployed on IoT devices makes them prone to many security issues. Machine learning (ML) models used to monitor devices in a network and differentiate between benign and malicious devices have made tremendous strides. However, most research in profiling and identification uses the same data for training and testing; hence, a slight change in the data causes most learning algorithms to perform poorly. This study uses a transferability approach based on the concept of transductive transfer learning for IoT device profiling and identification. We propose a three-component system comprising a device type identification module, a vulnerability assessment module, and a visualization module. The device type identification component uses transductive transfer learning, in which the trained model is transferred to a remote lab for testing: labels are assigned to the test data in the target domain using the target-domain feature space together with training data from the source domain. The test dataset (target domain) thus employs knowledge from the trained model (source domain). Furthermore, the vulnerability of the predicted device type is assessed using three vulnerability databases: Vulners, NVD, and IBM X-Force.
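The transfer step, train in the source lab and label target-lab traffic directly, can be sketched with a nearest-centroid classifier. The classifier choice, the flow features, and the device-type names are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def fit_centroids(X_src, y_src):
    """Train on labelled source-domain device traffic."""
    classes = np.unique(y_src)
    return classes, np.array([X_src[y_src == c].mean(axis=0) for c in classes])

def predict_target(X_tgt, classes, centroids):
    """Transductive step: assign device-type labels to unlabelled
    target-domain samples using only source-domain knowledge."""
    d = np.linalg.norm(X_tgt[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Source lab: two device types with distinct (illustrative) flow features.
X_src = np.array([[1.0, 0.1], [1.1, 0.2], [5.0, 3.0], [5.2, 3.1]])
y_src = np.array(["camera", "camera", "plug", "plug"])
classes, cents = fit_centroids(X_src, y_src)

# Remote lab: same device types, slightly shifted measurements.
X_tgt = np.array([[1.3, 0.3], [4.8, 2.9]])
labels = predict_target(X_tgt, classes, cents)
```

The predicted label is then the key used to query the vulnerability databases for that device type.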
Lastly, the results from the vulnerability assessment are visualized.

Item Unmasking stealthy threats: Techniques for identifying and analyzing obfuscated malware
(University of New Brunswick, 2024-08) Alkhateeb, Ehab; Ghorbani, Ali A.; Lashkari, Arash Habibi
In the ever-evolving domain of cybersecurity, the challenges of countering obscured malware and crafting effective Anti-Virus (AV) solutions are formidable. This struggle is particularly evident in packed malware, where malicious actors employ encryption and sophisticated techniques to conceal their payloads, thereby circumventing detection by AV scanners and security analysts. Recent studies reveal that addressing both known and unknown packers poses a significant challenge, often due to insufficient datasets and a reliance on raw features. Furthermore, there is a notable absence of a comprehensive unpacking approach that identifies the packer first to streamline the overall process and then applies both profile-based and generic unpacking methods. This thesis introduces an innovative malware packer classifier, meticulously designed to identify packer families and to detect previously unknown packers in real-world scenarios. Our approach relies on sophisticated feature engineering, involving multiple layers of analysis to extract the crucial features used as inputs to the classifier. These features encapsulate the intricacies of packed malware, enabling our classifier to reveal their concealed intentions. Furthermore, to enhance packer identification performance, we have diligently curated a dataset of precisely packed samples, ensuring a high level of data quality and relevance to real-world threats. The proposed packer identifier discerns a diverse array of known packers with an accuracy of 99.60% and previously unidentified packers with an accuracy of 91%.
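Packed or encrypted payloads tend to have near-uniform byte distributions, so one raw feature commonly fed to packer classifiers is the Shannon entropy of a file section. The sketch below is purely illustrative of that idea and is not taken from the thesis's actual feature set.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; values near 8.0 suggest
    packed or encrypted content, low values suggest plain data."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# Two symbols at equal frequency vs. all 256 byte values once each.
print(shannon_entropy(b"AAAABBBB"))        # → 1.0
print(shannon_entropy(bytes(range(256))))  # → 8.0
```

In practice such a scalar would be one of many features (section names, import counts, header anomalies) combined by the classifier, not a packed/unpacked decision on its own.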
The identifier achieves this while maintaining operational efficiency and effectiveness, addressing a critical need in cybersecurity. Additionally, we have introduced a unified unpacking approach that leverages our identification methodology to minimize unpacking overhead. This involves a strategic decision-making process to determine whether to employ a profile-based or a generic unpacking method. Moreover, we have proposed a script-based profile unpacking technique and an Intel PIN tool for generic unpacking. By advancing the state of the art in malware packer identification and unpacking, this research significantly contributes to fortifying defenses against the persistent threat of obfuscated malware, ultimately enhancing the security of digital ecosystems.

Item Utilizing trust to achieve cyber resilient substations
(University of New Brunswick, 2024-02) Boakye-Boateng, Kwasi; Ghorbani, Ali A.; Lashkari, Arash H.
The Smart Grid integrates cyber technology into power grids for automated and efficient management of electricity generation, transmission, and distribution. Key to its operation is the substation, which regulates voltage across the system. However, cyberinfrastructure integration has increased the substation's vulnerability to advanced persistent threats (APTs) such as PipeDream that exploit device protocols such as Modbus and Distributed Network Protocol 3 (DNP3). Cyber resilience in substations is crucial because APTs can disrupt operations, necessitating manual interventions for recovery and thus causing downtime. Enhancing cyber resilience helps substations minimize downtime and recover more efficiently from these disruptive events. However, the substation's constraints pose challenges for implementing cyber resiliency measures such as encryption and intrusion detection. This dissertation proposes a trust-based framework comprising a trust model, a risk posture model, and a trust transferability model to enhance the substation's cyber resiliency.
The trust model detects protocol-based attacks on Intelligent Electronic Devices (IEDs) and Supervisory Control and Data Acquisition (SCADA) Human Machine Interface (HMI) systems. The risk posture model dynamically assesses the substation's risk posture pre- and post-attack, while the transferability model evaluates whether a device and its trust score can be carried across substations. Practical implementation involves a substation-emulated Docker-based testbed with a multi-agent architecture. Following Security Operations Center (SOC) principles, a dashboard offers real-time updates. The trust framework is evaluated against various attacks using the MITRE Industrial Control Systems (ICS) Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework. The trust model consistently shows efficient performance, with response latency of less than 10 ms, superior to alternatives with a minimum latency of 20 ms. Evaluation under rogue-device, compromised-SCADA-HMI, and compromised-IED scenarios highlights robust detection capabilities, except for baseline replay and delay-response attacks. The risk posture model effectively represents substation risk postures, providing insights into attack impacts. The transferability model consistently denies admission to devices exhibiting malicious behavior in scenarios such as normal replacement, compromised replacement, and a trusted IED with poor trust scores. Results show the trust framework's efficacy in evaluating substation resilience, identifying malicious behavior, and endorsing trustworthy devices. Additionally, a dataset comprising the captures from the testbed experiments is publicly available.
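The transferability model's admit/deny behavior can be caricatured as a running per-device trust score compared against a threshold. The update rule, smoothing factor, and threshold below are illustrative assumptions for exposition, not the dissertation's actual model.

```python
# Illustrative sketch: an exponentially weighted trust score per device,
# with an admission threshold such as a transferability model might apply.
# The smoothing factor and threshold are assumptions, not thesis values.
class DeviceTrust:
    def __init__(self, initial: float = 0.5, alpha: float = 0.3):
        self.score = initial   # current trust in [0, 1]
        self.alpha = alpha     # weight given to the newest observation

    def observe(self, behaved_well: bool) -> None:
        """Fold one benign/malicious observation into the running score."""
        evidence = 1.0 if behaved_well else 0.0
        self.score = (1 - self.alpha) * self.score + self.alpha * evidence

    def admit(self, threshold: float = 0.6) -> bool:
        """Deny admission to devices whose trust falls below the threshold."""
        return self.score >= threshold

device = DeviceTrust()
for ok in [True, True, False, False, False]:  # a run of malicious behavior
    device.observe(ok)
print(device.admit())  # the misbehaving device is denied admission
```

The exponential decay means recent malicious behavior quickly outweighs an earlier good record, which matches the abstract's scenario of a previously trusted IED being denied once its trust score degrades.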