Faculty of Computer Science (Fredericton)


EnviroPlanner: design and implementation of a distributed environmental querying system in rule responder
by Sujan Chandra Saha, Environmental information querying can be cumbersome and time-consuming, as users sometimes need to go through multiple Web pages to find answers to specific questions. As a prototype modeling a multi-agent Virtual Environmental Organization (VEO), EnviroPlanner was developed to allow users to retrieve and deduce environmental information via problem-oriented question answering. In this thesis, we focus on the design and implementation of the EnviroPlanner VEO model, which is supported by rule and ontology knowledge. This formalized knowledge allows EnviroPlanner's semi-automated agents to assist human experts in environmental question answering. We realized EnviroPlanner for distributed querying in the Rule Responder framework, which consists of three kinds of agents: External Agent (EA), Organizational Agent (OA), and Personal Agents (PAs). The EA is the single point of entry that allows users to pose queries to the system, employing a Web interface coupled to an HTTP port to which requests are sent. EnviroPlanner is both an extension and an instantiation of the Rule Responder framework, similar in its communication architecture to the SymposiumPlanner-2011/2012 instantiations but in a very different knowledge domain. The SymposiumPlanner systems since 2011 have used two Sub-Organizational Agents (Sub-OAs) for question answering. We extended the framework to give developers architectural flexibility, so they can add as many Sub-OAs as needed, e.g. for answering user queries about different locations or regions. The architecture designed for environmental query answering has been implemented, and the EnviroPlanner prototype has been evaluated with respect to efficiency and overall performance. In addition, the EnviroPlanner prototype has been deployed on the official Rule Responder website.
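To make the agent topology concrete, the following is a minimal, hypothetical sketch (not the actual rule- and ontology-driven Rule Responder code) of how an External Agent could forward a user query to the Organizational Agent, which routes it to a region-specific Sub-OA; all class names, regions, and facts are invented for illustration.

```python
# Hypothetical sketch of EnviroPlanner-style query delegation: an External Agent (EA)
# receives a request, the Organizational Agent (OA) routes it to the Sub-OA
# responsible for the requested region.  Names, regions, and facts are illustrative only.
from dataclasses import dataclass

@dataclass
class Query:
    topic: str      # e.g. "air_quality"
    region: str     # e.g. "fredericton"

class SubOA:
    def __init__(self, region, knowledge):
        self.region = region
        self.knowledge = knowledge          # stand-in for a region-specific rule base

    def answer(self, query):
        return self.knowledge.get(query.topic, "unknown")

class OA:
    def __init__(self):
        self.sub_oas = {}                   # any number of Sub-OAs can be registered

    def register(self, sub_oa):
        self.sub_oas[sub_oa.region] = sub_oa

    def route(self, query):
        sub = self.sub_oas.get(query.region)
        return sub.answer(query) if sub else "no agent for region"

class EA:
    """Single point of entry; in the real system this sits behind a Web/HTTP interface."""
    def __init__(self, oa):
        self.oa = oa

    def handle_request(self, topic, region):
        return self.oa.route(Query(topic, region))

oa = OA()
oa.register(SubOA("fredericton", {"air_quality": "good"}))
oa.register(SubOA("saint_john", {"air_quality": "moderate"}))
print(EA(oa).handle_request("air_quality", "fredericton"))      # -> good
```

In the real system the OA and Sub-OAs evaluate rule bases rather than lookup tables, and the agents communicate over HTTP rather than through in-process calls.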
Escape: exploring agency as a fear-inducing mechanic in video games
by Benjamin Ryan MacPherson, Horror games and movies share many similarities in how they elicit fear. However, a major difference between the two is the aspect of agency, found prominently in games but little, if at all, in movies. Agency is the capacity to act upon something. Previous work suggests that media with more agency have a greater propensity for eliciting fear than media without. However, this work relied solely on watch (i.e., low agency) versus play (i.e., high agency) comparisons. In this work, an agency manipulation with three separate agency conditions (low-agency watch, medium-agency directed play, and high-agency undirected play) is used in a horror video game. Results of the study show no measured difference in fear, agency, or other metrics where differences were expected. Correlational analyses revealed positive correlations between several factors, but not between fear and agency. Potential reasons for the lack of differences are discussed, including the need to conduct the study remotely, rather than in a controlled laboratory setup, due to the ongoing COVID-19 pandemic. This work brings into question the role of agency and fear in uncontrolled and possibly more ecologically valid scenarios.
Estimating the safety function response time for wireless control systems
by Victoria Pimentel Guerra, Safety function response time (SFRT) is a metric for safety-critical automation systems defined in the IEC 61784-3-3 standard for single-input, single-output systems communicating over wired technologies. This thesis proposes a model to estimate the SFRT for multiple-input, multiple-output feedback control systems communicating over the IEEE 802.15.4e wireless medium access control standard designed for process automation. The wireless SFRT model provides equations for the worst-case delay time and watchdog timer of participating network entities, including wireless communication channels. Thirty-nine on-board, wired, and wireless control experiments using real devices were carried out to evaluate control performance and the applicability of the wireless SFRT model. The estimated SFRT for the wired implementation is 38.2 ms. For the wireless experiments, the best SFRT obtained was 655.4 ms with no acceptable packet loss. The wireless implementation failed to provide successful control in 15 of the 21 experiments.
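As a rough illustration of the kind of calculation involved, the sketch below sums per-entity worst-case delay times and adds a single watchdog time to cover one fault, in the spirit of IEC 61784-3-style SFRT estimation; the entity values are invented, and this is not the thesis's wireless model, which also accounts for the timing of the wireless channels.

```python
# Hedged sketch of an SFRT-style estimate in the spirit of IEC 61784-3:
# sum the worst-case delay times (WCDT) of the entities on the safety path and
# add one watchdog time to cover a single fault.  All values below are invented;
# the thesis's model adds terms for the wireless communication channels.
def estimate_sfrt(wcdt_ms, watchdog_ms):
    """wcdt_ms: per-entity worst-case delay times; watchdog_ms: per-entity watchdog timers."""
    return sum(wcdt_ms) + max(watchdog_ms)

# Example path: input device, controller, output device, and two network links.
wcdt = [5.0, 10.0, 5.0, 9.1, 9.1]            # ms, illustrative only
watchdogs = [10.0, 20.0, 10.0, 15.0, 15.0]   # ms, illustrative only
print(f"estimated SFRT: {estimate_sfrt(wcdt, watchdogs):.1f} ms")   # 58.2 ms
```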
Explainable deep learning for detecting cyber threats
by Samaneh Mahdavifar, Cyber threats have imperiled the security and viability of many entities in this rapidly evolving data-driven world. In this regard, security specialists are designing new mechanisms to deal with ongoing cyber-attacks, including network intrusions, malware infections, phishing attacks, and website defacements. There has been a growing trend of applying deep learning techniques to image processing, speech recognition, self-driving cars, and even health care. Several deep learning models have been employed to detect cyber threats; nevertheless, they suffer from not being explainable to security experts. Security experts not only need to detect the incoming threat but also need to know the features that caused the security incident. Thus, despite the enormous potential deep learning algorithms have shown, their opacity could be a hurdle to realizing their full-fledged application. Extracting if-then rules from deep neural networks is a powerful explanation method to capture non-linear local behaviours. However, there are several shortcomings in employing existing rule extraction methods in cybersecurity applications. (1) The extracted rules are not accurate and comprehensible enough. (2) The rule extraction methods are not efficient because of costly pre- and post-processing tasks. (3) The rule extraction methods are not scalable to large datasets and deep neural network architectures with multiple layers and neurons. (4) Some of the algorithms are not flexible: they extract rules limited to non-continuous attributes or conjunctive conditions, or are dedicated to only binary classification tasks. These deficiencies are some of the major reasons why an explainable deep neural network for cyber threat detection is not yet viable. In this thesis, we seek to design and develop an explainable deep neural network to explain how security-related input data is classified. The core idea is to extract if-then rules from a deep neural network to increase transparency and reflect the input conditions under which the output is true or false. Based on the proposed explainable deep neural network framework, we design and develop two models, namely DeepMACIE and CapsRule, optimized for security applications with distinct characteristics regarding the complexity of the decision boundary, data types and ranges, classification tasks, and dataset size. We evaluate DeepMACIE and CapsRule on the most up-to-date and comprehensive security datasets, including the UCI phishing websites dataset, our own generated Android malware dataset, and the CIC-DDoS2019 dataset. Extensive evaluations show that both models generate accurate, high-fidelity, and comprehensible rules. We verify that the learned features from the rulesets match our domain-specific knowledge. The extracted rules will help security experts understand the relationship between a cyber threat detection system's output decision and the input features. They are also suitable for approximating the non-linear relationships in the training data and significantly reduce the number of false positives in attack detection. Furthermore, the rulesets can help find flaws in the dataset generation process and erroneous patterns caused by attack simulators. The comprehensible rulesets are generalized enough to be applied as supplemental material to complement human intelligence in cyber threat detection systems.
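For readers unfamiliar with rule extraction, the following is a generic surrogate-tree baseline, sketched only to illustrate the idea of approximating a black-box classifier with if-then rules; it is not the DeepMACIE or CapsRule algorithm, and the data and feature names are synthetic.

```python
# Hedged sketch of a generic rule-extraction baseline (not DeepMACIE/CapsRule):
# train a shallow decision tree on the *predictions* of a black-box classifier,
# then read its root-to-leaf paths as if-then rules approximating the network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((1000, 4))                          # stand-in security features
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)          # stand-in "attack" label

black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

# Each printed path is an if-then rule over the input features.
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```

The printed paths have the if-then form the thesis targets, although the thesis extracts its rules with different machinery tailored to deep architectures and security data.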
Exploring an IPv6 protocol for mobile sensor network communication
by Weiqi Zhang, This research explores IPv6 in mobile wireless sensor networks (WSNs). An indoor mobile WSN testbed of length 24 m was built and used for mobile WSN testing. The test network enabled the use of one or two moving nodes and six stationary nodes. TelosB sensor nodes were used for testing. The thesis presents a detailed explanation of sending and receiving User Datagram Protocol (UDP) packets using the IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) software stack in the Berkeley Low-power Internet Protocol (BLIP) implementation of TinyOS 2.1.1 and 2.1.2. A Java-based Web application called WSNWeb was built to display real-time route topology changes and sensor data. The data is written to a log file and used to compute packet loss and determine the number of route topology changes. We created 35 test cases, 15 with two moving nodes and 20 with one moving node. On a test track, model train velocities between 0.076 m/s and 0.376 m/s were used, with three different routing table update periods (RTUPs) of 60 s, 6 s, and 0.6 s. The results show that, with one moving node, the 0.6 s RTUP has significantly higher packet loss (up to 1.4% compared to 0.16%) over a five-hour test than RTUPs of 60 s and 6 s. The two-moving-node tests show that the RTUP of 0.6 s still has a higher packet loss compared to RTUPs of 6 s and 60 s.
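As a host-side illustration only (the TelosB motes themselves run nesC/TinyOS BLIP code, not Python), the sketch below shows the basic UDP-over-IPv6 send/receive pattern that the stack provides; the loopback address, port, and payload are placeholders.

```python
# Host-side sketch of UDP over IPv6.  The real sensor-node code is written in
# nesC for TinyOS/BLIP; address, port, and payload here are hypothetical.
import socket

PORT = 7000
receiver = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
receiver.bind(("::1", PORT))                 # listen on the IPv6 loopback for the demo

sender = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sender.sendto(b"temperature=21.5", ("::1", PORT))

data, addr = receiver.recvfrom(1024)
print(f"received {data!r} from {addr[0]}")   # sensor-reading payload and source address
```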
Extracting feature words from customer reviews
by Ting Zhang, Potential customers often browse online reviews before buying products. Manufacturers also collect customer feedback from the reviews. It is very hard for customers and manufacturers to get useful information quickly from a large number of comments. Thus, automatic information extraction from reviews has become a significant problem. This thesis investigates feature word extraction. Feature words are product components or attributes indicating customer interests. Since there is no systematic study on feature word extraction, we first study three classic methods: (1) the frequency-based extraction method; (2) the Web PMI-based extraction method; (3) the rapid automatic keyword extraction (RAKE) method. To provide an objective evaluation, the performance of each method is validated and compared along the following dimensions: precision and recall, time complexity, and robustness. Then a new approach, the rapid feature word extraction (RFWE) method, is proposed to improve the performance. RFWE combines the techniques used in the popular methods and performs well in precision, recall, and runtime. RFWE is a strong option for users who want to extract feature words from customer reviews.
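To give a feel for the simplest of the three baselines, here is a hedged sketch of frequency-based feature word extraction: count candidate words across reviews and keep those above a frequency threshold. The tiny review set, stopword list, and threshold are invented for illustration and do not reflect the thesis's implementation.

```python
# Hedged sketch of the frequency-based baseline for feature-word extraction:
# count candidate words across reviews and keep the ones that recur.
# The review list, stopword list, and threshold are illustrative only.
from collections import Counter
import re

reviews = [
    "The battery life is great but the screen is dim.",
    "Screen quality is poor, battery lasts long.",
    "Love the camera, battery could be better.",
]

STOPWORDS = {"the", "is", "but", "be", "could", "and", "a", "lasts", "long",
             "great", "dim", "poor", "better", "love"}

def candidate_words(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

counts = Counter(w for r in reviews for w in candidate_words(r))
feature_words = [w for w, c in counts.most_common() if c >= 2]
print(feature_words)        # -> ['battery', 'screen']
```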
FRIEND: a brain-monitoring agent architecture for adaptive system
by Alexis Morris, Brain-monitoring is rapidly becoming an important field of research, with potentially significant impacts on how people interact with technology. As the inner workings of the brain become better understood, sensing technologies are also advancing, becoming smaller, cheaper, and ubiquitous. It is expected that new forms of computing that take advantage of brain state information to deduce user mental contexts (emotions, intentions, and moods) will be developed. This capability would enable systems to perform streamlined user-interaction, monitoring, and assistance, as they would access, manage, and respond to real-time brain state dynamics for adaptive applications and services. In this new domain of brain-monitoring, particularly for non-rehabilitative purposes, there are few studies that consider how to leverage distributed agent architectures. Additionally, current approaches to brain monitoring systems have tended toward non-scalable, single user, single application situations. However, for a ubiquitous system, it is unrealistic for each possible application to have the specialized overhead required; hence a distributed, yet still personalized, approach is essential. To realize this, a multi-purpose agent system for brain-monitoring and management of brain context is the goal of this work. It involves the selection of a brain-monitoring paradigm, an agent architecture, an inferencing mechanism, and the combination of the three towards a unified framework. This general framework is implemented and tested on an application scenario, leveraging brain context as part of a service-oriented architecture. Finally, an assessment is conducted of the technology, studying the implications of the system. By contributing a unique methodology and approach to making such systems tenable, this work helps to pave the way toward making futuristic, adaptive, human-aware information systems that are both effective and practical.
Feasibility of deception in code attribution
by Alina Matyukhina, Code authorship attribution is the process used to identify the probable author of given code, based on unique characteristics that reflect an author's programming style. Inspired by social studies in the attribution of literary works, in the past two decades researchers have examined the effectiveness of code attribution in the computer software domain, including computer security. Authorship attribution techniques have found broad application in code plagiarism detection, biometric research, forensics, and malware analysis. Studies show that analysis of software can effectively unveil the digital identity of a programmer, reflected through variables and structures, programming language, employed development tools, their settings and, more importantly, how and what these tools are being used to do. Authorship attribution has been a productive area of research when it can be assumed that the author of an unknown program has been honest in their writing style and has not tried to modify it. In this thesis, we investigate the feasibility of deceiving source code attribution techniques. We begin by exploring how data characteristics and feature selection influence both the accuracy and performance of attribution methods. Within this context, it is necessary to understand whether the results obtained by previous studies depend on the data source, quality, and context or on the type of features used. This gives us the opportunity to dive deeper into the process of code authorship attribution and understand its potential weaknesses. To evaluate current code attribution systems, we present an adversarial model defined by the adversary's goals, knowledge, and capabilities; for each group, we categorize them by the possible variations. Modeling the role of attackers figures prominently in enhancing cybersecurity defense. We believe that having a solid understanding of the possible attacks can help in the research and deployment of reliable code authorship attribution systems. We present an author imitation attack that deceives current authorship attribution systems by imitating the coding style of a targeted developer. We investigate the attack's feasibility on open-source software repositories. To subvert an author imitation attack and to help protect the developer's privacy, we introduce an author obfuscation method and novel coding style transformations. The idea of author obfuscation is to allow authors to preserve the readability of their source code while removing identifying stylistic features that can be leveraged for code attribution. Code obfuscation, common in software development, typically aims to disguise the appearance of the code, making it difficult to understand and reverse engineer. In contrast, the proposed author obfuscation hides the original author's style while leaving the source code visible, readable, and understandable. In summary, this thesis presents original research work that not only advances knowledge in the code authorship attribution field but also contributes to the overall safety of our digital world by providing author obfuscation methods to protect the privacy of developers.
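For context, here is a generic stylometric attribution baseline, sketched only to illustrate the style-as-features idea that such systems (and hence imitation and obfuscation attacks) rely on; it is not the thesis's method, and the code snippets, author names, and model choice are invented.

```python
# Hedged sketch of a generic stylometric attribution baseline (not the thesis's
# method): represent each source snippet by character n-gram frequencies and
# train a classifier to predict its author.  Snippets and authors are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

code_samples = [
    "for (int i = 0; i < n; i++) { total += a[i]; }",
    "for(int i=0;i<n;++i){total+=a[i];}",
    "int idx = 0;\nwhile (idx < n) { total = total + a[idx]; idx++; }",
    "while(idx<n){ total+=a[idx]; idx+=1; }",
]
authors = ["alice", "bob", "alice", "bob"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),   # style-sensitive features
    LogisticRegression(max_iter=1000),
)
model.fit(code_samples, authors)
print(model.predict(["for (int j = 0; j < m; j++) { s += b[j]; }"]))  # likely 'alice'
```

An imitation attack in this setting would rewrite a snippet so its character n-gram profile resembles the target author's, while author obfuscation would transform the code to blur exactly these stylistic signals without harming readability.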
Game-based myoelectric muscle training
by Aaron Tabor, For new myoelectric prosthesis users, muscle training is a critical step that promotes effective use and long-term adoption of the prosthesis. Training, however, currently has several problems: 1) existing approaches require expensive tools and clinical expertise, restricting their use to the clinical environment, 2) exercises are boring, repetitive, and uninformative, making it difficult for patients to stay motivated, 3) assessment tools focus exclusively on improvements in functional, real-world prosthesis tasks, which conflicts with other therapeutic goals in early training, and 4) little is known about the effects of longer-term training because existing studies have subjected participants to a very short series of training sessions. While myoelectric training games have been proposed to create a more motivating training environment, commercially available games still exhibit many of these issues. Furthermore, current research presents inconsistent findings and conflicting results, making it unclear whether games hold therapeutic value. This research demonstrates that training games can be designed to address these issues by developing a low-cost, easy-to-use training game that targets the therapeutic goals of myoelectric training. Guidelines for promoting a fun, engaging, and informative training experience were identified by engaging prosthesis users and clinical experts throughout the design of a myoelectric training game. Furthermore, a newly developed set of metrics was used to demonstrate improvement in participants’ underlying muscle control throughout a series of game-based training sessions, further suggesting that games can be designed to provide therapeutic value. This work introduces an open-source training game, demonstrates the therapeutic value of games for myoelectric training, and presents insight that will be applicable to both future research on myoelectric training as well as aspects of training in clinical practice.
Game-theoretic defensive approaches for forensic investigators against anti-forensics
by Saeed Shafiee Hasanabadi, Forensic investigators employ methods, procedures, and tools of digital forensics to identify and present reliable evidence in court against attackers' crimes. However, attackers employ a set of malicious methods and tools, known as anti-forensics, to impact the results of digital forensics and even mislead a forensic investigation. Therefore, to address the challenging threat of anti-forensics in forensic investigations, investigators employ counter-anti-forensics to detect anti-forensics. A review of previous studies in digital forensics shows shortcomings related to the evaluation of forensic tools, the acceleration of forensic methods, and the lack of research on understanding the attacker's behaviour. The review also shows shortcomings in the area of anti-forensics: the need for additional research on anti-forensics, on understanding the attacker's behaviour when he/she employs anti-forensics, and on the evaluation of forensic tools against anti-forensics. In a forensic environment, the attacker and the investigator interact rationally and competitively to increase their payoffs. The simulation of their interactions can provide beneficial knowledge for the investigator. However, simulating their interactions in the real world requires enormous financial and human resources. Game theory provides a capability for simulating their interactions. However, employing game-theoretic algorithms to simulate their interactions in the forensic environment requires dealing with some shortcomings: 1) the need to address the players' capability to expand their action spaces in the forensic environment; 2) the necessity of constructing a beneficial model of the attacker's behaviour when he/she employs anti-forensics; 3) the need for a criterion to compare the performance of game-theoretic algorithms; and 4) the need to address acceleration for current memory mechanisms. Therefore, in this thesis, we propose a memory-based game-theoretic defensive approach for forensic investigators against anti-forensics. The approach lets us simulate interactions between an attacker and an investigator (the players) in the forensic environment when the attacker employs anti-forensics while the investigator uses counter-anti-forensics. The approach enables the investigator to identify the most stable and desired defensive strategies against the attacker's most stable and desired offensive strategy. The investigator can also assess existing counter-anti-forensics using the approach. We identify a set of comprehensive characteristics regarding the players' interactions in the forensic environment to profile potential game-theoretic algorithms and models. Next, we evaluate them using a set of criteria to choose the most coordinated game-theoretic algorithms and models for the simulation of interactions. We consider anti-forensics (i.e. rootkits, backdoors, and Trojans) to define the attacker's action spaces and counter-anti-forensics (i.e. anti-rootkits, anti-backdoors, and anti-Trojans) to determine the investigator's action spaces, and we build three datasets. We formulate the players' payoff functions and calculate their payoff matrices. Finally, the fictitious and gradient play algorithms are selected as the most coordinated game-theoretic algorithms.
Furthermore, to give the players the capability to expand their action spaces in the forensic environment and to examine the Nash equilibrium of the game without re-simulating the game from the beginning, we propose a memory component and introduce an extended game-theoretic algorithm. We identify the fictitious play algorithm as the best game-theoretic algorithm and introduce assistive rules for the investigator.
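To illustrate the algorithm family named above, the sketch below runs standard fictitious play on a small two-player matrix game; the payoff matrix, player labels, and zero-sum assumption are invented for illustration, and this is not the thesis's extended, memory-based variant.

```python
# Hedged sketch of standard fictitious play on a 2x2 zero-sum matrix game
# (illustrating the algorithm family only, not the thesis's extended variant).
import numpy as np

# Row player = investigator, column player = attacker; entries are the
# investigator's payoffs for each (defence, attack) strategy pair.  Values invented.
payoff = np.array([[ 2.0, -1.0],
                   [-1.0,  1.0]])

row_counts = np.ones(payoff.shape[0])     # empirical counts of each player's past plays
col_counts = np.ones(payoff.shape[1])

for _ in range(5000):
    col_belief = col_counts / col_counts.sum()
    row_belief = row_counts / row_counts.sum()
    best_row = np.argmax(payoff @ col_belief)     # best response to attacker's history
    best_col = np.argmin(row_belief @ payoff)     # attacker minimises investigator payoff
    row_counts[best_row] += 1
    col_counts[best_col] += 1

print("investigator strategy:", np.round(row_counts / row_counts.sum(), 3))
print("attacker strategy:   ", np.round(col_counts / col_counts.sum(), 3))
```

The empirical play frequencies converge toward the game's mixed equilibrium, which is the sense in which such an algorithm can surface the "most stable and desired" strategies for both players.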
Generating SADI semantic web services from declarative descriptions
by Mohammad Sadnan Al Manir, Accessing information stored in databases remains a challenge for many types of end users. In contrast, accessing information from knowledge bases allows for more intuitive query formulation techniques. Whereas knowledge bases can be directly instantiated by materializing data according to a reference semantic model, a more scalable approach is to rely on queries formulated using ontologies and rewritten as database queries at query time. Both of these approaches allow semantic querying, which involves the application of domain knowledge written in the form of axioms and declarative semantic mapping rules. In neither case are users required to interact with the underlying database schemas. A further approach offering semantic querying relies on SADI Semantic Web services to access relational databases. In this approach, services brokering access to specific data sets can be automatically discovered, orchestrated into workflows, and invoked to execute queries performing data retrieval or data transformation. This can be achieved using specialized query clients built for interfacing with services. Although this approach provides a successful way of accessing data, creating services requires advanced skills in modeling RDF data and domain ontologies, as well as writing program code and SQL queries. In this thesis we propose the Valet SADI framework as a solution for automating SADI Semantic Web service creation. Valet SADI represents a novel architecture comprising four modules which work together to generate and populate services into queryable registries. In the first module, declarative semantic mappings are written between source databases and domain ontologies. In the second module, the inputs and outputs of a service are defined in a service ontology with reference to domain ontologies. The third module automatically creates uninstantiated SQL queries based on a semantic query, the target database, domain ontologies, and mapping rules. The fourth module produces the source code for a complete and functional SADI service containing the SQL query. The inputs to the first two modules are verified manually, while the other modules are fully automated. Valet SADI is demonstrated in two use cases, namely the creation of a queryable registry of services for surveillance of hospital-acquired infections, and the preservation of interoperability in a malaria surveillance infrastructure.
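As a toy illustration of the mapping idea behind the third module, the sketch below rewrites a single ontology predicate into a parameterized SQL query via a declarative lookup table; the predicate, table, and column names are invented, and this is not Valet SADI's actual mapping language or code generator.

```python
# Hedged sketch of the idea behind declarative semantic mappings: a domain-ontology
# predicate is mapped to a SQL template, so a semantic query can be rewritten as a
# database query without the user touching the schema.  All names are hypothetical.
MAPPINGS = {
    # ontology predicate -> (table, subject column, object column)
    "hasInfection": ("patient_infections", "patient_id", "infection_code"),
}

def rewrite(predicate, subject_value):
    """Rewrite a (predicate, subject) pair into a parameterized SQL query."""
    table, subj_col, obj_col = MAPPINGS[predicate]
    sql = f"SELECT {obj_col} FROM {table} WHERE {subj_col} = %s"
    return sql, (subject_value,)

sql, params = rewrite("hasInfection", "P-1042")
print(sql, params)   # SELECT infection_code FROM patient_infections WHERE patient_id = %s
```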

