Faculty of Computer Science (Fredericton)


Game-based myoelectric muscle training
by Aaron Tabor, For new myoelectric prosthesis users, muscle training is a critical step that promotes effective use and long-term adoption of the prosthesis. Training, however, currently has several problems: 1) existing approaches require expensive tools and clinical expertise, restricting their use to the clinical environment, 2) exercises are boring, repetitive, and uninformative, making it difficult for patients to stay motivated, 3) assessment tools focus exclusively on improvements in functional, real-world prosthesis tasks, which conflicts with other therapeutic goals in early training, and 4) little is known about the effects of longer-term training because existing studies have subjected participants to a very short series of training sessions. While myoelectric training games have been proposed to create a more motivating training environment, commercially available games still exhibit many of these issues. Furthermore, current research presents inconsistent findings and conflicting results, making it unclear whether games hold therapeutic value. This research demonstrates that training games can be designed to address these issues by developing a low-cost, easy-to-use training game that targets the therapeutic goals of myoelectric training. Guidelines for promoting a fun, engaging, and informative training experience were identified by engaging prosthesis users and clinical experts throughout the design of a myoelectric training game. Furthermore, a newly developed set of metrics was used to demonstrate improvement in participants’ underlying muscle control throughout a series of game-based training sessions, further suggesting that games can be designed to provide therapeutic value. This work introduces an open-source training game, demonstrates the therapeutic value of games for myoelectric training, and presents insight that will be applicable to both future research on myoelectric training as well as aspects of training in clinical practice.
Game-theoretic defensive approaches for forensic investigators against anti-forensics
by Saeed Shafiee Hasanabadi, Forensic investigators employ the methods, procedures, and tools of digital forensics to identify and present reliable evidence of attackers' crimes in court. Attackers, however, employ malicious methods and tools known as anti-forensics to undermine the results of digital forensics and even mislead an investigation. To counter this threat, investigators employ counter-anti-forensics to detect anti-forensic activity. A review of previous studies in digital forensics reveals shortcomings related to the evaluation of forensic tools, the acceleration of forensic methods, and the lack of research on understanding the attacker's behaviour. The review also reveals shortcomings in the area of anti-forensics: the need for additional research on anti-forensics, on the attacker's behaviour when employing anti-forensics, and on the evaluation of forensic tools against anti-forensics. In a forensic environment, the attacker and the investigator interact rationally and competitively to increase their payoffs. Simulating their interactions can provide beneficial knowledge for the investigator, but simulating them in the real world requires enormous financial and human resources. Game theory provides a capability for simulating these interactions; however, employing game-theoretic algorithms in the forensic environment requires addressing several shortcomings.
The shortcomings are: 1) the need to address the players' capability to expand their action spaces in the forensic environment; 2) the need for a useful model of the attacker's behaviour when employing anti-forensics; 3) the need for a criterion to compare the performance of game-theoretic algorithms; and 4) the need to accelerate current memory mechanisms. Therefore, in this thesis, we propose a memory-based game-theoretic defensive approach for forensic investigators against anti-forensics. The approach lets us simulate interactions between an attacker and an investigator (the players) in the forensic environment when the attacker employs anti-forensics while the investigator uses counter-anti-forensics. It enables the investigator to identify the most stable and desirable defensive strategies against the attacker's most stable and desirable offensive strategy, and to assess existing counter-anti-forensics. We identify a comprehensive set of characteristics of the players' interactions in the forensic environment to profile potential game-theoretic algorithms and models, then evaluate them against a set of criteria to choose those best suited to simulating the interactions. We consider anti-forensics (e.g., rootkits, backdoors, and Trojans) to define the attacker's action space and counter-anti-forensics (e.g., anti-rootkits, anti-backdoors, and anti-Trojans) to define the investigator's action space, and we build three datasets. We formulate the players' payoff functions and calculate their payoff matrices. Finally, the fictitious play and gradient play algorithms are selected as the most suitable game-theoretic algorithms.
Furthermore, to give the players the capability to expand their action spaces in the forensic environment and to examine the Nash equilibrium of the game without re-simulating it from the beginning, we propose a memory component and introduce an extended game-theoretic algorithm. We identify the fictitious play algorithm as the best game-theoretic algorithm and introduce assistive rules for the investigator., Electronic Only.
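The fictitious play algorithm the thesis selects can be illustrated with a minimal sketch: each player repeatedly best-responds to the opponent's empirical mixture of past actions. The payoff matrix below (matching pennies) is illustrative only and does not reproduce the thesis's attacker/investigator payoffs.

```python
# Fictitious play for a two-player zero-sum game: each player best-responds
# to the opponent's empirical strategy. Payoff values are illustrative only.

def fictitious_play(payoff, rounds=2000):
    """payoff[i][j]: row player's payoff when row plays i and column plays j
    (zero-sum: the column player receives -payoff[i][j])."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts = [0] * n_rows   # how often the row player chose each action
    col_counts = [0] * n_cols
    row_play, col_play = 0, 0   # arbitrary initial actions
    for _ in range(rounds):
        row_counts[row_play] += 1
        col_counts[col_play] += 1
        # Row best-responds to the column player's empirical mixture.
        row_play = max(range(n_rows),
                       key=lambda i: sum(payoff[i][j] * col_counts[j]
                                         for j in range(n_cols)))
        # Column minimizes the row player's expected payoff.
        col_play = min(range(n_cols),
                       key=lambda j: sum(payoff[i][j] * row_counts[i]
                                         for i in range(n_rows)))
    return ([c / rounds for c in row_counts],
            [c / rounds for c in col_counts])

# Matching pennies: the unique equilibrium mixes 50/50 on both sides, and the
# empirical frequencies of fictitious play converge toward it.
rows, cols = fictitious_play([[1, -1], [-1, 1]])
```

The empirical frequencies returned here approximate the mixed-strategy Nash equilibrium, which is the quantity the thesis's memory component avoids recomputing from scratch when action spaces grow.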
Generating SADI semantic web services from declarative descriptions
by Mohammad Sadnan Al Manir, Accessing information stored in databases remains a challenge for many types of end users. In contrast, accessing information from knowledge bases allows for more intuitive query formulation techniques. Whereas knowledge bases can be directly instantiated by materializing data according to a reference semantic model, a more scalable approach is to rely on queries formulated using ontologies and rewritten as database queries at query time. Both approaches allow semantic querying, which involves applying domain knowledge written in the form of axioms and declarative semantic mapping rules. In neither case are users required to interact with the underlying database schemas. A further approach to semantic querying relies on SADI Semantic Web services to access relational databases. In this approach, services brokering access to specific data sets can be automatically discovered, orchestrated into workflows, and invoked to execute queries performing data retrieval or data transformation. This can be achieved using specialized query clients built for interfacing with services. Although this approach provides a successful way of accessing data, creating services requires advanced skills in modeling RDF data and domain ontologies, writing program code, and composing SQL queries. In this thesis we propose the Valet SADI framework as a solution for automating the creation of SADI Semantic Web services. Valet SADI is a novel architecture comprising four modules which work together to generate and populate services into queryable registries. In the first module, declarative semantic mappings are written between source databases and domain ontologies. In the second module, the inputs and outputs of a service are defined in a service ontology with reference to domain ontologies. The third module creates un-instantiated SQL queries automatically based on a semantic query, the target database, domain ontologies, and mapping rules.
The fourth module produces the source code for a complete and functional SADI service containing the SQL query. The inputs to the first two modules are verified manually while the other modules are fully automated. Valet SADI is demonstrated in two use cases, namely, the creation of a queryable registry of services for surveillance of hospital acquired infections, and the preservation of interoperability in a malaria surveillance infrastructure.
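The core idea of the third module, rewriting a semantic query as SQL via declarative mappings, can be sketched in miniature. All names below (hasInfection, patient_infections, the column names) are hypothetical, and Valet SADI's actual mapping language is far richer than this single-table lookup.

```python
# Toy sketch of rule-driven SQL generation: a declarative mapping ties an
# ontology predicate to a table and its columns, and a triple pattern over
# that predicate is rewritten as an SQL query. All identifiers are invented.

MAPPINGS = {
    "hasInfection": {"table": "patient_infections",
                     "subject": "patient_id", "object": "infection_code"},
}

def rewrite(predicate, subject_var, object_var):
    """Rewrite a triple pattern (?s predicate ?o) as an SQL SELECT."""
    m = MAPPINGS[predicate]
    return (f"SELECT {m['subject']} AS {subject_var}, "
            f"{m['object']} AS {object_var} FROM {m['table']}")

sql = rewrite("hasInfection", "patient", "infection")
```

The point of the sketch is the division of labour: the mapping is declarative data, so the generator code never changes when a new predicate-to-table rule is added.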
Generating realistic trace files for memory management simulators by instrumenting IBM's J9 Java Virtual Machine
by Johannes Ilisei, High-level programming languages like Java, C#, or Python rely on memory management systems that allocate and free objects automatically. A Java Virtual Machine (JVM) is responsible for executing compiled Java code. Several JVM implementations are available, refined over many years with reductions in execution time and memory footprint as well as the addition of new features. JVM implementations are large projects consisting of many files, classes, and functions, so changing or extending the code can be a difficult and time-consuming task. Therefore, simulators that reproduce desired JVM operations are available and can be used to implement and test new features quickly. As with the Java Virtual Machine, a simulator requires instructions in the form of input files describing which operations to perform. These files are called trace files, and they are generated with an instrumented JVM: relevant operations are captured and printed to a file while the JVM runs. This master's thesis focuses on generating trace files that represent JVM operations as realistically as possible. At the start of this project, two types of trace file generators already existed. Unfortunately, both contain errors that lead to a false representation of the JVM, making results gathered from simulators unreliable. A new form of trace file generation is required that produces correct inputs for a simulator. The project presented in this thesis captures JVM operations directly from the bytecode instructions of IBM's J9 Java Virtual Machine. In addition, a comparison between previous and new trace files and their different effects on the simulator is part of this thesis., M.C.S. University of New Brunswick, Faculty of Computer Science, 2017.
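The capture-then-replay workflow the abstract describes can be sketched end to end. The three-field line format (operation, object id, size) is an assumption for illustration; J9's real instrumentation records far more detail per event.

```python
# Minimal sketch of a trace-file workflow: an instrumented runtime appends
# one line per memory event, and a simulator later replays those lines.
# The "op id size" format is invented for this example.

import io

def write_trace(events, out):
    """Emit one line per (op, object id, size) event."""
    for op, obj_id, size in events:
        out.write(f"{op} {obj_id} {size}\n")

def replay(trace_lines):
    """Replay a trace and return the peak number of live bytes."""
    live, peak = {}, 0
    for line in trace_lines:
        op, obj_id, size = line.split()
        if op == "alloc":
            live[obj_id] = int(size)
        elif op == "free":
            live.pop(obj_id, None)
        peak = max(peak, sum(live.values()))
    return peak

buf = io.StringIO()
write_trace([("alloc", "a", 64), ("alloc", "b", 32),
             ("free", "a", 0), ("alloc", "c", 16)], buf)
peak = replay(buf.getvalue().splitlines())  # peak live bytes = 64 + 32 = 96
```

This also shows why generator errors matter: a missing or misordered event line changes the replayed peak, so any simulator statistic derived from the trace inherits the error.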
Grailog KS Viz 2.0: graph-logic knowledge visualization by XML-based translation
by Leah Bidlake, Knowledge visualization is the expression of knowledge through graphical presentations with the goal of validating or communicating knowledge. Formal knowledge, which is used in Data Modeling, the Semantic Web, etc., is based on ontologies and rules, which can be represented in (Description and Horn) logics and presented as (generalized) graphs. Graph Inscribed Logic (Grailog) can be used to visualize RuleML knowledge. The earlier Grailog KS Viz transforms Datalog RuleML to Grailog visualizations in Scalable Vector Graphics (SVG). This thesis develops a tool, Grailog KS Viz 2.0, that is able to visualize Horn Logic (Hornlog) with Equality. It uses XSLT 2.0 with internal JavaScript to process arbitrary levels of function nesting in a recursive manner. The tool has also been extended from n-ary relations with n ≥ 2 to those with n ≥ 1 (including classes as unary relations), based on the labelnode normal form of Grailog. JavaScript is used to calculate the coordinates for positioning, and determines the dimensions of, the SVG elements and viewport, but is no longer required in the static image. Our Purifier thus removes the internal JavaScript from the static Grailog/SVG visualization generated by the tool. This assures that there are no malicious scripts, reduces the time required to render the Grailog/SVG visualization, and greatly reduces the final file size. The visualization of function applications with multiple levels of nesting generated by Grailog KS Viz 2.0 was evaluated using test cases that illuminate knowledge about graph-theoretical definitions. A larger use case was developed for teaching the business rules of managing the financial aspect of a non-profit organization. The processing speed as well as quality and accuracy of the rendered SVG are consistently high across common modern Web browsers. Grailog KS Viz 2.0 thus provides increased security, expressivity, and efficiency for viewing, sharing, and storing Grailog/SVG visualizations.
High performance Python through workload acceleration with OMR JitBuilder
by Dayton J. Allen, Python remains one of the most popular programming languages in many domains, including scientific computing. Its reference implementation, CPython, is by far the most used version. CPython's runtime is bytecode-interpreted and leaves much to be desired when it comes to performance. Several attempts have been made to improve CPython's performance, such as reimplementing performance-critical code in a more high-performance language (e.g. C, C++, Rust), or transpiling Python source code to a more high-performance language, which is then called from within CPython through some form of FFI mechanism. Another approach is to JIT compile performance-critical Python methods or utilize alternate implementations that include a JIT compiler. JitBuilder provides a simplified interface to the underlying compiler technology available in Eclipse OMR. We propose using JitBuilder to accelerate performance-critical workloads in Python. By creating Python bindings to JitBuilder's public interface, we can generate native code callable from within CPython without any modifications to its runtime. Results demonstrate that our approach rivals, and in many cases outperforms, state-of-the-art JIT-compiler-based approaches in the current ecosystem, namely Numba and PyPy.
High-level synthesis improvements and optimizations in Odin II
by Bo Yan, A Field-Programmable Gate Array (FPGA) is an integrated circuit that allows users to program product features and functions after manufacturing. Verilog-to-Routing (VTR) is an open-source CAD tool for conducting FPGA architecture and CAD research. As one of the core tools of VTR, Odin II is responsible for Verilog elaboration and hard block synthesis. This project describes improvements to Odin II in three areas: for-loop support, AST simplification, and hard block reduction. The for loop is an important statement in Verilog HDL and should be supported by Odin II. There are different entry points for simplifying an AST, and this thesis demonstrates three: simplifying expressions with variables, replacing parameters with their values, and using shift operations, where possible, in place of multiplication or division. Depending on the Verilog code, some hard blocks in the netlist share the same high-level function, and this project provides a method to reduce such redundant hard blocks. Each implementation is tested with designed test cases or sets of standard benchmarks, and the results of running them through Odin II and VTR are shown. The results demonstrate the improvements., Electronic Only. (UNB thesis number) Thesis 9518. (OCoLC) 965908870., M.C.S., University of New Brunswick, Faculty of Computer Science, 2014.
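One of the AST simplifications mentioned, replacing multiplication or division by a power of two with a shift, can be modelled in a few lines. This sketches the transformation itself, not Odin II's internal AST representation, and the tuple encoding of expressions is invented for illustration.

```python
# Strength reduction: a multiply or divide by a power-of-two constant is
# rewritten as a left or right shift. Expressions are modelled as simple
# (operator, operand, constant) tuples for the purpose of this sketch.

def strength_reduce(op, operand, constant):
    """Return an equivalent (op, operand, amount), shifting when possible."""
    if constant > 0 and constant & (constant - 1) == 0:  # power of two
        shift = constant.bit_length() - 1                # e.g. 8 -> 3
        if op == "*":
            return ("<<", operand, shift)   # a * 8  ==  a << 3
        if op == "/":
            return (">>", operand, shift)   # a / 4  ==  a >> 2 (unsigned)
    return (op, operand, constant)          # not reducible: leave unchanged
```

A synthesis tool benefits because a shift maps to cheap wiring or a small shifter, whereas a general multiplier or divider costs a hard block or a large soft-logic circuit. Note the right-shift form is only exact for unsigned (or non-negative) operands.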
How the visual design of video game antagonists affects perception of morality
by Reyhan Pradantyo, Antagonists play an important role in video games, as they often act as the source of a game's main challenge. A key part of how antagonists are experienced is through their visual design. Antagonists differ from other characters in that they are typically viewed as being immoral. However, there is limited research focused specifically on how antagonists are visually designed, and how this affects players' perceptions of antagonist morality. To build this understanding, we gathered people's ratings of 105 antagonists. By examining the correlation between the prominence of antagonists' visual attributes and how “bad” participants perceive a character, our findings provide new insight into the design of characters. We also show how the antagonist designs in our sample show a spectrum of morality and are not always perceived as purely or clearly immoral. We provide an improved understanding of game design practices and explore how they can be better studied and supported.
ILP models for scheduling while minimizing peak power consumption
by Damian Jewett, The Peak Power Minimization Scheduling Problem (PPMSP) is a job shop scheduling problem where peak power consumption is minimized, as opposed to makespan, total cost or some other common objective. A formal integer linear programming (ILP) model is developed for this scheduling problem, called the initial PPMSP model. This initial model is then used to create the Scheduler, an application for creating production schedules given unscheduled sets of production data constrained under precedence relations. The Scheduler uses a free solver called GLPSOL. The Estimator is another application that, given a production schedule, generates a plot of the expected power consumption over the course of the schedule. Later, an alternate PPMSP model is discussed, which aims to improve solution times by using fewer binary variables. Testing indicates that the alternate model provides no significant improvement in practice. Much better solution times can be achieved with more powerful solvers, such as CPLEX., Electronic Only. (UNB thesis number) Thesis 9455. (OCoLC)956660240., M.C.S. University of New Brunswick, Faculty of Computer Science, 2014.
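The PPMSP objective can be illustrated without a solver: pick start times within a horizon so that the peak of the summed power draws is minimized. The brute-force search below stands in for the ILP model only to make the objective concrete; the thesis's actual models run through GLPSOL or CPLEX, and the job data here is invented.

```python
# Brute-force illustration of the peak-power-minimization objective:
# enumerate every start-time assignment and keep the one whose summed
# power profile has the lowest peak. Job data (duration, power) is invented.

from itertools import product

def peak_power(jobs, starts, horizon):
    """Peak of the summed power profile for the given start times."""
    load = [0] * horizon
    for (dur, power), s in zip(jobs, starts):
        for t in range(s, s + dur):
            load[t] += power
    return max(load)

def schedule(jobs, horizon):
    """Try every feasible start-time assignment; return (best peak, starts)."""
    ranges = [range(horizon - dur + 1) for dur, _ in jobs]
    best = (float("inf"), None)
    for starts in product(*ranges):
        best = min(best, (peak_power(jobs, starts, horizon), starts))
    return best

# Two 2-step jobs drawing 5 units each fit a 4-step horizon without overlap,
# so the minimal peak equals a single job's draw rather than the sum.
best_peak, best_starts = schedule([(2, 5), (2, 5)], horizon=4)
```

The exponential enumeration is exactly what the ILP formulation avoids: binary start-time variables plus linear peak constraints let a solver prune the same search systematically.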
Implementing a content-based recommender system for news readers
by Mahta Moattari, Recommender systems are widely used to suggest items to users based on users' interests. Content-based recommender systems are popular, specifically in the area of news services. This report describes the implementation of an effective online news recommender system combining two different algorithms. The first algorithm takes users' activity histories as input, processes this data using a Bayesian framework to predict users' genuine interests [10], and suggests new articles based on those interests. The second algorithm finds keyword matches between a user's keywords and new articles' keywords to suggest new articles to that user. The Java language was used to implement these algorithms. To test the system, ten users were chosen randomly from among those who posted comments on more than 50 articles between 2012/05/01 and 2012/07/30. These experiments show that our system successfully suggested new articles to users based on their fields of interest., A Report Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Computer Science in the Graduate Academic Unit of Computer Science Electronic Only. (UNB thesis number) Thesis 9192. (OCoLC) 960905592, M.C.S., University of New Brunswick, Faculty of Computer Science, 2013.
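The keyword-matching algorithm can be sketched as a set-overlap ranking. The report's implementation is in Java and its scoring details are not given, so this Python sketch, with invented article data, only illustrates the matching idea.

```python
# Keyword-overlap recommender sketch: rank articles by how many keywords
# they share with the user's profile; break score ties alphabetically.
# Article titles and keyword sets are invented for illustration.

def recommend(user_keywords, articles, top_n=2):
    """articles: {title: set of keywords}; returns best-matching titles."""
    scored = [(len(user_keywords & kws), title)
              for title, kws in articles.items()]
    scored = [(s, t) for s, t in scored if s > 0]   # drop zero-overlap items
    scored.sort(key=lambda p: (-p[0], p[1]))        # high score first
    return [t for _, t in scored[:top_n]]

picks = recommend({"election", "economy"},
                  {"Vote count begins": {"election", "politics"},
                   "Markets rally":     {"economy", "stocks"},
                   "Local recipes":     {"food"}})
```

A Bayesian interest model, as in the report's first algorithm, would replace the raw overlap count with a probability that the user is interested, but the ranking step stays the same.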
Improved ordering of ESOP cubes for Toffoli networks
by Zakaria Hamza, Logic synthesis deals with the problem of finding a cost-effective realization of a given logic function, drawing on several state-of-the-art techniques and tools of mathematical origin. In recent years, reversible logic has been suggested as a way to address the power consumption associated with computation. Accomplishing this requires the synthesis of reversible logic functions, and several new synthesis methods have been developed. In this thesis, methods are proposed that improve on a given synthesis method. In particular, the focus is on optimizing the class of circuits that use the Exclusive-or Sum of Products (ESOP) representation. The advantage of this representation format is the ease of mapping the function to a network of Toffoli logic gates. However, this synthesis technique produces non-optimal results that can be improved, a problem rooted in both the representation and mapping processes of synthesis. It is well known that the order of the terms in the ESOP expression has a direct effect on the cost of the implementation, and the problem of finding the optimal order can be mapped to the Generalized Traveling Salesman Problem. Another route of optimization involves reducing the number of terms used to represent the function, which can be achieved through a canonical representation of functions. Both approaches have proven to offer enhancements over existing synthesis techniques and are developed in this thesis. Experimental results show that significant improvements can be achieved with the proposed methods., (UNB thesis number) Thesis 8777. (OCoLC) 810261535., M.C.S. University of New Brunswick, Faculty of Computer Science, 2011.
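The ordering idea can be made concrete with a small sketch: adjacent ESOP cubes that differ in few literals yield cheaper gate transitions, so ordering cubes to keep neighbours similar resembles finding a travelling-salesman tour. Hamming distance as the cost and a greedy nearest-neighbour tour are simplifications of the thesis's Generalized TSP formulation, used here only to show the shape of the problem.

```python
# Greedy nearest-neighbour ordering of ESOP cubes, using Hamming distance
# between cube strings as a stand-in for the true transition cost.

def hamming(a, b):
    """Number of positions where two equal-length cube strings differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_order(cubes):
    """Start from the first cube; repeatedly append the nearest remaining one."""
    order = [cubes[0]]
    remaining = list(cubes[1:])
    while remaining:
        nxt = min(remaining, key=lambda c: hamming(order[-1], c))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# Starting from "000", each greedy step moves to a cube one flip away,
# instead of the costlier jumps in the original listing order.
order = greedy_order(["000", "110", "100", "111"])
```

A greedy tour is only a heuristic; the Generalized TSP view in the thesis allows choosing among equivalent cube representatives as well as among orders, which is where the further cost reductions come from.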

