Faculty of Computer Science (Fredericton)


Collaborative content distribution with device-to-device communications
by Jianguo Xie, With the increasing penetration of smart devices, device-to-device (D2D) communications offer a promising paradigm to accommodate ever-growing mobile traffic demands. The redundant storage and communication capacities of smart devices can be exploited for collaborative content caching and distribution. In this thesis, we first studied the D2D pairing problem, which pairs a device requesting a content item with a nearby device that caches the requested item. We formulated the D2D pairing problem as an integer linear program (ILP) and developed a heuristic channel-aware algorithm to solve it. Computer simulations were conducted to compare the channel-aware algorithm with the optimal solution, as well as with a minimum-distance-based algorithm and a random algorithm. The results show that the channel-aware algorithm outperforms the random algorithm in terms of the total number of served D2D pairs and the average latency of served pairs. Then, we studied the message allocation problem, which allocates message requests to be served by the cache devices via D2D multicast. Aiming to minimize the total transmission cost or maximize the gain in cost saving for the base station (BS), this message allocation problem can be formulated from different perspectives: as a weighted set cover problem (WSCP), a hypergraph matching problem, or a multiple-choice knapsack problem (MCKP). We compared three algorithms, one for each formulation: a greedy algorithm, a heuristic algorithm based on Lagrangian relaxation, and a fully polynomial-time approximation scheme (FPTAS), respectively. The simulation results show that the WSCP-based algorithm outperforms the other two in static scenarios in terms of D2D offload ratio and total cost. In dynamic scenarios, the MCKP-based algorithm performs best because the approximation guarantee of the FPTAS yields solutions closest to the optimum.
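The greedy strategy for the WSCP formulation can be illustrated with a minimal sketch. This is not the thesis's implementation: the candidate multicast transmissions, their costs, and the request sets below are hypothetical, and the rule shown is the standard greedy set-cover heuristic of repeatedly picking the transmission with the lowest cost per newly covered request.

```python
# Hypothetical sketch: each candidate D2D multicast transmission covers a set
# of message requests at some cost; greedy picks the cheapest per new request.

def greedy_wscp(requests, transmissions):
    """requests: set of request ids; transmissions: list of (cost, covered_set)."""
    uncovered = set(requests)
    chosen = []
    total_cost = 0.0
    while uncovered:
        best, best_ratio = None, float("inf")
        for cost, covered in transmissions:
            gain = len(covered & uncovered)
            if gain == 0:
                continue
            ratio = cost / gain  # cost per newly covered request
            if ratio < best_ratio:
                best_ratio, best = ratio, (cost, covered)
        if best is None:  # remaining requests would fall back to the BS
            break
        cost, covered = best
        chosen.append(best)
        total_cost += cost
        uncovered -= covered
    return chosen, total_cost, uncovered

# Toy instance: five requests, three candidate transmissions.
sets = [(3.0, {1, 2, 3}), (2.0, {3, 4}), (1.0, {4, 5})]
picked, cost, left = greedy_wscp({1, 2, 3, 4, 5}, sets)
```

On this toy instance the greedy rule first takes the (1.0, {4, 5}) transmission (ratio 0.5), then (3.0, {1, 2, 3}), covering everything at total cost 4.0.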
Collective decision making using conditional preference networks without considering all users' preferences over all attributes
by Mina Joroughi, Due to the great increase in e-marketing and high competition to attract more customers and keep them satisfied, collective decision making, the process of making recommendations to a group of people, has become an active research area. CP-nets (Conditional Preference Networks) are a widely used tool to represent users' preferences, but the problem is that in real-world situations, we have a large number of users conveying their preferences over a large number of attributes; therefore, comparing the exponential number of outcomes for all users in a collective decision making process is infeasible. In this research, we have looked at reducing the number of outcomes to be considered in the process of collective decision making by examining the question of whether we need to consider all users' preferences over all attributes. We propose a novel procedure for collective decision making by clustering users and considering users' preferences only over the most important attributes for each cluster. The use of attribute-weighting techniques and clustering methods allows for searching in a much smaller subspace of attributes and consequently requires a smaller number of comparisons between the outcomes, which makes our method more practical for real-world problems. To keep users as satisfied as possible, our methods produce two different kinds of outcomes: a global recommended outcome and cluster-specific outcomes, which can be offered in different situations. The results of our experiments demonstrate that the methods can produce high-quality recommendations despite the fact that users' preferences over many attributes are ignored.
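The core space reduction can be sketched as follows. The attribute weights and domains here are hypothetical placeholders, not the thesis's actual attribute-weighting technique; the sketch only shows how restricting attention to the top-k weighted attributes shrinks the outcome space to be compared.

```python
import itertools

def top_attributes(weights, k):
    """Pick the k highest-weighted attributes for a cluster."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

def reduced_outcomes(domains, important):
    """Enumerate outcomes only over the selected important attributes."""
    names = list(important)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(domains[a] for a in names))]

# Hypothetical cluster: four binary attributes, but only two matter here.
weights = {"price": 0.9, "colour": 0.2, "brand": 0.7, "size": 0.1}
domains = {a: ["low", "high"] for a in weights}
keep = top_attributes(weights, 2)        # ['price', 'brand']
space = reduced_outcomes(domains, keep)  # 4 outcomes instead of 16
```

With k = 2 of 4 binary attributes, the number of outcomes to compare drops from 2^4 = 16 to 2^2 = 4, which is the kind of saving that makes pairwise outcome comparisons tractable.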
Combining legacy modernization approaches for OO and SOA
by Lubna Tahlawi, Organizations with older legacy systems face a number of challenges, including obsolescent technologies, brittle software, integration with modern applications, and rarity of properly skilled human resources. An increasingly common strategy for addressing such challenges is application modernization, which transforms legacy applications into (a) newer object-oriented programming languages, and (b) modern Service-Oriented Architecture (SOA). Published approaches to legacy application modernization focus either on technology transformation or SOA transformation, but not both. Given that both types of transformation are desirable, it is valuable to explore how to combine existing approaches to perform both transformation types within a single project. This thesis proposes principles for combining such approaches, and demonstrates how these principles can be applied through an example of a combined approach along with a simulated application of this example. The results of this simulated application leave us with considerable confidence that both transformations can be successfully incorporated into a combined project.
Communication-efficient privacy-preserving query for fog-enhanced Internet of Things
by Nafiseh Izadi Yekta, The Internet of Things (IoT) has attracted significant attention in recent years, and various IoT devices, including industrial and utility components and other items embedded with electronics, sensors, and network connectivity, already provide rich services to end users. IoT devices constantly report their data directly to the cloud, which causes big data challenges in both storage and transmission. The classic centralized cloud computing paradigm can deal with the storage issues; however, it faces the challenges of limited capacity, high latency, security and privacy, and network failure. To address these challenges, the concept of fog computing was proposed by Cisco [8]. Instead of sending all the data to the cloud for processing and storing, fog computing provides local data processing capability and storage at fog devices. The goal of fog computing is to improve efficiency and reduce the amount of data transmitted to the cloud. Nevertheless, IoT still faces some security and privacy challenges. Query service is one of the standard services in IoT applications: an end user requests a value from an IoT device, and the server is responsible for returning the value from that specific device as per the query. In some IoT scenarios, privacy preservation may be required for both the client and the service provider. Therefore, privacy-preserving query schemes are desirable in some IoT applications. In this work, we proposed two privacy-preserving query schemes with efficient communications. PQuery is characterized by combining private information retrieval and 1-out-of-m oblivious transfer techniques to preserve privacy for both the end user and the service provider in an IoT query service. From the performance analysis, PQuery is very efficient in terms of communication overhead, i.e., achieving O(n^(1/3)) communication between the end user and the fog device.
However, we also realized that the computational costs of PQuery are not efficient, especially at the fog device. Therefore, in the second work, we tried to achieve a better balance between the communication and computational costs. We proposed XRQuery, which is inspired by the XNOR gates in logical circuits, to achieve privacy preservation for both the service provider and the user in an IoT query service. XRQuery is highly efficient in terms of communication cost, achieving O(log n) between the end user and the fog device, and extensive performance evaluations show that it is much faster than PQuery in all three stages (end-user query, fog-device response, and end-user result checking).
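The XOR/XNOR intuition behind such schemes can be illustrated with a classic two-server XOR-based private retrieval toy. This is a textbook construction, not the thesis's actual PQuery or XRQuery protocol: each server sees only a random-looking index set, yet XOR-ing the two answers recovers the requested item.

```python
import secrets

def xor_pir_demo(database, i):
    """Toy two-server XOR-based private retrieval of database[i].

    The client sends a random index set S to one server and S with the
    membership of i flipped to the other; XOR-ing the two subset-XOR
    answers cancels everything except database[i].
    """
    n = len(database)
    s = {j for j in range(n) if secrets.randbits(1)}  # random subset S
    s2 = s ^ {i}  # symmetric difference flips membership of i

    def server_answer(subset):
        ans = 0
        for j in subset:
            ans ^= database[j]
        return ans

    return server_answer(s) ^ server_answer(s2)

db = [0x2A, 0x17, 0x99, 0x5C]
assert all(xor_pir_demo(db, i) == db[i] for i in range(len(db)))
```

Neither server alone learns which index was requested, since each sees only a uniformly random subset; correctness holds regardless of the random choice of S.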
Concurrent task execution on the Intel Xeon Phi
by Yucheng Zhu, The Intel Xeon Phi coprocessor is a new choice for the high-performance computing industry and it needs to be tested. In this thesis, we compared the difference in performance between the Xeon Phi and the GPU. The Smith-Waterman algorithm is a widely used algorithm for solving the sequence alignment problem. We implemented two parallel versions of the Smith-Waterman algorithm, for the Xeon Phi and the GPU. Inspired by CUDA streams, which enable concurrent kernel execution on Nvidia's GPUs, we propose a socket-based mechanism to enable concurrent task execution on the Xeon Phi. We then compared our socket implementation with Intel's offload mode and with an Nvidia GPU. The results showed that our socket implementation performs better than the offload mode but is still not as good as the GPU. M.C.S., University of New Brunswick, Faculty of Computer Science, 2015.
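For reference, the recurrence that both parallel implementations compute can be sketched serially. This is a minimal scoring-only Smith-Waterman with linear gap penalties, not the thesis's Xeon Phi or GPU code; the parallel versions exploit the fact that all cells on the same anti-diagonal of the matrix are independent and can be computed concurrently.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Serial Smith-Waterman: best local alignment score of strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]  # DP matrix, first row/col stay 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at 0.
            h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
            best = max(best, h[i][j])
    return best

score = smith_waterman("ACGT", "ACGT")  # identical strings: 4 matches * 2 = 8
```

Cells (i, j) with the same i + j lie on one anti-diagonal and depend only on earlier anti-diagonals, which is the wavefront parallelism that both the GPU and the socket-based Phi implementations rely on.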
Contextualized embeddings encode knowledge of English verb-noun combination idiomaticity
by Samin Fakharian, English verb-noun combinations (VNCs) consist of a verb with a noun in its direct object position, and can be used as idioms or as literal combinations (e.g., hit the road). As VNCs are commonly used in language and their meaning is often not predictable, they are an essential topic of research for NLP. In this study, we propose a supervised approach to distinguish idiomatic and literal usages of VNCs in text based on contextualized representations, specifically BERT and RoBERTa. We show that this model using contextualized embeddings outperforms previous approaches, including the case in which the model is tested on instances of VNC types that were not observed during training. We further consider the incorporation of linguistic knowledge of the lexico-syntactic fixedness of VNCs into our model. Our findings indicate that contextualized embeddings capture this information. Electronic Only.
Conversation-based P2P botnet detection with decision fusion
by Shaojun Zhang, Botnets have been identified as one of the most dangerous threats on the Internet. A botnet is a collection of compromised computers, called zombies or bots, controlled by malicious machines called botmasters through the command and control (C&C) channel. Botnets can be used for plenty of malicious behaviours, including DDoS attacks, spam, and stealing sensitive information, to name a few, all of which can be very serious threats to parts of the Internet. In this thesis, we propose a peer-to-peer (P2P) botnet detection approach based on 30-second conversations. To the best of our knowledge, this is the first time conversation-based features are used to detect P2P botnets. The features extracted from conversations can differentiate P2P botnet conversations from normal conversations by applying machine learning techniques. Also, feature selection processes are carried out in order to reduce the dimension of the feature vectors. Decision tree (DT) and support vector machine (SVM) classifiers are applied to classify the normal conversations and the P2P botnet conversations. Finally, the results from the different classifiers are combined based on probability models in order to get a better result. Electronic Only. (UNB thesis number) Thesis 9143 (OCoLC) 960860070, M.C.S., University of New Brunswick, Faculty of Computer Science, 2013.
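The final fusion step can be illustrated with a simple sum-rule combination of classifier posteriors. The thesis's exact probability model is not given in the abstract, so this is a generic sketch with hypothetical numbers: each classifier outputs P(normal) and P(botnet) for a conversation, and the fused decision takes the class with the higher weighted-average posterior.

```python
def fuse_probabilities(prob_lists, weights=None):
    """Sum-rule fusion: weighted average of per-classifier class posteriors."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n   # default: equal weight per classifier
    fused = [0.0, 0.0]                   # [P(normal), P(botnet)]
    for w, probs in zip(weights, prob_lists):
        fused[0] += w * probs[0]
        fused[1] += w * probs[1]
    return fused

# Hypothetical outputs for one conversation:
dt = [0.4, 0.6]    # decision tree: 60% botnet
svm = [0.1, 0.9]   # SVM: 90% botnet
fused = fuse_probabilities([dt, svm])
label = "botnet" if fused[1] > fused[0] else "normal"
```

With equal weights the fused posterior is [0.25, 0.75], so the conversation is labelled botnet even though the two classifiers disagreed on confidence; unequal weights could favour whichever classifier validates better.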
Core task assistance in video games
by Jawad Jandali Refai, Video games can be challenging, which is part of what makes games stimulating and entertaining. However, if they are too challenging, the player may find them frustrating. Game designers may balance their game by providing players with assistance. Previous work explores the effectiveness of potential assistance techniques within a particular genre and platform, but complex games can require several types of assistance to support a wide variety of gameplay mechanics, so designers would need to gather information from scattered sources to make informed decisions about applying the optimal assistance. In this thesis, we propose a generalized framework for assistance in games, irrespective of genre or target platform. We achieve this by discussing techniques targeted at the 10 fundamental core tasks in video games that form the basis of any game mechanic, such as Aiming, Reaction Time, and Visual Search. We also explore the best practices for choosing, interpreting, and implementing each of the 35 assistance techniques.
Correlation between computer recognized facial emotions and informed emotions during a casino computer game
by Nils Reichert, Emotions play an important role in everyday communication. Different methods allow computers to recognize emotions. Most are trained with acted emotions, and it is unknown whether such a model would work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which could detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded. The software recognition was correlated with that of ten human observers. The results showed strong recognition for happy, medium recognition for surprised, and weak recognition for sad and angry faces. In addition, questionnaires containing self-reported emotions were compared with the computer recognition, but only weak correlations were found. SHORE was able to recognize emotions almost as well as humans were, but when humans had trouble recognizing an emotion, the accuracy of the software was much lower.
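The comparisons above rest on correlating software scores with observer ratings. The abstract does not name the measure used, but a standard Pearson correlation, common for this kind of agreement analysis, can be sketched as follows; the two rating vectors below are hypothetical.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)  # assumes neither sequence is constant

# Hypothetical per-clip "happy" scores: SHORE vs. averaged human observers.
software = [0.9, 0.2, 0.4, 0.8, 0.1]
humans = [0.8, 0.3, 0.5, 0.7, 0.2]
r = pearson_r(software, humans)  # close to 1.0: strong agreement
```

Values of r near 1 would correspond to the "strong recognition" reported for happy faces, while values near 0 would match the weak correlations found for the self-reported emotions.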
Cross-lingual word embeddings for low-resource and morphologically-rich languages
by Ali Hakimi Parizi, Despite recent advances in natural language processing, there is still a gap in state-of-the-art methods to address problems related to low-resource and morphologically-rich languages. These methods are data-hungry, and due to the scarcity of training data for low-resource and morphologically-rich languages, developing NLP tools for them is a challenging task. Approaches for forming cross-lingual embeddings and transferring knowledge from a rich- to a low-resource language have emerged to overcome the lack of training data. Although in recent years we have seen major improvements in cross-lingual methods, these methods still have some limitations that have not been addressed properly. An important problem is the out-of-vocabulary (OOV) word problem, i.e., words that occur in a document being processed but that the model did not observe during training. The OOV problem is more significant in the case of low-resource languages, since there is relatively little training data available for them, and also in the case of morphologically-rich languages, since it is very likely that we do not observe a considerable number of their word forms in the training data. Approaches to learning sub-word embeddings have been proposed to address the OOV problem in monolingual models, but most prior work has not considered sub-word embeddings in cross-lingual models. The hypothesis of this thesis is that it is possible to leverage sub-word information to overcome the OOV problem in low-resource and morphologically-rich languages. This thesis presents a novel bilingual lexicon induction task to demonstrate the effectiveness of sub-word information in the cross-lingual space and how it can be employed to overcome the OOV problem.
Moreover, this thesis presents a novel cross-lingual word representation method that incorporates sub-word information during the training process to learn a better cross-lingual shared space and also better represent OOVs in the shared space. This method is particularly suitable for low-resource scenarios and this claim is proven through a series of experiments on bilingual lexicon induction, monolingual word similarity, and a downstream task, document classification. More specifically, it is shown that this method is suitable for low-resource languages by conducting bilingual lexicon induction on twelve low-resource and morphologically-rich languages.
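The general idea of representing an OOV word from sub-word information can be illustrated with a fastText-style sketch: the word's vector is composed from the vectors of its character n-grams. This is a generic illustration, not the thesis's proposed cross-lingual method, and the n-gram vectors below are hypothetical.

```python
def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams of a word, with boundary markers as in fastText."""
    token = f"<{word}>"
    return [token[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(token) - n + 1)]

def oov_vector(word, ngram_vectors, dim):
    """Average the vectors of the word's known character n-grams."""
    grams = [g for g in char_ngrams(word) if g in ngram_vectors]
    if not grams:
        return [0.0] * dim  # no sub-word evidence at all
    vec = [0.0] * dim
    for g in grams:
        for k in range(dim):
            vec[k] += ngram_vectors[g][k]
    return [v / len(grams) for v in vec]

# Hypothetical 2-d n-gram vectors; only two of "cat"'s n-grams are known.
ngram_vecs = {"<ca": [1.0, 0.0], "cat": [0.0, 1.0]}
vec = oov_vector("cat", ngram_vecs, 2)
```

Because the n-gram inventory is shared with in-vocabulary words, an unseen word form still lands near morphologically related words, which is exactly the property that matters for OOVs in a cross-lingual shared space.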
Cryptanalysis of a knapsack cryptosystem
by Ruqey Alhassawi, Knapsack cryptosystems are classified as public key cryptosystems. This kind of cryptosystem uses two different keys for the encryption and decryption process, a feature intended to offer strong security because the decryption key cannot easily be derived from the encryption key. Since the Merkle-Hellman knapsack cryptosystem, the first proposed version of knapsack cryptosystems, many knapsack cryptosystems have been suggested. Unfortunately, most knapsack cryptosystems introduced so far are not secure against cryptanalytic attacks, which find weaknesses in the designs of the knapsack cipher. Two cryptanalysis methods are covered in this thesis: the Shamir attack on the Merkle-Hellman knapsack, and the basis reduction algorithm (the LLL algorithm). Accordingly, the main goal of this thesis is to implement these two knapsack cryptanalytic attacks as Visual Basic programs, used for testing many versions of knapsack cryptosystems, including a newly invented knapsack system. The results of the testing show that the knapsack cryptosystems are indeed weak, especially against the LLL basis reduction algorithm. This result does not appear to hold for all cases, such as the newly suggested knapsack system and the Super-Pascal knapsack cryptosystem. Electronic Only. (UNB thesis number) Thesis 9191 (OCoLC) 960905126, M.C.S., University of New Brunswick, Faculty of Computer Science, 2013.
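The Merkle-Hellman scheme that these attacks target can be sketched in a few lines. The key below is a textbook toy (far too small for any real use), and the thesis's Visual Basic programs are not reproduced here: a superincreasing private knapsack is disguised by modular multiplication, and decryption undoes the disguise and then solves the easy knapsack greedily.

```python
import math

def mh_public_key(superinc, q, r):
    """Public key b_i = r * w_i mod q from a superincreasing private key."""
    assert q > sum(superinc) and math.gcd(r, q) == 1
    return [(r * w) % q for w in superinc]

def mh_encrypt(bits, public):
    """Ciphertext is the subset sum of public-key elements selected by bits."""
    return sum(b for bit, b in zip(bits, public) if bit)

def mh_decrypt(c, superinc, q, r):
    """Undo the modular disguise, then solve the superincreasing knapsack."""
    s = (c * pow(r, -1, q)) % q       # modular inverse of r (Python 3.8+)
    bits = []
    for w in reversed(superinc):      # greedy from the largest weight down
        if s >= w:
            bits.append(1)
            s -= w
        else:
            bits.append(0)
    return bits[::-1]

w = [2, 7, 11, 21, 42, 89, 180, 354]  # superincreasing private key
q, r = 881, 588                       # modulus and multiplier
pub = mh_public_key(w, q, r)
bits = [0, 1, 1, 0, 0, 1, 0, 1]
c = mh_encrypt(bits, pub)             # c == 1482
recovered = mh_decrypt(c, w, q, r)
```

The public key looks like a hard general knapsack, but the hidden modular structure is exactly what Shamir's attack and LLL-based lattice reduction exploit to recover an equivalent trapdoor.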
Current security trends and assessment of cyber threats
by Bijiteshwar Rudra Aayush, Continuous functioning of critical infrastructure is one of the foundations for the socio-economic activities and development of a country. Owing to continuous technological development, computers, other computing services, software, and cyberspace are used for interconnection, information processing, and communication. This development, and the use of cyberspace, have created new threats and vulnerabilities that could pose at least as significant a threat as a physical attack. Lately, cybercriminals and terrorists have been using their skills to exploit cyberspace and commit severe crimes. The objectives of this Master's report are to explain the role of cyberspace and computing technologies in critical infrastructure and to highlight several cyber threats and countermeasures. This report also highlights the need for secure software development and explains how an average programmer can contribute to securing cyberspace and what effect that can have on national infrastructure. A Report Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Computer Science in the Graduate Academic Unit of Computer Science. Electronic Only. M.C.S., University of New Brunswick, Faculty of Computer Science, 2015.

