Browsing by Author "Herpers, Rainer"
Now showing 1 - 11 of 11
Accelerating the MMD Algorithm using the Cell Broadband Engine (2010)
Schlösser, Michael; Herpers, Rainer; Kent, Kenneth B.

Acceleration of Blob Detection in a Video Stream using Hardware (2010)
Bochem, Alexander; Herpers, Rainer; Kent, Kenneth B.
This report presents the implementation and evaluation of a computer vision problem on a Field Programmable Gate Array (FPGA). It builds upon previous work in which the feasibility of application-specific image processing algorithms on an FPGA platform was evaluated experimentally; the results and conclusions of that work form the starting point for the work described in this report. The project results show considerable improvements over previous implementations in processing performance and precision. Different algorithms for detecting Binary Large OBjects (BLOBs) more precisely have been implemented. In addition, the set of input devices for acquiring image data has been extended with a Charge-Coupled Device (CCD) camera. The main goal of the designed system is to detect BLOBs in continuous video material and compute their center points. This work belongs to the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent is to develop a passive tracking device for an immersive environment to improve user interaction and system usability. Therefore, the user's position and orientation relative to the projection surface must be detected. For a reliable estimation, a robust and fast computation of the BLOBs' center points is necessary. This project covered the development of a BLOB detection system on an Altera DE2 Development and Education Board with a Cyclone II FPGA. It detects binary spatially extended objects in image material and computes their center points. Two different sources have been applied to provide image material for the processing.
First, an analog composite video input, which can be attached to any compatible video device; second, a five-megapixel CCD camera attached to the DE2 board. The results are transmitted over the serial interface of the DE2 board to a PC for ground-truth validation and further processing. The evaluation compares the precision and performance gains depending on the applied computation methods and on the input device providing the image material.

Acceleration of Blob Detection Within Images in Hardware (2009)
Bochem, Alexander; Herpers, Rainer; Kent, Kenneth B.
This report presents the implementation and evaluation of a computer vision task on a Field Programmable Gate Array (FPGA). As an experimental approach to an application-specific image processing problem, it provides reliable measurements of the performance and precision gained compared with similar solutions on General Purpose Processor (GPP) architectures. The project addresses the problem of detecting Binary Large OBjects (BLOBs) in a continuous video stream. A number of solutions exist for this problem, but most are realized on GPP platforms, where resolution and processing speed define the performance barrier. With their opportunities for parallelization and hardware-level performance, FPGAs become an interesting alternative. This work belongs to the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. It addresses the detection of the user's position and orientation relative to the virtual environment in an Immersion Square. The goal is to develop a light-emitting device that points from the user towards the point of interest on the projection screen. The projected light dots are used to represent the user in the virtual environment. By detecting the light dots with video cameras, the relative position and orientation of the user can be inferred.
For that, the laser dots need to be arranged in a unique pattern, which requires at least five points. For a reliable estimation, a robust computation of the BLOBs' center points is necessary. This project covered the development of a BLOB detection system on an FPGA platform. It detects binary spatially extended objects in a continuous video stream and computes their center points. The results are displayed to the user and were validated against ground truth. The evaluation compares the precision and performance gains against similar approaches on GPP platforms.

Active tracking with accelerated image processing in hardware (University of New Brunswick, 2010)
Bochem, Alexander; Kent, Kenneth; Herpers, Rainer
This thesis presents the implementation and validation of image processing problems in hardware to estimate the gain in performance and precision. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the development cost of the implementation is an important aspect of the validation. The flexibility and extensibility that can be achieved by a modular FPGA design was another major aspect of the analysis. One problem addressed in this work is the tracking of detected BLOBs in continuous image material. This has been implemented for both the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the emitted light dots on the projection surface of the immersive environment.
Having the center points of those light dots would allow the estimation of the user's position and orientation. One major issue that makes computer vision problems computationally expensive is the high amount of data that has to be processed in real time. Therefore, one major target for the implementation was to achieve a processing speed of more than 30 frames per second, which would allow the system to provide feedback to the user faster than human visual perception. One problem that comes with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a pixel might be several square centimeters in size; a precision error of only a few pixels might thus lead to an offset of several centimeters in the estimated user position. In this research, a detection and tracking system for BLOBs has been developed and validated on a Cyclone II FPGA from Altera. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. Further extension of the design with other input devices or additional detection capabilities is possible, subject to constraints imposed by the available resources on the target platform. Additional modules have been designed for compressing the image data using run-length encoding and for sub-pixel precision of the computed BLOB center points. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded software implementation has been realized. The system can transmit the detection or tracking results over two available communication interfaces, USB and RS232. The analysis of the hardware solution showed similar precision for BLOB detection and tracking to the software approach. One problem is the large increase in allocated resources when extending the system to process more BLOBs.
With one of the target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution; designing solutions to such high-level problems in hardware is, in this case, more expensive than implementing them in software. The search and match steps of the tracking approach could be realized more efficiently and reliably in software. The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.

Authorship attribution in the dark web (University of New Brunswick, 2020)
Sennewald, Britta; Kent, Kenneth; Herpers, Rainer
This thesis is about authorship attribution (AA) within multiple Dark Web forums and the question of whether AA is possible beyond the boundaries of a single forum. AA can become a curse for users who try to protect their anonymity and, at the same time, a blessing for law enforcement groups that try to track users. To determine to what extent AA threatens the anonymity of Dark Web users, a dataset of four Dark Web forums was created. Within the analysis, two different approaches are considered: feeding classifiers with posts from two forums, and training classifiers on posts from a different forum than the one used for testing. Even for the largest dataset, the true author of a post is among the top three most likely candidates at least 94% of the time. This shows that AA can be a danger to the anonymity of Dark Web users across the boundaries of different forums.

Correlation between computer recognized facial emotions and informed emotions during a casino computer game (University of New Brunswick, 2012)
Reichert, Nils; Kent, Kenneth; Herpers, Rainer
Emotions play an important role in everyday communication. Different methods allow computers to recognize emotions.
Most are trained on acted emotions, and it is unknown whether such a model would work for recognizing naturally occurring emotions. An experiment was set up to estimate the recognition accuracy of the emotion recognition software SHORE, which could detect the emotions angry, happy, sad, and surprised. Subjects played a casino game while being recorded. The software's recognition was correlated with the recognition of ten human observers. The results showed strong recognition for happy faces, medium recognition for surprised faces, and weak recognition for sad and angry faces. In addition, questionnaires containing self-informed emotions were compared with the computer recognition, but only weak correlations were found. SHORE was able to recognize emotions almost as well as humans, but when humans had difficulty recognizing an emotion, the accuracy of the software was much lower.

Dynamic monitor allocation in the IBM J9 virtual machine (University of New Brunswick, 2013)
Dombrowski, Marcel; Kent, Kenneth; Herpers, Rainer
With the Java language and sandboxed environments becoming more and more popular, research needs to be conducted into improving the performance of these environments while decreasing their memory footprints. This thesis focuses on a dynamic approach to growing monitors for objects in order to reduce the memory footprint and improve the execution time of the IBM Java Virtual Machine. According to the Java Language Specification, every object must be usable for synchronization. This new approach allocates monitors only when required. The impact of this approach on performance and memory has been evaluated using different benchmarks, and future work is also discussed. On average, a performance increase of 0.6% and a memory reduction of about 5% have been achieved with this approach.
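The lazy-allocation idea behind that thesis can be sketched in a few lines. The following is an illustrative Python model only, not the actual J9 implementation: the registry class and its method names are invented for this sketch, which merely shows why on-demand monitor allocation saves memory when most objects are never synchronized on.

```python
import threading

class LazyMonitorRegistry:
    """Hypothetical sketch: allocate a lock ("monitor") for an object
    only the first time it is synchronized on, instead of reserving
    monitor storage for every object up front."""

    def __init__(self):
        self._monitors = {}              # id(obj) -> threading.Lock
        self._table_lock = threading.Lock()

    def monitor_for(self, obj):
        key = id(obj)
        with self._table_lock:
            # Grow the monitor table on demand: objects that are never
            # used for synchronization never pay the memory cost.
            if key not in self._monitors:
                self._monitors[key] = threading.Lock()
            return self._monitors[key]

    def allocated(self):
        return len(self._monitors)

registry = LazyMonitorRegistry()
a, b = object(), object()
with registry.monitor_for(a):
    pass  # only `a` has triggered a monitor allocation so far
```

After this snippet runs, `registry.allocated()` is 1 even though two objects exist, which is the memory-footprint argument in miniature.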
Enhancing the MMD algorithm in multi-core environments (University of New Brunswick, 2011)
Schlösser, Michael; Kent, Kenneth; Herpers, Rainer
The work done in this thesis enhances the MMD algorithm in multi-core environments. The MMD algorithm, a transformation-based algorithm for reversible logic synthesis, is based on the work of Maslov, Miller and Dueck and their original, sequential implementation. It synthesises a formal function specification, provided as a truth table, into a reversible network and can perform several optimization steps after the synthesis. This work concentrates on one of these optimization steps: template matching. This approach reduces the size of a reversible circuit by replacing a number of gates with a matching template that implements the same function using fewer gates. Smaller circuits have several benefits, since they need less area and are less costly. The template matching approach introduced in the original work is computationally expensive, since it tries to match a library of templates against the given circuit. For each template at each position in the circuit, a number of different combinations have to be calculated at runtime, resulting in high execution times, especially for large circuits. In order to make the template matching approach more efficient and usable, it has been reimplemented to take advantage of modern multi-core architectures such as the Cell Broadband Engine or a Graphics Processing Unit. For this work, two algorithmically different approaches that try to exploit each multi-core architecture's strengths have been analyzed and improved. For the analysis, these approaches have been cross-implemented on the two target hardware architectures and compared to the original parallel versions. Important metrics for this analysis are the execution time of the algorithm and the result of the minimization with the template matching approach.
It could be shown that the algorithmically different approaches produce the same minimization results, independent of the hardware architecture used. However, both cross-implementations also show a significantly higher execution time, which makes them practically irrelevant. The results of this first analysis and comparison led to the decision to enhance only the original parallel approaches. Using the same metrics for successful enhancement as mentioned above, it could be shown that improving the algorithmic concepts and exploiting the capabilities of the hardware leads to better execution times and minimization results compared to the original implementations.

Investigation of encrypted and obfuscated network traffic utilizing machine learning (University of New Brunswick, 2020)
Boldt, Kay-Uwe; Kent, Kenneth; Herpers, Rainer
This thesis utilizes machine learning to investigate the classification of the encryption applied to network traffic and of the underlying activities. It is motivated firstly by the difficulty of traditional traffic classification in the presence of additional encryption, as ports and headers are hidden. Secondly, the results also assess the effectiveness of currently available privacy-enhancing technologies. A new dataset is created containing Pure (without additional encryption), Tor, Tor with obfuscation, VPN, and VPN+Tor network traffic. Additionally, five different activities are performed during each kind of traffic recording, namely audio streaming, browsing, P2P and SFTP file transfers, and video conferencing. The traffic is classified by extracting features based on flows calculated by ARGUS and CICFlowMeter, combining three classifiers with seven feature selection algorithms. The results for the classification of the encryption are good and clearly indicate that this detection system, in modified form, could be used in a practical application.
For the detection of the activities inside the encrypted network traffic, the results show that the theoretical protection does not hold in practice. Overall, this reveals the need to improve the resistance of commonly used network traffic protection techniques against machine learning.

Tracing motivation in virtual agents (University of New Brunswick, 2013)
Süßenburger, Eckart; Horton, J.; Herpers, Rainer
Mathematical functions modelled on emotional concepts, intended to provide tools for emotion-based choice, are investigated. Emotions have long been treated as a disturbing factor for logical reasoning and decision making; now they are instead treated as a necessity and a requirement for agents in complex or not fully graspable situations with complicated or concurrent goals. The software SOCIAL, the 'Simulation Of Consciousness In Artificial Life', was developed to provide a research universe for virtual agents named BUGs. The emotion-based models of Affect Logic, Subsumption and Fungus Eater were compared against two randomizing benchmark approaches. To compare the success of the various routines, agents equipped with decision-making strategies modelled on these theoretical approaches were tested in predator-prey simulations with a limited energy supply. The quantitative aspect of survival was measured through experiments. The experiments include three kinds of simulations: simulations where all emotional routines compete inside BUGs of the same type, simulations where all types of BUGs are equipped with the same routine, and simulations with a mixture of type and emotional routine. The evolutionary concepts of selection and mutation were utilized to allow the BUGs to adapt their decision-making strategy to the current simulation. For simulations run within SOCIAL, the memory-based randomizing benchmark approach outperforms all sophisticated routines. Complex models of emotional choice seem not to benefit virtual agents, but rather to burden them.
This is probably not a general result, but limited to the specific experimental setups tested.

Visual exploration of changing FPGA architectures in the VTR project (University of New Brunswick, 2013)
Nasartschuk, Konstantin; Kent, Kenneth; Herpers, Rainer
Field Programmable Gate Arrays (FPGAs) are used for prototyping hardware as well as in applications with frequently changing requirements. Boolean circuits are produced from hardware description language files by a Computer Aided Design (CAD) flow in order to optimize applications for a specific architecture. The Verilog to Routing (VTR) project provides an FPGA CAD flow developed especially for academic and experimental purposes. The CAD flow consists of the tools Odin II, ABC and VPR. This project describes the development of a visualization component capable of showing the netlist produced and optimized by the CAD flow. The ability to simulate the shown circuit allows developers not only to explore the structure of a circuit but also to verify its functionality. The visualization is part of Odin II and uses its capabilities, such as file handling and simulation. The application aims to assist developers in exploring how a netlist changes during the workflow. The improvement of Odin II and its simulation component is part of the thesis. In addition, the ability to elaborate and simulate circuits with multiple clocks was added to the tool, and this functionality was embedded into the visualization component. Using the new abilities of Odin II in combination with the flexibility of the other tools in the VTR CAD flow, new FPGA architectures can be evaluated and tested. Designs which utilize multiple clocks in combination with hard logic can be elaborated, simulated and verified. The visual component provides functionality to assist this process, as netlists generated by Odin II and optimized by later stages of the CAD flow can be explored visually.
This includes a visual simulation as well as the exploration of activity estimation data. The improvements aim to assist research and experimentation with new FPGA architectures, which could benefit both research and industry.
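Several of the abstracts above revolve around detecting BLOBs in binary images and computing their center points. As a plain-software reference for what those FPGA pipelines compute, here is a minimal Python sketch; the function name and the choice of 4-connectivity are assumptions of this sketch, and the streaming and run-length details of the hardware designs are deliberately not modelled.

```python
from collections import deque

def blob_centroids(image):
    """Find 4-connected BLOBs in a binary image (list of 0/1 rows)
    and return their centroids as (row, col) floats."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if image[r][c] and not seen[r][c]:
                # Breadth-first flood fill collects one connected BLOB.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centroid = mean of member pixel coordinates; this is
                # the "center point" the theses estimate in hardware.
                centroids.append((sum(p[0] for p in pixels) / len(pixels),
                                  sum(p[1] for p in pixels) / len(pixels)))
    return centroids
```

For a 3x4 image with a 2x2 bright square and one isolated bright pixel, `blob_centroids` returns two centroids, one per BLOB; fractional centroid coordinates are the software analogue of the sub-pixel precision module described above.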