Browsing by Author "Cao, Hung"
Now showing 1 - 3 of 3
Item: Developing a Resource-Constraint EdgeAI model for Surface Defect Detection (2023)
Authors: Mih, Atah Nuh; Cao, Hung; Kawnine, Asfia; Wachowicz, Monica
Abstract: Resource constraints have restricted several EdgeAI applications to machine learning inference approaches, where models are trained on the cloud and deployed to the edge device. This poses challenges such as bandwidth, latency, and privacy concerns associated with storing data off-site for model building. Training on the edge device can overcome these challenges by eliminating the need to transfer data to another device for storage and model development. On-device training also provides robustness to data variations, as models can be retrained on newly acquired data to improve performance. We therefore propose a lightweight EdgeAI architecture, modified from Xception, for on-device training in a resource-constrained edge environment. We evaluate our model on a PCB defect detection task and compare its performance against existing lightweight models: MobileNetV2, EfficientNetV2B0, and MobileViT-XXS. Our experiments show that the model achieves a test accuracy of 73.45% without pre-training, comparable to non-pre-trained MobileViT-XXS (75.40%) and much better than the other non-pre-trained models (MobileNetV2: 50.05%; EfficientNetV2B0: 54.30%). Without pre-training, our model is also comparable to pre-trained MobileNetV2 (75.45%) and better than pre-trained EfficientNetV2B0 (58.10%). In terms of memory efficiency, our model outperforms EfficientNetV2B0 and MobileViT-XXS. We find that the resource efficiency of machine learning models does not depend solely on the number of parameters but also on architectural considerations.
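The abstract's closing point — that efficiency depends on architecture, not just parameter count — can be illustrated with a back-of-the-envelope comparison between a standard convolution and the depthwise-separable convolution that Xception-style architectures are built on. The layer shapes below are illustrative assumptions, not values from the thesis:

```python
# Parameter counts for a standard vs. depthwise-separable convolution.
# Channel counts and kernel size are illustrative, not from the thesis.

def standard_conv_params(c_in, c_out, k):
    """k x k convolution mapping c_in -> c_out channels (no bias)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k conv followed by a 1x1 pointwise conv (no bias),
    the building block of Xception-style lightweight architectures."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv to mix channels
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3
std = standard_conv_params(c_in, c_out, k)   # 294,912
sep = separable_conv_params(c_in, c_out, k)  # 33,920
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For this layer shape the separable form uses roughly 8.7x fewer parameters, which is why architectural choices like this can matter more than raw parameter counts for on-device training.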
Our method can be applied to other resource-constrained applications while maintaining strong performance.

Item: Developing an analytics everywhere framework for the Internet of Things in smart city applications (University of New Brunswick, 2019)
Authors: Cao, Hung; Wachowicz, Monica
Abstract: Despite many efforts on developing protocols, architectures, and physical infrastructures for the Internet of Things (IoT), previous research has failed to fully provide automated analytical capabilities for exploring IoT data streams in a timely way. Mobility and co-location, coupled with unprecedented volumes of data streams generated by geo-distributed IoT devices, create many data challenges for extracting meaningful insights. This research explores an edge-fog-cloud continuum to develop automated analytical tasks that not only provide higher-level intelligence from continuous IoT data streams but also generate long-term predictions from accumulated IoT data streams. Towards this end, a conceptual framework, called "Analytics Everywhere", is proposed to integrate analytical capabilities according to their data life-cycles using different computational resources. The framework rests on three pillars: resource capability, analytical capability, and data life-cycle. First, resource capability consists of a network of distributed compute nodes that can handle automated analytical tasks independently, in parallel, concurrently, or in a distributed manner. Second, analytical capability orchestrates the execution of algorithms to perform streaming descriptive, diagnostic, and predictive analytics. Finally, data life-cycles are designed to manage both continuous and accumulated IoT data streams. The research outcomes from a smart parking and a smart transit scenario confirm that a single computational resource is not sufficient to support all analytical capabilities needed for IoT applications.
Moreover, the implemented architecture relied on an edge-fog-cloud continuum and offered several empirical advantages: (1) on-demand and scalable storage; (2) seamless coordination of automated analytical tasks; (3) awareness of the geo-distribution and mobility of IoT devices; (4) latency-sensitive data life-cycles; and (5) mitigation of resource contention.

Item: Unlocking the benefits of transfer learning in edge-cloud computing environments (University of New Brunswick, 2024-04)
Authors: Nuh Mih, Atah; Cao, Hung
Abstract: Transfer learning's success motivates the need to understand its characteristics across cloud, edge, and edge-cloud computing paradigms. This research therefore evaluates the role of transfer learning in (1) cloud computing, (2) edge computing, and (3) edge-cloud computing. It first proposes a transfer learning approach to address the data-limitation and model-scalability challenges of machine learning in a cloud computing environment. It then provides a model optimization for deep neural networks that improves hardware efficiency when training models on edge devices, and investigates the effect of transfer learning on resource consumption. Finally, a weight-averaging method is proposed for collaborative knowledge transfer across a unified edge and cloud computing environment, improving training performance for both local edge models and global server models. The research conclusively shows that transfer learning benefits edge and cloud computing paradigms both individually and collaboratively.
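The weight-averaging idea in the last abstract can be sketched in a few lines. The thesis's exact method is not given here, so this is only a minimal, hypothetical illustration of sample-weighted averaging of per-layer weights from several edge models into a single global model; the function name and plain-list weight format are assumptions:

```python
# Minimal sketch of weight averaging across edge models, in the spirit of
# the collaborative edge-cloud knowledge transfer described above.
# The function name and the nested-list weight format are illustrative
# assumptions, not the thesis's actual implementation.

def average_weights(edge_weights, sample_counts):
    """Average per-layer weight vectors from several edge models,
    weighting each model by the number of samples it trained on."""
    total = sum(sample_counts)
    n_layers = len(edge_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_avg = [0.0] * len(edge_weights[0][layer])
        for model, count in zip(edge_weights, sample_counts):
            for i, w in enumerate(model[layer]):
                layer_avg[i] += w * (count / total)
        averaged.append(layer_avg)
    return averaged

# Two edge models, each with one layer of two weights.
edges = [[[1.0, 2.0]], [[3.0, 4.0]]]
counts = [1, 3]  # the second edge model trained on 3x the data
print(average_weights(edges, counts))  # [[2.5, 3.5]]
```

The global model produced this way can then be redistributed to the edge nodes, so that knowledge learned locally on one device benefits the others.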