On the evaluation of adversarial vulnerabilities of deep neural networks

dc.contributor.advisor: Ghorbani, Ali
dc.contributor.author: Foumani, Mehrgan Khoshpasand
dc.date.accessioned: 2023-03-01T16:17:40Z
dc.date.available: 2023-03-01T16:17:40Z
dc.date.issued: 2020
dc.date.updated: 2023-03-01T15:01:24Z
dc.description.abstract: Adversarial examples are inputs designed by an adversary to fool machine learning models. Despite extensive study in recent years, and despite overly simplified threat assumptions, there is still no defense against adversarial examples for complex tasks (e.g., ImageNet). For more straightforward tasks like handwritten digit classification, however, a robust model seems to be within reach. Correctly estimating the adversarial robustness of models has been one of the main challenges researchers face. First, we present AETorch, a PyTorch library containing the strongest attacks and the common threat models, which makes comparing defenses easier. Most research on adversarial examples has focused on adding small ℓp-bounded perturbations to natural inputs under the assumption that the true label remains unchanged. However, robustness in ℓp-bounded settings does not guarantee general robustness. Additionally, we present an efficient technique for creating unrestricted adversarial examples using generative adversarial networks on the MNIST, SVHN, and Fashion-MNIST datasets. We demonstrate that even state-of-the-art adversarially robust MNIST classifiers are vulnerable to adversarial examples generated with this technique. We show that our method improves on previous unrestricted-attack techniques because it has access to a larger adversarial subspace, and that the examples it generates are transferable. Overall, our findings emphasize the need for further study of the vulnerability of neural networks to unrestricted adversarial examples.
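
As context for the ℓp-bounded threat model mentioned in the abstract, a minimal sketch of an ℓ∞-bounded projected-gradient attack in PyTorch is given below. It is illustrative only and is not AETorch's actual interface; the function name, the ε and step-size values, and the assumption that inputs are images scaled to [0, 1] are all placeholders.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    # Illustrative l-infinity-bounded PGD attack (a sketch, not AETorch's API).
    # Repeatedly steps in the sign of the loss gradient, then projects the
    # perturbed input back into the eps-ball around the natural input x.
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()            # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                           # keep pixels in [0, 1]
    return x_adv

A perturbation bounded this way preserves the true label by construction for small ε, which is exactly the assumption the thesis argues is too narrow: the GAN-based unrestricted examples it studies are not confined to such an ε-ball.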
dc.description.copyright: © Mehrgan Khoshpasand Foumani, 2020
dc.format: text/xml
dc.format.extent: xi, 74 pages
dc.format.medium: electronic
dc.identifier.uri: https://unbscholar.lib.unb.ca/handle/1882/13358
dc.language.iso: en_CA
dc.publisher: University of New Brunswick
dc.rights: http://purl.org/coar/access_right/c_abf2
dc.subject.discipline: Computer Science
dc.title: On the evaluation of adversarial vulnerabilities of deep neural networks
dc.type: master thesis
thesis.degree.discipline: Computer Science
thesis.degree.fullname: Master of Computer Science
thesis.degree.grantor: University of New Brunswick
thesis.degree.level: masters
thesis.degree.name: M.C.S.

Files

Original bundle: item.pdf (4.3 MB, Adobe Portable Document Format)