Membership inference attacks on machine learning models: analysis and mitigation

dc.contributor.advisor: Alhadidi, Dima
dc.contributor.advisor: Hakak, Saqib
dc.contributor.author: Rahman Shuvo, Md Shamimur
dc.date.accessioned: 2023-03-01T16:35:33Z
dc.date.available: 2023-03-01T16:35:33Z
dc.date.issued: 2021
dc.date.updated: 2023-03-01T15:02:48Z
dc.description.abstract: Given a machine learning model and a record, a membership inference attack determines whether that record was used as part of the model's training dataset. Membership inference attacks can pose a risk to private datasets if these datasets are used to train machine learning models and access to the resulting models is open to the public. For example, knowing that a certain patient's record was used to train a model associated with a disease can reveal that the patient has this disease. To construct attack models, multiple shadow models are created that imitate the behavior of the target model, but for which we know the training datasets and thus the ground truth about membership in these datasets. Attack models are then trained on the labeled inputs and outputs of the shadow models. There is a need for further analysis of this attack and, accordingly, for robust mitigation techniques that do not affect the target model's utility. In this thesis, we discussed new combinations of parameters and settings not previously explored in the literature, providing useful insights into the behavior of the membership inference attack. We also proposed and evaluated different mitigation techniques against this type of attack, considering different training algorithms for the target model. Our experiments showed that the defense strategies mitigate the membership inference attack considerably while preserving the utility of the target model. Finally, we summarized the existing mitigation techniques and compared them with our results.
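
The shadow-model construction described in the abstract can be illustrated with a minimal sketch. The code below is not the thesis's experimental setup: the synthetic data, scikit-learn random forests, and a single attack model over the target's output probabilities are illustrative assumptions only.

# Minimal sketch of a shadow-model membership inference attack as described
# in the abstract. Synthetic data, scikit-learn, and a single attack model
# are illustrative assumptions, not the thesis's exact experimental setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Private dataset and the target model trained on part of it.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_member, X_rest, y_member, y_rest = train_test_split(X, y, train_size=1000,
                                                      random_state=0)
target_model = RandomForestClassifier(random_state=0).fit(X_member, y_member)

# Attacker-side data, split into a shadow-training pool and held-out records
# used later as known non-members when evaluating the attack.
X_pool, X_nonmember, y_pool, _ = train_test_split(X_rest, y_rest,
                                                  train_size=2000, random_state=1)

# Shadow models imitate the target; their training membership is known,
# so their outputs can be labeled as member (1) or non-member (0).
attack_X, attack_y = [], []
for i in range(5):
    X_in, X_out, y_in, y_out = train_test_split(X_pool, y_pool, train_size=0.5,
                                                random_state=i)
    shadow = RandomForestClassifier(random_state=i).fit(X_in, y_in)
    for part, label in ((X_in, 1), (X_out, 0)):
        attack_X.append(shadow.predict_proba(part))
        attack_y.append(np.full(len(part), label))

# Attack model: predicts membership from a model's output probability vector.
attack_model = RandomForestClassifier(random_state=0).fit(np.vstack(attack_X),
                                                          np.concatenate(attack_y))

# Evaluate against the real target: true members vs. records it never saw.
scores = np.vstack([target_model.predict_proba(X_member),
                    target_model.predict_proba(X_nonmember)])
truth = np.concatenate([np.ones(len(X_member)), np.zeros(len(X_nonmember))])
print("membership attack accuracy:", (attack_model.predict(scores) == truth).mean())

An attack accuracy well above 50% on this balanced member/non-member split would indicate leakage of membership information; mitigation techniques aim to push it back toward chance while keeping the target model's utility.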
dc.description.copyright: © Md Shamimur Rahman Shuvo, 2021
dc.description.note: Electronic Only.
dc.format: text/xml
dc.format.extent: x, 88 pages
dc.format.medium: electronic
dc.identifier.uri: https://unbscholar.lib.unb.ca/handle/1882/14166
dc.language.iso: en_CA
dc.publisher: University of New Brunswick
dc.rights: http://purl.org/coar/access_right/c_abf2
dc.subject.discipline: Computer Science
dc.title: Membership inference attacks on machine learning models: analysis and mitigation
dc.type: master thesis
thesis.degree.discipline: Computer Science
thesis.degree.fullname: Master of Computer Science
thesis.degree.grantor: University of New Brunswick
thesis.degree.level: masters
thesis.degree.name: M.C.S.

Files

Original bundle
Name: item.pdf
Size: 418.86 KB
Format: Adobe Portable Document Format
