dc.description.abstract |
Recent literature shows that attacks such as Membership Inference Attacks and Model Inversion Attacks can exploit ML models to retrieve user-sensitive information that was used during training. Due to this privacy threat, the influence of such sensitive data on a model must be removable; the field of Machine Unlearning emerged to comply with the 'right to be forgotten' introduced by worldwide regulations such as the GDPR. The DoxMU project focuses on researching and introducing a novel approach for unlearning a given machine learning model. The introduced unlearning approach targets CNN models and performs sub-class (instance)-level unlearning by incorporating the concept of Model Shifting, which updates the model parameters so that the unlearned model's overall performance becomes as close as possible to that of a retrained model. The proposed algorithm was tested on binary and multi-class classification CNN models, and the performance after unlearning was compared with that of the corresponding retrained models, which were trained with the respective forget sets excluded. The evaluation shows promising results, as the performances of the unlearned and retrained models were almost identical. |
en_US |