Digital Repository

Vit Based Adversarial Example Detection With Explanation in Medical Imaging


dc.contributor.author Shamry, Mohamed
dc.date.accessioned 2025-06-19T07:36:03Z
dc.date.available 2025-06-19T07:36:03Z
dc.date.issued 2024
dc.identifier.citation Shamry, Mohamed (2024) Vit Based Adversarial Example Detection With Explanation in Medical Imaging. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 2019812
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2691
dc.description.abstract The use of artificial intelligence (AI) models in medical imaging is becoming more prevalent, raising questions about these models' security against adversarial attacks. Medical imaging is crucial to diagnosis and therapy planning, yet medical image analysis systems are at great risk of losing their dependability to adversarial attacks, which alter input data to mislead models. Prior research has shown that both the model used and the adversarial defensive techniques applied can influence a medical image diagnosis model's accuracy. Furthermore, current adversarial detection techniques lack interpretability, which makes them challenging to use in practical situations. This thesis presents a strategy based on a vision transformer architecture that reduces the influence of adversarial attacks on medical image processing while achieving strong detection accuracy. Our technique leverages the capabilities of vision transformers to improve classification accuracy and incorporates interpretability through visual explanations, making model decisions easier to understand. This study helps to secure AI models for medical image analysis and opens the door to robust systems that can withstand adversarial attacks. Experimental assessment of our vision transformer-based model produced promising results in identifying adversarial examples crafted by adversarial attacks: the model proved excellent at spotting manipulated medical images, with a 99.64% detection rate on test data. The use of visual explanations further strengthens the model's dependability in practical clinical settings by improving interpretability and offering insights into its decision-making process. en_US
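The thesis itself does not publish code in this record, but the kind of adversarial example the abstract refers to is typically generated with the fast gradient sign method (FGSM). The following minimal sketch, using a toy logistic-regression "diagnosis" model in NumPy rather than a vision transformer, illustrates how a small signed-gradient perturbation of the input raises the model's loss; all variable names, the epsilon value, and the linear model are illustrative assumptions, not material from the dissertation.

```python
import numpy as np

# Illustrative FGSM sketch on a toy linear model (not the thesis's ViT).
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # fixed model weights (assumed, for demo)
x = rng.normal(size=64)          # a clean "image", flattened to a vector
y = 1.0                          # true label in {-1, +1}

def loss(x):
    # Logistic loss of the linear model w.x for label y
    return np.log1p(np.exp(-y * (w @ x)))

def grad_x(x):
    # Gradient of the logistic loss with respect to the input x
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))   # sigmoid(-y * w.x)
    return -y * s * w

eps = 0.05                        # perturbation budget (assumed)
x_adv = x + eps * np.sign(grad_x(x))   # FGSM: one signed-gradient step

print(f"clean loss: {loss(x):.4f}  adversarial loss: {loss(x_adv):.4f}")
```

A detector of the sort the abstract describes would then be trained to classify inputs like `x` versus `x_adv`; the visual-explanation component would highlight which image regions drive that decision.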
dc.language.iso en en_US
dc.subject Security of Intelligent Systems en_US
dc.subject Adversarial Machine Learning en_US
dc.title Vit Based Adversarial Example Detection With Explanation in Medical Imaging en_US
dc.type Thesis en_US

