Enhancing Multi-Class Brain Tumor Detection and Segmentation with Vision Transformers

dc.contributor.author Dolage, Isora
dc.date.accessioned 2026-03-11T08:12:59Z
dc.date.available 2026-03-11T08:12:59Z
dc.date.issued 2025
dc.identifier.citation Dolage, Isora (2025) Enhancing Multi-Class Brain Tumor Detection and Segmentation with Vision Transformers. MSc Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20232181
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2940
dc.description.abstract Brain tumor detection from Magnetic Resonance Imaging (MRI) scans is a crucial component of clinical diagnosis, treatment planning, and prognosis in neuro-oncology. Accurate identification of brain tumors at an early stage significantly improves patient outcomes; however, manual interpretation of MRI scans by radiologists is both time-consuming and prone to human error. With the increasing volume and complexity of medical imaging data, there is a growing need for automated, reliable, and efficient diagnostic support systems. This research proposes the development of an enhanced Vision Transformer (ViT)-based framework for multi-class brain tumor detection and segmentation using MRI images. Unlike many existing approaches that focus primarily on binary tumor classification (tumor vs. non-tumor), this study aims to accurately distinguish between multiple clinically relevant tumor categories, including glioma, meningioma, pituitary tumors, and no-tumor cases. By addressing this limitation, the proposed model seeks to provide more comprehensive and clinically meaningful diagnostic insights. A key contribution of this research is the integration of advanced preprocessing techniques, such as rotational invariance, Contrast-Enhanced Pseudo-Color Mapping, and Contextual Contrast Augmentation (CCA), to improve feature representation and model performance. Collectively, these techniques enhance the discriminative power and generalization capability of the proposed ViT model. Experimental results demonstrate that the proposed four-class ViT model achieves an accuracy exceeding 86%, indicating its effectiveness in both tumor detection and segmentation tasks. Overall, this research aspires to advance precise, robust, and clinically applicable deep learning methods that can assist radiologists in diagnostic workflows and contribute to improved patient care in oncology and neurology. en_US
dc.language.iso en en_US
dc.subject Vision Transformers en_US
dc.subject Brain Tumor Detection en_US
dc.subject MRI Segmentation en_US
dc.title Enhancing Multi-Class Brain Tumor Detection and Segmentation with Vision Transformers en_US
dc.type Thesis en_US
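The preprocessing pipeline named in the abstract (rotational-invariance augmentation, Contrast-Enhanced Pseudo-Color Mapping, and contextual contrast enhancement ahead of a four-class ViT) can be sketched as below. This is a minimal illustrative sketch only, not the dissertation's implementation: the percentile contrast stretch and the piecewise-linear RGB ramp are hypothetical stand-ins for the author's CCA and pseudo-color mapping, whose exact formulations are not given in the record.

```python
import numpy as np

# Four classes from the abstract: glioma, meningioma, pituitary tumor, no tumor.
CLASSES = ["glioma", "meningioma", "pituitary", "no_tumor"]

def contrast_stretch(img):
    """Percentile-based contrast enhancement (hypothetical stand-in for CCA):
    rescale intensities so the 2nd-98th percentile range spans [0, 1]."""
    lo, hi = np.percentile(img, (2, 98))
    return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)

def pseudo_color(gray):
    """Map a normalized grayscale slice to a 3-channel pseudo-color image via a
    simple piecewise-linear RGB ramp (a stand-in for the paper's
    Contrast-Enhanced Pseudo-Color Mapping), giving the ViT 3-channel input."""
    r = np.clip(2.0 * gray - 0.5, 0.0, 1.0)
    g = 1.0 - np.abs(2.0 * gray - 1.0)
    b = np.clip(1.5 - 2.0 * gray, 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def rotations(img):
    """Rotational-invariance augmentation: the four 90-degree rotations."""
    return [np.rot90(img, k) for k in range(4)]

# Example on a synthetic 224x224 slice (the usual ViT input resolution).
slice_ = np.random.default_rng(0).random((224, 224))
enhanced = contrast_stretch(slice_)
rgb = pseudo_color(enhanced)                       # (224, 224, 3), values in [0, 1]
augmented = [pseudo_color(r) for r in rotations(enhanced)]  # 4 augmented views
```

Each pseudo-colored view would then be patch-embedded and fed to the ViT classifier head over the four classes above.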

