dc.description.abstract |
"Brain tumours are among the most common and deadliest cancers in both adults and children, and their incidence has risen over the years. Early diagnosis is pivotal to improving the chances of a cure, but it is often hindered by time-consuming diagnostic processes. Magnetic Resonance Imaging (MRI) screening is commonly used to diagnose brain tumours; however, interpreting the scans demands sustained concentration from healthcare professionals with specialist expertise over extensive periods, leaving room for misdiagnosis due to human error. Against this growing concern, and with advances in technology, deep learning algorithms have been applied to this task and have achieved high accuracy and robustness. However, a lack of reasoning behind their predictions makes these systems difficult to trust, restricting their deployment in medical workflows. A blind prediction from an AI system is insufficient for healthcare professionals to make life-dependent judgements about patients.
Through this research, a pipeline is proposed to automate the detection and classification of over 15 different types of brain tumour while producing both visual and textual explanations. The proposed architecture takes a transfer learning approach using the pre-trained EfficientNet-B7 model along with explainable AI (XAI) techniques, including LIME, Grad-CAM, Layer-CAM, SmoothGrad and Guided Backpropagation. First, the image is preprocessed with CLAHE (Contrast Limited Adaptive Histogram Equalization) and then passed to the classifier. Next, the predicted tumour type, together with the DL model, is fed into the XAI techniques to generate the explanations.
The model achieves a validation accuracy of 94.22% with a loss of 0.2165 and an AUC-ROC of 0.99862." |
en_US |
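The CLAHE preprocessing step mentioned in the abstract can be sketched as follows. This is an illustrative, simplified NumPy version (tile-wise clipped histogram equalization, without the bilinear interpolation between tile mappings that full implementations such as OpenCV's `cv2.createCLAHE` apply), not the authors' actual pipeline code; the `tiles`, `clip_limit` and `nbins` parameters are assumed values for illustration only.

```python
import numpy as np

def clahe(img, tiles=(8, 8), clip_limit=0.01, nbins=256):
    """Simplified CLAHE for a grayscale image with values in [0, 1].

    Each tile's histogram is clipped at `clip_limit` (as a fraction of
    tile pixels) and the excess is redistributed uniformly, limiting
    contrast amplification in near-homogeneous regions. Real CLAHE
    additionally interpolates between neighbouring tile mappings to
    avoid block artifacts; this sketch applies each mapping directly.
    """
    img = np.asarray(img, dtype=np.float64)
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles[0], w // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            # Tile bounds; the last row/column of tiles absorbs any remainder.
            r0 = i * th
            r1 = (i + 1) * th if i < tiles[0] - 1 else h
            c0 = j * tw
            c1 = (j + 1) * tw if j < tiles[1] - 1 else w
            tile = img[r0:r1, c0:c1]

            hist, _ = np.histogram(tile, bins=nbins, range=(0.0, 1.0))
            hist = hist.astype(np.float64) / tile.size
            # Clip the histogram and redistribute the excess uniformly.
            excess = np.maximum(hist - clip_limit, 0.0).sum()
            hist = np.minimum(hist, clip_limit) + excess / nbins

            # Map each pixel through the tile's cumulative distribution.
            cdf = np.cumsum(hist)
            idx = np.clip((tile * (nbins - 1)).astype(int), 0, nbins - 1)
            out[r0:r1, c0:c1] = cdf[idx]
    return out
```

Applied to a low-contrast MRI slice (intensities bunched in a narrow band), each tile's mapping stretches the local intensity range, which is why CLAHE is a common preprocessing choice before feeding scans to a CNN classifier.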