<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3017">
<title>2025</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3017</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3278"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3277"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3276"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3275"/>
</rdf:Seq>
</items>
<dc:date>2026-05-05T08:07:04Z</dc:date>
</channel>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3278">
<title>Early Skin Cancer Detection System Using Deep Learning</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3278</link>
<description>Early Skin Cancer Detection System Using Deep Learning
Jayaweera Samaranayake, Ramindu
Skin cancer is one of the most prevalent cancers worldwide, and early detection is crucial to improving patient outcomes. Traditional diagnostic methods, such as visual examination and biopsy, can be subjective, invasive, and inaccessible in developing countries. This paper presents a deep learning web application for early skin cancer detection. The model employs advanced convolutional neural networks (CNNs), including VGG16, ResNet18, and ResNet50, to classify skin lesions into seven classes: actinic keratoses, basal cell carcinoma, benign keratosis-like lesions, dermatofibroma, melanoma, melanocytic nevi, and vascular lesions. The project is oriented toward accuracy, usability, and clinical relevance. The models were trained on a large dataset of dermatoscopic images, with data augmentation applied to mitigate class imbalance and improve generalizability. Accuracy, precision, recall, and F1-score were chosen as the key metrics for evaluating the models. Explainable AI features, such as confidence scores, provide clinicians with transparent predictions, improving trust and usability. A user-friendly web interface, developed with React.js and Flask, enables seamless integration into clinical workflows. The application allows physicians to upload images, receive real-time classifications, and leave feedback for continuous model improvement. Validation testing confirmed system reliability, with fast response times and adherence to data privacy standards such as HIPAA and GDPR.
This research advances AI-assisted dermatology by closing gaps in multi-class skin cancer classification, increasing accessibility, and encouraging transparency. Mobile deployment, electronic health record (EHR) integration, and broader dataset coverage of less-studied skin types are avenues for future enhancement. By facilitating early diagnosis and reducing healthcare costs, this system has the potential to improve patient outcomes worldwide.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3277">
<title>Bone fracture detection system using CNN.</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3277</link>
<description>Bone fracture detection system using CNN.
Gunasekara, Ganidu
This project focuses on improving the accuracy and efficiency of bone fracture detection in X-ray images using deep learning techniques. Traditional diagnostic methods often struggle with subtle or complex fractures, especially when used by less experienced medical practitioners, leading to delayed or incorrect assessments. To address these challenges, the proposed system employs two specialized deep learning models—a classification model and an object detection model—both built using Convolutional Neural Network (CNN) architectures.&#13;
&#13;
The classification model is responsible for identifying whether a fracture is present in an X-ray image, while the object detection model localizes the fracture by generating bounding boxes around the affected region. These predicted bounding boxes are then used to estimate the size of the fracture, which further enables the system to provide an estimated healing time, offering additional support to medical professionals. Various model optimization techniques, including batch normalization and dropout, were applied to enhance performance and mitigate issues such as overfitting.&#13;
&#13;
The system demonstrates strong results, with the classification model achieving an accuracy of 97%, indicating high reliability in distinguishing fractured from non-fractured images. The object detection model produced a mean Average Precision (mAP) of 73% and an average Intersection over Union (IoU) score of 60%, reflecting solid localization performance. These outcomes collectively suggest that the developed system is effective in automating fracture detection and has significant potential to support faster, more accurate diagnoses. By reducing diagnostic errors and assisting practitioners—particularly trainees—the system contributes to improved clinical decision-making in orthopedic care.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3276">
<title>BlurFix: Efficient Image Blind Motion Deblurring Using Generative Adversarial Network</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3276</link>
<description>BlurFix: Efficient Image Blind Motion Deblurring Using Generative Adversarial Network
Vithanage, Vishmi
Many real-world images are degraded by motion blur caused by camera shake or fast-moving objects. This poses a significant challenge for computer vision applications such as object detection and scene understanding, which require sharp images. Blind motion deblurring aims to restore sharp images from blurry ones without any prior knowledge of the blur kernel. Effective deblurring remains difficult because fine detail must be retained while artifacts are avoided. Although recent advances in deep learning, particularly Generative Adversarial Networks (GANs), have shown promise for this task, they incur a high computational cost.
This work presents an efficient blind motion deblurring framework based on a Mobile Vision Transformer (MobileViT) enhanced Wasserstein GAN with gradient penalty (WGAN-GP), pairing a spectrally normalised discriminator for stable training with a U-Net style generator whose MobileViT blocks provide multi-scale feature fusion and global-local representation learning. The proposed architecture combines several key developments: a hybrid CNN-Transformer generator using inverted residual blocks and skip connections to preserve high-frequency details; a perceptual loss formulation incorporating VGG-16 features alongside adversarial and L1 (mean absolute error) losses; and lightweight patch-based discrimination with spectral normalisation for enhanced training stability.
Experiments on the GOPRO dataset show that the model maintains lower computational complexity than comparable GAN-based deblurring techniques while achieving competitive PSNR/SSIM scores.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3275">
<title>A Novel Multimodal Deep Learning Approach for Polycystic Ovary Syndrome (PCOS) Diagnosis with Explainable AI</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3275</link>
<description>A Novel Multimodal Deep Learning Approach for Polycystic Ovary Syndrome (PCOS) Diagnosis with Explainable AI
Ruwanpathirana, Dewmini Nirasha
Problem: PCOS is a common endocrine disorder in reproductive-age women, characterized by hormonal imbalance, irregular menstrual cycles, and infertility. Traditional diagnostic methods, including clinical examination and ultrasound imaging, are often inconsistent, leading to misdiagnosis and delayed treatment. This study therefore presents a novel multimodal system for PCOS diagnosis that integrates clinical data and ultrasound scan images to enhance diagnostic accuracy and reliability. Additionally, Explainable AI (XAI) techniques are integrated to provide transparent decision-making, enabling users to understand and trust the model’s predictions. &#13;
&#13;
Methodology: The proposed methodology is a novel multimodal diagnostic model based on deep learning that uses both ultrasound images and clinical data to detect PCOS. The approach applies CNNs to capture intricate features from ultrasound images and advanced feature engineering to the clinical data to optimize predictive performance. A fusion model integrates insights from both data types to diagnose PCOS accurately.  &#13;
&#13;
Initial Results: The proposed multimodal system achieved exceptional diagnostic performance, with the ultrasound image model reaching 99.8% accuracy and the clinical data model 97% accuracy. By integrating both modalities, the combined approach delivered 98.4% accuracy, significantly outperforming unimodal approaches. This strong performance confirms the effectiveness of the multimodal approach in utilizing both clinical data and ultrasound scan images for reliable diagnosis.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
