<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>2025</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3017</link>
<description/>
<pubDate>Tue, 05 May 2026 08:05:19 GMT</pubDate>
<dc:date>2026-05-05T08:05:19Z</dc:date>
<item>
<title>Bone fracture detection system using CNN.</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3277</link>
<description>Bone fracture detection system using CNN.
Gunasekara, Ganidu
This project focuses on improving the accuracy and efficiency of bone fracture detection in X-ray images using deep learning techniques. Traditional diagnostic methods often struggle with subtle or complex fractures, especially when used by less experienced medical practitioners, leading to delayed or incorrect assessments. To address these challenges, the proposed system employs two specialized deep learning models—a classification model and an object detection model—both built using Convolutional Neural Network (CNN) architectures.&#13;
&#13;
The classification model is responsible for identifying whether a fracture is present in an X-ray image, while the object detection model localizes the fracture by generating bounding boxes around the affected region. These predicted bounding boxes are then used to estimate the size of the fracture, which further enables the system to provide an estimated healing time, offering additional support to medical professionals. Various model optimization techniques, including batch normalization and dropout, were applied to enhance performance and mitigate issues such as overfitting.&#13;
&#13;
The system demonstrates strong results, with the classification model achieving an accuracy of 97%, indicating high reliability in distinguishing fractured from non-fractured images. The object detection model produced a mean Average Precision (mAP) of 73% and an average Intersection over Union (IoU) score of 60%, reflecting solid localization performance. These outcomes collectively suggest that the developed system is effective in automating fracture detection and has significant potential to support faster, more accurate diagnoses. By reducing diagnostic errors and assisting practitioners—particularly trainees—the system contributes to improved clinical decision-making in orthopedic care.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://dlib.iit.ac.lk/xmlui/handle/123456789/3277</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>BlurFix: Efficient Image Blind Motion Deblurring Using Generative Adversarial Network</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3276</link>
<description>BlurFix: Efficient Image Blind Motion Deblurring Using Generative Adversarial Network
Vithanage, Vishmi
Many real-world images are degraded by motion blur caused by camera shake or fast-moving objects. This poses a major challenge for computer vision applications such as object detection and scene understanding, which require sharp inputs. Blind motion deblurring aims to restore sharp images from blurry ones without prior knowledge of the blur kernel, yet effective deblurring remains difficult because fine details must be preserved while artifacts are avoided. Although recent advances in deep learning, particularly Generative Adversarial Networks (GANs), have shown promise for this task, they incur a high computational cost. &#13;
This work presents an efficient blind motion deblurring framework based on a Mobile Vision Transformer (MobileViT)-enhanced Wasserstein GAN with gradient penalty (WGAN-GP), pairing a spectrally normalized discriminator for stable training with a U-Net-style generator that uses MobileViT blocks for multi-scale feature fusion and global-local representation learning. The proposed architecture combines several key components: a hybrid CNN-Transformer generator with inverted residual blocks and skip connections to preserve high-frequency details; a perceptual loss formulation that combines VGG-16 features with adversarial and L1 (mean absolute error) losses; and lightweight patch-based discrimination with spectral normalization for enhanced training stability. &#13;
Experiments on the GOPRO dataset show that our model maintains a lower computational complexity than comparable GAN-based deblurring techniques while achieving competitive PSNR/SSIM scores.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://dlib.iit.ac.lk/xmlui/handle/123456789/3276</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>A Novel Multimodal Deep Learning Approach for Polycystic Ovary Syndrome (PCOS) Diagnosis with Explainable AI</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3275</link>
<description>A Novel Multimodal Deep Learning Approach for Polycystic Ovary Syndrome (PCOS) Diagnosis with Explainable AI
Ruwanpathirana, Dewmini Nirasha
Problem: PCOS is a common endocrine disorder among females of reproductive age, characterized by hormonal imbalance, irregular menstrual cycles, and infertility. Traditional diagnostic methods rely on clinical examinations and ultrasound imaging, which are often inconsistent and can lead to misdiagnosis and delayed treatment. This study therefore presents a novel multimodal system for PCOS diagnosis that integrates clinical data and ultrasound scan images to enhance diagnostic accuracy and reliability. Additionally, Explainable AI (XAI) techniques are integrated to provide transparent decision-making, enabling users to understand and trust the model’s predictions. &#13;
&#13;
Methodology: The proposed methodology involves a novel multimodal diagnostic model based on deep learning techniques that uses both ultrasound images and clinical data to detect PCOS. The approach uses CNNs to capture intricate features from ultrasound images and applies advanced feature engineering to the clinical data to optimize predictive performance. A fusion model then integrates insights from both data types to diagnose PCOS accurately.  &#13;
&#13;
Initial Results: The proposed multimodal system achieved strong diagnostic performance, with the ultrasound image model reaching 99.8% accuracy and the clinical data model reaching 97% accuracy. By integrating both modalities, the combined approach delivered 98.4% accuracy, outperforming existing multimodal approaches. This performance confirms the effectiveness of the multimodal approach in using both clinical data and ultrasound scan images for reliable diagnosis.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://dlib.iit.ac.lk/xmlui/handle/123456789/3275</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item>
<title>NORA AI - AI Powered Shopping Assistant for Clothing Stores</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3274</link>
<description>NORA AI - AI Powered Shopping Assistant for Clothing Stores
Rifky, Aabidha
The retail clothing sector continues to face significant challenges in delivering seamless and personalized customer service, particularly during peak shopping periods when human support is limited and response times increase. Existing chatbot solutions are largely based on rule-driven logic or traditional machine learning and Natural Language Understanding (NLU) techniques, which restrict their ability to interpret complex customer queries and generate dynamic, context-aware responses. As a result, these systems often fail to provide accurate fashion guidance, personalized product recommendations, and smooth end-to-end support from product discovery to purchase, ultimately reducing customer engagement and potential sales.&#13;
This research proposes and implements an AI-powered shopping assistant tailored specifically for clothing retail, using a hybrid intelligent architecture. The system combines NLU for intent detection and basic query handling, Retrieval-Augmented Generation (RAG) to accurately answer questions about store policies, FAQs, and products using a curated knowledge base, and a Large Language Model (LLM) enhanced through domain-specific prompt engineering to manage complex, conversational, and advisory interactions. In addition, a recommendation engine is integrated using both content-based and collaborative filtering techniques to generate personalized fashion suggestions based on user preferences, behavior, and shopping history.&#13;
The assistant is designed to support natural language conversations, real-time fashion advice, and guided shopping journeys, enabling users to interact using simple everyday language. System performance was evaluated using response accuracy, relevance, user satisfaction, and engagement metrics. Experimental results show that the proposed hybrid assistant outperforms traditional chatbot systems by delivering more accurate, context-aware, and personalized responses. The study demonstrates that combining RAG, LLMs, and intelligent recommendation methods can significantly enhance digital retail experiences and provides a scalable, user-friendly solution for next-generation clothing store assistants.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://dlib.iit.ac.lk/xmlui/handle/123456789/3274</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
