<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/2885">
<title>2025</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/2885</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/2949"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/2948"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/2947"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/2946"/>
</rdf:Seq>
</items>
<dc:date>2026-04-28T14:04:37Z</dc:date>
</channel>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/2949">
<title>Self-Supervised Learning for Automated Fracture Detection in Radiographic Images</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/2949</link>
<description>Self-Supervised Learning for Automated Fracture Detection in Radiographic Images
Perera, Lahiru
Bone fractures are a critical medical condition requiring early and accurate detection to ensure&#13;
timely treatment, yet conventional analysis relies on manual interpretation, which is time-&#13;
consuming and prone to error (Sharma et al., 2025). While deep learning models have been&#13;
applied to this problem, they often struggle with the small and complex datasets typical in&#13;
medical imaging, leading to unreliable results (Alwzwazy et al., 2025). The primary limitations&#13;
hindering their clinical adoption are a heavy dependence on large, expert-annotated datasets,&#13;
which are difficult to acquire, and a lack of model transparency, which erodes clinical trust&#13;
(Alwzwazy et al., 2025).&#13;
To address this gap, this research designed and developed a novel, end-to-end fracture detection&#13;
system built upon a Vision Transformer (ViT) architecture. This approach diverges from&#13;
traditional supervised methods by leveraging a domain-specific Self-Supervised Learning (SSL)&#13;
strategy, a technique that has shown significant potential to enhance clinical diagnostics (Wang&#13;
and Siddiqui, 2024). The core of the solution is a two-stage process. First, a standard ViT-&#13;
Base/16 encoder was pre-trained on a large corpus of unlabeled musculoskeletal radiographs&#13;
from the MURA dataset using a Masked Autoencoder (MAE) framework. This forces the&#13;
model to learn rich, high-level semantic features of radiographic anatomy without requiring&#13;
any human-provided labels. Subsequently, this pre-trained encoder was fine-tuned for fracture&#13;
classification using a smaller, labeled subset of the MURA dataset.&#13;
The developed SSL-ViT model was systematically evaluated on a hold-out validation set,&#13;
demonstrating the viability and effectiveness of the proposed approach. The system achieved&#13;
a validation accuracy of 86.90% and an Area Under the Curve (AUC-ROC) score of 0.8686,&#13;
indicating a strong capability to distinguish between fractured and non-fractured cases. Analysis&#13;
of the training dynamics confirmed that the SSL pre-training provided a robust foundation,&#13;
enabling the model to learn effectively from limited labeled data. These results validate that the&#13;
combination of domain-adaptive self-supervised learning with Vision Transformers presents a&#13;
promising pathway toward creating more data-efficient, accurate, and trustworthy AI tools for&#13;
clinical diagnostics.
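The masking step at the heart of the MAE pre-training stage described above can be sketched as follows; this is a minimal illustration assuming a 224x224 input tokenized into 16x16 patches, and the function name, mask ratio, and seed are illustrative, not the authors' implementation:

```python
# Minimal sketch of MAE-style random patch masking: a random subset of
# patch tokens is kept visible for the encoder, the rest are masked and
# must be reconstructed by the decoder during pre-training.
import numpy as np

def random_mask(num_patches, mask_ratio=0.75, seed=0):
    """Return sorted indices of visible and masked patch tokens."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_patches)
    num_visible = int(num_patches * (1 - mask_ratio))
    visible = np.sort(perm[:num_visible])
    masked = np.sort(perm[num_visible:])
    return visible, masked

# A 224x224 radiograph with 16x16 patches yields 196 patch tokens.
visible, masked = random_mask(196, mask_ratio=0.75)
print(len(visible), len(masked))  # 49 147
```

With a 75% mask ratio the encoder sees only 49 of 196 tokens, which is what lets the model learn anatomical structure from unlabeled radiographs at modest cost.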
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/2948">
<title>A Hybrid Approach to Fake News Detection Using Explainable AI  and Multimodal Content Analysis</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/2948</link>
<description>A Hybrid Approach to Fake News Detection Using Explainable AI  and Multimodal Content Analysis
Elpitiya Badalge, Tharindi
The rapid spread of fake news across digital platforms has emerged as a significant risk to &#13;
societal trust, community safety, and political stability. Misinformation campaigns often exploit &#13;
multiple media formats, such as text, images, and videos, making it increasingly difficult to &#13;
distinguish between trustworthy and misleading news content. Traditional approaches to fake &#13;
news detection focus primarily on textual analysis, leaving gaps in the detection of multimodal &#13;
misinformation.&#13;
The proposed system evaluates advanced multimodal models: CLIP (OpenAI), ViLT, BERT, &#13;
ResNet50, and VisualBERT. Text is processed using BERT, while ResNet50 and &#13;
SAFE handle image features. ViLT and VisualBERT model text-image relationships, and CLIP &#13;
aligns visual and textual semantics. After evaluating all models, the best-performing ones are &#13;
combined to improve accuracy and generalization. Explainability is ensured through SHAP or &#13;
LIME, helping users understand the reasoning behind each prediction. &#13;
This approach is demonstrated through a prototype, assessed using standard metrics like &#13;
accuracy, recall, precision, and F1-score. The initial model, without zero-shot learning, &#13;
achieved strong performance on the Twitter dataset with an accuracy of 97.61%, precision of &#13;
97.62%, recall of 97.61%, and F1-score of 97.61%. When evaluated with zero-shot learning &#13;
on the FakeNewsNet dataset, the enhanced model achieved 93.91% accuracy. The proposed &#13;
solution promises to be an effective tool in combating misinformation by providing a robust, &#13;
explainable system for fake news detection across diverse media formats. Future work includes &#13;
expanding the model’s capabilities to handle real-time data and multimedia content, further &#13;
improving the model's efficiency and adaptability in dynamic environments.
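The step of combining the best-performing models can be sketched as a simple late fusion of per-model probabilities; the model names, scores, equal weighting, and 0.5 threshold below are illustrative assumptions, not the evaluated configuration:

```python
# Hedged sketch of late-fusion ensembling: each model emits P(fake) for
# an article, and a weighted average of those probabilities yields the
# final prediction.
def ensemble_predict(probs, weights=None):
    """probs: dict of model name to P(fake); returns (score, label)."""
    if weights is None:
        weights = {name: 1.0 for name in probs}
    total = sum(weights.values())
    score = sum(weights[m] * p for m, p in probs.items()) / total
    label = "fake" if score >= 0.5 else "real"
    return score, label

score, label = ensemble_predict(
    {"BERT": 0.92, "CLIP": 0.81, "VisualBERT": 0.88})
print(round(score, 2), label)  # 0.87 fake
```

Per-example SHAP or LIME attributions would then be computed on the individual models feeding this fusion, so users can see which words or image regions drove each probability.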
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/2947">
<title>NimbusGuard: A Novel Intelligent Orchestration Framework for Proactive Recovery and Performance Optimization in Kubernetes</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/2947</link>
<description>NimbusGuard: A Novel Intelligent Orchestration Framework for Proactive Recovery and Performance Optimization in Kubernetes
Wanigasooriya, Chamath
Reactive, threshold-based autoscaling mechanisms in Kubernetes are inadequate for modern cloud-native applications, as they fail to efficiently manage dynamic workloads, leading to performance degradation and inefficient resource utilization. This inherent "scaling lag" creates a critical need for an automated, intelligent, and proactive management paradigm. This research addresses this challenge by introducing NimbusGuard, a novel, intelligent orchestration framework designed for proactive recovery and performance optimization in Kubernetes. The proposed solution overcomes the limitations of traditional autoscalers by synergistically integrating a Long Short-Term Memory (LSTM) network for predictive forecasting with a Deep Q-Network (DQN) agent for adaptive, multi-objective decision-making. A key contribution of this work is the practical implementation of this intelligence within a stateful LangGraph workflow, which includes a crucial MCP safety validation layer to ensure all scaling decisions are orchestrated and validated, preventing system instability. In empirical benchmarks against industry-standard autoscalers, NimbusGuard demonstrated vastly superior performance, reducing the average time to scale by 80% compared to the default Horizontal Pod Autoscaler (HPA) and by over 33% compared to KEDA. This research contributes a production-aware framework that successfully bridges the gap between theoretical AI models and their practical, safe deployment, offering a tangible solution that enhances the performance, efficiency, and resilience of modern cloud-native applications.
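The forecast-then-scale idea that distinguishes this approach from reactive thresholding can be sketched as follows; the linear extrapolation is a stand-in for the LSTM predictor, and the target utilization and replica formula are illustrative assumptions, not NimbusGuard's actual policy:

```python
# Illustrative sketch of proactive scaling: decide replica count from a
# forecast of future load, rather than reacting after utilization has
# already crossed a threshold (the source of "scaling lag").
def naive_forecast(history, horizon=3):
    """Linear extrapolation as a stand-in for the LSTM predictor."""
    slope = history[-1] - history[-2]
    return history[-1] + slope * horizon

def decide_replicas(current_replicas, forecast_util, target=0.6):
    """Scale so forecast per-replica utilization approaches the target."""
    return max(1, round(current_replicas * forecast_util / target))

history = [0.40, 0.50, 0.62]   # recent CPU-utilization samples
f = naive_forecast(history)     # roughly 0.62 + 0.12 * 3 = 0.98
print(decide_replicas(4, f))    # round(4 * 0.98 / 0.6) = 7
```

In the full framework the DQN agent, rather than a fixed formula, maps such forecasts to actions, and the safety validation layer vets each decision before it is applied to the cluster.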
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/2946">
<title>Enhancing Communication for the Hearing-Impaired: A Vision-Based System for Translating SSL with Emotional Expression</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/2946</link>
<description>Enhancing Communication for the Hearing-Impaired: A Vision-Based System for Translating SSL with Emotional Expression
Segar, Arosh
Communication barriers for the hearing-impaired community in Sri Lanka persist due to the&#13;
lack of accessible, real-time Sri Lankan Sign Language (SSL) translation systems that capture&#13;
emotional expressiveness. Existing systems mainly recognize hand gestures but overlook facial&#13;
expressions, resulting in translations that miss emotional nuance and naturalness, limiting their&#13;
effectiveness in daily and critical communication.&#13;
This project presents a multimodal, emotion-aware SSL translation system combining&#13;
Timesformer-based gesture recognition with MediaPipe facial and body landmark fusion. A&#13;
facial emotion detection module using DeepFace extracts dominant emotions, which are&#13;
converted into expressive speech via a Typecast API-powered emotional TTS. The backend is&#13;
built as FastAPI microservices deployed on cloud platforms, integrated with a Flutter mobile&#13;
app interface. The system employs deep learning, multimodal fusion, and user-centered design,&#13;
validated through extensive training and mixed-method evaluations.&#13;
The prototype achieved around 85% gesture recognition accuracy on SSL datasets, over 80%&#13;
accuracy in emotion detection, and generated natural, context-aware speech with latency&#13;
suitable for near real-time use. Balanced metrics like F1 score and precision-recall, alongside&#13;
expert and user feedback, demonstrate the system’s robustness and improved communication&#13;
clarity in both everyday and emergency scenarios.
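The fusion step described above, pairing a recognized sign with the dominant facial emotion before speech synthesis, can be sketched as follows; the gloss, emotion labels, and payload shape are illustrative, not the Typecast API's actual schema:

```python
# Minimal sketch of multimodal fusion for emotional TTS: the gesture
# recognizer yields a text gloss, the emotion module yields per-emotion
# scores, and the dominant emotion is attached to the speech request.
def build_tts_request(gloss, emotions):
    """Pick the dominant emotion and pair it with the spoken text."""
    dominant = max(emotions, key=emotions.get)
    return {"text": gloss, "emotion": dominant}

req = build_tts_request(
    "thank you", {"happy": 0.72, "neutral": 0.21, "sad": 0.07})
print(req)  # {'text': 'thank you', 'emotion': 'happy'}
```

In the deployed system the gloss would come from the Timesformer recognizer and the scores from DeepFace, with this payload forwarded to the emotional TTS service.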
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
