<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<channel rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/1">
<title>Dissertations &amp; Thesis</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/1</link>
<description/>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3164"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3163"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3162"/>
<rdf:li rdf:resource="http://dlib.iit.ac.lk/xmlui/handle/123456789/3161"/>
</rdf:Seq>
</items>
<dc:date>2026-04-15T03:18:07Z</dc:date>
</channel>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3164">
<title>Smart Greenhouse</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3164</link>
<description>Smart Greenhouse
Rajasekera, Dulaksha
Greenhouse farming is a crucial aspect of modern agriculture, enabling crop growth under regulated conditions. Traditional greenhouse management, however, often depends on manual labor, which can be inefficient and lead to less-than-ideal growing conditions. This study presents an IoT-enabled greenhouse automation system that continuously monitors and controls key environmental factors such as temperature, humidity, soil moisture, and light intensity. The system employs cost-effective sensors and ESP32 microcontrollers to transmit real-time data to a cloud-based platform for analysis and remote monitoring.
This research adopts a mixed-method approach, integrating user feedback with sensor-based quantitative data. The system undergoes iterative development using a prototyping methodology, refining features based on user responses. Additionally, predictive analytics powered by machine learning offer actionable insights, allowing farmers to anticipate and optimize greenhouse conditions for enhanced crop yield.
Initial test results demonstrate high accuracy in tracking critical environmental metrics. A confusion matrix and an AUC-ROC score above 0.85 validate the system's effectiveness in predicting optimal growth conditions. These promising results suggest that the system can improve crop quality, minimize operational costs, and promote sustainability in agriculture through precision farming.
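As a plain-Python illustration of the evaluation metrics mentioned above (the labels, probabilities, and 0.5 threshold below are hypothetical toy data, not the study's results), AUC-ROC and confusion-matrix counts can be sketched as:

```python
def roc_auc(y_true, y_prob):
    """AUC: probability that a random positive scores above a random negative."""
    pos = [p for p, t in zip(y_prob, y_true) if t == 1]
    neg = [p for p, t in zip(y_prob, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Hypothetical data: 1 = optimal growth conditions, 0 = not optimal.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_prob = [0.9, 0.2, 0.8, 0.7, 0.4, 0.95, 0.1, 0.3]
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

print(roc_auc(y_true, y_prob))           # 1.0 on this toy data
print(confusion_counts(y_true, y_pred))  # (4, 0, 0, 4)
```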
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3163">
<title>SwinVox: A Hybrid CNN-SwinT Cross-View Attention Architecture for Voxel-Based 3D Reconstruction from Single and Multi-View Images</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3163</link>
<description>SwinVox: A Hybrid CNN-SwinT Cross-View Attention Architecture for Voxel-Based 3D Reconstruction from Single and Multi-View Images
Samaranayake, Sandeepa
The growing demand for precise 3D reconstruction in computer vision and augmented reality has driven significant advances in model architectures capable of recovering voxel-based 3D representations from single or multiple views. Despite this progress, current approaches have yet to achieve a decisive breakthrough, owing to their inability to capture both local and global dependencies while preserving the spatial relationships between views, which results in incomplete or hollowed-out regions. Addressing these limitations, this research proposes a hybrid model, SwinVox, that combines the strengths of Convolutional Neural Networks (CNNs) and Swin Transformers (SwinT). The primary objective is to advance voxel-based 3D reconstruction by integrating CNNs for localised feature extraction and SwinT for capturing multi-scale global features, including cross-view spatial relationships.

To achieve this objective, a novel CNN-SwinT hybrid architecture was designed, in which CNN layers extract initial spatial features while SwinT captures long-range global dependencies and provides multi-scale feature representations. In addition, SwinT performs cross-view attention to capture spatial relationships across different viewpoints, recovering a more accurate 3D representation. The SwinVox architecture is optimised to capture complex topological structures and spatial relationships across the input views, which are then translated into high-fidelity 3D voxel representations.

Extensive experiments on benchmark datasets compared SwinVox against standard 3D voxel reconstruction models; it achieved a promising IoU of ≈0.68 (68%) and an F-Score of ≈0.79, suggesting that SwinVox outperforms many standard models by a large margin.
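As a rough sketch of how the reported metrics are defined (the voxel grids below are toy examples, and this occupancy-based F-score is a simplification of the distance-based F-score often used in 3D reconstruction benchmarks):

```python
def voxel_iou(pred, gt):
    """IoU between two sets of occupied voxel coordinates."""
    inter = len(pred.intersection(gt))
    union = len(pred.union(gt))
    return inter / union

def voxel_fscore(pred, gt):
    """Occupancy-based F-score: harmonic mean of precision and recall
    over occupied voxels (a simplification for illustration)."""
    tp = len(pred.intersection(gt))
    precision = tp / len(pred)
    recall = tp / len(gt)
    return 2 * precision * recall / (precision + recall)

# Tiny hypothetical occupancy grids as sets of (x, y, z) voxel indices.
pred = {(0, 0, 0), (0, 0, 1), (1, 0, 0)}
gt = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
print(voxel_iou(pred, gt))     # 0.5
print(voxel_fscore(pred, gt))  # about 0.667
```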
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3162">
<title>Identification of Oily Cinnamon Leaves Using Image Processing (CinnaOil)</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3162</link>
<description>Identification of Oily Cinnamon Leaves Using Image Processing (CinnaOil)
Agampodi, Isuru Udula
The oiliness of cinnamon leaves, a crucial factor in assessing their suitability for essential oil extraction, is still evaluated by the Sri Lankan cinnamon industry using conventional techniques, despite technological advances. This outdated method results in inconsistent evaluations, lower oil output, and unpredictable product quality. These challenges not only affect present production but also limit the cinnamon oil industry's long-term growth and efficiency. To overcome these limitations, this study presents an AI-powered method that precisely determines the oiliness of cinnamon leaves using machine learning and image processing techniques.
The CinnaOil system uses the VisionTransformImageProcessor model to extract relevant visual information from images of cinnamon leaves. To improve performance and address dataset imbalance, advanced training techniques such as early stopping, model checkpointing, class weighting, and learning rate scheduling were applied. Background removal and other preprocessing techniques further improved the model's accuracy and generalization. The model's strong stability across a range of environmental conditions and image modifications ensured reliable predictions in real-world applications.
CinnaOil's performance was assessed using confusion matrices and classification reports, and it showed excellent accuracy in distinguishing between oily and non-oily leaves. The model achieved high precision and recall with a test accuracy of 98.1%, demonstrating its efficacy in selecting the best leaves for oil extraction. Two separate components were developed and evaluated: a CNN-based model and an AI-enhanced model. The AI-enhanced model, at 98.1% accuracy, outperformed the CNN model's 93.6% and was therefore used in the final application.
This scalable, user-friendly application aims to assist cinnamon producers by offering a reliable, automated way to evaluate quality, ultimately supporting the modernization of the cinnamon sector.
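To illustrate the class-weighting step used against dataset imbalance (the label counts below are hypothetical, and this mirrors the common inverse-frequency "balanced" heuristic rather than the authors' exact scheme):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: rare classes get proportionally
    larger weights in the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# Hypothetical imbalanced dataset: far more non-oily leaves than oily ones.
labels = ["oily"] * 20 + ["non_oily"] * 80
print(balanced_class_weights(labels))  # oily: 2.5, non_oily: 0.625
```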
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
<item rdf:about="http://dlib.iit.ac.lk/xmlui/handle/123456789/3161">
<title>Adaptive User Interface Library - Flutter</title>
<link>http://dlib.iit.ac.lk/xmlui/handle/123456789/3161</link>
<description>Adaptive User Interface Library - Flutter
Harischandra, Chamidu
This work investigates the limitations of current adaptive user interfaces (AUIs) in Flutter-based mobile applications. Most applications rely on static layouts that cannot dynamically generate personalized user experiences based on individual behaviors and preferences, and no existing framework offers runtime rearrangement of application components. This leads to poor user experience and satisfaction. The project recognizes the need for mobile applications to shift from one-size-fits-all designs to more personalized approaches that consider user context to further enhance usability and engagement.
This study addresses the problem with a methodology that embeds incremental learning algorithms, which capture and analyze user interaction data to drive comprehensive user profiling. This, in turn, equips the library to make runtime adaptive changes to user interfaces as user behaviors evolve. The project also develops a reusable adaptive widgets library for Flutter that enables developers to easily implement dynamic interfaces responsive to user contexts. This library bridges user experience and technical implementation, making personalization of mobile applications practical and setting a new standard for designing adaptive interfaces in the Flutter ecosystem.
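A minimal sketch of the incremental profiling idea, in Python for illustration only (the actual library targets Dart/Flutter; the widget names and decay scheme here are assumptions, not the project's API):

```python
from collections import defaultdict

class UsageProfile:
    """Toy incremental user profile: exponentially weighted tap counts
    drive the runtime ordering of adaptive widgets."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.scores = defaultdict(float)

    def record_tap(self, widget_id):
        # Decay all existing scores, then boost the tapped widget,
        # so recent interactions outweigh old ones.
        for k in self.scores:
            self.scores[k] *= self.decay
        self.scores[widget_id] += 1.0

    def ranked_widgets(self):
        # Order widgets by learned relevance for runtime layout.
        return sorted(self.scores, key=self.scores.get, reverse=True)

profile = UsageProfile()
for tap in ["search", "cart", "search", "profile", "search"]:
    profile.record_tap(tap)
print(profile.ranked_widgets())  # "search" ranks first
```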
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
