Digital Repository

LLM-based code quality improvement, automated refactoring, and code review suggestions.

dc.contributor.author Wickramasinghe, Kasun Sameera
dc.date.accessioned 2026-03-11T04:23:32Z
dc.date.available 2026-03-11T04:23:32Z
dc.date.issued 2025
dc.identifier.citation Wickramasinghe, Kasun Sameera (2025) LLM-based code quality improvement, automated refactoring, and code review suggestions. MSc Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 2022-2232
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2916
dc.description.abstract Problem: Large Language Models (LLMs) have revolutionized software engineering by automating tasks such as code quality improvement, refactoring, and code review suggestions. Despite these advancements, current tools face significant limitations in generalizability, safety, and evaluation. They are often tailored to specific programming languages (e.g., Java, Python) and lack robust mechanisms to ensure the safety and reliability of refactoring outputs (Pomian et al., 2024). Moreover, the absence of standardized evaluation metrics complicates their benchmarking, hindering their scalability and effectiveness in diverse, multi-language environments (Wadhwa et al., 2023; Liu et al., 2024).

Methodology: This research introduces a generalized LLM-based framework designed to overcome these challenges by enabling automated code quality improvement, refactoring, and code review suggestions across multiple programming languages. The framework integrates fine-tuned LLMs (e.g., GPT-4, StarCoder2) trained on diverse datasets such as Ref-Dataset and Java refactoring commits (Yu et al., 2024; Cordeiro et al., 2024). Safety mechanisms, such as RefactoringMirror, are incorporated to validate outputs and prevent unsafe changes (Liu et al., 2024). Standardized evaluation metrics for accuracy, scalability, and generalizability are utilized to benchmark performance (Finkman et al., 2024). The framework development adheres to Object-Oriented Analysis and Design (OOAD) principles and employs Agile methodologies for iterative refinement and adaptability. en_US
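The abstract's safety-validation idea (checking an LLM's refactoring output before accepting it) can be sketched minimally. The function name `validate_refactoring` and the specific checks below are illustrative assumptions, not the dissertation's or RefactoringMirror's actual implementation: here a proposed refactoring is rejected if it fails to parse or if it drops or renames a top-level definition, using only Python's standard `ast` module.

```python
import ast

def validate_refactoring(original_src: str, refactored_src: str) -> bool:
    """Illustrative safety gate (assumed, not the thesis's method):
    accept a refactoring only if the new source parses and preserves
    every top-level function/class name of the original."""
    try:
        new_tree = ast.parse(refactored_src)
    except SyntaxError:
        return False  # unsafe: the model produced invalid code

    old_tree = ast.parse(original_src)

    def top_level_defs(tree: ast.Module) -> set[str]:
        # names of top-level functions and classes (a proxy for the public API)
        return {node.name for node in tree.body
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))}

    # flag any refactoring that removes or renames a top-level definition
    return top_level_defs(old_tree) <= top_level_defs(new_tree)

before = "def add(a, b):\n    return a + b\n"
after_ok = "def add(a, b):\n    # simplified body\n    return a + b\n"
after_bad = "def plus(a, b):\n    return a + b\n"

print(validate_refactoring(before, after_ok))   # True
print(validate_refactoring(before, after_bad))  # False
```

A real validator in the spirit the abstract describes would go further (behavioral equivalence via the project's test suite, semantic diffing), but even this parse-and-API check blocks the most obviously unsafe model outputs.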
dc.language.iso en en_US
dc.subject Large Language Models en_US
dc.subject Automated Refactoring en_US
dc.subject Code Quality Improvement en_US
dc.title LLM-based code quality improvement, automated refactoring, and code review suggestions. en_US
dc.type Thesis en_US

