Abstract:
Problem: The growing demand for visually complex and feature-rich Android applications has
made UI development a time-intensive process. Developers often struggle to balance the
need for complex UI designs with the constraints of rapid delivery. Manual coding for user
interfaces is laborious, and even minor design adjustments require significant redevelopment
time. This project addresses the challenge by creating an automated system to generate Android
UI code directly from UI designs, aiming to reduce development time while maintaining high
accuracy in translating design components.
Methodology: The project employs a hybrid approach, utilizing machine learning techniques
to interpret UI screenshots and convert them into structured UI code for Android. The system
analyzes UI elements within screenshots, applies preprocessing methods to refine input data,
and then generates Android-compatible UI code based on recognized components and layout
patterns. This automated workflow is designed to streamline the development process, reducing
the time spent on repetitive coding tasks.
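The generation step described above can be illustrated with a minimal sketch. The component labels, data structure, and mapping below are hypothetical placeholders, not the project's actual implementation; they only show how recognized elements might be rendered as an Android XML layout:

```python
from dataclasses import dataclass

@dataclass
class DetectedElement:
    """A UI element recognized in a screenshot (illustrative structure)."""
    kind: str          # e.g. "Button", "TextView" (assumed label set)
    x: int
    y: int
    width: int
    height: int
    text: str = ""

# Hypothetical mapping from detected labels to Android widget tags.
WIDGET_MAP = {"Button": "Button", "TextView": "TextView", "Image": "ImageView"}

def elements_to_layout(elements):
    """Render detected elements as a simple vertical Android XML layout."""
    lines = [
        '<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"',
        '    android:layout_width="match_parent"',
        '    android:layout_height="match_parent"',
        '    android:orientation="vertical">',
    ]
    # Emit widgets in top-to-bottom, left-to-right screen order.
    for el in sorted(elements, key=lambda e: (e.y, e.x)):
        widget = WIDGET_MAP.get(el.kind, "View")
        attrs = [
            f'android:layout_width="{el.width}px"',
            f'android:layout_height="{el.height}px"',
        ]
        if el.text:
            attrs.append(f'android:text="{el.text}"')
        lines.append(f'    <{widget} ' + " ".join(attrs) + " />")
    lines.append("</LinearLayout>")
    return "\n".join(lines)
```

A real system would additionally infer layout containers and constraints; this sketch flattens everything into a single vertical LinearLayout for clarity.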
Results: The system successfully demonstrated the feasibility of automatically generating
native Android UI code from UI screenshots. By employing a hybrid architecture combining a
computer-vision (CV) model with a prompt-driven large language model (LLM), the solution
accurately identified core UI elements and converted them into UI layout code, achieving an
Intersection over Union (IoU) score of 0.73. The generated UI
layouts were structurally valid and visually aligned with the original UI design. In performance
evaluations, the system significantly reduced manual coding time, with expert and peer
reviews confirming its usability and efficiency. These results validate the system’s potential as a
practical tool for accelerating UI development in Android applications.
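For readers unfamiliar with the evaluation metric, Intersection over Union compares a predicted bounding box against a ground-truth box; the function below is a standard textbook sketch of the metric, not code taken from the project:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Boxes use the (x1, y1, x2, y2) corner convention (an assumed format).
    Returns a value in [0, 1]; 1.0 means a perfect overlap.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

Under this metric, the reported score of 0.73 indicates that detected element boxes overlapped substantially with the ground-truth design regions.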