| dc.description.abstract |
This project addresses the challenge of translating hand-drawn user interface (UI) sketches into digital, high-fidelity UI designs, an essential task for accelerating early-stage design processes. Traditional methods of digitizing hand-drawn sketches are labour-intensive and time-consuming, presenting an obstacle for designers who seek rapid prototyping. Automating the detection of UI components within hand-drawn sketches can significantly reduce turnaround time and improve accuracy, streamlining the design-to-development workflow.
To tackle this, the author proposed a Figma plugin powered by deep learning to detect UI components in hand-drawn sketches. As a first step, a dataset of hand-drawn UI components covering a range of element types (buttons, text fields, icons, images, etc.) was gathered and annotated. A YOLOv11 model was then trained on these annotated images to detect and classify the components, and its performance was evaluated with metrics such as precision, recall, and F1 score.
Initial results demonstrate the model’s high accuracy and usability: it achieves an F1 score of 0.90 at a confidence threshold of 0.546 and an average precision of 94.7% across all classes at an IoU threshold of 0.5. Confusion matrix analysis showed that the model distinguishes well between visually similar elements, and the precision-recall and F1 curves confirmed its robustness across varied hand-drawn styles. |
en_US |
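The reported metrics can be made concrete with a short sketch. This is illustrative, not the project’s evaluation code: the box format, function names, and the sample precision/recall values are assumptions. A predicted box counts as a true positive when its IoU with a same-class ground-truth box meets the threshold (0.5 for the reported average precision), and F1 is the harmonic mean of precision and recall:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two partially overlapping boxes: intersection 25, union 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ≈ 0.143, below the 0.5 threshold

# Hypothetical precision/recall at some confidence threshold.
print(f1_score(0.94, 0.86))
```

Sweeping the confidence threshold and taking the maximum of this F1 value is how a best operating point such as the reported 0.546 is typically chosen.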