dc.description.abstract |
"The rapid development of web applications demands efficient user interface (UI) design
processes to reduce time-to-market and improve overall productivity. However, the traditional workflow often requires developers to wait for UI designers to complete and deliver the designs, leading to delays and suboptimal collaboration. To address this issue, this research presents a novel DL-based approach that automatically converts UI sketches into functional React components, allowing developers to initiate development without waiting for finalized designs.
The proposed solution comprises a three-stage process. First, a convolutional neural network (CNN) encoder
extracts feature vectors from hand-drawn UI sketches. Specifically, a pre-trained ResNet-34
model from PyTorch's torchvision models library is utilized, with its last fully connected layer replaced by a new linear layer with embed_size output dimensions; the encoder CNN class also adds a batch normalization layer and initializes the weights of the new linear layer. Second, a Long Short-Term Memory (LSTM) model serves as a language model, generating the context for a subsequent LSTM responsible for Domain-Specific Language (DSL) token prediction.
The performance of the proposed method was evaluated using a ResNet-34 encoder and a GRU decoder with 256 hidden units, an embed_size of 50, and three layers. The approach proved effective, achieving a loss of 0.061 and a BLEU score of 0.974. These findings indicate that the proposed DL-based method can accurately convert UI sketches into React components, paving the way for more efficient UI development processes and enhanced collaboration between designers and developers." |
en_US |
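
The following is a minimal PyTorch sketch of the encoder described in the abstract: a pre-trained ResNet-34 whose final fully connected layer is replaced by a new linear layer with embed_size output dimensions, followed by batch normalization and initialization of the new layer's weights. The class name, the choice of initializer, and the frozen backbone are illustrative assumptions, not details taken from the work itself.

import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    """Pre-trained ResNet-34 whose final fully connected layer is replaced
    by a new linear layer with embed_size output dimensions."""

    def __init__(self, embed_size):
        super().__init__()
        resnet = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
        # Keep every ResNet-34 layer except the original classification head.
        self.resnet = nn.Sequential(*list(resnet.children())[:-1])
        # New linear layer projecting the 512-d ResNet features to embed_size.
        self.linear = nn.Linear(resnet.fc.in_features, embed_size)
        # Batch normalization over the embedding, as mentioned in the abstract.
        self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)
        self.init_weights()

    def init_weights(self):
        # Initialize only the new linear layer; the backbone stays pre-trained.
        nn.init.normal_(self.linear.weight, std=0.02)
        nn.init.zeros_(self.linear.bias)

    def forward(self, images):
        # Assumption: the backbone is kept frozen during feature extraction.
        with torch.no_grad():
            features = self.resnet(images)
        features = features.reshape(features.size(0), -1)
        return self.bn(self.linear(features))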
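
A similarly hedged sketch of a decoder matching the evaluated configuration (GRU cells, 256 hidden units, embed_size of 50, three layers). The class name, the vocabulary size, and the single-GRU arrangement are assumptions for illustration; the abstract's two-LSTM language-model/DSL-prediction structure is not reproduced here.

import torch
import torch.nn as nn

class DecoderRNN(nn.Module):
    """GRU decoder predicting DSL tokens from encoded sketch features,
    using the hyperparameters reported in the abstract."""

    def __init__(self, vocab_size, embed_size=50, hidden_size=256, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)  # vocab_size is an assumption
        self.gru = nn.GRU(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, tokens):
        # Treat the image feature vector as the first step of the input
        # sequence, then predict a DSL token distribution at every position.
        embeddings = self.embed(tokens)                        # (B, T, embed_size)
        inputs = torch.cat((features.unsqueeze(1), embeddings), dim=1)
        hidden_states, _ = self.gru(inputs)
        return self.linear(hidden_states)                      # (B, T + 1, vocab_size)

In a setup like this, the predicted token distributions would typically be compared against the reference DSL sequence with a cross-entropy loss during training, and a BLEU score such as the one reported in the abstract would be computed between generated and reference token sequences.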