Abstract:
With the emergence of Human-Computer Interaction (HCI), usability and User Experience (UX) have received extensive scrutiny over the past few decades. User Experience and User Interfaces (UI) go hand in hand to facilitate a smooth and intuitive encounter with software solutions. UI is considered one of the most critical aspects of the Software Engineering process because a good interface contributes to a positive user experience, which is ultimately a deciding factor in the success of a software solution. Despite the substantial importance of UI and UX, systems developers in the industry are still reluctant to tackle UI/UX and tend to consider the front-end the exclusive responsibility of UI/UX engineers. Recognizing this, researchers and practitioners have developed and introduced various tools and methodologies to automate the front-end development of applications, yet this practice is still in its infancy. Researchers have already attempted to create methodologies and software that generate designs and front-end code with minimal effort. Efforts such as SketchCode, pix2code, and Sketch2Code are prominent in this domain, but they do not support a wide array of design elements to choose from, nor do they offer functionality to generate code for inter-app navigation.
This research raises the bar by supporting more HTML elements and attributes than have been reported in the related literature, approaching the task with deep learning methods to train a convolutional neural network model combined with image-captioning and text-recognition techniques. While the research shows ways to generate front-end code with a range of functionalities, it must be noted that this is not an alternative to the deliberate and meticulous design carried out by an experienced designer. However, the proposed solution is well suited for prototyping a product or quickly producing a working implementation to test the feasibility of an idea.
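To make the described approach concrete, the sketch below illustrates one way such an image-captioning setup could be structured: a CNN encoder over a UI screenshot whose features condition an LSTM decoder that emits HTML/DSL tokens. The class name, layer sizes, and vocabulary size are illustrative assumptions for a minimal example and are not taken from the model described in this work.

```python
# Minimal, hypothetical sketch of a CNN-encoder / LSTM-decoder image-captioning model
# that maps UI screenshots to front-end (HTML/DSL) token sequences.
import torch
import torch.nn as nn

class UIToCodeModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        # CNN encoder: compresses a 3-channel screenshot into a single feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # LSTM decoder: predicts the next HTML/DSL token given the previous tokens
        # and the image features.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, screenshots: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        img_feat = self.encoder(screenshots)                  # (B, hidden_dim)
        tok_emb = self.embed(token_ids)                       # (B, T, embed_dim)
        img_feat = img_feat.unsqueeze(1).expand(-1, tok_emb.size(1), -1)
        out, _ = self.decoder(torch.cat([tok_emb, img_feat], dim=-1))
        return self.head(out)                                 # (B, T, vocab_size) logits

# Usage sketch: a batch of 4 screenshots and partial token sequences.
model = UIToCodeModel(vocab_size=100)
logits = model(torch.randn(4, 3, 224, 224), torch.randint(0, 100, (4, 20)))
print(logits.shape)  # torch.Size([4, 20, 100])
```

In a full pipeline of this kind, the predicted token sequence would be detokenized into HTML markup, while a separate text-recognition step could supply the literal label text for the generated elements.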