Abstract: Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
TL;DR: A CNN and an LSTM trained end-to-end to generate markup-like code describing graphical user interface images.
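To make the CNN-plus-LSTM idea concrete, here is a minimal PyTorch sketch of an image-conditioned token decoder: a small convolutional encoder compresses the screenshot into a feature vector, and an LSTM predicts the next markup token at each step conditioned on that vector. The layer sizes, vocabulary size, and token handling are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ScreenshotToCode(nn.Module):
    """Sketch: CNN encoder for the GUI image + LSTM decoder over DSL tokens."""

    def __init__(self, vocab_size: int = 20, embed_dim: int = 64, hidden_dim: int = 256):
        super().__init__()
        # CNN encoder: compresses the screenshot into one feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 64, 1, 1)
            nn.Flatten(),              # -> (batch, 64)
            nn.Linear(64, hidden_dim), nn.ReLU(),
        )
        # LSTM decoder: emits markup tokens one step at a time.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W); tokens: (batch, seq_len) of previous token ids.
        img_feat = self.encoder(image)                         # (batch, hidden_dim)
        img_feat = img_feat.unsqueeze(1).expand(-1, tokens.size(1), -1)
        # Condition every decoding step on the image features.
        x = torch.cat([self.embed(tokens), img_feat], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)                                  # (batch, seq_len, vocab_size)

# Usage: next-token logits for one screenshot and a 12-token prefix.
model = ScreenshotToCode()
logits = model(torch.randn(1, 3, 256, 256), torch.zeros(1, 12, dtype=torch.long))
print(logits.shape)  # torch.Size([1, 12, 20])
```

At inference time such a model would be run autoregressively, feeding each sampled token back in until an end-of-sequence token is produced; training would minimize cross-entropy between the predicted logits and the ground-truth markup tokens.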
Keywords: computer vision, scene understanding, text processing
Community Implementations: [10 code implementations](https://www.catalyzex.com/paper/pix2code-generating-code-from-a-graphical/code)
Withdrawal: Confirmed
0 Replies