pix2code: Generating Code from a Graphical User Interface Screenshot

14 Dec 2017 (modified: 14 Oct 2024) · ICLR 2018 Conference · Withdrawn Submission
Abstract: Transforming a graphical user interface screenshot created by a designer into computer code is a typical task a developer performs to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
TL;DR: CNN and LSTM to generate markup-like code describing graphical user interface images.
Keywords: computer vision, scene understanding, text processing
Community Implementations: 10 code implementations on CatalyzeX (https://www.catalyzex.com/paper/pix2code-generating-code-from-a-graphical/code)
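
The TL;DR describes the approach as a CNN paired with an LSTM that emits markup-like code for a GUI screenshot. Below is a minimal sketch of that encoder-decoder idea in PyTorch; the class name `ScreenshotToCode`, layer sizes, vocabulary size, and image resolution are illustrative assumptions, not the paper's actual architecture or DSL.

```python
# Sketch (not the authors' exact pix2code model): a CNN embeds the screenshot,
# an LSTM consumes the previously generated DSL tokens together with the
# visual context, and a linear layer predicts the next token.
import torch
import torch.nn as nn

class ScreenshotToCode(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=64, hidden_dim=256):
        super().__init__()
        # CNN encoder: screenshot -> fixed-size visual feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # LSTM decoder: previous DSL tokens + visual context -> next token
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, tokens):
        visual = self.encoder(image)                      # (B, hidden_dim)
        visual = visual.unsqueeze(1).expand(-1, tokens.size(1), -1)
        x = torch.cat([self.embed(tokens), visual], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                # (B, T, vocab_size)

# Usage: next-token logits for a batch of screenshots and token prefixes.
model = ScreenshotToCode()
logits = model(torch.randn(2, 3, 256, 256), torch.randint(0, 100, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 100])
```

At inference time, such a model would be run autoregressively: start from a start-of-sequence token, repeatedly feed the generated prefix back in, and stop at an end-of-sequence token before compiling the emitted DSL into platform-specific code.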