pix2code: Generating Code from a Graphical User Interface Screenshot

Anonymous

03 Nov 2017 (modified: 07 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Transforming a graphical user interface screenshot created by a designer into computer code is a typical task a developer performs in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e., iOS, Android, and web-based technologies).
TL;DR: CNN and LSTM to generate markup-like code describing graphical user interface images.
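Below is a minimal, illustrative sketch of the kind of CNN+LSTM setup the TL;DR describes: a convolutional encoder for the GUI screenshot combined with an LSTM over previously generated markup tokens, jointly predicting the next token. The layer sizes, the vocabulary size, the 256x256 input resolution, and the Keras API choice are assumptions made here for illustration, not the paper's exact architecture.

```python
# Illustrative sketch only (not the authors' implementation): a CNN image
# encoder feeding an LSTM token decoder for next-token prediction over a
# small GUI DSL vocabulary. All sizes below are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 48      # assumed size of the GUI markup vocabulary
MAX_LEN = 48         # assumed length of the token context window

# CNN encoder: turns the screenshot into a fixed-size feature vector.
image_in = layers.Input(shape=(256, 256, 3), name="screenshot")
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
image_feat = layers.Dense(256, activation="relu")(x)

# LSTM decoder: encodes the previously generated DSL tokens.
tokens_in = layers.Input(shape=(MAX_LEN,), name="context_tokens")
e = layers.Embedding(VOCAB_SIZE, 64)(tokens_in)
token_feat = layers.LSTM(256)(e)

# Combine visual and language features and predict the next token.
merged = layers.concatenate([image_feat, token_feat])
next_token = layers.Dense(VOCAB_SIZE, activation="softmax")(merged)

model = models.Model(inputs=[image_in, tokens_in], outputs=next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference time, such a model would be run autoregressively: starting from a start token, the predicted token is appended to the context and the model is queried again until an end token is produced, after which the generated DSL sequence can be compiled into platform-specific code.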
Keywords: computer vision, scene understanding, text processing
Community Implementations: [10 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1705.07962/code)
Withdrawal: Confirmed
