Learning to Infer Graphics Programs from Hand-Drawn Images
Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, Joshua B. Tenenbaum
Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission · readers: everyone
Abstract: We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image; these drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We then learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together, these results are a step towards agents that induce useful, human-readable programs from perceptual input.
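To make the described pipeline concrete, here is a minimal, hypothetical sketch (not the authors' code) of the synthesis step: given a trace of drawing primitives such as the CNN might propose from an image, search for a compact looping program whose execution reproduces the trace. The `Circle` type, the search bounds, and the brute-force enumeration are illustrative assumptions standing in for the constraint-based synthesizer the abstract alludes to.

```python
# Hypothetical illustration of trace -> program synthesis.
from dataclasses import dataclass

@dataclass(frozen=True)
class Circle:
    x: int
    y: int

def execute_program(n: int, dx: int) -> set:
    """A toy 'graphics program': a loop emitting n circles spaced dx apart."""
    return {Circle(i * dx, 0) for i in range(n)}

def synthesize(trace: set) -> tuple:
    """Find the smallest (n, dx) loop whose execution matches the trace.

    Brute-force enumeration here stands in for the program synthesis
    techniques used in the paper.
    """
    for n in range(1, 10):
        for dx in range(1, 10):
            if execute_program(n, dx) == trace:
                return n, dx
    raise ValueError("no loop program found")

# A trace as the network might propose it from a drawing of three circles.
trace = {Circle(0, 0), Circle(3, 0), Circle(6, 0)}
print(synthesize(trace))  # -> (3, 3), i.e. 'for i in range(3): circle(3*i, 0)'
```

Because the output is a program rather than a pixel prediction, extrapolating the drawing amounts to re-executing it with a larger loop bound (e.g. `execute_program(5, 3)`), which is the kind of extrapolation the abstract claims.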
TL;DR: Learn to convert a hand-drawn sketch into a high-level program
Keywords: program induction, HCI, deep learning