A modern compiler infrastructure for deep learning systems with adjoint code generation in a domain-specific IR

28 Oct 2017 (modified: 07 Dec 2017) · NIPS 2017 Workshop Autodiff Submission · Readers: Everyone
Abstract: Deep learning software demands reliability and performance. However, many existing deep learning frameworks are software libraries that act as an unsafe DSL embedded in Python paired with a computation graph interpreter, some with inefficient algorithmic differentiation via operator overloading. We present DLVM, the design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domain-specific optimizations, and a code generator targeting GPUs via LLVM. Designed as a modern compiler framework inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. We argue that the DLVM system enables the construction of modular, safe, and performant frameworks for deep learning.
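The abstract's central technique is reverse-mode AD by adjoint code generation, i.e., a source code transformation over the IR rather than an operator-overloading tape. The following is a minimal Python sketch of that idea over a toy straight-line IR; it is illustrative only. DLVM's actual IR, instruction set, and adjoint generator differ, and all names here (forward, ADJOINT, generate_adjoint) are hypothetical.

```python
# A minimal sketch of adjoint code generation: walk a straight-line
# program in reverse and emit gradient-accumulation statements as source.
# Illustrative only; not DLVM's actual IR or API.

import math

# Forward program: y = sin(x0 * x1) + x0, as SSA-like triples.
forward = [
    ("v0", "mul", ("x0", "x1")),   # v0 = x0 * x1
    ("v1", "sin", ("v0",)),        # v1 = sin(v0)
    ("y",  "add", ("v1", "x0")),   # y  = v1 + x0
]

# Adjoint rules: given d_<out>, each op contributes terms to d_<inputs>.
ADJOINT = {
    "mul": lambda out, a, b: [(a, f"d_{out} * {b}"), (b, f"d_{out} * {a}")],
    "add": lambda out, a, b: [(a, f"d_{out}"), (b, f"d_{out}")],
    "sin": lambda out, a: [(a, f"d_{out} * math.cos({a})")],
}

def generate_adjoint(prog, seed="y"):
    """Emit adjoint source lines by traversing the forward program in
    reverse, accumulating contributions into d_<var> variables."""
    lines = [f"d_{seed} = 1.0"]
    seen = {seed}
    for out, op, args in reversed(prog):
        for var, expr in ADJOINT[op](out, *args):
            if var in seen:
                lines.append(f"d_{var} += {expr}")  # accumulate fan-out
            else:
                lines.append(f"d_{var} = {expr}")
                seen.add(var)
    return "\n".join(lines)

# Run the forward code, then the generated adjoint code, in one namespace.
env = {"math": math, "x0": 0.5, "x1": 2.0}
for out, op, args in forward:
    template = {"mul": "{} * {}", "add": "{} + {}", "sin": "math.sin({})"}[op]
    exec(f"{out} = " + template.format(*args), env)
exec(generate_adjoint(forward), env)

# Analytically, dy/dx0 = x1 * cos(x0 * x1) + 1; the two values agree.
print(env["d_x0"], env["x1"] * math.cos(env["x0"] * env["x1"]) + 1)
```

Because the adjoint is emitted as ordinary code ahead of time, it can itself flow through the compiler's optimization and code generation pipeline, which is the advantage the abstract claims over interpreting a tape built by operator overloading at runtime.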
TL;DR: We present an optimizing compiler infrastructure with native AutoDiff support.
Keywords: deep learning, neural networks, domain-specific languages, reverse-mode automatic differentiation, reverse-mode algorithmic differentiation, AD, optimizing compiler, adjoint code generation, source code transformation, LLVM, DLVM