TL;DR: We show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the network's output for specified inputs.
Abstract: The output of a neural network depends on its parameters in a highly nonlinear way, and it is widely assumed that a network's parameters cannot be identified from its outputs. Here, we show that in many cases it is possible to reconstruct the architecture, weights, and biases of a deep ReLU network given the ability to query the network. ReLU networks are piecewise linear, and the boundaries between pieces correspond to inputs for which one of the ReLUs switches between inactive and active states. Thus, first-layer ReLUs can be identified (up to sign and scaling) based on the orientation of their associated hyperplanes. Later-layer ReLU boundaries bend when they cross earlier-layer boundaries, and the extent of bending reveals the weights between them. Our algorithm uses this to identify the units in the network and the weights connecting them (up to isomorphism). The fact that considerable parts of deep networks can be identified from their outputs has implications for security, neuroscience, and our understanding of neural networks.
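The abstract describes the mechanism at a high level; the following is a minimal, hypothetical Python/NumPy sketch (not the released code at the OSF link below) of the first step it mentions: locating a ReLU boundary by bisection on the network's local linear map, then reading off the hyperplane's normal, up to sign and scaling, from the gradient jump across the boundary. The toy network and all names (`f`, `grad`, `boundary_point`, `hyperplane_normal`) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: recover a first-layer ReLU hyperplane direction from black-box queries.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy 2-layer ReLU network, accessible only through f(x).
W1 = rng.standard_normal((4, 3)); b1 = rng.standard_normal(4)
W2 = rng.standard_normal((1, 4)); b2 = rng.standard_normal(1)

def f(x):
    """Black-box query: scalar output of the ReLU network at input x."""
    return (W2 @ np.maximum(W1 @ x + b1, 0.0) + b2).item()

def grad(x, eps=1e-5):
    """Finite-difference gradient of f at x; constant within one linear region."""
    return np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])

def boundary_point(x0, x1, tol=1e-9):
    """Bisect along the segment [x0, x1] for a point where the local linear map
    changes, i.e. where some ReLU switches between inactive and active.
    Assumes the two endpoints lie in different linear regions."""
    g0 = grad(x0)
    assert not np.allclose(grad(x1), g0, atol=1e-4), "endpoints share a linear region"
    for _ in range(60):
        xm = 0.5 * (x0 + x1)
        if np.allclose(grad(xm), g0, atol=1e-4):
            x0 = xm
        else:
            x1 = xm
        if np.linalg.norm(x1 - x0) < tol:
            break
    return 0.5 * (x0 + x1)

def hyperplane_normal(x0, x1, step=1e-3):
    """Estimate the boundary's normal direction as the gradient jump across it.
    For a first-layer unit this is proportional to its weight row (up to sign/scale)."""
    d = (x1 - x0) / np.linalg.norm(x1 - x0)
    xb = boundary_point(x0, x1)
    return grad(xb + step * d) - grad(xb - step * d)

# Usage: cross one boundary along a random segment and compare the recovered
# normal with the true first-layer weight rows via cosine similarity.
a, b = rng.standard_normal(3), rng.standard_normal(3)
n = hyperplane_normal(a, b)
cos = np.abs(n @ W1.T) / (np.linalg.norm(n) * np.linalg.norm(W1, axis=1))
print(cos.max())  # close to 1 when a first-layer boundary was crossed
```

In this sketch the gradient jump across a first-layer boundary equals the corresponding output weight times that unit's input weight row, which is why the direction (but not the sign or scale) of the row is recoverable; identifying later-layer units from how their boundaries bend is not shown here.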
Code: https://osf.io/rf4jt/?view_only=e367ddfd3ad84a44b7abeddd27624a88
Keywords: deep neural network, ReLU, piecewise linear function, linear region, activation region, weights, parameters, architecture
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1910.00744/code) (CatalyzeX)