Abstract: We derive a set of causal deep neural networks whose architectures are a consequence of tensor (multilinear) factor
analysis. Forward causal questions are addressed with a neural network architecture composed of causal capsules and a tensor
transformer. The former estimate a set of latent variables representing the causal factors; the latter governs their interaction.
Causal capsules and tensor transformers may be implemented using shallow autoencoders, but for a scalable architecture we
employ block algebra and derive a deep neural network composed of a hierarchy of autoencoders. An interleaved kernel hierarchy pre-
processes the data, resulting in a hierarchy of kernel tensor factor models. Inverse causal questions are addressed with a neural
network that implements multilinear projection and estimates the causes of effects. As an alternative to aggressive bottleneck
dimension reduction or regularized regression that may camouflage an inherently underdetermined inverse problem, we prescribe
modeling different aspects of the mechanism of data formation with piecewise tensor models whose multilinear projections are well-
defined and produce multiple candidate solutions. Our forward and inverse neural network architectures are suitable for asynchronous
parallel computation.
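The tensor (multilinear) factor analysis underlying the forward model can be illustrated with a minimal HOSVD-style sketch: per-mode factor matrices are estimated by mode-n SVD, and the latent core is recovered by multilinear projection. This is an illustrative assumption of the general technique, not the paper's causal-capsule or tensor-transformer architecture; the function names and NumPy realization are choices of this sketch.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """n-mode product: contract matrix M (r x d_mode) with tensor T along `mode`."""
    moved = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, moved, axes=([1], [0])), 0, mode)

def multilinear_factor_analysis(D, ranks):
    """HOSVD-style factorization: one orthonormal factor matrix per mode,
    plus a core tensor obtained by multilinear projection of the data."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(D, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = D
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)  # project onto each mode's subspace
    return factors, core
```

At full per-mode ranks the factorization is exact: multiplying the core back through each factor matrix reconstructs the data tensor, which makes the multilinear projection well-defined in the sense the abstract requires.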