FLAN: feature-wise latent additive neural models for biological applications

Published: 01 Jan 2023 · Last Modified: 06 Aug 2024 · Briefings in Bioinformatics, 2023 · CC BY-SA 4.0
Abstract: Interpretability has become a necessary property of machine learning models deployed in critical scenarios, e.g. the legal system or healthcare. In these settings, algorithmic decisions may have (potentially negative) long-lasting effects on the end-users they concern. While deep learning models achieve impressive results, they often function as black boxes. Inspired by linear models, we propose a novel class of structurally constrained deep neural networks, which we call FLAN (Feature-wise Latent Additive Networks). Crucially, FLANs process each input feature separately, computing for each of them a representation in a common latent space. These feature-wise latent representations are then simply summed, and the aggregated representation is used for the prediction. The feature-wise representations allow a user to estimate the effect of each individual feature independently of the others, similarly to the way linear models are interpreted.
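
The following is a minimal sketch of the architecture described in the abstract, assuming a PyTorch implementation; the class, method, and parameter names (`FLANSketch`, `feature_contributions`, `latent_dim`, etc.) are illustrative and not taken from the authors' code. Each input feature is mapped by its own small encoder into a shared latent space, the per-feature latent vectors are summed, and a prediction head is applied to the aggregate.

```python
import torch
import torch.nn as nn


class FLANSketch(nn.Module):
    """Illustrative feature-wise latent additive network (not the authors' implementation).

    Each scalar input feature is encoded by its own small MLP into a shared
    latent space; the per-feature latent vectors are summed and a linear
    head produces the prediction.
    """

    def __init__(self, num_features: int, latent_dim: int = 16,
                 hidden_dim: int = 32, out_dim: int = 1):
        super().__init__()
        # One independent encoder per input feature (feature-wise processing).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(1, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, latent_dim),
            )
            for _ in range(num_features)
        ])
        # Prediction head applied to the summed latent representation.
        self.head = nn.Linear(latent_dim, out_dim)

    def feature_contributions(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) -> (batch, num_features, latent_dim)
        return torch.stack(
            [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)], dim=1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sum the feature-wise latent representations, then predict.
        z = self.feature_contributions(x).sum(dim=1)
        return self.head(z)


if __name__ == "__main__":
    model = FLANSketch(num_features=5)
    x = torch.randn(8, 5)
    y = model(x)                                   # (8, 1) predictions
    contribs = model.feature_contributions(x)      # (8, 5, 16) per-feature latents
```

Because the latent contributions enter additively and the head in this sketch is linear, the effect of a single feature on the output can be inspected by passing its latent vector alone through the head, which is analogous to reading off a coefficient in a linear model.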