Abstract: Restricted Boltzmann Machines (RBMs) are bipartite graphical models with binary latent and observed variables that have shown promise for representation learning. However, their lack of interpretable parameters limits their utility in domains requiring explainability, such as educational assessment. Despite extensive RBM research, non-negativity constraints on weights, which are essential for monotonicity in educational contexts, remain largely unexplored. To address this, we propose a method to translate RBMs into a specialized class of bipartite Bayesian networks, which we term BN2A networks, characterized by strict 2-layer separation (hidden and observed variables), Noisy-AND conditional probability tables, and directly interpretable parameters for educational models. Our work establishes a mathematical transformation from RBM weights to BN2A's interpretable parameters (leak and penalty probabilities), a theoretical analysis showing that BN2A's constrained connectivity is a subset of RBM architectures, and empirical evidence that the transformation preserves model fidelity under realistic conditions. By bridging these paradigms, our method leverages the representational power of RBMs while achieving the interpretability of BN2A networks, opening new possibilities for adaptive learning systems and diagnostic tools.
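To make the two parameterizations concrete, the following minimal sketch contrasts an RBM's sigmoid conditional for an observed variable with a standard Noisy-AND conditional probability table using leak and penalty parameters. This is an illustrative rendering of the general model families named in the abstract, not the paper's actual transformation; the function names and the specific Noisy-AND parameterization (leak times a penalty factor per absent skill) are assumptions.

```python
import math

def rbm_cond_prob(hidden, bias, weights):
    """RBM conditional P(v=1 | h) = sigmoid(bias + sum_j w_j * h_j).

    With non-negative weights, activating any hidden unit can only
    increase P(v=1), giving the monotonicity mentioned in the abstract.
    """
    activation = bias + sum(w * h for w, h in zip(weights, hidden))
    return 1.0 / (1.0 + math.exp(-activation))

def noisy_and_prob(hidden, leak, penalties):
    """Noisy-AND CPT: P(v=1 | h) = leak * prod over absent parents of q_j.

    `leak` is the probability of a positive outcome (e.g. a correct
    answer) when all parent skills are present; each absent skill j
    multiplies in its penalty probability penalties[j].  Parameter
    names are illustrative, not the paper's notation.
    """
    prob = leak
    for h_j, q_j in zip(hidden, penalties):
        if h_j == 0:  # skill j absent: apply its multiplicative penalty
            prob *= q_j
    return prob
```

For example, with two skills, a leak of 0.9, and penalties (0.3, 0.5), a student holding both skills answers correctly with probability 0.9, while a student missing the first skill does so with probability 0.9 * 0.3 = 0.27; each parameter has a direct educational reading, unlike raw RBM weights.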
External IDs: dblp:conf/ecsqaru/PerezVM25