An overview on Restricted Boltzmann Machines

Published: 01 Jan 2018, Last Modified: 13 Apr 2025 · Neurocomputing 2018 · CC BY-SA 4.0
Abstract: The Restricted Boltzmann Machine (RBM) has attracted wide interest in the machine learning field over the past decade. This review reports recent developments in theoretical research on and applications of the RBM. We first give an overview of the general RBM from a theoretical perspective, covering stochastic approximation methods, stochastic gradient methods, and methods for preventing overfitting. We then focus on RBM variants that further improve the learning ability of the RBM in general or application-specific settings. The RBM has recently been extended to representation learning, document modeling, multi-label learning, weakly supervised learning, and many other tasks. The RBM and its variants provide powerful tools for representing dependencies in data, and they can serve as basic building blocks for deep networks. Apart from the Deep Belief Network (DBN) and the Deep Boltzmann Machine (DBM), the RBM can also be combined with the Convolutional Neural Network (CNN) to create deep networks. This review provides a comprehensive view of these advances in the RBM together with its future perspectives.
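To make the training methods mentioned above concrete, the following is a minimal sketch of a binary RBM trained with one-step contrastive divergence (CD-1), a representative stochastic approximation method of the kind the review surveys. All class and variable names, hyperparameters, and the toy data here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary RBM with CD-1 updates (illustrative sketch)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        # Small random weights; zero biases for visible (b) and hidden (c) units.
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)
        self.c = np.zeros(n_hidden)
        self.lr = lr

    def cd1_update(self, v0):
        # Positive phase: hidden activation probabilities given the data.
        h0_prob = sigmoid(v0 @ self.W + self.c)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again.
        v1_prob = sigmoid(h0 @ self.W.T + self.b)
        h1_prob = sigmoid(v1_prob @ self.W + self.c)
        # Stochastic gradient step on the CD-1 approximation of the
        # log-likelihood gradient (data term minus reconstruction term).
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        self.b += self.lr * (v0 - v1_prob).mean(axis=0)
        self.c += self.lr * (h0_prob - h1_prob).mean(axis=0)
        return float(np.mean((v0 - v1_prob) ** 2))  # reconstruction error

# Toy usage: learn two repeated binary patterns.
data = np.tile([[1, 1, 0, 0, 1, 0],
                [0, 0, 1, 1, 0, 1]], (50, 1)).astype(float)
rbm = RBM(n_visible=6, n_hidden=3)
errs = [rbm.cd1_update(data) for _ in range(200)]
```

After a few hundred updates the reconstruction error should decrease, reflecting that the model is capturing the dependency structure in the data; deeper models such as the DBN stack such RBMs layer by layer.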