Efficient Similarity-Preserving Unsupervised Learning using Modular Sparse Distributed Codes and Novelty-Contingent Noise

Anonymous

09 Oct 2020 (modified: 05 May 2023) | Submitted to SVRHM@NeurIPS | Readers: Everyone
Keywords: Modular Sparse Distributed Codes, Similarity-Preserving Coding, Canonical Cortical Circuit
TL;DR: A new modular sparse distributed code, combined with novelty-contingent noise, yields an efficient similarity-preserving learning method
Abstract: There is increasing realization in neuroscience that information is represented in the brain, e.g., in neocortex and hippocampus, in the form of sparse distributed codes (SDCs), a kind of "cell assembly". Two essential questions are: a) how are such codes formed on the basis of single trials, as is necessary for episodic memory; and b) how is similarity preserved during learning, i.e., how do more similar inputs get mapped to more similar SDCs? I describe a novel Modular Sparse Distributed Code (MSDC) that provides simple, neurally plausible answers to both questions. An MSDC coding field (CF) consists of $Q$ winner-take-all (WTA) competitive modules (CMs), each comprising $K$ binary units (analogs of principal cells). The modular nature of the CF makes possible a single-trial, unsupervised learning algorithm that approximately preserves similarity and, crucially, runs in fixed time, i.e., the number of steps needed to store an item remains constant as the number of stored items grows. Further, once items are stored as MSDCs in superposition, such that their intersection structure reflects input similarity, both fixed-time best-match retrieval and fixed-time belief update (updating of the probabilities of all stored items) also become possible. The algorithm's core principle is simply to add noise, proportional to the novelty of the input, into the process of choosing a code, i.e., choosing a winner in each CM. This causes the expected intersection of the code for an input, $X$, with the code of each previously stored input, $Y$, to be proportional to the similarity of $X$ and $Y$. Results demonstrating these capabilities for spatial patterns are given in the appendix.
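To make the core principle concrete, here is a minimal Python sketch of novelty-contingent code selection, not the paper's actual algorithm. Only $Q$, $K$, and the general noise-proportional-to-novelty idea come from the abstract; the softmax stand-in for WTA competition, the familiarity measure, and the names choose_code, familiarity, and BETA are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

Q, K = 8, 4       # Q WTA competitive modules (CMs), each with K binary units
BETA = 10.0       # softmax sharpness for a perfectly familiar input (assumed)

def familiarity(u):
    """Global familiarity G in [0, 1] (assumed measure): mean of the max
    normalized input summation across CMs. G near 1 means the input closely
    matches a stored item; G near 0 means it is novel."""
    return u.max(axis=1).mean()

def choose_code(u):
    """Choose one winner per CM, with choice noise proportional to novelty.

    u: (Q, K) array of normalized input summations in [0, 1].
    For familiar inputs (G near 1) the draw approaches hard WTA, tending to
    reactivate the stored code; for novel inputs (G near 0) winners are drawn
    nearly uniformly, yielding a new code with low expected overlap."""
    G = familiarity(u)
    code = np.zeros((Q, K), dtype=int)
    for q in range(Q):
        p = np.exp(BETA * G * u[q])   # distribution flattens as novelty grows
        p /= p.sum()
        code[q, rng.choice(K, p=p)] = 1
    return code

u = rng.random((Q, K))               # hypothetical input summation field
print("G =", round(familiarity(u), 2))
print(choose_code(u))
```

Under this reading, the amount of choice noise scales inversely with familiarity, so more similar inputs yield codes with larger expected intersections, which is the similarity-preservation property the abstract describes.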