Agglomerator++: Interpretable part-whole hierarchies and latent space representations in neural networks
Abstract

Highlights
• We introduce a novel model, called Agglomerator++, mimicking the functioning of the cortical columns in the human brain.
• Our solution provides interpretability of relationships in data, namely the hierarchical organization of the feature space.
• We introduce positional encoding and input masking during pre-training for self-supervised reconstruction (see the sketch after this list).
• This neural representation is more efficient and closely resembles human lexical similarities.
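The highlight on positional encoding and input masking can be illustrated with a minimal sketch, not the authors' implementation: patch embeddings receive learned positional encodings, a random subset of patches is replaced by a mask token, and a reconstruction loss is computed on the masked locations. All names and hyperparameters here (PatchMasker, mask_ratio, embed_dim, num_patches) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of positional encoding + input masking for self-supervised
# reconstruction pre-training. Illustrative only; names and values are assumed.
import torch
import torch.nn as nn


class PatchMasker(nn.Module):
    def __init__(self, num_patches: int, embed_dim: int, mask_ratio: float = 0.5):
        super().__init__()
        # One learned positional vector per patch location.
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        # Learned token that replaces the content of masked patches.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.mask_ratio = mask_ratio
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        nn.init.trunc_normal_(self.mask_token, std=0.02)

    def forward(self, patches: torch.Tensor):
        # patches: (batch, num_patches, embed_dim)
        b, n, d = patches.shape
        num_masked = int(self.mask_ratio * n)
        # Random per-sample selection of patches to mask.
        scores = torch.rand(b, n, device=patches.device)
        mask = torch.zeros(b, n, dtype=torch.bool, device=patches.device)
        mask.scatter_(1, scores.topk(num_masked, dim=1).indices, True)
        # Replace the content of masked patches with the mask token,
        # then add positional encodings so locations stay identifiable.
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(b, n, d), patches)
        return x + self.pos_embed, mask


if __name__ == "__main__":
    masker = PatchMasker(num_patches=49, embed_dim=128, mask_ratio=0.5)
    patches = torch.randn(2, 49, 128)
    x, mask = masker(patches)
    # A decoder would then be trained to reconstruct the original patches at the
    # masked locations, e.g. loss = ((pred - patches)[mask]).pow(2).mean()
    print(x.shape, mask.sum(dim=1))
```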