Track: long paper (up to 9 pages)
Keywords: Mathematical cognition, symbolic number, connectionist models, Distributed Alignment Search, neural networks
TL;DR: We apply Distributed Alignment Search (DAS) to RNNs, find alignments with symbolic variables, and show how symbolic programs correlate with model performance
Abstract: The discrete entities and explicit relations of symbolic systems make them more transparent and easier to communicate than neural systems, which are often continuous and opaque. It is therefore understandable that psychologists often pursue symbolic characterizations of human cognition, and the ability to find symbolic variables within neural systems would likewise be valuable for interpreting and controlling Artificial Neural Networks (ANNs). Symbolic interpretations can, however, oversimplify non-symbolic systems. Research on children's performance on tasks thought to depend on a concept of exact number demonstrates this: recent findings suggest a gradience in counting ability along children's learning trajectories. In this work, we take inspiration from these findings to explore the emergence of symbolic representations in ANNs. We demonstrate how to align recurrent neural representations with high-level, symbolic representations of number by causally intervening on the neural system. We find that consistent, discrete representations of number do emerge in ANNs, and we use this result to inform the discussion of how neural systems represent exact quantity. The network's symbol-like representations, however, evolve with learning and can continue to vary even after the network consistently solves the task, demonstrating the graded nature of symbolic variables in distributed systems.
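The causal intervention the abstract describes can be sketched as a DAS-style interchange intervention: rotate a hidden state into a learned basis, swap the coordinates hypothesized to encode the symbolic variable (e.g., the count) with those from a second "source" run, and rotate back. The sketch below is illustrative only, assuming a random orthogonal matrix in place of a learned rotation, a made-up hidden size `H`, subspace size `k`, and random vectors in place of real RNN states; it is not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden-state size (illustrative assumption)
k = 2  # dimensionality of the hypothesized "number" subspace (assumption)

# Stand-in for the rotation DAS would learn: a random orthogonal matrix.
Q, _ = np.linalg.qr(rng.normal(size=(H, H)))

def interchange_intervention(h_base, h_source, Q, k):
    """Return a patched hidden state: identical to the base run except
    that the first k coordinates in the rotated basis are taken from
    the source run (the hypothesized symbolic subspace)."""
    z_base = Q.T @ h_base        # rotate into the aligned basis
    z_source = Q.T @ h_source
    z_patched = z_base.copy()
    z_patched[:k] = z_source[:k]  # swap the aligned subspace
    return Q @ z_patched          # rotate back to hidden-state space

# Stand-ins for hidden states from two runs of the RNN.
h_base = rng.normal(size=H)
h_source = rng.normal(size=H)
h_patched = interchange_intervention(h_base, h_source, Q, k)

# The patched state matches the source inside the subspace...
assert np.allclose((Q.T @ h_patched)[:k], (Q.T @ h_source)[:k])
# ...and the base outside it.
assert np.allclose((Q.T @ h_patched)[k:], (Q.T @ h_base)[k:])
```

In the full method, the RNN would then be run forward from `h_patched`; if the network's output reflects the source run's count, the subspace is causally implicated in representing that symbolic variable.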
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 34