Metaphors we learn by

Published: 11 Nov 2022, Last Modified: 08 Oct 2024 · Arxiv · CC BY 4.0
Abstract: Gradient-based learning using error back-propagation (“backprop”) is a well-known contributor to much of the recent progress in AI. A less obvious, but arguably equally important, ingredient is parameter sharing – most well known in the context of convolutional networks. In this essay we relate parameter sharing (“weight sharing”) to analogy making and the school of thought of cognitive metaphor. We discuss how recurrent and auto-regressive models can be thought of as extending analogy making from static features to dynamic skills and procedures. We also discuss corollaries of this perspective, for example, how it can challenge the currently entrenched dichotomy between connectionist and “classic” rule-based views of computation.
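As a concrete illustration of the parameter sharing the abstract refers to (a minimal sketch, not taken from the essay itself), the NumPy snippet below applies one shared 3-weight kernel at every position of a toy signal, in contrast to a dense layer that would learn a separate weight per input–output pair; the helper name `conv1d_valid` is hypothetical.

```python
import numpy as np

# Sketch of parameter sharing in a 1-D convolution: the *same* kernel weights
# are reused at every input position, unlike a dense layer, which learns a
# separate weight for every (input, output) pair.

rng = np.random.default_rng(0)
x = rng.normal(size=16)       # toy input signal of length 16
kernel = rng.normal(size=3)   # 3 shared weights, reused at every position

def conv1d_valid(signal, k):
    """'Valid' 1-D convolution (cross-correlation): slide the shared kernel."""
    out_len = len(signal) - len(k) + 1
    return np.array([signal[i:i + len(k)] @ k for i in range(out_len)])

y = conv1d_valid(x, kernel)

# Parameter counts: the convolution uses 3 weights regardless of input length,
# while an equivalent dense map from 16 inputs to 14 outputs needs 16 * 14.
print(y.shape)                            # (14,)
print("shared params:", kernel.size)      # 3
print("dense params:", x.size * y.size)   # 224
```

The point of the sketch is only that the same small set of weights is applied across many positions, which is the "analogy making" the essay builds on.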