Keywords: MORTAL COMPUTATION, GLOM, test time training, neural fields, implicit representation, distillation
TL;DR: APM introduces asynchronous patch processing for test-time training instead of parallel perception. APM leverages GLOM's islands of agreement. We present scientific evidence towards validating GLOM's insight that the input percept is really a field.
Abstract: In this work, we propose the Asynchronous Perception Machine (APM), a computationally efficient architecture for test-time training (TTT). APM can process patches of an image one at a time, in any order, asynchronously, and still encode semantic awareness in the net. We demonstrate APM's ability to recognize out-of-distribution images without dataset-specific pre-training, augmentation or any pretext task. APM offers competitive performance over existing TTT approaches. To perform TTT, APM distills the test sample's representation just once. APM possesses a unique property: it can learn from this single representation alone and then starts predicting semantically-aware features.
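To make the TTT recipe above concrete, here is a minimal sketch in PyTorch. Everything in it (the `APMSketch` module, the `test_time_train` helper, the location-embedding and feature shapes, and the MSE distillation loss) is our assumption used to illustrate the idea of fitting one net to a single distilled representation by visiting patches one at a time in any order; it is not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of APM-style test-time training. Names and shapes are
# assumptions, not the paper's code. Idea: distill the teacher's patch
# features of ONE test image once, then fit a small net that maps a
# patch-location embedding -> patch feature, one patch at a time, any order.

class APMSketch(nn.Module):
    def __init__(self, loc_dim=64, hidden=512, feat_dim=768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(loc_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, loc_emb):           # loc_emb: (N, loc_dim)
        return self.mlp(loc_emb)          # (N, feat_dim) patch features


def test_time_train(teacher_feats, loc_embs, steps=100, lr=1e-3):
    """teacher_feats: (P, feat_dim) patch features distilled once from the
    test image; loc_embs: (P, loc_dim), one embedding per patch location."""
    net = APMSketch(loc_dim=loc_embs.shape[1], feat_dim=teacher_feats.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    P = teacher_feats.shape[0]
    for _ in range(steps):
        i = torch.randint(P, (1,))        # visit one patch at a time, any order
        loss = nn.functional.mse_loss(net(loc_embs[i]), teacher_feats[i])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net
```

Note the design point this sketch tries to capture: because each step consumes a single patch, no parallel pass over the whole image is required, which is the asynchrony the TL;DR contrasts with parallel perception.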
APM demonstrates potential applications beyond test-time training: APM can scale up to a dataset of 2D images and yield semantic clusterings in a single forward pass. APM also provides the first empirical evidence towards validating GLOM's insight, i.e., that the input percept is a field. Therefore, APM helps us converge towards an implementation which can do both interpolation and perception on shared connectionist hardware. Our code is publicly available at https://rajatmodi62.github.io/apm_project_page/
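A hypothetical continuation of the sketch above illustrates the field reading: since the fitted net maps location embeddings to features, it can be queried at embeddings it was never trained on, doing interpolation and perception with the same weights. The `query` tensor here is a random placeholder standing in for a real positional grid.

```python
# Continuing the assumed sketch: query the fitted net at arbitrary location
# embeddings, treating the percept as a continuous field rather than a
# fixed grid of patches.
net = test_time_train(teacher_feats, loc_embs)
with torch.no_grad():
    query = torch.randn(1024, loc_embs.shape[1])  # placeholder embeddings for unseen locations
    field = net(query)                            # (1024, feat_dim) interpolated features
```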
--------
**It now appears that some of the ideas in GLOM could be made to work.**
https://www.technologyreview.com/2021/04/16/1021871/geoffrey-hinton-glom-godfather-ai-neural-networks/
GLOM = Geoff's Latest Original Model.
```
.-""""""-.
.' '.
/ O O \
| O |
\ '------' /
'. .'
'-....-'
Silent men in deep-contemplation.
Silent men emerge only sometimes.
Silent men love all.
Silent men practice slow science.
```
Primary Area: Deep learning architectures
Submission Number: 7605