A path toward primitive machine intelligence: LMM, not LLM, is what you need

18 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: linear mixture model, cognitive development, hyperspectral, spectroscopy, chemometrics
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A theory of cognitive development in linear mixture models
Abstract: We live in a world where machines equipped with large language models (LLMs) and deep reinforcement learning have shown sparks of super-human intelligence in question answering and strategic board games. At the same time, animals continue to reign supreme in the sense of smell, a primitive form of intelligence. Applying the same deep-learning techniques to large datasets from hyperspectral hardware and spectrometers may well yield artificial noses that can detect the chemical composition of a mixture. But it comes at the cost of interpretability! Here, I propose a path that uses linear mixture models (LMMs) to build an engineering theory of cognitive development for chemosensing. With creative mathematical models, we can derive analytical expressions for the limits of chemosensing and advance the statistical mechanics of learning.
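To make the LMM setting concrete, the sketch below models a measured spectrum as a non-negative linear combination of known pure-component spectra and recovers the mixture's composition with non-negative least squares. This is a minimal illustration of the general idea, not the submission's actual model; the matrix sizes, variable names, and the choice of scipy's nnls solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical linear mixture model: a measured spectrum y is a
# non-negative combination of known pure-component spectra (columns of S):
#     y = S @ c + noise,   with c >= 0
n_wavelengths, n_components = 200, 4
S = rng.random((n_wavelengths, n_components))   # pure-component spectra
c_true = np.array([0.5, 0.0, 0.3, 0.2])         # true concentrations
y = S @ c_true + 0.01 * rng.standard_normal(n_wavelengths)

# Non-negative least squares yields concentration estimates that are
# directly interpretable as component abundances in the mixture.
c_hat, residual = nnls(S, y)
print("estimated concentrations:", np.round(c_hat, 3))
```

Because each fitted coefficient maps to the abundance of a named chemical component, the LMM stays interpretable in exactly the way a deep network trained end-to-end on the same spectra would not.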
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1411