- Keywords: multimodal machine learning, neuro-symbolic inference, knowledge bases, hybrid intelligence
- TL;DR: Position paper on integrating deep learning, neuro-symbolic inference, and knowledge bases for modelling language.
- Abstract: In answer to recent contributions on the conjectured impossibility of learning meaning from surface form alone, and on the dangers of large language models, we argue in this paper that an explicit distinction should be made between (i) perception, (ii) memory, and (iii) inference. We envision a triad of interacting subsystems with corresponding responsibilities. Perception provides the interface between the system and its environment, and is typically realised as a language model. Memory is an explicit structure of concepts and the relations between them, in other words, a knowledge base of facts. Inference, finally, corresponds to mathematical or rule-based reasoning and provides, for example, classical logic and arithmetic. We note here that such axiomatic systems cannot be deduced from data, only conjectured and tested against data. Our position is that natural language systems should thus combine continuously updated language and other perception models (e.g., computer vision) with one or more symbolic knowledge bases that relieve the models from learning concepts and relations, and finally one or more inference engines to provide formal reasoning.
- Track: Position paper