Keywords: interpretability, XAI, model theory, computability theory, mathematical linguistics, computational linguistics
TL;DR: We discuss recent results on the interpretability of objects and processes in mathematical and computational linguistics.
Abstract: We discuss one of the central notions of eXplainable Artificial Intelligence: the notion of interpretability. This notion is actively and fruitfully used in mathematics: interpreting or representing points as (tuples of) numbers is the central idea of analytic geometry, and so on. One of the most general formal definitions of interpretability is given in model theory, the branch of mathematical logic that studies the syntax and semantics of formal languages. Computable model theory studies algorithmic interpretations (representations) of classical mathematical structures on the natural numbers.
However, the objects and processes used in current artificial intelligence are rather different from those studied in mathematical logic (in particular, in model theory, computability theory, and proof theory). One of the main features that practical objects lack is precision (exactness).
We discuss recent results (ours and others') on the interpretability of objects and processes in mathematical and computational linguistics. These two approaches to understanding and interpreting the meaning of a sentence or a text expressed in natural language provide very typical examples of the differences between mathematical and computational (practical) models.
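To make the model-theoretic notion concrete, here is a standard textbook example (an illustrative sketch, not a construction from this submission): the complex field is interpretable in the real field, with pairs of reals as the domain and the operations defined by

\[
(a,b) + (c,d) = (a+c,\ b+d), \qquad (a,b) \cdot (c,d) = (ac - bd,\ ad + bc),
\]

so that every first-order statement about $(\mathbb{C}, +, \cdot)$ translates into a first-order statement about $(\mathbb{R}, +, \cdot)$.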
Submission Number: 26