TL;DR: InfoCons is an explanation framework for point cloud models that decomposes inputs into interpretable 3D concepts using information-theoretic principles, ensuring both faithfulness and conceptual coherence.
Abstract: Interpretability of point cloud (PC) models becomes imperative given their deployment in safety-critical scenarios such as autonomous vehicles.
We focus on attributing PC model outputs to interpretable critical concepts, defined as meaningful subsets of the input point cloud.
To enable human-understandable diagnostics of model failures, an ideal critical subset should be *faithful* (preserving points that causally influence predictions) and *conceptually coherent* (forming semantically meaningful structures that align with human perception).
We propose InfoCons, an explanation framework that applies information-theoretic principles to decompose the point cloud into 3D concepts, enabling the examination of their causal effect on model predictions with learnable priors.
We evaluate InfoCons on synthetic datasets for classification, comparing it qualitatively and quantitatively with four baselines.
We further demonstrate its scalability and flexibility on two real-world datasets and in two applications that utilize critical scores of PC.
Lay Summary: 3D point clouds are like images of the world made up of millions of points, captured by sensors such as LiDAR.
Neural networks are trained to perceive these point clouds—for example, to help self-driving cars or robots understand their surroundings—but they sometimes make mistakes.
Why do they fail? Our method, InfoCons, uses ideas from information theory to break point clouds into meaningful 3D parts, so we can see which parts of the data were critical to the model’s decision.
This helps us understand failures—for example, when a model overlooks a car on the road, or confuses a flower pot with a plant, or mixes up a table with a desk. These mistakes can have serious consequences in real-world settings.
InfoCons offers visual explanations for model decisions and works across different datasets and tasks, helping researchers and developers diagnose and improve 3D perception systems.
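The idea of a critical subset ranked by per-point scores can be illustrated with a toy sketch. This is *not* the InfoCons method (which derives scores from an information-theoretic decomposition with learnable priors); the scoring below is a placeholder contribution-magnitude proxy on a hypothetical linear model, used only to show what "critical scores of PC" means in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1024, 3))  # toy point cloud: N x 3 coordinates
w = rng.normal(size=3)               # weights of a hypothetical linear "model"

# Placeholder per-point score: magnitude of each point's contribution
# to the toy linear logit. (InfoCons instead learns scores that are
# both faithful and conceptually coherent.)
scores = np.abs(points @ w)

# Critical subset: the top 10% of points by score.
k = len(points) // 10
critical_idx = np.argsort(scores)[-k:]
critical_subset = points[critical_idx]

print(critical_subset.shape)  # (102, 3)
```

A downstream diagnostic would then visualize `critical_subset` against the full cloud to see which structures the model relied on.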
Link To Code: https://github.com/llffff/infocons-pc
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Information Theory, Interpretability, Point Cloud
Submission Number: 4354