On Different Notions of Redundancy in Conditional-Independence-Based Discovery of Graphical Models
TL;DR: We show that some, but not all, additional conditional independence tests can be used to evaluate and improve CI-based discovery of graphical models.
Abstract: Conditional-independence-based discovery uses statistical tests to identify a graphical model that represents the independence structure of variables in a dataset.
These tests, however, can be unreliable, and the algorithms are sensitive to test errors and violated assumptions.
Often there are tests that were not used in the construction of the graph.
In this work, we show that these _redundant_ tests have the potential to _detect_ or sometimes _correct_ errors in the learned model.
But we further show that not all tests contain this additional information and that such redundant tests have to be applied with care.
More precisely, we argue that conditional (in)dependence statements that hold for every probability distribution are unlikely to detect or correct errors, in contrast to those that follow only from the graphical assumptions.
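The following is a minimal sketch (not from the paper) of what a redundant test looks like. In a chain X → Y → Z, a constraint-based algorithm typically removes the X–Z edge after finding X ⟂ Z | Y; the marginal test of X vs. Z is then redundant for construction, but the fitted chain still predicts its outcome (dependence), so it can serve as a consistency check. The `fisher_z_pvalue` helper and the simulated data are illustrative assumptions, using a standard partial-correlation test with Fisher's z-transform.

```python
import math
import numpy as np

def fisher_z_pvalue(x, y, z=None):
    """Illustrative (conditional) independence test via partial correlation
    and Fisher's z-transform; assumes roughly linear-Gaussian data."""
    if z is not None:
        # Residualize x and y on the conditioning variable z (plus intercept).
        Z = np.column_stack([np.ones(len(x)), z])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(x, y)[0, 1]
    k = 0 if z is None else 1  # number of conditioning variables
    zstat = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(len(x) - k - 3)
    # Two-sided p-value under the standard normal null.
    return math.erfc(abs(zstat) / math.sqrt(2))

# Simulate a chain X -> Y -> Z.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
zv = 0.8 * y + rng.normal(size=n)

# Test used when building the skeleton: X ⟂ Z | Y (holds in the chain).
p_cond = fisher_z_pvalue(x, zv, y)
# Redundant test, not needed for construction: X ⟂ Z marginally.
# The chain predicts dependence here, so a large p-value would flag an error.
p_marg = fisher_z_pvalue(x, zv)
```

On this data the marginal test rejects independence while the conditional one does not, exactly the pattern the learned chain predicts; a mismatch would be evidence of an error in the graph.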
Submission Number: 459