Distinguishing between cause and effect via the algorithmic Markov condition
Dominik Janzing and Bernhard Schoelkopf
(Max Planck Institute for Biological Cybernetics, Germany)
Inferring the causal structure that links n observables is usually based
on detecting statistical dependences and choosing simple graphs that
make the joint measure Markovian. Here we argue that causal inference is also possible
when only single observations are present.
We develop a theory of how to generate causal graphs that explain
similarities between single objects. To this end, we replace the notion of
conditional stochastic independence in the causal Markov condition
with the vanishing of conditional *algorithmic* mutual information.
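Since Kolmogorov complexity is uncomputable, algorithmic mutual information can only be approximated in practice. The following sketch (not part of the paper, and only a crude stand-in) uses zlib-compressed length as a computable upper bound on Kolmogorov complexity K, and the standard identity I(x : y) = K(x) + K(y) - K(x, y), with K(x, y) approximated by compressing the concatenation:

```python
import zlib

def K(s: bytes) -> int:
    # Compressed length as a crude, computable upper bound on the
    # (uncomputable) Kolmogorov complexity K(s).
    return len(zlib.compress(s, 9))

def algorithmic_mi(x: bytes, y: bytes) -> int:
    # I(x : y) ~= K(x) + K(y) - K(x, y); the joint complexity is
    # approximated here by compressing the concatenation x + y.
    return K(x) + K(y) - K(x + y)

a = b"abcabcabc" * 50          # highly structured string
b_similar = b"abcabcabc" * 50  # shares its structure with a
c_other = bytes(range(256)) * 2  # unrelated byte pattern

# Strings sharing structure have large approximate mutual
# information; unrelated strings have (near) none.
print(algorithmic_mi(a, b_similar), algorithmic_mi(a, c_other))
```

Real compressors capture only limited regularities, so this proxy can miss algorithmic dependences that a shorter program would expose; it is meant only to make the definition concrete.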
We prove the equivalence of three versions of our algorithmic Markov condition
in analogy to a known equivalence theorem for three versions of the statistical
Markov condition. We justify the algorithmic causal Markov condition by
an algorithmic model of causality, where every effect is influenced by its causes
in a computable way.
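A toy instance of such a model (our own illustrative sketch, not the paper's formal construction) computes each node from its parents' values together with an independent noise string, so a chain X --> Y --> Z makes Z depend on X only through Y:

```python
import random

rng = random.Random(0)

def noise(n: int) -> str:
    # Fresh, independent noise string for one node.
    return "".join(rng.choice("01") for _ in range(n))

def mechanism(parent: str, nz: str) -> str:
    # Hypothetical computable mechanism: XOR the parent's bits
    # with the node's own noise bits.
    return "".join(str(int(p) ^ int(q)) for p, q in zip(parent, nz))

x = noise(8)                 # root cause: pure noise
y = mechanism(x, noise(8))   # effect of X
z = mechanism(y, noise(8))   # effect of Y; influenced by X only via Y
print(x, y, z)
```

Each value is a computable function of its direct causes plus node-local noise, mirroring the requirement that every effect be influenced by its causes in a computable way.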
In contrast to causal inference methods that rely on statistical dependences, our
theory implies rules for distinguishing between the causal hypotheses
X --> Y and Y --> X for two random variables X,Y.
This is because the causal graphs relating the
individual observations of the statistical ensemble induce different sets of algorithmic independences for the two cases.
The impact of our theory is thus threefold. It provides:
- a graphical-model-based framework for inferring causal relations among individual objects;
- a probability-free formulation of common independence-based statistical causal inference;
- a foundation for new statistical causal inference rules that also take into account non-statistical algorithmic dependences.