Looking at it further, it's indeed not so easy. If we have statistics produced by some causal relationship, e.g.
function takeMeasurement() {
    // c (the cause) itself has probability 0.5;
    // d (the dependent) in absence of c has probability 0.5,
    // but the presence of c increases it to 0.7
    const c = Math.random() < 0.5;
    const d = Math.random() < (c ? 0.7 : 0.5);
    return {c, d};
}
one can calculate an equally plausible hypothesis of causality in the opposite direction, which produces the same statistics:
P(c) = 0.5
P(d|!c) = 0.5
P(d|c) = 0.7
P(d) = ?
P(c|!d) = ?
P(c|d) = ?
-------------------------------
P(d) = P(c)*P(d|c) + P(!c)*P(d|!c) =
= P(c)*P(d|c) + (1 - P(c))*P(d|!c) =
= 0.5*0.7 + 0.5*0.5 = 0.35 + 0.25 =
= 0.6
P(c|!d) = P(c) * P(!d|c) / P(!d) =
        = P(c) * (1 - P(d|c)) / (1 - P(d)) =
        = 0.5 * 0.3 / 0.4 = 0.15 / 0.4 =
        = 0.375
P(c|d) = P(c) * P(d|c) / P(d) =
= 0.5 * 0.7 / 0.6 = 0.35 / 0.6 =
= 0.58(3)
-------------------------------
Bayes' formula:
P(a|b) = P(a) * P(b|a) / P(b)
and the resulting reversed generator:
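The arithmetic above can also be checked directly in code. A small sketch (the variable names like pDgivenC are mine, not from the original):

```javascript
// Given quantities of the forward (c -> d) model
const pC = 0.5;          // P(c)
const pDgivenC = 0.7;    // P(d|c)
const pDgivenNotC = 0.5; // P(d|!c)

// Law of total probability: P(d) = P(c)P(d|c) + P(!c)P(d|!c)
const pD = pC * pDgivenC + (1 - pC) * pDgivenNotC; // 0.6

// Bayes: P(c|d) = P(c)P(d|c)/P(d), P(c|!d) = P(c)P(!d|c)/P(!d)
const pCgivenD = pC * pDgivenC / pD;                // 7/12 = 0.58(3)
const pCgivenNotD = pC * (1 - pDgivenC) / (1 - pD); // 0.375

console.log(pD, pCgivenD, pCgivenNotD);
```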
function takeMeasurement2() {
    const d = Math.random() < 0.6;
    const c = Math.random() < (d ? 7 / 12 : 0.375);
    return {c, d};
}