
Bayes' Theorem tells us that the quest for certain knowledge, which drove a great deal of science and philosophy in the pre-Bayesian era (before about 1990, when Bayesian methods started to gain real traction in the scientific community), is much like the alchemist's quest for the secret of transmutation: it is simply the wrong goal to have, even though it generated a lot of interesting and useful results.

One of the most important consequences of this is noted by the article: "Confirmation and falsification are not fundamentally different, as Popper argued, but both just special cases of Bayes’ Theorem." There is no certainty, even in the case of falsification, because there are always alternatives. For example, the apparent observation of superluminal neutrinos didn't prove special relativity false, although it did provide some evidence against it. The alternative hypothesis, that the researchers had made a mistake, turned out to be much more plausible.

Bayesian reasoning--which is plausibly the only way of reasoning that will keep our beliefs consistent with the evidence--cannot produce certainty. A certain belief is one that has a plausibility of exactly 1 or 0, and those are only asymptotically approachable by applying Bayes' rule. Such beliefs would be immune from any further evidence for or against them, no matter how strong it was, essentially because Bayesian updating is multiplicative and anything times zero is still zero.
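The multiplicative point is easy to see in a few lines. This is just a sketch of Bayes' rule for a single hypothesis (the `update` function and the specific likelihood numbers are mine, chosen for illustration):

```python
# Sketch: Bayes' rule is multiplicative, so a prior of exactly 0 never moves.

def update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = prior * likelihood_h
    denominator = numerator + (1 - prior) * likelihood_not_h
    return numerator / denominator

# An open mind moves with the evidence:
p = 0.5
for _ in range(5):
    p = update(p, 0.9, 0.1)   # evidence strongly favoring H each time
print(round(p, 6))            # approaches 1 asymptotically, never reaches it

# A prior of exactly 0 is immune to any amount of evidence:
print(update(0.0, 0.9, 0.1))  # 0.0
```

Each update multiplies the odds by the likelihood ratio (here 9), so the posterior after five updates is 9^5 / (9^5 + 1), close to but never equal to 1, while the zero prior stays at zero forever.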

There is a name for beliefs of this kind, which to a Bayesian are the most fundamental kind of error: faith.



> Bayesian reasoning--which is plausibly the only way of reasoning that will keep our beliefs consistent with the evidence--cannot produce certainty.

To nitpick: Bayesian updating can produce certainty, in exactly the way you suggest: multiplying by zero. If the evidence you observed has zero probability under a particular hypothesis, then the posterior probability of that hypothesis will be zero. If the evidence you observe has zero probability under all hypotheses except for one, then the posterior will give probability 1 to that hypothesis (assuming it had nonzero prior probability).

This won't come up if you stick to densities like Gaussians that are supported everywhere. And it's certainly a good principle of model design to always allow your beliefs to be changed by new evidence (consistency theorems for Bayesian inference do depend on assumptions about the support of the prior and likelihood). But there's nothing formally preventing you from designing Bayesian models that rule out hypotheses with total certainty. In fact, this is what allows classical logic to be a special case of Bayesian reasoning.
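Concretely, over a finite hypothesis space this is just normalization doing the work. A minimal sketch (the `posterior` function and the particular numbers are made up for illustration):

```python
# Sketch: evidence with zero likelihood under all but one hypothesis
# drives the posterior to exact certainty after normalization.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over a finite hypothesis space."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.5, 0.3, 0.2]         # P(H1), P(H2), P(H3)
likelihoods = [0.0, 0.0, 0.7]    # observed evidence is impossible under H1, H2
print(posterior(priors, likelihoods))  # [0.0, 0.0, 1.0]
```

H3 ends up with posterior exactly 1, not asymptotically close to it, because its competitors were multiplied by zero.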


For someone who disdains beliefs which are assigned a plausibility of 1, you sure do seem eager to assign that to one of your beliefs.


I awake. It is dark.

Therefore when I awake it is always dark.

Problem. Mismatch.

Turns out that if I awake between the local hours of 5am and 7pm then it is light. Otherwise it is dark. Problem. Mismatch. Turns out, it depends on the "time zone". Also turns out, depends on whether I'm sleeping inside or outside. In a hotel room or tent. Whether in a tent or in a building room with blinds. Etc. Etc. Each devil-in-the-details helps refine the case even further. But the "bet" to make is always the most "correct" bet to make, based only on the evidence observed to date, at hand. Thus Bayes.
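The "always make the best bet given the evidence so far" idea above can be sketched with a simple conjugate model. This is my own toy illustration, not anything from the thread: a beta-binomial estimate of "it is dark when I wake," updated one awakening at a time, with a made-up observation sequence:

```python
# Sketch: Beta-Binomial updating of P(dark when I wake), one observation
# at a time. The observation sequence is invented for illustration.

dark, light = 1, 1   # Beta(1, 1) uniform prior: no opinion yet
observations = [True, True, False, True, False, False, False]  # was it dark?

for is_dark in observations:
    # The best bet is the outcome that is more probable on the evidence to date.
    p_dark = dark / (dark + light)
    bet_dark = p_dark >= 0.5
    print(f"P(dark) = {p_dark:.2f}, bet dark: {bet_dark}, saw dark: {is_dark}")
    # Each observation refines the estimate; it never pins it at exactly 0 or 1.
    if is_dark:
        dark += 1
    else:
        light += 1
```

Each devil-in-the-details (time zone, blinds, tent) would show up here as extra conditioning variables; the structure of the bet stays the same.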

Thus the Turing award.

It's just as perfect and reliable as that. And just as imperfect or vulnerable as that.


Bayes is perfect but also painfully bloodily sharp-edged. Best description I have.



