
> Bayesian reasoning--which is plausibly the only way of reasoning that will keep our beliefs consistent with the evidence--cannot produce certainty.

To nitpick: Bayesian updating can produce certainty, in exactly the way you suggest: multiplying by zero. If the evidence you observed has zero probability under a particular hypothesis, then the posterior probability of that hypothesis will be zero. If the evidence you observe has zero probability under all hypotheses except one, then the posterior will give probability 1 to that hypothesis (assuming it had nonzero prior probability).
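A toy sketch of this (made-up numbers, hypothetical hypothesis names): two of three hypotheses assign zero probability to the observed evidence, so normalizing the posterior drives the remaining one to probability 1.

    # Priors over three hypothetical hypotheses.
    priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
    # Likelihood of the observed evidence under each hypothesis;
    # H2 and H3 say the evidence is impossible.
    likelihoods = {"H1": 0.4, "H2": 0.0, "H3": 0.0}

    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}
    print(posterior)  # {'H1': 1.0, 'H2': 0.0, 'H3': 0.0} -- certainty in H1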

This won't come up if you stick to densities like Gaussians that are supported everywhere. And it's certainly a good principle of model design to always allow your beliefs to be changed by new evidence (consistency theorems for Bayesian inference do depend on assumptions about the support of the prior and likelihood). But there's nothing formally preventing you from designing Bayesian models that rule out hypotheses with total certainty. In fact, this is what allows classical logic to be a special case of Bayesian reasoning.
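To make the classical-logic point concrete, here's a hedged sketch (numbers are arbitrary) of modus tollens as a Bayesian update with 0/1-style likelihoods: if we assume "A implies B", then observing not-B has probability zero under A, so the update rules out A with certainty regardless of its prior.

    # Any nonzero prior on A works; the conclusion is the same.
    priors = {"A": 0.7, "not A": 0.3}
    # Assuming A -> B, P(not-B | A) = 0; under not-A, not-B is merely possible.
    lik_not_B = {"A": 0.0, "not A": 0.5}

    unnorm = {h: priors[h] * lik_not_B[h] for h in priors}
    total = sum(unnorm.values())
    posterior = {h: p / total for h, p in unnorm.items()}
    print(posterior)  # {'A': 0.0, 'not A': 1.0} -- observing not-B refutes A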


