My biggest issue with Bayes' Theorem as a method of making everyday decisions is that it assumes the ability to accurately assess the underlying likelihoods of events, especially on the fly.
I would even argue that it actually provides a false sense of precision, because the significant figures are often not correctly represented.
This is not a problem with Bayes' Theorem. Any alternative method of updating beliefs will suffer from exactly the same problem of noisy inputs, with the additional problem that it cannot maintain consistency with all the evidence (which only Bayes' rule can do).
Using Bayes' rule consistently will make you aware of how uncertain the inputs are, and that is a feature, not a bug.
How will it make you aware of uncertainty? At some point you do have to guess (called "estimating" here), do you not? And that will have a compounding effect on the outcome: misjudge a probability somewhere by only a small amount, and it multiplies its way through to the result, leaving you with a potentially huge difference in the final probability, one that could easily span the gap between "will act" and "won't act".
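To make that concrete, here is a minimal sketch (Python; every number in it is invented for illustration) of how a small error in one estimated input propagates through Bayes' rule to the posterior:

```python
# How a small error in one input to Bayes' rule can swing the posterior.
# All numbers are invented for illustration.

def posterior(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' rule."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

prior = 0.01        # guessed base rate
sensitivity = 0.9   # guessed P(evidence | hypothesis)

# Guessing the false-positive rate as 0.5% vs. 1.5% -- a one-point
# absolute difference -- moves the posterior from ~0.65 down to ~0.38:
for fpr in (0.005, 0.015):
    print(f"fpr={fpr:.3f} -> posterior={posterior(prior, sensitivity, fpr):.2f}")
```

If your decision threshold sits anywhere between those two posteriors, that one small guess is what decides whether you act.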
I think it's worth keeping the principles of Bayesian reasoning in mind: that you should update hypotheses on evidence, that adding probability in one place takes a little from everywhere else, that you should keep track of prior probabilities, and so on.
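For what that bookkeeping looks like when you do write it down, here is a minimal sketch (Python; the priors and likelihoods are invented): raising the posterior of one hypothesis necessarily lowers the others, because the distribution must still sum to 1.

```python
# One Bayesian update over three competing hypotheses.
# Priors and likelihoods are invented for illustration.

priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.8, "H2": 0.4, "H3": 0.1}  # P(evidence | H)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: round(p / total, 2) for h, p in unnormalized.items()}

print(posteriors)  # {'H1': 0.74, 'H2': 0.22, 'H3': 0.04}
# H1 gained probability, so H2 and H3 necessarily lost it.
```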
Not that you should actually do mental calculations on made-up probability estimates. I mean, you can do that, and if your estimates are at all decent the result might be better. But I don't think anyone actually recommends it.
But in general, no one has enough computing power in their head to do explicit Bayesian updates on everything all day long. You have to pick your battles and use the right tool for the task at hand.
That post reads as if nobody there has ever heard the phrase "don't fall in love with your model".
Quite a lot of LessWrong posts follow the theme "my model gives this counterintuitive result". The trouble is that they go on to "AND THIS IS VERY IMPORTANT AND SIGNIFICANT!!" rather than "hmm, maybe my model needs work."
As you say, Bayesian statistics is not of much use when there is no prior information about the frequency of certain events. Once any amount of such information is available, it comes into play.
1. Bayesian statistics allows the use of "uninformative" priors (see the sketch after this list).
2. Discarding your subjective beliefs is as far from ideal as overweighting them. You have those beliefs because of your experience; weight them lightly, but still use them. In the absence of much other information, your calculations will follow your gut, and what else do you have?
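On point 1, here is a minimal sketch of what an uninformative prior looks like in practice (Python; the coin-flip data is invented): a Beta(1, 1) prior is flat over [0, 1], yet the standard conjugate update still applies to it.

```python
# Estimating a coin's bias starting from the flat Beta(1, 1) prior.
# The observed flips are invented for illustration.

heads, tails = 7, 3

# Conjugate update: Beta(1, 1) prior + data -> Beta(1 + heads, 1 + tails).
a, b = 1 + heads, 1 + tails

posterior_mean = a / (a + b)
print(round(posterior_mean, 3))  # 0.667 -- shrunk toward 0.5
                                 # relative to the raw 7/10 = 0.7
```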
Using quantified subjective beliefs at least has the advantage of enabling you to make consistent choices based on what you know within a rigorously defined framework.
I don't think it's as rigorously defined as it's purported to be. I envision the practical application being people manipulating values to produce desired results, then post-hoc rationalizing those manipulations to reach a higher-than-warranted level of confidence in the result, because "I applied rigor!"
He's talking about the practical effects of fallible humans using the version of Bayes advocated in the original post (citing Yudkowsky and Muehlhauser).
Pulling numbers out of your backside and running them through a process makes you more confident than just pulling them out of your backside directly, but I have yet to see evidence that it does anything more than increase your confidence.
Oh yes, I wouldn't question the math -- it's well beyond my expertise, and I've seen too many working implementations of the theory to refute it at the theoretical level.
That is why you basically don't want to use Bayes when you have few samples (or none yet, but are preparing to collect them). There are much better approaches; the first that comes to mind is Expectation-Maximization, which works on a series of inputs. The article itself is so confused at the end that I wasn't sure I was still reading about Bayes (lol?), so I guess we are all a little puzzled.
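Since EM came up: for readers who haven't met it, here is a minimal sketch of Expectation-Maximization (Python, with synthetic data and rough starting guesses, none of which come from the article), fitting a two-component 1-D Gaussian mixture by alternating E- and M-steps.

```python
import math
import random

random.seed(0)
# Synthetic data: two clusters, around 0 and around 5.
data = [random.gauss(0, 1) for _ in range(100)] + \
       [random.gauss(5, 1) for _ in range(100)]

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Deliberately rough initial guesses.
mu, sigma, weight = [1.0, 4.0], [1.0, 1.0], [0.5, 0.5]

for _ in range(50):
    # E-step: responsibility of each component for each point.
    resp = []
    for x in data:
        w = [weight[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        total = sum(w)
        resp.append([wk / total for wk in w])

    # M-step: re-estimate parameters from those responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)
        weight[k] = nk / len(data)

print([round(m, 2) for m in mu])  # roughly [0.0, 5.0]
```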