> Whereas I've never seen a frequentist book dismissive of Bayes methods
Nearly every Frequentist book I have that mentions Bayesian methods attempts to write them off pretty quickly as "subjective" (Wasserman comes immediately to mind, but there are others), falsely implying that Frequentist methods are somehow more "objective" (ignoring the parts of your modeling that are subjective does not somehow make you more objective). The very name of the largely frequentist method "Empirical Bayes" is a great example of this: it's an ad hoc method whose name implies that Bayes is somehow not empirical (Gelman et al. specifically call this out).
Until very recently, Frequentist methods have been the near-universally entrenched orthodoxy in most fields. Most Bayesians have spent a fair bit of their lives having their methods rejected by people who don't really understand the foundations of their own testing tools, but rather treat those tools as if they came from divine inspiration and ought not to be questioned. Bayesian statistics generally does not rely on any ad hoc testing mechanism, and can be derived pretty easily from first principles. It's funny you mention A/B tests as a good Frequentist example, when most marketers absolutely prefer their results interpreted as the "probability that A > B", which is the more Bayesian interpretation. Likewise, the extension from A/B testing to multi-armed bandits falls trivially out of the Bayesian approach to the problem.
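To make the "probability that A > B" point concrete, here is a minimal sketch of a Bayesian A/B analysis with made-up conversion counts: with a uniform Beta(1, 1) prior on each conversion rate, the posteriors are Beta distributions, and P(B > A) can be estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical A/B data: conversions out of trials for each variant.
conv_a, n_a = 120, 1000
conv_b, n_b = 145, 1000

# With a uniform Beta(1, 1) prior, the posterior for each conversion
# rate is Beta(conversions + 1, failures + 1).
samples_a = rng.beta(conv_a + 1, n_a - conv_a + 1, size=100_000)
samples_b = rng.beta(conv_b + 1, n_b - conv_b + 1, size=100_000)

# The quantity marketers actually ask for: P(B > A).
p_b_beats_a = (samples_b > samples_a).mean()
print(f"P(B > A) \u2248 {p_b_beats_a:.3f}")
```

The same Beta posteriors drive Thompson sampling for the multi-armed bandit extension: draw one sample per arm from its posterior and play the arm with the highest draw.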
Your "likelihood principle" discussion is also a bit confusing to me. In my experience, Fisherian schools tend to be the loudest champions of likelihood methods. Bayesians wouldn't need tools like Stan and PyMC if they were exclusively about likelihood, since pure likelihood methods can be carried out strictly with derivatives (maximization), whereas the posterior requires integrating over the parameter space.
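The derivatives-versus-integration distinction can be shown on a toy Bernoulli example with made-up counts: maximum likelihood follows from setting a derivative to zero, while the Bayesian posterior mean requires a normalizing integral (done here on a grid; for realistic models that intractable integral is exactly why samplers like Stan and PyMC exist).

```python
import numpy as np

# Toy data: 7 successes in 10 Bernoulli trials (hypothetical numbers).
k, n = 7, 10

# Maximum likelihood needs only calculus: set d/dp log L(p) = 0.
# d/dp [k*log(p) + (n-k)*log(1-p)] = k/p - (n-k)/(1-p) = 0  =>  p = k/n.
p_mle = k / n

# The Bayesian posterior instead needs an integral over the whole
# parameter space (the normalizing constant), approximated on a grid.
grid = np.linspace(1e-6, 1 - 1e-6, 10_001)
dx = grid[1] - grid[0]
likelihood = grid**k * (1 - grid)**(n - k)
unnorm = likelihood * np.ones_like(grid)        # flat prior
posterior = unnorm / (unnorm.sum() * dx)        # normalize by integration

# With a flat prior, the posterior mean is (k+1)/(n+2), not k/n.
posterior_mean = (grid * posterior).sum() * dx
print(p_mle, posterior_mean)
```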
This sounds to me very much like a political debate between people arguing for the best method, rather than focusing on the results that you can get with either method.
As long as this debate is still fuelled by emotional and political discourse, nothing useful will come out of it.
What is really needed is an assessment of which method is best suited to which cases.
The practitioner wants to know “which approach should I use”, not “which camp is the person I’m listening to in?”