I was having this exact discussion with a philosophy of science major a few days ago. We were both bothered by the current fetishization of science.
It's troubling that otherwise well-educated, logical people have come to elevate science as this god-like authority, and scientific consensus as the source of all truths.
The discussion turned to Kuhn and the belief, often touted in popular culture, that science is not always completely right but is in essence one big model of nature that keeps getting better, so we should trust it implicitly, while ignoring the concept of the paradigm shift, which arises specifically from people outside the consensus. Coupled with the fact that dissent gets blurred with denialism, and that the absence of evidence has come to mean "evidence of absence" on the news, it's become really tricky to discuss things critically without being openly mocked.
I wish I had time to read more philosophy; even as a scientist, you learn a whole bunch about science from it. It's really enlightening.
Feynman's lectures on The Character of Physical Law should be something everyone sees in school.
In general we look for a new law by the following process: first we guess it, then we compute the consequences of the guess... and then we compare those computation results to experiment.
If it disagrees with experiment it is wrong. In that simple statement is the key to science. It doesn't make a difference how beautiful your guess is. It doesn't make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment it's wrong, that's all there is to it.
Notice however that we never prove it right... In the future there could be a wider range of experiments, or you could compute a wider range of consequences, and you may discover then that the thing is wrong. That's why laws like Newton's laws for the motions of planets last such a long time... It took several hundred years before the slight error in the motion of Mercury was developed.
Then there is the Asimov quote: "John, when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
The key to the quote above is that nothing is proven right.
That is the fundamental flaw of science; if you understand this, you understand science.
In science and therefore reality as we know it, nothing can ever be proven true. It is fundamentally impossible. No claim made by any scientist since the beginning of science has ever been proven true.
This “flaw” is what drives most of the debate and misunderstandings about science in the cultural and political arena. For example... No scientist can ever prove global warming to be true... it is fundamentally impossible. Hence the debate rages on endlessly.
>> In science and therefore reality as we know it, nothing can ever be proven true.
In engineering we take the science and use it to predict how a design will work. When things work as intended, it is confirmation that the science is useful, if not "correct".
To me that is what science does. It allows us to make useful predictions that can inform decisions.
Science is a process of falsification. You have a hypothesis and you attempt to falsify the hypothesis.
What you're describing is a model that has stood the test of falsification. Someone comes up with a mathematical model that can predict the future. They test that model under science to see if they can falsify the model.
If they fail to falsify the model then at best they say this model might as well be true because we couldn't determine otherwise.
That is science.
When you use the model to predict how something will behave in the real world for some real-world application, that is better called "engineering."
Engineering is application, science is best attempt verification.
I think you're splitting hairs, but ok. You describe the process of science and how it produces models. I described how we use those models in engineering - not the science I suppose, but the output (useful models) from it. In a sense, engineering is hoping its efforts confirm the model, not falsify it :-)
I think there is some important stuff in this, so maybe splitting hairs to explain accurately is important.
There are people called "scientists" who test models with scientific experiments. Then there are people called "engineers" who use the models. If our social structure splits the difference by occupation, how am I splitting hairs?
Literally, if you didn't run an experiment to test a hypothesis, you aren't doing science, and people in society therefore won't refer to you as a scientist.
Every engineer tests a hypothesis by designing things based on that hypothesis and then testing that they work as intended. We tend to use the hypotheses that have already been elevated to the status of theory, though.
I've done some greenfield science as an engineer, and plenty of scientists do some engineering. Titles don't really mean that much to me. It's all sort of a continuum and our place on it isn't strictly defined.
Ok let's be real. You're the one splitting hairs now. There is a clear difference between what either occupation does but of course sometimes a scientist needs to do carpentry or machinist work to build the tools for his experiment. Does this make machining and carpentry the same thing as science? No.
For sure. If someone can come up with a way to prove anything to be definitively true, most of these arguments would be over.
It is this flaw in science that is the origin of the controversial debates that surround science, religion, global warming, and other contested topics.
Nothing can be proven and therefore reality will always be open to interpretation.
Just to clarify, things can be proven false, they just can't be proven true. That's why we speak of a good scientific theory being falsifiable - there must be a way to show it's not true.
The Scientific Method then becomes a process of incremental refinement. Newton's theory of gravity didn't become "wrong" just because Einstein's theory was more accurate - heck we still teach, and use, Newton's theory to this day in both high school and college. It's useful. We just know that all bets are off when velocities approaching the speed of light are involved. Turns out that's not usually the case in everyday situations. We suspect that Einstein's theory may not be the last word on gravity either - but the cases where the theory breaks down are getting more and more extreme. It's not like apples are going to start falling up from trees because we changed our theory!
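To put rough numbers on "all bets are off near the speed of light" (a quick back-of-the-envelope sketch, not part of the original comment): the relativistic correction is the Lorentz factor, and at everyday speeds it is indistinguishable from 1.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v: float) -> float:
    """Lorentz factor: the size of the relativistic correction to Newton."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("car on a highway, 30 m/s", 30.0),
                 ("jet airliner, 250 m/s", 250.0),
                 ("half the speed of light", 0.5 * C)]:
    print(f"{label}: gamma - 1 = {gamma(v) - 1:.3e}")

# At 30 m/s the correction is ~5e-15: far below everyday measurement
# accuracy, which is why Newton's theory still "works" there.
```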
Yeah I'm already aware of this. Almost no one else is though.
Keep in mind, though, that incremental refinement is not a given. There is still no way to know if each new theory is actually closer to the truth. In fact it may even be a step backwards.
Additionally, limited accuracy in observation tools actually makes it impossible to falsify anything either. Can you trust that the critical observation was 100% accurate? But tbh this is just splitting hairs.
I was wondering if you were going to call me out on that!
> Additionally, limited accuracy in observation tools actually makes it impossible to falsify anything either
That's a problem with GR too - it's true only within our current ability and accuracy to test that it's true. Better measurement may suddenly reveal something to not be true that we currently believe to be true.
> No scientist can ever prove global warming to be true
Or to be false. Focusing on "true" vs. "false" leads nowhere except, as you note, to endless debate.
The real question is, do we have models that can make reasonably accurate predictions? That is the right question for ordinary citizens to ask about any scientific claim when deciding what political policies, if any, to support or oppose.
In the case of global warming, the answer to that question is no; we have models, and they make predictions, but the predictions aren't very good, and they haven't gotten any better over the last few decades despite a lot of effort. That means we should be very careful putting much confidence in those models.
If I hypothesize that all zebras have stripes then observing one zebra with spots falsifies the entire hypothesis. This is definitive.
However, no amount of zebras that I observe with stripes can ever prove my hypothesis correct. I can observe 500 zebras all with stripes and at any point in time the 501st zebra can have spots. I can observe 1 billion zebras with stripes and the possibility still remains open that the next zebra I see has spots.
Science is not symmetric. Falsification is possible, proof is not.
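A toy sketch of that asymmetry in code (hypothetical zebra data, purely illustrative):

```python
def test_all_striped(observations: list[str]) -> str:
    """Test the hypothesis 'all zebras have stripes'.

    One counterexample settles the question for good; any number of
    confirmations only fails to settle it.
    """
    for pattern in observations:
        if pattern != "stripes":
            return "falsified"           # definitive
    return "not yet falsified"           # never becomes "proven true"

print(test_all_striped(["stripes"] * 500))              # not yet falsified
print(test_all_striped(["stripes"] * 500 + ["spots"]))  # falsified
```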
Falsification of really, really simple hypotheses (like "all zebras have stripes", or the classic "all ravens are black") is possible, yes.
But no hypothesis of any real significance in science is that simple. In any scientific model of any significance, there are always ways to patch the model to account for new observations. In a model that is destined to be supplanted, the patching gets more and more cumbersome and less and less plausible over time; but there is no hard and fast breaking point at which there is too much patching, that's always a judgment call about which different scientists can disagree. Also, unless and until there is some new model available that can account for all the same observations, including the new ones, in a simpler and more intuitively plausible way, preferably also with more accurate predictions, scientists will continue to try to use the old patched model because there is no alternative.
Except science _can_ set error bounds. No, global warming can never be "true" from the perspective of 100% accurate with no room for error. Science is able to set error bounds though. The latest particle physics results come with "this theory is correct to within 99.99% of the theoretical model". That means that even if the theory is wrong, it can account for 99.99% of all observations you make. That is the power. Obviously climate change models aren't like particle physics so we don't expect such accurate error bounds (too many sources for error). However, the question you must ask is "are the error bars sufficient for the decision I have to make?".
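A minimal sketch of what error bounds look like in practice (the measurements and threshold below are invented for illustration):

```python
import statistics

# Hypothetical repeated measurements of some quantity (numbers invented).
measurements = [9.81, 9.79, 9.82, 9.80, 9.78, 9.83, 9.81, 9.80]

mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / len(measurements) ** 0.5  # standard error

# ~95% interval under the usual normal approximation.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"estimate: {mean:.3f}, 95% interval: [{low:.3f}, {high:.3f}]")

# The practical question isn't "is this True?" but whether the interval
# is tight enough for the decision at hand, e.g. "is it above 9.75?".
threshold = 9.75
print("confidently above threshold:", low > threshold)
```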
My conclusion thus is that, unlike dietary science or pop psychology, the evidence here is very likely real, despite any doubts that may persist. I could of course be wrong. That's a fundamental truth one has to admit to oneself & to peers who understand how science works. It's not a political truth that makes sense to admit, because saying "I can never know but I'm pretty sure" has been weaponized into "you don't know anything & you're wrong". For the same reason, when speaking among science-literate friends I say "global warming", but I'm careful to say "climate change" to everyone else, because the warming aspect got corrupted into "you said it's global warming but winter was especially cold this year".
Maybe there has been massive social pressure within the scientific community toward confirmation bias. Certainly the community does self-police & ridicule anyone who doesn't subscribe to it. So why do I think the theory is reliable & not a massive conspiracy (intentional or otherwise)? I trust that there are lots of very bright people who have studied the math, from within & without the field, & validated that the models don't have fundamental issues of any kind (conceptual problems, numerical issues, unsound computer sims, etc). I trust that people outside the field who have related degrees have validated huge swaths of it. I trust that technological advances have provided us with exponentially better sensors & monitoring, and that has fed back into exponentially better simulations to validate models. And with all of that development, over the course of 20 years, the results haven't changed & no new hypotheses have borne out.

The scientific community is both small & large. It's large in that there will always be some amount of bad behavior somewhere: bad or unethical science, wrong results, mistaken theories that take hold for a time, etc. It's small in the sense that when there's a very real problem, it can be brought to light & it's really hard to suppress that knowledge. That's why the chorus about global warming has been increasing. Oil companies knew about this as a problem & kept their own voices out of the debate because it would hurt their bottom lines. In fact, they often fund the opposition. This isn't a conspiracy theory; there are court documents showing this.

A well-informed person can use all of this knowledge to make a guess about who to listen to. A less-informed person will let themselves be swayed by the opinion they want to hear, or by a "fuck you" to the scientific community for ruining their career choice.
No. This is incorrect. You're talking about model accuracy and observation accuracy.
I am talking about something far more fundamental. Using axiomatic logic and probability, you cannot prove anything in science, even with observations that are 100% accurate and 100% precise. This has fundamental consequences for our interpretation of reality as we know it, and it has affected our perception of reality and our science as well.
This occurs because at any point in time a new observation can be made that falsifies a theory. Let's say you have a hypothesis that all zebras have stripes. You can observe 500 zebras with 100% accuracy and see that all of those zebras have stripes. But at any point in the future you can happen upon a hidden island with 2 million zebras that have spots instead. 500 observations is minuscule in the face of 2 million, and it literally renders your initial hypothesis ludicrous. "Zebras are creatures that are more likely to have spots than stripes" is the complete U-turn conclusion based on the new observations.
Keep in mind the new conclusion occurred regardless of how methodical and accurate your initial observations were. The accuracy of the observation is completely and utterly irrelevant, because at any point in time I can encounter another new island with 1 billion zebras that have stripes, rendering my second conclusion completely wrong, again.
This is the fundamental flaw of science. It is far more fundamental than limited accuracy in observational measurements.
For example, take Newton's laws of motion. There is no 99% right or wrong on that model. Assuming that our observations are accurate, Newton's laws of motion are 100% wrong.
Yes, they may be numerically accurate to a certain extent, but the theory has ultimately been falsified and we now know relativity is a more accurate description. However, keep in mind that even relativity is not "proven"; it can never be proven, and it will always be open to a complete reversal the same way Newtonian motion was.
In fact Newton's laws of motion are the perfect example. They were the ultimate example of scientific verification. All experiments pointed to the theory being completely accurate; to disbelieve the science was to disbelieve reality. It was at the time equivalent to disbelieving evolution.
This is the fundamental flaw with science. Nothing can ever truly be proven. Everything, even the fundamental pillars of reality we rely on today, from Newton's laws to evolution, can never actually be proven true and is always open to a complete rewrite.
This is the exact reason why people can pick and choose the reality they believe in, whether it be Christianity or evolution. Neither can actually be proven; nothing has been and nothing ever will be.
You seem to be leaning very strongly toward nihilism as a philosophy. My view is that nihilism isn't a useful philosophy & doesn't yield any particularly meaningful insights that help you find success in this world. It's very much, at least to me, of the same vein as the Omphalos hypothesis (also known as Last Thursdayism by atheists such as myself), which says "Sure, sure. You've got all these fancy theories. But how do you *know* the universe wasn't created in its current state Last Thursday, & so all your measurements are meaningless?"
Worrying about an epistemological definition of "truth" that is different from the scientific one is equally unhelpful. Scientific philosophy, & the inquiry stemming from it, actually yields results in any field you look into, & builds on itself. Worrying about a higher-order definition of truth and certainty that only exists in your own mind (since no two people will agree) is irrelevant & unhelpful. Medicine has come a very long way from where it was & our understanding of it is drastically better than it was. Is it perfect? No. Is it infallible? No. Does it matter? Not really, because at the end of the day it's infinitely better than where we started & continuing along this path will continue to yield results over time.
> In fact Newton's laws of motion are the perfect example. They were the ultimate example of scientific verification. All experiments pointed to the theory being completely accurate; to disbelieve the science was to disbelieve reality. It was at the time equivalent to disbelieving evolution.
I'm always fascinated by people who claim that Einstein's theory of relativity somehow undermines the bedrock of scientific inquiry when it's 100% the thing that supports it. Newton's theories weren't wrong. They were 100% correct for the environments we were testing them in. Like all the equations behind the theory of relativity: if you turn down the speed & mass variables to everyday human values, they literally turn into the same classical Newtonian mechanics equations. The *only* instance where you should be questioning the scientific validity of a field is when there are competing theories & the experiments themselves don't really help decide between them. Like dietary science. That's a field that constantly produces contradicting results. There's definitely some good advice, but it's mostly hokum except for the parts that actually intersect with medical research or have really wide studies done, because of the problems of limited observations. Same with pop psychology & other human-centered inquiries that don't have external sensors against which to measure results & large sample sizes to deal with the variation. Physical inquiries suffer very few of these problems & are easier to experiment with.
If it helps you, the scientific method of inquiry is in some ways directly supported as a fundamental tenet of mathematics (via the fields of probability/calculus). If you sample an underlying distribution enough times with a random enough sample (no bias that's causing you to overlook things), the more the samples match your estimate of the distribution, the less likely it is that your estimate and reality diverge (see the sketch at the end of this comment). That resolves, at least for me, the philosophical conundrum of "what is truth" and "have you really done enough measurements". For religious arguments, your form of argument is "the God of the gaps" or "God of the cracks". If you just focus on a crack, all you can see is all that empty space & not the bridge that the crack is a non-critical part of. Even science's philosophy is underpinned by a mathematical truth & our challenges sticking to it are our own failures, not those of science. I recognize this sounds like religion, but the difference is:
* Falsifiability. Good scientists will very quickly discourage any attempt at scientific inquiry of anything that can't be disproven through experimentation.
* Free sharing of knowledge. We're not as great here because of the economic realities of our society, but certainly better than religious organizations that tend to have more of their documentation in private vaults. That being said, this is the most fair point of criticism against scientific inquiry for me & the one where today's scientific industry gets closest to religion.
* Consistency of conclusions. It doesn't matter if a discovery fails to take hold. Over time the same thing gets rediscovered eventually. Like Calculus being simultaneously invented by Newton & Leibniz. Good ideas just have their time & inevitability comes from a build up of knowledge. Religions don't really share this property. Neither does philosophy which just has a bunch of models & no way to model/investigate them. Philosophy is useful as a hypothesis generation machine or maybe as a way to examine how humans can improve the scientific field. That's about it & we need to be quick to discard it when science starts providing answers.
* Belief or lack of it doesn't matter. Science is about making predictions. If the predictions are based on faulty science, they'll not hold up over time. If the predictions do hold up, then they're more likely to be right. Probability is where this gets tricky, especially so when polling human sentiments. That is walking a razor's edge.
Still, science is the only philosophy that's actually yielded tangible results consistently over any period of time. Religion & other philosophies have not.
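A minimal simulation of the sampling argument above (a toy sketch; the coin bias and sample sizes are invented):

```python
import random

random.seed(0)

TRUE_P = 0.6  # the hypothetical underlying "reality": a biased coin

def estimate(n: int) -> float:
    """Estimate the bias from n independent, unbiased draws."""
    return sum(random.random() < TRUE_P for _ in range(n)) / n

for n in (10, 100, 10_000, 1_000_000):
    est = estimate(n)
    print(f"n={n:>9}: estimate={est:.4f}  error={abs(est - TRUE_P):.4f}")

# The error shrinks roughly like 1/sqrt(n) (law of large numbers / CLT),
# but only if the sampling is unbiased -- exactly the caveat above.
```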
> You seem to be leaning very strongly toward nihilism as a philosophy. My view is that nihilism isn't a useful philosophy & doesn't yield any particularly meaningful insights that help you find success in this world. It's very much, at least to me, of the same vein as the Omphalos hypothesis (also known as Last Thursdayism by atheists such as myself), which says "Sure, sure. You've got all these fancy theories. But how do you know the universe wasn't created in its current state Last Thursday, & so all your measurements are meaningless?"
What I'm talking about isn't a philosophy. It is a fundamental tenet of science as taught in academia. I'm not pulling this out of my ass. This is what educated scientists understand about science. If you don't know this, you literally don't know what you're talking about. I am not arguing my opinion here, I am arguing the academic definition of science.
To prove it to you I'll literally quote Einstein:
"No amount of experimentation can ever prove me right; a single experiment can prove me wrong."
If you don't understand why he said the above quote, you don't understand science the way a physicist or a scientist understands science. In fact the above was said in reference to Einstein's and Newton's theories.
What Einstein is basically saying is this: science can never prove anything to be correct. It can ONLY falsify things.
>I'm always fascinated by people who claim that Einstein's theory of relativity somehow undermines the bedrock of scientific inquiry when it's 100% the thing that supports it. Newton's theories weren't wrong. They were 100% correct for the environments we were testing them in.
You're fascinated that the entire academic definition of science differs from your own personal definition? Your misunderstanding of science is the real enigma here.
Nothing is undermined. I'm not against science I am simply elucidating what science is to you in the sense that science can never prove anything to be true. Science can ONLY falsify things. It is a very limited tool, but it is also the only tool we have.
I'm an atheist like you; I get where you're coming from. But you have not explored science deeply enough. Look deeper into this, as you are not understanding what is going on here. I am not arguing for religion or creationism or any of that BS as "valid". I am simply stating a fundamental, well-known flaw of science that is known by all people who know the technical definition of science.
Additionally, Newton's theory is 100% wrong in every environment. It only appears to be correct given the limited accuracy of tooling. When you increase the accuracy of the observation, the environment is irrelevant: it is always wrong.
>If it helps you, the scientific method of inquiry is in some ways directly supported as a fundamental tenet of mathematics via calculus.
This is highly highly misguided. Logic and Science are completely separate. This is well known among people who understand the concept.
Logic is a game with rules, axioms, and a well-understood domain. We create the rules and the universe, and therefore we're able to prove things within that universe.
Science is not the same. It is not an axiomatic game created by us. Science is the consequence of applying certain assumptions to a universe we did not create but only participate in.
We make two assumptions in science. First, we assume logic holds: that rules like induction will always work, even though we have no means of verifying that they will. Second, we assume probability works: that rolling a six-sided die will produce a certain distribution of outcomes, even though we again have no way of verifying why or how this occurs. We just assume it.
Based on these two assumptions we can create the scientific method. But this method is limited, as it can only axiomatically falsify things. We can never prove anything to be true with science. This is, again, because the domain of the real world is not limited the way it is in our logical games of math. At any point in time the domain can change or shift, and we can encounter a new, unexpected observation that changes the entire arena.
Again, this isn't some BS I'm pulling out of my ass. This is science as Feynman and Einstein understood it. You lack understanding, and I suggest you read up on what "proof" is and what science is.
Proof is only relevant in math and logic; it is irrelevant in science and therefore in reality as we know it. Science is the best tool we have, but it is highly, highly limited in the sense that it can never actually prove anything.
What ends up happening is science at best produces conclusions in the form of "We think this is true because our repeated attempts to falsify this hypothesis have failed." It can never produce anything definitive.
I find this too simple, because it assumes the experiment is correct. Experiments can have errors, and being able to doubt experiments, especially when you have disagreements between experiments, is a very important step in science.
This isn't to say that one should hold to a belief without any evidence to back it, only that we consider the possibilities that experiments themselves are flawed and take that into account, such as by designing seemingly unrelated experiments to test a single guess.
To give another Feynman quote.
>We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.
>Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that ...
> In general we look for a new law by the following process: first we guess it, then we compute the consequences of the guess... and then we compare those computation results to experiment. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It doesn't make a difference how beautiful your guess is. It doesn't make a difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment it's wrong, that's all there is to it.
That elides a lot of complexity, mostly around the validity of the experiment design and execution. Confounding factors, confirmation bias, selection bias, p-hacking, etc. can all skew the results in ways both overt and subtle.
The following quote from the movie 'Thank you for smoking' may illustrate:
"This is where I work, the Academy of Tobacco Studies. It was established by seven gentlemen you may recognize from C-Span. These guys realized quick if they were gonna claim cigarettes were not addictive they better have proof. This is the man they rely on, Erhardt Von Grupten Mundt. They found him in Germany. I won't go into the details. He's been testing the link between nicotine and lung cancer for thirty years, and hasn't found any conclusive results. The man's a genius, he could disprove gravity."
So no, 'if it disagrees with experiment it is wrong' is a gross oversimplification.
Not to mention that there may be very good reasons that an experiment can't (or shouldn't) be done, or that various 'natural experiments' must be relied upon.
I like it, but in every theory there is always some edge case where the theory doesn't work. In a sense, every model in natural science is wrong, but some are more right than others.
I feel like authorities betraying public trust is a large part of what got us into this mess.
I trust the scientific process. I don't trust the people involved in the process and in charge of making and following policy based on scientific and technological process.
One thing that allows for this is the oft neglected "method of discovery".
The usual "scientific process" concerns itself with how to keep an experiment correct/truthful - but how do we decide what experiments to conduct in the first place? Scientific funding can be biased towards the successes of the past, and so future funding can be guided by political agendas.
Pop culture thinks science means space, chemicals, and electronics. We aren't even hearing the words of authority in this case. A scientist describing what science is doesn't sound like indoctrination or blind faith to me.
Science is a method for understanding the world, not an ideology.
I do give most non-scientific people the benefit of the doubt. They are at least trying. What is really disturbing is that this is a real problem within sections of the scientific community, and especially within the public relations of the scientific community. “Science Communicator” is almost synonymous with this trap. I think it stems from oversimplification and a need to generate funding for research.
It boggles the mind to see this quote by Feynman when the entire premise of the article is that this is already how we teach science and it leads to a whole host of problems.
I read the article and disagree with its conclusions about science education. I think science education isn’t actually taught the way the article says it is. At least that’s not how I was taught it in good public schools until college, at which point it was taught the way Feynman says. If it actually was taught that way at earlier points, I think we’d be in a better place.
We teach a bizarro-world version of Feynman's quote, where it's more like the way we teach formulas in math. It's the right formula that they're teaching, but the way it's taught encourages its use as a black box.
Asking if people read the post is against HN guidelines:
‘Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."’
I read the article. It seems to be an argument that the problems with trust in science come from teaching a kind of logical empiricism, itself a straw man, and not what you’d come away with from listening to Feynman.
There is nothing in the article that supports this claim about the cause of the ‘problem’ or even a really usable understanding of what the problem is, other than that cigarette companies have been able to make people doubt things they shouldn’t have doubted, and that climate deniers and other kinds of ‘deniers’ are doing the same building on the work of the PR firms used by the cigarette companies.
The best I can read as what the problem is, is that we think that teaching people differently could make them immune to propaganda, and so the problem is that we aren’t teaching them differently. I think this is highly questionable.
It argues that we should instead teach science based on a feminist critique.
As far as I’m concerned the article is complete bullshit. It sets up a straw man, and then argues to a completely unsupported conclusion. It’s terrible.
Having said that here’s what it touches on that is useful:
Teaching scientific method alone as ‘science’ is outdated. Science is part of public discourse and so it is important for students to understand how science works as social, sociological, and political processes, as much as it is an epistemological method.
From this point of view, I respect the various ideas that they are advancing as valid for study. However ‘valid to study’ is very different from attempting to claim that science education should be based on this ideology, and is an obvious political land-grab which must be rejected.
The article makes a bunch of naked assertions as if they are simply facts about reality, when in fact they are very much the subject of social science itself.
Consider these statements:
> Believing based on trust is a pervasive human practice, not confined to scientific inquiry.
Seems true enough, right?
> It starts in infancy, when children learn language, everyday facts, and even religious beliefs from their caregivers.
Does it? This is definitely not settled science. Obviously children learn from caregivers, but what they trust is a deeper question, and a subject of study. Also, the sources are far wider than 'caregivers'.
This seems like an attempt to anchor the conversation in a blank slate kind of conception of the emergence of belief. This is discredited in social science, but is popular in the humanities.
> Trust is only as reliable as the source of the knowledge; when that source is unreliable, we sometimes regard beliefs based on trust as the product of indoctrination.
‘Sometimes’ being the operative word.
> Trust can be eroded when there is evidence of the unreliability of the source.
Yes, but research has shown that it can be strengthened when there is evidence of the unreliability of the source, too. It depends on other factors, such as who is providing the evidence. Again this statement is simply not objective or reflective of current social science.
> Reflective knowledge should therefore include some account of the reliability of the source of knowledge.
If you delete all of the preceding elements, and just say:
Reflective knowledge should include some account of the reliability of the source of knowledge.
We are left with a reasonable proposal.
Weirdly, one which is already common amongst science students, who don’t ignore the reliability of the results on which they base their work.
I once thought that by leaving my old religion I was leaving dogma behind. But it appears that in the absence of one dogma/doctrine, people will create another.
1. The mind always tries to simplify. One way that manifests is in dogma. Many people are satisfied, or just don't have the time and energy, to continually update or critically assess their own beliefs (not that this is justified, just an explanation). This is the entire reason formal science education exists at the highest levels. We didn't evolve to be critical or rational. But at our best we can learn to approach it collectively.
2. Many people think that what they learn in school or from other authorities is immutable fact and that's the end of discussion. This is an unfortunate failing of the education system.
People are people. Take away any differences in appearances, beliefs, communities, etc. and people will still find reasons to divide up and seek to destroy the outsiders.
> elevate science as this god-like authority, and scientific consensus as the source of all truths.
Science is simply the process of using data to come to conclusions. Is there a better method for building a model of the world?
Therefore, it doesn’t make any sense to talk about “science” as an authority. There are people and organizations that claim they used the scientific process to come to this or that conclusion. Maybe they used proper data and the right math. Maybe they didn’t. But their errors in the scientific process doesn’t say anything about the validity of the scientific process, for which there does not seem to be an alternative.
It's not an indictment of science to accept that it is fallible. It's precisely the point of the article that "science as inquiry" as it is taught has led people to acquire a mystical, infallible view of the scientific method, which holds that people who come to different conclusions than the current consensus are biased. When in fact scientific consensus and scientific authority are social agreements based on trust.
As far as I can tell, it is not the point of this article that science teaching has led people to acquire a mystical infallible view of the scientific method which holds that people who come to different conclusions than the current consensus are biased; rather, it takes it as a premise.
In the final section, "Scientific Dissent versus Science Denialism", the article takes the position that this posited reverence for science makes it difficult to see the flaws in climate-change denial: "And without a social epistemological critique of the claims of climate change deniers, it is difficult to recognize that they are not doing credible science."
Ironically, the article itself does not present, or even mention, any empirical evidence for this being an important factor in people failing to recognize the above (maybe the evidence is in one of the references, but if the author can't be bothered to say so, I'm not going off on a wild-goose chase). The author here prefers to try deducing the alleged cause of this state of affairs from the position of another philosopher.
I'm all for people gaining a better understanding of how science works, but I am doubtful that this particular issue is central to denialism not being recognized for what it is.
>Science is simply the process of using data to come to conclusions. Is there a better method for building a model of the world?
If we go with such a definition, then yes, there is a better method: using data to come to a guess that has yet to be disproven.
The difference is that a conclusion is more final than a guess not yet proven wrong. A scientist steeped in research may read "conclusion" and automatically conceptualize a notion much closer to "guess not yet proven wrong", perhaps including the current winner among countless other guesses that have been proven wrong. They have enough experience to recognize that what we know may be wrong. It may be a little wrong due to some experimental error, it may be completely wrong (but relatively accurate as an approximation in the systems it has already been applied to), or some other option.
A layman wouldn't have the same takeaway. When they see "conclusion", they think it is a settled matter when it isn't. It's the difference between trusting science because it is the best we currently have and trusting science because it is right. One is a trust tempered with caution while the other risks becoming blind faith.
The system could also be further improved by better explaining how disagreements with existing guesses are not all equal. A disagreement backed by data is better than one that is not, and a disagreement that fits the current data, though with no unique data supporting it over the existing guess, is better than one that contradicts existing data. Add to that recognizing that data may be bad, so it is reasonable to refresh the data from time to time; a tolerance for letting people explore the wrong paths to see what data they may find; and, lastly, the fact that all data is collected by humans and thus subject to our biases (which is why double-blind data is better than single-blind data, which is better than data without any layers of blindness). But that ends up being hard to fit into a few-sentence explanation.
Reproducibility of observed reality: that's the most simplistic summary I can come up with. Everything from manufacturing to baking to software to economic models strives to be reproducible, but with so many variables there is no way to have a true constant.
People like constants: cookies always taste the same, the video game doesn't have glitches, the car manufacturer always produces a fully functional vehicle right from the plant. Yet some cars are lemons, cookies burn, the quality of ingredients changes with the seasons, software crashes.
Science is complex, while most people just want a reduced cognitive load. They spend no time actually thinking about the complexity, instead accepting a more simplistic story. Since science can in no way offer a true yes or no, people look elsewhere, even when the simplistic reasoning is wrong.
A chef can say baking is easy, but knowing which ingredients to use, when to mix, and how to use the tools is still too complex for a lot of people. Walk into a grocery store and see how many different types of microwavable meals exist, because it is easier to place something into a rectangular enclosure for 10 minutes and press 5 buttons.
The only thing I've come to is reminding others that our world is complex and requires complex understanding, aka specialists, but is often simplified where doing is easier. You might work on a manufacturing line, bake, or even play games, but that does not mean you know the science behind those processes. You may think there is no science behind baking until you ask questions, such as: how does one build an electric oven? That is all I've come to, reminding people that everything is more complex than it appears to be.
Everyone is limited by models, and scientists certainly are too. If you've ever taken a physics class and drawn a free body diagram, you've made a model. It's a good first order approximation, and an excellent pedagogical tool, but it's also a model that is limited in many ways. Is it too limited? Depends on what you want to do with it.
No. You misunderstand what science is. Science is the process of attempting to falsify a model that is already built.
Modeling in itself is not science. It is the verification of the accuracy of that model that is science, and even this has a huge flaw.
The keyword is falsify. Nothing in science and therefore in reality as we know it can be proven to be true.
When a scientist engages in the scientific method, no claim is ever proven true; no scientist has ever proved any claim made by humanity. They are simply attempting to falsify a claim. Truth remains forever dubious.
We can only say that a theory is probably true because all our attempts to falsify the theory have failed.
Everything is politicized by everyone to further their narratives. It’s not a meaningful statement.
Science is not just a euphemism. Science is the reason we have gotten as far as we have. Science is the reason why infant mortality is almost nonexistent in much of the world, and food is plentiful.
Some people may use it inappropriately, and try to corrupt it, but that doesn't change the fact that drawing conclusions from data, repeating experiments to verify them, and adjusting the model as the data indicates is the best way we have to model our world.
Which people are using it inappropriately and corrupting it?
Why are they doing it? And why are they using silly phrases like "trust the science"? Which is basically the same thing religious people do when they say "have faith".
Also, science, by the classical definition, has no meaning on its own. It's simply a way to describe reality. It's usually up to engineers and people to put it into practice.
The science may describe gravity, although scientists still can't answer why the gravitational constant is a constant.
But, engineers and people can test effects of gravity everyday, and act accordingly.
>Which people are using it inappropriately and corrupting it?
Journalists who misrepresent the results of a study to get clicks. Companies that purposely design erroneous experiments to get the results they want (a la the tobacco example elsewhere in this thread). Individual scientists who corrupt their own studies or falsify data to advance their own careers.
>Also, science, the classical definition, has no meaning on its own. It’s simply way to describe reality. It’s usually up to engineers and people to put into practice.
This makes no sense. Science is the process of refining the model of the world by testing hypotheses by performing experiments and then revising the model as new data is collected. This is in contrast to something like religion, where the model is not to be altered regardless of the data.
Science is about describing reality. You may be describing scientism, or some other pseudo-scientific charade, which conflates actual science with hand-waving mumbo jumbo, like people who try to conflate astrophysics with astrology.
Gold has always existed, as far as we know. People knew about gold and its properties long before classical science described gold. No model or hypothesis testing is needed for gold. Gold exists, and people use it regularly.
Science just described gold and other elements, because they exist in reality.
Engineers and people didn't need science to describe gold in order to use gold. Science is a convenient way to describe reality for what it is.
> Is there a better method for building a model of the world?
One of the questions should be: Why should we build a model for the world? Is that really possible/doable? And if the real answer is "no", why should we apply that failed model to the real world, affecting (potentially in a negative way) the lives of millions of people?
There is no alternative to building models of the world. There are only badly built models (based on individual experience, without opposition or falsification attempts) and better built models (like those from the scientific process).
You can avoid having models of some domain only if you can avoid making decisions in that domain.
> One of the questions should be: Why should we build a model for the world?
The decision not to build a model, is a model.
> Is that really possible/doable?
Models are just data-driven predictions based on understanding. The better the data, the better the understanding, the better the model.
If a model for the world is impossible, then either the data is impossible to gather or process, or the understanding is impossible to achieve.
Data is just facts based on observed events, so I refuse to believe that a model for the world is impossible based on ability to track data.
Understanding is the wildcard - humans are complex beings. But each tiny step forward increases global understanding. Years ago, autism was considered "cold mother syndrome". Now through decades of careful and detailed analysis it's understood as a neurological disorder. Depression and anxiety were thought to be issues of the soul, demonic possession, laziness, etc. Now there is an ever expanding array of diagnostic tools to help sufferers.
So I refuse to believe that understanding is impossible.
tl;dr: Yes, a model for the world is possible.
> And if the real answer is "no",
That's a big if.
> why should we apply that failed model to the real world
"failed model" what failed model?
Also recall doing nothing is also a model. Bring on the dark ages!
> affecting (potentially in a negative way) the lives of millions of people?
In my world view, doing nothing and praying to higher powers, or allowing chaotic despots to rampage the lands has already caused immeasurable premature death and suffering.
Why would anyone vouch for a return to the dark ages, just because progress feels like effort?
The decision not to do math is not math, unless we go down obscure neo-Platonic paths and equate negation with presence.
Recent history is full of failed models; eugenics itself came out of scientists' books that a certain German former corporal later acted on. Eugenics was based on a (failed) model of the world.
Chaotic despots have rampaged the lands with the help of technology and science just fine; people wouldn't have reached Magadan against their own will without railways. Give me the "dark ages" every day over millions of people in the Gulag. Also, the transatlantic slave trade wouldn't have been possible without navigation-related advances, many of them scientific in nature.
It's not even just the trust in scientific consensus, but also the trust in particular scientific papers and studies in e.g. pop science articles. Scientists themselves are usually cautious about what conclusions to draw from a single study that has not been replicated in a variety of experimental conditions.
Unfortunately, the popular idea of Kuhnian paradigm shifts is even more wrong than the popular idea that it's one big model of nature that keeps getting better. Paradigm shifts often don't invalidate earlier results; they refine them. Take the prototypical example: classical vs quantum mechanics. The statement "quantum mechanics is a more refined model of the world than classical mechanics" is far more accurate than "quantum mechanics proved that classical mechanics is wrong". Classical mechanics is still taught in universities (in the physics department, not in the history department), and those courses spend a lot of time showing how classical mechanics arises as a particular limit of quantum mechanics, how the two relate, and in which range of conditions the two theories give the same answers.
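To make "arises as a particular limit" concrete (standard textbook material, not spelled out in the comment above): Ehrenfest's theorem shows that quantum expectation values obey Newton-like equations.

```latex
% Ehrenfest's theorem: expectation values of position and momentum
% evolve according to
\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m},
\qquad
\frac{d\langle p\rangle}{dt} = -\Big\langle \frac{\partial V}{\partial x} \Big\rangle .
% For a wave packet that is narrow compared to the scale on which V
% varies, \langle \partial V/\partial x \rangle \approx V'(\langle x\rangle),
% which gives m \, d^2\langle x\rangle/dt^2 \approx -V'(\langle x\rangle):
% Newton's second law for the packet's center.
```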
> Paradigm shifts often don't invalidate earlier results; they refine them
While I agree with your quantum/relativistic vs classical, I think they're the exception rather than the rule.
Like, phlogiston was remarkably wrong. Aether as well. Alchemical vs chemical. And the all-time classic of geocentric vs heliocentric: how can that be reconciled as a "refinement?"
This is one of the problems science has in pedagogy: there's a need to teach the material in a reasonable period of time. Teachers don't want to spend a lot of time talking about things not true in the current paradigm, so there's a huge selection bias toward presenting a linear history. This has the effect of whitewashing science to show definite "progress", as opposed to the more realistic twists and turns of wrongness.
Heliocentric is a refinement of geocentric. Geocentric isn’t “wrong.” It’s not even inaccurate. You can make it as arbitrarily accurate as you like. It’s just more complex so choosing the simpler model makes more sense. But they’re both just models.
Heliocentric isn’t “the way it actually is” either. Both objects actually orbit the barycenter (i.e., center of mass). (In our model, not necessarily in reality!) That idea itself is a simplification of our model of how mass, forces, and space time work. And we usually ignore many factors when making these calculations.
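To make the barycenter point concrete (a rough sketch using approximate textbook masses):

```python
# Two-body barycenter: each body orbits the common center of mass,
# which sits at distance d * m2 / (m1 + m2) from body 1's center.

M_SUN   = 1.989e30  # kg, approximate
M_EARTH = 5.972e24  # kg
M_JUP   = 1.898e27  # kg
AU      = 1.496e11  # m
R_SUN   = 6.957e8   # m, solar radius

def barycenter_offset(m1: float, m2: float, d: float) -> float:
    """Distance of the center of mass from body 1's center."""
    return d * m2 / (m1 + m2)

earth = barycenter_offset(M_SUN, M_EARTH, AU)
jup   = barycenter_offset(M_SUN, M_JUP, 5.20 * AU)

print(f"Sun-Earth barycenter:   {earth / R_SUN:.4f} solar radii from the"
      " Sun's center -- deep inside the Sun")
print(f"Sun-Jupiter barycenter: {jup / R_SUN:.3f} solar radii -- just"
      " outside the Sun's surface")
```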
This is very much stretching it in my opinion. The cycles and epicycles didn't need to be adjusted--the epicycles were the adjustment--they needed to be completely thrown out.
There aren't any teachers who are like: "Well, we used to think that the Sun was this smallish thing orbiting the Earth, but actually it's only slightly different. The Sun is 3*10^5 times the mass of the Earth, and actually the Earth revolves around it. No big deal! Please turn to Chapter 2 of your astro book, where we will be exploring slight deviations from how the cosmos orbits the Earth."
If it wasn't a big deal it wouldn't have been a paradigm shift. However, it was still just a refinement. You're adding things (relative masses) that weren't part of this shift; they were later refinements.
The reasons this was a big deal were entirely human. It wasn’t religion: If you can accept the universe as a miracle it doesn’t much matter how God did it; God is just as glorious. It was power: the Church claimed the simpler model was The Truth and saw the new model as revealing a mistake and threatening their power; rather than seeing it as just a model.
And I maintain this was not a refinement of a model.
As a challenge: name one piece of geocentric math that can still be modified by today's practitioners to provide a sensible astrophysical answer.
Rest mass is still a useful fact in a world of relativistic physics. But we have not tabulated any facts from geocentric concepts in the last few centuries. No one would even know where to begin or what words to use to describe them. The material produced from Kuhnian "normal science" under geocentrism had to be thrown out, as there was no way of fitting it in with the new paradigm.
You're shifting the goalposts something crazy. I'd like to point out the original poster talked about refinement in the output of the models, not the models themselves. Having said that, the heliocentric model is still a refinement (with further refinements coming later) of the geocentric model. We didn't jump straight from geocentrism to Kepler's three laws of planetary motion. The Copernican model still had heavenly spheres as a concept: a concept we don't use today. What you're thinking of as heliocentrism is a refinement of an earlier heliocentrism, which was itself a refinement of geocentrism.
In particular, I suspect science only "works" to the extent that a certain amount of heterodoxy is tolerated. If "science" as an institution becomes very conformist, then you'll only get science that reinforces the biases of the prevailing ideology. The quality of the output that the institution produces depends on the health of the institution, where "health" means something like "ideological diversity" as it does with natural ecosystems.
A lot of pro-conformists make the argument that tolerating ideological diversity means regarding creationism and evolution as equally valid; of course this is untrue--science can (and indeed did) find that evolution is the more valid idea without explicitly persecuting creationists. In other words, normal heterodox science would already prefer evolution to creationism--not only is there no need to (for example) write open letters demanding various creationists' resignation/termination/excommunication/etc, but that sort of persecution actually harms the health (heterodoxy) of science (not because science needs creationists, but because science needs heterodoxy to be able to decide that creationism is the invalid idea in the first place).
Creationism has nothing to do with science, neither has ideology any place in science, and you've really got to draw the limit somewhere more reasonable. I don't want my taxes to pay "research" in creationism, morphogenetic fields, astrology, or divining rods.
There is plenty of heterodoxy in science as is - or do you really believe everyone agrees about almost everything in almost every discipline? Pretty much the opposite is the case. Pick any discipline you like and there will be plenty of disagreement about the respective issues in the forefront of scientific research.
- Traditionalism: driving society according to the word of the elders.
- Modernity: driving society using science.
Leaders understood that. So there are extremely high stakes in fabricating or twisting science, because once you have science on your side, all modern societies will help you.
Example with feminism: it is literally illegal in some countries (e.g. France's 2014 law for true equality, followed by multiple decrees since Macron) to communicate information that is unflattering to women (it's considered demeaning). So even though some figures are extremely false (girls aren't better at math; it's only when we put a female name on an exam that the sheet gets a 6% higher mark, and this science is considered demeaning towards women and therefore illegal to publish; the same goes for the wage gap: women work fewer hours and less intensively per hour worked, but it is illegal to point that out), everyone still believes wrong things.
And just like this, laws pass. Same goes for immigration or all other science-backed laws where one side is forbidden to speak.
Science works... assuming equal analysis of both sides of each hypothesis. That’s why it only works in democratic cultures, but that assumes the media is balanced.
>>> I was having this exact discussion with a philosophy of science major a few days ago. We were both bothered by the current fetishization of science. It's troubling that otherwise well-educated, logical people have come to elevate science as this god-like authority, and scientific consensus as the source of all truths.
I live in a neighborhood in the shadow of a big university, where you can't spit without hitting a scientist. I work for a science oriented business. My parents, my siblings, and our spouses, are all scientists.
I don't know a single scientist who treats science as a god-like authority. In fact I believe this is a popular straw man, or a tu quoque fallacy trying to pin the characteristics of religion on science.
I think you're saying the same thing (and I agree with it).
People who actually do science rarely (not never, but rarely) fall into this fallacy. The fallacy takes hold of generally-educated people who lack specific scientific training.
It's not fetishization, it's religion. Everyone has a need for something supernatural and supreme to believe in. For some it's functional programming, for others it's science, for others, it's conspiracy, and for an increasing minority, it's actual religion. We need to figure out what to offer (read: better things to believe in) to fill the gap, because people have lost actual religion but still have this deep-seated desire for something ultimate to believe in.
> Coupled with the fact that dissent gets blurred with denialism
This is what gives me the biggest headache. When discussing any politically motivated topic you can't ask questions that even remotely disagree with the popular Science opinion of the day. I had a professor scold me for asking if there were any natural phenomenon that may also be contributing to rising CO2 levels.
The professor has no doubt heard the question thousands of times, asked by those with dishonest intent.
Because the answer is always no. There are no natural phenomena that coincide precisely with the industrial revolution. The amount of CO2 in the atmosphere tracks exactly the amount of fossil fuels being dug out of the ground. CO2 levels had been stable for millennia before that, and then they changed sharply. Nothing natural has gone on to change that.
If the question were intended honestly, you wouldn't call it "disagreement". You'd be asking for information and would make your decision after you had it.
If your description is correct, that's why your professor scolded you. The name for disagreeing without knowledge is "denial". The name for asking questions whose answers you don't really intend to listen to is "sea lioning", which is a tactic for wasting time and frustrating people. It's dishonest and rude.
If the situation was as I understood it from your description, I'm not surprised that they spoke sharply to you.
Nothing about the GP’s description sounds like ‘Sea-Lioning’ to me.
‘Disagreement’ was the word the GP used to describe how it seemed the professor was interpreting the question.
Your explanation that the professor had experienced a bunch of sea-lioning before is likely correct.
But it seems likely that it was the professor who was wrong and behaved inappropriately, because their past bad experiences have led them to be distrustful and categorize some honest people as dishonest.
All the professor had to say was this:
> There are no natural phenomena that coincide precisely with the industrial revolution.
Which is a concise encapsulation of what an honest interlocutor would need to hear to answer the question.
Sea-lioning is a real phenomenon, and so is misuse of authority.
The GP said "questions that even remotely disagree with the popular Science opinion". Sincere questions don't agree or disagree; they request information. Phrased that way it sounds very much like they were pre-supposing an answer.
So does their use of "popular" (as if this is merely people agreeing with each other for fun, rather than their best judgment based on expertise) and "opinion" (as if this were merely a guess or an aesthetic preference). Even the capitalization of "Science" seems strange in this context.
Plus, there's the way they went from "this professor acted inappropriately" to "*any* politically motivated topic you can't ask questions that even remotely disagree" [emphasis mine]. It was intended as an example, but it's a big jump to make, and it just happened to be on a topic where people consistently engage in sea-lioning behavior.
That's why I suggested that it sounded like sea-lioning to me. I may be wrong: as I said, I'm basing that judgment solely on what I read in that very short description. But there were a lot of red flags in that short description.
> "questions that even remotely disagree with the popular Science opinion"
Yes, but we don’t know when they formed that view. Did they hold it before or after the professor scolded them?
We accept that the professor scolded them, and that now they hold the view they are asserting.
I think it’s just as likely that the professor caused their current view as the other way around.
It’s very easy to go from being unfairly scolded by a professor who wrongly views you as dishonest, to then being influenced by the popular idea that "any politically motivated topic you can't ask questions that even remotely disagree".
If they didn’t think that before, the professor’s behavior would be a great way to make them think it now, and we simply don’t know how they thought about it before the professor’s action.
Even if the poster was sea-lioning, which frankly you cannot know, the professor definitely acted inappropriately in such a way as to engender further distrust, thus failing in their role as a science educator and betraying the authority of their position.
Even if the poster worded the question the way it seems - with "red flags" - it is a bad-faith act to assume they have ill intent. It's perfectly possible for someone to be using language picked up from popular culture that they haven't refined, without themselves acting in bad faith.
Plus, I don't accept your position that "questions that disagree" are always bad faith. It's possible to ask a question based on a premise that differs from the assumptions of the person you are questioning. This implies disagreement in view, but it doesn't imply bad faith.
E.g. “Are there any other sources of carbon emissions other than human industry?”
Is there disagreement here? From your view, the answer would be ‘yes’, because the implication is that this is relevant to climate change, which you think it is not.
The professor could have simply used the question as an opportunity to teach about scientific thinking.
E.g. “It’s a valid question from a geological point of view, but it isn’t relevant to the issue of climate change because there are no natural phenomena that coincide precisely with the industrial revolution.”
I disagree so strongly with you that it upsets me to see this. You’re assuming such bad faith here. It’s especially bad when the issue being complained about amounts to “people are so quick to assume bad faith from the most minor things.”
I am not assuming bad faith. I am explaining why it was perceived as bad faith, based on its similarities to other people who have also acted in bad faith. (I go into more detail at https://news.ycombinator.com/item?id=26317788.)
The fact that there are so many people who act in bad faith means that you will need to be aware of that and take it into account. It has been used as a time-wasting tactic, and people are going to take shortcuts. That is a reality, and the OP must live with it.
While you may have a valid point, when you reinterpret the GP comment from:
> if there were any natural phenomenon that may also be contributing to rising CO2 levels
to answering a very different question with:
> There are no natural phenomena that coincide precisely with the industrial revolution.
You're rephrasing the question in your head, assuming intent, moving directly into ad hominem attacks and scolding, and then excusing the same bad behavior from the professor.
The answer actually isn't "no" in this case, because the question was whether any natural phenomenon may also be CONTRIBUTING to CO2 levels, and of course there are countless natural contributors to CO2 levels. So just be careful, because you fell into the same trap the professor did.
They asked if it was contributing to RISING carbon dioxide levels. To that, the answer is "no". The natural carbon sources and natural carbon sinks have not changed.
The trick is distinguishing naïve open-minded curiosity from the trolling. It doesn't hurt "science" in the abstract to have a bunch of people asking bad faith questions, but it must be pretty annoying/distracting to scientists.
Yes. I think this is especially problematic when a legitimate policy idea is defended _only_ with "It's science based". That's not enough. Good policy is agreeable to a majority of people (in both benefit and cost, and is therefore usually a tradeoff for all parties). Being grounded in truth, as much as science can provide it, is not enough.
You can use truth and facts to form an opinion and propose a policy, but it cannot be made to sound like the only option, or else we end up embittering those who would be put out by the policy, to the point that they have no recourse _but_ to attack the science and facts. You _have_ to be willing to make tradeoffs and find an imperfect compromise.
Add to that the fact that scientific discoveries and understandings are especially easy to attack _because_ they are evolving.
> we should trust it implicitly, while ignoring the concept of paradigm shift that arises specifically from people outside the consensus
There's a difference in trusting "the scientific process" versus trusting scientists; we can trust scientists so far as they are following that process, which isn't guaranteed.
Also, the two things are not mutually exclusive. While peer review is important in science, it is not strictly the same as consensus; the paradigm shift of the outsider is enabled by this fact. What we really have to fear is peer review degrading into the consensus-base of typical power structures.
> We were both bothered by the current fetishization of science. It's troubling that otherwise well-educated, logical people have come to elevate science as this god-like authority...
I agree. But as you point out, this is basically a reaction to the inverse: some loud non-scientists who are overwhelmingly skeptical of science/medicine for the sake of sowing distrust rather than to open critical discussion.
I'm not in favor of blind trust in anything. But "scientism" is an unfortunate reactionary message against the blind, blanket mistrust of science/medicine that has become popular in some circles of the internet.
The problem is that many of the "skeptics" in media are more concerned with contrarianism and sowing doubt than healthy debate.
How do you have a rational discussion with someone who comes to the table set on not trusting the scientific process?
>It's troubling that otherwise well-educated, logical people have come to elevate science as this god-like authority, and scientific consensus as the source of all truths.
Who or where precisely have you ever seen that actually done? Are you perhaps confusing people who are asserting/suggesting that science is our best tool to understand truth, and assuming they believe it is always true, or that it can/has uncovered all truths?
From my point of view science isn't fetishized nearly enough. People reject vaccines for crazy reasons, think the Earth is flat, think the moon landing never happened, basically have no idea how anything works at all.
Anywhere people say, "trust the science." Science isn't one thing, so you can't trust it per se. For a simple discombobulating example, we have the question "which science?" There is economics and there is epidemiology, and they often conflict. Who do we listen to?
The only answer is that you have to form an understanding of both sciences in order to synthesize a way that they form a single coherent picture.
It can be pared down to one thing: the scientific method.
When you exclude all human variables, including politics, errors, and mistakes, the question is: do you trust the scientific method?
Most people don’t understand that even with all these variables removed there is a flaw in the scientific method that is the origin of most of these misunderstandings in science.
In science and therefore in reality as we know it, we cannot prove anything to be true. No claim made by any scientist has actually been proven.
Scientists can only attempt to falsify things. At best, after many attempts at falsifying something we can say that a theory might be true, we can never prove it to be true.
Yes, and add on to that the complication that not all science derives from the scientific method. You have evolutionary biology and theoretical physics, which are essentially retrofitting theories to existing data.
The sea of sciences is vast, and the number vying for our attention on a daily basis is overwhelming. What you're saying sounds easy for the well-read individuals on HN, but the masses need a trustworthy authority to do the work for them, and knowing what we do of governments, I wouldn't trust them either.
An authority telling people what to think? That sounds an awful lot like religion. I think we need to make up our minds which we want: education that actually makes sure people understand science, or a return to the ways of religion. It's clear that the middle ground is not working.
The issue is that the language of science is mathematics, and mathematics is a cruel mistress. Conventional ethics boards also make exploration of these decisions impossible. If a government official makes a decision about an event that lies in the tails of two Gaussians, and the black swan happens and a bunch of people die, who is to blame? The people, for not grasping the law of large numbers? The statistical software and Monte Carlo modeling, for claiming things have low probabilities? The government official, for deferring to scientific reasoning?
You see this every day during the pandemic. Policy making is very different from, say, civil engineering, where failure scenarios can be quantified within tightly defined bounds. At the nation-state level every decision may affect everybody, and nobody wants to be told their specific circumstances are the "statistical outliers". But science rarely gives good answers here. The only time science has been reliable is when a large amount of resources is dedicated to figuring out a single, specific problem. Unfortunately the life sciences don't have the luxury of physics and the formal sciences: there is no concerted effort to cure cancer comparable to the Apollo program. The military prefers to buy ten more Hellfires than to redo life-science ethics. If you quantify the ultimate utilitarian value, the damage ten missiles can do to humanity may be the same as that of an FDA that focuses more on progress than on covering its own ass, yet the former is more acceptable than the latter.
Science has failed in the sense that it no longer gives useful answers. When somebody's grandmother is lying sick in the hospital, they want a cure that is immediately accessible, not yet another feel-good post on Reddit or Facebook about "Scientists claim experimental cure for cancer found! Appearing in your friendly neighborhood pharmacy in ten years".
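To make the tail-risk point above concrete, here is a minimal Monte Carlo sketch (Python; the loss model and threshold are invented for illustration) of how little information a simulated "low probability" carries for rare events:
```python
# Toy tail-risk model: a "loss" that is the sum of two independent Gaussian
# factors, and the estimated probability of a rare extreme outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
loss = rng.normal(0, 1, n) + rng.normal(0, 1, n)   # summed loss has std dev sqrt(2)

threshold = 6.0                       # an extreme ~4.2 sigma out in the summed loss
p_tail = np.mean(loss > threshold)    # true value is ~1.1e-5
print(f"estimated tail probability: {p_tail:.2e}")

# Even with a million draws, only ~10 scenarios land past the threshold, so the
# estimate is very noisy -- and it says nothing about what happens when it hits.
```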
As a working physicist, I think the author's suggestions are not likely to improve education.
Discussions of Kuhn are just not appropriate for secondary school science education. These classes are for teaching the bare basics of bio, chem, and physics to see if the kids enjoy them. As it stands they can barely do this. They aren't there to teach 'how science works' except in the most rudimentary sense, and it's hard for me to see how science education would be improved by removing core material in favor of a discussion of the peer review process!
The author's suggestion might be appropriate for social studies, or for a special, AP-level class where logical positivism, Karl Popper, and Kuhn might be discussed.
Hard disagree. The nature of "science" should be woven into all of these courses. That will be what all students can take away from the science classes, not just the students who decide to stick with them after seeing if they "enjoy" them. (And do we teach English or history simply to see if kids will "enjoy" them later?)
Most good schools actually already do this as part of their standards-based curriculum. The Next Generation Science Standards [1] has "Practices" as one of its three key strands, which focuses on how to ask questions like a scientist, what evidence is, how you design an experiment, etc. The nature of "inquiry" is applicable to all future understanding of what scientists do, more so than memorizing whether mitochondria are the powerhouses of the cell.
We need more science literacy, not humanities trying to stay relevant by dragging science down.
Teaching students sentential and first-order logic would be a much better start, since they would at least know what they are trying to prove; then some probability, so they know what a 5% chance actually means; and causal inference, if you're feeling overly ambitious, so they can see why correlation might (or might not) be causation.
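As a minimal illustration of the probability point (a sketch, with made-up group sizes): run many experiments where the null hypothesis is true and see how often p < 0.05 comes up anyway.
```python
# What "p < 0.05" means in practice: if there is NO real effect, roughly one
# in twenty experiments will still cross the significance threshold by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials, false_positives = 10_000, 0
for _ in range(trials):
    a = rng.normal(0, 1, 30)          # control group, no effect
    b = rng.normal(0, 1, 30)          # "treatment" group, same distribution
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(false_positives / trials)       # ~0.05
```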
What a very closed-minded comment. Humanities and science are deeply intertwined, especially philosophy. Not to mention you're disregarding an entire category of human expression and experience as "dragging science down". The humanities are just as relevant as science, and the two influence each other so often that it almost makes no sense to distinguish them.
I highly doubt that by teaching children logic and probability we will solve science denialism. Cramming more science facts or abstract math equations into student's brains doesn't do much for their critical thinking and reasoning skills. Encouraging children to ask questions, reason based off of evidence, and inspiring a love for figuring out how our world works should be the main goal of any science class.
Kuhn sounds reasonable to people who don't know mathematics, which is just about everyone in the humanities. The few who do know mathematics have very interesting things to say, but again, they are very few and they are very far from being popular in their own fields.
If you can't do something as simple as expand or factor a polynomial then you can't understand science, in the same way that you can't understand Latin if you only speak English.
The idea that we throw out old theories and start over looks like that only to people who don't understand the language those theories are written in; to people who do, the difference between doing quantum mechanics and classical physics is the difference between saying po-tah-to and po-tay-to. A bit more linear algebra, a bit less differential equations, and a few more dimensions than you expect. There is a reason why you study the same maths classes for classical, quantum, and relativistic mechanics alike.
The stories we tell are just that, stories that help build intuition, but are completely optional, and they even change in the same subject depending on what you're trying to calculate.
>Cramming more science facts or abstract math equations into student's brains doesn't do much for their critical thinking and reasoning skills. Encouraging children to ask questions, reason based off of evidence, and inspiring a love for figuring out how our world works should be the main goal of any science class.
Yes, which is why we should teach them how to actually think in a structured way, something that saying 'science is sexist and colonialist' does not do. But saying that does increase the power and prestige of the humanities, because they have 70 years of pointless theory about every *-ism to draw on.
The humanities don’t say that science is sexist and colonialist. That’s a strawman.
I’ve never understood why STEM folks hate the humanities so much. I for one like both mathematics and history. I like coding and I like music. I find value in all of them.
I don’t think I’m a rare case. Actually, I think most people are like that. It’s only on HN that I see so many people look down on the humanities like they were a scam or something.
At my high school we had a small "science research" program, where students would read lots of papers and write a review, and also try to get a part time position at a local lab.
I'm thankful for that program even though I don't work in Science currently. Many of my peers are terrified of reading journal articles, but I'll dive right in.
In terms of philosophy, a big portion of the Kantian/Enlightenment project was about grounding or justifying Science as epistemological practice. But that's not really what the Public needs, because "Science" now is not "Natural Philosophy" reaching for the things-in-themselves, but an institution with norms, financial and political incentives, etc. Better to read Lyotard ("Postmodern Condition") than early Wittgenstein for this kind of thing.
> These classes are for teaching the bare basics of bio, chem, and physics to see if the kids enjoy them. As it stands they can barely do this.
My SO was a HS science teacher, so I have some second-hand experience, whatever that is worth.
This very much tracks with my SO's experience in both very poor schools and in 'good' schools. It would be amazing to have the time, money, and resources to teach kiddos about the philosophy of science. But just getting through the facts is hard enough. I'll save you the belly-aching about home environments, intra-classroom management, funding, pay, snow-days, etc. Even if/when we get back to all in-person teaching and then get the kids back up to grade level, adding in the philosophy of science to uninterested teenagers isn't likely to produce any measurable results.
My iota of advice: just vote to increase any school funding you possibly can (bonds, liens, mill levies, etc.). As many parents found out this last year, teachers don't get paid nearly enough for this. The rest will get sorted on its own.
I know half a dozen or so PhDs in microbiology at one of the leading research institutions. They don't understand the philosophy of science beyond what I was taught aged 10-11.
I don't think the basics of Karl Popper and Kuhn are beyond the grasp of a secondary school student, considering their importance not just in science but in the world we now live in.
That's because many PhDs are just glorified technicians (the same technicians we purged from our labs, which I believe was one of the worst mistakes of the last few decades). So we get all those PhDs learning 3D printing, microcontrollers, and programming to create new devices, tools, and whatever else, where for technicians it would have been a much shorter turnaround. Of course the individuals learn new (basic-level) skills, but it also impedes discovery and leaves a whole category of people out of the door. During my PhD I was once told I couldn't put the technician on the author list because he was paid to do his job...
The line between scientific and non-scientific contributions can be fuzzy. One would not put the supplier of a chemical on the author list ... unless the synthesis and distribution were done as part of a collaboration and it is the compound that makes the experiment possible. But then, there is a lot of leeway in "collaboration".
Authorship is a constant battle. You don't put the supplier on, no, but the technician who ran half of the experiments and also improved them? Of course they should be.
Years back, I went to a couple-of-days conference/seminar that was microbiology outreach to the humanities and the public. There were some in the audience who really wanted to find a Kuhnian narrative for the transformational progress in microbiology over the last few decades, and they were very unhappy with an answer of: no, sorry, the driver was technology dropping costs by many orders of magnitude, transformationally changing which experiments and questions were affordable to pursue. So perhaps microbiology is not the best context for suggesting Kuhn as useful?
I got halfway through a physics PhD, and the only time anyone quoted Kuhn was as an example of what happens when you read science books without understanding the mathematics.
On the subject of enjoying science and mathematics, I know that if I had gone the usual route through school there is absolutely no way I would be studying anything remotely scientific a few years down the line. I took a liking to physics over a summer, almost purely by chance. That's it. The way (this is UK specific) we teach science and mathematics is so detached from anything you'd see at university I'm genuinely amazed the Universities haven't started trying to strangle the exam boards.
Brace yourself: the UK Physics A-level does not require any calculus at all. None. No scientific method, no discussion of any quantum mechanics, etc.
As a working philosopher, I agree. More specifically, the following passage in the article seems inconclusive to me and based on a common misrepresentation of Kuhn's work (I've heard things like that many times):
> At least since Kuhn's "Structure of Scientific Revolutions", historians and philosophers of science have been aware that the traditional methods of science—as described in “science as inquiry”—do not give a complete picture of how scientists develop, test, refine, and replace scientific theories. That ideologies, peer pressure, confirmation bias, and a host of other "biasing factors" play an ineliminable role in the process of doing science is now widely acknowledged. Yet "science as inquiry" has not adjusted to these findings. It continues to regard any bias as bad for science and to insist on the overall neutrality, “logicality,” and objectivity of science. As a result, there is much about the history of science, as well as about contemporary science, that “science as inquiry” misrepresents.
The problem is that from the fact that various biases cannot be avoided in science it is concluded that it is somehow desirable to "adjust to these findings" and stop regarding bias as bad. Instead of continuing to strive for elimination of bias and lessen the impact of biasing factors, the author instead insinuates at various places that biases should somehow be embraced, even emphasized in historical examples. But it doesn't follow from the fact that there will always be bias in one form or another that it's not methodologically advisable to attempt to eliminate it as much as possible.
"When scientific disagreement arises, it is interpreted as likely showing that one or more scientists have bias."
That is rarely claimed. Isn't it far more likely that the experiments are not decisive enough or the theories not yet developed well enough, and that more data has to be collected or more theoretical advances have to be made?
Other points in the article are fine. Of course, consensus is not the point of science. But I have to say I've become rather dismissive of articles that build up "logical empiricism" as a straw man and then counter it with Kuhn. There have never been such trivial logical positivists. The Vienna Circle had quite elaborate and diverse views, and, one may add, logical positivism was a large reductionist philosophical programme that failed for a vast variety of reasons that have nothing to do with Kuhn's theses or with how work in science is actually conducted. Those reasons were, for example, that there was no logical way to generalize from 1st-person to 3rd-person statements, the inadequacy of crude verificationism, the lack of a logic of theory discovery, Quine's work on analyticity and ontological relativity, problems with counterfactual reasoning and theoretical notions, problems with making reductionism credible in light of apparently emergent properties, and the failure of operationalism.
I think there's an issue that sometimes the dynamics of scientific progress that Kuhn described are misinterpreted as basically meaning that, oh look, scientists are just human like the rest of us, so scientific thought isn't somehow privileged over any other point of view.
However, Kuhn's dynamics are ultimately a consequence of reasoning in the face of uncertainty. Simply applying logic to experimental results is not what scientists actually do, instead, they have to develop a sort of taste in reasoning, in order to apply appropriate prior beliefs in the Bayesian sense and in order to apply appropriate weights when new evidence becomes available.
This can lead to scientific thinking getting stuck in local optima of sorts, until evidence comes along (at the right time, potentially) that shakes people up sufficiently to force a new approach.
Still, that isn't equally true for every subfield; some subfields will have proven their worth to greater degrees than others by consistently making good predictions or consistently offering elegant explanations for phenomena.
I guess the difficulty in science communication comes to a large degree from the difficulty in communicating that last point, particularly since scientists themselves may disagree about how mature a given subfield really is.
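A toy Bayes-rule sketch (invented numbers) of the "weights on new evidence" point above: the same evidence moves a skeptical prior far less than a neutral one.
```python
# Bayesian updating on the odds scale: posterior odds = prior odds x likelihood ratio.
def update(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 4.0                        # evidence 4x more likely if the hypothesis is true
print(update(0.50, lr))         # neutral prior:   0.50 -> 0.80
print(update(0.01, lr))         # skeptical prior: 0.01 -> ~0.04
```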
Is there any book you’d recommend that covers the general topics you just touched on (I mean dismantling the straw-man of logical empiricism), or is this only visible after reading widely on philosophy?
I think society could use an extra hour of school, at least twice a week. This sort of thing would be a great class which has no homework or exam pressure. You get credits for being there and participating in discussions.
> These classes are for teaching the bare basics of bio, chem, and physics to see if the kids enjoy them. As it stands they can barely do this.
Has anyone heard of a setting for discussing doing transformatively better than this?
Not in the PER sense of "if only we could teach intro physics classes successfully", or bio's "if only undergraduates had a firmer grasp of the central dogma", but more like Larimore's[1] suggestion, in the context of pre-K science education, that students have a human right to make sense of their world, now. A big hairy audacious goal. And the heavy lifting for it would have to come from the science community.
As in, kindergarten students have a right to be told the Sun isn't yellow, to not be given popular intro astronomy college textbooks which have that wrong, and to not grow up to be first-tier astronomy and physical sciences graduate students who are still confused about that, and about color in general. Students have a right to chemistry education content which chemistry education research doesn't describe using adjectives like incoherent. A right to content which reflects current insight and understanding of the physical world, not "it's traditional to teach it this way" wretchedness.
Consider a dedicated professional teacher, in a hypothetical country whose education system is focused on call-and-response rote memorize-and-regurgitate, who argues the implausibility of teaching US-style intro-physics problem solving. That can be both individually correct and still reflect a profound collective failure of their community and society. Now consider a system focused on plug-and-chug toy problem solving, with no feel for reasonable values, narrow scopes, and topical descriptions divorced from current understanding. That can be both a professionally pursued local maximum and still reflect a profound collective failure, including a failure to explore how to do transformatively better.
Imagine a speculative early-primary atoms-up progression, one emphasizing atoms as nucleons, and building from nucleosynthesis, through a sense of physical size/scale, to a broad grasp of modern material science. Insane amount of interdisciplinary work. But I even found ways to explore it, sort of kind of, and explorations like it - mainly conversations with "the usual suspects" before/after research talks around MIT and Harvard. Then covid. Now... has anyone heard of an online setting for discussing transformative improvement of science and engineering content? Even if we're back to normal next year, I'd appreciate finding a less ad hoc and restricted setting. Thanks.
This was not what I expected. I expected the standard line about how the general public should trust scientists, and a fight for a greater role of scientists in decision-making.
It was a good read.
I think the pieces I would add are a clear understanding of the flaws of scientific processes:
* Circle of mutual adoration hiring processes
* Cursory-at-best peer review in many disciplines
* Impact over integrity in hiring
* Selection biases in science reporting (and more generally, the incentive structures pay-per-click places on games of telephone)
... and so on.
As well as a clearer understanding of how we go from hypothesis to fact. In many policy discussions, I see individual scientific papers cited, which is a nonsense way to use science. One can find a paper which says anything, and most results are false. That's still part of the process. Once a result has been poked at from enough directions, it becomes a theory and then a fact. Connecting the hypothetical process to the empirical process would be awesome to see in schools.
Science is seeing full-well the problem that many religions (at least, those with Scriptures) have been having for centuries. (s/hypothesis/belief/ and s/fact/creed/).
* I see individual scripture references cited (out of context). (Called "proof texting"... "See, here's the proof!")
* One can find a verse which says anything (if you ignore the context).
* Once an idea has been poked at from enough directions, it becomes a belief and then a creed (or a part of one).
Science, like religion, also has a method: exegesis vs. the scientific method. A source: scripture vs. creation. Etc., etc.
Sadly, this is not a problem which can be "fixed". You can have your die-hard zealots on either side. The evidence can be placed before them. You can show them that their error (heresy) has been denounced for a thousand years. And yet erroneous beliefs are still rampant.
I have a somewhat less charitable take on the whole "context incorporation"/exegesis process. It's simply a process of reconciling an ancient, mostly irrelevant religious text to the modern-day economic, social and political realities. What do you need the Bible to do for you, dear leader/dear society? A la carte options available with a little exegesis.
- Need the Bible to justify slavery? With a little judicious exegesis, here you go! Oh, society realized slavery is an abhorrent crime? Time to dust off your trusty exegesis experts.
- Is homosexuality a horrible sin punishable by death, or is that now an anachronistic view which is making us bleed subscribers... scratch that, the faithful? Religion Has A Method to correct this!
Science has also been used to justify some pretty terrible things.
And that is their point, humans are the corrupting force here. Taking a framework and then using it to justify whatever they want. The scientific method is not uniquely immune to people cherry picking quotes from studies to justify their agendas. And in many ways it is very much the same thing as someone cherry picking scripture to justify the same.
Sure, humanity is flawed and all that good stuff. But the central point is that there is of course a huge difference between religion (corruptible by humans) and science (corruptible by humans): the scientific method is not in itself based on evidence-free magical thinking, and it is rooted in verifiable/falsifiable reality. So while the two can be compared at this superficial oh-but-what-about-flawed-humanity level, at their root/essence they are fundamentally, categorically different.
> the scientific method is not in itself based on evidence-free magical thinking
However, the scientific method is, in many ways, based on faith. In science, you can only measure what you can observe. We cannot "observe" what occurred in the past. We can guess, but it is a lie to say we know without a doubt what happened before recorded history began: nobody actually observed it.
Each religion makes its own claims about its epistemological system. It's different for each, but each system ought to be internally consistent for it to make sense. Some systems are far more consistent than others. Based on that internal consistency, one can make an objective assessment of the validity of said belief system. You can test it based on what it produces and how it judges itself.
At the end of the day, science and religion do not stand on equal footing—but religion has to make sense first because our scientific method was built upon Bacon's own religious beliefs:
"Bacon's entire understanding of what we call "science," and what he called "natural philosophy," was fashioned around the basic tenets of his belief system."
This belief system was Christianity—the idea that a creator created a system that can be known and studied and has order rather than chaos.
Sure. There are several that I've personally had to deal with: baptism (by creed? by birth?), the nature of hell/eternal fate of the wicked, nature of Christ, even the support of slavery, etc.
The one that I see that's really in play right now actually has to do with the nature of hell/fate of the wicked.
You've probably heard the fate of the wicked is that whoever does not believe will go to hell: a place of eternal conscious torment (ECT). Proponents of this position will lean heavily on various scriptures:
* "It is better for you to enter the kingdom of God with one eye than with two eyes to be thrown into hell, 'where their worm does not die and the fire is not quenched.'" Mark 9:47-48
* "..lake of burning sulfur, where the beast and the false prophet had been thrown. They will be tormented day and night for ever and ever." (Rev 20:10).
There are a couple of examples. And they are used. A lot. However, over the last couple of hundred years (at least that we know of; perhaps further back, though it wasn't discussed as much) the idea of annihilationism has emerged: that the fate of the wicked is not ECT but rather simply to finally die again, forever.
The doctrine of final judgement (for our purposes here is clearly defined in Matthew 25:31-46) declares that all of the dead will be raised and will be judged. So, if you were dead, you will be raised and then the final judgement will be made and the two sides (the left and the right) will go to their inheritance: either Christ's kingdom or eternal fire.
But the idea that eternal fire _means_ eternal conscious torment for the wicked rests on some specific assumptions that many people make but that are not supported:
* Why is it assumed that all who are raised are now immortal?
* Why does the nature of the fire (i.e. that it is eternal) mean that it must burn eternally, rather than simply describing its provenance and its effectiveness?
For me (and for many others), we do not find the "evidence" of ECT compelling and can actually point to a number of other areas where ECT makes no sense. The biggest example is everyone's favorite verse: John 3:16 (For God so loved the world that he gave his only begotten Son that whoever believes in him shall not PERISH but have ETERNAL LIFE). EMPHASIS MINE. In what world does "Perish" mean "be alive but tortured forever"?
Anyway, the point is that many people start with the end in mind. They start with ECT and work their way backwards. Scripture has the advantage over the natural sciences in that (at least for protestants) we believe the canon is closed and therefore no new information can be made available. We have what we have to work with.
The point is that both science and religion can "start with the end" in mind and back-fill to make their position seem solid.
This has been done with global warming/climate change. It is being done with COVID measures. Each side has its own set of data that it wants to use, but the holistic set of data may or may not ultimately support an individual position.
These are political flaws and flaws with human systems.
You are missing the logical flaw buried within the scientific method itself.
Nothing in science and therefore reality can be proven to be true. We can only falsify things in science.
What this means is every claim ever made by anyone or any scientist from now to eternity can never be proven to be true.
We can only make repeated attempts at falsifying things. After we fail to falsify a thing enough times, we can say: hey, this theory might be true. That is the best science can do, and it is what most people, including most people on Hacker News, do NOT understand.
There is literally no such thing as an actual “fact” in science and therefore in reality as we know it.
This flaw is the origin of most of the distrust in science; it is also the reason why blind trust in the scientific method itself is wrong. No claim made by a scientist has ever actually been proven.
Why do you see it as a flaw? Constantly shedding false beliefs equates to constant improvement.
We all want to get closer to the truth; science is one technique for doing so, and since there's no direct line to God, it's the most efficient one. It has the best corrective feedback loops.
I see it as a flaw because in logic and math, it is very possible to prove your theorems to be true. It is just not possible to do so in reality. Logic is a game with a limited domain. Reality has an unknown domain, hence the inability to prove anything.
Because this feature doesn't exist in science, it is a flaw.
Additionally, it doesn't align with human intuition. Humans assume by default that things can be proven and that facts exist. The truth is that nothing can be proven and facts don't actually exist. This is counterintuitive, and although science is the best tool we have, its counterintuitive qualities justify calling these things flaws.
This relatively unknown "flaw" is what drives most of the debate in the political arena. No scientist can ever prove global warming to be true. It is fundamentally impossible. The debate would have been over long ago if there were some way for science to prove global warming to be real.
They didn't at any point say the scientific method. And I'm pretty sure that's intentional because there is nothing inherently wrong with the scientific method. The problem is with the processes we have built around modern academia which leads to a system that does not give an ideal or intended outcome.
Yes, but I guess I wouldn't label them scientific processes, which I would assume to be in line with the scientific method.
These listed problems are all people and people’s incentives problems, and have nothing to do with science. They exist just as much in non scientific endeavors, and so attaching the word science to them seems meaningless, and at worst muddies people’s understanding of science.
Processes are very much a business concept that most adults can understand. We develop new processes every day.
Furthermore, it was used in the plural, so it can't possibly mean the scientific method: not only is that a proper noun, but there is only one scientific method, while there are many scientific processes.
And yes they can be specific to science. The process of getting published in a peer reviewed journal is unique to academia.
Or what about the process of getting grant funding for research?
I know, you will say that that isn't science.
But nobody is arguing against the validity of the scientific method. At least nobody worth listening to. When people say there are flaws with science that must be addressed, they are talking about flaws in the business of science. And the scientific community is not doing a great job of fixing them.
> Yes, but I guess I wouldn’t label them as scientific processes, which I would assume be in line with the scientific method.
It's that exact assumption which is common, incorrect, and which I'd like to see addressed in K-12 education.
The scientific process in my discipline works as:
1) Graduate students come in to do research
2) Ones which align with hot topics have venues to be published, people to cite them, sources of funding, etc.
3) Some people do diligent work, and write a publication a year. Those don't go anywhere. Some people write 12 publications per year, often based on poor scientific methodology and buggy analysis algorithms.
4) The highest-impact results often come from methodological errors, and never replicate. Most errors are never found, but even if they are, by that point, the people who wrote those papers have found tenure.
5) No one has time for good peer reviews, so reviewers glance at the abstract, sometimes to see if they were cited, and skim the paper.
6) Press picks up the sexiest results, which almost always are nonsense. People also cite the sexiest results.
Increasingly, this has moved from sloppiness to intentional gaming of the system. About half of academics I saw hired in the past decade had some level of intentional sketchiness going on.
This doesn't align well with scientific methods.
In the same way, I think high school civics courses should talk about issues like buying influence, polarization, and corruption as part of civics.
Simplifying a bit, I believe it's important to distinguish between hard, experimental science (physics), not-fully-experimental science (climate, medicine), and social science (economics). For instance, medicine is not fully experimental in the sense that you cannot take the same person and subject them repeatedly to the same, or a slightly different, experiment.
Hard science a) has better tools at its disposal (repeatable experiments) and b) is not subject to the influence of financial interests. Companies don't care about this or that theory; all they care about are results that work and provide useful technology.
Some companies do, however, have a strong interest in the outcomes of research in climate change, medicine, and economics, so they will exercise influence on research precisely in those domains that are less founded in experimentation and more uncertain.
From about the 1600s, science gained authority at the expense of theology due to its ability to predict and manipulate the physical world. Physics, chemistry, and engineering allowed us to do things that would have been seen as magical a couple of centuries earlier. Those nations and individuals who rejected science became losers.
But as the term "science" gained in authority, other institutions were losing influence, in particular theologians and similar ideologues. Many of these wanted to be seen as having authority similar to what scientists enjoyed.
Since most people realize that "Theology" is not real science, new names and tactics have been invented to help camouflage institutions as "scientific", such as "Intelligent Design" and "<insert identity group> studies", where the practitioners care more about the ideological consequences than about the ability to predict real-world outcomes.
These institutions both promote science denialism within their own ranks and provoke it within opposing ideologies. Both intelligent design advocates and gender studies advocates will reject findings from science (mostly biology) when they contradict their preferred ideology, even when the findings are as close to scientific facts as it is possible to come. Both do so by presenting bad-faith arguments and by branding non-believers within their ranks as what would in earlier times have been called "evil heretics".
When even "scientific facts" are attacked like that, other disciplines with varying predictive power, such as climate science, economics, or cosmology, will suffer even more effective assaults by ideologues.
If we cannot somehow stop this development, we are heading for another "Dark Age". I honestly don't know how this development can be stopped at this point. At the very least, it will require cooperation across ideological boundaries, and that can only happen if the science denial of both sides is simultaneously purged.
I'm not very optimistic about avoiding a "Dark Age".
Rational thinking and the idea of looking at data seem to be completely foreign to most people I know. More highly educated people seem to absorb and repeat dogmas and propaganda, unfiltered, even faster.
Some patterns I have come across:
Alpha-rational: claiming that ideas that come from authority (dogma) or the self (narcissism) are rational, while refusing to look at any data.
Anti-rational: rejecting the idea of looking at data ("life is uncertain", "data is garbage", "it's immoral to look at data"), even though there is a lot of good data and drawing conclusions is not even difficult.
Maliciously rational: knowing the science and cherry-picking data for profit, to support the highest bidder.
I question the premise of this article - what evidence is there that the general population doesn’t understand how science works?
I think they know perfectly well how it works - and that like everything else it’s been corrupted by politics and money. Arguably the only fields of science free from this corruption are physics and chemistry, but I wouldn’t be surprised if I was wrong about that.
I read the headline and thought, hey, isn't this the web site where I've read all those articles about how the BUSINESS of science has been completely perverted by the weird interplay between universities, funding, and journals?
The blinders came off when the majority of people finally saw that you could pay doctors and university researchers -- two of the most respected types of people in the post-war world -- to produce "studies" to show that, say, smoking was GOOD for you, against all data on the subject.
I'm 50. I can't tell you how many headlines I've read over my adult life that flip-flop between eggs being good for you and eggs killing you.
In my 4th-grade science textbook, I was told that we were entering a new mini ice age, and that we'd be completely out of oil by the year 2000. (Which kind of sucked, because I was going to be driving by then.)
We all know where the climate thing is now, and I don't hear ANYONE making noises about "peak oil" any more.
THIS is precisely why it's so hard to sell the public on the dangers of climate change. The BUSINESS of science has destroyed their credibility, and they only have themselves to blame.
I don't hear ANYONE making noises about "peak oil" any more.
As far as I'm aware, conventional oil production did in fact peak more than a decade ago. And while the Hubbert model failed to predict global oil production, in the absence of disruptions it can work well (see e.g. Norway[1], or the lower-48 US states[2], where Hubbert's predictions made in '56 held true for four decades).
However, there's still unconventional oil[3] aplenty, hence the lack of panic. We should keep as much of it in the ground as we can.
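For reference, the Hubbert model treats cumulative production as a logistic curve, so the production rate is a symmetric bell peaking at t_m. A sketch with invented parameters (not fitted to any real field):
```python
# Hubbert curve: cumulative production Q(t) = Q_max / (1 + exp(-k (t - t_m))),
# so the production rate dQ/dt is a symmetric bell that peaks at t_m.
import numpy as np

Q_max, k, t_m = 200.0, 0.15, 1970.0    # illustrative parameters only
t = np.arange(1900, 2051)
Q = Q_max / (1 + np.exp(-k * (t - t_m)))
rate = np.gradient(Q, t)               # annual production

print(int(t[np.argmax(rate)]))         # -> 1970, the modeled peak year
```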
How often have you heard the phrase "gravity/evolution/sedimentation/cosmology is just a theory".
Maybe some of the people using that phrase are being rhetorical, but as somebody who was very invested in the Evangelical Christian community in Arizona for the better part of a decade, I can say there are a lot of people who genuinely believe that when scientists use the word "theory", it should be understood to mean "guess".
But that's what "theory" means. It is, in fact, a guess; a guess that could be flat-out wrong the second a piece of contradictory evidence comes to light.
But, instead of acknowledging that, for some reason a large group of Science Defenders have chosen this hill to die on where actually theory means infallible truth that mustn't be challenged, especially not from the likes of the common man.
There is such a drive to prove the religious wrong that we've thrown the baby out with the bathwater and created some kind of anti-religion where all skepticism is immediately put down and skeptics are excommunicated.
"Theory" does not mean "guess", it means a comprehensive explanation that has been shown on many occasions to make predictions that are more accurate than any alternative explanations.
A guess in science is called a "hypothesis"; only after it has survived repeated attempts at refutation is it properly called a "theory".
There are infallible truths in science. We call them "observations". Any idea that follows from existing observations and has been used to predict new ones can be reasonably believed to be correct, but the truth is not the idea, it is the observations that inspired it.
And this is exactly why the public needs a better understanding of how science works.
If you really insist on making religious comparisons, a theory is neither a guess, nor an infallible truth, but a prophet who we trust because they make no predictions that cannot be verified, and because they have never been wrong so far, and so we will continue to believe them until the moment one of their predictions proves incorrect, at which time we will stop believing without hesitation.
The problem, as mentioned above, is that the public understands exactly how science works, and it makes the scientists (and the companies paying the scientists) very upset.
Skepticism is good. People should be skeptical of science, especially by the time it reaches popular media. But skepticism isn't very useful unless you can evaluate roughly how skeptical to be about a particular finding. A five-sigma physics finding replicated by multiple studies? Probably good, but there's always some chance of systematic error. An n=10 psychology study, never replicated, where all the subjects were white male college students? Let's take that one with a big grain of salt.
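To put rough numbers on that (a sketch; the five-sigma figure here follows the one-sided particle-physics convention):
```python
# How much stricter a five-sigma detection is than a p < 0.05 result.
from scipy import stats

p_5sigma = stats.norm.sf(5)       # one-sided tail beyond 5 sigma, ~2.9e-7
print(f"{p_5sigma:.1e}")
print(f"{0.05 / p_5sigma:.0f}")   # ~170000: a p=0.05 result clears a bar
                                  # this many times lower
```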
Most people on hacker news don’t understand science.
Probably you don’t either. Did you know it is axiomatically impossible to prove anything to be true in science?
Every claim made by every scientist from the beginning of science to eternity has never ever been proven true. In science you can only falsify things.
If you don’t know this or don’t logically understand why this is the case then you never truly understood science.
People understand science from a cultural and political perspective. From a logical and axiomatic perspective, in science you cannot actually prove anything to be true.
This flaw within the actual framework of science itself is the origin of many of the misunderstandings and debates in the cultural and political arenas of science. For example, no scientist can actually, definitively prove global warming is real.
It is literally fundamentally impossible. Hence the politics.
Honestly, as someone who spends a significant amount of time reading and attempting to replicate papers... I have very little faith in most papers, though the concepts themselves often appear okay.
My friends and family who also do research have added to my questioning of most papers. It's probably healthy to have a lot of skepticism; the paper itself should convince through its research methods and sample size.
Unfortunately, the incentives align more with skirting the truth than being honest. That’s why each paper, in my opinion, requires a healthy dose of skepticism.
As an example, in January 2021 the PCR protocol used to identify covid-19 was finally updated and a warning sent out that there were possible issues, specifically issues that would lead to higher false positives. My wife, who does PCRs regularly, had immediately identified the issue back in AUGUST 2020. This means it had to have been known well before the announcement, yet it wasn't discussed, for political reasons.
The point I'm making is they knew months prior to announcing it that there was an issue. If we were worried about the "science" it would have been announced right away, instead it was the day after inauguration.
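Whatever the politics, the false-positive arithmetic behind such a warning is simple base-rate math. A sketch with invented numbers (not the actual assay's figures):
```python
# Positive predictive value: with low prevalence, even a highly specific test
# yields many false positives among its positive results.
sensitivity = 0.95    # P(test+ | infected)       -- illustrative
specificity = 0.99    # P(test- | not infected)   -- illustrative
prevalence  = 0.005   # fraction of those tested who are actually infected

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(infected | test+) = {ppv:.2f}")   # ~0.32: most positives are false here
```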
I think that there really is a need for the general public to know what science IS. How it came about, how it works.
Unfortunately, for that to happen at the level it needs, there needs to be a change in how Education works everywhere.
We are still using a 19th century tool designed for a very different context.
Maria Montessori tried to bring the tool into the 20th century about 100 years ago but who is trying to do the same for the 21st century?
The way I found it explained best was in an Alan Kay presentation where he spoke about the insight Francis Bacon got into the fact that the brain works in a faulty way and what to do about it.
Scientific thinking is how we get around the very well-documented [1] array of cognitive biases. Learning about those biases in ways that a) make them obvious and b) provide ways to compensate might make a person able to detect them early and to approach them in the proper way.
Another wonderful tool I found is Chris Argyris's Ladder of Inference. Trouble is that these kinds of tools need to be inculcated at a very early age so they get used and become second nature. Learning them later in life is a heck of a lot more difficult, especially when they conflict with fundamental aspects of one's identity.
Science is always inconclusive. Theories are not theorems; everything is open to questioning, no matter how much sense it seems to make. That is why science advances in the first place. And yet people are told (mostly by politicians with agendas) that they are supposed to "believe" science, which of course means believing the latest thing a scientist said.
No, don't teach the young that they need to believe science. Teach them to question it, always, and to question it in a formal, disciplined and rational way, even if they agree with what is being said. Especially on issues like anthropogenic climate change, where the implied solution is to surrender individual freedoms to an all-powerful faceless government that is supposed to protect us all.
In a sense yes. Science cannot prove things to be true, but falsification is conclusive. If observations are 100% accurate you can definitively falsify things and conclude that something is false.
The same cannot be said for proving something to be true.
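To make the asymmetry concrete, here's a minimal sketch in Python (the swan example and the observation lists are invented for illustration): one counterexample settles the question, while any number of confirmations settles nothing.

    # Universal hypothesis: "all swans are white".
    # A single counterexample falsifies it conclusively; no finite
    # run of confirming observations ever proves it.
    def test_hypothesis(observations):
        for color in observations:
            if color != "white":
                return "falsified"          # conclusive
        return "not yet falsified"          # never "proven true"

    print(test_hypothesis(["white", "white", "black"]))   # falsified
    print(test_hypothesis(["white"] * 1_000_000))         # not yet falsified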
A long time ago I studied biochem in university. It took me until the third time memorizing the Krebs cycle (energy metabolism) to start asking "how the heck did Krebs et al figure this out?" And I had already read Thomas Kuhn's Structure of Scientific Revolutions when I memorized Krebs for the SECOND time. I, personally, got stuck in received wisdom for too long.
Science is a discipline for confronting mystery. It's about what we don't know. It's also big business, driven by money, prestige, and power (like most human endeavors).
In that sense, it's a lot like religion. Religion, as practiced in the US, has the four pillars of the Wesleyan quadrilateral:
* Scripture
* Tradition
* Reason
* Experience
The Pennsylvania science-teaching guidelines notwithstanding, school science curricula are mostly teaching "scripture." No, I don't mean wacky interpretations of ancient texts. I mean memorizing received wisdom and getting tested on accurate memories. The Krebs cycle is an advanced hunk of that kind of scripture.
Tradition comes into play when, for example, grad students fear to challenge the stuff discovered by the Nobel laureates who run their labs. It's valid to critique the way tradition controls inquiry.
Reason and experience: Those are cognitive tools to use when doing the Karl Popper thing: dreaming up things that can be proven wrong and then trying to prove them wrong. But we need at least some scripture and tradition to apply reason and experience.
Learning science is hard even when it's just "scripture" and "tradition". Embracing the mysteries -- the unknown -- is even harder. Science means accepting that "everything we know might be wrong." We should still try.
This article is complaining that people are taught a rigorous form of science that not enough science meets, and asks that it be replaced by some sociology of science to determine which scientists to trust. It's not wrong, and maybe I'm too much of an idealist in believing that "science as inquiry" is the right goal to aspire to, and that it's feasible.
In my mind, we just don't teach kids science-as-inquiry well. It's too much like the heavily complained-about way we teach math, where it's formulas you just plug things into. Kids can plug a, b, and c into the quadratic formula but then can't handle a word problem. Likewise with science: you're taught it as a bunch of rote mechanical steps, at the expense of learning to do the "word problems."
Any science education track that doesn’t include a stats and experimental design course isn’t teaching science as a tool, in my book. It’s just teaching facts.
Maybe I’m wrong and teaching kids more than memorization is too lofty a goal, but I think statistical illiteracy is more of a barrier than “science as inquiry” is.
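As a sketch of the kind of statistical literacy such a course would cover (all data below is simulated, and the effect size is made up), a two-sample comparison is about the simplest "word problem" in experimental design:

    # Did the treatment shift the mean? A minimal two-sample design.
    # Uses numpy and scipy; every number here is simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)      # fixed seed: reproducible
    control = rng.normal(loc=10.0, scale=2.0, size=50)
    treated = rng.normal(loc=11.0, scale=2.0, size=50)

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p rejects "no difference"; it does not prove the effect
    # "true" -- the hypothesis has merely survived falsification.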
I don't think most folks understand the problem, although the essay is getting there. In general, I don't think epistemology can be taught until pupils have enough world experience to map what's being taught onto the rest of the body of scientific literature. Wildly stereotyping: all people want a quick answer and then to move on; younger people tend to do this quicker, while older people tend to dwell too long on all of the possibilities. An epistemological understanding of how science actually works encourages us to accept both of these tendencies as being needed. That's a tough thing to get across if people are already predisposed to learning the quick answer and going to the next topic, whether that's in a blog entry, book, or secondary-school education. I'm not sure it's possible.
Quite related: I published this a couple of days ago. Been debating whether or not to submit it here. It's a difficult subject. It's extremely important, but sadly, the fact that it's important doesn't mean people will be able to accept or understand it.
Science has political problems and social problems yes. But many people don’t understand that from a purely logical perspective there is a fundamental flaw with science. Almost no one knows this.
You cannot prove anything to be true with science. Therefore, in other words, you cannot prove anything to be true in reality as we know it.
Even when your observations are 100% accurate it is fundamentally impossible to prove anything in science (and your observations can never be 100% accurate).
If you understand this concept then you fundamentally understand science, and you'll be able to see where all the debate and misunderstanding about science comes from. No claim made by a scientist has ever actually been proven. The key is to understand why science can't prove anything; there is an extremely logical explanation for why proof is impossible in reality. Proof is the domain of logic, not science.
To quote Einstein:
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”
Additionally, there are two main axiomatic assumptions about the physical world that have to be made in order for science to work as a logical consequence. You must assume logic is true (which is recursive and obvious), but you must also assume probability is true. Probability applying to the real world does not arise as a theoretical consequence of axiomatic logic being true; probability itself must be assumed as an axiom alongside logic. Science works because we assume it works.
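One way to put the "no proof, only disproof" point formally (a sketch in standard first-order notation; this is my gloss, not anything from the thread):

    \text{A law is a universal claim:}\quad \forall x\, P(x)
    \text{Finite evidence never proves it:}\quad P(a_1) \land \dots \land P(a_n) \;\nvdash\; \forall x\, P(x)
    \text{One counterexample refutes it:}\quad \lnot P(b) \;\vdash\; \lnot\, \forall x\, P(x)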
The replication crisis is serious, but it's not the collapse of science. Maxwell's equations still stand, for example. You can still identify chemical elements from their spectra. And so forth.
That is not to diminish the problem. Over-reliance on purely statistical methods has gotten us in trouble, in some fields more than others. Probably the main issue is that the fields affected happen to be the biggest right now, with potential impact on quality of life and public policy.
Fields that are doing a better job of weathering the storm, such as physics, have tended to have a larger tool belt, where the use of statistics is kept in check by other methods, such as the development of theory and the ability to study problems from multiple vantage points. Also, we physicists can choose our battles and only look at problems where more robust statistics can be used.
Even in the fields most affected, I'm not aware of any knowledge generation efforts outside of science that are going better.
It might just be that the social sciences and medicine are really hard, and our best efforts fail often.
Edit: I mean the best effort we could reasonably make as a society; naturally there have been individuals and groups who have been doing things they knew, or should have known, were wrong, for their own reasons.
> It might just be that the social sciences and medicine are really hard
As one field often builds upon another, the so-called soft/social sciences could also be called higher-order sciences; and the complexity progression might not be linear, since at the high level these topics tend to merge, such that they are no longer layers but rather aspects of the same societal mega-graph. Economics is the study of societal economy, political science is the study of societal politics, and so on; all are part of the same system and hence inter-react, in the same way that volcanic activity, the tides, the magnetic field, the atmosphere, the weather, and the terrestrial water table are all inter-related topics of the same planetary model. (I expect all these things to become distinct topics of study at a greater scale too, if they aren't already.)
Another consideration: natural versus "human" sciences. Any system that involves humans (any study of an aspect of human society) differs from the study of natural processes, because:
1) the scientists, or the scientific institution, are themselves part of society;
2) humans can react to scientific investigation and inquiry; a natural process doesn't do PR, no matter how strange the quantum is;
3) human culture and behaviour are both complex and change over time in a way that, e.g., chemistry does not.
I think you're right, and I would go further and say that it becomes even more complicated because the higher order fields are often as old or older than the foundational ones.
We created frameworks for understanding political science and the humanities, still relevant today, millennia before any real neuroscience occurred.
Possibly the biggest issue is ethics. Speaking as a physicist: we can perform experiments on atoms that would, hopefully, be unthinkable on humans. Even psychology gets more results when ethics are overlooked, e.g. in areas such as advertising and interrogation.
Well, I'm extremely gratified that you even thought about it; I've been trying to express something like this in different forms with little success.
And you're certainly right that we need to take into consideration how sure we can be about a result before relying on it, while recognizing that sometimes decisions have to be made based on the imperfect information available at the time.
In my view the issue with the social and medical sciences is that they are being pulled from two opposite sides. On the one side is the motive of profit, professional advancement, etc. On the other side is the fact that science might be slow but people are suffering right now and can't wait for care. There's a similar problem with using scientific knowledge to guide public policy.
Calling it the collapse of science is definitely hyperbole; science is a method of falsification, and a method can't "collapse".
What is at risk is established academia and society's trust in the institution. The replication crisis is not well known outside of academic or technical circles for now, but eventually it will become a major scandal, and you can't blame people for losing their trust at that point. And I think the worst part is that you can watch academia make the issue worse in real time. I see discussions about how to fix things, but everyone ignores the gorilla in the room. They stop short of the truth: academia is fraught with fraud, and the system is set up in such a way as to promote it for anyone who wants a stable and successful career.
I've seen it myself first hand: when someone is caught unequivocally committing research fraud they are fired and blacklisted from the community at large, but each case is treated as an isolated incident. The degree to which research cannot be reproduced means we have a serious fraud problem.
Watching a friend go through the process of attaining a tenured position in Economics recently has taught me never to listen to Economists. They are all focused on impressing each other MUCH more than on progressing Economics. And that's not even considering the Ergodicity crisis there.
Once you see the sausage being made, you realize that "peer review" is a way to project power in academia; it is not about science.
Eric Weinstein has been talking about this lately, something he calls DISC "The Distributed Idea Suppression Complex". Some of his first hand stories are shocking, but are exactly the kind of thing those I know in academia tell me they face every day.
That’s not a collapse of science, or the scientific method. It’s simply demonstrating how hard it is to do perform the science method given our boundary conditions and other incentives in life.
Most things we think come from the scientific process in fact are the result of trial & error and tinkering, and then back-filled with the scientific process and so only appears historically as if they came from theory.
Galileo would never of survived peer review.
"Science progresses one funeral at a time" - Max Planck
Science is also more accessible now. There are more academics and universities pumping out papers than ever before, whereas in the past only the best were able to get into academia.
> Believing based on trust is a pervasive human practice, not confined to scientific inquiry.
Rationalists ignore the role of identity in beliefs.
My socially awkward brother went from aspiring to be the next Jacques Cousteau to full-throated Young Earth Creationist. He had the best schooling. He assisted with biological and sociological research.
Why did my brother flip?
I've been pondering this for decades. In a nutshell, my theory is he traded his beliefs for acceptance. He adopted a completely new identity in order to belong to a group.
The evidence for the centrality of identity as basis of our beliefs, what some LessWrong pundits called Belief As Attire, is slowly accumulating.
No amount of scientific literacy, media literacy, critical thinking, or logic will successfully inoculate people against cults.
There's some new research that shows early exposure helps. Such as teaching kids about pseudosciences like cryptozoology.
Similarly, there's some evidence that empathy and community engagement and socialization help protect people from falling prey to cults.
The root of this discussion is whether science is necessary or suitable for public policy, and we should probably accept that policy is not scientific, nor should it be. The alleged scientific findings used as a basis for policy justifications today are mostly compromised, irreproducible junk designed to befog, mesmerize, and browbeat average people into submission, while an elite for whom words have no fixed meanings struggle among themselves for control of bureaucracies.
When policymakers say they defer to "the science", they are forfeiting their responsibility to act with wisdom and good judgment.
Democracy predates science by some three millennia, and to say that we can or should create a scientific democracy is nonsense. Democracy is an ideal and a quality; it is not a system that can be engineered from data with scientific findings or processes. Further, if you engineer a society with policies based on data and laws that only a few initiates understand, that is the specific antithesis of democracy. In the west, we use democratic processes to produce our elite classes, but unfortunately we do not produce wise ones, and they're happy to dodge accountability by blaming scientists or the weather.
Science doesn't create trust in authority either; it produces verification, reproducibility, and models for new tools. Science and data are neither sufficient nor necessary conditions for wise policy that supports life and creates growth, even though science has made some great contributions to our well-being. To say that a given policy is good because it has a scientific justification is circular reasoning.
When you say you believe in science, either you practice it, or you trust its institutional practitioners, which basically means you trust the systems of academic administrators to make policy. Again, not democratic at all.
We actually don't need a public understanding of how science works; we just need to hold policymakers to a standard of accountability where they can't slither behind binders of gibberish when called to account for their decisions.
This is a really important point--exemplified by COVID. You can't delegate governance to scientific experts. That's not their job and they're not qualified to do it.
Don't get me wrong--public officials should base policies on the scientific consensus, when it exists. But a scientific consensus can't tell you what to do about it. It can't tell you what trade-offs to make, or how to integrate a scientific truth into a social framework. Most important, politicians must deal with the fact that science often doesn't have clear answers on important matters. Something as simple as "should pregnant women get the COVID vaccine?" is the subject of dispute.
The media use "the science", not science, to tell people what "the truth" is. That is the problem. People are first told that something is "the truth"; then, if it later turns out to be wrong, that eradicates trust in science. It's backwards. In science, if something is proven wrong, it should increase trust, because it demonstrates that mistakes, when made, are found and corrected. It demonstrates that science accumulates more valid than invalid information over time, because the invalid stuff is removed.
There is also a fear of accepting that we (humanity or science) don't know something yet. It's better to simply accept that we don't know than to choose, or worse, let others choose, based on no evidence.
> Moreover, scientific agreement among experts —typically interpreted by logical empiricists as the result of univocal scientific method and as an indicator of truth— should be seen more critically for what it is: social agreement.
Isn't the whole point of science that you can get out of that by trying? Apply your theories in practice, and see if they work. It doesn't matter whether the theory goes with or against general consensus.
That unfortunately doesn't hold for theories that describe only minute, very restricted domains. And that includes not only areas like dark matter, where experimentation is nearly impossible, but also 99.9% of the social sciences, where consensus is the only yardstick for theories that far outsize the data on which they are based. Fortunately for the discussion, the author kept that thorny problem far away by focusing on the safe area of the natural sciences.
Now back to the article: "science education ... should normalize scientific disagreement (distinguishing it from science denialism) and include discussion of the social epistemological institutions and processes by which scientific inquiry proceeds. These include the education of future researchers as graduate students and postdoctoral fellows, the peer review process for grants and publications, the discursive interactions of researchers in laboratory meetings and conferences, diversity and inclusion issues, and the norms of scientific integrity."
While social aspects of practicing science may throw some light on why one theory gets more traction than another, it seems to me putting the cart before the horse. It will not make students appreciate the difference between the competing theories, but will make science education a dull exercise in processes and factoids, to be listed in a test, where you'll get a passing grade if you can keep "epistemological" and "inquiry" apart.
And in the worst case, it will undermine the last bit of trust in science, because it will look like just an attempt at a centralizing power grab. Who decides which ideas about these issues are right? I doubt the author has even the tiniest bit of data to underpin her argument.
The government advertised the food pyramid as scientific fact. About a year ago the government likewise advertised that the general population should not wear masks as scientific fact.
The whole concept of "scientific consensus" is unscientific. It's always politically loaded. It also screws up the concept of checks and balances when the reason for the existence of certain scientific jobs (and titles) depends on, and is paid for by, a certain political issue.
I read Carl Sagan's "The Demon-Haunted World: Science as a Candle in the Dark" and it really drove home how many issues are caused by a fundamental lack of understanding of and belief in science. It's a great short read.
The University of Medicine where I got my Bachelor of Science in Biomedical Engineering is now teaching a course in Homeopathy. So, yeah.... there is also a need for "scientists" to know how science works.
Maybe some billionaire could set up an institution to keep scientists honest. It wouldn't try to do any original research; it would be purely dedicated to finding out whether published research replicates, verifying the statistical analyses on which published research rests, and busting academic fraud (e.g. citation rings). Over time, legitimate scientists would crave its stamp of approval and make their research easier to verify (e.g. by publishing the source code). It would be a good training ground and employment scheme too.
Any developers who want to help with the scholarly infrastructure that underpins scholarly publishing workflows (DOIs, scholarly metadata and related things), Crossref have a job opening. Speaking personally, it's one of those rare jobs that feels embedded in an important mission.
I found this assertion, near the end of the article, questionable:
> “Science as inquiry,” with its focus on evidence and logic, can do no more than communicate the details of a few empirical studies.
To my knowledge, inquiry can be as broad or as narrow as one wishes to make it. The utility of evidence and logic is not limited to narrow domains, is it?
I hope I’m not just cherry picking something to satisfy my own biases.
In a way, science is very similar to free speech. It can be very harmful when misused, but generally the answer to bad science is not LESS science but BETTER science.
Teaching us to evaluate that difference is the proper role of any country's education system, and how well it does so is a good predictor of its success.
I don't think it's so much a public understanding issue, since most of the public doesn't understand much of anything about the world they live in. Rather, it's our inability as a society to deal with those who need facts to go away for their ends.
I'd settle for a rough familiarity amongst science journalists and their editors, but I like the aspiration. A couple of questions, though: do we mean how science should work, or how it actually works in practice? (And no, not just Kuhn: his conclusions were only surprising to other non-scientists, and primarily seem to have been valuable for sociologists nursing a discipline-wide inferiority complex, and for 'scientists' like Andrew Wakefield who want to blame the establishment and not their own sociopathy.)

We're coming up on two decades since Ioannidis, and psychology still seems to be primarily a collection of unconfirmed small-population studies of undergraduate psych students. Medical doctors can't explain or apply freshman statistics. Research directions (and, too often, results) are chosen by investors and donors. The modern west has more taboos about what assumptions and received truths it is permissible to question and investigate than the most repressive theocracies. Much of theoretical physics seems to be aimed at exploring untestable hypotheses and frameworks. And, my personal favourite: the phrase 'scientific consensus' is now used so often and so uncritically that it has become trite.
The problem is that in schools, kids learn not science, but examples of science, without understanding what science is.
They don't learn biology, but examples of biology, without understanding any context, history, aims.
They don't learn about languages, but just an example of a language, and only small parts of it.
There is no context, and especially, there is not even an attempt to connect one thing to another, or to provide any context or relation.
Sadly, schools don't even teach learning anymore. They just teach succeeding by imitation. So yes, we're in big trouble. Or they are, if you're old enough.
This does not address a concern I have had: models are not science. Many models can be created, and then over time one is found to fit, so we assume that one can predict the future. It cannot.
I can't blame people for being sceptical about science given all the crap that is in our food these days, all the fake nutrition science, fad diets, and the inability of governments to fix it.
I think it's natural for an expert in any field to be frustrated by the general public's lack of expertise. Indeed, as true expertise seems to become less and less common, I don't think it's unusual for this frustration to increase. I think this is the reason for the absurd notion that "everybody should learn to code" that was fairly pervasive on Hacker News a few years back. Thankfully that seems to have died down...
As far as trusting the actual experts to make good decisions and tell the truth, that's tough. It seems to me that so many people are just out to get a quick buck. Often that means exaggerating and/or twisting the truth to suit a particular narrative. I'm not anti-capitalist, but there is clearly room for improvement to the system.
> “Science as inquiry” education, at its best, teaches students an ideal set of methods for improving scientific knowledge. It suggests that our current scientific knowledge has grown, over the past four centuries since the scientific revolution (when modern science is widely held to have come about), in this ideal way. Even if that were true, there is an implicit dependence on students trusting that what they are taught in the classroom has been discovered and justified in this ideal way.
Very much agree with the author that the social aspect of the industry of science should be taught more.
But it's exactly thanks to "'science as inquiry' education" that so many embrace mainstream climate change theories. Many genuinely think there's some undeniable evidence and logic making "climate scientists" believe in cataclysmic runaway warming.
The truth is, we don't even have good temperature measurements. Most "climate scientists" have no expertise in measuring temperatures. They're not that interested in improving temp monitoring tech. They're not embracing satellite temperature monitoring, which is superior in almost every way to traditional surface stations; instead, they tend to ignore it or focus on its (few) flaws. Why? Because satellite data shows no net warming since 1998, which contradicts many mainstream theories. The guy who pioneered satellite temp monitoring happens to be a major skeptic.
GISS, the most widely used temperature dataset, is based on simple old-school surface stations, and not that many of them. It's so bad that they're known to publish errors so blatant that they have been spotted (by "deniers") simply by checking against local weather websites.
One has to understand that negative results are inherently uninteresting to scientists and the broader public. It's not that rogue scientists conspire to dominate a field. When you conclude there's nothing going on, or that there are fundamental epistemic limitations to studying a given field, you have much less of an incentive to participate. If you still do, your work is bound to be "against" what's out there rather than "for" something. Hard to make a career out of being a hater. Or even to enjoy it.
The thing is that the widespread belief in global warming doesn't just stem from temperature measurements, but from a multitude of other observable facts that fit the theory (i.e. experiments do not invalidate it).
But just to pick up your satellite argument: a quick Google search reveals many articles disputing that satellites show no increase in global temperatures. One article [1] even claims corrected satellite measurements show a higher increase than surface measurements.
The whole point of science is that you don't need to trust someone's words, you can repeat the experiment for yourself.
I trust physicists (in re: physics) more than I trust my mother.
But if you can't repeat the experiment, if you can't make precise predictions that can be invalidated, is what you're doing really science?
This is crucial when it comes to fixing policy: economists, sociologists, psychologists and other quasi-scientific folks are the astrologers and numerologists of our current age. They trade on the successes of genuine science to garner political legitimacy in the eyes of the masses. But by doing that they tarnish the public perception of the impartiality of the proper scientist, diminishing their own glamour and polluting the public's appreciation for the value of genuine science.
Science gets respect from pretty much everybody (whether they own it or not. Very few anti-science folks go so far as to deny themselves the benefits of science like electricity and aspirin.)
It's the "soft sciences" that can't make predictions, that can't be tested, and that are used as political leverage that people (rightly IMO) don't trust.
- - - -
Let me throw out a personal example. There's a system of knowledge and practice called "Neurolinguistic Programming" that I know from personal experience works really, really well, but if you go look at the Wikipedia page for it, it's decried as pseudo-science. I have little respect and no trust for the "scientists" who can't replicate NLP stuff, because I've been able to do it, as have tens or hundreds of thousands of other people, pretty easily. Now that's bad enough, but if these "scientists" managed to persuade legislators to make laws against NLP on the basis of their "science", I would see that as a very serious problem. (Although not a problem with science. Science isn't the problem, because science is not what they are doing.)
Yet I am not an anti-vaxxer nor a climate denialist.
- - - -
Bottom line: Science has nothing to prove. Repeat the experiment for yourself. E = mc² for everybody everywhere. That's science.
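For instance (a trivial sketch; the constant is just the standard SI value), anyone can rerun the arithmetic and get the same answer:

    # E = m * c^2: the same prediction for everybody, everywhere.
    c = 299_792_458        # speed of light in m/s (exact, by SI definition)
    m = 1.0                # one kilogram of mass
    E = m * c**2
    print(f"E = {E:.3e} J")   # ~8.988e+16 joules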
This article classifies differing opinions into two categories: those I like (disagreement) and those I don't like (denial). The strategies outlined here will further decrease the public's faith in science.
No mention of git. If it's not on the global information tracker, it ain't science. There's no excuse anymore.
Let me repeat because this is important. If you aren't putting your research in Git, and publishing the repo, and are instead doing things like PDFs and Docs and private datasets, please don't call yourself scientist (exception: "bench" folks who are grinding out the data and not too involved in the analysis/publishing process). That was acceptable 50 years ago, but the environment has changed. That is not acceptable anymore. I don't care if that's not how academia does it. It's so obviously the right thing to do.
You have the right idea, but an unhelpful perspective. Certainly not everyone who isn't a software developer is a "phony scientist"!
If you'd like to convince scientists to come around to your line of thinking, explain the problems with current research papers. Tell them about the problems with publishing papers only: how others struggle to reproduce the research, and how it leaves bad research standing that would have been invalidated far sooner if the information needed to recreate the trial hadn't been absent.
Then explain how sharing the code and data with Git (or similar) can solve this problem. You'll find that you are much more persuasive.
I agree with you that this isn't a persuasive approach. But I'm cutting to the chase.
If anyone is working on this—helping onboard academics and non-programmers to git—this is an idea I'm looking to support (as an investor, donor, or whatever).
Isn't git an implementation detail here? Git repos don't have to be publicly available. They also don't have to contain history or multiple versions at all.
I'm guessing what you really want is:
1) open formats for data
2) plain text instead of proprietary blobs
3) the ability to replicate calculations easily
4) the ability to see some sort of evolution or mistake-fixing in the project's history.
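A minimal sketch of what 1)–3) could look like together (the file name and column are hypothetical): plain-text data in an open format, plus a script anyone can rerun to reproduce the numbers.

    # replicate.py -- recompute a paper's summary statistics from its
    # published plain-text data. Open format, no proprietary blobs.
    import csv
    import statistics

    with open("measurements.csv", newline="") as f:     # hypothetical file
        values = [float(row["value"]) for row in csv.DictReader(f)]

    print("n     =", len(values))
    print("mean  =", statistics.mean(values))
    print("stdev =", statistics.stdev(values))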
> 4) the ability to see some sort of evolution or mistake-fixing in the project's history.
Rather than 'mistake fixing', what I want out of a project's history is evidence that the author hasn't done some form of p-hacking. But that purpose can be served just as well by preregistration.
https://en.m.wikipedia.org/wiki/Preregistration_(science)
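A sketch of why preregistration matters (pure simulation; every number is arbitrary): test enough outcomes on noise and the best p-value looks "significant" even though there is no effect at all.

    # Simulated p-hacking: 20 outcomes, all pure noise, report the best.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p_values = []
    for _ in range(20):                 # 20 candidate outcomes
        a = rng.normal(size=30)
        b = rng.normal(size=30)         # same distribution: no real effect
        p_values.append(stats.ttest_ind(a, b).pvalue)

    print(f"best of 20 null tests: p = {min(p_values):.3f}")  # often < 0.05
    # Preregistering the single test you'll run removes this freedom,
    # which is the evidence a project's history could otherwise provide.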
Git is an implementation detail in the way x86 in the 2000s was an implementation detail. Of course, in theory what I'm talking about doesn't have to be git; it could be another extremely well-engineered, capable, reliable, trustworthy, auditable, fast, ubiquitous version control system. But in practice that means git, and nothing I can foresee will come close to dethroning it in the next 10 years.
I have seen 50 startups that have pitched building a shittier version of git for academics because they can't be bothered to spend time reading the internals of git and have no idea how well done it is.
> I have seen 50 startups that have pitched building a shittier version of git for academics because they can't be bothered to spend time reading the internals of git and have no idea how well done it is.
I suspect git would make a fine back end for a more polished product for scientists. A github for scientists sounds very interesting.
The reality is much more nuanced than that, especially when you're dealing with sensitive human-subjects data. Confidentiality and consent are important factors to consider. Let's say you're doing work on how abusive romantic partners use technology to keep tabs on their victims [1]. Would it be reasonable to expect the raw recordings, transcripts, and recruitment process be made publicly available?
This example is a little clunky because it was ~2 years ago, but for a GWAS study we did on early-onset preeclampsia, where we had clinical and genomic data for 109 pregnant women in Hawai'i, we put everything on GitHub [1].
We created a grammar for the schema, which not only allowed us to double-check the type safety of our data but also let us synthesize a CSV [2] with one click and stick that in the repo, so that reviewers/researchers could `git clone` the repo and reproduce everything without any permissions bottlenecks (obviously the results are gibberish with the synthetic data, and you have to contact us to get the real data, but this way you can get the code working, and then it's just a simple file move to reproduce the full results). This is the way.
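I haven't seen the repo, so this is only a guess at the pattern in Python (the column names, ranges, and file name are invented): declare the schema once, then use it both to type-check the real data and to synthesize a shareable stand-in CSV.

    # One schema, two uses: validate the private dataset, and emit a
    # synthetic CSV with the same shape so anyone can run the pipeline.
    import csv
    import random

    SCHEMA = {                              # hypothetical columns
        "age":       lambda: random.randint(18, 45),
        "gest_week": lambda: random.randint(20, 42),
        "risk_snp":  lambda: random.randint(0, 2),
    }

    def synthesize(path, n=109):
        """Write n rows of schema-conforming gibberish."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(SCHEMA))
            writer.writeheader()
            for _ in range(n):
                writer.writerow({col: gen() for col, gen in SCHEMA.items()})

    synthesize("synthetic.csv")   # swap in the real file to reproduce results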