
It's a question of efficiency.

Technically, science can work even if your colleagues are untrustworthy. This is one of its big, famous features. Over the centuries, scientists have published a great many howlers, ranging from honest mistakes and rushed procedures to deliberate disinformation and straight-up fraudulent data. These things get caught, their perpetrators get punished to some extent, and science makes progress. Eventually.

The problem is that "eventually" can be a really long time: Years or even decades. (The Piltdown Man hoax wasn't exposed for forty years.) In the meantime, bad science will confuse the analysis, corrupt the textbooks, and injure the careers of a few unlucky grad students. It will waste a great deal of time and money, perhaps the time and money of the most prominent people in the field.

For example, when cold fusion hit in 1989, dozens of scientists dropped everything for at least six months to try to replicate it. Millions of dollars were spent. Obviously, while those folks were tinkering with cold fusion, they weren't tinkering with anything more interesting or useful.

We've made a lot of progress since Galileo; the frontier has moved a long way, so it takes more than a couple of pendulums to replicate most modern scientific papers. It could take half a decade, the entire productive career of one or two grad students, an entire research grant, a lab full of equipment, and the lives of two hundred mice just to replicate one paper. So mutual trust is essential for speed: You have to be able to gamble your time on the results of other people's experiments with some hope of a positive return [1], or the speed of science slows down to the speed of one person's work. (Even that could work - you can discover things as a sole practitioner - but it would be incredibly slow, particularly because a scientist working without good criticism will make mistakes, lots of them.)

---

[1] The return will never be 100%. One of the things that disturbed me as a physicist switching to biology was that even the best biology papers are inevitably riddled with likely sources of error: The subject is just too complex to control everything perfectly. There are, for example, systematic sources of error that underlie entire fields, like the fact that most results are tested either on one highly inbred species of lab animal or on lines of human cells that have been selected to thrive in dishes, and which are therefore, at some level, unlike any cells seen in any living human. So science is inevitably a kind of gambling: Will you see consistent and useful results from this particular corner of experiment space? If the thing kills cancer in the dish, will it work in mice? If it works in mice, will it work in humans? If it works for 10% of humans, will it work for 40%? You gamble and you hope. You hope that you aren't wasting your grant, your career, or your entire field. The good news is that we do tend to win in the long run, but anything that improves the odds on the bet is helpful.



Thanks for the detailed explanation.

Maybe I've been too close to science for too long, but the whole line of argument seems so obvious to me that my reaction is "I don't know where to begin" when someone implies science could be done through coercion.


Yeah, that is the way it works, isn't it? The idea of just straight-up lying to your colleagues is unthinkable, directly analogous to releasing an open-source library with a deliberate flaw in it, or loaning your fraternity brother a bicycle with broken brakes and neglecting to tell him about it.

Of course, just because it makes no sense and is terrifyingly sociopathic doesn't mean it doesn't happen. Among other things: Mental illness happens. It's scary when it does.



