Grad students / postdocs / human lab rats aren't scum; the incentives just aren't in place to promote good behavior (such as calling other researchers out on their bullshit). If you're trying to land a vaunted tenure-track job, you can't afford to piss off $senior_tenured_researcher_at_prestigious_institution, since $senior could blacklist you from the already tiny set of universities that might hire you. Sometimes things work out despite pissing off major powers (Carl Sagan technically had to "settle" for Cornell after being denied tenure at Harvard, in no small part because of a bad recommendation letter from Harold Urey [0]), but not often.
Even if you do manage to get a tenure track job, you pretty much have to keep your head down for 7 years in order to secure your position.
And once you have tenure, you still get attacked vociferously. Look at what happened when Andrew Gelman rightly pointed out that Susan Fiske (and other social psychologists) had been abusing statistics for years: rather than a hearty "congratulations", he was called a "methodological terrorist" and a great hubbub ensued [1].
Given these circumstances, it should be evident that there is literally nothing to gain and everything to lose from sending out a short e-mail pointing out that someone's model doesn't work.
I'm a researcher myself, and I guess this is one of those "does the end justify the means?" scenarios... Out bad research and its perpetrators, and science loses a scientist who actually wants to do good work. Or stay quiet, and watch yourself rationalize worse decisions later on for the sake of your own research, slowly becoming as corrupt as they are and realizing that a lot of the work you cite could be as bad as (or worse than) the work you helped get published.
I really believe we need a better way. Privately funded / bootstrapped OPEN research comes to mind as a potential solution to bring some healthy competition to this potentially corrupt system. Mathematicians are starting to do this, I think computational researchers have the potential to be next.
> Grad students / postdocs / human lab rats aren't scum, the incentives just aren't in place to promote good behavior
The question is, would additional incentives promote good behavior, or just lead to more measurement dysfunction? Some people think that providing the "right" incentives is all that's needed, but actual research suggests otherwise.
Without having read through that very long text: the claim that incentives don't influence human behavior is wildly exotic.
There is near infinite evidence to the contrary. That said, constructing a system with "the right incentives" can of course be devilishly hard or even impossible.
The claim is that incentives do change behavior, but only temporarily, and that they don't change the culture in a positive way or genuinely motivate people; they end up feeling like manipulation. According to the article, the entire incentive system would need to be dismantled: simply adding more incentives wouldn't necessarily produce higher quality, at least not in the long run. So the primary issue is the process of incentivizing amazing new research in exchange for funding, and adding incentives for pointing out issues would just be a band-aid.
This sounds like a good critique of naive incentive schemes.
I don't think there is any doubt that humans follow incentives.
But working out what the core incentive problems are, and actually changing them, might be both (1) intellectually difficult and (2) a challenge to some sacred beliefs and strong power structures, which makes it practically impossible.
The HBR article's discussion of incentives is not quite what I was thinking of when I wrote my comment. Specifically, the article you cite refers to the well-known phenomenon that introducing extrinsic rewards via positive reinforcement is counterproductive in the long run. I've often noticed this form of "incentive" / reward being offered in the gamification of open science, such as via the Mozilla Open Science Badges [0], which in my opinion are a waste of time, effort, and money and do little to address systemic problems with scientific publishing.
With regard to the issue of grad students being unwilling to come forward and report mistakes, incentives wouldn't be added, but rather positive punishment [1] would be removed, which would then allow rewards for intrinsically motivated [2] actions.
It's not at all uncommon that implementations provided with papers do not actually do what the paper says they do. You're often lucky when there is an implementation at all. But most of the time it's just that running the exact same implementation under the same conditions requires setting up a very specific environment, installing specific versions of libraries, using some niche software, and converting data from one byzantine format to another. Each deviation from the paper's original setup is liable to subtly affect the results in nondeterministic ways. That's why no one really gets surprised or even cares; life is too short to call out all the bad software written in academia.
I was more referring to changes in an API where the input and output suddenly have to be in different formats in the middle of a pipeline, causing a crash. What can also happen is that somehow the old format is still valid and gets processed all the same, yielding nonsensical results. Sometimes a lab devises its own format that no one else uses, and the specification may be updated without notice between the moment they publish and the moment you try things out. Most people have no idea about things like 'backwards compatibility', 'unit tests', or 'containers'. Code is just a tool to them, and the fact that they had to write some is annoying in itself.
> requires setting up a very specific environment, installing specific versions of libraries, using some niche software and converting data from one byzantine format to another
Containerisation is fairly mature and simple to use. Many in other fields struggle with these exact same issues and are able to create reproducible environments just fine.
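To make that concrete, here's a minimal, hypothetical Dockerfile for a paper's analysis code; the base image tag, library versions, and script name are all made up for illustration, the point being that the whole environment is pinned in one file:

    # Pin the base image so "python" still means the same thing in five years.
    FROM python:3.10-slim

    WORKDIR /app

    # Pin exact library versions instead of whatever pip resolves today.
    RUN pip install --no-cache-dir numpy==1.26.4 pandas==2.1.4 scipy==1.11.4

    # Ship the paper's code and its data-conversion scripts inside the image.
    COPY . /app

    # Illustrative entry point; run_experiment.py is a made-up name.
    CMD ["python", "run_experiment.py"]

Anyone with Docker installed can then rebuild the exact environment with "docker build -t paper ." and rerun everything with "docker run paper", instead of reverse-engineering the authors' machine.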
I find it amazing that those publishing don't include their implementation: all that work locked away on a rusty HDD.
Do you think they would use a VCS that was less invasive and more transparent to their workflow, like Dropbox?
I'm thinking of making a VCS that simply runs in the background (rough sketch after this list) and:
- Automatically records every file save (effectively a git commit without a message)
- Allows adding messages through tagging (like git tag)
- Handles 'branching' just by asking you to make a copy of the directory with a different name, and properly understands how to diff/merge/etc. copied files/directories that have since diverged.
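For what it's worth, here's a rough sketch of the first two bullets, assuming Python, the watchdog library for filesystem events, and plain git as the storage underneath; every name and parameter here is my own illustrative choice, not a claim about how it should actually be built:

    import subprocess
    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    WATCHED_DIR = "."        # directory to track (illustrative default)
    DEBOUNCE_SECONDS = 2.0   # group a burst of saves into one snapshot

    def snapshot(message="autosave"):
        # "Every file save is a commit": stage everything, commit quietly.
        subprocess.run(["git", "add", "-A"], cwd=WATCHED_DIR, check=True)
        # check=False because committing is a no-op when nothing changed.
        subprocess.run(["git", "commit", "-q", "-m", message],
                       cwd=WATCHED_DIR, check=False)

    def label(name, message):
        # "Messages through tagging": a human-readable label on the latest snapshot.
        subprocess.run(["git", "tag", "-a", name, "-m", message],
                       cwd=WATCHED_DIR, check=True)

    class AutoCommit(FileSystemEventHandler):
        def __init__(self):
            self.last = 0.0

        def on_any_event(self, event):
            # Ignore git's own bookkeeping, or every commit re-triggers us.
            if ".git" in event.src_path:
                return
            # Naive debounce: at most one snapshot per burst of events.
            now = time.time()
            if now - self.last > DEBOUNCE_SECONDS:
                self.last = now
                snapshot()

    if __name__ == "__main__":
        subprocess.run(["git", "init", "-q"], cwd=WATCHED_DIR, check=True)
        observer = Observer()
        observer.schedule(AutoCommit(), WATCHED_DIR, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()

The third bullet is the genuinely hard part: git's rename/copy detection gets you some of the way towards diffing diverged copies, but sensible merge semantics for copied directories would need real design work.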
In the software dev world (outside of BSD, at least), I would argue that containerization is extremely immature. Outside the software dev world, it is not easy to use.
I'm sorry, what? It almost sounds like everyone in your lab is scum. Hopefully YOU at least spent the 60 seconds to write an email?