In related news, the academic publishing problem is still unsolved. There is no standard model to fund the resource-intensive process of peer review in open-access journals, nor their role as gatekeepers for scientific relevance, advancement, and funding.
> There is no standard model to fund the resource-intensive process of peer review in open-access journals
This process doesn't necessarily need to be funded. In my own field, most journals are published by learned societies. They were founded with endowments large enough to cover the costs of publication (i.e. printing) in perpetuity, but the work of editors and peer review is unpaid. This doesn't strike most of us as a problem.
Even the big publishers do not compensate peer reviewers or sometimes even editors. They don't even provide typesetting or copyediting anymore -- authors are expected to provide camera-ready output. So, a lot of the money being gained by the big publishers does not actually go to fund the whole process of creating those journals.
For example, in my country, every assessment we have to take (be it for a tenure-track hiring process, for getting tenure, for asking for a grant, etc.) has as the most important criterion "publications in journals indexed by ISI JCR" together with their quartile.
Most of the journals in ISI JCR follow this model where they cost money (be it to publish or to read) and provide very little value... except for being on that list and being necessary to (aspire to) stay in academia and feed your family, of course.
Other countries have better systems in the sense that they may be more open to other venues not in ISI JCR; some may even actually look at the quality of the papers instead of just blindly following rules to score quartiles. But scientists everywhere face the same problem to a greater or lesser degree.
A solution that is sometimes proposed is that authors who are no longer struggling for their career (e.g. tenured full professors) take a stand and refuse to publish there. Some movements have been made in that direction, e.g. in mathematics. But in most fields a senior professor will work together with Ph.D. students and postdocs who are in the struggle, so it isn't realistic either.
The truth IMO is that the solution must come top-down, from governments. The EU has made some progress, e.g. mandating open access for EU grant holders, but what happens then is that publishers will let you make your paper open access in exchange for a hefty fee (which, again, is paid from taxpayer money). The real solution would be to mandate by law that research paid for by taxpayer money is published in not-for-profit venues, period.
It's a coordination problem - if all scientists in a given field moved away from the established Elsevier journal to a new one, everyone (except Elsevier) would be better off. However, if any individual academic tried to move, he'd be much worse off.
Historically, coordination has worked sometimes. In 2003, after prodding by Don Knuth [1], the editorial board of the Elsevier Journal of Algorithms resigned en masse [2] and started a new cheaper journal, ACM Transactions on Algorithms. A few years later the Elsevier journal was shut down.
But I agree, it seems we can't rely on this process, and the solution must involve regulation.
My incomplete understanding is that publishing in big-name journals provides prestige and improves funding prospects for academics. Since academia is very competitive, researchers will do whatever it takes to publish in the most prestigious journals. In other words, the journals provide a brand that researchers want to associate with; analogous to how rappers mention luxury brands (sometimes with but often without being paid) in their songs.
I don't know what benefits reviewers receive, but they are gatekeepers to the journal's brand, so conceivably they are able to obtain some benefit to themselves.
As a reviewer, you get to read relevant new research in your field several months before it gets published. This doesn't work in physics and maths, though, where the whole field has the habit of pre-publishing manuscripts on arXiv, so everyone gets to read everything before it's published.
Is there a reason arXiv doesn't directly manage what would be an analog of paper reputation?
My impression is that journal selection does some work of signaling how awesome scientists think a particular paper is, either actually or aspirationally, and so captures some sense of group regard. It seems like just keeping track of views, downloads, and "likes" on arXiv might serve much the same function, although it would clearly take a lot of work to get right and be credible.
Overlay journals, plus a reputation graph built from citations. The more reproducible your work is, the higher its reputation should be. Part of getting an undergraduate or master's degree should be reproducing research.
More easily gamed, I'd think. Also, if you remove the curators (journals), then people looking for research in the first place (who are the ones driving arXiv stats) won't know where to look first.
I'm not sure the previous poster is 100% correct, but honestly I have not looked into it in depth. I know that the times I have gone through the process, the journals did provide typesetting and copy-editing for some aspects of the paper.
However, to get to your actual question: a big reason people use journals is simply the name. As a researcher, getting your paper published in Nature is big not only because of the prestige but because presumably more people read/see Nature articles, so you have a better chance at high impact.
> There is no standard model to fund the resource-intensive process of peer review in open-access journals
I have been a peer reviewer over the last 5 years or so for multiple journals and conferences in Computer Science, run by publishers like Elsevier, IEEE, etc. Not once have I received any remuneration for my comments on an article. Most of us do this as voluntary community service.
Yeah, while it's easy to criticise, based on Elsevier's reputation, reviewing for them strikes me as somewhat unethical by most standards.
EDIT: by most standards that at least don't involve some situation of comfort. If you're comfortable in academia, you're probably enabling some unethical shit, is what I've come to believe. It's up to you to decide if you can live with the degree of it.
You're the resource :) The alternative to you doing it for free would be paying someone. Having said that, I presume that you're doing it because you work in a certain role, and because it's advantageous to your career. I.e. you're getting paid by someone, and it's assumed that you'll review as part of your job.
Peer reviewers are unpaid. Editors are usually unpaid too, though some large/prestigious journals can manage a few paid staff for editing and layout; the publishing decisions are usually taken by committees of unpaid volunteers (remuneration is basically reputational).
The solution is technically easy: reciprocal peer review, where you "pay" for reviews of your papers by agreeing to review others' papers, and then publishing on arXiv. Computer science basically does this but replaces arXiv with an industry body, the ACM. I don't see why it needs to be any more complex than that.
The review process gives authors feedback which they use to improve their paper before it's published, and authors usually appreciate that.
It's also a chance to stop major errors being published, and again authors usually appreciate things like that being caught before the paper becomes public.
I mean, why not though? We do it with Wikipedia. Maybe a publication could be submitted for review, but not made public (like, it's up on the wiki, but not publicly viewable yet, only available to editors).
A Wikipedia/Stack Overflow style review process. Also, ffs, can we get hyperlinked citations? It's ridiculous that I can't just click the link for your citation and go to that paper.
I don't literally mean you do n reviews in exchange for each of your own reviews; I mean members of the community generally spend some time reviewing. If you're new, you will get some of your papers reviewed before you start to contribute back, but that's not a problem. It's how it works now in CS.
It doesn’t need to be directly reciprocal. You could have a system where having your paper peer reviewed costs X tokens, and you earn tokens by peer reviewing other papers that don’t have citation links to yours. Or you buy tokens with cash which sustains a fund for external reviewers. Or other people can transfer their tokens to you.
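A minimal sketch of how such a token ledger might work (the numbers, the names, and the "no citation links" conflict rule are all assumptions pulled from the paragraph above, not an existing system):

    # Hypothetical token ledger for reciprocal peer review; a sketch, not a real system.
    REVIEW_COST = 4      # tokens to have one of your papers reviewed (assumed value)
    REVIEW_REWARD = 1    # tokens earned per completed review (assumed value)

    class ReviewLedger:
        def __init__(self):
            self.balances = {}       # researcher -> token balance
            self.citations = set()   # (citing, cited) researcher pairs

        def conflicted(self, reviewer, author):
            # No earning tokens on papers with citation links to your own work.
            return (reviewer, author) in self.citations or \
                   (author, reviewer) in self.citations

        def complete_review(self, reviewer, author):
            if self.conflicted(reviewer, author):
                raise ValueError("citation link between reviewer and author")
            self.balances[reviewer] = self.balances.get(reviewer, 0) + REVIEW_REWARD

        def submit_paper(self, author):
            if self.balances.get(author, 0) < REVIEW_COST:
                raise ValueError("not enough tokens; review more papers first")
            self.balances[author] -= REVIEW_COST

        def transfer(self, donor, recipient, amount):
            # Covers the "other people can transfer their tokens to you" case,
            # e.g. a supervisor topping up a new Ph.D. student.
            if self.balances.get(donor, 0) < amount:
                raise ValueError("insufficient balance")
            self.balances[donor] -= amount
            self.balances[recipient] = self.balances.get(recipient, 0) + amount

(Buying tokens with cash, the third option above, would just be a transfer from a fund account that pays external reviewers.)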
Sure, but the identical exchange of value occurs in the current system; it just goes unsaid. You review my paper and maybe ignore some of the problems, poor assumptions, whatever crap I used to make the sausage, and just let me get it published; I do the same for you. You may not "know" who is reviewing your paper, but at least in the natural sciences there are so few technically qualified individuals to review some very domain-specific publications that, although you may not know for sure, you can figure out who your reviewer is without too much work. Since both reviewers and authors benefit from getting their own work published, there is a silent consensus for letting bad work slip through.
No, obviously I don't mean you review a person's paper and they review yours. Use your common sense, mate.
I mean if you publish a paper and get four reviews, you then review another four papers at some point - perhaps at an entirely different conference later that year. You just make sure you review at least 4n papers for each n papers you publish.
> Computer science basically does this but replaces arXiv with an industry body the ACM.
There's a big difference: arXiv papers are open-access (everyone can download them), but ACM papers are closed-access (you need a subscription to read them). I wish that computer science research were published on arXiv, but we're not there yet...
1) Essentially no one knows about the Author-Izer system or understands how it works, especially outside academia. Readers in companies, poorer countries, etc., can't be expected to guess that the way to read an article is to go to the author's webpage and follow Author-Izer links (assuming they have been set up). What these potential readers will do is: search for something on Google (or follow a link from somewhere else), hit the paywalled ACM DL page, and give up. This convoluted system of "open-access from one place, closed-access from another" makes no sense.
2) For authors who actually post preprints of their work, yes, you can read it this way. But then you end up with multiple versions of the same work that are often subtly different: does the author's preprint integrate reviewer feedback? Does it fix bugs that were found after the camera-ready version was submitted? And anyway, preprints posted on authors' websites usually disappear when they change institutions or retire, so it's not a good solution.
3) Yes, you can retain copyright on papers published with ACM, but then you need to give them an exclusive license to publish, so this still limits what you can do with your work (besides some narrowly worded exceptions). There is also a 3rd option of making the work open-access with no exclusive transfer, but this costs at least $700 per article, which is obviously excessive compared to the actual costs of hosting a 12-page PDF.
So I don't think it's fair to compare publishing with ACM and publishing on arXiv, because ACM is not open-access and publishing with them requires you to pay excessive fees or sign agreements restricting how you can publish your work, i.e., the opposite of what's in the interest of science.
Purely open-sourced publication and review; you have to publish your data along with your paper. A wikipedia style review process (or maybe more of a stackoverflow/ wikipedia hybrid).
The biggest and most critical miss of the whole process is not having the data a paper is based on published along with the paper. If something is irreproducible, is it really scientific? If I don't have your data, can I really reproduce your results?
> resource-intensive process of peer review in open-access journals
I think you may be unfamiliar with how the reviewing process works. I get an e-mail asking me to review an article for a journal. If I say yes, I donate between two and ten unpaid hours of my time producing a review. After I am done with my review, I send it to the AE. The associate editor donates additional hours of his/her time reading my review, plus one or two others, along with potentially the entire manuscript, in order to make a recommendation on the paper. This recommendation gets forwarded to the editor, who makes the final call on publication. With few exceptions (the only ones I'm aware of being the biggies like Nature, Science and Cell), every single person involved in this process is uncompensated. In some cases the editor may receive a small honorarium, but it's trifling compared to the amount of time it takes to run a large, prestigious journal.
You're absolutely correct that reviewing is a resource intensive process. But you're wrong if you believe that the publishers are shouldering a significant part of the resource burden. This is exactly why people are so pissed off when these same publishers turn around and charge our own campus libraries a five-figure sum to access the same journals that we work basically for free to produce.
A token-curated registry (TCR) is an emerging model of information curation developed by Mike Goldin & Simon de la Rouviere at ConsenSys. It's an incentive system which decentralises the work of creating and maintaining high-quality repositories of valuable information.
Academic publishing seems like an ideal use-case for this model. There's a frenzy of activity going on in this space at the moment, and hundreds of details to work out - rather than get wrapped up in theorising, we want to release a v1.0 quickly and learn from there.
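For readers unfamiliar with the mechanics, here is a heavily simplified sketch of the core TCR loop (stake a deposit to list an item, anyone may challenge it, a token-weighted vote decides, and the loser's stake rewards the winner). Every name and number below is an illustrative assumption, not our actual design:

    # Toy token-curated registry: deposit to apply, challenge, token-weighted vote.
    MIN_DEPOSIT = 100  # illustrative parameter

    class Registry:
        def __init__(self):
            self.listings = {}  # item -> deposit staked by its owner

        def apply(self, item, deposit):
            # Listing a paper/journal requires putting tokens at risk.
            if deposit < MIN_DEPOSIT:
                raise ValueError("deposit below minimum")
            self.listings[item] = deposit

        def challenge(self, item, stake, votes_for, votes_against):
            # votes_* are token-weighted totals from the community vote.
            if votes_against > votes_for:
                # Challenger wins: the item is delisted and the challenger
                # collects the owner's deposit plus their own stake back.
                return self.listings.pop(item) + stake
            # Listing survives: the listing absorbs the challenger's stake.
            self.listings[item] += stake
            return 0

The point of the deposit/challenge game is that it's costly to list junk and profitable to catch it, which is what makes the curation self-sustaining without a central gatekeeper.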
Please reach out to me via LinkedIn if you want to contribute to the pilot, or can help us raise awareness among the academic community. We're a small but dedicated group, open to partnering with universities, journals, crypto-developers, and other interested parties (especially academics).
Note: we have no plans or desire to make money out of this project. We're a group of well-connected enthusiasts who want to make headway on solving this problem.
I’m not an academic so forgive me for a noob question.
Is there a good introductory essay/book on how the academic publishing industry is setup, the workflow and the incentive structure for each party involved?
As an employee at an open access publisher, I can't agree that funding the peer review process is the biggest problem. Our surveys of our own reviewers have shown that only a small (<20%) minority of reviewers wish to be paid a fee for their reviews. The majority of referees prefer the current model of crediting volunteer reviewers in a regularly published "acknowledgements" article. I assume the incentive may be that reviewers show these to their tenure committee.
In fact the most common complaint from peer reviewers is about the length of the review period. Because open-access journals have authors as our customers, the market pressure is to provide excellent customer service, and authors prefer publishers who will process their papers quickly. This pressure is passed on to reviewers who must complete reviews much faster than the old norms under Springer.
Funding, or a startup enterprise, is certainly needed to administer the peer review process. I'm willing to bet it would cost less than $10 per paper in universal use. No money is needed to pay reviewers; in fact, the savings would be so great that there could be a courtesy payment. The ruinous censorship and false ownership of the publishers should be broken.
Citation count isn’t always the best indicator depending on the field.
It’s hard enough to find good papers without the literature being further polluted with substandard work.
The job of a reviewer isn't a simple yes/no answer. Reviewers often suggest sweeping changes to make work publishable. This helps to ensure that the literature is populated with well-put-together work that is free of bias and glaring mistakes. Otherwise we'd spend every minute helping students to spot the difference and wading through misleading research.
>Otherwise we’d spend every minute helping students to spot the difference and wading through misleading research.
I already spend far too much time doing this. I can't imagine how futile literature research would be without peer review. There are already enough illogical conclusions and poor study designs to wade through.
Absolute crap will be roundly ignored with or without peer review. Science and the Humanities worked fine before peer review, and if it's dropped they'll work fine again. If Grigori Perelman puts another groundbreaking paper on arXiv, people will look at it without the benefit of peer review, just like the last one.
> Science and the Humanities worked fine before peer review
Before peer review they were not dealing with the volume of scientific content that is submitted nowadays. One could debate whether citations are the only metric necessary, but there needs to be a filter before that even happens, or else we will be flooded with an avalanche of garbage research. It would be trivial for someone with deep enough pockets to order a network of junk papers citing each other and debunking climate change or evolution and make a real mess. As painful as reviewing is, it's a necessary evil for now.
> Absolute crap will be roundly ignored with or without peer review.
Things might be ignored, but only after a person has already wasted their time looking at it and determining it is crap.
It was harder for cranks to publish back when journals were purely physical and a crank would have to come up with the printing costs himself. Now that anyone can publish for free on the internet, peer review is even more important for establishing what content out there is worth paying attention to and which is not.
You neglect the costs of submission, revision, resubmission, etc., which are borne by the authors. Even if we assume that all involved are pure of heart and no one is deliberately delaying publication of their rivals' work, the time from submission to publication in economics is on the order of two years. This is insane, so the actual intellectual conversation has moved to working papers, with final publication in a journal being as much for archival and career-progression metrics as anything else. I believe the situation in much of computer science is similar, with conference papers (rather than working papers) serving as the workaround for the fact that pre-publication peer review is unbearably slow.
Peer review happens anyway, but faster, and in public, without the insanity of revise and resubmit.
Journals are not where the action is in Economics or Computer Science. It works for them. Why not for everybody?
> You neglect the costs of submission, revision, resubmisssion etc. which are borne by the authors.
I neglect those because journals in my own field are both free to publish in and, more often than not, open-access. I do understand that not everyone is so fortunate, however.
I’m talking about time, not money. Because those costs absolutely are borne by the authors. Insofar as peer review hinders the free dissemination of ideas it also hurts the scientific community and those who depend on its research.
> You’d be stunned at some of the rejected submissions
You'll be stunned by some of the accepted submissions, such as these highlighted by "New Real Peer Review", @RealPeerReview on Twitter:
Glaciers, gender, and science: A feminist glaciology framework for global environmental change research. "Merging feminist postcolonial science studies and feminist political ecology, the feminist glaciology framework generates robust analysis of gender, power, and epistemologies in dynamic social-ecological systems, thereby leading to more just and equitable science and human-ice interactions." http://journals.sagepub.com/doi/abs/10.1177/0309132515623368
Black Anality "In turning attention to this understudied and overdetermining space — the black anus — “Black Anality” considers the racial meanings produced in pornographic texts that insistently return to the black female anus as a critical site of pleasure, peril, and curiosity." https://read.dukeupress.edu/glq/article-abstract/20/4/439/34...
Rum, rytm och resande: Genusperspektiv på järnvägsstationer PhD thesis on gendered train stations. "The overriding aim of this study is to examine how male and female commuters use and experience railway stations as gendered physical places and social spaces, during their daily travels. [...] Through this theoretical frame the thesis analyses gendered power relations of bodies in time, space and mobility." http://liu.diva-portal.org/smash/record.jsf?pid=diva2%3A7424...
“I'm a real catch”: The blurring of alternative and hegemonic masculinities in men's talk about home cooking "while many participants drew on what they saw as alternative masculinities to frame their cooking, these masculinities may in fact have hegemonic elements revolving around notions of individuality and romantic or sexual allure." https://www.sciencedirect.com/science/article/abs/pii/S02775...
becoming cyborg: Activist Filmmaker, the Living Camera, Participatory Democracy, and Their Weaving. "Throughout this article I use lowercase letters to deemphasize the importance of the individualized human in cyborg connection." http://irqr.ucpress.edu/content/10/4/340
Speciesism Party: A Vegan Critique of Sausage Party "In this article, we have described how Sausage Party reflects and reproduces intersecting oppressive power relations of species, gender, sexuality, ethnicity and different forms of embodiment." https://academic.oup.com/isle/advance-article/doi/10.1093/is...
Your choice of articles says more about you than the quality of the articles. Also, a number of these articles were published pursuant to mandatory thesis requirements. They weren't intended to expand the frontiers of human knowledge; they were intended to show that the degree-earner is capable of deep analysis in their respective PhD program.
The otter article, for example, appears to be a well-researched analysis of otter populations in Alaskan waters correlated with human settlement and economic activities. The pumpkin latte article looks into the correlation of light colors and whiteness and advertising in the US, an issue which frequently pops up in marketing gaffes (including one as recently as last week for a beer commercial). The queer organizing article looks at diversity in organizations, analyzed using sociological framework (i.e., "norms") rather than the business management framework. The Sausage Party article is an academic analysis of the movie, which is a surprisingly deep commentary of race, gender, and ethnicity in the U.S.
The above is a completely fraudulent article that was accepted for publication, demonstrating the counterpoint to your argument. The article is literally gibberish, but got through 'peer review'. I know the lead author on this one, and she did this to make a point about the inconsistency of peer review. The article is so clearly fake that it's laughable, but that is the point.
The choice of articles is not mine; they are recent examples from said Twitter account. There are many more (e.g. the hashtag #NRPR50K).
I don't want to discuss the merits of individual papers, but I think as a whole they say something worrying about the current state of academic publishing.
So, let me just tell you why I am annoyed and concerned with these and similar papers:
1. Often, they torture language, in that the authors don't seem to write to elucidate and educate, but to obfuscate. (I'm well aware that scientific fields develop their own jargon; that it is useful; that the meaning of a term doesn't necessarily correspond with the ordinary meaning; etc. But even granting all that, it seems to me that a lot of that writing is wilfully opaque and unnecessarily jargon-laden.) (Note, BTW, that you communicated the gist and utility of the articles much better than the authors themselves.)
2. Publishing "pursuant to mandatory thesis requirements" is part of the problem: people get degrees and university positions with research that does not expand the frontiers of human knowledge. Autoethnographic research is particularly galling in that respect (such as a recent paper about the time the author fell off a chair).
3. They delegitimise academia. There used to be a very broad consensus in most societies that education and research is immensely valuable and ought to be supported by government, and that academics should have immense freedom to pursue what they deem important and valuable without any interference and censorship (the essence of tenure). Papers such as these corrode that consensus.
4. Critical theory papers are ineffectual, I'd say, in achieving their laudable goals. I was going to mention MLK and Marx and Keynes ("Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist."), but this is too long as it is.
5. "Publish or perish" and Pay-to-publish open access also rear their ugly head.
I maintain that not only rejected papers, but also some accepted papers are shocking.
Number of citations is gamed too easily. You would first get self-citations and, after you started ignoring those, citation rings (groups of people colluding to cite each other excessively).
So, yes, you would need to assign a reputation to citations. I don’t think that’s a solved problem or even simpler than the problem of assigning reputation to papers.
This is the same problem Google faced and 'solved' with the PageRank algorithm. I've seen some attempts at applying this to the reference graph, but it doesn't seem to be taking off.
It's not taking off because, out of the box, it can't replace the gatekeepers: there can be a long delay, sometimes decades, between publication and the moment a major work is recognized and cited. Citations form a DAG (old papers never cite newer ones), so a recursive application of PageRank has nothing to distinguish new work: the newest papers, having no incoming citations, all sit at the identical baseline weight by definition.
So in the short run, you still need a prospective measure of scientific value, and acceptance for publication in a selective journal ("impact factor") serves that purpose well.
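To make the parent's point concrete, here is a small sketch of textbook PageRank run on a made-up citation graph (nothing here comes from any real system): a paper that nobody cites yet, however important, stays pinned at the teleport baseline (1 - d) / n.

    def pagerank(graph, d=0.85, iters=50):
        # graph: paper -> list of papers it cites (edges only point backwards in time).
        nodes = set(graph) | {p for cited in graph.values() for p in cited}
        n = len(nodes)
        rank = {p: 1.0 / n for p in nodes}
        for _ in range(iters):
            new = {p: (1 - d) / n for p in nodes}  # teleport baseline
            for citing, cited in graph.items():
                for p in cited:
                    new[p] += d * rank[citing] / len(cited)
            rank = new  # NB: dangling nodes leak rank; fine for illustration
        return rank

    # "new_2024" cites earlier work but has no incoming citations yet,
    # so its score is stuck at (1 - 0.85) / 3 = 0.05 regardless of merit.
    print(pagerank({"new_2024": ["classic_1998", "survey_2005"],
                    "survey_2005": ["classic_1998"]}))

Any retrospective metric built on the citation graph inherits this cold-start problem.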
The key thing not being pointed out is that these papers are generally printed in a journal or as proceedings to a conference. So there needs to be a hard deadline by which the work needs to be complete, and seen as valid, for printing. So in that sense, there needs to be a peer review, because there aren't any take-backs at this stage.
If you're really interested in the question, there is a field of research called scientometrics that is dedicated, among other things, to studying exactly this kind of question.