The current scientific publication model benefits only the gatekeepers of the publishing industry. Money goes straight into publishers' pockets by ripping off taxpayers twice at a steep price: research institutions now pay high fees both for paywalled access and for open-access publishing. Whether you read or publish, taxpayer money somehow ends up with these publishers. The steep price has also been shown to be unjustified, since peer review is done for free (for credit, not money) by scientists themselves, and plenty of open-review and open-science projects demonstrate that the cost is not warranted.
It might be worth repeating part of my comment in [1]:
> The current business model as a whole is a legacy institution built on an earlier monopoly by a charlatan named Robert Maxwell [2]. He lured scientists with lavish hotels and extra perks to build an initial reputation, then monopolized the entire industry for decades. You can find a good review of this scheme in the YouTube video below [3].
A friend calls such middlemen bridge trolls: they find a useful resource that someone created and many people need, and aim to extract a toll for it, without having created it themselves.
Let’s be more like the creators and less like the bridge trolls.
Why can't we have a wiki-style libgen "alternative", if the publishers' only "copyright" arises from the "editing" and miscellaneous efforts they put into it? Would libgen be "low quality" if it somehow had access to all research, irrespective of quality?
There are numerous open-access academic publishers and preprint servers, including PLOS, arXiv, and others. Some of them (SSRN) have been bought by academic publishing monopolists (Elsevier), which have also bought up content-management tools (Mendeley).
The problem going forward is breaking the stranglehold that prestige journals have over academic careers, a key source of their power. To gain entry into or promotion through academic ranks, academics in many fields must publish in prestige journals, many of which are owned by the monopoly cartel.
There's also the small matter of the 60+ million previously published academic articles, many under a copyright regime which would have seen them enter the public domain years or decades ago, which remain locked behind copyright prisons.
In short: your idea's not bad; it's actually being implemented. It remains difficult to carry out for structural reasons, and it still addresses only part of the problem.
How do I know from a DOI/name/title that the research is good? Is just "being" in the journal more important than the details of the peer reviewers and their own research? What I'm asking is: how do you evaluate the "goodness" of a paper if you weren't told it came from Elsevier or some other publisher? I'm genuinely curious.
For me personally, I'm involved in academia and research so I live and breathe it. It's difficult for me to evaluate papers outside of my domain, and takes a huge amount of background research to do so.
A good first filter is usually (though not always) the journal it's published in. The primary thing here, though, is not looking for good journals but rather avoiding the known predatory ones.
My concern is for the people who don't live and breathe research. There is a massive amount of depth and detail that is easily overlooked when you're unfamiliar with the subject.
Things like poor methodology can invalidate results, and people unfamiliar with the particular methodology can miss that. Even the statistical analysis can be flawed, or the authors may use a statistical method that is not appropriate for their data or study design.
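To make the statistics point concrete, here is a minimal sketch (Python, assuming numpy and scipy are available; the group sizes and alpha are illustrative, not taken from any particular paper) of one classic non-obvious flaw, uncorrected multiple comparisons: test enough hypotheses on pure noise and something will look "significant".

    # Run many null comparisons; at alpha = 0.05, roughly 1 in 20
    # will look "significant" even though there is no effect at all.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests = 20
    false_positives = 0

    for _ in range(n_tests):
        # Both groups come from the SAME distribution: any "effect" is noise.
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_tests} null comparisons reached p < 0.05")

A paper that quietly runs twenty comparisons and reports only the one that "worked" can look methodologically clean to a reader outside the field.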
The studies published by people who are really into ESP and paranormal ideas are annoying like this.
It's an exercise in making papers that look like they have vaguely decent protocol and statistical methodology, but arrive at insane, obviously wrong results through cleverly non-obvious, intricate flaws.
Like the Underhanded Code Contests, but for research.
> It's an exercise in making papers that look like they have vaguely decent protocol and statistical methodology, but arrive at insane, obviously wrong results through cleverly non-obvious, intricate flaws.
What if I told you that 90% of all research findings are like that?
>just "being" on the journal is more important than the [...] research? i am saying, how do you evaluate the "goodness" of a paper if you weren't told it came from Elsevier or some other publication?
I think I understand the motivation for your question and why you're confused.
Your mental model for your question seems to be this: if a scientist is competent, they should be able to evaluate any Libgen paper without outsourcing the "good/bad" assessment to a middleman publisher like Elsevier. Therefore, a file-hosting website like Libgen is all that academics should need.
But the actual mental model is this: scientists/readers are busy and don't want to waste time on bad papers. E.g., the journal Nature rejects ~93% of its roughly 10,000 yearly submissions.[1] Reading Nature therefore spares readers from wading through the other ~9,300 papers. In short, readers still want curation because it saves time.
Platforms like Libgen or Sci-Hub solve the distribution/download of PDF files. But hosting documents is the easy part; the hard problem they don't solve is human curation. Conceivably, an academic "Libgen" would host all 10,000 submitted papers, and busy readers are not interested in that. Instead, having papers pass through several levels of human curation filters them into a manageable subset for readers, which leads to accumulated prestige for both the journal and the particular paper. Prestige in turn leads to academic promotions, lab funding, etc. Libgen as a PDF host does not have that prestige-accumulation feedback loop.
EDIT reply to: >I don't understand, how does sci-hub undermine the prestige-accumulation feedback loop?
I was not saying Sci-Hub undermines prestige at journals like Nature. Rather, Sci-Hub does not _solve_ the problem of creating prestige/impact for submitters (including unknown ones) who need their papers validated by peers (especially respected peers). Sci-Hub only solves the file-hosting aspect -- the PDF download button. But hosting the "download links" is not the hardest problem that middleman publishers solve.
Why can't we have a system of peer review on Sci-Hub? Scientists would have a verified profile and, well, "review" papers as they go, so if you're interested in a field, you could either search the topic or find a name you know and go from there.
Why would that not work?
What I'm suggesting is: let Sci-Hub help researchers do the peer-review work themselves. Think of it as a highly restrictive version of Wikipedia? That sounds stupid, but it would be "open" to public scrutiny and, well, free for people to use the research.
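For what it's worth, the data model for such an overlay is not the hard part. Here is a minimal sketch (Python; every name in it, Reviewer, Review, credibility, is hypothetical, not an existing Sci-Hub or ORCID API):

    # Hypothetical data model for an open peer-review overlay keyed by DOI.
    from dataclasses import dataclass, field

    @dataclass
    class Reviewer:
        orcid: str            # verified identity, e.g. an ORCID iD
        field_of_study: str

    @dataclass
    class Review:
        doi: str              # the paper under review
        reviewer: Reviewer
        verdict: str          # e.g. "sound" or "flawed methodology"
        comment: str
        # ORCIDs of verified peers who co-sign this review.
        endorsements: list[str] = field(default_factory=list)

    def credibility(reviews: list[Review]) -> float:
        """Toy aggregate: share of review weight judging a paper sound,
        where each review is weighted by its peer endorsements."""
        if not reviews:
            return 0.0
        weights = [1 + len(r.endorsements) for r in reviews]
        sound = sum(w for r, w in zip(reviews, weights) if r.verdict == "sound")
        return sound / sum(weights)

As the comment above notes, though, the hard part is social rather than technical: getting busy, respected reviewers to spend their time here instead of at Nature, so that the endorsements actually carry prestige.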
You would expect that, at least during the covid remote-work period, they would not go after Sci-Hub, the remote-work tool that makes science faster and more productive (TM).
There are selfish individuals in this world who just want to tax society without contributing much in return. They exist in most industries, but this one is particularly detrimental to everyone. You would think there would be some activist hackers out there fighting on the side of Sci-Hub beyond just extracting documents.
[1] https://news.ycombinator.com/item?id=29218202
[2] https://en.wikipedia.org/wiki/Robert_Maxwell
[3] https://www.youtube.com/watch?v=PriwCi6SzLo