
Value add to whom? It sounds like you believe open source developers owe something to someone, which simply isn't the case. You should evaluate the license(s):

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.



I don't see where this license obligates github to distribute future versions.

Also, if we're expecting people to do only the bare minimum specified in their license, GitHub's license gives them all the leeway they needed for their actions too.


GitHub is under no obligation to distribute further versions. Neither is Marak under any obligation to maintain and upkeep the repository. While his actions may have been in bad faith toward the community, you are the consumer who CHOSE to use his software, freely and at no cost to yourself. You have no right to dictate how that repository is used, especially if you never donated or contributed to the project itself.


> While his actions may have been in bad faith toward the community, you are the consumer who CHOSE to use his software, freely and at no cost to yourself. You have no right to dictate how that repository is used, especially if you never donated or contributed to the project itself.

Either we're going by a legalistic interpretation of the terms, in which case Marak was free to fuck up his project and GitHub was free to kick him off npm for it, or we're agreeing that people can be held to moral standards apart from legal ones.

If we agree that moral standards about bad faith should prevent npm/GitHub/Microsoft from taking control of something that Marak has put work into, then we should also be able to agree that moral standards should prevent Marak from releasing a deliberately broken version of a package as a fuck-you to corporate users. I think even that action of Marak's was wrong, but the backlash also landed on many open source projects.


I don't have a problem with GitHub or npm taking down his project. Just like I don't have a problem with him poisoning it if he so chose. I do have a problem with people here whining about their own selfish wants. Again, not one person is obligated to use faker.js; if you wanted the security that parts of your code base would not be tampered with, then you probably shouldn't have been using a third-party library that wasn't under your control in the first place. Common sense is all too lacking here across the first world.


At the end of the day, open source is built on trust. Even the more paranoid architectures outside of npm (checksums via side-channel, curated package distributions maintained by a third party such as Debian) don't protect the end user from actual malicious action on the part of the trusted source. Consider the story of how the University of Minnesota got banned from adding patches to Linux (https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...). In that case, they were caught. But what if they hadn't been caught (or if a critical mass of Linux maintainers went rogue and were in on it)? Enough malicious actors with the right credentials can publish and checksum a damaging package in any system that allows code reuse. It is, perhaps, riskier to rely on a system with one maintainer. If that's the case, moving Faker.js to community control was a great first step in restoring trust in the package; it's harder to compromise a group.
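To make the limitation concrete, the side-channel checksum flow mentioned above can be sketched as follows (the file name and digest source are hypothetical; in practice the expected digest would arrive via a channel separate from the download itself, such as a signed release announcement):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    # This proves the downloaded bytes match what the source published;
    # it does NOT prove the source itself is honest, which is the point
    # being made above.
    return sha256_of(path) == expected_digest
```

Note that this check only defends against tampering in transit or at the mirror: if the trusted maintainer publishes a malicious tarball along with its matching checksum, verification passes.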

We can sit here and cluck our tongues and say "Should have known better than to trust someone else's code," but that's just victim-blaming. Marak broke trust. He found a system with a vulnerability and he exploited it. And everybody uses a system that is vulnerable in some way.

Because he did this, the system interpreted his actions as damage and routed around them. The system may change to make this attack harder in the future. And the result will be more complex and have more failure modes, and everything will be slightly worse as a result because we have to replace with process what we were previously able to do with human-to-human trust. "Nice job breaking it, hero."


IANAL, but I highly doubt that license would protect you if you were being intentionally malicious (as opposed to, say, grossly negligent).


I'm just as much an armchair lawyer as most of the rest of HN, but my understanding is that liability waivers aren't considered enforceable if malicious intent or gross negligence is involved. Anybody with more legal expertise want to clarify?


I am also not a lawyer, but this is what I found for New York, and I would be surprised if it doesn't apply in most states and many other countries too:

> Under New York law, a party can waive ordinary negligence, but not gross negligence, reckless conduct, willful/wanton conduct, or intentional acts. See Kalisch-Jarcho v. City of New York, 58 N.Y.2d 377 (1983); See also Restatement (Second) of Contracts § 195 (1981) (“A term exempting a party from tort liability for harm caused intentionally or recklessly is unenforceable on grounds of public policy.”).


This would be like suing a food pantry because they stopped giving out free food.


Marak didn't stop giving out free food. He intentionally poisoned a food delivery without notifying the recipients.

Using your food pantry analogy, it sounds more like he should go to jail.


He didn't deliver food. People picked up whatever they could find because they felt entitled, and then said... oh look, there was a sign here all along.


If we're stressing the analogy, Marak put food out in a park with a sign "Take the food, just tell everyone you got it from Marak, quality not guaranteed".

Then one day the food had laxatives because he felt not enough people put money in the tip jar.

I think that would still be actionable. Most people wouldn't bother, just like I don't think anyone is seriously thinking of taking Marak to court over what is mostly broken CI builds, but maybe the park staff won't let him offer food there anymore.


There was no more "food". An infinite loop in a script isn't malicious in itself; a user can terminate a script, and by running an unknown script they are assuming some liability as well. What you're advocating is that if an open source developer changes their code, even to, say, prompt the user to confirm execution where before there was no prompt, and that somehow breaks automation the user has built (not the developer), then the developer should be liable for harm. That sets a terrible precedent, and the end result is that no one will want to create open source software anymore. I can only hope you are simply playing devil's advocate rather than being serious, because if you are serious then I hope you reap what you sow.


"By eating food of unknown provenance they are assuming some liability as well" isn't really an argument that would hold up in a court of law if someone intentionally taints the food.

> What you're advocating is that if an open source developer changes their code, even to, say, prompt the user to confirm execution where before there was no prompt, and that somehow breaks automation the user has built (not the developer), then the developer should be liable for harm.

We should probably divide the conversation into two threads: one on the tainted-food analogy, and one on the changing-code reality. Because they aren't the same, and one can reach weird conclusions trying to conflate them.

Liability for tainted food is pretty settled law. If someone eats your food and gets sick, it's a problem for you. If they eat it and get sick and can prove you poisoned it, it's a real problem with real legal consequences. Food handlers and preparers go out of their way to avoid both of those scenarios.

Intentionally modifying code knowing you'll break downstream consumers hasn't been tested (to my knowledge) in court, so we can set that aside. But is it immoral? That's going to depend on one's morality, but I have a hard time seeing my way to agreeing with the standpoint "Sure, it's moral. User beware." That principle, writ large, creates a strictly worse world, one where people are hiding in their digital caves, unable to trust anything outside. A lot of people (including GitHub's and npm's owners) are trying to build something better than that.

Marak had a right to do what he did, but that doesn't mean it was right. We don't have to agree that "because he could, it was good" (that's just rule-by-power, and almost nobody thinks that's a good moral philosophy), and I applaud the open-source community members who stepped in to minimize his harm.


GitHub does not equal the open source community. Quite the opposite, actually: it is a closed system designed to take open source software and put it behind a closed-source ecosystem, and apparently to moderate open source developers and take away their individual freedoms.


How was Marak's individual freedom taken away? They locked his account temporarily (because what happened looked like somebody had stolen his credentials and was impersonating him)... Then what happened?

His freedom to post what he wants in his repo does not extend to a freedom to screw users depending on the software he licensed for open source use working. GitHub and npm took steps to protect users from his malicious actions.


Value-add to the people who actually use GitHub to build software, of course. It's fine to discuss license terms and what you should or shouldn't expect when you use GitHub/npm/etc., but in the real-world JS landscape, many projects (commercial and otherwise) use many open-source packages through complex dependency hierarchies.

You can think what you want about whether that's good or bad, but it's unquestionably our current reality. Protecting JS projects from malicious updates, regardless of whether or not the project license technically permits the author's actions, is clearly in the best interest of users of this ecosystem.


No one took steps to protect JS projects from malicious updates in general. They took action in this one case but did not fix the underlying issue, which is with package managers.


It is true that there are some vulnerabilities in the standard design of npm package management. By default, the system assumes sources can be trusted and pulls new versions aggressively. That's very convenient for developers... assuming somebody doesn't abuse it in an extremely malicious way by building up trust with a working package and then pushing a change designed to screw users.
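The "pull aggressively" default can be made concrete: npm records caret ranges like `^5.5.3` by default, which match any later minor or patch release the moment it is published. A simplified matcher illustrates the difference between a range and a pin (version numbers are hypothetical; this sketch ignores prerelease tags and npm's special-casing of 0.x versions):

```python
def caret_match(range_spec: str, version: str) -> bool:
    """Does `version` satisfy a caret range like "^5.5.3"?
    Caret means: same major version, at or above the listed version."""
    base = tuple(int(p) for p in range_spec.lstrip("^").split("."))
    ver = tuple(int(p) for p in version.split("."))
    return ver[0] == base[0] and ver >= base

def exact_match(pin: str, version: str) -> bool:
    """A pinned dependency only ever matches one version."""
    return pin == version

# A later, possibly malicious, release is picked up automatically:
print(caret_match("^5.5.3", "5.6.0"))  # True
# An exact pin is not:
print(exact_match("5.5.3", "5.6.0"))   # False
```

This is why pinning exact versions or installing from a lockfile mitigates this class of attack: a malicious update isn't picked up until someone deliberately changes the pin.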

Fortunately, it appears the system has been stress-tested now, and we can see how the damage can be mitigated. If this kind of attack can be minimized by what is essentially moderation and curation, everything's good.


The only reason anything happened is because the developer intentionally crippled their own package, which they had the right to do. If someone had modified a package to do something actually malicious, you could go months without ever finding out.


I don't disagree that there are degrees of harm and differences in the ease of detecting such harm. But what's the significance of the distinction? Whether it's found out immediately or months later, the remedy for the community will likely be the same if the damage is widespread enough... Flag the version as bad in npm, break the connection between npm and the GitHub repo if the damage was purposeful, and the community picks up the package and starts maintaining a non-malicious version.



