Hacker News | femto113's comments

Users already proven to be trustworthy in one project can automatically be assumed trustworthy in another project, and so on.

I get that the spirit of this project is to increase safety, but if the above social contract actually becomes prevalent, this seems like a net loss. It establishes an exploitable path for supply-chain attacks: attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in target project (possibly through multiple intermediary projects). If this sort of cross-project trust ever becomes automated, then any account that was ever trusted anywhere suddenly becomes an attractive target for account-takeover attacks. I think a pure distrust list would be a much safer place to start.


Based on the description, I suspect the main goal isn't "trust" in the security sense, it's essentially a spam filter against low quality AI "contributions" that would consume all available review resources without providing corresponding net-positive value.


Per the readme:

> Unfortunately, the landscape has changed particularly with the advent of AI tools that allow people to trivially create plausible-looking but extremely low-quality contributions with little to no true understanding. Contributors can no longer be trusted based on the minimal barrier to entry to simply submit a change... So, let's move to an explicit trust model where trusted individuals can vouch for others, and those vouched individuals can then contribute.

And per https://github.com/mitchellh/vouch/blob/main/CONTRIBUTING.md :

> If you aren't vouched, any pull requests you open will be automatically closed. This system exists because open source works on a system of trust, and AI has unfortunately made it so we can no longer trust-by-default because it makes it too trivial to generate plausible-looking but actually low-quality contributions.

===

Looking at the closed PRs of this very project immediately shows https://github.com/mitchellh/vouch/pull/28 - which, true to form, is an AI generated PR that might have been tested and thought through by the submitter, but might not have been! The type of thing that can frustrate maintainers, for sure.

But how do you bootstrap a vouch-list without becoming hostile to new contributors? This seems like a quick way for a project to become insular/isolationist. The idea that projects could scrape/pull each others' vouch-lists just makes that a larger but equally insular community. I've seen well-intentioned prior art in other communities that's become downright toxic from this dynamic.

So, if the goal of this project is to find creative solutions to that problem, shouldn't it avoid dogfooding its own most extreme policy of rejecting PRs out of hand, lest it miss a contribution that suggests a real innovation?


I suspect a good start might be engaging with the project and discussing the planned contribution before sending a 100kLOC AI pull request. Essentially, some signal that the contributor intends to be a responsible AI driver, not just a proxy for unverified garbage code.


That's often the most difficult part. People are busy, and joining these conversations as someone green is hard unless you already have specific domain knowledge to bring (which requires either a job doing that specific stuff or other FOSS contributions to point to).


I think this fear is overblown. What Vouch protects against is ultimately up to the downstream project, but generally it's simply gated access to participate at all. It doesn't give you the right to push code or anything; the normal review process still happens afterward. It's just gating the privilege to even request a code review.

It's just a layer to minimize noise.


Did you experiment with getting an AI to critique incoming PRs, and ignoring ones where it finds clear red flags?


And then they become distrusted and BOOM trust goes away from every project that subscribed to the same source.

Think of this like a spam filter, not a "I met this person live and we signed each other's PGP keys" -level of trust.

It's not there to prevent long-con supply chain attacks by state level actors, it's there to keep Mr Slopinator 9000 from creating thousands of overly verbose useless pull requests on projects.


That is indeed a weakness of Web of Trust.

Thing is, this system isn't supposed to be perfect. It is supposed to be better, while worth the hassle.

I doubt I'll get vouched anywhere (though IMO it depends on context), but I firmly believe humanity (including me) will benefit from this system. And if you aren't a bad actor with bad intentions, I believe you will, too.

The only side effect is that genuine contributors who aren't popular / in the know need to put in a little bit more effort. But again, that's part of "worth the hassle". I'll take it.


It's just an example of what you can do, not a global feature that will be mandatory. If I trust someone on one of my projects, why wouldn't I want to trust them on others?


Yeah, because that's a different problem, unrelated to the one this is trying to solve.


> attacker "proves" themselves trustworthy on any project by behaving in an entirely helpful and innocuous manner, then leverages that to gain trust in target project (possibly through multiple intermediary projects).

Well, yea, I guess? That's pretty much how the whole system already works: if you're an attacker who's willing to spend a long time doing helpful beneficial work for projects, you're building a reputation that you can then abuse later until people notice you've gone bad.

This feels a bit like https://xkcd.com/810/


Unless you know all the terms, the valuation is pretty meaningless. For example, if I invest $500 for 1 share of your startup with an extra clause saying that I get the first $500 if you ever sell the company at any price, then you could claim I valued you at $500 a share. But since I make a profit if you sell the entire company for anything over $500, you could also say I valued you at $0.
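To make the arithmetic concrete, here's a hedged Python sketch of that payout. The 1000-share total and the simple "greater of preference or pro-rata" terms are assumptions for illustration, not something stated in the comment:

```python
def investor_payout(sale_price, preference=500, ownership=1 / 1000):
    """Payout for an investor holding a $500 liquidation preference on 1 of
    1000 total shares (the share count is made up for illustration)."""
    preferred = min(sale_price, preference)   # preference comes off the top
    converted = sale_price * ownership        # or convert to plain pro-rata
    return max(preferred, converted)          # investor takes the better deal

print(investor_payout(600))        # → 500: even a tiny sale makes them whole
print(investor_payout(1_000_000))  # → 1000.0: converts to 1/1000 ownership
```

The point of the sketch: the investor's downside barely depends on the "per-share price", so the implied valuation tells you almost nothing without the rest of the term sheet.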


I share your feelings. What it most brings to mind for me is the infamous StackSort from the image alt text on XKCD comic 1185 (https://xkcd.com/1185/)


Some additional color:

CORS today is just an annoying artifact of a poorly conceived idea about domain names somehow being a meaningful security boundary. It never amounted to anything more than a server asking the client not to do something, with no mechanism to force the client to comply and no direct way for the server to tell whether the client is complying. It has never offered any real security value; workarounds were developed before it even became a settled standard. It's so much more likely to prevent legitimate use than to protect against illegitimate use that browsers typically include a way to turn it off.

With CSRF protection, the idea is that the server wants to be able to verify that a request from a client is one it invited (most commonly that a POST comes from a form it served in an earlier GET). It's entirely up to the server to design the mechanism for that; the client typically has no idea it's happening (it's just feeding back to the server on a later request something it got from the server on a previous request). Also notable: despite the "cross-site" part of the name, it doesn't really have any direct relationship to "sites" or domains. Servers can and do use the exact same mechanisms to detect or prevent issues like accidentally submitting the same form twice.
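A minimal sketch of that echo-back mechanism in Python (a stateless HMAC-signed token; all names here are illustrative, not any particular framework's API):

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # held only by the server

def issue_csrf_token(session_id: str) -> str:
    """Token embedded in the form the server sends back on the earlier GET."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Checked when the POST comes back; the client just echoes the token."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)

token = issue_csrf_token("session-abc")
assert verify_csrf_token("session-abc", token)       # legitimate echo succeeds
assert not verify_csrf_token("session-abc", "fake")  # forged request fails
```

Note the client never interprets the token; it only round-trips it, which is exactly the "no idea it's happening" property described above.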


CSRF protection wouldn't work as easily if CORS (or, more precisely, the same-origin policy that CORS allows you to circumvent in controlled ways) weren't there. And both cookies and TLS also rely entirely on domains being a meaningful security boundary.

Without the SOP, evil.com could simply use JS to read the pages from bank.com, grab a valid CSRF token, and then ask the browser to send a request to bank.com using that stolen token and the user's cookie. This maybe could be circumvented by tying the cookie and the original CSRF token together, but there might be other ways around that. Plus, if the browser weren't enforcing the SOP, then different tabs might just be able to read each other's variables, since that is a feature today for multiple tabs accessing the same origin.


I’m not sure in what world domains aren’t a meaningful security boundary, but cross-origin prevention is absolutely necessary in a world with private web apps and scriptable browsers.

Maybe you are of the opinion that the web should have stayed document only and apps should have stayed native binaries, but as far as the web is concerned the default cross-origin request policy is a critical security pillar.


> domain names somehow being a meaningful security boundary

That's your Internet opinion. Perhaps expand on why you think that?

I reckon domains have quite a few strong security properties. Strong enough that we use them to guard access to valuable accounts.


Well, it does make sense to assume that by default different origins belong to different people, and some of those people won't behave in a friendly way toward each other.

There is little the server can do about that on its own, because of the request-based model. The state that persists between requests lives in cookies, and it's the browser's job not to expose those cookies all around. Turning off the same-origin policy would be a terrible idea. For one, it's what makes CSRF protection work, by not allowing cross-origin reads.


> the AWS team has implemented it poorly by enforcing it

This is whiny and just wrong. Best behavior by default is always the right choice for an SDK. Libraries/tools/clients/SDKs break backwards compatibility all the time. That's exactly what semver version pinning is for, and that's a fundamental feature of every dependency management system.

AWS handled this exactly right, IMO. The change was introduced in Python SDK version 1.36.0, which clearly indicates breaking API changes, and their changelog also explicitly mentions this new default:

   api-change:``s3``: [``botocore``] This change enhances integrity protections for new SDK requests to S3. S3 SDKs now support the CRC64NVME checksum algorithm, full object checksums for multipart S3 objects, and new default integrity protections for S3 requests.
https://github.com/boto/boto3/blob/2e2eac05ba9c67f0ab285efe5...
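For anyone who wants to stay on the old behavior until they've reviewed the change, the conventional fix is to pin below the minor release that introduced it. The exact bounds below are illustrative, and pinning botocore alongside boto3 is a precaution on the assumption that the checksum logic lives there:

```text
# requirements.txt: hold back the 1.36 release that changed the default
boto3>=1.35,<1.36
botocore>=1.35,<1.36
```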


I want to see the author using GCP. That's where you get actual compatibility breakages.


You mention semver, yet you also show that this API breaking change was introduced in a minor version.

Not entirely sure that's how things work?


You're not wrong - the semver doesn't indicate a breaking API change. But, to be fair, this wasn't a breaking API change.

Any consumer of this software using it for its intended purpose (S3) didn't need to make any changes to their code when upgrading to this version. As an AWS customer, knowing that when I upgrade to this version my app will continue working without any changes is exactly what this semver bump communicates to me.

I believe calling this a feature release is correct.


While I agree that the author is just whining about this situation and that AWS did nothing wrong, I'd argue that a change in defaults is a breaking change.



I don't think that's the case? AWS didn't fix a bug or remove some UB.

It's closer to changing a default key binding. Anyway, all I'm saying is: I would have considered it a breaking change because it changes default behavior.


I treat defaults as convenience features which are subject to change :shrug:


Correct. I'd just appreciate it if changes in default behavior were treated as breaking changes. It's really not that hard to grasp, and many OSS projects treat it as such.


Potentially unpopular take, but I don't think free services linked to physical goods are a good idea in practice. Maintaining such services costs money forever, companies can't sustain that as a business model, and so the market is littered with hardware that is now useless because the services it required have gone offline. If there's something to gripe about here, it's that Mazda removed the fob-based remote start, or that $10/month is too high, but it should not be that they're charging a maintenance fee for something they have to maintain.


This is definitely worthy of concern. There's an infamous case where an AI was trained to detect cancer from imaging, but all the positive examples included a ruler (to measure the tumor), so it turned out it was just good at detecting rulers. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9674813/#:~:tex....


Definitely agree liquidation is a non-starter here. They don't sign long-term deals with their own customers, so WeWork's only real asset is the brand. What the creditors will do is take over ownership from the equity holders, then try to milk the brand for any remaining value. It's conceivable many of the building owners might actually do OK directly operating WeWork-branded spaces and keeping the margin that used to go to WeWork for themselves.


The insurmountable problem is that the practical interests of "consumers shopping on Amazon" don't actually align with the abstract interests of "consumers in general" that the government is purporting to defend. On Amazon we want to find the right item (search, description, reviews), have strong confidence in the inventory and shipping promises (fulfilled by Amazon) and have reasonable confidence we're not getting screwed on price including shipping (Buybox, Prime eligible etc). If you chop those things apart it becomes essentially impossible to offer the overall experience that consumers clearly prefer.


The root cause here is that it should be an antitrust violation for any wholesaler or manufacturer to dictate retail prices to the retailer. They agree on the wholesale price, because that's what they're negotiating with one another; the retailer then chooses the retail price in their store.

Now if Amazon wants the MFN clause, no problem -- but it's the wholesale price they can't sell to someone else below, not the retail price. If Amazon wants the lowest retail price, that's up to them.


Since I haven't seen it mentioned, I'll throw out the Rails/Merb split in the late 00s as a significant momentum killer for Rails (and, by extension, Ruby). Rails 3 reunified them, but I don't feel like it ever fully recovered its developer mindshare, and the timing was such that it really opened the door for rivals like Express (Node) and Django (Python) to gain traction.

