
red teaming without approval of the target org is out of fashion, yes.


that helps a bit with regard to understanding why people are so upset about this.

but honestly, it seems like valuable research to me. it's unfortunate that it took some time away from busy kernel developers, and it's unfortunate that it ultimately makes the project look worse...

...but isn't that supposed to be part of the promise behind open source? it wouldn't surprise me if i learned that management at private orgs hires security firms to do this sort of thing. where does that come from in foss land, other than the ether of others tinkering, researching and experimenting?

i think this whole calling for the researchers' heads business is overblown. i read their paper, and it looks like they approached the situation quite ethically to me.


The deception / con job was done last year. This time around, there was another series of patches from the same research group, which were claiming to be security fixes, but which scored really high on the you-have-got-to-be-kidding-me scale of incompetence. When the Grad Student was called out on it, he claimed it was due to a static code analyzer which he was testing.

This was not disclosed up front, and at this point, it is impossible to tell whether he was creating an incompetent, nowhere-near-state-of-the-art code analyzer which gave bogus results, was too incompetent to realize they were bogus, and then submitted the patches to kernel developers asking us to be his QA without disclosing that this was what he was doing ---- or whether this was another human experiment to see how gullible the patch review process is at accepting bogus patches.
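
To give a flavor of the pattern (a made-up C sketch for illustration, not one of the actual submissions): imagine a naive analyzer that flags a pointer as possibly NULL, and the generated "fix" adds a check only after the pointer has already been dereferenced, so it can never prevent the crash it claims to prevent.

    /* Hypothetical illustration only -- not a real kernel patch. */
    #include <errno.h>

    struct cfg { int flags; };

    int apply_cfg(struct cfg *cfg)
    {
        int flags = cfg->flags;   /* if cfg were NULL, the crash already happened here */

        if (!cfg)                 /* the added "security fix": dead code, can never trigger */
            return -EINVAL;

        return flags;
    }

A patch of this shape is worse than useless: it adds review noise while fixing nothing, which is exactly the kind of thing that burns maintainer time.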

We have no idea, because his research group has previously submitted patches in bad faith, and UMN's IRB gave it the A-OK ethics review. At this point, the safe thing to do is to assume that this was also submitted in bad faith.


If you wanted to know whether the kernel review process is able to reliably catch malicious attempts, you literally could have just asked the kernel maintainers, and they'd have told you that no, review can't go that deep. Or looked at non-malicious bugs and observed that no, review does not catch all bugs with security implications.

You'd very likely have been able to get code past them even if you had told them that attempts were coming and got their approval.

Given that, you need a good justification for why that wasn't enough for you, and what value you truly added over what already was the common view on the issue. But at least we got valuable recommendations from their paper, like "there should be a code of conduct forbidding malicious submissions" and "you could try vetting people's real identities". (I guess they added to that last bit, giving additional data points that "works at a respected institution in a related field" is not sufficient to establish trust.)


the project is 30-odd years old now, and is no longer a hobby; it is now critical infrastructure that powers virtually everything.

it's unfortunate that something happened in the project that cost the maintainers a lot of hours, but that comes with the territory of working on important software, i'd argue.

i don't want an espionagetastic fundamentally untrustable and inescapable computing hellscape, airplanes falling out of the sky, cars crashing or appliances catching fire because it "already was the common view on the issue."

if the paper raises awareness of the issue, it's a good thing for society, it seems. if money materializes to do background checks on kernel contributors, that seems a good thing, no? if resources materialize for additional scrutiny, that seems a good thing, no?

if anything, the "common view" / status quo seems terribly broken, as demonstrated by their research. while what they've done is unpopular, it seems to me that ultimately the project, and the society large chunks of which it now powers, may be better off for it in the long run...


So you are saying that because a non-controversial method to show the same issue wouldn't cause the publicity connected purely to their way of operating, it was right to ignore the concerns? Lots of slippery slopes there, and science has spent a long time getting away from such a "the end justifies the means" attitude.


> Lots of slippery slopes there, and science has spent a long time getting away from such a "the end justifies the means" attitude.

if the developers were working for a private company, there could be a similar loss in time for a similar exercise that could be approved by company leadership, no? if tim cook ordered an audit and didn't tell anyone, wouldn't there be developers at apple feeling the same way?

look, i get it, it's unfortunate and people feel like it's a black eye... but it's also a real issue that needs real attention. moreover, linux is no longer a hobby, it is critical infrastructure that powers large chunks of society.


> So you are saying that because a non-controversial method to show the same issue wouldn't cause the publicity connected purely to their way of operating, it was right to ignore the concerns?

what's the non-controversial alternative? alerting the organization before they do it? that doesn't work. that's why scientists do blinding and double blinding.

if you mean something else, then i'm missing parts of this (really complicated) debate.


To quote my comment above:

> If you wanted to know whether the kernel review process is able to reliably catch malicious attempts, you literally could have just asked the kernel maintainers, and they'd have told you that no, review can't go that deep. Or looked at non-malicious bugs and observed that no, review does not catch all bugs with security implications.

> You'd very likely have been able to get code past them even if you had told them that attempts were coming and got their approval.

If you want to know that the process isn't catching all attacks, that should be all you need. For the second case, getting patches past a warned maintainer is harder and should be even better evidence of the problems with the process, without any of the concerns. There is a wide range of options for learning about code review, and what they did was at one of the extreme ends - just to find out that "yes, what everyone has been saying is true". And then they didn't put in the work to make amends after it became clear it wasn't appreciated, so now this other student got caught in his advisor's mess (assuming the representation of him actually testing a new analyzer, and not being part of a repeated attempt to introduce bugs, is true - the way he went about it also wasn't good, but it's way less bad).

But you don't get splashy outrage, and thus less success at "raising awareness" with people that didn't care before, which is what your comment seemed to argue for.


> But you don't get splashy outrage, and thus less success at "raising awareness" with people that didn't care before, which is what your comment seemed to argue for.

the reason for doing it is basic quality science. it's proper blinding.

the result is raising awareness, which, if it leads to more scrutiny and a better and more secure linux kernel, seems to be a good thing... in the long run.

i mean, i get it. a lot of this security stuff feels a lot like gotcha qa, with people looking to make names for themselves at the expense of others. and yeah, in science, making a name for yourself is the primary currency...

but honestly, they ran their experiment and it worked, uncovered an actual, not theoretical, vulnerability in an irrefutable way, in a massive chunk of computing infrastructure that powers massive chunks of society.

papers like this one can have a lot of potential in terms of raising funds. this is the sort of thing that can be taken to governments, private foundations and public corporations to ask for quite a lot of money to help with the situation.


Here's a practitioner with a better-elaborated variant of what I'm trying to argue, so I'll defer to this: https://davisjam.medium.com/ethical-conduct-in-cybersecurity... IMHO worth a read.


So wait, anyone can run an attack if they publish the results? No matter how senseless, or pointless?


i think the discussion should not be around banning them as known bad actors, but instead should be around how to detect bad actors or better introduce safety and security into the project.

i'll tell you one thing, it has shaken my trust in the oss kernel development model as it operates today, and honestly that seems like maybe a good thing?

how many companies are literally printing money with the linux kernel? can't they throw a few bones at helping beef up code review and security analysis?


No development model is protected from malicious actors, and this is not unique to OSS. Could the Ministry of State Security sponsor a student to study in the US, and then after graduating, that student gets a job at Microsoft and introduces vulnerabilities into Windows? In theory all patches should get code reviews, but could someone get a bug past code review? Sure!

You can try to detect it before it happens, but very often you won't catch it until after it's landed in the source code repository, and in some cases, it'll actually make it out to customers before you notice.

It's true for proprietary code; it's true for open source code; it's true for x.509 CA certificates[1]. We should still do the best job that we can, if for no other reason than that there are plenty of zero-days which are introduced by human error, never mind by malicious actors.

[1] https://www.thesslstore.com/blog/final-warning-last-chance-t...


so if satya nadella hires security firms to try this on the nt kernel (do they still call it that?) and they succeed, then they learn from it, tighten security and process, and then move forward...

but if a set of academic researchers try it on the linux kernel, nothing changes and then there's a bunch of internet drama with people calling for them to be fired because why?

honestly, i've believed in oss since i encountered it in the early 90s. but this is making me start to reconsider proprietary software again.


The more accurate analogy would be academic researchers sending graduate students to get hired by Microsoft under false pretenses, who then demonstrate that they can introduce security vulnerabilities that don't get caught by Microsoft's code review practices --- and then submit a paper to the IEEE saying that obviously Microsoft's hiring and software engineering practices could be improved.

At least with OSS everyone can audit the code, and run their own security scanners on the open source code. If you think that somehow proprietary software is magically protected against the insider threat, you're kidding yourself. Even the NSA couldn't protect against an insider like Snowden.


> The more accurate analogy would be academic researchers sending graduate students to get hired by Microsoft under false pretenses, who then demonstrate that they can introduce security vulnerabilities that don't get caught by Microsoft's code review practices --- and then submit a paper to the IEEE saying that obviously Microsoft's hiring and software engineering practices could be improved.

sounds good to me! (j/k, sorta)

except here's the key point, and here's where i think the issue is: "...obviously Microsoft's hiring and software engineering practices could be improved"

...this isn't about the people involved being bad at what they do, or them being bad people, or the project being silly in some way. it's about the people, the process they use and the project itself meshing together in an unfortunate way to create a real vulnerability for society. linux is no longer a hobby project. every effort can and should be made to ensure that it is as secure as possible, as linux is now so pervasive that defects can literally have life and death consequences.

this isn't about some maintainer failing to catch security bugs, this is about the growing influence and criticality of the project and the vulnerability of the project to security bugs, both technically and culturally.

the only real human failure is seeing egos get in the way of improvement.

who am i to be making these arguments? i'm just a nobody. a nobody who has to live in a society that is increasingly being built on this stuff...



