4) Target's computer recognizes the file as a signed MSI, ignoring the extra stuff at the end that's not part of the MSI
5) Target computer calls the JRE to launch the JAR
6) JRE ignores the MSI parts at the beginning of the file because compressed JAR files are loaded back-to-front
Obviously a major bug that signature validation ignores "extra" parts of the file, but a signature validation bug that (1) only affects specific types of files executed by a third-party runtime and (2) doesn't threaten system files (or any native executables at all) isn't an obviously critical vulnerability like an arbitrary "signature validation is totally broken" bug.
Disagree on lack of severity; this is the kind of thing that calls into question any signed MSI I've downloaded over non-TLS HTTP or even over HTTPS served from a possibly-downgrade-attacked HTTP→HTTPS redirect.
If I read the description correctly, it would need to be a jar file, though, not an msi. The trick allows hiding a malicious jar file from the scanner, not hiding a malicious msi. I think it’s better described as “take a malicious jar and prepend a signed msi to hide it from the scanner.”
This is separate from anything an AV might do, so forget the scanner part. A virus scanner with a signature for this malicious JAR will still catch it. What this is avoiding is the prompt and extra scrutiny Windows throws up if you try and run an executable from the Internet.
Think of the legitimate MSI and malicious JAR files as separate short stories in the same book. The 'book' comes to your computer claiming to be one thing (the legitimately signed MSI) but the second chapter is something completely different (the malicious jar file). When your computer reads the book (calculates and verifies the signature) it only does it off the first chapter. That lets the second part slide under the additional scrutiny Windows applies to files downloaded from the Internet.
By “scanner” I meant the built-in Windows Defender tooling, which is - afair - affected. It trusts the signature. But you’re entirely correct - a virus scanner that does not rely on the signature could catch the malicious code.
Wtf? Am I reading this right, if part of the file is signed by MS then it just doesn't bother running it through the detection algo?
That looks exceedingly like a designed in security hole.
Presumably paired with the OP "bug" means MS could, with an NSA letter say, drop malware on devices if those devices were using Windows Defender for AV.
Just as well MS don't snoop on what software users are running. /s
“ Digitally signed files are more trusted by the Operating System. This higher trust allows such files to execute in sensitive contexts or excluded from Antivirus scans.”
It seems that msi installer packages with a trusted code signature (1) are excluded from scans by various antivirus protections. Which kind of makes sense: driver packages may contain code that would trigger heuristics a lot.
> Presumably paired with the OP "bug" means MS could, with an NSA letter say, drop malware on devices if those devices were using Windows Defender for AV.
Microsoft can already do that. To be quite honest - pretty much every institution with the right signing powers can on practically all OS. Have you verified the latest chrome installer package?
This bug seems to allow something more insidious: you could download the latest google chrome msi and append your payload jar to that msi and redistribute it. The signature remains valid. This allows bypassing the code signing checks even if you have no code signing powers.
(1) not from Microsoft, Microsoft only hands out the certificates, the signing is done by the developer
All AV ignores files which are concatenated to an MS-signed file? Huh, presumably you mean only on Windows, but really? No AV treats .jar files concatenated with signed MSI as suspicious -- how did that situation arise?? It's a strange heuristic.
Isn't it like having a special 'I know my bag smells like drugs to sniffer dogs but I promise I don't have drugs' channel at airport arrivals; and when people go down that channel you don't bother to check their bags.
Yes, kind of like that. That’s basically the point of digital signatures (when they don’t have a bug to bypass them, like in this case).
They don’t treat concatenated malicious files as safe, they trust that files signed by MS are safe. You aren’t supposed to be able to concatenate a file and still have the signature check out. That’s the bug.
If you want a good reason why, ask McAfee about the time that they incorrectly detected svchost.exe as a virus and made every customer’s windows machine around the world unbootable.
Don’t we have that? I don’t think TSA pre-check goes through the sniffers. Not sure what benefits that gives on arrival. I know there’s an express lane on the US/Canada border though.
I don't think that follows from what's been disclosed. What seems to be the case, based on the disclosures, is that the signature validation code does a correct job validating the parts of the file the MSI header describes, but does not check parts of the file that aren't described in the header. As far as I'm aware, adding arbitrary anything to the end of an MSI doesn't do anything since it's not described in the headers, so you can't use this to compromise an actual MSI file. You can only use this bug to fake signing for specific other file types that are read back-to-front where the actual MSI parts will be ignored.
Other than a possible checksum calculated from the entire file contents, many binary formats use either:
- Headers describing a section, including the length of a section.
- Prefixing each section with a size.
- Using a marker pattern (such as a null byte) to mark the end of a section of data.
(there are more than these three, but these are the most common)
That last one isn't used very often, and when it is, it's mostly for textual data.
The consequence of using the first two techniques is that it's entirely possible to add data to the end of the format, since a parser would just read the last section and stop (assuming the format supports knowing what the last section is).
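As a rough illustration (a toy length-prefixed format in Python, not the actual MSI layout), a parser like this never even reads appended bytes:

    import struct

    def parse_sections(blob: bytes):
        # Toy format: each section is a 4-byte big-endian length followed by
        # that many payload bytes; a zero length marks the last section.
        sections, pos = [], 0
        while True:
            (length,) = struct.unpack_from(">I", blob, pos)
            pos += 4
            if length == 0:        # end-of-sections marker; nothing past it is read
                break
            sections.append(blob[pos:pos + length])
            pos += length
        return sections

    legit = struct.pack(">I", 5) + b"hello" + struct.pack(">I", 0)
    tampered = legit + b"ARBITRARY APPENDED DATA"

    # Both parse identically; the appended bytes are simply never visited.
    assert parse_sections(legit) == parse_sections(tampered) == [b"hello"]

If a signature only covers the sections the header describes, those trailing bytes don't change it either, which is the general shape of the MSI bug.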
If you hashed the entire file you wouldn't be able to append that hash to the file for distribution because it would change the hash of the file. That's why FOSS software signatures are usually MD5/SHAwhatever in a separate text file.
You can, if the consuming system is format-aware. I think MSI can fall into that category.
I developed something like this to put an md5 hash in a comment in the first line of a code module at a place I worked. It simply read the file, removed that line, and hashed the rest of the file data to generate the hash for comparison.
This was just to stop programmers changing the code without submitting it to a secondary system for validation/tracking (easy to work around, but that shows intent to bypass policies when we see it in the repo).
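The idea, sketched in Python (hypothetical layout and names, not the actual tool described above): strip the line carrying the hash, then hash the rest.

    import hashlib

    def embed_hash(source: str) -> str:
        # Prepend a comment line carrying the MD5 of the rest of the module.
        digest = hashlib.md5(source.encode()).hexdigest()
        return f"# integrity: {digest}\n{source}"

    def verify_hash(stored: str) -> bool:
        # Drop the first (hash) line and compare the recomputed digest.
        first_line, _, rest = stored.partition("\n")
        claimed = first_line.removeprefix("# integrity: ")
        return hashlib.md5(rest.encode()).hexdigest() == claimed

    module = "def add(a, b):\n    return a + b\n"
    stamped = embed_hash(module)
    assert verify_hash(stamped)                        # untouched module passes
    assert not verify_hash(stamped + "print('x')\n")   # any edit breaks the check

As the parent says, this only flags changes made outside the tracking system; anyone can recompute and re-embed the hash, so it shows intent rather than providing security.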
And since Microsoft still serve updates from the Microsoft Update Catalog over HTTP, let's not mistakenly think that all downloads are protected via TLS, even today.
Or any signed MSI if downloaded from a country where security services can secretly oblige companies to act against their users - China, USA, and probably many others.
Just to elaborate, since there is some confusion about when the rename would occur.
1) Take MicrosoftSigned.msi
2) Append Malware.jar to it
3) Take the file and rename it RunMe.jar
# Now you have a jar file that appears to be signed by Microsoft but contains malware.
4) Give the target RunMe.jar and ask them to double-click it.
# Usually this file would prompt the user with a "You shouldn't trust this arbitrary file from the internet" prompt, but it doesn't because Windows thinks this file is correctly signed
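For illustration, steps 1-3 are nothing more than byte concatenation plus a rename; a minimal sketch in Python using the placeholder filenames above:

    # Build the polyglot: a signed MSI with a JAR appended, saved under a .jar name.
    # Nothing inside the MSI is modified, so the parts of the file the signature
    # actually covers still verify.
    with open("MicrosoftSigned.msi", "rb") as msi, \
            open("Malware.jar", "rb") as jar, \
            open("RunMe.jar", "wb") as out:
        out.write(msi.read())
        out.write(jar.read())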
Zip files have the index at the end of the file so that new files can be added to the archive without rewriting the whole thing.
Zips can also be read from front to back, reading the header of each entry and skipping the compressed data to the next header, which is most of what zip repair utilities do (or did, not sure anyone repairs zips any more). But it's faster to read the index.
Not every zip-based format behaves like this, Microsoft's zip based container formats tend to require a proper beginning too. For example if you add docx (which is a zip file) to the end of something else and open it in Microsoft Word, it will complain that the file is corrupted, and will suggest to repair it.
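You can see the back-to-front behaviour with Python's zipfile module, which locates the central directory by scanning from the end of the file; a small sketch with a stand-in prefix instead of a real MSI:

    import io
    import zipfile

    # Build a normal zip in memory.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("payload.txt", "hello from the appended archive")

    # Prepend arbitrary bytes (stand-in for the signed MSI part).
    prefixed = b"PRETEND-THIS-IS-A-SIGNED-MSI" + buf.getvalue()

    # The central directory is found by scanning backwards from the end,
    # so the prefixed blob still opens as a perfectly normal archive.
    with zipfile.ZipFile(io.BytesIO(prefixed)) as zf:
        print(zf.namelist())          # ['payload.txt']
        print(zf.read("payload.txt"))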
There's an old similar technique - concatenate a JPEG with a RAR; the JPEG header only describes the image part, while archive tools open the file from their own header.
Windows marks the JAR file as verified in the properties just because there's a verified msi structure at the front of it. I count that as a claim to have verified the file. Because that's what windows displayed.
The tricky bit I imagine is the gap between step 4 and step 5 in your attack scenario. To get this to trigger you need to get the JRE to invoke on the MSI file. The JRE isn't typically associated with MSIs so you would need a secondary step to trigger the JRE to look at that file and load the malicious JAR tacked on to the end of the MSI.
No, because the attacker renames the file to .jar before sending it to the target.
Windows still sees it as a MSI file while doing signature checking. But because it has a .jar extension, the JRE runs it when you double click.
Signature verification is done as an independent check, completely separate from execution. Because Windows needs to know the signature is valid before even allowing the file to be executed. It also needs to show the signature check in the right-click -> properties dialog.
I've been thinking about the ability for neural networks to determine flaws like this. If we could describe what a 'goal' would be (executing arbitrary binary data for example) and then some of these design aspects it might be possible to determine flaws automatically. This is only 6 steps... I imagine an AI could find more.
I imagine others are thinking about this, too. I feel as if we are headed towards either an era of radically secured code where serious vulnerabilities become rare as hen's teeth, or... the discovery that we exist in an environment of radically vulnerable code that cannot be secured at all. A world where AI script kiddies can take down civilization.
I don't see any way it would be possible for such a file to be simultaneously executed by the system as a MSI and as a JAR (you can probably always invoke the JRE on a .msi file, just not through the system file associations).
It's renamed, not simultaneously-associated. From the fine article:
'Quintero said this weakness would be particularly acute if an attacker were to use it to hide a malicious Java file (.jar). And, he said, this exact attack vector was indeed detected in a malware sample sent to VirusTotal.
“In short, an attacker can append a malicious JAR to a MSI file signed by a trusted software developer (like Microsoft Corporation, Google Inc. or any other well-known developer), and the resulting file can be renamed with the .jar extension and will have a valid signature according Microsoft Windows,” Quintero wrote.'
Exactly, you can't have it be an MSI and a JAR, it has to be a JAR to invoke the JRE automatically.
In terms of practical impact, it sounds like you need a target where the JAR file will be run, but also where you'd expect Windows signature validation alone to protect against malicious JARs.
Sure, but if you already have code execution on the target to run the rename or directly invoke the JRE, either (1) you've already exploited the target or (2) lack of signature validation (since the rename script or equivalent to invoke it wouldn't be signed) wouldn't have been a barrier to running the malicious JAR.
No way, exactly the opposite. Attacks are often described as "chains" because it usually requires simultaneous application of multiple exploits to achieve an attack's goal.
Or iOS exploit chains, some of which I've used intentionally as part of iOS jailbreak tools like unc0ver. From Project Zero's "very deep dive into iOS Exploit chains found in the wild": "Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day."
https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...
I imagine the real-world application of this MSI thing is something like:
1) Get silently-malicious MSI on to target machine (via download server compromise, poisoned non-TLS download, redirect TLS-downgrade, etc etc)
2) Get malicious "trigger" exploit code on to any web page used by the target (via web server compromise, weak human security like admin passwords, etc etc).
For example, one of them (CVE-2020-6493) is "Use after free in WebAuthentication in Google Chrome prior to 83.0.4103.97 allowed a remote attacker who had compromised the renderer process to potentially perform a sandbox escape via a crafted HTML page". How many more of these do you think are hanging around out there, waiting for Google to get the 'okay' to fix?
I'm familiar with the concept of chaining multiple exploits to defeat different layers of security. That said, for step (2) to work, you need either:
a) A separate vulnerability that provides arbitrary code execution to run your trigger exploit
or
b) A vulnerability that allows you to execute a specific program/command on the target system
If you've got either of those, the ability to run a pre-existing JAR doesn't really increase your capabilities. You've already got at least user-level access. Having a way to distribute a malicious JAR in advance might be useful in some cases, but having a file sitting on disk in advance significantly increases the risk of having your exploit detected before you get to use it.
I'm glad we're understanding each other, but I completely disagree with the conclusion you reach in your last paragraph. Networks are iffy. People already expect their signed MSI installer to be multiple megabytes of binary garbage and probably wouldn't notice it being a megabyte or two larger than normal, so that sounds like one of the best possible places for a payload to hang out and wait to be triggered. No competent attacker would make their payload-delivery and their command-and-control the same thing.
Hiding your malware in some other file is a good idea, although sticking it in an executable file format is only going to invite more attention and relies on your target only using security tools that assume Windows signed == safe.
It strikes me that the primary use for this would be attacking a target that (1) has some level of system lockdown/scanning such that signing is required, (2) has Java installed on computers and uses it (not just a program that bundles in its own JRE), (3) has scanners vulnerable to the signature validation bug that assume signature OK == safe, (4) has some sort of network scanning IPS/IDS such that the "run malware" trigger needs to be small to avoid detection.
If (1) isn't the case, then there are probably easier ways to sneak your malware into position. If (2) isn't the case, your malware can't run. If (3) isn't the case, your attack risks getting detected in advance and burning any chance of using it. If (4) isn't the case, you'd probably be better off delivering the full malware via the browser sandbox escape/whatever network exploit you're using to avoid needing 1-3 in place first.
I would venture the guess that this has been used for (1) opportunistic malware intended for wide distribution with a low probability of any given target being compromised due to requiring the user to run the malicious JAR and (2) very targeted attacks on specific institutions that meet the 1-4 criteria.
I think you're making a lot of assumptions and pigeonholing the applicability of this based on things we have only passing knowledge of, like "there is a JAR of some sort". I wonder what kind of fun HotSpot exploits might be in the JAR a real attacker would use. Hopefully Oracle will get permission to fix them some day if there are :)
I don't think it's too many assumptions. The number of personal Windows users with Java itself installed (rather than programs with their own bundled JRE) is probably very low at this point, so it means targets for this are either "download eXtra_special_cod3cs_for_pr0n.exe" users, or institutions that match this specific profile.
I would actually expect a number of interesting institutions to match that profile - places likely to keep systems updated (or at least updated AV), so not guaranteed to be vulnerable to last month's patched bugs, who have some level of system lockdown and scanning such that hiding in plain sight behind file signature verification is necessary, who have some type of IDS/IPS such that the separate trigger needs to be kept to a minimum, and who rely on a set of rather poorly packaged Java software. I'm sure the list of targets matching those criteria includes a number of financial institutions and infrastructure operators - targets that need to pay more than trivial lip-service to security, but where security is a "we've checked all the boxes so we're good" mindset.
> The number of personal Windows users with Java itself installed (rather than programs with their own bundled JRE) is probably very low at this point
This can depend on your country. Here in Brazil, the income tax preparation software (http://receita.economia.gov.br/interface/cidadao/irpf/2020/d...) is written in Java, and the install instructions in that page tell you to update Java, so it's probably not bundled. Since everyone with income above a certain amount (and several other situations) has to use that software to fill the tax report (the option to use the paper form has been discontinued many years ago), I'd guess a high number of personal Windows users in this country have Java installed.
Even better, those users are probably suffering from alert fatigue already due to all the security warnings about the ancient JVM software, so using a technique like this to bypass AV scans would probably be enough to compromise some major systems.
You can damage most software vendors with just #1.
Then #2 is even worse.
#3 you already own them because #1 and #2 let you download a poisoned msi to the machine. You can get clever and dynamically change the hash per download, as described years ago when signature-based AV was all we had; you don't need this trick.
Or... you could just drop PowerShell or Python, execute it, and own the machine without trying so hard, with just #1 and #2.
The issue is it would seem the file needs to be an MSI file to get signature validation, but need to be a .jar file to run the malicious part. Transparently going from "File passes signature verification" to "running the jar payload" seems like it would require another (more serious than this) bug or active steps from the user.
> seems like it would require another (more serious than this) bug
Yes on your first part; no on the parenthetical. One exploit to let the malicious hanger-on into your computer. Another exploit—maybe in a web page you visit regularly—to trigger it. Compare to this real-world iOS attack: https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...
For the audience: How many exploit demonstrations have you seen that launch `C:\Windows\calc.exe` or something to show you that you just got exploited? Now imagine the same exploit runs a program that does not announce its presence.
Because you told the target "if you have trouble running it, right click and open it with Java" and you assume they'll do it? Not clear how you could automate without using a different vulnerability to invoke the JRE on the file.
What about criminal liability? I know that it's highly unlikely that Microsoft will ever get what it truly deserves (at least as long as the USA remains what it is), but still.
I find it rather telling, how many argue that Microsoft is (obviously) wrong in their behavior, yet I don't read much about how the company deserves severe punishment (maybe even forced break-up or having control taken away from it altogether), for knowingly not fixing a security fuck-up like this, for this long. Maybe an interesting case for anthropologists one day, to study social/cultural conditioning.
I'm sure that Microsoft will have its reasons and considerations, but that should not excuse or exclude them from liability for the consequences of their actions. Whether that will ever succeed in court, given both the reality of what Microsoft and the USA are (many other countries too, by extension), that is another story. But it should not take rocket science to figure out that it will only get worse if Microsoft gets away with this without repercussions (again). Guess how we got here in the first place.
How would you define criminal liability to achieve this?
You have a bug that, at best, allows a third-party runtime to execute a malicious program without warning the user that the program is untrusted. Practically, how would you write a statute so that this case would be different from Windows not warning about a malicious file in $arbitrary third-party file format$ that exploits a bug in $arbitrary third-party software$? You don't want vendors legally responsible for the behavior of third-party code on their systems; that's how you get entirely walled-garden platforms that have no user freedom.
Laws don’t need to be that specific, that’s what the legal system is for. So instead terms like negligence are used which have very specific legal meanings, and then people consider if a Microsoft was legally negligent in their behavior. The downside is it’s more vague, but the upside is it can rapidly evolve over time as new information comes to light.
Laws don't necessarily need to be specific, but if you rely solely on interpretation to distinguish between different types of responsibility involving third-party software, you will either achieve (1) appeals courts defining responsibility as narrowly as possible, or (2) the complete destruction of third-party software and all independent software development as no vendor is willing to risk exposure.
Liability is always limited to the value of the company. Thus even in the extreme case, liability would at worst force companies to run each piece of software as its own legal entity. Aka Microsoft is going to keep selling Windows as an OS even if they need to spin off a subsidiary.
Well, then you've still destroyed small independent software vendors, open source projects, and individual projects for which the cost and complications of running an LLC as a liability shield is unsustainable.
The issue with defining liability here is primarily what conduct should make a company liable in the first place (and doing so in a way without horrible consequences for user freedom), not what the damages can be.
> How would you define criminal liability to achieve this?
Easy. If a producer of software is made aware of a defect in their software that may lead to a breach of security, they are required to either fix it in x amount of days, or publicly disclose the full details of the vulnerability so that users of the software may make an informed decision on how to proceed.
You could even say that if they disclose that there is a vulnerability along with a temporary workaround, they get an extension of time to fix it or release the details.
Edit: after reading what I wrote I think it actually should apply to all bugs, not just security related ones. Either fix it, or let everyone know about it.
In this case, the worst extent of the flaw is that Windows doesn't warn that a file used by a piece of third-party software could be dangerous. If you want a case like this to involve liability, that's what you have to include.
> If a producer of software is made aware of a defect in their software that may lead to a breach of security
Now, I can guarantee you 100%, based on its track record, that a code execution and sandbox vulnerability exists right now in Adobe PDF readers. Should Windows be required to warn users before opening a PDF file that it could be dangerous? What would be different for any other non-trivial software that consumes a non-trivial file format?
It is by no means a stretch to say that if Microsoft is liable in this case, they must be liable for not warning users on a whole host of other issues with third-party software.
I'm generally in favor of some kind of serious liabilities for vulnerabilities, but let's not pretend defining liability in an effective way is easy.
In this case the defect is with how Microsoft handles MSI files. It is Microsoft's problem to fix.
> Now, I can guarantee you 100%, based on its track record, that a code execution and sandbox vulnerability exists right now in Adobe PDF readers. Should Windows be required to warn users before opening a PDF file that it could be dangerous? What would be different for any other non-trivial software that consumes a non-trivial file format?
No, they should only have to make a public release about their own, known defects. They shouldn't even have to notify users directly, just make it publicly known. Though it would be nice if they kept track of other vendors' defects and alerted users.
> In this case the defect is with how Microsoft handles MSI files
Except, of course, that Windows is absolutely correct in validating them as a MSI file. It just happens that it fails to correctly validate a file that is both a valid MSI and a valid JAR. Windows itself is incapable of doing anything dangerous with the (to its perspective) nonsense data appended beyond the validated contents of the file.
This is essentially a TOCTOU bug involving one vendor performing the check (MS) and one vendor performing the use (whoever shipped the JRE), both of which are technically correct in the most narrowly-scoped sense but produce a significant issue when combined.
Sure, one would want Adobe to be the liable party in such a case. But that case wouldn't be too different from the current one, where the problem is Windows failing to warn on a file that is otherwise harmless which becomes harmful when opened by a piece of third-party software.
I don't think it's fair to characterize this as a JRE bug. The only programs that had the opportunity to recognize that the MSI file was signed too early did not account for it. The JRE can't be expected to know that Edge and whatever thought the valid JAR it's reading was considered a MSI by the piece of software that processed it, despite being dispatched to the JRE from the explorer.
It's not a bug in the JRE, but the bug is a problem because Windows itself doesn't see the file as dangerous, while the JRE invoked on it treats it as code to execute. Windows, on its own, is not aware that the file represents executable code outside of the validated sections of the MSI.
I disagree, the JRE is a virtual machine; if not for the OS, how would you protect against it? Isn’t the OS itself responsible for authorizing whether the invocation of the JRE process is authorized, rather than the virtual machine itself?
Are there any comparable virtual machines that require signed bytecode by default? I’ve personally never heard of it, most of the time it’s verified when the package is downloaded, rather than when it’s executed.
Java applets did actually have their own security model using signatures, which did not necessarily require the host OS to be part of that verification.
The JRE is a runtime for running arbitrary code, it's the whole purpose of it. If you can put something at the start of a JAR, you can put whatever you want in the JAR. Meanwhile, Windows knows that this will be run by the JRE, because when you open it from the explorer, it is associated with the JRE while the part that validates the file considers it an MSI.
Windows is basically completely responsible for this: Windows validates the MSI, windows knows what an MSI is, Windows knows it will be run by the JRE and validates it just as an MSI instead.
To be fair, all Windows knows is that a program, in this case the JRE, is registered to open the file. Nothing about that necessarily means "this file will be executed", nor that that program will interpret the file differently than Windows has.
> Windows, on its own, is not aware that the file represents executable code outside of the validated sections of the MSI.
I think Windows is aware of this though, it's called JAR and explorer says the JRE should open it. Furthermore, should there be any sections in a signed MSI that aren't signed? Could that serve any legitimate purpose? No, it entirely defeats the purpose of signing it.
Allowing "unreachable" unsigned data to be appended to a MSI isn't really a threat in and of itself, since the data shouldn't be reachable from any of the valid parts of the file. I could easily see some tools appending their own metadata to the end of the file and thus actually relying on such modification not invalidating the signature.
I would not be surprised if part of the delay fixing this involved MS finding out early on that a major user of MSI files was actually relying on this (perhaps some installer creation tool or AV scanner?) and decided that the user needed to fix their product and distribute the fixed version before a Windows patch was viable.
Microsoft is claiming to be able to tell if a file is signed, and warn if it is not. They found out 2 years ago that this functionality has a problem. If they turned around and released a report saying they know it doesn't work for jar files, then they shouldn't be liable, but they didn't do that.
There is probably something in the contract that says that Microsoft is not criminally liable. I trust their lawyers for that.
But if companies become criminally liable for bugs, this can be bad news for open source projects. For example. My company wants to use some GPL software, but we need some extra features, so following the license, I implement them and release the source. The original maintainers like it and put it into the official tree. Unfortunately, my contribution introduces a serious security flaw in some other part that my company doesn't use, we simply overlooked that part, it is our fault, but we didn't mean any harm. We may even have seen the CVE, but because it wasn't about something we used, we didn't look into it. And now, suddenly just because we did everything by the rules and released our source code, we are now criminally liable... That kind of thing makes you think twice before you contribute to any open source project...
Usually open source projects have a very big "we are not responsible for anything, to the maximum extent permitted by law" warning to avoid this kind of problem. Companies are of course free to provide additional guarantees on derived products; that's one business model for open source developers.
But if you make a law where software developers are responsible for their bugs no matter the contract, even if there is no ill intent, this is going to be a problem.
You are pointing at a huge elephant in the room. When it comes to software, companies have incredible leeway.
A company can develop some very popular software and earn billions while also inserting malware or simply leaving vulnerabilities unpatched without any repercussion.
It is even legal to discover vulnerabilities in your own software and sell them (anonymously) on the 0-day market.
As much as I love to see it, I don't see the damage. Users have a choice these days to even download/install a free! OS. This is a clear case of vote with your wallets. There are TONs of Operating systems, why stick with Microsoft at this point?
The problem with “the market will sort itself” takes like this is that the market is not as efficient and elastic as we ideally hope. It, first of all, assumes that everyone has the ability and knowledge to pick the operating system that is best for them. Things like marketing, personally critical (but platform dependent) software (as the other child comment says), market share, and previously formed platform comfort all influence the decisions that users and developers make regarding OS. Not to mention that possibility that someone needs to use two different operating systems for two different tasks.
If everyone at once decided that they were no longer using Windows, it's not controversial that there would at least still be a massive amount of work and time needed to shift every legacy system on Windows to another platform. Even in this most extreme example of the market shifting, Windows still has the security issues for the transition timeframe, which the market alone can't solve.
Can you explain why you are claiming this as a "problem"? There is no reason to think that anyone is qualified to tell others what products they should or shouldn't buy. The idea that people need to be "saved" from making wrong purchasing decisions is abhorrent.
> There is no reason to think that anyone is qualified to tell others what products they should or shouldn't buy.
What nonsense is this?! Do you maybe mean "can or can't buy"? Otherwise, I don't see how this statement can be parsed such that it isn't obviously incorrect.
There are lots of possible choices, but if you depend on software not available on other platforms, it doesn't matter. If Microsoft doesn't do their job once properly notified, they deserve a slap on the wrist.
Maybe if the exploit is actually leveraged to cause damages. Outside of that, you think they should get fined because the possibility for exploit is there?
Edit: missing clarity - I think fines are appropriate when/if damage occurs, my reply was based on if notification happens without actively being exploited.
Why else would they leave a known vulnerability unfixed for two years!?
ISTM that someone else found out about the vulnerability ("... which Microsoft acknowledged was actively being exploited.") -- perhaps after the NSA used it against them? -- so now that the cat is out of the bag and they are at risk of compromise -- they let Microsoft know and get it fixed.
This doesn't seem like the kind of vulnerability a security service would really care about. It's just about whether Windows does or doesn't show a nag prompt when trying to run the malicious file.
I doubt there are any real systems where the only thing standing between the system being secure or not is Windows code signing.
> This doesn't seem like the kind of vulnerability a security service would really care about. It's just about whether Windows does or doesn't show a nag prompt when trying to run the malicious file.
And yet, if memory serves, they went to the trouble of stealing a code-signing certificate from a software development company so that they could do exactly that in an Iranian nuclear facility!
Besides, even if this one weren't all that useful on its own, we've seen time and again how a few of these "minor" vulnerabilities can be "chained" together, eventually resulting in a full compromise.
To be specific, in that case the NSA wanted to install a malicious driver on Windows machines that required driver signature enforcement, so there was no other way to get a driver installed except to steal a certificate to sign their driver.
Why wouldn't a security service care about the ability to hide malicious executable payloads in files that are presented as having a higher-than-normal level of trustworthiness?
I doubt it's been stalled because the government still wanted to exploit it. I think it's rather them not wanting past use to be detected.
You can't always stop using a 0-day. If the binary is deployed somewhere on an air-gapped host computer that still receives updates, it's out of your hands. How can you be sure the update will get there while the binary is still on the computer? You risk detection, which is very very bad when you run an APT. Even a tiny part of the campaign being discovered can make an entire country or even a group of countries adopt stricter policies.
The government probably knows about enough holes to fix their malware promptly, but even if it does, it needs to "deploy" the fix all around the world otherwise it risks being noticed.
That doesn't seem to apply in this case. The vulnerability isn't masking the existence of the malware, it's only allowing its installer to run undetected (in some cases). Once the installer has done its job, they're golden. The vulnerability can be patched, but since the malware was already installed, it doesn't help the victim.
I have a more crackpot theory. Maybe intelligence is one reason (some revenue there?) but business is a second reason.
For years, they depended on a variety of pressures to get people to upgrade to new OS's. First there's the whole office/client/server version dependency graph including CALs and DCs to force businesses to upgrade whole swaths. And of course bug fixes: why fix in a current version when they can tell you it's fixed in future, go buy that? Psychology.
Then there's malware. Why would they sit by for years (McAfee came out in '98!) watching the market get super-saturated with sketchy malware products and ignore security so badly? I think it's the same as above: upgrade now hoping it got fixed in the next new shiny. It's a human flaw.
This makes me want to scream at closed source lovers. Why do you put yourself in this position? Open source users can audit and fix it themselves, pay someone to do it, choose another fork, or even discuss flaws openly without getting sued (cough Oracle cough). Closed source users have no insight, no transparency, and no recourse but to suffer their abuser.
>And of course bug fixes: why fix in a current version when they can tell you it's fixed in future, go buy that? Psychology.
Only problem with that theory is that Windows 10 has free updates, seemingly forever. It was released in 2015 (with a free upgrade program from prior versions), is still supported to this date, and has no signs of stopping.
> Microsoft’s advisory makes no mention of security researchers having told the company about the flaw, which Microsoft acknowledged was actively being exploited.
Most likely part of an ongoing campaign by a three letter agency which would have requested the delay...
I doubt that three-letter agencies typically attack targets whose only protection is the Windows "this file may be dangerous, are you sure you want to open it?" pop-up..
Who said anything about "only" protection? Besides you, I mean. Real-world attacks tend to involve simultaneous exploitation of multiple vulnerabilities, like this real iOS Safari watering-hole/drive-by attack maliciously using the same exploit chains as my favorite iOS jailbreak tool: https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...
I see. If you visit a sketchy website then it is entirely possible that you won't notice anything. If a random unsigned installer nag pops up after you clicked on an ad there is a decent chance that the user closes the dialogue and "walks" away unscathed.
The Virus Total detection list is very interesting [1]. How can your antivirus-software not detect a dangerous, publicly known, two year old exploit? How can anyone take these products seriously? Of course McAfee does not detect it.
... but keep your eyes peeled for cases where the latter is intentionally used as a lame excuse for either of the two former (or Hanlon's razor itself, for that matter).
There is "zero day" mentioned in the title, but nothing in the article text. I am confused: what does this have to do with "zero day" now? It's not "zero day" now, is it?
It isn't zero day now but it was for at least two years. That rather stretches the definition of zero day but for two years Windows machines have been vulnerable to something nasty that was known about.
It was a bug in the third party scanning products though. ("This behaviour could be used to hide and distribute malicious code in MSI signed files, in fact several security solutions rely on the output of Microsoft Windows code signing validation to avoid an in-depth scan").
The days are counted from the release of a fix. If something has been known for two years but a fix was released on 11th August, then it was a zero-day for the two years until 11th August and it's a "day-6" vulnerability today.
IIRC the term was introduced to contrast with day-1 attacks with exploits developed by reverse engineering patches on the day they are released and attempting to exploit systems in the gap until they get patched.
Are the users supposed to receive an unexpected jar file and run it? Or can it pretend it's an MSI the user legitimately wanted, yet execute malicious code?
The latter. You download an MSI from some file hosting mirror, who cares, because you then diligently look and see that it has been signed correctly by the company that made the software.
But the vulnerability makes it rather trivial to have appended arbitrary content, even executable code, to it, without impacting the supposed validity of the signature.
No, the former. The file is renamed to .jar by the attacker and executed by the Java runtime. The msi part is ignored - it only serves to cloak the jar from the scanner - it recognizes the file as msi and ignores signed msi files.
I think signing software is futile, since signed software now behaves like malware that exfiltrates user data anyway. An installer that verifies its file hash has the same level of protection.
Software development licences are a huge scam, doing more for DRM than for actual security.
I guess it would not work as an MSI but would be fine as a JAR.
As far as I can remember, jar is zip, and zip has its index at the end of the file, making it possible to append a jar to something and disregard the first part. In this case a signed MSI.
And so you have a code signed JAR from Microsoft. Then what? I don’t know.
However, from the description, it would not execute the payload as an msi either - the appended garbage would just be ignored from the installer's perspective.
A legitimate downloaded msi file with attached jar payload passes code signature verification. You rename it to .jar and the system executes the payload.
An attack—even an intentional one like a jailbreak—is usually many individual exploits applied simultaneously that each achieve a different step toward the goal of the attack. This MSI thing means a malicious blob of code can hang out on your machine in a supposedly-safe signed file, then something like a waterhole attack will use a browser exploit to trigger that malicious file. Compare to this real-world iOS attack: https://googleprojectzero.blogspot.com/2019/08/a-very-deep-d...
If Microsoft were doing their job they wouldn't ignore a mysterious blob of binary stuck to their MSI. That is HORRIBLE; even a security 101 student would realize that, and if you've ever even done a CRC on a file you would realize why that is HORRIBLE. I'm with the others in thinking that they deliberately ignored this for some reason, possibly 3-letter-agency access.
Well, you can just write an installer that uses an elevated cmd anyway. I think conemu does that? (not sure, maybe it is another software)
Is unsigned software subject to further restrictions by default compared to signed but "unknown" software (sometimes a pain to use with Defender)?
And I kind of remember there were already other bugs with MS code signing checks? (not sure)
I agree this bug probably breaks the (informal/perceived?) security model of some hardened configuration, but that kind of hardening should not be considered completely sound anyway. It's like the UAC that is not to be considered a security boundary, yet somehow useful against casual threats or even just errors.
If you download and try to execute random code, you should consider it will be executed regardless of the expected checks and perceived hardening of your system. And you should consider that your OS is full of security bugs absolutely everywhere. Your web browser too (yep, that means JS is an insane threat now).
Not really a reason not to fix that kind of thing though, but nothing serious enough to lose our minds over.
That’s nothing. While nowhere near as severe, there’s an ANSI padding bug in SQL Server when doing replication that they’ve refused to fix for going on 10+ years now, IIRC.
The zero in 0-day refers to the number of days since a patch/fix was released. Since a patch wasn't released for two years, it was a zero day that whole time.
You are right, but to most people that term seems very scary. Most people don't know what zero day means, and so putting it in the title invokes doomsday type visions for a lot of people.
Most people that don't know what it means also aren't reading Krebs' blog. I feel like different expectations apply here than on a mainstream publication - where not using it in the title or immediately clarifying/reiterating the definition would be appropriate.
This is why zero-days should be made public a week after reporting them to the company...without ANY fear of prosecution.
Trust me, after the first 50 cases - the companies will have a dedicated team working on such exploits.
Wow, they made this public and MS still didn't handle it...I guess it should be more widely advertised then.
Not sure why you're picking a week; that's hardly enough to escalate it to the right team. You're going to have publicly announced zero days. Even if companies hopped to it, you'd get rushed deployments. There needs to be time to schedule the update etc.; see also how long it takes for things to spread across Linux distros.
Besides, your point stands with Google Project Zero's preexisting 90-day period. And yet they get flak too about putting out this information (not that I agree with that flak).
I fear that because of the asymmetric power/financial balance between the parties involved, such a choice could easily lead to a kind of war, in which the companies will have the upper hand. They will have many more means/resources at their disposal, and when it gets ugly (out of the public eye) things could get nasty.
Please, don't get me wrong. On itself I think this whole responsible disclosure culture is bullshit. It wouldn't be if more companies actually treated it more sincerely. But I have seen too many companies abuse responsible disclosure, or simply hide behind bug hunting programs to limit/squash exposure (too many times even without fixing anything), and then burn anyone who doesn't want to play by rules they themselves set.
The problem is, there are far more things than just legal prosecution when this would turn into a clash between companies and security researchers. However, maybe a union or anonymous organization could level that playing field. Problem with that is that many security researchers also want recognition, at the same time as feeling safe (.. something about cake).
Just publish it anonymously? Make a site called exploithub and host it as a hidden service. I'm sure most cybersecurity professionals wouldn't have problems accessing it.
If you're disclosing it to the vendor, then you're probably cooperating with them and are going to give them 90 days to fix it. If they don't fix it within that time, anonymously publish it a month or two after. That gives a total time of around 4-6 months between you discovering it and the "anonymous researcher" on exploithub "independently" discovering it.
This is an easy and popular bug to write. The ZIP file format (which is used for JAR files as well), which puts the header at the end, is truly the sin that keeps on giving.
A consequence of this choice is that ANY file concatenated with a ZIP file is a ZIP file (and the same therefore goes for JAR files as well.) So if you concatenate an MSI file with a ZIP/JAR file, your MSI file detector will look at the file and go "yeah, looks good!", and your ZIP/JAR file detector will also say yes. (This also shows off the hazards of automatic filetype sniffing.)
This is related to one of the very oldest Android rooting vulnerabilities. The update.zip files use the signed JAR file format, where the file contains a signature on its own contents. Naturally the signature can't cover the entire file; it only covers the contents referenced by the header.
But the sin-that-keeps-on-giving strikes even harder here: The end-of-file ZIP header also has an end-of-header comment field, of arbitrary size! This means that a single file can actually have MULTIPLE valid ZIP headers. Which means two different tools can interpret the file as two different ZIP files (much as the bug here can interpret the same file as either a ZIP or an MSI.)
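A small sketch of that "two different ZIP files in one file" point: concatenate two archives, and an end-scanning reader (like Python's zipfile, or the JRE) sees the second, while a deliberately naive front-to-back scan for the first local file header sees the first.

    import io
    import struct
    import zipfile

    def make_zip(name: str, data: str) -> bytes:
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            zf.writestr(name, data)
        return buf.getvalue()

    blob = make_zip("first.txt", "story one") + make_zip("second.txt", "story two")

    # End-scanning reader finds the last end-of-central-directory record...
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        print(zf.namelist())                       # ['second.txt']

    # ...while a naive front-to-back scan hits the first local file header.
    sig = blob.index(b"PK\x03\x04")
    (name_len,) = struct.unpack_from("<H", blob, sig + 26)
    print(blob[sig + 30:sig + 30 + name_len])      # b'first.txt'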
Don't do drugs, kids. And don't do automatic filetype detection. And don't do ZIP/JAR files if you can avoid it. And for the love of god, don't put your header at the end.