I've been wondering lately how these USB execution attacks happen. Surely no modern system auto-runs things from a USB drive, so there has to be some kind of executable on the drive that the user either (a) expects to be there, or (b) doesn't notice is there. (a) sounds a bit strange, but maybe the system is updated over USB; that would mean the attackers got into the update pipeline, which is very bad. (b) seems more likely: create an EXE with an image thumbnail as its icon and maybe you could trick a user into clicking it. Or maybe a nefarious Excel macro. But in that case it's strange that the system allows these things to be executed at all.
Does anyone have more details on how this is done?
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
Does the malware EXE, which now has a folder icon and the same name as the last modified actual folder (which is now hidden), also redirect the user to the actual folder and its contents in File Explorer after successfully delivering its malicious payload?
THAT would probably ensure the user does not suspect anything nefarious has happened, even after the fact.
Now how Windows Defender and other heuristics-based firewalls would not treat the malicious EXE with a folder icon as a threat and quarantine it immediately -- I don't know.
> how Windows Defender and other heuristics-based firewalls would not treat the malicious EXE with a folder icon as a threat and quarantine it immediately -- I don't know.
The "malicious" exe, as I understood it, just boots up Python to run a script, where the actual malice lies. Windows Defender has to treat an executable that does only this as benign - because Python's packaging tools provide such executables (so that Windows users can get applications - including (upgrades to) Pip itself - from PyPI that "just work" in a world without shebangs and +x bits). For that matter, standard tools like Setuptools could well have been used as part of crafting the malware suite.
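As a rough picture of why such a stub looks benign: a pip-generated console-script launcher does little more than spawn the Python interpreter on some entry-point code. A simplified Python stand-in for that compiled launcher (the demo script is made up):

```python
import os
import subprocess
import sys
import tempfile

def run_script(script_path):
    """Launch the current Python interpreter on a script -- essentially
    all a pip-style console-script launcher EXE does before exiting."""
    result = subprocess.run(
        [sys.executable, script_path],
        capture_output=True,
        text=True,
    )
    return result.stdout

# Demo: a harmless script stands in for the payload the stub would run.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write('print("hello from the launched script")')
    demo = f.name
print(run_script(demo))  # -> hello from the launched script
os.unlink(demo)
```

From a scanner's point of view, the launcher itself does nothing suspicious; all the interesting behavior lives in the script it hands to the interpreter.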
Presumably they could notice that an .exe has the normal folder icon. But presumably that icon could also be slightly modified in ways that would defeat heuristic recognition but still appear like a folder icon to a not-especially-attentive human.
>Does the malware EXE that now looks like a Folder icon with same name as the last modified actual folder (which is now hidden) ... also redirect the user to the actual folder and its contents in file Explorer after successfully delivering its malicious payload?
I didn't see anything about that in the description of the attack. But I assume that the Python script could accomplish this by just making an appropriate `subprocess.run` call to `explorer.exe`.
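A sketch of that redirect step, under the assumption that the Python payload just shells out to Explorer. The function names and the guard are mine; the actual call only makes sense on Windows:

```python
import subprocess
import sys

def build_open_folder_command(folder_path):
    """The command that opens a folder in File Explorer on Windows."""
    return ["explorer.exe", folder_path]

def open_real_folder(folder_path):
    """Open the (hidden) real folder; on non-Windows platforms just
    return the command that would have been run."""
    cmd = build_open_folder_command(folder_path)
    if sys.platform == "win32":
        subprocess.run(cmd)
    return cmd
```

Opening the now-hidden real folder this way should still work, since Explorer will display a hidden folder's contents when given its path explicitly, so from the user's perspective the double-click "just opened the folder."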
And also, which person setting up an air-gapped system allows execution from removable media? You'd think with that level of paranoia you'd have a couple more rules in place.
There are many ways. A simple one is to present yourself as a USB hub with an input device and a USB drive behind it; you then use the input device to execute whatever is on the drive. Another way is to identify as a device whose driver has a known vulnerability: Windows auto-installs that driver, then you exploit it.
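For context on the composite-device trick: a USB device advertises one interface descriptor per function, and the host decides what each function is purely from the class byte in that descriptor. A toy parser illustrating the idea (the descriptor bytes are hand-made for the example, not captured from a real device):

```python
# USB 2.0 interface descriptor is 9 bytes; offset 5 is bInterfaceClass.
# 0x03 = HID (keyboards, mice), 0x08 = mass storage.
USB_CLASS_NAMES = {0x03: "HID", 0x08: "mass storage"}

def interface_classes(descriptors):
    """Map each 9-byte interface descriptor to a human-readable class name."""
    return [USB_CLASS_NAMES.get(d[5], "other") for d in descriptors]

# A composite device claiming both a keyboard and a storage interface --
# the host sees two legitimate-looking functions on a single plug.
fake_composite = [
    bytes([9, 4, 0, 0, 1, 0x03, 0x01, 0x01, 0]),  # HID, boot keyboard
    bytes([9, 4, 1, 0, 2, 0x08, 0x06, 0x50, 0]),  # mass storage, SCSI bulk-only
]
print(interface_classes(fake_composite))  # -> ['HID', 'mass storage']
```

Nothing in the protocol stops one physical device from claiming both, which is exactly what makes the "keyboard that types commands from its own drive" attack possible.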
Sure, if you're the one who created the USB drive, then you could make it not actually a USB drive. But this sounds like an infected machine infecting previously safe USB drives and turning them into malicious ones. And I'm not sure I get how a USB drive can be turned malicious. I vaguely remember there was a bit you could flip on older USB drives to make them appear as disk drives and enable AutoRun, but I doubt that's how this is done.
I think it's the firmware. Besides the main flash storage, there are smaller controller chips that sit between the OS and the storage, and each chip has firmware whose memory is usually writable as well.
Once you can manipulate the code in that firmware, it's probably pretty easy to find a kernel-level exploit.
> simulate a USB hub with an input device and a usb drive
Yeah, but that has to be a custom or specifically programmable USB device -- or one that somehow unintentionally lets you reflash its firmware to something else.
And also, if anyone ever plugs your malicious USB device into a Mac, macOS will show a pop-up asking the user to identify the keyboard. Although maybe if it impersonates a specific USB keyboard that macOS already knows out of the box, you could avoid that?