Just a reminder to anyone interested in doing this kind of research: what Billy did here is illegal under the CFAA. As we've seen from recent cases, he could be prosecuted and imprisoned even if Google declined to press charges.
This worried me also. I've seen many articles here on HN where people have done far less and have had serious charges pressed against them. Glad Google takes the high road, but given the state of the current US legal system regarding "hacking" I definitely would not be so bold.
You don't need a "custom exploit" or a "custom developed tool" to access a public file called config.bog and base64-decode the user:pass. This Tridium vulnerability was well publicized in the past year, but too many operators (including the contractor who installed this system) failed to harden their installations or apply the patches.
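To illustrate just how little tooling this takes, here's a minimal sketch. It assumes the exposed value is a plain base64 encoding of a `user:pass` pair; the actual config.bog format varies by Niagara version, so the values here are purely illustrative:

```python
import base64

# Hypothetical value as it might appear in a publicly exposed config file.
# (Real config.bog files are XML and encoding details differ by version;
# this just shows that "decoding" it needs nothing beyond the stdlib.)
exposed_value = "YWRtaW46aHVudGVyMg=="

# One function call recovers the credentials.
user, password = base64.b64decode(exposed_value).decode().split(":", 1)
print(user, password)  # admin hunter2
```

That's the entire "exploit": fetch a world-readable file, run it through a decoder that ships with every scripting language.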
"Custom exploit" might be a bit of a misnomer, Billy and Terry (the same researchers that are credited on the initial ICS-CERT Advisory regarding the Niagara framework) were referencing a set of utilities/scripts they put together to automate testing.
Running on Windows on the developer's workstation, no less...
Let me quickly clarify that I'm not anti-Windows, it was just a double-take to see it used as a workstation for security research like this (though I'm using the word 'research' lightly). Strange article all around, lots of it caught my eye.
Maybe they only have Windows licenses of IDA Pro? There are a few very useful tools for Windows, especially for reverse engineering and hardware/embedded work.
What I find most worrisome about this is that it can enable attackers to access internal video feeds. Seems like an excellent vector to grab someone's credentials.
Also, ironically, one of the people mentioned in the WaPo article who discovered these vulnerabilities used to work for Google.
edit: Aaron's reading comprehension is evidently VERY LOW today ;)
> (We don’t know what this button does… and we were afraid to test it :-))
How do you hack HVACs and not know what an after hours button does?
(It extends operation of the system so that if you're working after hours you won't freeze/boil to death, without wasting energy running the HVAC out of hours when no one is there.)
Their hack wasn't really specific to HVAC systems, though. It was grabbing a poorly secured password file from a web server more than anything HVAC-specific.
Posting the complete details of your felonious actions on the internet = not bright.
Note that it doesn't matter whether Google is cool with your actions, after the fact. What matters is whether the local prosecutor is cool with your actions, or whether he needs an extra easy slamdunk conviction.
It probably helps that "Wharf 7" is in Australia [1] and the access was from the US. I can't imagine any officer in Pyrmont Police Station being too keen on the paperwork involved in following up an incident of this magnitude!
I know some of the guys @ Cylance, they're good people. They've done a lot of good work regarding embedded and grid security awareness. It is pretty funny to see what people leave unprotected on the internet when they usually have pretty good security practices.
In a situation like this, I'm going to guess that "facilities" was run as a fiefdom and its network presence was obfuscated from infosec staff. Or in the worst case, infosec was told to leave it alone...
Unfortunately this is far from an isolated issue. There are a multitude of BMSes and control systems out there where security has had next to zero consideration. Traditionally these systems have sat on isolated networks and favoured serial communication. Unfortunately, many of the people who have spent the majority of their lives designing, installing and deploying these systems have very little exposure to even the most basic network security principles.
When you consider that these systems have complete control over many environments - signal distribution, HVAC, occupancy sensing, motor control for things such as dropping 3-tonne screens from roofs, even occasionally extending to physical access control - this is a very scary thought.
I'm impressed that they had the balls to actively compromise the device before reporting it to Google... under normal circumstances, wouldn't most companies go after you in court for a CFAA violation or somesuch?
You certainly see lots of examples of lawsuits over changing numbers in URLs, so you'd figure downloading configuration info from a machine and then reversing a password would definitely provide grounds for a suit.
Google Sydney has servers on site, but they may well just be local productivity aids (mirrors for development, etc.). Google generally don't publicise where their servers are or what they're for. Even if the servers are just like any standard office's servers, this exploit could result in some serious issues.
When I was at Google Sydney a few years ago for an internship, the AC died, prompting an interesting response. The server temps were rising to unsafe levels and the AC wasn't expected to come back in time. The MacGyver solution was to buy portable AC units and pump the heat into the coders' workspace. That was a distinctly unpleasant afternoon =]
If the machines weren't important for production or productivity I'm certain they'd save us the hassle and shut them down. If nothing else, abuse of this office's AC system could severely impact the productivity of the office and spring dozens of people into action.
Whilst not under the usual purview of the rewards program, I'd still think it's noteworthy of recognition.
I wonder if that was around the same time I visited that office, ~Feb 2011.
The aircon was clearly over capacity then, and there were portable air conditioners scattered around the floor I was on, with flexible ducting feeding up to the return ducts in the ceiling.
I assume they've fixed that by now; I know they've gone through at least one remodel since.
Well, it's actually a bit more complicated than that. The bug is definitely something we wanted to know about, and we're thankful for the report. That said, there are some constraints that we put in place for the reward program to protect researchers from harm.
For example, we don't want physical security or the police second-guessing the intent of someone trying to sneak into one of our buildings - so we set a very clear scope for the program, pragmatically focusing on our user-facing applications and excluding things such as attacks on our facilities and corporate infrastructure. It's a broad exclusion, but it's hard to come up with something finer-grained yet clear enough.
In the same vein, we ask researchers not to go after any systems unless it's perfectly clear that the application is owned and operated by us - for example, because it's in an IP range registered to us. Again, while this may be more limiting than we'd like, it protects the community against overly litigious parties if the system turns out to belong to somebody else.
In several unusually serious cases, we have made case-by-case exceptions and have paid external researchers for nominally non-qualifying bugs; but it's a tricky balance, and we use this power very cautiously.
Source: I authored a good chunk of the current rules for Google VRP ;-)
I can definitely see the reasoning behind not wanting to encourage people to don their black ski mask and attempt to weasel their way into the building beneath the cloak of "security researcher." You are absolutely right, the police are not going to split hairs attempting to decipher the intentions of the individual and will act swiftly to neutralize the threat.
However, this case is different. While it was absolutely an attack on Google's infrastructure, it was discovered through a vulnerable external web service. And even though the control panel application is not a Google product, it stores user passwords in a trivially decodable form - effectively clear text. At the very least, Google should be responsible for vetting third-party vendors so that they store passwords using one-way hashes ;).
If you ask me, this should be one of those special cases!
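For contrast with the base64 "encryption" in the vendor's product, here's what a one-way scheme looks like - a minimal sketch using PBKDF2 from the Python standard library (the function names and iteration count are my own choices for illustration, not anything the vendor uses):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). PBKDF2-HMAC-SHA256 is one-way:
    the stored digest cannot be decoded back into the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("guess", salt, digest))    # False
```

Had the control panel stored something like `(salt, digest)` instead of base64, pulling the config file would have yielded nothing directly usable.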
I'm surprised there aren't botnets running on these embedded devices... yet. Probably because most people don't leave it openly available on the internet.
There are millions of rarely updated devices... printers, security systems, fire alarms, cameras, etc... the list goes on and on.
If Google can fall victim to an ICS attack, anyone can.
Did Google write this software? If not, it's kind of like writing "Google locks vulnerable to lock picks". Well yeah, just like every other pin tumbler lock ever made.
I think the point was that if any company should know which internal tools are exposing web servers to the outside world, should be capable of auditing its own security, and should easily understand what that software is doing and how to secure it, it's Google - a company whose primary output is web software.
No they didn't. This is actually run by a third party as Google does not own these offices. FWIW Google has a pretty decent security team, although for some reason most of them are arrogant assholes (e.g. Tavis Ormandy)
Makes me all the more happy that I vetoed a vendor's request to expose one of these systems to the Internet at one of my customers. They asked, and I laughed them out of the room.