This is basically a PR piece on how careful they're being and how "seriously" they're treating it.
> Uncovering the full scope of the attack took significant forensic work, the CEO said. However, despite the RAT's presence on the Starwood IT system, at that point, there was no evidence that unauthorized parties had accessed customer data located in Starwood's guest reservation database.
> By the next month, in October, the forensic firm also found Mimikatz [...]. The tool was most likely used to help hackers acquire passwords for other Starwood systems and help them move to other parts of the IT network.
> Yet again, investigators didn't find evidence that hackers had accessed customer data.
> Yet again, there was still no evidence of hackers accessing customer data.
With infrastructure so compromised, and so few protective measures, the default assumption has to be that some kind of customer data was accessed. Spin at its worst.
And while I haven't watched the hearing, you'd expect a post-mortem to include how they gained access in the first place, and measures to prevent it in future. Even if the CEO didn't elaborate, you'd expect a reporter to ask similar questions, and at least point out that nothing was said.
I also like how there's no mention of revoking the admins' credentials after Guardium detected the anomaly. I would say it's obvious that they changed the password, but then I may be giving them too much credit.
> there was no evidence that unauthorized parties had accessed customer data located in Starwood's guest reservation database.
This has to be one of the most disingenuous phrases one could possibly utter about an event like this. The fact that the attacker left no evidence (or that you failed to identify whatever evidence was left behind) in no way assures people that their data wasn't stolen. It's fallacious and it reeks of negligence and a complete disregard for accountability.
I'm never staying at a Marriott again if I can help it. It's not much, but it's better than nothing... Last year I hosted a corporate event at a Marriott in NY and we paid north of $15k.
I don't think that's how to read it. They probably did not assume that at all. But they are not going to go public with any information until they have confirmation.
You're reading a post-mortem, not necessarily exactly what they were thinking at the time.
"The Guardium alert was triggered by a query from an
administrator's account to return the count of rows from a
table in the database," Sorenson said.
Such queries are considered dangerous because the software
that runs on top of a database doesn't usually need to make
them.
"Dangerous" is probably the wrong word - anomalous is a better one. Most software doesn't need to know, say, how many entries there are in the Users table. It generally is querying for a specific user, or a range of users with some particular property.
It sounds like they had some software watching for any out-of-the-ordinary queries, which they then manually verified with the apparent person who was executing them. When that person had no idea what they were talking about, that's when the red flags went up.
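For a rough idea of what that kind of rule looks like, here is a minimal sketch in Python that flags bare full-table counts, assuming the statements have already been pulled out of a query log. Guardium itself works quite differently (agents, policies, session capture), and the table and column names here are made up.

    import re

    # Flag statements that count every row of a table with no WHERE clause.
    # This only illustrates the "anomalous query" idea; real DAM products
    # apply far richer policies than a single regex.
    FULL_TABLE_COUNT = re.compile(
        r"^\s*SELECT\s+COUNT\(\s*\*\s*\)\s+FROM\s+(\w+)\s*;?\s*$",
        re.IGNORECASE,
    )

    def flag_anomalous(statements):
        """Return (statement, table) pairs for bare full-table counts."""
        hits = []
        for stmt in statements:
            m = FULL_TABLE_COUNT.match(stmt)
            if m:
                hits.append((stmt, m.group(1)))
        return hits

    # Hypothetical examples: the first looks like normal application
    # traffic, the second is the sort of query described in the article.
    sample = [
        "SELECT id, email FROM guests WHERE id = 42;",
        "SELECT COUNT(*) FROM guests;",
    ]
    print(flag_anomalous(sample))  # -> [('SELECT COUNT(*) FROM guests;', 'guests')]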
I do it several times per day. When you are maintaining a garbage application that is completely undersized, absurdly open in terms of filtering, and extremely badly designed, it's the only way to manage the mess, not to mention the times when I have to fix the DB by hand, replaying (with some manual tweaks) queries normally executed by the application.
And unfortunately, I don't think that's so uncommon.
I was the lone technical hire at a startup I was at a couple of years ago. We ran our app on top of Postgres, and I used a native macOS Postgres client to keep an eye on our database (accessed via an SSH tunnel). The startup was very high-touch: we built personal relationships with our clients.
Unsurprisingly, our CEO loved looking at the numbers and the data. We could see conversion, and keep an eye on what our users were up to and how they were using our product. I was the sole engineer and I was only working 3 days a week. I didn't have time to build monitoring dashboards and all that, so instead I spent half an hour teaching our CEO the basics of how to write Postgres read queries, and how to pull the results of a query out into CSV and then into Excel. That was probably the most productive half hour of my time at that startup - it was a huge enabler for him, allowing him to explore the data in arbitrary ways. I showed him how to sort, filter, select specific fields and join tables.
If I had instead built a specialised dashboard, it would have taken me away from our product. And I wouldn't have any idea what queries to display - any dashboard I made would have been worse for him than just querying the database directly. I'm still convinced that I made the right call.
(We did eventually build a dedicated admin panel, but months later - only when we actually needed it because some common tasks were too difficult through SQL alone).
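For what it's worth, the query-to-CSV part is only a few lines if you ever want to script it instead of using a desktop client. A minimal sketch, assuming psycopg2, a read-only login, and made-up table/column names (the real setup above was a GUI client over an SSH tunnel, not a script):

    import psycopg2

    conn = psycopg2.connect("dbname=app user=readonly host=localhost")

    # Hypothetical reporting query; adjust tables/columns to your schema.
    query = """
        SELECT u.email, u.created_at, p.plan
        FROM users u
        JOIN purchases p ON p.user_id = u.id
        WHERE u.created_at >= '2018-01-01'
    """

    with conn.cursor() as cur, open("signups.csv", "w") as out:
        # COPY (...) TO STDOUT has the server produce the CSV and stream it
        # back, so large result sets never get materialised in Python.
        cur.copy_expert(f"COPY ({query}) TO STDOUT WITH CSV HEADER", out)

    conn.close()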
Nothing you describe there requires write/admin access though.
I'm not religious about this. My team automates anything we have to do more than a couple of times, and from time to time we do have to manually alter something in a prod db. Do it often/long enough though and someone _will_ screw something up.
It's not particularly common, but I've been in somewhat of a similar situation. If you're a senior techy person, I'd suggest taking a bit of time to either find a new position (as I'm sure all of HN would echo, simply for your sanity) or to assemble a report about how you spend your hours each day and make sure it bubbles up past your manager to someone who deals with money.
That sort of situation is constantly costing the company money for no good reason, so it'd be fiscally sound to spell out these costs and contrast them with any estimate you could assemble for making the application work properly again. Manually replaying queries is expensive, terrible, terrifying, and can potentially break data integrity really badly.
> assemble a report about how you spend your hours each day and make sure it bubbles up past your manager to someone who deals with money
Assumes poster works at a company where management cares past "Does it work?"
That's less rare than it used to be, but HN sometimes underappreciates how much of the economy is still governed by companies with CFOs who couldn't tell you the first thing about their enterprise infrastructure.
They -always- care about money. That is why I said to focus it on hours and pass it to someone who deals with money. If you focus it on best practices and tech trends, there may be no one who will care, but (except in real dumpster-fire companies, which often includes startups) someone always cares about the money.
For single-host MySQL setups there is the MySQL audit plugin and related configuration. It's not perfect, but if you already have other SIEM/alert infrastructure it should glue in OK. You'll want to test it to make sure it doesn't interfere with expected SLAs.
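The gluing part can be as dumb as a script that reads the audit output and posts matching events to whatever alerting you already have. A rough sketch, assuming the audit records have already been normalised to JSON lines with "user" and "query" fields (the actual format depends on the plugin and its options), and with a made-up webhook URL and account list:

    import json
    import urllib.request

    WEBHOOK = "https://alerts.example.internal/hook"  # hypothetical
    WATCHED_USERS = {"root", "dba_admin"}              # hypothetical admin accounts

    def forward_admin_activity(audit_file):
        """Post every audit event from a watched account to the alert hook."""
        with open(audit_file) as f:
            for line in f:
                event = json.loads(line)
                if event.get("user") in WATCHED_USERS:
                    req = urllib.request.Request(
                        WEBHOOK,
                        data=json.dumps(event).encode(),
                        headers={"Content-Type": "application/json"},
                    )
                    urllib.request.urlopen(req)

    forward_admin_activity("/var/log/mysql/audit.jsonl")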
You point out what alarmed me. An unknown entity had admin access. That's five-alarm stuff... but they waited three days to look into it. Pasted here again because of formatting:
"As part of our investigation into the alert, we learned that the individual whose credentials were used had not actually made the query,"
Yea... that's like saying "Oh yea, this attacker did have root access and we checked the logs of what he did and apparently he just logged in and then three hours later he edited some log file and logged out... it's weird he didn't do anything while on the server..."
Your question is unclear. IBM designed this product to audit for things that look like signs of an attack/probing and if it generated a lot of false alerts, the product would be worthless.
How often do you design a database query to get the count of all rows from a table? It sounds like an uncommon use case, but probably one you could mute from the audit if you do it intentionally.
I just wanted to mention that this product is capable of capturing queries and matching them to a particular login session. Given that, there may be other reasons why this particular query stood out after reviewing administrative queries for irregularities, or from using some of the analytics available in the IBM product. There are a couple of ways this could have come up, and it depends on a particular security team's policy and review strategy. They may have been able to look at the rest of the queries in the session and get an immediate feel that something totally irregular was happening. Generally you get used to the application-traffic / change-request / administrative-surgery style of queries after monitoring a particular database for a while, so when something new comes up it immediately looks weird, like a log message from a familiar daemon you've never seen before.
Doesn't make sense to query the db for a count of the number of records that are returned. You can figure that out from the result set inside your application.
If the result set is "all users", you probably don't want to pull tens or hundreds of millions of records just to count them up if you're showing 100 per page.
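That pagination pattern looks roughly like the sketch below (hypothetical users table, psycopg2 assumed): one count query for the page total, one LIMIT/OFFSET query for the rows actually shown.

    import psycopg2

    PAGE_SIZE = 100

    def fetch_user_page(conn, page):
        """Return (total_rows, rows) for one page of an 'all users' listing."""
        with conn.cursor() as cur:
            # The total is needed to render page numbers without pulling every row.
            cur.execute("SELECT COUNT(*) FROM users")
            total = cur.fetchone()[0]

            cur.execute(
                "SELECT id, email FROM users ORDER BY id LIMIT %s OFFSET %s",
                (PAGE_SIZE, (page - 1) * PAGE_SIZE),
            )
            return total, cur.fetchall()

    # Usage sketch:
    # conn = psycopg2.connect("dbname=app user=readonly host=localhost")
    # total, rows = fetch_user_page(conn, page=1)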
I think something like an exact count over hundreds of millions of records - a result set you'd otherwise have to paginate through - is anomalous and not something that their application would do.
That is just regular DAM (Database Activity Monitoring).
I know few companies use it, and on even fewer DBs, but it can really help to identify "rogue access" to your data. It takes (a lot of) time to tune it to your specific needs and DB access patterns, but once it is up and running it can trigger at the right time (as in the article).
There are different kinds of DAM: inline ones that just read the network traffic (and miss the direct access accounts would have from the DB server itself), and ones that monitor the DB service's memory directly. Of course you could start way cheaper / simpler with some proper logging from the DB software itself.
It is a growing security area and, once tamed, a nice additional information source.
All properly configured applications should be running with least required db credentials, limiting what they can do. That Accenture was monitoring all administrative level queries against the db makes perfect sense in a locked down production environment.
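What "least required db credentials" boils down to, in Postgres terms at least, is something like the sketch below: the app role gets only the DML it needs on its own tables, and humans doing reporting get a separate read-only role. Role and table names are made up; run the equivalent once as a privileged user.

    import psycopg2

    # Hypothetical role/table names; adjust to your schema.
    STATEMENTS = [
        "CREATE ROLE app_user LOGIN PASSWORD 'change-me'",
        "GRANT SELECT, INSERT, UPDATE ON guests, reservations TO app_user",
        "CREATE ROLE reporting LOGIN PASSWORD 'change-me'",
        "GRANT SELECT ON guests, reservations TO reporting",
    ]

    conn = psycopg2.connect("dbname=app user=postgres host=localhost")
    conn.autocommit = True
    with conn.cursor() as cur:
        for stmt in STATEMENTS:
            cur.execute(stmt)
    conn.close()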
Since the table in question isn't mentioned in that quote, is it possible that what was really anomalous was a number of similar queries on somewhat unusual tables?
Meaning, something that indicated some software was doing queries that were both unusual and somewhat high frequency?
> The Marriott CEO said again that investigative efforts have yet to uncover evidence to suggest that hackers gained access to the encryption key used to encrypt the payment card numbers, meaning that most of the compromised payment card numbers are still useless to attackers.
So the clearly sophisticated hacker(s) had access to the network for four years, yet somehow forgot to grab the keys needed to decrypt the millions of records they went through all that effort to steal?
I think you are misunderstanding... They got dumps of a couple of databases, which they encrypted, downloaded, and presumably decrypted. Those dumps themselves contained credit card numbers that were encrypted by Marriott with a key they don't think the attackers got.