> Those sources said the breach appears to have started when the attackers somehow gained access to the company’s Gitlab code repository, and in that repository was a token or credential that gave the bad guys access to Sisense’s Amazon S3 buckets in the cloud.
So plaintext AWS credentials checked into source control.
> Both sources said the attackers used the S3 access to copy and exfiltrate several terabytes worth of Sisense customer data, which apparently included millions of access tokens, email account passwords, and even SSL certificates.
And those credentials can access tons of other credentials, also stored in plaintext.
Even if the security policies didn’t pick this up, didn’t they at least have billing alerts for the terabytes of AWS bandwidth?
Isn't S3 egress billed at something like $0.01/GB? Even if you assume they exfiltrated 100TB, you're only talking about maybe $1k. I can't imagine they'd have alerts fine-tuned enough to catch that without driving their on-call staff crazy.
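To put numbers on it, here's the back-of-envelope in Python. The rates are assumptions: $0.01/GB is the commenter's figure, and AWS's published standard internet-egress tier has historically been closer to $0.09/GB, so both are shown.

```python
# Back-of-envelope S3 egress cost. Both rates are assumptions:
# $0.01/GB is the commenter's figure; AWS's standard internet-egress
# tier has historically been closer to $0.09/GB.
TB = 1024  # GB per TB

def egress_cost(terabytes, rate_per_gb):
    """Dollar cost of transferring `terabytes` out at a flat per-GB rate."""
    return terabytes * TB * rate_per_gb

low = egress_cost(100, 0.01)   # commenter's assumed rate
high = egress_cost(100, 0.09)  # closer to the published standard rate

print(f"100 TB at $0.01/GB: ${low:,.0f}")
print(f"100 TB at $0.09/GB: ${high:,.0f}")
```

Either way the point stands: even at the higher rate, 100 TB of egress is under $10k, which is unlikely to trip a coarse billing alert at a company of that size.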
My previous team argued with me when I said to avoid using Terraform with Vault while our state was in S3.
I eventually got my way after 18 months and found out it was really bad: hard-coded S3 access keys that could read any bucket were floating around, and using Terraform with Vault had pulled all sorts of root-level creds into the plaintext state files.
Lots of other ways to control this risk, but I removed Vault from TF.
State encryption won't solve this problem, it will simply move the key management problem further down the stack. I.e. to the users who are least equipped to deal with it.
The problem is storing anything sensitive at all in terraform. Terraform is terrible for secrets.
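To make this concrete: any value Terraform reads from a data source, including a Vault read, gets written verbatim into the state file as JSON. Here's a minimal sketch; the state fragment below is a hand-written approximation of the real state schema, not captured output.

```python
import json

# Hand-written approximation of a terraform.tfstate fragment showing how a
# Vault data-source read lands in state as plain JSON (illustrative only).
SAMPLE_STATE = """
{
  "resources": [
    {
      "mode": "data",
      "type": "vault_generic_secret",
      "name": "db",
      "instances": [
        {"attributes": {"data_json": "{\\"password\\": \\"hunter2\\"}"}}
      ]
    }
  ]
}
"""

def plaintext_vault_reads(state_text):
    """Return the attributes of any Vault data sources found in a state file."""
    state = json.loads(state_text)
    hits = []
    for res in state.get("resources", []):
        if res.get("mode") == "data" and res.get("type", "").startswith("vault_"):
            for inst in res.get("instances", []):
                hits.append(inst.get("attributes", {}))
    return hits

hits = plaintext_vault_reads(SAMPLE_STATE)
print(hits)  # the Vault secret sits in the state file in plain text
```

A scanner like this in CI is a cheap tripwire, but the real fix is the one above: keep secrets out of Terraform entirely.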
Do you think a feature like this is tied to distancing OpenTofu from Hashicorp, e.g., AWS & friends have less incentive to do enterprise feature gating around security?
State encryption was one of those long-requested features[0] (I had it on my ideas list for years[1]) that Hashicorp didn't have much incentive to build. I don't think it has to do with distancing OpenTofu as such, but with the OpenTofu team prioritizing the right things that customers actually need.
The 'much incentive' part is the interesting thing to me. Hashicorp did build encryption-at-rest for state into their paid remote cloud service. They explicitly recommend using the paid service instead when security is required, and instead of having both versions secure, the paid form simply disables the local one [1], drawing a strong line here between paid/secure and unpaid/insecure.
So I'm wondering if that created a disincentive, from the perspective of upselling enterprises on security & compliance. OpenTofu community members aren't relying on selling a remote management service, and for a cloud provider, more usage in general is better, and that includes security for everyone. I could be off; it's not quite clear to me the more I look at it.
Encryption at rest of local terraform state seems beyond the threat model & compliance needs of a pre-PMF consumer startup
It's a lot harder to be an enterprise company or b2b startup (SOC2) and brush that off in front of knowledgeable auditors. I'm guessing most would miss it and rely on self-reporting, which in turn raises how 'loud' this issue is for most OSS terraform users.
We aren't heavy terraform users, so I'm not sure of the threat modeling here. Understanding these kinds of product decisions and OSS security ramifications is a pet interest of mine, and this is a fascinating one!
Hard agree. When I started my company we were playing with Terraform but left it pretty early and before taking anything into production because of the issues with state management and security.
We moved on, but there are areas where I wouldn’t mind poking at it. I’ve been thinking OpenTofu is where I’d start afresh with TF.
The thing is that most HN folks, competent IT people, see tools like Terraform, Ansible, Docker et al. as neat tools that simplify and speed up annoying chores, so you get to spend your time solving actual problems. What it means to managers is that task X has become so easy that you can hire complete idiots who would've failed without the tool but now manage to cobble something together quickly that appears to work well. Except you now have passwords and keys in repos, open S3 buckets, ...
> didn’t they at least have billing alerts for the terabytes of AWS bandwidth?
AWS is notoriously bad about this. You can't set hard limits. Even if you set a limit, your notification won't come in until long after that limit has been hit and blown through.
My business has not (yet?) been compromised, but I did accidentally send my S3 bill into the stratosphere in the past 12-18 months by miscalculating API charges for some intra-AWS changes I was making with lifecycle management and bucket-to-bucket moves.
I wrote to support asking for a refund and explained my calculations. I pointed out where I had made the mistake, and asked for some relief on the bill. Honesty went a long way in that case, and they gave me a credit on the bill equal to about 70% of the total S3 spend. So they ate 70% of my mistake. That was more than fair enough; it was my error.
The amounts weren’t huge. They could have eaten the entire amount and it wouldn’t have even been a rounding error on their books, but I didn’t ask and I didn’t want to expect that.
On the other hand, once they had source code access, does it matter?
People play make believe about this stuff. “The practice I feel strongly about, it’s secure. And the other guy is a moron.” The XZ crisis shows how little credential isolation matters. Maybe even at Google, you could work there and feel very strongly about their security practices, and then governments have access to everything anyway, at the front end proxies or whatever. It’s important to pay attention to security, but I think programmer resources should focus on “cheating,” not “security.”
Yeah it does, because you left the key for the place where you stored everyone else's keys out in the relative open. It reduces your defensive depth to 1, as in, 1 compromise was all it took.
For a tortured analogy, let's say you keep your guns in a gun-safe in your garage, and then hang the key for the safe in the garage. With a sign saying "gun safe key".
Not leaving the safe key out in the open doesn't guarantee your guns are impervious, of course, a sufficiently motivated and skilled burglar willing to put time and effort into breaking open the safe would be able to access your guns.
But, that takes time. And it makes noise. Both of which increase the chance of being caught.
But if the key is right there? Easy as.
Yep, they had obtained access to the repo, like the burglar had obtained access to the garage.
But if the AWS secrets weren't just hanging on the wall, then progressing from Gitlab compromise to S3 compromise would a) be harder and b) take longer, both of which increase the chance of discovery.
Just saying, if you care about the security of your guns and/or customer data, don't leave the key hanging about in plain sight as a good first step.
Make them compromise multiple things, not just one thing.
I'm desperately trying to work encryption at rest into this analogy. Um... trigger locks?
> Yep, they had obtained access to the repo, like the burglar had obtained access to the garage.
I'm not going to talk in analogies. There's no more depth at the stage that the source code is compromised, regardless of how many credentials are put where. It's sort of a matter of opinion. I mean this is what I am saying by make-believe: everything you say is, quite literally, a bunch of analogies, because there isn't any actual hard evidence to any of it, it is wood carving traditions. Of course I agree that they're good practices, but man, there are unlimited good security practices, and only limited money and time.
Another point of view is, it's easy to provide generalized, vendor-colored advice about credential security. You know, just buy AWS KMS whatever the fuck, right? It is hard to write software that accommodates arbitrary changes to authorization requirements & stories. There are a lot of commenters here giving these Sisense people a hard time, and I doubt they're stupid, more so that they are really unlucky.
I feel like the comments miss the forest for the trees. Most startups have admin roles, and if you're using something like Next.js, Elixir, Ruby on Rails or Go, your application's API methods have ad-hoc, in-source policy enforcement and definitions.
This is what happened here: the developer's GitLab credentials amounted to an admin role. It could happen in any scheme. Can you develop an application without an admin role? Not a SaaS. Now you should see why it's okay for Postgres to have all its source code open, but not your SaaS company's, for the purposes of security.
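A toy sketch of what "ad-hoc, in-source policy enforcement" typically looks like (all names here are hypothetical, not from any real codebase): the authorization rules live in application code, so whoever can read the source sees the complete map of what an admin credential can do.

```python
# Toy illustration of ad-hoc, in-source policy enforcement. All handler
# and role names are hypothetical. The point: the policy IS the source,
# so leaking the source leaks the full authorization surface.
from functools import wraps

def require_role(role):
    """Decorator guarding a handler with an inline role check."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                return {"status": 403, "error": "forbidden"}
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def export_all_customers(user):
    # Admin-only bulk export: exactly the capability an attacker wants.
    return {"status": 200, "rows": ["...customer data..."]}

print(export_all_customers({"roles": ["admin"]})["status"])   # 200
print(export_all_customers({"roles": ["viewer"]})["status"])  # 403
```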
I know this says credentials were in a repo but I wonder if this has to do with their self-hosted (EE?) GitLab instance. I've found mine (CE) sort of difficult to upgrade sometimes.
Not sure how they store keys/tokens but it is sort of easy to leak to logs, and perhaps there was some exploit.
Should maybe be using cloud hosted GitLab at their level of scale, even though that option has had problems with uptime among other issues.
A past company used Sisense because the new manager came in penny wise and pound foolish. However, the other ETL/visualization products (e.g. Tableau) truly were expensive. I found Sisense's ETL / viz product to be meh.
This sounds nuts. Are people still entrusting bearer tokens to third parties? I haven't seen that done in at least 10 years. Surely it's prohibited by the ToS of the first party providers?
> If they are telling people to reset credentials, that means it was not encrypted.
I'm not sure that follows. If I leaked customer information, even encrypted, I believe it would still be responsible to tell people and give them the option to react. Encryption is not foolproof technology. Vulnerable to cracking, or perhaps the key could simply be leaked as well.
> “If they are hosting customer data on a third-party system like Amazon, it better damn well be encrypted,” Weaver said. “If they are telling people to reset credentials, that means it was not encrypted. So mistake number one is leaving Amazon credentials in your Git archive. Mistake number two is using S3 without using encryption on top of it. The former is bad but forgivable, but the latter given their business is unforgivable.”
I've got all our buckets encrypted-at-rest with a CMK, but if someone compromised a key, the role that key is connected to would necessarily have to have permission to decrypt the data as well. At-rest encryption just means nobody's going to buy a used hard drive and suddenly have access to gigabytes of healthcare or financial data or passwords. Or am I missing some nuance or implication of the conversation?
That's basically it. Encryption is a little like 2-factor auth in that way: if you really only have one control on reading the data, encryption isn't a solution for that particular risk.
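A toy model of the point (this is NOT real cryptography, just an illustration of the access pattern): server-side encryption is transparent to any principal the key policy allows to decrypt, so one leaked credential for an authorized role still yields plaintext.

```python
import secrets

# Toy model (XOR "encryption", NOT real crypto) of why at-rest encryption
# doesn't stop a compromised-but-authorized role: decryption is transparent
# to anyone the key policy allows, so one leaked credential still suffices.

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

KMS_KEY = secrets.token_bytes(64)  # held by the "KMS" service, never by clients

def s3_put(bucket, obj):
    # Server-side encryption: only ciphertext ever touches the disk.
    bucket["blob"] = xor(obj, KMS_KEY)

def s3_get(bucket, caller_can_decrypt):
    # The only gate is the key-policy check; pass it and you get plaintext.
    if not caller_can_decrypt:
        raise PermissionError("kms:Decrypt denied")
    return xor(bucket["blob"], KMS_KEY)

bucket = {}
s3_put(bucket, b"customer access tokens")
assert bucket["blob"] != b"customer access tokens"  # encrypted at rest...
print(s3_get(bucket, caller_can_decrypt=True))      # ...but transparent to the role
```

The stolen disk is protected; the stolen IAM credential is not, and the latter is what actually happened here.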
The way I see it -- what I want to protect against is an attacker being able to slurp up all of my online, potentially accessible protected data, while at the same time, I generally want to access it for legitimate purposes. And that's difficult -- so what we wind up doing is making it expensive/time consuming/audited to access the protected data.
It’s both a clever headline and also one that was inscrutable until I looked at the article (I suppose I should know CISA and CISO but I’ve reached the stage in life where I really don’t want to keep learning the damn acronyms all the time).
"""
Earlier today, a public relations firm working with Sisense reached out to learn if KrebsOnSecurity planned to publish any further updates on their breach (KrebsOnSecurity posted a screenshot of the CISO’s customer email to both LinkedIn and Mastodon on Wednesday evening). The PR rep said Sisense wanted to make sure they had an opportunity to comment before the story ran.
But when confronted with the details shared by my sources, Sisense apparently changed its mind.
“After consulting with Sisense, they have told me that they don’t wish to respond,” the PR rep said in an emailed reply.
"""
I think adding the comment that they “changed their mind” was unreasonable because they didn’t change their mind. They wanted an opportunity which they got but didn’t exercise. They didn’t say “don’t contact us for comment in the future.” Which would be “changing their mind”. Their comment was simply “no comment”. This seems completely reasonable to me.
Because "no comment" is less of a plan than even mindless PR pablum, that a PR agency should have been able to churn out without thinking.
Unless Sisense (a) had no prepared PR plan for this scenario and/or (b) has no idea what actually happened, so are still terrified to legally expose themselves by putting any words to paper.
Do you think "no comment" is a good plan, when you've just sent out an emergency email to all of your customers telling them to rotate any credentials they entrusted to you?
> ...a company entrusted with so many sensitive logins should absolutely be encrypting that information.
> “If they are hosting customer data on a third-party system like Amazon, it better damn well be encrypted,” Weaver said. “If they are telling people to reset credentials, that means it was not encrypted. So mistake number one is leaving Amazon credentials in your Git archive. Mistake number two is using S3 without using encryption on top of it. The former is bad but forgivable, but the latter given their business is unforgivable.”
The penalties for gross negligence are not high enough. These guys should be sued into oblivion; I hope their insurance limits are high enough.
It probably was, but Amazon's bucket checkbox encryption doesn't protect against an authorized user reading the data.
It's trivially easy to encrypt things on AWS, but the attack you're protecting against is someone walking off with the HD, which is not really the important threat. Well, that and making sure that the auditor can check off 'encrypted at rest' on the sheet.