Despite decades of hacking attacks, companies leave sensitive data unprotected (propublica.org)
137 points by danso on Jan 27, 2022 | hide | past | favorite | 43 comments


As I see it there are two things at play here that feed into one another:

1. The re-framing by financial institutions of them being defrauded as "identity theft" and pushing this responsibility onto their customers.

2. Because of the above, the data can be valuable, incentivizing the compromise.

Re 1: Note that credit card companies have had this problem for ages and treated it as fraud for decades, which is why established card companies can have very reasonable processes to cancel transactions, mark some as fraudulent, and probably why they have reversible transactions. However, because of the rest of the industry card companies now appear to be jumping on board with the "identity theft" concept.

By collectively not treating it seriously and essentially letting it happen and inconveniencing their customers instead of the vendors, banks have essentially washed their hands of it and give zero incentives to the vendors to seriously try to protect the data. If every transaction marked fraudulent meant the vendor didn't get the money, there would be a lot more serious action here.


“Damn you, masquerading as hundreds, if not thousands, of customers! How dare you steal their identities!”

— Mitchell & Webb, Identity Theft (https://www.youtube.com/watch?v=-c57WKxeELY)


Lol. This is a clever way of putting it. Actually helped me understand some of this


The video should be required to accompany any article about “identity theft” to raise political awareness that a business being defrauded should be the business’s problem.

Edit: the business and law enforcement/court system’s problem. But certainly not an uninvolved individual’s problem.


> The re-framing by financial institutions of them being defrauded as "identity theft" and pushing this responsibility onto their customers.

as with so many other things, elite impunity is the fundamental problem.

for the banks to be putting their own failures of due diligence on consumers’ heads is an outrage and an absurdity. it shouldn’t even be possible.

if they screw up, they should face the consequences of their incompetence. but they don’t. after the Equifax breach, the CEO retired with $90M.

$90M for presiding over a corporation whose entire business model is an exemption from defamation law, screwing it up, blaming someone else down the line, and exposing millions to so-called “identity theft”: in other words, millions of people now have an uncompensated permanent commitment to doing due diligence for countless banks, car dealerships, and even Walmarts throughout the country.

> collectively not treating it seriously and essentially letting it happen

the modus operandi for the whole problem space


I'm not sure I agree with your last point. The merchants have historically been powerless, but the merchant-side bank ends up patching things up to cover its own costs. Often this patching means penalizing merchants selling products that are fun to buy fraudulently for failures that originated at, or could only have been detected by, the buyer's bank.


agree - add to that a system side effect: with strong consumer protection, easily reversed transactions, and the financial policy to cover the dollars involved, you get a big increase in crooked insiders running the fraudulent transactions. Without evidence, I believe that Visa and MasterCard, in the early days, found that the massive money they made on consumer credit was worth the sort-of-unstoppable insider scamming as a "cost of doing business"


> Note that credit card companies have had this problem for ages and treated it as fraud for decades, which is why established card companies can have very reasonable processes to cancel transactions, mark some as fraudulent, and probably why they have reversible transactions.

Do they do that of their own volition, or because there's some legal requirement forcing them to?


If someone steals from your debit card, they steal your money.

If someone steals from your credit card, that's not your money, it's the bank's money - the bank was trying to give you a loan and gave it to the wrong person. That's their problem.

If we didn't have this rule, there would be unlimited rampant fraud - you don't just lose all you have, you lose what you don't have. We would suddenly find out that we are a million dollars in debt for no reason.


Credit cards are heavily regulated in the USA and, yes, there is a legal requirement here.

No such requirements exists for debit cards. This is why I hate them. But from what I can tell, most people don't seem to have a problem getting fraud reversed.


Why treat anything seriously if it doesn't impact the board? I wouldn't either if I were in that position. For sure I'm going to hire consultants with coats made of certificates and then sleep tight. I have done what the law or insurance company wants to see and I have consultants as black sheep. What on earth do you expect me to do more? Better processes? Sure, let me hire more consultants wearing suits...


> I have done what the law or insurance company wants to see and I have consultants as black sheep.

You hit a great point. The law and insurance companies have a major impact on what companies do.

If it's illegal (and they'll get caught) or the insurance companies say "do this to get insurance or if you don't do it the insurance doesn't cover you" people will make change.

That right there is a way to bring change.

Call your gov reps.

Let's get insurance companies to put security controls into coverage policies.


The problem is always the same: to do things right, you need people who know what they are doing. Redundantly. Yet, most of us don't know what we are doing, so in practice we end up creating proxies for "the people who know what they are doing according to their certificates certify that I know what I'm doing". Because otherwise you wouldn't be accepted in a cool position, and we all want to be in a cool position. And that's how we end up with so much shit overflowing in the world, but people still pretend they have their own under control. Feeling greedy? Play pretend. A few hours later... Want to be accepted? Play pretend. A few hours later... Want to not be left out? Play pretend.


Except that there are real attacks, and the insurance companies will do postmortems to figure out why what they demanded didn't work. It will take a few rounds before they figure out how to train for this, though. More than a few rounds, because the bad guys are not stupid and keep thinking up new things that smart good people need to mitigate.


One thing I'm a bit pessimistic about is that insurance policies usually bring a lot of paperwork, and eventually it's just certificates on top of certificates. But again, maybe (a big maybe) this is still better than how things are going right now.

The best solution is for board members to respect their best technical people and let them create the processes best suited to their individual companies. Sadly this is too personal and usually dies when a couple of people jump ship.

Damn, management is so hard.


Hopefully that makes those certificates worthwhile someday. Right now they have earned a bad reputation, but with the right training behind them things could change.


You forget that the board is also full of slightly-more-official-looking guys wearing suits. Guys in suits often forget that there is more to life than appearances, and that there is a cold hard reality unaffected by spin. Ransomware is gonna ransomware regardless of how many boxes you check and how shiny your suit looks.


That's exactly what I'm trying to say. I don't have high hope for this.


I'm sure Experian's data breach went to the board. I suspect other boards have started asking questions to ensure they are not surprised by such things.


There is no punishment.

If you're holding a gold bar for me and you lose it, you owe me a gold bar.

If you're holding a photo of my drivers licence and you lose it, nothing happens.


This is what happens when there are zero legal repercussions for companies with sloppy data security. Regulatory capture strikes again.


Literally the only reasons any company invests actual time in data security are HIPAA, GDPR, and SOX. I keep wondering why people haven't demanded more regulation after all their SSNs got leaked


Because there are very few regulations that can effectively capture the intent of the rules instead of "tick boxes" that might or might not mean very much.

Sure, "has firewall" is pretty effective but how do you encapsulate how it should be managed effectively?

What happens when a system that was supposedly secure installed by a previous employee fails? The company's fault? How would they know? The employee's fault? Maybe they thought it was good but were simply wrong?

I think a more fundamental approach would be to set mandatory qualifications for IT workers/devs to ensure a base-level of security/understanding. I know great web devs who don't know about web app security - that shouldn't be possible. It wouldn't be perfect but it would be easier to do refreshers/regular testing for things that people should already have learned, just like train drivers do.


The same is true of legislation like SOX, which is very much a checkbox approach and has not solved everything related to financial reporting, by a longshot. But that doesn't mean regulation would be totally useless. From the article:

>The European Union has been operating under such a standard since May 2018. Known as the General Data Protection Regulation, the law requires companies to implement security measures to protect sensitive personal data and to promptly notify regulators and affected consumers when it gets compromised. Violations of the data protection rules can result in fines as high as 4% of a business’s annual worldwide sales. “You have to implement cybersecurity measures if you process personal data, and if you do not, you will have a legal problem,” said Stefan Hessel, a cybersecurity specialist in Germany at the Reuschlaw law firm.

>Such measures may in fact make it harder for hackers to ply their trade, if Pompompurin’s postings are any indication. In August he was asked on RaidForums why large collections of personal data always seem to come from the U.S. He responded: “Because its the easiest to get, other countries have load of protection laws & shit, in the US your address is basically public information no matter how hard you try not to be put on lists like this.”


>Because there are very few regulations that can effectively capture the intent of the rules instead of "tick boxes" that might or might not mean very much.

So HIPAA fines a company up to $50,000 per patient when a data leak occurs. They don't have to regulate how to secure the data, they just have to establish a fine with teeth requiring that companies secure their data with punishment when they don't.

Of course, if Congress applied HIPAA rules to everyone's data and actually enforced them, data leaks would for the most part stop.

https://www.hipaajournal.com/what-are-the-penalties-for-hipa...


Fines don't happen until they get caught. How long can a company go and how much can they make before they get caught? What happens to the executives? They just move on pointing to their old success numbers.

I run internal audits for a large org as part of a strike team when my company is acquiring smaller orgs. External auditors are a joke and it's incredibly easy to slip things by them. The only reason we catch stuff is because we assume full ownership as part of our takeover process and actually build and deploy product to find issues.


But your company wouldn't even have a line item to look for those problems if regulations didn't require it. It's not a fool-proof solution but it's at least a foot in the door for improvement


Look up the cap on fines per year. It's less than $2m.

I've consulted for healthcare companies where that is a literal rounding error on their bottom line.

They. Do. Not. Care.


There's also a criminal liability aspect to it. I've worked for healthcare companies too and they do care, at least enough to have it in the conversation and to include the HIPAA officer in those conversations. Nowhere I have worked have they been flippant about it.

I agree with you that the cap is too low. It should scale with gross profit or revenue. I suspect it's so that smaller companies won't get destroyed by fines, which leaves the larger ones largely unaffected, but I'm speculating.


It also doesn't help that these frameworks are often dated and don't align with modern best practices. Shops have the choice to check the boxes and do things the dumb way or to fill out page after page after page of special exception documentation for their auditors. Most take the easy way.

And that doesn't even cover the part where PCI, SOC 2, and SOX all have various bits that contradict or are incompatible with each other.

I've seen too many times where the head of security or IT or whatever picks a pre-made package off a shelf from one of the audit providers where they guarantee you will pass all of them. Then they follow it like it's law ultimately leading the swe/devop/sre groups to build out layers of shadow it/ops to actually get productive work done.

My work primarily is to jump into startups after they are acquired to make them "enterprise ready" for a bigger org, and it's always a unique shit show dealing with the preexisting war between their security/IT orgs and their actual product development orgs.


>there are very few regulations that can effectively capture the intent of the rules instead of "tick boxes" that might or might not mean very much.

>set mandatory qualifications for IT workers/devs to ensure a base-level of security/understanding

What is it about this regulation that prevents it becoming a useless box for IT pros to check?


I think just enough components of the problem are too abstract for most people to practically reason about.

HIPAA passed when people expected Clinton to push for health insurance improvements. The HITECH accompaniment passed in 2009 when health care was a huge issue in the wake of the 2008 disaster, and people expected the govt to crack down on big company malfeasance. Subjectively, I think 'keeping your health information secret because it should be secret' seems more viscerally compelling. SOX passed in the wake of Enron and WorldCom during the .com bust. The EU, broadly, seems less regulation averse than the US, but I'm no expert. That the US hasn't followed suit, despite the current backlash against social media and data tracking in general, is telling.

Most folks think someone getting ahold of their CC# is the worst-case scenario and they or someone they know has probably experienced it. It was probably resolved with a 5-minute phone call, and they probably blamed the last in-person retail transaction they executed before the fraudulent charges rather than some online company they bought a potholder from 18 months prior. They likely don't even consider the implications of someone using their SSN to open a mortgage, lease a boat, claim unemployment benefits, or work a year claiming total tax exemption on their W4.

Even many people who understand the privacy implications might not understand how frequently breaches happen, the practical steps to mitigate them, and whether those steps are proportional to the risk. Few could factually evaluate the inevitable industry FUD. I think it'd get way more pushback than the right to repair did in Massachusetts, and industry flung some pretty outrageous fear-mongering BS over that one: they implied non-proprietary car computer interfaces would result in women being stalked and raped. In a television commercial.

I think it's doable and very important that we do, but I completely understand why there hasn't been any popular grassroots uprising about it.


Who do you ask?

Politicians will say it's the companies' responsibility and that you should talk to them. Companies will say that they follow all data protection laws and that policy discussions should be up to the government.


There is also PCI, but that just makes you invest in an auditor.


I see a lot of comments about more punishment and regulation - but that could actually backfire and cause the problem to become worse.

More penalties would raise the stakes for the data, making it far more valuable to hackers. What's the value of data which, if exposed, can cause people to go to jail or lose lots of personal wealth? A lot, especially for ransom or blackmail.

Take HIPAA data as a prime example. What is the inherent value of health data to hackers - not much, for the most part. Maybe a little value if you get some public-figure data, but who cares what prescriptions I (a nobody) am on? The primary value to hackers of health data is exactly that it is very regulated and penalties for exposure are very high. This makes it great for blackmail and ransom. So now we have tons and tons of various nonsense "security" and "privacy" hoops we have to jump through and many $B's of cost related to protecting it. How many BS privacy policies have you gotten in paper form? How many privacy liability-waivers and other such docs have you been forced to sign before every medical treatment?

The better approach is to reduce the value and quantity of this data in the first place, so that hackers won't spend as much effort trying to steal it, and when they are successful it doesn't matter as much.

What if SSN's weren't used as ID numbers everywhere? What if it were much harder to abuse credit card numbers? What if we didn't keep allowing/REQUIRING companies to store so much personal information about us in the first place?
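To make the "reduce the value of the data" idea concrete, here is a minimal sketch (my own illustration, not anything from the article) of pseudonymization: instead of storing raw SSNs, a service could store a keyed HMAC of each one, so a leaked table can still be used as a join key internally but reveals nothing without the secret key. The `SECRET_KEY` constant and `pseudonymize` function are hypothetical names; in practice the key would live in a KMS/HSM, not in source.

```python
import hmac
import hashlib

# Hypothetical: in a real system this key would be fetched from a
# secrets manager, never hardcoded or stored next to the data.
SECRET_KEY = b"example-key-kept-outside-the-database"

def pseudonymize(ssn: str) -> str:
    """Derive a stable, non-reversible token from an SSN.

    The same input always yields the same token (so it still works as
    an identifier), but without SECRET_KEY an attacker cannot recover
    the SSN or even brute-force it offline.
    """
    return hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()

# Stable: usable as a customer identifier across tables.
token = pseudonymize("123-45-6789")
print(token == pseudonymize("123-45-6789"))  # True
print(len(token))  # 64 (hex-encoded SHA-256)
```

This doesn't eliminate the need to protect the key, but it shrinks the blast radius of a database dump, which is exactly the point being made above.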


In Japan there are real consequences for exposure of personal information. Companies can be fined x-yen per customer based on the type of PII exposed. In addition to this, the government can force a company to cease trading for a certain number of days based on severity (this has already happened at least once). I don't have any data pointing to whether breaches are any more or less common in Japan. But I can say security is taken more seriously, at least compared to other countries I've worked in, and that full encryption of PII is now the rule rather than the exception.


One of the reasons is that businesses fail to understand or identify the real risk of having sensitive data taken. Even so, most organisations I have worked with couldn't even tell me where all of their sensitive data is, or even what they classify as sensitive data.

If we take a scenario where sensitive data is taken but there is no impact on business operations (e.g. encrypted systems and backups), it is difficult for the business to be hurt by anything apart from reputational damage. And realistically, given the number of cyber attacks on the news, chances are your customers will forget about it over a short period of time.

The issue with reputational damage is that we are in this "new" world of cyber security: we know it's important, and everyone else (it seems) is being attacked, but many if not most people don't have their own security together, or any idea of how to poke a stick at it. So we're at a not-so-mature stage of cyber security where, if an organisation has its data taken, it's more forgivable than, say, rampant financial fraud, which has had decades if not centuries of global incidents impacting the pockets of billions of people. As such, when it comes to financial risk and penalties, there are more real scenarios in which people can do the wrong thing and have it cause a material impact.

It's simply a case of there not having been enough material cyber security incidents that have generally impacted people and organisations. It's got to get worse before it gets better.


Until they face consequences other than "1 year of credit monitoring", it will continue. Especially given that at this point various lawsuits mean most people have credit monitoring already.

The fact that credit agencies, etc are allowed to report incorrect information without consequence is a similar problem. There's no incentive for them to ensure accurate info. In fact they're incentivized to accept false information as it lets them offer "credit monitoring" as a service people have to pay for.


I'd argue this is absolutely the role of government institutions: To enact legally-enforceable standards of behavior that protect their citizens.

Won't happen.


That means it is uncorrelated with their short/medium term success.

If we wanted to fix that, we would either vote with wallets (we aren’t, so this doesn’t work) OR we could assign higher penalties for breaches.


For a lot of companies, it's not a real risk until it actually happens. In other words, despite thousands of years of scams, people still get scammed.


There are a lot of great comments in this thread pointing at different aspects of this issue. I think it's actually more complex than all of that, because while all of these things are true, they miss the primary cause of bad information security: people.

At every layer of nearly every company, nobody has any understanding of information security. Whereas, as a society, we have at least a basic understanding of physical security (we understand the gist of a lock, use them regularly, and guard the keys), we have basically zero understanding of information security.

I've worked, in some capacity, around information security for nearly my entire career and I have found that even people I highly respect as technologists rarely have any knowledge of information security. Most of the information security side of the industry is filled with people who are trained on compliance and regulations, not on security, and they are seen as completely synonymous. Security has to be layered in order to be effective, yet that's been taken to mean several layers of different types of brightly colored middle-boxes with pretty dashboards, rather than an actual layering of security principles and a reasonable organizational posture. While these things can be tools, they are treated as solutions rather than tools that help you create a solution.

Most SWEs know next to nothing about application security. Most web devs don't even know what OWASP is, much less have any understanding of web security. Most networking folks (even those with Network Security in their title) know little about network protocols and protocol security, instead being glorified firewall rule writers. Most security architects only know about compliance and policy, nothing about actually identifying threat vectors and constructing robust organizational postures. And most executives don't care beyond what's required to comply with the law or their contractual agreements so leave it to "experts".

Most of the "experts" aren't experts. The fraud with information security isn't just what's being perpetrated by attackers, it's also what's being perpetrated by the entire information security industry, which is mostly filled with puff pieces calling themselves "experts" who don't actually understand anything about security at all, as well as vendors who sell security products that themselves may not be secure on the backend but have privileged access within their client's data environments.

All an attacker must do is find your weakest link. What you must do to protect yourself is ensure that your weakest link is stronger than anyone else's strongest link. There's a huge disparity in the effort and investment required, and it's an issue that can't simply be resolved by throwing money at it because most of the people lining up to take your money are their own sort of attacker committing their own sort of fraud.


And then you occasionally have people who know a thing or two, but their hands are tied because organizations are dysfunctional and people have no autonomy... or worse yet, are punished for sticking their nose into things that weren't on their task list.



