L0pht’s warnings about the Internet drew notice but little action (washingtonpost.com)
177 points by weld on June 22, 2015 | 67 comments


The more I think about this story, the dumber it seems to me.

The narrative seems to be, L0pht testifies, world ignores them, chaos ensues. But Mudge's testimony coincides almost perfectly with a software security renaissance. The reality is more like: L0pht testifies, world ignores them, gigantic sea-change in security leads to 9-figure investment in securing Windows, the near eradication of SQL injection from popular applications, universal deployment of TLS in financial applications, chaos ensues anyways.

What, exactly, would be different if people had "listened" to the L0pht? Would we have S-BGP? DNSSEC?

The simple fact is: in 1998, when this happened, nobody knew how to fix any of the problems. If we had known, we'd have been doing that. There were still servers in 1998 that used deslogin.

I'm very happy that a bunch of people I like got to put their handles on nameplates and get recorded testifying to dummies in Congress. I do not, however, think it was an event with much meaning.

Later.

I think it's literally the opposite of the gist of this story. Everything is much, much better than it was in 1998. We have made surprising progress, and addressed security problems with an improbable seriousness:

1. Most new software is no longer shipped in C/C++.

2. The devastating bug class introduced with new languages (SQLI) was, for public-facing software, ratcheted back from "universally prevalent" to "rare" within a decade.

3. 3 billion Internet users all run software that downloads unsigned code in a complex, full-featured language with a dizzying variety of local C library bindings, right off web pages, executes it locally, and it's a news story when Pinkie Pie wins Pwn2Own with a working reliable Chrome clientside.

4. Anyone who wants strong crypto can have forward-secret elliptic-curve DH AEAD transports with a config file tweak on their servers.

5. Microsoft went from MSDOS levels of security to "you can live like an investment banker if you can reliably produce a couple Windows exploits a year" levels of security, again inside a decade.

6. Despite its emergence as an entire new category of computing platform, with its own new feature set, the most popular mobile OS has --- it appears --- zero effective malware outbreaks.

7. Remember Sendmail? Remember BIND? Probably only if you're a security nerd. The last working SSH vulnerability was how many years ago?

As usual: everything's amazing and nobody cares.


> the near eradication of SQL injection from popular applications

There is a great chart from an analysis on the history of software bugs showing the rise and decline of SQLi/XSS bugs from between 2005-2010:

https://imgur.com/cY2cJ8W

Memory bugs remain relatively consistent (despite PaX and others' efforts to change that).

Source: https://www.isg.rhul.ac.uk/sullivan/pubs/tr/technicalreport-...


It's painful that they combined XSS and SQLI, because progress on SQLI has been much better than progress on DOM corruption bugs.


Indeed, I'd love to see an expanded chart (as well as a more recent one). This one is focused on two categories, memory vs web vulns.


I'm not in love with White Hat as a company, but they do collect stats across their customer base, and their annual stats have shown sharp declines in SQL injection.

DOM corruption is a somewhat complex class of vulnerabilities (see lcamtuf's "Postcards from the post-XSS world" for an example of why), and it's not surprising to see we're making less progress.


> I'm not in love with White Hat as a company, but they do collect stats across their customer base, and their annual stats have shown sharp declines in SQL injection

White Hat has a set of tests they run against their customers over time. They tell their customers what problems they find. Their customers (mostly) fix the problems.

I'm not sure that translates correctly to the outside world. The fact that their stats show a decline in SQL injection vulnerabilities could simply reflect that they have more long-standing customers who have gone through a couple of report-and-patch cycles than new customers who haven't yet fixed what they've been told to fix.


I don't know: their observations square with my anecdotal observations over 10 years of appsec consulting. On my first ever web pentest, I got a 'OR''=' SQLI in the username of a login form. In 2014, when I left Matasano, that would have been absolutely shocking. SQLI has become far less common:

* Developers are taught to use parameterized queries

* Fewer big applications are built in PHP

* More projects use ORMs now than don't

* Random testers hoping for bug bounties hammer every application with SQLI scanners
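
To make the contrast in the list above concrete, here's a minimal sketch using SQLite's C API; the table, column, and function names are invented for illustration, and a real application would add error handling:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Vulnerable: the username is spliced into the SQL text, so the classic
       login-bypass payload ' OR ''=' rewrites the query itself. */
    static int login_vulnerable(sqlite3 *db, const char *user) {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT 1 FROM users WHERE username = '%s'", user);
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return 0;
        int found = (sqlite3_step(stmt) == SQLITE_ROW);
        sqlite3_finalize(stmt);
        return found;
    }

    /* Parameterized: the query shape is fixed at prepare time and the
       username is bound as data, so it can never change the query's meaning. */
    static int login_parameterized(sqlite3 *db, const char *user) {
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT 1 FROM users WHERE username = ?",
                               -1, &stmt, NULL) != SQLITE_OK) return 0;
        sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
        int found = (sqlite3_step(stmt) == SQLITE_ROW);
        sqlite3_finalize(stmt);
        return found;
    }

    int main(void) {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);
        sqlite3_exec(db, "CREATE TABLE users (username TEXT);"
                         "INSERT INTO users VALUES ('alice');", NULL, NULL, NULL);
        const char *payload = "' OR ''='";
        printf("vulnerable:    %d\n", login_vulnerable(db, payload));    /* 1: bypassed */
        printf("parameterized: %d\n", login_parameterized(db, payload)); /* 0: just data */
        sqlite3_close(db);
        return 0;
    }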


Anecdotally, I've recently come across XSS in search fields and SQL injections in login forms.

One could argue that, because of reputation and market share, Matasano gets customers who prioritize security, making such vulnerabilities less common among Matasano customers.

Your points are valid.

Even if secure development practices exist, there's a lot of software in production being run by companies and government agencies with a very poor understanding of those practices. It may also be that these entities have very good security departments, but those departments are very limited in what they can improve internally because of a lack of resources or restrictive policies.

There are a lot of companies out there that outsource a lot of work to people who don't know how to write secure code. Like White Hat (Error138): https://github.com/WhiteHatSecurity/Aviator/blob/e2d03093b94...

There's a lot of different angles to it.


As someone who's actively engaged in webappsec stuff, I concur with Thomas's observations.

XSS is far more prevalent, and I'm more likely to find PHP Object Injection via unserialize() protected by weak md5/sha1 auth (or outright naked) than I am to find SQLi in modern PHP apps.


With all due respect to you, everything would be different. I know even then it would've been a daunting thing to do but perhaps 1998 was the last year when it could've been done: tear it down and rebuild it securely.

I am sure you know this all too well, but let me remind a few people here who were not even born when some of this happened: the early years were mostly built on trust. As an example, I remember running around even in 1994 with a 10BASE2 network tester to find where the network broke. 10BASE2 was a perfect example of that more trusting age: many machines shared a single cable, every machine got all the frames, there was no encryption, and eavesdropping required zero effort. Then came Fast Ethernet and with it 100BASE-TX, and slowly switches replaced active hubs and this went away. But it often required rewiring buildings, which took a long time. I do not have hard data, and it's hard to define the start and end points, but I would say it took at least five years, if not a whole decade, to really sunset 10BASE2.

Now, of course, doing the same on a world level would have been a daunting task. However, growth in the number of websites had slowed at this point, at least compared to the years before and after; see the data at http://www.internetlivestats.com/total-number-of-websites/. Compared to the meteoric growth of 1997 (334%) and 2000 (438%), the years 1998 (116%) and 1999 (32%) were slow and peaceful.


I understand that's what people think, but what I'm saying is that in 1998, we wouldn't have known how to rebuild everything securely. We'd have ended up with slightly better C standard libraries, S-BGP, IPSEC, and DNSSEC.

Here, let me sum it up this way: I think it's possible that the L0pht testimony predates SQL injection.


Buffer overflows were certainly recognised considerably earlier than that. I remember a colleague pointing out buffer overflows in the first STL string implementations. It's not hard to go from that to SQL injection, or any other similar technique.

Certainly, my (possibly rose-coloured) memories of the time include a lot of, "OMG. How stupid can people be? Surely they know better than that!"

I guess what I'm saying is that some people definitely knew what to do about this and were trying to do it. Most people were ignoring it and saying things like, "Oh, you're just being paranoid. Why would anyone try to do something like that?" It's a bit pointless to say, "What would have happened if people had listened" because the point was that people didn't listen. That was the whole problem.


The first modern overflow exploit was Thomas Lopatic's 1995 HPUX httpd exploit. When he wrote it up, he claimed it followed the blueprint of the "microscope and tweezers" paper Spafford wrote about the Morris worm. The Morris Worm, of course, was from 1988. In the years between 1988 and 1995 there were, so far as anyone knows, a total of zero code-exec buffer overflow exploits.

I was in the room with Peiter, at a DC Summercon, as he tried to work out the exploit for Sendmail 8.6.12 that 8lgm had teased. He definitely didn't have it before 8lgm, and 8lgm didn't have it before Lopatic. Even the virus guys didn't have it.

It's weird to think that nobody put two and two together in, say, 1991 --- there certainly was motivation (that's the timing of the Sun-Devil Raids!) and so much vulnerable software.

But then, in the late 1990s, people honestly thought they could mitigate overflows by moving buffers from the stack to the heap. Reliable heap exploits were a big deal as late as 2003, when Matt Conover spoke to a packed CanSec room about the Windows Heap, in excruciating detail for over an hour. That's close to a decade between Lopatic and mainstream heap exploitation on modern heaps.

>shrug<


It is possible that I am misremembering. I remember him submitting a bug and being ignored as a crank, though ;-). It may have been some other kind of memory corruption.

It's hard to believe that buffer overrun exploits have only been around since the late '90s. I will have to believe you, as you have considerably more knowledge of the subject than me.

I'm very much wondering now about the times I used to boot trace games to crack them and if I ever used such a technique. It seems so obvious now that I may be assuming that I must have, but it's so long ago that I really can't remember. Certainly getting the loader to move your code around rather than theirs was a normal trick.


I'd say buffer overflows "went mainstream" roughly after November 1996 when Phrack 49 with "Smashing the Stack for Fun and Profit" was released. At least I'd guess that's the most influencing article on the topic.


I thought by then there was at least some awareness (old memories of people evangelizing prepared statements with placeholders), but this was the oldest reference I found, from December 1998:

http://phrack.org/issues/54/8.html


Hrm, you are right, it certainly predates XSS: http://www.thesecuritypractice.com/the_security_practice/201...


> better C standard libraries

What's the story with C standard libraries and security? How could they be better, as you see it?
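
Not speaking for the parent, but one commonly cited answer is that the traditional string routines are unbounded: strcpy, strcat, sprintf and friends are never told how big the destination is. A hedged sketch of the contrast (the buffer size and input are made up; strlcpy, the other usual suggestion, is a BSD extension rather than ISO C):

    #include <stdio.h>

    int main(void) {
        char dst[8];
        const char *src = "rather longer than eight bytes";

        /* strcpy(dst, src);  -- undefined behaviour: strcpy is never told
           that dst only holds 8 bytes, so it writes past the end. */

        /* snprintf (C99) takes the destination size and truncates safely. */
        snprintf(dst, sizeof dst, "%s", src);
        printf("%s\n", dst);   /* prints "rather " (7 chars + terminator) */
        return 0;
    }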


You see, this is a much older discussion than most folks think. Back in the 70s there was a lot of talk about how badly software was built. "If buildings were built the way software is built, the first strong wind would destroy civilization." Remedies were proposed, dogmas (er, sorry, methodologies) developed, proselytized, replaced. Languages came and went. Howling winds came (see the "voodoo gods" in Count Zero) when we connected this all to the internet. Y2K came and showed some of the underbelly.

So "tear it down and rebuild it securely" was actually many years too late by 1998.

There are a few people who know how to build software that isn't Swiss cheese. Just picking two examples I know of: for one, nobody reads the whole volume set (as noted in Coders at Work), and for the other, nobody wants to use it because it isn't under active development. The idea, even today, that a chunk of important internet software can actually be finished seems to be met with cognitive dissonance.

And there are organizations that know how to build very good software. But in today's fast-moving businesses, who wants to be in a CMM 5 organization? It doesn't sound like much fun to me, and probably not to you either.

My estimate of the last year that all of this could have been fixed is at least 30 years earlier than yours, if not more. I worked in an organization whose core software, still running today, was first written about 50 years ago.


"Tear it down" never happens for systems that are basically working, even if they have serious flaws.


It definitely happened to 10base2!


I think if the world had listened, the software engineering business process would be different. Security would be a primary concern, instead of management focusing on cost and time to delivery, ignoring technical concerns, and treating IT like a black-box cost center. Security is often a reactive concern: Target and Home Depot take it seriously AFTER they have lost their customers' data once. If the world had listened, boards of companies wouldn't brush off concerns, or wouldn't be in these situations. If the world had listened, people would be willing to spend more on the up-front cost of development to do things right the first time. If the world had listened, upgrading decaying infrastructure would be preemptive.

The people this message should have been directed at were non-technical executives and managers.


> Most new software is no longer shipped in C/C++.

What language are the applications in? What makes them more secure than C/C++?


Pretty much anything is more secure than C, because the language doesn't offer any support for the programmer. No matter how careful one is, a simple mistake can lead to a (potentially exploitable) crash.

Modern C++ can be written in a secure way, however, this requires more discipline and knowledge when compared to languages such as Java or C#, which are very forgiving with programmer errors.

As for what software is written in, that depends. For Linux, I guess that's C and C++. For Mac it's mostly Objective-C, C, and C++. For Windows, probably a mix of C#, C, and C++. On iOS it's Objective-C and C or C++, though new apps will maybe move to Swift. On Android it's mostly Java, with some C and C++. Web apps don't use C or C++, but they're not as hot as they used to be; mobile is the cool thing now.

As one can see, there's actually a lot of C and C++ in production or still being written.
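
A hedged illustration of the "simple mistake" mentioned at the top of this comment; the function and sizes are made up, and a modern compiler's stack protector will usually turn this into an immediate abort rather than a silent overwrite:

    #include <stdio.h>
    #include <string.h>

    /* Copies attacker-controlled input into a fixed-size stack buffer with no
       bounds check: the textbook C mistake. */
    static void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);              /* anything longer than 15 bytes writes
                                           past buf into adjacent stack memory */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);             /* a long argument corrupts the stack;
                                           classically that meant overwriting the
                                           return address, i.e. code execution */
        return 0;
    }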


> dummies in congress

Are the folks in congress actually stupid? Or do they practice a different profession than you? Namely: the structure and interpretation of laws and policies.

How much do you know about, say... the field of nursing?


The problem is that they deal with making laws on a variety of subjects, which necessitates understanding said subjects. They don't understand the subjects.

Say what you like about programmers, but most of them don't actually have any job-related responsibilities in the field of nursing, breaking the analogy.


It isn't meant to be an analogy but a contrast.

They are indeed required to understand the subjects, but they have no realistic way to. There are simply too many subjects. If we want to sit around saying "legislators are dumb and don't understand us", fine. It won't solve anything, but it will make us feel really nice about how smart and special we are. I like feeling smart and special too.

But if we actually want to fix anything, we have to think about the system holistically and understand what motivations and pressures a legislator is under. There are simply too many subjects for a legislator to understand all of them well. Committees help somewhat, but are flawed. Lobbyists are the current way that legislators gain information about industries, but that comes at the cost of drastically warping priorities. If anyone wants to comment with some actual insight and detail into those problems, that would be nice.


>Are the folks in congress actually stupid?

Not all. But some of them are, for lack of a better term, really fucking stupid.


Some of them actually, honestly, don't believe in climate change, or nursing.


It's more or less beyond question that some of them are either genuinely stupid, or genuinely evil. Which do you prefer? I'm not sure, myself.


l0pht is a successor to Cult of the Dead Cow, which goes back to the 1980s.[1] Their "Tao of Buffer Overflow"[2] is still a good read.

The two big problems in computer security used to be Microsoft and C. Amit Yoran said that publicly when he was Homeland Security's head of computer security. That made him unpopular, and he resigned in 2004. Yoran was then replaced by a Cisco lobbyist who kept his mouth shut. (Yoran did OK; he's now the CEO of RSA.)

[1] http://www.cultdeadcow.com/ [2] http://www.cultdeadcow.com/cDc_files/cDc-351/


Yoran wasn't making a philosophical point about Microsoft. He was responding to the news cycle: we had just suffered the "Summer of Worms", which, because of Microsoft's position in the market, involved almost exclusively Microsoft systems.

Microsoft, to their credit, responded admirably to the events: they invested a spectacular amount of money shoring up the nuts-and-bolts quality of their software, training their entire development team (one of the largest in the world) on secure coding standards, hiring researchers to revise their libraries and deprecate unsafe interfaces, and adopting hardened C/C++ runtimes.


Successor? I don't think so. I think some individuals were part of both groups. L0pht and cDc were part of the same BBS scene. Like QSD's x.25 address, the phone number for Demon Roach Underground is a number I'll never forget.


At some point those got replaced by Flash, PDF reader, and Java. Now, as others have pointed out, the main threat is nation-state actions.


Rather encouraging to see mainstream media describe hacking accurately: "...insights about how various systems worked — and in some cases could be made to do things their creators never intended. This is the essence of hacking. It is not inherently good or evil. It can be either, or in some cases a combination of both, depending on the motives of the hackers."


The Internet itself, he added, could be taken down "by any of the seven individuals seated before you" with 30 minutes of well-choreographed keystrokes.

If this wasn't exaggeration, we should study the fortunate circumstances by which this calamity has been avoided for 17 years.


Peiter was talking about BGP. In 1998, you had to be somewhat diligent to get to a vantage point from which you could inject bogus BGP, and the Venn diagram between those people and "nihilistic assholes" is not that scary. In 2015, you can still technically fuck up BGP, but probably not for very long, and not without burning a lot of assets. Why would anyone bother?

The hunting and taxidermy of corrupted BGP advertisements is basically what got the NANOG crowd out of bed every morning; it's a pretty big chunk of the job. I always felt like the alarmism over BGP was a bit tone-deaf. Certainly, nothing Peiter said came as any surprise to anyone who'd ever managed default-free peering.


Further, I recall several of the L0pht members were heavily interested in TEMPEST and van Eck phreaking at the time. Really played it up in an ominous tone.


Well, that sort of scaremongering was part of the PR aspect of the whole thing. Back then (I've been out of the scene for a decade and a half now, so I don't know if it's still as bad), the amount of money you could sell your 'company' (read: two guys in a basement) for was directly correlated to the scariness of the stories you could get into the press.


I think this happened right before @stake "acquired" L0pht, but I'm not sure how lucrative that really was for them.


What would you have to do to fuck up BGP in 2015? Is it more or less the Autonomous-System version of ARP cache poisoning?


That's a reasonable way to look at it, I think. Except imagine an ARP where there were thousands of very highly paid network engineers constantly monitoring the tables.


Well, it's somewhat exaggerated. But a bit of BGP hacking can take large areas offline for hours. Why not done more often? No lulz or money in it. Hackers want the net as a whole to stay up for the same reason as everyone else. It's specific sites that are targets for humiliation or extortion.


The problem is now more state actors than 'hackers'.


It wasn't an exaggeration. Remember this was 17 years ago, we've gotten way better at firming up stuff everywhere, even if it's not perfect.

Imagine someone using all of the advanced techniques from today (DNS cache poisoning, DNS/NTP amplification attacks, BGP hacking, SQL injection, and so on and so on) and taking them back 17 years, to when the world was naive and insecure.

Also, we've had calamities; we just got over them. Code Red and then Nimda caused huge disruptions, as did Sasser and SQL Slammer. And we've gotten used to a world where people will use DDoS to try to take down sites or services for a variety of reasons ranging from profit to spite to boredom.

No, the internet never completely fell over and was unusable for days or weeks at a time, but a lot of people have been affected and it's just sort of become background noise in our lives the way tuberculosis and smallpox used to be.


We've seen some pretty large drops in the last two decades due to BGP hijacking with entire countries going dark.

So I wouldn't say that it's been entirely avoided.


It hasn't been avoided. This has happened a few times to various pieces of the Internet.

https://en.wikipedia.org/wiki/IP_hijacking#Public_incidents


Maybe something about possible outcomes? If you're a bad guy with a super exploit, you could bring down the internet. You'll get a laugh for a few hours, but then the world will respond with enormous resources to find you and bring you to justice.

Or, alternatively, you could go after some smaller internet companies, demand extortion money, buy a nice car and treat your friends to drinks.


> If this wasn't exaggeration, we should study the fortunate circumstances by which this calamity has been avoided for 17 years.

Leaving aside the truth of their claims at the time (because it's irrelevant), your comment makes the fatal error of assuming conditions haven't changed at all in 17 years.


Assuming I'm not dead, ISTM "changed conditions" would be such a fortunate circumstance, although perhaps somewhat unspecific. How have conditions changed? Have all the hackers been eliminated? Do hackers have no interest in taking down the internet? Have previously vulnerable processes been made more secure? Have previously trusted parties been removed from positions of trust? Can you fill in the blanks for us?


It was an exaggeration.

But it is certainly not especially hard for _governments_ to take down the net in their own country, and in many cases to reduce the degree of interconnectedness with other countries so far as to effectively take down large chunks of the Internet. The problem is that we do not truly have a network; instead we have a tree structure connected to a very small number of fat pipes. As originally envisaged, the internet would be resilient in the face of the failure of one route because there would be many alternative routes, but that is not what we have today.

This is a much bigger threat than the cracking of individual machines.


Depends on what you think you're trying to protect against. Having foreign powers or foreign mafias in control of large parts of your infrastructure seems like a big threat.


The title picture is wonderful.


And the name plates.


Yeah, that's an amazing image. I thought it must be doctored... but in the intro to the recording of their testimony, the person speaking ("chairman"?) says:

"Due to the sensitivity of the work done at the L0pht they'll be using their hacker names of Mudge, Weld, Brian Oblivion, Kingpin, Space Rogue, Tan, and Stefan."

https://www.youtube.com/watch?v=VVJldn_MmMY


The picture is worth a million dollars. Especially their nameplates, and the way they've dressed but are still very clearly pure fucking hackers, just by the looks on their faces. I want this framed.


> Even today, many serious online intrusions exploit flaws in software first built in that era, such as Adobe Flash, Oracle’s Java and Microsoft’s Internet Explorer.

Isn't that like saying "Many accidents happen to models of cars first built during that era?" Just because they debuted then doesn't mean they are substantially, or even remotely the same thing. How many complete rewrites of Internet Explorer have we had since then?


    How many complete rewrites of Internet Explorer have we had since then?
None?

There's an ongoing one that was announced in January with a preview released in March. https://en.wikipedia.org/wiki/Microsoft_Edge


Spartan?

https://twitter.com/dildog/status/612795030345007104

"It feels like 1996 again."


Microsoft Edge is Project Spartan.

"Microsoft Edge, initially developed under the codename Project Spartan..." (same Wikipedia link as above)


Some software has remarkable staying power; look at the incomprehensible mess that is Mozilla's TLS implementation, NSS, or at the insane number of features required to implement the PDF spec.


I believe I saw more IE6 users in the last year than drivers with 50+ year old cars.


There are two fundamental systemic blockers to investment in information security.

The first is a problem with incentives over time. (The same thing happened with global warming, with overfishing, with deforestation, with cyber privacy rights, etc.) The problem is that the immediate incentives do not align with the long-term incentives. If the country that cuts down the most forest or burns the most oil is the one that wins the global race for power projection relative to the others, then no country will do in the short term what it must in the long term.

Alas, the short-term incentives in software and hardware development are misaligned in the same way. The security community has long preached built-in security as a crucial and fundamental engineering design goal. Yet today, as for decades before, software is not competitive if it has security built in: it raises the cost of development, it slows production, and building security awareness into every developer would require years of additional professional experience or schooling. Building in security is a competitive disadvantage.

The second problem is that everyone's threat model is different:

- Consumers want their computers to run quickly and do not want their information or identity stolen. They want to have convenient and reliable control over the privacy of their online interactions - from the public and from law enforcement.

- Industry does not want to spend more time and treasure creating fewer visible features. Its existential threat model is losing its business by being too slow at production. Corporations are also scared stiff of having a Sony-style or Target-style breach.

- Government wants to be able to peek into all communications of everyone including its citizens. It wants to be able to hack into other countries - both their industrial and their government sectors - and those of private foreign citizens. It does not want the same to be true in reverse.

It's also true that the types of systems used by the military are different from those used in industry, which are in turn different from those used by consumers. Where do you allocate investment in security? Consumer internet browsers? Virtualization for enterprises? Network intrusion detection for corporate LANs? Access control for government systems? Which do you prioritize? (Granted, it's true that some technologies, such as web browsers, are shared between these classes.)

What's happening right now is that the discussion about threat models is being negotiated (though not in those conscious terms). Governments make their case about national security - how they need backdoors - and how they would like computer security to work. Security professionals - many of them private citizens - have separate threat models and can't agree with government. Individual citizens want privacy - and can't agree with government or industry. Industry wants to get customer and competitor data but also doesn't want to leak its own.

To the degree that the threat models are compatible, some level of real investment can be made (today there do happen to be large scale efforts to mitigate cyber security risks - particularly threat intelligence sharing programs).

Yet fundamental contradictions in threat models will keep the direction of security in limbo, and worse, if one threat model 'wins' it will come at the expense of the others. Governments' goals, even in so-called 'free' countries, are misaligned with their citizens' on threat model. Government goals are themselves further internally contradictory, as governments would like computer networks to be both secure and insecure (giving birth to phraseology such as "NOBUS").

Today not only are we not able to secure the internet and computer systems, we still don't really know what a secure internet would mean.


long hair: hacker credibility +1

beard: hacker credibility +1

nickname/handle: hacker credibility +1

glasses: hacker credibility +1

suit: hacker credibility -1


suit: hacker credibility -1

And here I thought we were above judging people by how they choose to dress.


> suit: hacker credibility -1

Social engineering.


suit: hacker credibility -1

social engineering +1



