
I would like to see stats from Tier 1/Tier 2/IX operators for that. Krebs claims it's 665 Gbit/s: https://twitter.com/briankrebs/status/778404352285405188 An attack of that size must be visible in many places, yet not a single major ISP has reported it on a mailing list. Previous, smaller attacks were reported as slowing down some regional ISPs. Perhaps ISPs have gotten better.


I've seen the graphs from Prolexic; the claims are legit.


In a world of 10 and 40 gigabit NICs, why is 665 Gbit/s considered big?


Because you have to classify and filter out the spam packets before they reach the intended host and content, which is really hard to do at line rate, especially if you also plan to serve useful traffic.


I really like this comment. I think you're saying that moving data is easy, while computing on the data hasn't kept up?

Then again, I could be reading into this too much, and the computing part has always been a bottleneck at backbone level.


At line rate, with millions of small packets coming in every second, even counting the number of packets per flow, a prerequisite for some of the simplest mitigation strategies, is really hard (often requires either expensive hardware like ternary content-addressable memory, or exotic data structures like counter braids that are very expensive to decode). A lot of the time people try to get around this by probabilistic sampling, etc. Similarly, MICA's ability to handle key value lookups at line rate on commodity hardware was considered a big success in the database community: https://www.usenix.org/node/179748. Hopefully that should be a good indicator of the challenges inherent in performing any sort of nontrivial computation at line rate, even really, really simple computation.

(This is why most DDOS mitigation strategies involve getting peers to load balance their traffic when it's still manageable, rather than buying bigger and bigger pipes; it's also why ultimately a large part of the responsibility for handling DDOS attacks rests on the shoulders of ISPs).
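To make the flow-counting difficulty concrete, a count-min sketch is one of the standard probabilistic structures people reach for when exact per-flow counters don't fit in fast memory: fixed memory, one cheap update per packet, and estimates that only ever overcount. This is a minimal illustrative sketch (the flow keys, width, and depth here are made up, not tuned for any real line rate):

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow packet counter: fixed memory, one update per packet."""

    def __init__(self, width=2048, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, flow_key):
        # Derive `depth` independent bucket indices from the flow key.
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{flow_key}".encode(), digest_size=8)
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def add(self, flow_key):
        for row, col in self._buckets(flow_key):
            self.table[row][col] += 1

    def estimate(self, flow_key):
        # Taking the minimum across rows bounds the error from hash collisions;
        # the estimate can only overcount, never undercount.
        return min(self.table[row][col] for row, col in self._buckets(flow_key))

sketch = CountMinSketch()
for _ in range(1000):
    sketch.add("10.0.0.1:443->10.0.0.2:51234")   # heavy flow
sketch.add("10.0.0.3:53->10.0.0.4:40000")        # light flow
print(sketch.estimate("10.0.0.1:443->10.0.0.2:51234"))  # at least 1000
```

Even this "cheap" structure costs several hashes and memory touches per packet, which is exactly the budget that gets tight at line rate.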


Line speed is pretty fast, and there's a lot moving through. Routers are a bit like GNU grep -- the way to be fast is to not touch most of the data.

The more you have to touch, and the deeper you have to touch it, the more expensive it gets.

The real trick is figuring out what's good, what's evil, and downrating the latter whilst allowing the good. Given peering relations, BGP routing, and the sorry state of much of those protocols, tracing problems to their source, quickly, and getting a useful response, is difficult.


It seems to me that he's saying in order to move data at volume, you have to compute on it.

Computation is tricky. Per-bit, if you can handle the network input, you're probably able to fire packets up to the OS layer.

But when you need to run stats on the incoming data, e.g. an ML classifier of "bad/not bad" or "stop/passthrough", you might be O(n^2) or worse. Moore's can't hang.


Interesting. What ML algos are used to classify packets against DDoS?


None, really. It's mostly filters against common types of attacks at L3/L4, then OODA. Variations from normal get looked at and custom filters applied as appropriate.

And of course, there's lots of NOC to NOC back channel comms around this stuff constantly to stay relatively on top of things.


Not sure if this applies to DDoS, but a baseline ML method for security is outlier detection. For example, (1) you get a dataset that is mostly "good" data, with some "bad" data (2) you cluster it using something fast like k-means (3) data points are labelled as outliers if they fall further than some threshold from a cluster center.
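A toy version of steps (1)-(3), hand-rolled so the mechanics are visible. The "traffic" points, the fixed initial centers, and the distance threshold are all illustrative assumptions, not anything a real mitigation system uses:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, centers, iters=20):
    # Lloyd's algorithm, with fixed initial centers for determinism.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

def outliers(points, centers, radius):
    # Flag any point farther than `radius` from every cluster center.
    return [p for p in points if min(dist2(p, c) for c in centers) > radius ** 2]

rng = random.Random(42)
jitter = lambda: rng.random() * 0.2
traffic = [(1 + jitter(), 1 + jitter()) for _ in range(50)] \
        + [(5 + jitter(), 5 + jitter()) for _ in range(50)] \
        + [(20.0, 20.0)]                       # one "bad" point
centers = kmeans(traffic, [traffic[0], traffic[50]])
print(outliers(traffic, centers, radius=3.0))  # → [(20.0, 20.0)]
```

The catch, per the line-rate discussion above, is that even this has to be fed from sampled or sketched counters rather than every packet.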


I think that's the secret sauce for these guys. I'd be surprised if you could find out a lot about current techniques without signing an NDA and leaving your mobile phone in a box at security.


I still don't get it. Why not just absorb it? Serving static HTML and related data is fast. A single rack in a datacenter could easily serve this without breaking a sweat.

Sure, this costs Akamai money they don't want to spend, but is such an attack noteworthy? Eh.


If the tiny "GET" requests are flooding in at 1 Tbps, then the responses are going to be orders of magnitude larger, maybe somewhere around 500 Tbps.


You'd still be filling the pipe. And possibly filling many small(er) pipes at interconnection points. The last few meters of pipe into a rack aren't where the issues are.


I'm curious, what sort of datacenter racks do you run that can serve 665 Gbps of traffic? For instance, Google's ToR switches (in its own datacenters) as of Jupiter (2012) were 16x40G, which is just 640 Gbps. Obviously, all of Google can serve a lot more than that, and you can get absurd stuff like 640-port Infiniband etc., but you seem to have pretty unrealistic ideas about network capacity. And, as another comment pointed out, that's just the input for this attack... the output would be a lot larger.


The limitation isn't the NIC, it's the processor (and possibly storage). The 10 and 40 Gigabit NICs exist but processing them on a conventional PC is hard.

It is, just about, possible to perform actions on every packet in a 10 Gb stream on an x86 machine. You have to use a userspace stack, handle packets across multiple cores, and be VERY careful with what you are doing so you avoid cache misses. At 10 Gb/s you're talking only a few hundred clock cycles per packet; anything that doesn't work as planned causes a massive backlog.

Now try serving (dynamic) HTTP to that.
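The "few hundred clock cycles per packet" figure falls out of simple arithmetic. This assumes minimum-size Ethernet frames and a single 3 GHz core, both illustrative numbers rather than anything from the thread:

```python
# Back-of-envelope per-packet cycle budget for 10GbE at minimum frame size.
link_bps = 10e9
wire_bits = (64 + 8 + 12) * 8    # 64B min frame + 8B preamble + 12B inter-frame gap
pps = link_bps / wire_bits       # ~14.88 million packets per second
cycles_per_packet = 3e9 / pps    # budget on an assumed 3 GHz core
print(round(pps / 1e6, 2), round(cycles_per_packet))  # → 14.88 202
```

For scale, a single cache miss to DRAM can eat a large fraction of that 200-cycle budget, which is why the parent stresses avoiding misses.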


You need 665 gigabits of idle capacity to... wherever the attacker is coming from. If the attacker can send five gigabits from some town in Vermont, and your provider(s) can't get five gigabits plus normal traffic from that town in Vermont to your scrubbers, then legitimate users within those networks will be denied service, even if users on other networks are just fine.

Essentially, it comes down to the fact that getting packets from point A to point B requires a lot of cooperation, and cooperation is difficult. Yes, yes, if you bought me the fiber, I could build you a 665 gigabit network on the kind of money that a nerd could come up with (not counting the fiber), but interconnecting that network with other people's networks? Yeah, that's gonna cost you. Settlement-free peering is a thing, but it is really difficult to set up and maintain those relationships.


Because 665 is a bigger number than 10 and 40

:-)


Theoretically, if it was a well-distributed botnet, then it could be a few megabits from a TON of different sources. And since Akamai has its own ASN, it could just be jamming up Akamai and not other ASes.


Akamai has ~140,000 servers around the world. The attack was probably spread across many locations; that's why you don't see any reports on mailing lists.



