Hacker News
Attack on DNS root servers (root-servers.org)
182 points by sajal83 on Dec 8, 2015 | hide | past | favorite | 90 comments


I suspect that this might have been a botnet showing off to its potential clients. That may also explain why the domain names queried were withheld (so as not to give the botnet free advertising).


Well I'm pretty impressed.

    observed traffic volume due to this event was up to approximately 5
    million queries per second, per DNS root name server letter receiving
    the traffic.
i.e. over 50 million queries per second in aggregate, from source addresses spread evenly across the IPv4 space


It was presumably done over UDP where it's trivial to fake source IP. What's the minimum size of a valid UDP-based DNS request? Let's guesstimate 25 bytes. Then 50M/s * 25 bytes = 1.25 GB/s. Or 10 Gbit/s.

Is that really so impressive these days?


It's a 12-byte header + the query section, which is the qname (a sequence of labels) + 4 bytes for type and class. Each label is a length octet + the ASCII-encoded characters of the label. A query for "us" would be 12 + 4 (qname: 02 75 73 00) + 4 (type + class) = 20 bytes, plus the IP header (20 bytes) + the UDP header (8 bytes), so 48 bytes.

https://www.ietf.org/rfc/rfc768.txt

https://www.ietf.org/rfc/rfc791.txt

https://www.ietf.org/rfc/rfc1035.txt
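The arithmetic above is easy to check with a short Python sketch (the helper name is mine, not from the RFCs; it builds a minimal RFC 1035 query message):

```python
import struct

def min_dns_query(qname: str, qtype: int = 1, qclass: int = 1) -> bytes:
    """Build a minimal DNS query message (RFC 1035, section 4)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: qname as length-prefixed labels, terminated by a zero octet
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.split(".") if label
    ) + b"\x00"
    question += struct.pack(">HH", qtype, qclass)
    return header + question

payload = min_dns_query("us")    # 12-byte header + 4-byte qname + 4 bytes type/class
on_wire = 20 + 8 + len(payload)  # + IPv4 header + UDP header
print(len(payload), on_wire)     # 20 48
```

So 48 bytes on the wire, matching the hand calculation.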


Thanks :). So ~20 Gbit/s.

(Also: I want your username.)


You're definitely asking the right question, and IMO the answer is "sort of." It's not that impressive in the grand scheme of DDoS, given that major attacks will sustain 100 Gbit/s of traffic[1], sometimes for days on end. But 20 Gbit/s would likely be enough to take down a small, unprepared business.

[1]: http://www.techworld.com/news/security/worlds-largest-ddos-a...


20 Gbps definitely isn't very impressive given the current landscape, but it can probably take down many medium-sized businesses as well, if they have no DDoS mitigations. Even a large one, if they somehow had no mitigations or no decent security team.


20Gbps is actually still impressive if it is a set of valid requests like the parent seems to imply when doing the calculations. 20Gbps via a reflection technique is easy, but also relatively easy to filter and just requires a large pipe. 20Gbps of traffic doing legitimate requests that you can't immediately and trivially filter/rule out is still an impressive feat right now.


It's high, but DDoSs over 500 Gbps have been observed in the past 2 years. So relatively speaking, it's not that extraordinary.


Is it really that trivial to fake a source IP? I'd think pretty much any ISP wouldn't let such packets through (or am I missing something?), and it also makes you easier to trace (well, if you're not careful, that is).


It is; way too many ISPs still don't filter packets with obviously spoofed source addresses.

That said, even if the attackers in this event didn't spoof the IP address, they would almost certainly have still had a very wide distribution of addresses.

> DNS root name servers that use IP anycast observed this traffic at a significant number of anycast sites.

DNS root name servers are BGP-anycasted: barring maintenance reroutes, any packets you send get routed to the topologically nearest instance. So, since the traffic managed to hit multiple geographically dispersed anycast sites, we can infer that the attackers were able to generate traffic from sources around the world.


If this was a botnet, any one ISP is only seeing a tiny fraction of the load.

Even if it were from a single source, it also isn't that hard to find an ISP that doesn't care. (They cost slightly more, but if you're a bad actor, presumably it is worth it.)

Edit:

"I think pretty much any ISP wouldn't let such packets through"

If you google "BCP38", you will find well over a decade of network operators discussing this exact topic and the reasons why ISPs (and other networks) don't filter, not to mention all the kvetching and meta-kvetching that accompanies any technical discussion that has lasted so long.


After your company has been the target of ~50 Gbps NTP reflection attacks that nearly destroyed it, it's hard to be impartial in these discussions.


50gbps? Who did you piss off, damn.


Yes, it is (especially UDP). Many networks still don't filter properly.

"The solution to this problem, described in RFC2827, which was written some 13 years ago by Paul Ferguson and Daniel Senie, is to block IP packets entering the internet which have source IP addresses which are forged..."

http://tools.ietf.org/html/rfc2827.html

http://www.bcp38.info/index.php/Main_Page


Reality check: ~30% of active ASes worldwide don't drop spoofed packets originating from their networks.

http://spoofer.cmand.org/summary.php


It is a bit like polluting the ocean by dumping waste. It costs less for the polluters as long as they aren't caught.

For a typical AS's routers, is it easy or cheap to block spoofed packets?

I wonder if that test software/website can (or should) out and shame the offending ASes as "Major Internet Polluters" and publish a monthly report naming that 30%.


Certainly both our upstream providers allow through alien source address packets. This has caused me some head scratching on occasion, as in "how the #^$&#^ is this even working??". I suspect the reason is that it is non-trivial to collect and manage the list of kosher subnets once you exceed a certain size.


On residential ISPs it's generally not so easy, but I think pretty much every dedicated server provider I've used has allowed spoofed source IPs.


"Source Address Validation and BCP-38." ISPs should validate the source address of UDP traffic from their end customers. This would end most UDP based volumetric DDoS attacks.
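In code terms, that validation is just a prefix-membership check at the network edge. A minimal sketch of the idea (the port names and prefixes here are made up for illustration, not any real config):

```python
import ipaddress

# Hypothetical per-customer prefix table for an edge router doing
# BCP-38-style source address validation: a packet arriving on a given
# port must carry a source address inside a prefix delegated to that
# customer, otherwise it is dropped.
CUSTOMER_PREFIXES = {
    "port1": [ipaddress.ip_network("203.0.113.0/24")],
    "port2": [ipaddress.ip_network("198.51.100.0/25")],
}

def source_is_valid(ingress_port: str, src_ip: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES.get(ingress_port, []))

print(source_is_valid("port1", "203.0.113.7"))  # True  (legitimate source)
print(source_is_valid("port1", "8.8.8.8"))      # False (spoofed -> drop)
```

The hard part in practice isn't the check itself but maintaining an accurate prefix table, which is exactly the difficulty a commenter below alludes to.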


It would help reflection attacks that direct e.g. DNS responses to the target. It wouldn't help when the DNS servers themselves are the target.


No, it would help: instead of giving up on tracing the attacks because the source addresses were spoofed, you would know who was spamming packets and could get them blackholed.


The OP says that IP addresses were "randomly distributed" over IPv4 space. That's very unlikely for non-spoofed botnet addresses.


The botnet would still be able to perform an attack of the same size. And with many validation schemes it would still be able to randomize the last octet or two, avoiding direct identification of compromised computers.


Yes, but for a volumetric attack, it doesn't matter if you know the source IPs. It just fills your pipes until legitimate traffic can't get through. (This wasn't a volumetric attack though, which is why it would have helped.)


But most volumetric attacks are reflection attacks, which would be impossible if BCP-38 were implemented everywhere. Direct non-reflection volumetric attacks of significant magnitude (say above 40Gbps) are almost non-existent.


It would make filtering out malicious requests easier, which would improve performance for people whose machines weren't botnetted.


In this case, yes, and it would also reduce load on the servers quite a bit. But in a volumetric attack, your pipe is full already. Any filtering you apply after that can only weed out bad traffic; you can't fit any more good traffic in there.


The beauty of DNS: no one was affected or even noticed the problem. Resolvers just tried another root server if they didn't get a response from one.


Taking DNS down would be a disaster. It's also practically impossible.


So what were the domain names queried?


Related?

Day 2: UK research network Janet still being slapped by DDoS attack DNS services appear to be targeted, switching may work

http://www.theregister.co.uk/2015/12/08/uk_research_network_...


What made this unique now? Was it simply a high load?


Typically what you see are "amplification attacks". That's where Alice wants to DoS Bob, so she spoofs a request to Charlie that appears to come from Bob. This results in a message from Charlie to Bob. The message from A->C is crafted such that it results in a much larger return message from C->B (hence "amplification"). That lets you create an attack that produces a multiple of the bandwidth that you actually control. Then you have a bunch of machines spam the message.

In that case, you see many messages from the same source address (meaning the target under attack, i.e. Bob), but the data requested may vary. In this case, the source addresses were uncorrelated, but they all wanted the exact same address, so basically the opposite.

I can't say that I know what it is, but when you see massive spikes like that it's usually a botnet of some kind (whether it's infected machines, injected connections, or whatever method). Perhaps a bunch of bots resolving their next C&C master?
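That "multiple of the bandwidth you control" is just a ratio. A toy calculation (the request/response sizes below are illustrative assumptions, not measurements from this event):

```python
# Back-of-the-envelope bandwidth amplification for a reflection attack:
# the attacker spends request_bytes per spoofed-source packet, and the
# reflector sends response_bytes to the victim.
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    return response_bytes / request_bytes

# Illustrative: a ~64-byte "ANY" DNS query can elicit a ~3000-byte
# DNSSEC-laden response, so the attacker's bandwidth is multiplied ~47x.
print(amplification_factor(64, 3000))  # 46.875
```

Note that in this incident there was no amplification at all; the root servers themselves were the target, and the attacker paid full price per query.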


Do you have any links to resources on botnets? It's an interesting topic that I know very little about (e.g. how they work, how they come into existence, how they're controlled/monitored, etc.). It sounds like you know a decent amount about them.



Awesome, thanks!


Towards the bottom of the extremely short article/message:

  3.  Analysis

   This event was notable for the fact that source addresses were widely
   and evenly distributed, while the query name was not.  This incident,
   therefore, is different from typical DNS amplification attacks
   whereby DNS name servers (including the DNS root name servers) have
   been used as reflection points to overwhelm some third party.

   The DNS root name server system functioned as designed, demonstrating
   overall robustness in the face of large-scale traffic floods observed
   at numerous DNS root name servers.

   Due to the fact that IP source addresses can be easily spoofed, and
   because event traffic landed at large numbers of anycast sites, it is
   unrealistic to trace the incident traffic back to its source.

   Source Address Validation and BCP-38 should be used wherever possible
   to reduce the ability to abuse networks to transmit spoofed source
   packets.


> was notable for [random source addresses]

I thought that was a common thing to do when the attack isn't reflected. Flooding the pipe is the old technique, not a new one.

Thanks for clarifying though!


   Most, but not all, DNS root name server letters received this query load.  
Why would you want to take down every DNS server though? That's not a very effective tactic due to caching, and what's the motive?


Possibly testing or demonstrating a botnet. I doubt the goal was to actually bring down the DNS root servers. That's been tried before and it's never even made a blip - the system is massively over-provisioned, for good reason.


I wonder, what kind of machines are those? How many of those are currently operating, and where?


When things first started, a root server might have actually been something sitting under Jon Postel's desk, or in the back room of a University.

But these days, root "servers" are geographically load-balanced clusters of machines. Think of something like the Akamai CDN, but instead of http(s), this CDN serves up mostly UDP/53 and TCP/53 traffic. The IP for a root server is anycast, and the root server operators balance traffic between sites by adjusting BGP.

Most have built their own custom UDP load balancers at each site, and behind those load balancers are several hundred physical servers to respond to the incoming queries. Zone updates are pushed to each site from the back office, so each physical server should have a complete copy of the "." zone.

A root server operator, operating one of the world's 13 largest public UDP systems, is often under attack, either directly or (as mentioned above) used as part of a reflector attack against someone else. Generally speaking, these systems are over-provisioned enough that a direct attack has minimal impact, but reflector attacks are the main concern. Either way, when an attack starts, the operator has to find something unique in the query itself (as eliminating source addresses in a DDoS is nigh impossible), create a filtering rule, and push that to all of their load balancers.

Disclaimer: I used to work for a root server org.
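That "find something unique in the query" step amounts to parsing the qname out of each incoming packet and dropping matches. A rough sketch of the idea (the qname here is a placeholder; the real one was withheld, and real filters run in the load balancer's data plane, not Python):

```python
def parse_qname(dns_payload: bytes) -> str:
    """Extract the query name from a raw DNS message (queries use no compression)."""
    labels, i = [], 12  # question section starts after the 12-byte header
    while i < len(dns_payload) and dns_payload[i] != 0:
        n = dns_payload[i]
        labels.append(dns_payload[i + 1:i + 1 + n].decode("ascii", "replace"))
        i += 1 + n
    return ".".join(labels)

ATTACK_QNAME = "evil.example"  # placeholder; the actual attack qname was withheld

def should_drop(dns_payload: bytes) -> bool:
    return parse_qname(dns_payload) == ATTACK_QNAME

# A query for "evil.example": 12-byte header, then 04 'evil' 07 'example' 00,
# then type/class (A, IN).
query = bytes(12) + b"\x04evil\x07example\x00\x00\x01\x00\x01"
print(should_drop(query))  # True
```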


http://www.root-servers.org/ has the answers you seek. There are currently 13 root server letters operated by 12 different entities (Verisign, NASA, RIPE NCC, ICANN, etc.), and most letters have multiple sites (physical instances).


Thanks.


Every (almost every?) letter is operated by a different organization, so I'd expect the setups to be different.


The root servers get large traffic spikes quite routinely, this event was simply an order of magnitude or more larger than any previous events (AFAIK). Also, it actually managed to DoS a few instances, by way of saturating their throughput--though as the notice mentions, the overall DNS wasn't significantly affected aside from a few queries having to be retried.


You can clearly see the spike in this graph[0] (scale it down to Dec to see more detail). These graphs[1] show that it was limited to IPv4.

[0] http://a.root-servers.org/metrics.html

[1] http://k.root-servers.org/statistics/ROOT/monthly/


Nice!

Looking at IPv4 Sources, the spike is mind boggling.


Why is root-servers.org not https?


You transmit no secret information to it, and it none to you?


But how do I know the information it sends to me hasn't been MITM'd if it isn't SSL'd?


But if you don't have DNS how do you download the revocation list?


How is a system that can at least validate the origin, even without non-repudiation, worse than a system with absolutely no security at all?


Sure. Fair. : )


And you don't mind if the traffic is intercepted and modified?


.....Why would it be?


to protect a visitor's privacy


What privacy? Your IP and the hostname of the website you're connecting to aren't encrypted over HTTPS anyway. The content isn't sensitive and there are no cookies on the site. The only remotely personal data would be your Accept-Language header (which could be guessed from your IP) and user agent string (which you can just spoof anyway if you're really that paranoid).

The MITM argument has more merit, but even there I can't see it making much difference here given the site's niche appeal. Plus, given its tech-savvy audience, most visitors will be running a reasonably hardened system (latest patches, et al.) anyway. Not the best argument against running TLS, I'll admit, but still a point worth raising, since the only argument for running HTTPS here is to prevent malware injection.

Obviously in an ideal world everything would be served under TLS. But let's be pragmatic about which sites we bully into switching.


> Your IP and the hostname of the website you're connecting to aren't encrypted over HTTPS anyway.

AFAIK website hostname is visible when using SNI.


Indeed, but SNI is more than 10 years old now so very well supported. I'd appreciate someone else correcting me if I'm wrong here, but I believe SNI is also enabled by default (where it's supported).

In any case, even without the hostname header, it doesn't take much research to find a short list of possible candidates (eg https://www.virustotal.com/en/ip-address/193.0.6.136/informa...).


Thanks for going deep into this. Let me add, though, that it's not about an ideal world or bullying; HTTPS should be the default. Let's Encrypt has already managed that for us, and pretty soon it will be the default.


Or to ensure that the content is not tampered.


Nobody would tamper with this content.


China injected javascript malware into non-https traffic that joined the users into a botnet that launched a DDOS attack on Github. The "Great Cannon".


The "Great Cannon" only targeted foreign traffic destined for Chinese websites. This one is not.


Yes but they just as easily could have targeted traffic to this site as well.

Also a number of ISPs and wifi hotspots inject ads, and I know Verizon injected a tracking header based on your mobile plan.


So Verizon is going to inject some 0day malware into this page to, what, serve you more ads?

If you're concerned about tracking cookies or headers, get a plug-in to stop it. Every website on the planet should not need to use https just because Verizon wants to make money off targeted ads.

And nobody is attacking this page to hack individuals. Of course you can. Security isn't about preventing every single possible attack from every possible angle. It's about making attacks more difficult when one is plausible or likely. Nobody will attack you through this particular website. So https is not needed to prevent a targeted attack.


That's not actually true. There have been several documented examples now of people injecting stuff into HTTP requests when they pass by (ISPs injecting notifications, ads, people running proxies injecting malicious javascript, etc).


It's completely true. This content would never be targeted for MITM.

It's a dashboard for displaying the global locations of DNS root servers and links to their authoritative organizations. Not only is this an incredibly niche site, all DNS root server information is replicated around the world by multiple organizations. Nobody uses this site to maintain their DNS trusts, it probably gets incredibly low traffic, and going out of your way to MITM it would be a lot of work for no payoff. This is a terrible target. Nobody would bother.


It's not strictly about targeted attacks. There are people who modify unencrypted content that passes through their system regardless of what content it is. There have been several presentations on this topic, but I'll link the slides for one [0]. Here's an article about an ISP injecting ads in case you don't think this sort of thing happens in real systems [1].

[0] https://www.defcon.org/images/defcon-17/dc-17-presentations/...

[1] http://arstechnica.com/tech-policy/2013/04/how-a-banner-ad-f...


Is there significance to NTP requests in relationship to DDOS?


Yes; NTP is an amplification vector, which means you can spoof a small NTP request and generate a large NTP response aimed at your target.


Yes. Both NTP and DNS operate over UDP. UDP is a connectionless protocol, which means no connection handshake needs to happen in order for data to be delivered to a target IP address. What generally happens is: one attacker sends many requests to many DNS and/or NTP servers whilst spoofing their IP address to make it appear as if their victim is sending all of these requests. No connection handshake happens to verify that the victim is actually making these requests. So every server the attacker sent a request to will send the much larger answer back to the victim. If DNS were to operate only over TCP (which uses a connection handshake), the internet would be much slower, because connection handshakes take time.

However, this isn't what happened on Monday. It seems like one attacker with a lot of systems used those systems to query someone's domain name whilst spoofing many IP addresses at once. This in turn overwhelmed many of the root servers, and possibly several authoritative DNS servers in the process. Sounds like a botnet owner showing off how much power they have.
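To see how little a UDP client needs to do (no handshake, nothing tying the packet to a real conversation), here's a minimal SNTP client request per RFC 4330; a server will happily answer whatever source address those 48 bytes claim to come from:

```python
import struct

# A minimal SNTP (RFC 4330) client request is 48 bytes: the first byte
# packs LI=0, VN=4, Mode=3 (client); the remaining 47 bytes may be zero.
def sntp_request() -> bytes:
    return struct.pack("!B", (0 << 6) | (4 << 3) | 3) + bytes(47)

pkt = sntp_request()
print(len(pkt), hex(pkt[0]))  # 48 0x23
```

Since the server learns the "client" only from the packet's IP source field, a spoofed source means the response lands on the victim, which is the entire basis of UDP reflection.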


They don't mention it either way, but do we know if the attack has happened again since 1 December?


China testing something new? Or maybe some scriptkiddie testing their new botnet?


Why China of all of the 193 countries? What about Russia? US? Brazil? England?


We've always been at war with Eastasia.


Because it's always China, maybe?


In the limited logs I've seen so far, I've seen IPs from both China and Russia.


Donald Trump's failed attempt to shut down the Internet.


I bet the observed "random" source addresses are open recursive DNS servers. For this kind of attack they provide essentially free traffic-washing for whatever actual traffic-generation mechanism the attackers have.


Nope.

The open recursive DNS servers are real DNS servers, with caching and backoff logic. If, say, there are 94k [1] open DNS resolvers in the wild, each will ask you one DNS question for example.com, cache the answer, and that's it.

The big volume of "fixed domain" queries indicates plain source-address spoofing, exactly the thing BCP-38 is meant to prevent.

[1] http://public-dns.tk/


The trick is to request random top-level domains, where each request will necessarily trigger a lookup to the root.

Further, recent research has shown the number of open DNS resolvers to be in the range of 15-30 million[1].

Since the article describes a single domain name was used in the attack however, that's not what happened here.

[1] http://icir.net/mallman/papers/dns-probe-meth-imc13.pdf


Unless the attacker controlled the domain TTL, maybe? But good point -- I was thinking of a similar attack using random domains.


Open recursors asking for random subdomains can generate a bigger attack volume, but still, they are smart and will back off if the server is overwhelmed.

Even if you assume 100 qps from each of the 94k recursors, that's only 9.4M qps. And most of the recursors will notice the lack of answers and slow down or stop the queries. In practice, random-subdomain attacks rarely generate more than a million qps (YMMV; there are exceptions, technical nitpicks, etc.).


what was the query string?


Rooftops?


Root Server Operators = rootops


Ah, I see. Thanks :)



