Thursday, May 14, 2015 - 18:37

Open resolvers and rogue networks. Scylla & Charybdis in the network

Open resolvers are generally considered a plague. Pretty much what open relays are for mail. And there is a good reason why certain people eyeball them with suspicion.

What is an open resolver?

Open resolvers - as the name suggests - are DNS resolvers that can be [recursively] queried by anyone. Some resolvers are deliberately open to provide a service to otherwise filtered users. Most are open because they are badly configured. With open mail relays spammers can abuse the mail server to relay tons of spam on their behalf, and that problem is obvious to anyone: if you don't authenticate users before they can send mail, everyone can send mail through you - and a lot of spammers will do it once they have figured it out.
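
To make "queried by anyone" concrete, here is a minimal sketch of the kind of probe used to spot open resolvers, written with the dnspython library. The resolver address 192.0.2.53 is just a documentation placeholder - point it at a host you are allowed to test.

    import dns.exception
    import dns.flags
    import dns.message
    import dns.query

    # Build an ordinary query; dnspython sets the RD (recursion desired) flag by default.
    query = dns.message.make_query("example.com", "A")
    try:
        response = dns.query.udp(query, "192.0.2.53", timeout=3)
        if response.flags & dns.flags.RA and response.answer:
            print("answered recursively for a stranger -> open resolver")
        else:
            print("recursion not offered or query refused")
    except dns.exception.Timeout:
        print("no answer - not open, or filtered")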

With open resolvers the problem isn't quite as obvious, and it isn't just about being open. It's a more complex situation that makes them significantly more dangerous than open mail relays.

What is the problem with open resolvers?

They are an excellent tool to amplify DDoS attacks that rely on congesting the target. A query is rather small while a reply can be extensive; with DNS reflection attacks you easily get an amplification factor of 40 to 50. So while you hit the DNS resolver with 100 Mbit/s, it levers that into over 4 Gbit/s. Now some folks might think: wait a second - if I send 100 Mbit/s upstream, I'm effectively getting 4 Gbit/s back downstream, so I'd DDoS myself. With TCP you absolutely would, and if you queried honestly the same would be true for UDP. But since UDP does not establish a connection, you can spoof your source IP. That's why it's called a reflection attack: you send the 100 Mbit/s, but the one getting the 4+ Gbit/s is not you. It's whoever is parked at the spoofed source IP.
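
The factor is easy to check for yourself: compare the size of the query on the wire with the size of the reply. A rough sketch, again with dnspython; the name and the resolver address are placeholders, and an ANY query with a large EDNS0 buffer is just one convenient way to provoke a big response.

    import dns.message
    import dns.query

    # EDNS0 with a 4096-byte buffer allows the resolver to send a large UDP reply.
    query = dns.message.make_query("example.com", "ANY", use_edns=0, payload=4096)
    response = dns.query.udp(query, "192.0.2.53", timeout=3)

    q_bytes = len(query.to_wire())
    r_bytes = len(response.to_wire())
    print(f"query: {q_bytes} bytes, reply: {r_bytes} bytes, "
          f"amplification: {r_bytes / q_bytes:.1f}x")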

The more attacking sources and the more open resolvers you have at your disposal, the more heat you can put on the target. Considering the number of open resolvers out there, this can be cranked up to hundreds of Gbit/s or even Tbit/s. If you cannot evade the attack or push it upstream, you need that amount of spare capacity - and having hundreds or thousands of Gbit/s floating around is not very likely unless you are a provider yourself or a really big network. There is one thing that mitigates these attacks in general: networks that are badly configured in this respect are not very common in the places where lots of captured clients can be found.
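
The arithmetic behind those numbers is trivial. The figures below are purely illustrative assumptions, not measurements:

    # Back-of-the-envelope: a modest number of spoofing-capable sources is enough.
    sources = 250            # machines able to send spoofed queries (assumed)
    upstream_mbit = 100      # query traffic each source sends (assumed)
    amplification = 40       # conservative reflection factor

    attack_gbit = sources * upstream_mbit * amplification / 1000
    print(f"{attack_gbit:.0f} Gbit/s arriving at the target")  # 1000 Gbit/s = 1 Tbit/s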

Rogue networks...

To utilize this attack you need to hide the real source of the request and replace it with your target.

The network above is responsible for a subset of IP addresses in the 12.x range. A client from that network cannot have a source IP in the claimed 98.x range, and such packets cannot have originated outside the network. These packets must not be routed; they have to be dropped at the network's edge routers - preferably earlier, but the edge is where they absolutely have to be dropped. Doing that is fairly easy these days. It's a feature that just has to be switched on [or not switched off] on most somewhat modern devices. Without this royal first-class fuck-up the attack would not be possible, because spoofed sources would never reach the resolver above; they would be dropped at the edges of the attackers' networks. If there is one single idiot in this scheme, it's these guys. People who operate a single DNS server may or may not be very network savvy. But those who operate entire networks really have no excuse for leaving this feature off.
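
Conceptually, the check the edge routers have to perform is trivial: only forward packets whose source address belongs to prefixes the network actually announces (that's BCP38 in a nutshell). A toy sketch of that logic - the prefixes below are documentation ranges, not the 12.x/98.x example above:

    import ipaddress

    # Prefixes this network announces; nothing else is a valid source address here.
    OWN_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                    ipaddress.ip_network("198.51.100.0/24")]

    def egress_allowed(src_ip: str) -> bool:
        """Forward only packets claiming a source inside our own address space."""
        src = ipaddress.ip_address(src_ip)
        return any(src in prefix for prefix in OWN_PREFIXES)

    print(egress_allowed("192.0.2.17"))   # True  - legitimate source, forward it
    print(egress_allowed("203.0.113.9"))  # False - spoofed source, drop at the edge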

The one thing that mitigates these kinds of attacks in both number and scope is that most large networks with a lot of consumer customers implement this regime properly. With that, botnets are not a good option for launching such an attack: most bots are captured consumer desktops, captured servers are mostly used for C2 purposes, and a good number of hosting and colocation providers also keep their egress traffic clean. The more common culprits are corporate networks, small local providers, and providers with a questionable reputation - for whom this might actually be a feature they offer. So we basically have less competent operators, ignorant idiots and criminals.

We can try to attack the vast number of open resolvers, but it's a futile attempt doomed to fail. Their number is more or less slowly increasing, and removing some from the equation is useless. The networks, on the other hand, are a much smaller number and they can be categorized much better: they are simply all bad. Taking that into account, they can be further sorted into those that need assistance to fix the issue, those that need a warning shot, and those that need to be disconnected straight away. If networks from the first two groups do not react, they get disconnected as well. That approach is significantly more effective, and the networks are easier to find, too. The only downside is that they are significantly harder to probe. But all in all it's the easier and more efficient solution - specifically since there can be a perfectly valid reason for an open resolver. As I said earlier, most of them are just badly configured, but some are deliberately open to provide a service that cannot be provided from within the requesting network.

Most of those who come under this kind of attack see resolvers attacking them. But that's not really true. It's not the resolvers who are attacking them; they are just replying to queries which, from their point of view, are perfectly legit queries coming from the attacked network. The arguable part here is just volume: at some point you're supposed to realize that a source is a little bit too active to actually be requesting DNS records, or that your traffic is way above your baseline. The foundation of the problem, however, is that the actual attacker can query on behalf of someone he clearly isn't. And that is absolutely not a secret to the network that allowed that spoofed UDP traffic to egress.
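
Spotting "a little bit too active" boils down to per-source rate tracking - production resolvers do this with response rate limiting (RRL). A minimal sketch of the idea; the window and threshold are made-up illustrative values, not recommendations:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10
    MAX_QUERIES_PER_WINDOW = 200       # assumed baseline, purely for illustration

    recent = defaultdict(deque)        # source IP -> timestamps of its recent queries

    def suspiciously_active(src_ip, now=None):
        """Return True once a source exceeds the per-window query budget."""
        now = time.time() if now is None else now
        q = recent[src_ip]
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_QUERIES_PER_WINDOW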
