I've been following Bunny (BunnyCDN/Bunny.net) for a while and love what they are doing. Couple of suggestions:
• Every code snippet should be copy/pasteable (and work well); that article might be the entry point to what many are doing.
• I have this feeling this is following Cloudflare's Workers style of `addEventListener('fetch', eventHandler);`. I need to write a 500-word essay on that, but the short version is that I strongly believe it'd be way better for their clients if the client could just do `export default async function handleDns(event) { ... }` instead of having a global (what?) context where there's an unattached function called `addEventListener`, where you need to know it's for `"dns"` (is there even any other possibility here?), and where `respondWith()` only accepts a promise, which is not common (it's more common to accept an async fn, which then becomes a promise). Compare these two snippets, the current API and my small proposal:
// Current API:
addEventListener('dns', event => {
  event.respondWith(handleRequest(event.request));
});

function handleRequest(request) {
  return new TxtRecord('Hello world!', 30);
}
vs
// Potential/suggested API:
export default function handleDns(event) {
  return new TxtRecord('Hello world!', 30);
}
This way it's easier to write, to reason about, to test, etc. It's all advantages from the client's point of view, and while I understand it's slightly more code from Bunny's point of view, it could be done fairly trivially on Bunny's end: they could just wrap the default export with this code at the infra level, simplifying things and making dev lives a lot easier:
// Wrapper.js
import handler from './user-fn.js';

addEventListener('dns', event => {
  if (typeof handler === 'function') {
    event.respondWith(handler(event));
  } else {
    // Could support other basic types here, like
    // exporting a plain TxtRecord('Hello', 30);
    throw new Error('Only handler functions supported now');
  }
});
> I strongly believe that it'd be way better for their clients if the client could just do `export default async function handleDns(event) { ... }`
I'm the tech lead of Cloudflare Workers and you're absolutely right about this. We actually introduced such a syntax as an option a while back and are encouraging people to move to it:
Your argument is exactly one of the reasons for this. The more general version of the argument is that it enables composability: you can take a Worker implementation and incorporate it into a larger codebase without having to modify its code. This helps for testing (incorporating into a test harness), but also lets you do things like take two workers and combine them into one worker with a new top-level event handler that dispatches to one or the other.
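To make that concrete, here's a minimal sketch of that composition using the module syntax (the worker names and the routing rule are made up for illustration):

// Two self-contained workers, each a plain object with a fetch handler.
const workerA = {
  async fetch(request) {
    return new Response('Hello from A');
  },
};
const workerB = {
  async fetch(request) {
    return new Response('Hello from B');
  },
};

// A new top-level worker that dispatches to one or the other
// without modifying either worker's code.
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const target = url.pathname.startsWith('/b') ? workerB : workerA;
    return target.fetch(request, env, ctx);
  },
};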
Thanks! This article is pretty much what I had in mind and exactly my 500-word argument! I don't know how I missed it. This is def one of the things that made me think Cloudflare Workers were not so polished. Are you hiring? I've got many more ideas where that came from :)
That was actually the original idea, but for compatibility reasons we decided to take this path initially. We are exploring adding fallback support as well, given that it greatly simplifies things and DNS can't really afford to do a whole lot of waiting anyway.
We serve thousands of custom domains for our SaaS customers.
The end users of these domains are globally distributed and served from 14 different data centre locations across the world.
To do the geo-IP matching we tried a lot of things, third-party services etc., but couldn't find one that worked well and was priced well.
For example, the hosted DNS service we use also has an IP-based filter chain feature, but it's priced at around $22 per domain per month as an add-on.
In the end, we built an anycast-based solution that was very painful to set up but works fine now, and it lets us use a single A record that works across the world. We had to get an ASN, a /24 block, and a hell of a lot of back and forth with a government-run org to set it up.
A "hosted" scriptable DNS server which takes the location as input and output IP of nearest edge server as output is the exact thing I needed. So yes there is definitely a niche market for it.
I still have to explore how closely bunnydns is able to get the source IP/location (tricky) and how health checks etc. could work, but it's definitely something I would explore and consider.
We don't strictly use an IP DNS database to figure out the location, but a series of factors that make it much more accurate than standard GeoDNS. It's something we've had to do to improve our own routing for the CDN. We're constantly looking for feedback and ways to solve problems, so make sure to reach out. Your feedback is what allows us to improve.
They had me at the combination of the words "scriptable" and "DNS." My monkey brain is susceptible to that clever trick.
I'm struggling to see what this could be used for, but the comments here help.
In summary:
- an alternative to anycast.
- an alternative to routing inside your app (your app could detect the IP and behave differently based on internal rules). With in-app routing you always go to the same origin; scriptable DNS avoids that, so you could put things at the edge and reduce hops.
Years ago I did school IT, during the time when all websites moved to SSL-only. We used to filter websites easily with Squid and something called DansGuardian. When Google switched to SSL-only we had to change. We could have required everyone to install certificates, but instead we used PowerDNS with Lua to force all Google-search-related queries to answer with a Google-provided CNAME.
I know that we all have our preferences of languages, but is JavaScript in 2022 actually any worse than anything else as a scripting language? What would you prefer instead?
That's of course purely anecdotal, but recently I explored static site generators.
One of these was Hugo (written in Go, a pretty young ecosystem I'm not very familiar with). Everything was pretty straightforward.
The other one was Gatsby v4 (seemingly a pretty mature JS project). Oh boy.
The docs suggest multiple ways to initialize a new project, depending on which page you open, without a clear explanation of the difference. NPM will download an ungodly amount of dependencies and then happily inform you that you have 16 critical vulnerabilities. Plugins tend to create dependency hell.
In general, each time I try to do something with JS, I end up debugging issues that require some arcane knowledge, filtering out outdated docs, and dealing with the general flakiness of everything.
While I get that on the frontend it might be hard to avoid, I don't get why someone does this to themselves when they have other options (purely subjective opinion: I spend most of my time on the backend, so I'm probably biased).
Fair enough. I do agree that JS-based libraries, frameworks, and tools tend to have bad docs and questionable API designs, compared to equivalents that I've used in other languages. There is some cultural issue there that I don't understand.
But... this is an embedded scripting environment! You aren't using any of that stuff.
Yes and no. You're definitely less exposed to the JS ecosystem, but it's often the same issue with available examples, code snippets, and docs.
I've had a similarly bad experience with, e.g., plugins for VS Code. I wanted to add more supported syntax to an org-mode plugin, at which point I found that its test suite was messed up.
Of course, none of this is inherently the fault of JS as a language, but the current development culture around it promotes the use of lots of duct tape, which makes me nervous when I have to use anything JS-based.
It comes down to a combination of factors, but the long and short of things when it comes to content-focused sites specifically is that companies want all of:
- A modern user experience with rich content
- Authoring tools which are easy for non-technical users to understand
- Fast content delivery times
- Low operational costs
The problem is that getting all of them at once requires a bunch of fiddly integration problems. These problems are compounded if you throw e-commerce into the mix as well, in which case your marketing department is gonna want all kinds of analytics to try and suss out what users are doing, when, and why.
So, a lot of the weirdness and complexity has to do with making projects adaptable enough to handle a variety of different use cases while still offering at least a little bit of lift (vs. rolling the entire thing oneself).
Well dang, that's what I get for taking too long to implement my idea :-D
Seriously this is great. I started building a "scriptable DNS" to make it easy to have a DNS record that always points at the valid K8s nodes in my cluster (and randomizes the order of the IPs each time). Since nodes can come and go very quickly (especially during an upgrade), and their IP changes every time, it's useful to be able to act dynamically.
This is most assuredly better than what I was building, though. Mine is Rust-based, but the "script language" is a very simple DSL. I considered allowing Docker containers that receive some command arguments and must write the answer to standard out, but that felt like a brittle interface and I worried about performance (even with offering a cache). I also considered writing it in Elixir and allowing Elixir code snippets, but I got scared of how hard it would be to secure that.
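For reference, here's the shape of what I was going for, translated into the JS style from the article. Again, `ARecord`, returning an array of records, and the node list are assumptions on my part, not a documented API:

// Hypothetical list of healthy node IPs; in practice this would come
// from polling the K8s API or a health-check endpoint.
const healthyNodeIps = ['192.0.2.11', '192.0.2.12', '192.0.2.13'];

// Fisher-Yates shuffle so clients don't all pile onto the first node.
function shuffle(arr) {
  const out = [...arr];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

addEventListener('dns', event => {
  // Short TTL so churn propagates quickly as nodes come and go.
  const records = shuffle(healthyNodeIps).map(ip => new ARecord(ip, 15));
  event.respondWith(Promise.resolve(records));
});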
Anyway really neat idea! I hope to see more innovations and implementations!
I can't nail down the exact date, but I believe PowerDNS 2.0 shipped sometime around 2001-2002 with pipe backend support. It allowed you to craft dynamic responses to DNS queries from any language you could get to run on Linux.
Yeah this was my thought, too. I was using PowerDNS to do this sort of thing within a basic home environment. But I have no idea if it has the necessary features to keep a production environment happy. It was fun to play with.
Curious how you imagined handling TTL and response caching in this kind of scenario?
In my experience, ISPs (particularly residential providers) sometimes ignore/override the TTL in authoritative DNS records and aggressively cache responses, for reasons...
Yeah, great question, and I don't fully have an answer yet! For the scenario I need it for, the hostname only does one thing and won't be looked up by the client until it's needed, which helps avoid that problem. Furthermore, thanks to K8s NodePort routing, as long as one of the nodes is still alive and answering, the request will be routed even if the first IP is no longer valid. I've also considered, as the cluster size grows, only returning the 3 to 5 newest node IPs, since I expect the ones most likely to get killed/recycled will be the older ones (not always true, but often is in my setup).
Has anybody else run into this and solved it? Cloudflare DNS seems to have figured out a decent way to deal with this. I may take a close look at their responses and see what they set for TTL, etc.
I will admit my DNS knowledge is a bit out of date, so I am sure folks much smarter than me have indeed figured this out. I am also curious about how this gets mitigated.
1) Claim privacy first and then have a cookie banner.
2) Say “routing” when you mean location/IP based DNS.
3) Is that a loosely typed language in the scripting engine? Not sure I would want DNS queries to be relying on that.
I am sure there is still some innovation left in DNS. SDDNS, I'd call it: software-defined DNS. Especially with the splinternet we are walking into these days. I just don't think this version cuts it. Nevertheless, an interesting company to follow. I see potential.
Cookies as a technology say nothing about privacy. If a site wants to store the fact that you refused to share your private data, that preference almost certainly gets stored in a cookie.
You can route connections on different layers than IP routing. We commonly talk about http request routing as in dispatch based on the domain/path. I'm happy with "routing" as in directing traffic via DNS resolution. I doubt anyone here is confusing that.
But cookie banners do. Essential cookies that are required, e.g., to store login data do not require a cookie banner (https://github.blog/2020-12-17-no-cookie-for-you/). So if there is a cookie banner, you can assume that the site wants to store analytics, tracking, or advertising cookies.
> I'm happy with "routing" as in directing traffic via DNS resolution. I doubt anyone here is confusing that.
I disagree. As a former network engineer the title "We're transforming internet routing" and subtitle "Rethinking Internet Routing" [my emphasis] makes me think of IP based routing first. I think they could have been clearer or picked a less grandiose title.
Don't get me wrong, on the surface this looks like a neat tool.
Absolutely correct on the privacy. We're actually dropping everything from Google / Facebook and anything third-party at the moment (since it's not technically legal in Europe anyway). It will be a bit of a process though.
In woodworking, that's a different word. In "english" English, the woodworking word is pronounced "raowting", as in "now" or "about"; the business of choosing a route is pronounced "rooting", as in "boot".
In American English, "route" and "about" have the same vowel-sound, which seems unfortunate; I wonder how that happened.
English is a mess, but I hope we don't try to fix it!
Sure, most of us understand there are different uses of the word "routing" with respect to technology stacks. But they specifically say "Internet Routing" in the article title and elsewhere. "Internet Routing" does have a fairly specific meaning, i.e. IP routing as defined by a bunch of RFCs. What they're doing isn't really "internet routing"; it's really just redirecting traffic.
Why? It is a perfectly legitimate use of the term. You can use DNS responses to route visitors to the appropriate datacenter or regional network.
Head over to https://cachecheck.opendns.com/ and plug in 'www.google.com'; you'll notice that Google returns different IPs in different geographic locations to route visitor traffic.
When an expert is explaining things to non-experts (like a marketing page would), you use terms that your audience will understand and relate to. The goal is not perfect technical accuracy. The goal is to convey the basic idea so the reader can understand it.
Oh ffs, why does TFA refer to this as "internet routing"?? This is HTTP routing, or "web routing" if you want, but "internet routing" is something else entirely -- think BGP and OSPF. Is this a thing now, to refer to HTTP routing as "internet routing"?
It's not. It's just that "internet routing" really refers to routing protocols, while TFA seems to mean something completely different, and, really, specifically HTTP routing.
Feels like you could use this to monitor, say, some mail servers and use the DNS result to "route" to a healthy one, for example, or the closest geo. The web seems to be one specific use case, and I feel using "internet" is fair here.
"Routing" is more debatable if you really want to be pendantic, but if you want to be pendantic, you need to be precise on web vs internet :)
Also routing and DNS are different things. Misunderstanding what routing is while trying to sell your technology to technologists is likely not a winning strategy.
For people who want to understand, learn about, or stay on top of their DNS, check out dug. It's a CLI tool I made to help visualize DNS propagation, but it's a great learning tool.
Yeah, honestly I have very little faith in a license, especially on such a small project, to actually do anything. Mostly a small platform, tbh. It did keep dug from being accepted into Homebrew, though.
Somewhat related (and shameless plug), I coded a Lua-scriptable (Javascript planned) DNS server a while ago[0]. I'm using it in a few low traffic domains, but the code needs some additional love :-)
I've been looking for something like this within my internal network. I want scriptable control over DNS, but I run all Linux/BSDs, and this looks to be .NET. Are there instructions on running it in Linux and/or Docker containers?
Hi. It runs out of the box on Linux (BSD not tested, but it runs on macOS too). I'm in the process of configuring automatic releases on GitHub and a proper Docker image.
If you need help, write to me at luis at <my HN username>.com :-)
Where do they say that? This is the only part I can find about DNS geolocation:
> 2. DNS: Run trick DNS servers that return specific server addresses based on IP geolocation. Downside: the Internet is moving away from geolocatable DNS source addresses. Upside: you can deploy it anywhere without help.
> You're probably going to use a little of (1) [Anycast] and a little of (2). DNS load balancing is pretty simple. You don't really even have to build it yourself; you can host DNS on companies like DNSimple, and then define rules for returning addresses. Off you go!
Seems they are saying that "the internet in general" is moving away from location-based DNS, but that's a bit like saying that the internet in general is moving away from Wordpress.
2. Sending user data (IP address, location) to authoritative nameservers is out of vogue.
There are efforts to send privacy-friendly geo info to authoritative nameservers. But they aren't getting much traction. Which means location based DNS is getting less useful by the day (because it's not working for as many people).
I can think of use cases with existing solutions that could migrate to something like this, but not any really novel use cases. Put your health checks in your DNS server rather than issuing nsupdate requests from a monitoring system. Use this to do GeoDNS dynamically in the DNS server rather than using anycast and giving out different records at different authoritative DNS servers. Check the netblock or ASN of someone connecting and direct traffic from specific networks to different host records. Automatically adjust your TTL upward when you're getting a lot of DNS traffic for a particular record. Adjust it down when adding new A records to a particular name. Have fine-grained control over logging for just certain requests. Automatically have SPF and DKIM updated when adding records to a particular subzone.
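As a sketch, the TTL-adjustment idea might look something like this in the article's JS style. The `event.request.name` field, `ARecord`, and state persisting between requests on one instance are all assumptions, not documented behavior:

// Naive per-name query counter (assumes handler state persists between
// requests on a given instance, which may not hold on this platform).
const counts = new Map();
const WINDOW_MS = 60_000;

addEventListener('dns', event => {
  const name = event.request.name; // assumed field, not documented
  const now = Date.now();
  let entry = counts.get(name);
  if (!entry || now - entry.since > WINDOW_MS) {
    entry = { count: 0, since: now };
    counts.set(name, entry);
  }
  entry.count++;

  // Busy names get a longer TTL so resolvers cache more aggressively;
  // quiet names keep a short TTL so changes propagate fast.
  const ttl = entry.count > 1000 ? 300 : 30;
  event.respondWith(Promise.resolve(new ARecord('192.0.2.10', ttl)));
});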
One of the things that would be nice is if this would mean that Bunny's Let's Encrypt support would do wildcard certs. Right now, because Bunny doesn't control the DNS, they can't create wildcard certs. I know this is mostly about the scriptable DNS, but it is also an announcement that they're entering DNS more generally.
Surprised by all the negativity and dismissiveness in the comments.
Bunny in general has been a positive experience for me so looking forward to trying this
Not so sure about the per-million pricing on scriptable DNS queries. Isn't it quite easy to generate billions of DNS queries? I.e., I hope there is some sort of DDoS mitigation in front of that.
This is very clever, and something I wished existed inside of AWS/Route53. It would greatly simplify some of the work needed for redundancy/resilience. Something similar as a locally-deployable "unbound/dnsmasq style" component would be neat as well.
Geo-IP has never returned the correct state for me (across 3 different ISPs). The automatic timezone detection on phones also rarely works correctly here.
So I can easily believe that it’s wildly inaccurate for a significant amount of the world.
Our goal was not to use this to orchestrate a global CDN, it's just one of the many use-cases and can be combined with anycast easily. This is just one example and we're not really trying to push GeoDNS specifically. What we envision is smart load balancing, backups, dynamic routing for validation or service discovery and a bunch of other things that are currently very painful to do. GeoDNS is an old concept and can be solved much better without scripting anyway.
In terms of accuracy, it is generally very accurate on a country level. Anything more granular is pretty useless.
At my employer, some users register with a pm.me email address. When those users contact us via email, they use a different Protonmail address and never the pm.me address. I know you can never rely on the sender address, but from a support perspective this is still strange and extremely troublesome (for example, it is tedious to match the user to an account). I assume that Protonmail users can only receive emails at pm.me but not send from it. For me, that's actually a reason to block pm.me or to handle it like a throwaway address.
In ProtonMail you can have email aliases (different address for the same inbox). By default, user@protonmail.com and user@pm.me exist. You can both send and receive from them.
> to block pm.me or to handle it like a throwaway address
How is a pm.me address more throwaway than a GMail one? I would say it's the other way around, especially considering that many ProtonMail users pay for their email service, so it's more likely to be a real user behind the address.
My criticism is: You should never use an e-mail address from which you cannot send. This has nothing to do with Protonmail but is a problem in general.
> but sending emails from a @pm.me email address requires a paid account.
I forgot that. I personally have a paid account, and I think many people using Protonmail do; the free accounts are not so good.
> You should never use an e-mail address from which you cannot send.
Depends what for; if you just subscribe to newsletters you don't have to send emails. But I agree it's pretty dangerous to use an email address that you don't have send access for.