
Why design your own API so that I can't try it without rewriting my entrypoints? No thanks.

Cloudflare is building an insanely good platform and I think it is one that is worth betting on into the future. I have no idea where this company came from. Maybe it's a rebrand, because they seem to have a serious customer base and perhaps a real network footprint.

Bunny has ~119 PoPs, less than half of Cloudflare's presence, and Cloudflare has Queues, streaming, D1 (a database), R2, and all sorts of other things. Workers' DX cannot be beaten.

Just my 2c. If the creators are here, I'd love to know why you decided to design a new API. That is so upsetting.
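
To make it concrete, here's a rough sketch of the kind of module-syntax entrypoint I'd rather not have to rewrite (the standard Workers-style shape; the handler body is just a placeholder):

    // Minimal sketch (TypeScript, module syntax) of a fetch entrypoint.
    // Nothing here is provider-specific on purpose; that's the point.
    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        return new Response(`Hello from ${url.pathname}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };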



Bunny has been around for much longer than CloudFlare. All those third-party video streaming websites (e.g. adult content) rely on CDNs like these. Bandwidth is very cheap. CloudFlare is able to command its prices mostly because of its security features and the fact that it is a pull-based CDN. Most of the internet outside of SaaS relies on traditional CDNs like Bunny for low-cost distribution.


Did they undergo a rebrand? Did I just miss this company for many years (it's possible)? I'm happy to believe you. But when you say "traditional CDNs," I think Akamai.


> Bunny has ~119 PoPs, less than half of Cloudflare's presence

Cloudflare doesn't execute workers in all their PoPs.

I'm in central Mexico and my workers execute in DFW even though there's a Cloudflare PoP not even 30 mins away from here (QRO).


> Cloudflare doesn't execute workers in all their PoPs.

Yes we do!

> I'm in central Mexico and my workers execute in DFW even though there's a Cloudflare PoP not even 30 mins away from here (QRO).

I think you will find that even if you turned off Workers, your site would still be routed to DFW. Some of our colos don't have enough capacity to serve all traffic in their local region, so we selectively serve a subset of sites from that colo and reroute others to a bigger colo further away. There are a lot of factors that go into the routing decision but generally sites on the free plan or lower plan levels are more likely to be rerouted. In any case, the routing has absolutely nothing to do with whether you are using Workers. Every single machine in our edge network runs Workers and is prepared to serve traffic for any site, should that traffic get routed there.

(Additionally, sometimes ISP network connectivity doesn't map to geography like you'd think. It's entirely possible that your ISP has better connectivity to our DFW location than the QRO location.)


I've heard this argument from you before (on Twitter, iirc), but I've been using Workers for 4 years now. Never, not even once, have I seen a Worker execute in Mexico. They always execute in DFW.

The CDN does often cache stuff in QRO, but Workers and KV are a completely different story.

We're not on the free plan. We pay both for Workers and the CF domain plan.

Maybe all PoPs have the technical capacity to run Workers, but if for whatever reason they don't actually run them there, then that capacity is irrelevant.


> The CDN does cache stuff on QRO often but Workers and KV are a completely different story.

I don't know of any way that requests to the same hostname could go to QRO for cache but not for Workers. Once the HTTP headers are parsed, if the URL matches a Worker, that Worker will run on the same machine. This could change in the future! We could probably gain some efficiency by coalescing requests for the same Worker onto fewer machines. But at present, we don't.

I do believe you that you haven't seen your Workers run in QRO, but the explanation for that has to be something unrelated to Workers itself. I don't know enough about your configuration to know what it might be.


A couple of years ago your CEO and another CF employee explained that free plans get routed to other PoPs:

> Not all sites will be in all cities. Generally you’re correct that Free sites may not be in some smaller PoPs depending on capacity and what peering relationships we have.

https://x.com/eastdakota/status/1254118993188642816

> The higher the plan the higher the priority, so if capacity is an issue (for whatever issue, from straight up usage to DDoSes) free sites will get dropped from specific locations sooner. Usually you will still maintain the main locations.

https://x.com/itsmatteomanf/status/1261028088919609352

So I ended up getting a paid plan, but the behavior still hasn't changed. I've tried different ISPs and locations and I've never seen a Worker execute in Mexico (QRO, GDL, MEX) or in any of the US PoPs closer than DFW (MFE, SAT, AUS, IAH).
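
For anyone who wants to check this themselves, here's a minimal sketch of a Worker that reports which colo actually executed it (request.cf.colo is the IATA code of the data center handling the request; the inline type is just to avoid depending on the workers-types package):

    // Minimal sketch: return the Cloudflare colo that executed this Worker.
    export default {
      async fetch(request: Request): Promise<Response> {
        const cf = (request as Request & { cf?: { colo?: string } }).cf;
        return new Response(`Worker executed in: ${cf?.colo ?? "unknown"}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };

Checks like this from here have only ever shown DFW, never QRO.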


Cloudflare DX is garbage. It has improved a bit in the last year but it's very far from being usable by your average developer. I am building a product on workers and I am questioning that decision every other day.


Are you doing it in Rust? TypeScript with Workers is a dream. Consider that, while it is not yet fully mature, you can build and launch your app once and it is global-first. It costs like $100 or less to run at significant scale. It's a dream.


Yes. There is a steep discovery curve for the wasm target. However, it makes development easier, because once your code compiles, it’ll probably run fine. There are some gotchas related to the platform, but once you learn them, you’ll be fine. Still, none of this is documented, and the worker crate is practically unmaintained.


Once you have the app running in the cloud, Workers are a great runtime. Super solid with great perf and uptime. But CF still needs to improve local DX, a lot.


DX == developer experience?

I think it's pretty good, but yeah, not ideal. I'm also building a product on workers, and using D1, KV, R2, queues, and am pretty happy with the DX. Running remote previews is pretty neat.
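
Roughly what that looks like in one handler, as a sketch (the binding names MY_KV, DB, BUCKET, and MY_QUEUE are made up and would be declared in wrangler.toml; the types come from @cloudflare/workers-types):

    // Sketch of a single fetch handler touching KV, D1, R2, and Queues bindings.
    interface Env {
      MY_KV: KVNamespace;   // KV namespace binding
      DB: D1Database;       // D1 database binding
      BUCKET: R2Bucket;     // R2 bucket binding
      MY_QUEUE: Queue;      // Queue producer binding
    }

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const greeting = await env.MY_KV.get("greeting");                   // KV read
        const row = await env.DB.prepare("SELECT 1 AS ok").first();         // D1 query
        await env.BUCKET.put("last-visit.txt", new Date().toISOString());   // R2 write
        await env.MY_QUEUE.send({ path: new URL(request.url).pathname });   // enqueue for later
        return Response.json({ greeting, row });
      },
    };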


Cloudflare had only 100 PoPs just a few years ago. Bunny has been around 10 years, but didn't get the cash injection from Google like Cloudflare did.

If you read the article, Bunny uses Deno, while CF uses a cut-down version of Chromium (each instance is like a browser tab; isolated). Thus the API difference.

But I do agree, CF is building out more of a suite.


WorkerD isn't anywhere near a "cut-down version of Chromium"; it is an incredible platform with years of engineering put into it, from some of the people behind very similar and successful products (GAE and Protocol Buffers, to name a couple). I assume you are referring to V8 here, but that also powers Deno.

WorkerD is open source: https://github.com/cloudflare/workerd

I personally am not a fan of Deno because of how it split the Node.js ecosystem, so that is not a benefit in my eyes. Of course, Workers can run Rust.

Nothing you said here necessitates an API difference.



