> WebSockets ... haven’t ever wanted to use those. Ever. For anything. For any reason.
You’ve never used a web-app chat client?
> WebBluetooth
APIs like these don’t exist for the sake of regular unprivileged web-apps. They exist for the sake of browser extensions (or browser “apps”, or apps within a browser-projector like Electron), specifically in order to be used to add driver-like or service-like capabilities to devices like Chromebooks where the browser is the OS.
> APIs like these don’t exist for the sake of regular unprivileged web-apps. They exist for the sake of browser extensions (or browser “apps”, or apps within a browser-projector like Electron), specifically in order to be used to add driver-like or service-like capabilities to devices like Chromebooks where the browser is the OS.
That's not really true, though: they're part of the Chrome team's belief in not limiting the capabilities of the web as a platform for app development (on the basis of "if you lack one feature the app needs, the entire app ends up native"). This is a large part of Project Fugu: https://developers.google.com/web/updates/capabilities
BOSH is essentially long polling, which is pretty difficult to scale (the worst case can become one connection per message for a single client).
I'm pretty surprised, however, that a nearly ten-year-old standard is being considered as "superfluous" as newer technology like WebBluetooth and WebUSB. What we had before WebSockets wasn't really long polling; it was Flash.
Before websockets, webapp chat features—as implemented by your average web-backend programmer—couldn’t pass the C10K challenge.
Yes, fundamentally, on a technical level, there’s not much difference between holding open an HTTP connection in long-polling, vs. holding open a websocket connection.
But the abstractions presented by webserver gateway interfaces (e.g. CGI, prefork process-per-connection language-module embeds), languages/web frameworks (e.g. Ruby on Rails), and platforms (e.g. Heroku) back then, just didn’t support long polling efficiently.
HTTP backends, back then, were all built on an assumption of serialized queuing of HTTP requests—with each web server/web stack/worker thread serving requests one-at-a-time, getting each request out of the way as quickly as possible. There was no concept of IO asynchrony. Web servers were request loops, not event reactors. libuv didn’t exist yet; nginx hadn’t seen wide adoption yet. The standard web server was Apache, and Apache couldn’t “yield” from a request that was idle.
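The request-loop vs. event-reactor distinction can be sketched concretely. In a prefork model, every idle long-poll connection pins a worker process; in a reactor, an idle connection is just a parked coroutine. A minimal sketch using modern Python asyncio (obviously not era-accurate tooling—this is just to illustrate the "yield while idle" behavior Apache lacked):

```python
import asyncio

async def long_poll(event: asyncio.Event) -> str:
    # An event reactor "yields" here: while this connection is idle,
    # it costs almost nothing, unlike a blocked Apache worker process.
    await event.wait()
    return "new message"

async def main() -> None:
    event = asyncio.Event()
    # Hold 10,000 idle "connections" open in a single thread.
    waiters = [asyncio.create_task(long_poll(event)) for _ in range(10_000)]
    await asyncio.sleep(0)   # let every waiter run up to its await and park
    event.set()              # one broadcast "message" wakes them all
    results = await asyncio.gather(*waiters)
    print(len(results))      # all 10,000 held requests complete at once

asyncio.run(main())
```

The point is that the blocking happens inside the event loop's scheduler, not inside a per-connection OS process—which is exactly what the prefork-era stacks couldn't express.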
And, as such, providers like Heroku would queue at the load-balancer, and only proxy a single open request to your web backend at a time, under the presumption that it wouldn’t be able to handle concurrent load. So you had to pay for 2x the CPUs (e.g. Heroku “dynos”) if you wanted to be able to hold 2x the connections open!
Entire third-party businesses (e.g. https://pusher.com/) were built around the hosting of custom web servers that were written in an event-driven architecture, and so were able to host these pools of long-polling connections. But they were freakin’ expensive, because even they didn’t scale very well.
Eventually, it was realized that the just-introduced Node.js could do IO asynchrony pretty well, and people started building explicit “websocket servers” in Node, culminating in the https://socket.io/ library. Back then, you couldn’t just put your regular HTTP load-balancer in front of your websocket backend, because your regular HTTP load-balancer almost certainly didn’t support held-open connections. You needed to host socket.io on a separate host/port, directly open to the internet. (This was one of the driving forces of Node’s adoption: as long as you were putting a Node app directly on the Internet, you may as well just put the rest of your HTTP app in there as well, and make the “websocket backend” into your whole backend.)
Sure, these days, every backend, load-balancer, and NAT middlebox can handle long-polling just fine. But we got there with a decade of struggle, and “legacy”-but-not-really code that used WebSockets because they guaranteed the semantics that long-polling couldn’t.
(I should mention, though, that WebSockets still have some advantages in the modern environment; namely, idle WebSockets are known to be idle at the runtime level, and so, unlike with a long-polling HTTP request, a mobile device can relax its wakeup-timer intervals when the only network connections it’s holding open are idle WebSockets.)
I didn't mean the "chat with us now" engagement widgets; I meant, like, Google Hangouts, or Slack, or Twitch chat, or even a pre-Google-Docs Etherpad sidebar chat.
Though, honestly, I prefer the web-chat customer service for my bank/cellphone provider/etc. to calling them on the phone. I don't want to wait an hour on hold with my phone using up both battery and minutes; I want to just leave a window open on my computer and have it ding when they're ready.
A decade ago people used Comet for this, with PHP and Apache! Every single ongoing Comet connection occupied an Apache process—a significant resource hog—yet people still used it because they had no other choice. These days we have WebSockets, but I bet Comet could be implemented with minimal resource penalty now, thanks to the proliferation of async webserver support in modern backend stacks.
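That modern async Comet handler is easy to sketch. The long-poll contract is: hold the request open until a message arrives or a timeout fires, then let the client reconnect. A minimal illustration with Python asyncio, using a plain `asyncio.Queue` as a stand-in for whatever the real message source would be (the names here are hypothetical):

```python
import asyncio

async def comet_poll(queue: asyncio.Queue, timeout: float = 30.0) -> str:
    """Long-poll handler: wait for a message or time out.

    While waiting, this coroutine is suspended and consumes no worker
    process—the opposite of the old one-Apache-process-per-connection model.
    """
    try:
        return await asyncio.wait_for(queue.get(), timeout)
    except asyncio.TimeoutError:
        return ""  # a real HTTP handler would respond 204 and the client re-polls

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    poll = asyncio.create_task(comet_poll(queue))
    await asyncio.sleep(0.1)   # request hangs open, costing ~nothing
    await queue.put("hello")   # a message arrives...
    print(await poll)          # ...and the held request completes with it

asyncio.run(main())
```

Prints `hello`. Wiring this into an actual async web framework is the only remaining step; the resource profile is essentially the same as a WebSocket's.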