Websocketd – It's like CGI, twenty years later, for WebSockets (websocketd.com)
176 points by smartmic on Dec 22, 2021 | 25 comments
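For context: websocketd spawns one process per WebSocket connection and wires the socket to that process's stdin/stdout, one line per message. A minimal handler sketch, shown here in Python although any language works:

    #!/usr/bin/env python3
    # Minimal websocketd handler sketch: stdin carries messages from the
    # browser, stdout carries messages back to it, one line per message.
    import sys

    for line in sys.stdin:
        sys.stdout.write(line.upper())  # echo each message back, uppercased
        sys.stdout.flush()              # flush so each line goes out promptly

Per the project README, you'd run it as `websocketd --port=8080 ./echo.py` (echo.py being the hypothetical script above) and connect from the browser with `new WebSocket("ws://localhost:8080/")`.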


Another option is gwsocket[1]

> gwsocket is a simple, standalone, language-agnostic, RFC6455 compliant WebSocket Server, written in C. It sits between your application and the client's browser, giving fast bidirectional communication between these two with ease and flexibility.

[1] https://gwsocket.io/man
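If I'm reading that man page right, gwsocket exposes a pair of named pipes (by default /tmp/wspipein.fifo for data to clients and /tmp/wspipeout.fifo for data from them), so broadcasting to connected browsers is just a write to the inbound FIFO. A rough sketch, assuming those defaults:

    # Sketch only: write one message to gwsocket's input FIFO; gwsocket
    # relays it to the connected WebSocket clients. The FIFO path is the
    # default named in the man page.
    with open("/tmp/wspipein.fifo", "w") as pipe:
        pipe.write("hello from the server\n")
        pipe.flush()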


Do any of these options support communicating via an HTTP API instead of stdin/stdout?

Ideally I'd like a standalone WebSocket service that handles the long-running connections and then calls my specified API, e.g. `POST <my-http-service>/websocket/client/<client_id>`. My service could then respond by sending a `POST <my-websocket-service>/websocket/client/<client_id>` back, like an open-source https://pusher.com. I realise it'd be fairly easy to build myself; I'm just wondering if there is something off the shelf that achieves this?
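For illustration, here's roughly the shape of that relay sketched with aiohttp; every name and endpoint below is hypothetical, mirroring the parent's URLs, and auth/error handling are omitted:

    # Hypothetical sketch of the described relay: the WebSocket service
    # forwards each client message to a backend over HTTP, and the backend
    # pushes messages back via POST /websocket/client/<client_id>.
    import aiohttp
    from aiohttp import web

    UPSTREAM = "http://my-http-service"  # placeholder backend base URL
    clients = {}                         # client_id -> WebSocketResponse

    async def ws_handler(request):
        client_id = request.match_info["client_id"]
        ws = web.WebSocketResponse()
        await ws.prepare(request)
        clients[client_id] = ws
        async with aiohttp.ClientSession() as session:
            async for msg in ws:
                if msg.type == aiohttp.WSMsgType.TEXT:
                    # Forward each client message to the backend over HTTP.
                    await session.post(
                        f"{UPSTREAM}/websocket/client/{client_id}",
                        data=msg.data)
        del clients[client_id]
        return ws

    async def push_handler(request):
        # The backend POSTs here to push a message down to one client.
        ws = clients.get(request.match_info["client_id"])
        if ws is None:
            raise web.HTTPNotFound()
        await ws.send_str(await request.text())
        return web.Response()

    app = web.Application()
    app.add_routes([web.get("/ws/{client_id}", ws_handler),
                    web.post("/websocket/client/{client_id}", push_handler)])
    web.run_app(app, port=8080)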


Centrifugo maybe [1]

[1] https://centrifugal.dev/


Have a look at Pushpin: https://pushpin.org/docs/about/


I like the stdin/stdout idea from an ease-of-use POV, but doesn't having each connection in a separate process pose a rather significant constraint on scalability, considering that WebSocket connections are generally long-lived?


How much do you need to scale before scaling up is not viable?

As far as memory goes, a few thousand identical processes shouldn't be too much; add memory to your computer/VM/droplet/etc. until it is enough. You can also rewrite the WebSocket program from Python to Go (and then to C) to fit more processes into the available memory.

As far as the number of processes goes, Linux can probably handle more active processes than the computer's memory can hold.

As far as file handles go, by default I think you're limited to a few tens of thousands, but there are knobs you can turn to raise that limit (see the snippet after this comment).

I would guess (thumbsuck) that you might run into scale-up walls once you have more than 100k active connections on a single Linux instance[1]. At that point you'll have paying customers, which will let you spend money on switching from a scale-up architecture to a scale-out architecture.

[1] My $5/m Digital Ocean droplet once handled ~50k long-lived TCP connections over a two-day period without much configuration. ISTR that I had to turn some knobs in /proc/... to raise the limits on file handles.
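For reference, the per-process limit can also be raised from inside the program, up to the hard limit; the system-wide caps are the /proc knobs mentioned in the footnote (sysctl fs.file-max and fs.nr_open). A small sketch in Python:

    # Raise this process's file-descriptor soft limit up to its hard limit.
    # The hard limit itself comes from ulimit / /etc/security/limits.conf,
    # and the system-wide caps from sysctl fs.file-max and fs.nr_open.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print(f"fd soft limit raised from {soft} to {hard}")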


After C, assembly.


I think you're right, but this is a tool explicitly designed for convenience, not for scalability. The target is more like: I need a WebSocket on my VPS to connect to my own app that is used only by me. Or to hack together a quick demo, which is basically the same thing.


Exactly. As a concrete example, I used websocketd a few years ago when I was debugging some geometry-heavy code and wanted to use D3 as a quick way to make a visual debugging UI. I was just running a single process locally; scale was not a concern. https://twitter.com/paulgb/status/914495701119365121


A modern Linux system can scale to a large number of processes just fine. (With a bit more configuration, it can even merge identical memory across processes, via KSM.) And if you find yourself running tens of thousands of connections that are all active, you can always move to something more scalable. In practice, something else may end up being your bottleneck. Meanwhile, this seems easy to get up and running with.


I think people vastly underestimate how much modern Linux can scale on a decent computer. I had systems in the field connecting via SSH into a single VM, with over 9000 concurrent connections, and it was fine on a KVM guest with about 24 GB of RAM and a couple of cores. My WAN connection was only 250 Mbps, which was far more limiting in terms of traffic than SSH or KVM.


Will you need to scale?


If you're using a heavy runtime, then maybe; with a lightweight process, maybe not. Most importantly, maybe you don't need to scale at all.


This is awesome! Websocketd occupies a similar niche to (and might pair well with) QuickServ, my user-friendly, single-binary web server with CGI-like capabilities. When I released that here, one of the comments was that it would be nice to have WebSocket support. Now I can just point people to Websocketd!

https://github.com/jstrieb/quickserv


Does it reconnect when the connection breaks? Does it keep track of missing packets? Are status and errors passed on to the caller?


> Does it reconnect when the connection breaks? Does it keep track of missing packets?

Isn't this something TCP itself does? So any protocol on top of TCP would inherit the same.

If the client goes offline for a longer duration, it should attempt to reconnect after coming back online.


The fact is that WebSocket connections tend to break. Keeping a stable connection requires programmer effort (see the sketch after this comment).

See this comment that someone posted here today:

https://news.ycombinator.com/item?id=29653600
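To make that "programmer effort" concrete: a client typically needs a reconnect loop with capped exponential backoff, something like the sketch below (using the third-party `websockets` package; the URL is a placeholder). Tracking missed messages additionally needs application-level sequence numbers, since TCP only guarantees delivery within a single connection.

    # Reconnect-with-backoff sketch for a long-lived WebSocket client.
    import asyncio
    import websockets

    async def run(url="ws://localhost:8080/"):
        delay = 1
        while True:
            try:
                async with websockets.connect(url) as ws:
                    delay = 1  # connected, so reset the backoff
                    async for message in ws:
                        print(message)
            except (OSError, websockets.WebSocketException):
                pass  # connect failed or connection dropped; retry below
            await asyncio.sleep(delay)
            delay = min(delay * 2, 30)  # capped exponential backoff

    asyncio.run(run())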


Does a corresponding client program exist, so that one might, e.g., tunnel some arbitrary protocol over a WebSocket connection?
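Tools in that space do exist (websocat, if I'm not mistaken, is one). For illustration, a minimal such client, forwarding a local TCP port over a WebSocket, might look like the sketch below; host, port, and URL are placeholders, and error handling/teardown are omitted:

    # Sketch: accept local TCP connections and tunnel each one over a
    # WebSocket to a remote endpoint (assumes the `websockets` package).
    import asyncio
    import websockets

    async def handle(reader, writer):
        async with websockets.connect("ws://example.com/tunnel") as ws:
            async def tcp_to_ws():
                while data := await reader.read(4096):
                    await ws.send(data)  # bytes go out as binary frames
            async def ws_to_tcp():
                async for data in ws:
                    writer.write(data if isinstance(data, bytes)
                                 else data.encode())
                    await writer.drain()
            await asyncio.gather(tcp_to_ws(), ws_to_tcp())

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 9000)
        async with server:
            await server.serve_forever()

    asyncio.run(main())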



This and this on my HN page are back to back. I find it funny: http://www.adama-lang.org/blog/woe-of-websocket/


Nice! It's unfortunate that there aren't more Unix-y tools for developing modern web apps.


So, inetd for WebSockets?


The 'about' section on their GitHub page says "Like inetd, but for WebSockets", so... maybe.
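The parallel is fairly literal: inetd spawns one process per TCP connection with the socket wired to stdin/stdout, which is exactly what websocketd does per WebSocket connection. Compare (handler names are placeholders):

    # /etc/inetd.conf style: one process per TCP connection
    myservice stream tcp nowait nobody /usr/local/bin/my-handler my-handler

    # websocketd: one process per WebSocket connection (per the README)
    websocketd --port=8080 ./my-handler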


How does this compare to socket.io? I know that socket.io is not RFC 6455 compliant. Can someone in the TypeScript/Node.js/Deno community make one that is?


The many options for WebSockets mentioned in this thread make me think I should make my WebSocket C library open/public as well.

Of course, since it's a library, you still need to write code that calls it.



