
Thanks.

- Load balancing is easier because your connection is stateless. You don't have to connect to the same server when you reconnect. Your up traffic doesn't have to go to the same server as your down traffic. Websockets tend to come with a lot of connection context. With SSE you can easily kill nodes, and clients will reconnect to other nodes automatically (rough client sketch after this list).

- The compression is entirely optional, so when you don't need it, don't use it. What's great about it is that it's built into the browser, so you're not having to ship a decompressor to the client first.

- The connection limit of 6 only applies to http1.1, not http2/3. If you are using SSE you'll want http2/3. But generally you want http2/3 from your proxy/server to the browser anyway, as it has a lot of performance/latency benefits and multiplexes everything over a single connection.

- In my experience CPU/memory usage is lower than websockets. Obviously, some languages make this more ergonomic with virtual/green threads (Go, Java, Clojure). But a decent async implementation can scale well too.
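
Roughly what I mean by stateless reconnects, as a sketch (TypeScript in the browser; the /events endpoint and render function are made up):

    // Minimal SSE client: the browser reconnects on its own, and because the
    // server keeps no per-connection state it doesn't matter which node the
    // retry lands on (assumes a hypothetical /events endpoint).
    const source = new EventSource("/events");

    source.onmessage = (e: MessageEvent) => {
      // Each message is the latest view of the state; just render it.
      render(JSON.parse(e.data));
    };

    source.onerror = () => {
      // Nothing to do here: EventSource retries automatically and sends the
      // Last-Event-ID header, so any node can pick the stream back up.
    };

    function render(view: unknown) {
      console.log("latest view", view);
    }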

Honestly, and this is just an opinion, I can't see when I would ever want to use websockets. Their reconnect mechanisms are just not reliable enough and their operational complexity isn't worth it. For me at least it's SSE or a proper gaming netcode protocol over UDP. If your browser game works with websockets it will work with SSE.



I appreciate the answers. For others reading, I also just ran across another thread where you posted relevant info [0]. In the case of my game, I'm going to consider SSE, since most of the communication is server to client. That said, I already have reconnects etc. implemented.

In my research I recall some potential tradeoffs with SSE [1], but even there I concluded they were minor enough to consider SSE vs WS a wash [2], even for my uses. Looking back at my bookmarks, I see that you were present in the threads I was reading; how cool. A couple of WS advantages I am now recalling:

SSE is one-way, so for situations with lots of client-sent data, a second connection will have to be opened (with overhead). I think this came up for me since if a player is sending many events per second, you end up needing WS. I guess you're saying to use UDP, which makes sense, but has its own downsides (firewalls, WebRTC, WebTransport not ready).

Compression in SSE would be negotiated during the initial connection, I have to assume, so it wouldn't be possible to switch modes or mix in pre-compressed binary data without reconnecting or base64-ing binary. (My game sends a mix of custom binary data, JSON, and gzipped data which the browser can decompress natively.)

Edit: Another thing I'm remembering now is order of events. Because WS is a single connection and data stream, it avoids network-related race conditions; data is sent and received in the programmatically defined sequence.

0: https://news.ycombinator.com/item?id=43657717

1: https://rxdb.info/articles/websockets-sse-polling-webrtc-web...

2: https://www.timeplus.com/post/websocket-vs-sse


Cool. I didn't notice either. :)

With http2/3 it's all multiplexed over the same connection, and as far as your server is concerned that up request/connection is very short lived.
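
As a rough sketch of that shape (TypeScript in the browser; /events and /commands are placeholder paths), the down stream stays open as SSE and every up event is just a short-lived fetch POST that http2 multiplexes over the same connection:

    // Down: one long-lived SSE stream.
    const events = new EventSource("/events");
    events.onmessage = (e) => console.log("state", e.data);

    // Up: each client event is a tiny POST. Over http2 these share the same
    // underlying connection as the SSE stream, so there's no extra handshake,
    // and from the server's point of view each request is short lived.
    async function sendUp(event: object): Promise<void> {
      await fetch("/commands", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }

    sendUp({ type: "move", x: 10, y: 4 });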

Yeah, mixed formats for compression is probably a real use case (like you said, once you commit to compression with SSE there's no switching during the connection). But then you still need to configure compression yourself with websockets. The main compression advantage of SSE is that it's not per message, it's for the whole stream. The implementations of compression with websockets I've seen have mostly been per-message compression, which is much less of a win (I'd get around 6:1, maybe 10:1 with the game example, not 200:1, and pay a much higher server/client CPU cost).
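
For reference, whole-stream compression on the SSE side looks roughly like this in Node/TypeScript (just a sketch with a plain http server and zlib; real code should also check the Accept-Encoding header first):

    import * as http from "http";
    import * as zlib from "zlib";

    http.createServer((req, res) => {
      res.writeHead(200, {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "Content-Encoding": "gzip", // one gzip context for the whole stream
      });

      // Z_SYNC_FLUSH pushes each event out immediately instead of waiting for
      // the compressor to fill its buffer.
      const gzip = zlib.createGzip({ flush: zlib.constants.Z_SYNC_FLUSH });
      gzip.pipe(res);

      const timer = setInterval(() => {
        // Repeated JSON keys compress very well across the life of the stream.
        gzip.write(`data: ${JSON.stringify({ t: Date.now(), state: "..." })}\n\n`);
      }, 1000);

      req.on("close", () => {
        clearInterval(timer);
        gzip.end();
      });
    }).listen(8080);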

Websockets have similar issues with firewalls and TCP. So in my mind if I'm already dealing with that I might as well go UDP.

As for ordering, that's part of the problem that makes websockets messy (with reconnects etc.). I prefer to build resilience into the system, so in the case of that demo I shared, if you lose your connection and reconnect you automatically get the latest view (there's no play back of events that needs to happen). SSE will automatically send the last received event id on reconnect (so you can play back missed events if you want; not my thing personally). I mainly use the event ID as a hash of the content: if the hash is the same, don't send any data, because the client already has the latest state.
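
A sketch of the event-id-as-content-hash idea (Node/TypeScript; latestView is a made-up placeholder for whatever produces the client's view):

    import { createHash } from "crypto";

    // Placeholder for whatever produces the latest state of this client's view.
    const latestView = () => ({ players: [], tick: 0 });

    function maybeSend(res: NodeJS.WritableStream, lastEventId: string | undefined): void {
      const view = JSON.stringify(latestView());
      const hash = createHash("sha256").update(view).digest("hex");

      // Same hash as the client's Last-Event-ID header: it already has this state.
      if (hash === lastEventId) return;

      // Otherwise send the latest view. The event id doubles as the content
      // hash, so after a reconnect the comparison above just works.
      res.write(`id: ${hash}\ndata: ${view}\n\n`);
    }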

That's by design; it's the way I build things with CQRS. Up events never have to be ordered with down events. Think about a game loop: my down events are basically a render loop. They just return the latest state of the view.

If you want to order up events (rarely necessary), I can batch on the client to preserve order. I can use a client timestamp/hash of the last event (if you want to get fancy), and the server orders and batches those events in sync with the loop, i.e. everything you got in the last X time (like blockchains/trading systems). This is only for per-client ordering, not distributed client ordering, otherwise you get into Lamport clocks etc.
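
If I did need per-client ordering of up events, the client-side batching would look something like this sketch (TypeScript; /commands is a placeholder): queue events with a timestamp and flush once per tick, so the server gets one request with the events already in client order.

    type UpEvent = { ts: number; type: string; payload: unknown };

    const queue: UpEvent[] = [];

    function enqueue(type: string, payload: unknown): void {
      queue.push({ ts: Date.now(), type, payload });
    }

    // Flush once per tick: one POST carries the batch in client order, so the
    // server can apply it in sequence without any cross-request ordering.
    setInterval(async () => {
      if (queue.length === 0) return;
      const batch = queue.splice(0, queue.length);
      await fetch("/commands", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      });
    }, 100);

    enqueue("move", { x: 1, y: 2 });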

I've been burnt too many times by thinking websockets will solve the network/race conditions for me (and then failing spectacularly), so I'd rather build the system to handle disconnects rather than rely on ordering guarantees that sometimes break.

Again, though, my experience has made me biased. This is just my take.



