It’s done with JS, but that’s all written for you. As a bonus, LiveView apps do work without JS (they just aren’t updated over WebSockets anymore).
I assume you have to specifically design your site to work with both the request/response model and the LiveView model in order for this to actually work, as opposed to LiveView being able to plug that hole automatically for you.
You can design solely for LiveView and it handles everything. But if you want both request/response and LiveView (e.g. to mix the two, or to fall back to r/r when there’s no JS) you have to be more explicit in your design. It’s mostly trivial though. I have authentication pages that use the old controllers alongside pages that are pure LiveView, without any hassle (roughly like the router sketch below).
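Roughly what that looks like in the router (module and route names made up, not from my actual app):

    scope "/", MyAppWeb do
      pipe_through :browser

      # classic request/response pages handled by a controller (auth)
      get "/login", SessionController, :new
      post "/login", SessionController, :create

      # everything interactive as LiveView
      live "/dashboard", DashboardLive
      live "/items/:id", ItemLive.Show
    end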
> as it wouldn't work when JS is disabled in browsers.
You can make it work when JS is disabled as well: you fall back to rendering regular HTML. It does require a little extra work, but it’s not insurmountable (e.g. using @conn instead of @socket).
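One way to keep that extra work small (module and assign names made up): keep shared markup in function components that never reference @conn or @socket directly, so the same template renders from a plain controller or from a LiveView.

    defmodule MyAppWeb.Components do
      use Phoenix.Component

      attr :item, :map, required: true

      # renders identically in a dead (controller) view and a live view,
      # because it only depends on its own assigns
      def item_card(assigns) do
        ~H"""
        <div class="card">
          <h2><%= @item.title %></h2>
          <p><%= @item.body %></p>
        </div>
        """
      end
    end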
> you have to scale more with more users
I might opt for additional optimizations once it gets bigger, but I’m not too worried about scaling Erlang processes.
I have (limited) experience scaling long-lived websocket connections and it _sucked_. It is _way_ easier to scale out little Node servers that are "stateless" than it is to ensure that Client A is connected to Socket A.
I would much rather scale out my REST/Graph/RPC API instead of having to scale out a WS API.
99% of the time no one runs into scaling issues and worrying about it is premature optimisation. I have to remind myself that all the time.
And no, same hassle, same money spent. If you think about it from the start, server-side rendered pages are almost as cacheable as API responses will be. If you can’t cache, you’re in for a world of expense at scale whichever way you go.
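For instance (made-up controller name, just a sketch), a server-rendered Phoenix page can carry the same cache-control header you’d put on an API response:

    defmodule MyAppWeb.PageController do
      use MyAppWeb, :controller

      def show(conn, _params) do
        conn
        # let a CDN/proxy cache the HTML just like a JSON response
        |> put_resp_header("cache-control", "public, max-age=60")
        |> render(:show)
      end
    end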