You are running this on localhost but won't interactions be slow if a user is 200ms from the server?
Re-rendering templates every time on things that require high interaction also seems very expensive and an easy way to slow down your server when you have multiple users, correct? Or what am I missing?
LiveView does not re-render the template on every interaction. LiveView actually sends patches over the wire, which are typically smaller than a hand-written JSON response. The screencast linked above has a good example of this: clicking the "retweet" button sends a minimal payload, since we know the element's exact position on the page.
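As a rough illustration of why the patches are so small (these payload shapes are made up for the example; LiveView's actual wire format differs), compare resending a whole fragment with sending only the changed dynamic slot:

```javascript
// Hypothetical payload shapes -- illustrative only, not LiveView's real wire format.

// A full server-rendered response would resend the entire fragment:
const fullResponse = {
  html: '<div class="tweet"><p>Some tweet text here...</p>' +
        '<button class="retweet">Retweet (42)</button></div>'
};

// A LiveView-style diff carries only the dynamic slot that changed
// (slot 0 = the retweet count), keyed by its position in the template:
const diffResponse = { 0: "43" };

const fullSize = JSON.stringify(fullResponse).length;
const diffSize = JSON.stringify(diffResponse).length;

console.log(fullSize, diffSize); // the diff is an order of magnitude smaller
```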
LiveView also uses a long-running WebSocket connection and that reduces the amount of data sent over the wire compared to regular requests/responses, as you don't need to encode headers, cookies, etc.
Finally, if you are worried about latency, you can call "liveSocket.enableLatencySim(200)" in your browser console to emulate how your application behaves at high latencies.
The other approach is SSR with React/Vue that hydrates on page load, with something like Next.js/Nuxt.js. https://nextjs.org/
But the Phoenix route seems better if you want a more Railsy, single-framework approach (assuming you don't want to write all your server-side code in JS, which I do not). It's more cohesive and quicker to roll something out.
I currently do mostly Vue with Rails backends professionally, but if I were starting from scratch or rearchitecting with Elixir, I'd seriously consider going full-blown LiveView.
My only concern would be missing out on some UI libraries and the sheer size of community support. But I wouldn't miss the super-complicated JS tooling setups I currently maintain (on top of Rails, or jammed through the asset pipeline via Webpacker); I'd happily trade them for a more centralized approach. I've gotten a bit too used to maintaining the frontend almost separately from the server app, and I sometimes miss the simple days of being pure Rails.
One additional concern may be portability to mobile with React Native. But that only applies to the subset of apps where reuse/cross-platform makes sense. Still, it was a big reason these SPA-style frameworks flourished like they did.
It turns out that Elixir/Phoenix templating is, in general, astoundingly fast, and that diffing operations are highly optimized at the language level due to the data structure (IO list) used.
Due to the efficient diffing and data format, in practical terms I'd estimate a normal web app doesn't require much more server resources with LV than a REST API backend. Maybe double? Of course that'll depend drastically on the page and template size involved. Loading the text of full novels might not fare well!
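To make the statics/dynamics idea concrete, here's a rough sketch in JavaScript (the real thing is compiled Elixir/EEx producing IO lists; the shapes and names here are made up for illustration):

```javascript
// Sketch of the statics/dynamics split behind LiveView templates
// (illustrative JavaScript; the real implementation is compiled Elixir/EEx).
// The static strings are fixed at compile time; only the dynamic slots
// are evaluated per render, and rendering is just interleaving the two.

const template = {
  statics: ['<span class="count">', '</span> retweets'],
  // each dynamic slot is a function of the current assigns
  dynamics: [assigns => String(assigns.count)]
};

function render(template, assigns) {
  let out = template.statics[0];
  template.dynamics.forEach((dyn, i) => {
    out += dyn(assigns) + template.statics[i + 1];
  });
  return out;
}

console.log(render(template, { count: 42 }));
// -> <span class="count">42</span> retweets
```

Because the static parts never change, diffing two renders reduces to comparing the dynamic values, which is why it's so cheap.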
On the extreme end, I have a few pages plotting IoT data where a dropdown can take ~3 seconds in LV... Granted, the server is serving a dozen SVG plots with ~80,000 data points in total and performing the server template diff in that time. That's on an RPi3 "server" which is also processing data, running similar pages in 4-10 browser tabs, and running its own web browser. I haven't gotten close to using up the RAM. Much of the slowness in that case is Chrome choking on that much SVG; I haven't bothered optimizing the server side by dropping already-rendered graphs.
Hope that helps give some insights. It'd be interesting to hear from people running high traffic sites.
The patches are sent as pure data, and a small client-side JavaScript library (morphdom) patches the DOM accordingly, so the template itself doesn't get sent.
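The client-side step can be sketched like this (illustrative only; the real client is the phoenix_live_view JS library plus morphdom, and the wire format differs). The client caches the static strings and the current dynamic values; a patch is just the changed dynamics, keyed by slot index:

```javascript
// Sketch of applying a data-only patch on the client (illustrative).
const statics = ['<button class="retweet">Retweet (', ')</button>'];
let dynamics = { 0: '42' };

function applyPatch(patch) {
  Object.assign(dynamics, patch);            // merge only the changed slots
  let html = statics[0];
  for (let i = 0; i + 1 < statics.length; i++) {
    html += dynamics[i] + statics[i + 1];
  }
  return html; // morphdom would now diff this against the live DOM
}

console.log(applyPatch({ 0: '43' }));
// -> <button class="retweet">Retweet (43)</button>
```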
You are correct: you can't escape the laws of physics latency-wise, but most web things we are all building, including 100% JavaScript apps, require a round trip to the server. For example, I can't post a tweet without a round trip to Twitter. On the LiveView side, UX-wise, we apply CSS loading states to the interacted elements, so the user is provided feedback while awaiting an acknowledgement.

On the server, our templates are precompiled and do change tracking, so we only execute the Elixir code necessary to render the parts of the template that changed. The rest of the calls are noops. From the top down it appears like an extremely heavy approach, but with our change tracking and diffing it's actually incredibly optimized.

Also note the Erlang VM was made to handle multiple users, as we showed in our "The road to 2M connections with Phoenix" blog post a couple years ago. More connected users means more memory, but memory is cheap, and the platform scales well enough that opening a websocket on every page as a matter of course is completely viable. Hope that helps!
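The change-tracking idea can be sketched as follows (illustrative JavaScript; Phoenix does this at template compile time in Elixir, and the slot/deps structure here is invented for the example). Each dynamic slot knows which assigns it depends on; on re-render, slots whose assigns are unchanged are noops and their cached value is reused:

```javascript
// Sketch of server-side change tracking (illustrative only).
let evaluations = 0;

const slots = [
  { deps: ['count'], eval: a => { evaluations++; return String(a.count); } },
  { deps: ['name'],  eval: a => { evaluations++; return a.name; } }
];

function rerender(prev, next, cache) {
  const diff = {};
  slots.forEach((slot, i) => {
    const changed = slot.deps.some(dep => prev[dep] !== next[dep]);
    if (changed) {
      cache[i] = slot.eval(next);  // only changed slots are re-executed
      diff[i] = cache[i];          // and only they go out on the wire
    }
  });
  return diff;
}

const cache = {};
rerender({}, { count: 42, name: 'josevalim' }, cache);  // first render: both slots run
const diff = rerender({ count: 42, name: 'josevalim' },
                      { count: 43, name: 'josevalim' }, cache);

console.log(diff, evaluations); // { '0': '43' } 3 -- the name slot was a noop
```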
I also forgot to mention that we provide optimistic UI features via our loading states and annotations. For example, in the screencast the "Save" button has phx-disable-with="Saving...", which applies on the client instantly and stays until an ack is received. You can also leverage the loading states to toggle entire containers, for a happy medium between traditional optimistic UI and instant user feedback.