
You are correct that you can't escape the laws of physics latency-wise, but most web things we are all building, including 100% JavaScript apps, require a round trip to the server. For example, I can't post a tweet without a round trip to Twitter. On the LiveView side, UX-wise, we apply CSS loading states to the interacted elements, so the user is provided feedback while awaiting an acknowledgement.

On the server, our templates are precompiled and do change tracking, so we only execute the Elixir code necessary to render the parts of the template that changed; the rest of the calls are no-ops. From the top down it appears like an extremely heavy approach, but with our change tracking and diffing it's actually incredibly optimized.

Also note the Erlang VM was made to handle multiple users, as we showed in our "The road to 2M connections with Phoenix" blog post a couple years ago. More connected users will mean more memory, but memory is cheap, and the platform scales well enough that opening a websocket on all pages as a matter of course is completely viable. Hope that helps!
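To make the change-tracking point concrete, here's a minimal sketch of a LiveView against the current Phoenix.LiveView API (the module name and assigns are illustrative, not from the comment, and this needs a full Phoenix app to actually run):

```elixir
defmodule MyAppWeb.CounterLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    # :title never changes after mount; :count changes on each click
    {:ok, assign(socket, title: "Counter", count: 0)}
  end

  def handle_event("inc", _params, socket) do
    {:noreply, update(socket, :count, &(&1 + 1))}
  end

  def render(assigns) do
    # The template is precompiled into static and dynamic parts.
    ~H"""
    <h1><%= @title %></h1>
    <span><%= @count %></span>
    <button phx-click="inc">+</button>
    """
  end
end
```

When `@count` changes, only that dynamic slot is re-executed and its new value diffed down the websocket; the `@title` slot and all the static markup are skipped, which is what makes the per-update work so small.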


I also forgot to mention that we provide optimistic UI features with our loading states and annotations. For example, in the screencast, the "Save" button has phx-disable-with="Saving...", which is applied on the client instantly and stays until an ack is received. You can also leverage the loading states to toggle entire containers, for a happy medium between traditional optimistic UI and instant user feedback.
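In template terms, that looks something like the following sketch (the form markup and event name are illustrative; phx-disable-with is the attribute named above, and it swaps the button text client-side the moment the form is submitted, with no server round trip needed for the feedback itself):

```elixir
~H"""
<form phx-submit="save">
  <input type="text" name="title" value={@title} />
  <button type="submit" phx-disable-with="Saving...">Save</button>
</form>
"""
```

While the submit is in flight, LiveView also tags the form with a loading CSS class (e.g. phx-submit-loading), so you can style or toggle whole containers off that class for the coarser-grained loading states mentioned above.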




