The size of the diff and the latency of the underlying transport layer are independent. If your user in NY clicks a button that has to go to your server in SF, you pay for that ping in both directions: NY->SF for the click and SF->NY for the reply.
The same goes if they're on some flaky mobile connection in a car or on a train.
It’s also super easy to accidentally send a ton of data down the wire on component mount. I worked on a massive LiveView app at a company you’ve heard of, and these kinds of issues were a problem the whole time.
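To make the mount footgun concrete, here's a minimal sketch. The module and the `Orders` context functions are hypothetical, not from any real app; the point is that whatever you assign in `mount/3` gets rendered into the initial payload and held in the LiveView process, unless you reach for streams or temporary assigns.

```elixir
defmodule MyAppWeb.DashboardLive do
  use Phoenix.LiveView

  # Easy to get wrong: every row ends up in the first render sent over the
  # socket AND stays in this process's assigns afterward.
  def mount(_params, _session, socket) do
    {:ok, assign(socket, :orders, Orders.list_all())} # hypothetical; could be 10k+ rows
  end

  # One mitigation (LiveView 0.18.16+): stream the collection so items are
  # pushed to the client but not retained in server-side state.
  #
  # def mount(_params, _session, socket) do
  #   {:ok, stream(socket, :orders, Orders.list_recent(limit: 50))}
  # end
end
```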
You also give up the standard stateless HTTP request/response model when you go with LiveView. This is not necessarily bad, but people should be aware that they’re turning their stateless web tier into a stateful web tier. It’s an entirely different model (and in my opinion, more challenging).
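For a sense of what "stateful" means here, a hedged sketch (the `cart` assign and event name are made up): per-user state lives in a long-running process on the server and is mutated by events, rather than being reconstructed on every request.

```elixir
# State lives in this LiveView process for as long as the WebSocket is up.
# A deploy, node restart, or dropped connection loses it unless you persist
# and recover it yourself.
def handle_event("add_to_cart", %{"id" => id}, socket) do
  {:noreply, update(socket, :cart, fn cart -> [id | cart] end)}
end
```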
LiveView is cool technology but I don’t want people to overlook the fact that it has some sharp edges you have to be aware of when building large products.