
is there still an argument that can be made in favor of doing your UI on the server side?

Ability to link to documents, bookmark, meaningfully use history and save pages to the disk for offline use. Ability of the users to customize standard behaviors without reverse-engineering your JavaScript code. Transparency, which often leads to much, much easier debugging and improved usability. No need to run a quad-core 4GB desktop to use the website.

And this is just the stuff relevant to internal (non-public) websites. You can argue that all of this can be achieved with JavaScript-heavy clients, but in reality, it just isn't. It's not something you get by default; it's tons of extra work, and most people don't do that work.

Even just in principle, it can't come close in speed (the client side can render optimistically where possible), and it can't come close in flexibility (you don't have a model of state on the client to perform logic with).
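A minimal sketch of the optimistic rendering mentioned above: the client updates its own model and re-renders immediately, then reconciles when the server replies. `saveTodo`, `render`, and the in-memory `todos` list are illustrative names, not anyone's actual API, and the server round-trip is simulated with a resolved promise.

```javascript
const todos = [];

function render() {
  // Stand-in for real DOM work: just produce the visible list of rows.
  return todos.map(t => `${t.pending ? "(saving) " : ""}${t.text}`);
}

function saveTodo(text) {
  const todo = { text, pending: true };
  todos.push(todo); // optimistic: shown to the user before the server confirms
  const optimisticView = render();
  // Simulated server round-trip; a real app would use fetch() here.
  return Promise.resolve().then(() => {
    todo.pending = false; // reconcile once the server has acknowledged
    return { optimisticView, confirmedView: render() };
  });
}
```

The user sees the new row instantly; only the "(saving)" marker depends on the network.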

In principle, server-side rendering can allow you to share pre-rendered components between thousands of users, saving everyone tons of work. In practice, rendering on the server side is just string concatenation and is insignificant compared to things like running SQL queries, which you'll have to do anyway.
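To make both halves of that concrete, here is a sketch of a server-rendered fragment: the rendering itself is plain string concatenation, and the result can be cached once and served to many users. `renderSidebar` and `fragmentCache` are hypothetical names for illustration.

```javascript
const fragmentCache = new Map();

function renderSidebar(categories) {
  const key = "sidebar:" + categories.join(",");
  // One user's render is reused for every user with the same inputs.
  if (fragmentCache.has(key)) return fragmentCache.get(key);
  // "Rendering" is just building a string.
  const html =
    "<ul>" + categories.map(c => `<li>${c}</li>`).join("") + "</ul>";
  fragmentCache.set(key, html);
  return html;
}
```

The second request with the same categories never touches the template code at all.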



> Ability to link to documents, bookmark, meaningfully use history and save pages to the disk for offline use. Ability of the users to customize standard behaviors without reverse-engineering your JavaScript code. Transparency, which often leads to much, much easier debugging and improved usability. No need to run a quad-core 4GB desktop to use the website.

You can manipulate the page url/browser history from js, and you can use the url to set the application state. History and bookmarks work perfectly well in gmail, for example.
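A sketch of what that looks like, using the URL as the single source of application state. `stateToHash` and `hashToState` are illustrative helpers; in a browser you would pair them with `history.pushState()` and the `popstate` event.

```javascript
function stateToHash(state) {
  // Serialize the app state into a bookmarkable URL fragment.
  return "#" + new URLSearchParams(state).toString();
}

function hashToState(hash) {
  // Recover the app state from a URL fragment, e.g. on page load.
  return Object.fromEntries(new URLSearchParams(hash.slice(1)));
}

// In the browser (not runnable outside one):
// history.pushState(state, "", stateToHash(state));
// window.addEventListener("popstate", () => restore(hashToState(location.hash)));
```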

This does take explicit coding. But, if fragments of the page are being replaced as bcx does, then you need to do similar coding anyhow. Otherwise, you're doing whole page loads on every request.

js works pretty well on modest smartphones, where network latency is usually the bigger concern...


> This does take explicit coding. But, if fragments of the page are being replaced as bcx does, then you need to do similar coding anyhow.

It should not be similar. There is a huge architectural difference between the two approaches. In the server-side approach, you're adding caching or prefetch to an already working application that has established, working URLs. With the client-side approach, you need to implement adapters that transform URL information into the client state that is normally reached through a series of UI operations and AJAX calls. Then you need to add new code to generate URLs and manipulate history.
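A sketch of the adapter layer described above: on a deep link, the client must map the URL to the state a user would otherwise reach through clicks and AJAX calls. The route patterns and loaders here are hypothetical.

```javascript
// Each route pairs a URL pattern with a loader that rebuilds the
// client state the UI would normally reach interactively.
const routes = [
  { pattern: /^\/projects\/(\d+)$/, load: id => ({ view: "project", projectId: Number(id) }) },
  { pattern: /^\/inbox$/, load: () => ({ view: "inbox" }) },
];

function stateFromUrl(path) {
  for (const { pattern, load } of routes) {
    const m = path.match(pattern);
    if (m) return load(...m.slice(1));
  }
  return { view: "notFound" };
}
```

Every new screen needs its own entry here, which is exactly the per-feature work the comment above is pointing at.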

The beauty of caching or partial page fetches is that they are generic. History manipulation is not.

> js works pretty well on modest smartphones, at which level network latency is usually the major concern...

The last time I tried to browse on a Kindle, it choked and died on most JS-heavy websites. When a 500 MHz processor is not fast enough to browse the web, to me, that's a problem.




