Hacker News

And I think that is madness. The web should have never gone down that route. People should push back.


So all webpages should be simple forms, without even validation?

I remember those days, and actually shipped "dynamic" webpages before JavaScript/AJAX/DOM. I'll take cache flushing.

Imagine if every like/upvote button required a screen refresh... imagine Google maps...


You are indeed very good at imagining a world where the only choices are the web exactly as it was in the 90s, and exactly as it is today. That's a rare display of creative ability, but also a false dichotomy.


Illuminate it for us, then. What is the middle ground that permits SOME computation in the client, yet is so restricted that it cannot exploit timing or cache attacks to steal information from other processes running on the same machine?

The difficulty with both timing and cache attacks is that a "sandbox" approach is not possible... at least not without special hardware and OS-level support, like the ability to tell the OS to flush caches on context switches for certain processes.


Before JavaScript got fast and feature-rich we had Java applets. We also had Flash for even longer. They are not fundamentally different from JavaScript: if our CPU runs code from the net, there can be exploits.

So maybe the thin-client model? All the code runs on the server and the UI is streamed to the client? But the lag would be higher, and the cost to the web server would probably have prevented any internet boom.


> What is the middle ground that permits SOME computation in the client

You already went down the wrong road there. Can you imagine how to add functionality declaratively?

The middle ground is extending user agents to support features that improve the browsing experience and enable new functionality, without turning it into an arbitrary application delivery framework. And yes, this means rejecting and scoping out some things that people do in browsers today.

Just like in the past, you as a developer would get to choose whether to write a website (ubiquitous, accessible, runs on grandma's old potato, relatively easy and cheap to maintain; these are the qualities that made the web popular, and businesses around the world decided it was OK to make their product a website even if that meant forgoing some features a desktop application could have), a native application (more effort, more complex, more expensive, more invasive, more friction, more concerning w.r.t. security), or both.

To illuminate you, let's pick some examples from the false-dichotomy post...

Consider form validation. This is in fact already done (and could always be extended to cover more cases): HTML5 has built-in form validation that works without JavaScript. It's also perfectly backwards compatible with old browsers that don't do validation; they might send you invalid fields, but you have to validate them server side anyway because you can't trust the client.

https://developer.mozilla.org/en-US/docs/Learn/Forms/Form_va...
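To make this concrete, here is a minimal sketch using only standard HTML5 attributes (the form action and field names are placeholders); the browser enforces all of this before submission, with no script:

```html
<form action="/signup" method="post">
  <!-- The browser refuses to submit until this is non-empty
       and passes the built-in email syntax check -->
  <label>Email <input type="email" name="email" required></label>

  <!-- pattern adds a custom regex constraint, declaratively;
       title is shown in the browser's validation message -->
  <label>Username
    <input type="text" name="user" required pattern="[a-z0-9_]{3,16}"
           title="3-16 lowercase letters, digits, or underscores">
  </label>

  <button>Sign up</button>
</form>
```

Old browsers simply ignore the unknown attributes and submit anyway, which is exactly why the server-side check remains mandatory.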

Static maps have already been done. Not a super smooth experience, but you could improve that by speccing a zoomable, pannable tiled-image element that sends requests to a specified URL when you pan outside the loaded area or zoom in. Add a set of loadable elements embedded in this image element and you get something that starts to resemble SVG. No JS required.
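A sketch of what such a spec could look like; every tag and attribute name below is invented for illustration, nothing here exists in any browser:

```html
<!-- Hypothetical: a pannable, zoomable tiled image. The browser
     substitutes {z}/{x}/{y} and fetches new tiles on pan/zoom,
     the way slippy-map JS libraries do with scripts today -->
<tiled-image src-template="https://tiles.example.com/{z}/{x}/{y}.png"
             min-zoom="0" max-zoom="18" lat="52.52" lon="13.40">
  <!-- Declarative overlays: points of interest as child elements -->
  <tile-marker lat="52.5163" lon="13.3777" label="Brandenburg Gate"/>
</tiled-image>
```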

Information about points of interest can already be shown with the :hover selector, and there's no reason we couldn't spec an element whose visibility is toggled with a click; no JS needed.
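In fact, one such element already shipped: the standard `<details>` element gives click-toggled visibility with no JS at all.

```html
<!-- Collapsed by default; clicking the summary toggles the body.
     Add the "open" attribute to start expanded -->
<details>
  <summary>Brandenburg Gate</summary>
  <p>18th-century neoclassical monument in Berlin.</p>
</details>
```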

Upvote/downvote buttons just need an attribute that tells the browser to post the request but stay on the current page. (This also degrades gracefully in browsers that don't support the attribute.) You could even toggle the visibility of the arrows after posting; similar CSS selectors for checked inputs (:checked) already exist.
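A sketch of what that could look like; the `navigate` attribute here is invented for illustration and is not part of any spec:

```html
<!-- Hypothetical: navigate="none" tells the browser to fire the
     POST but stay on the current page. A browser that doesn't
     know the attribute does an ordinary form post instead, which
     is the graceful degradation described above -->
<form method="post" action="/vote?id=123&amp;dir=up" navigate="none">
  <button>&#9650;</button>
</form>
```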

In general, there's no reason we can't have POST or GET requests that display the response in a new element without reloading the entire page. Semantically, it's not very different from target="_blank" or whatever you use to load something in a new tab or window, except this time the target is an element.
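Hypothetically, that could look like the markup below; a selector-valued target is invented here and is not valid HTML today (libraries such as htmx retrofit roughly this pattern with scripts):

```html
<!-- Hypothetical: target names an element instead of a browsing
     context; the response body replaces the contents of #preview -->
<a href="/comments/42" target="#preview">show comments</a>
<section id="preview"></section>
```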

(At this point I'd also note that frames exist, and yes, they suck; but hilariously, a lot of the new web does exactly the kind of stateful, non-linkable things framesets were derided for, only worse: you actually could right-click a frame and link directly to it, but you can't right-click and link the arbitrary DOM that was cooked up by client-side JavaScript.)

Going with the tiled-image-element theme, there's no reason we can't have more elements that instruct the browser how to load more data on demand. These same elements could let the user agent decide whether to paginate or scroll infinitely, and how many items to display per page.

(There's no reason we can't load images progressively and on demand. Progressive JPEG already exists, but for some reason devs still insist on giving me a blur that won't load further unless I enable scripts.)
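Part of this is, in fact, already declarative in standard HTML (the file names below are placeholders):

```html
<!-- Native lazy loading: the browser defers the fetch until the
     image nears the viewport; no IntersectionObserver script needed -->
<img src="hero.jpg" loading="lazy" width="1200" height="800" alt="Hero">

<!-- srcset/sizes let the browser pick a resolution for the
     viewport and pixel density, again with no script -->
<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
     sizes="(max-width: 600px) 400px, 800px" alt="Hero">
```

A progressively encoded JPEG referenced this way renders coarse-to-fine as it downloads, with no script involved.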

The way we currently do things really sucks for the user (they have very little control over how the script behaves; the user agent is degraded to a dumb client with little meaningful configurability), and it sucks for developers who would rather just focus on the content and let the browser provide whatever UX best fits the user and their platform.

Web devs are in a hurry to paper over the deficiencies of browsers, but in doing so (and not fixing the browsers), we end up with something worse, and every goddamn website becomes a complex application. We're stuck in a worst-of-both-worlds state, where the browser runs applications that lack the power of desktop applications yet are invasive and heavyweight (they don't run on grandma's old potato), complex and expensive to develop and maintain (every website ships complex UI logic that should be part of the browser instead), increasingly less accessible, less reliable, less linkable and crawlable, less secure... it's all I never wanted.


I'm not sure whether you care, but the page you linked about form validation contains JS for some of the more advanced features someone might want to implement. Even then, I'd say it's wrong to mark input as "invalid" before I've started editing the field (unless, of course, I've already tried to submit), which is apparently what this standard does. I'm already annoyed by forms that mark a required field as invalid the moment I focus on it; it would be much more annoying if several fields further down were also marked invalid.

Your reaction will probably be that I'm missing the point, that if we went with a JS-less world there would be solutions for this. But I strongly suspect that the solution would be to not use HTML and instead use some other technology that was capable of general computation on the client.


I mean, my premise is that we could have (and IMHO should have) categorically rejected "general computation on the client." In that scenario, the solution could look something like HTML5 form validation.

Nitpicking the details of the current implementation is indeed beside the point. Ideally, the spec is loose enough to give user agents and users the freedom to configure the behavior to their liking (and if someone can make the case that a particular behavior must be followed in some situations, an optional attribute is added to force it).

In general, I'm very tired of the status quo, in which every site developer is responsible for providing good UX and people nag at them, when their preferences could be accommodated by the browser itself. As long as the behavior stems from JavaScript, there's very little a browser can do to accommodate user preferences without breaking the web at large. You know, maybe I don't like form validation the way you'd implement it in JS.

People are so invested in the status quo that some of them even get angry when you suggest, e.g., that they use the browser's reader mode (instead of nagging at the site's author) to make a site readable for themselves. Bikeshedding about colours and fonts on front-page HN postings happens all the time... of course, reader mode is a hack that fails very often, so disagreeing with that suggestion is somewhat justified. But really, we could've built the web around the user agent instead of vice versa, and then your web browser would be your reader mode by default. You could blame your browser vendor, or yourself, first of all if the colors and fonts (or the form validator's behavior before you've entered anything) don't please you.



