
Very impressive. I wish extreme performance goals and requirements would become a new trend. I think we have come to accept a certain level of sluggishness in web apps. I hate it.

I wrote a tire search app a few years back and made it work extremely fast given the task at hand. But I did not go to the level that this guy did. http://tiredb.com



Now that we have blisteringly fast computers, it's worth it to browse old websites and see what "snappy" looks like.

http://info.cern.ch/hypertext/WWW/TheProject.html

If we could cram more modern functionality into something, say, two or three times slower than the above, I think the web would be a better place. Instead, the web is a couple of orders of magnitude slower.


Yes. In some ways I think we're still at a very primitive stage of web development. Either you do it by hand, tweaking each individual parameter like the old demoscene and making it fast and amazingly small, or else you write huge, chunky, slow web apps -- or, more usually, something in the middle.

I feel like the big thing I'm missing is smart compilers that can take web app concepts and turn them into extremely optimised 'raw' HTML/CSS/JS/SQL/backend. All of the current frameworks still use frequently bloated or inelegant hand-written CSS & HTML, and still require thinking manually about how and when to do AJAX so that it's least offensive to the user. Maybe something like yesod ( http://www.yesodweb.com/ ) is heading in the right direction. http://pyjs.org/ has some nice ideas too... But I'm thinking of something bigger than the individual technologies like CoffeeScript or LESS... Something that doesn't 'compile to JS' or 'compile to CSS', but 'compile to stack'. I dunno. Maybe I'm just rambling.


the "sufficiently smart compiler" is kind of like "world peace"; something to work towards, but i doubt we'll have it this lifetime.

http://c2.com/cgi/wiki?SufficientlySmartCompiler


"Sufficiently Smart Compiler", like most AI, is a concept with constantly shifting goal-posts. As soon as compilers can do something, we no longer consider that thing "smart." Consider variable lifetime analysis, or stream fusion -- a decade ago, these would be considered "sufficiently smart compiler" features. Today, they're just things we expect (of actually-decent compilers), and "sufficiently smart" means something even cleverer.


And, given those optimizations, the programmers get sufficiently dumber to compensate, resulting in a constant or decreasing level of performance.

That's gotta be a law codified somewhere, right?


There are examples of advanced functionality performing well enough. Google Docs is quite fast, especially for what it is.

On the other hand, there are sites which are conceptually much simpler but incredibly sluggish. Twitter is a particularly bad offender after you've scrolled down a few pages. Or any other site that uses a ton of Ajax with little regard for the consequences.


I absolutely love TBL's initial documents. They're pure semantic markup, which means you can apply a minimal amount of CSS to have them appear in a pleasant-to-read format. Let's see if I can find that pastebin... Here: http://pastebin.com/7sGiHBwF

But, yeah. If webpages would just revert to what TBL had created (yes, I'll allow for images and minimal other frippery) things would be so much more manageable.


> I wish extreme performance goals and requirements would become a new trend.

Not just performance, but efficiency - both speed and size. Sadly it seems that most of the time this point is brought up, it gets dismissed as "premature optimisation". Instead we're taught in CS to pile abstraction upon abstraction even when they're not really needed, to create overly complex systems just to perform simple tasks, to not care much about efficiency "because hardware is always getting better". I've never agreed with that sort of thinking.

I think it creates a skewed perception of what can be accomplished with current hardware, since it makes optimisation an "only if it's not fast/small enough/we can't afford new hardware" goal; it won't be part of the mindset when designing, nor when writing the bulk of the code. The demoscene challenges this type of thought; it shows that if you design with specific size/speed goals in mind, you can achieve what others would have thought to be impossible. I think that's a real eye-opener; by pushing the limits, it's basically saying just how extremely inefficient most software is.


> Instead we're taught in CS to pile abstraction upon abstraction even when they're not really needed, to create overly complex systems just to perform simple tasks, to not care much about efficiency "because hardware is always getting better". I've never agreed with that sort of thinking.

Right, exactly. It's obvious, too, that software has scaled faster than hardware, in the sense that an equivalent task, like, say, booting to a usable state, takes orders of magnitude longer today than it used to, despite hardware that's also orders of magnitude faster.

So when I see a demo of ported software doing something computing could already do back in the 90s (just slowly), I'm really only impressed by the massive towers of abstraction we're building on these days; what we're actually able to do is not all that much better. To think that I'm sitting on a machine capable of billions of instructions per second, and I'm watching it perform like a computer doing millions, is frankly depressing.

All of this is really to make programmers more efficient, because programmer time is expensive (and getting stuff out the door quickly is important), but the amount of lost time (and money) on the users' end, waiting for these monstrosities of abstraction to compute something, must far, far exceed those costs.

I'm actually of the opinion that developers should work on or target much lower end machines to force them to think of speed and memory optimizations. The users will thank them and the products will simply "be better" and continue to get better as machines get better automatically.


> All of this is really to make the programmers more efficient, because programmer time is expensive (and getting stuff out the door quicker is important), but the amount of lost time (and money) on the user's end, waiting for these monstrosities of abstraction to compute something must far far exceed those costs.

I believe that the amount of time spent optimising software should be proportional to how long it will be used for and how many users it has or will have. It makes little sense to spend an hour to take 10 minutes off the execution time of a quick-and-dirty script that will only be run once or twice. It makes a lot of sense to spend an hour, or even a day or a week, to take 1 second off the execution time of software that hundreds of thousands or millions of users run constantly. At some point the overhead of optimisation is less than the time (or memory?) saved by everyone, so the "programmer time is expensive" line of thinking is really a form of selfishness; it's interesting that free/open-source software hasn't evolved differently, since it operates under a different set of assumptions.
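
To put rough numbers on that break-even point (the figures here are entirely hypothetical, just to make the arithmetic concrete):

    // Hypothetical figures, purely to illustrate the break-even point.
    const users = 1_000_000;  // active users
    const runsPerDay = 5;     // how often each user hits the slow path daily
    const secondsSaved = 1;   // per-run saving from the optimisation

    // Aggregate user time saved per day, in hours:
    const hoursSavedPerDay = (users * runsPerDay * secondsSaved) / 3600;
    console.log(hoursSavedPerDay); // ~1389 hours of user time, every day

At that rate, even a full week of programmer time (~40 hours) is repaid within the first hour of the optimisation being live.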


My desktop cold boots in well under 30 seconds, which is faster than, say, the Apple Lisa, which took over a minute to boot and showed a blank screen for a good 30 seconds. You can find videos of various boot sequences on YouTube. The worst case I can recall was a Windows 95 machine which took 15 minutes to boot.


I think my new desktop does about the same, thanks to the magic of SSDs. But a minute ain't bad for a boot. I remember some old servers I used to work on that would take 30 or 40 minutes to boot, most of which was spent waiting while the SCSI controllers ran some kind of self-check.

Before I replaced my old desktop, I think my boot times were something on the order of 10 minutes.

(And I don't count Windows claiming you can start working while it loads a bunch of stuff in the background, making the system unusably slow.)

https://www.youtube.com/watch?v=YQuODFwfZYw

http://www.therestartpage.com/


> I wish extreme performance goals and requirements would become a new trend.

Well, there will always be the demoscene ( http://www.youtube.com/watch?v=5lbAMLrl3xI ), which I've always found remarkable.


The Windows demos almost always use a lot of the system libraries for the bulk of their work, which hasn't impressed me quite as much as what you can do in 4k on bare DOS --- where the code is directly manipulating the hardware. No libraries, no GPU drivers:

http://www.youtube.com/watch?v=dGQEeArYDS8


I agree. I've been using Ghostery recently to see the external libraries loaded on various sites, and it's ridiculous. Some sites load more than 50 external scripts.
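
If you want a rough version of the same count without an extension, here's a quick sketch (mine, not Ghostery's method) you can paste into the browser console. It only counts <script> tags, not the resources those scripts pull in themselves, so Ghostery's numbers will run higher:

    // Count <script src="..."> tags served from a different origin.
    const scripts = Array.from(document.getElementsByTagName('script'));
    const external = scripts.filter(
      s => s.src && new URL(s.src).origin !== location.origin);
    console.log(external.length, 'external scripts');
    external.forEach(s => console.log(s.src));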


You're not kidding! The site is amazingly fast.



