
Good questions. I was looking at H2O, and they solve this with what they call CASPer (cache-aware server push), via the http2-casper directive: https://h2o.examp1e.net/configure/http2_directives.html#http...

> When enabled, H2O maintains a fingerprint of the web browser cache, and cancels server-push suggested by the handlers if the client is known to be in possession of the content. The fingerprint is stored in a cookie...
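
Roughly, the idea looks something like the following sketch with Node's http2 module. This is not H2O's actual implementation (which stores a compact fingerprint rather than a readable cookie value); the file names, cookie format, and certificate paths are made up for illustration.

    import http2 from "node:http2";
    import fs from "node:fs";

    const server = http2.createSecureServer({
      key: fs.readFileSync("key.pem"),    // assumed local TLS key/cert
      cert: fs.readFileSync("cert.pem"),
    });

    server.on("stream", (stream, headers) => {
      if (headers[":path"] !== "/") {
        stream.respond({ ":status": 404 }, { endStream: true });
        return;
      }

      // Crude stand-in for H2O's cache fingerprint: a cookie listing
      // assets we believe this client already has cached.
      const cookie = String(headers["cookie"] ?? "");
      const alreadyPushed = cookie.includes("pushed=app.css");

      if (!alreadyPushed && stream.pushAllowed) {
        stream.pushStream({ ":path": "/app.css" }, (err, pushStream) => {
          if (err) return;                 // client refused the push
          pushStream.on("error", () => {});
          pushStream.respondWithFile("app.css", { "content-type": "text/css" });
        });
      }

      stream.respond({
        ":status": 200,
        "content-type": "text/html",
        // remember what was pushed so the next request can skip it
        "set-cookie": "pushed=app.css; Max-Age=86400; Path=/",
      });
      stream.end('<link rel="stylesheet" href="/app.css"><h1>hello</h1>');
    });

    server.listen(8443);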

That sounds like an interesting solution.

Actually, it looks like this tool, along with nginx pagespeed, would work well and require no changes to the build tools.

Nginx pagespeed supports parsing and optimizing the HTML (and linked resources), including inserting preload Link headers, which will now get pushed. It requires no change to the build process.
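
For example, the Link header pagespeed inserts is just a standard preload hint, which an HTTP/2-terminating front end can turn into a push (in nginx that would be something along the lines of the http2_push_preload directive, depending on version and build). A minimal Node sketch of the origin side, with made-up file names:

    import http from "node:http";

    http
      .createServer((req, res) => {
        res.writeHead(200, {
          "content-type": "text/html",
          // pagespeed (or your app) emits this; the HTTP/2 front end can
          // convert it into a server push of /app.js
          link: "</app.js>; rel=preload; as=script",
        });
        res.end('<script src="/app.js"></script><h1>hello</h1>');
      })
      .listen(8080);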

> Won't a gzipped bundle typically produce a smaller payload?

With HTTP/2 it's better to unbundle these days, since the main reason for bundling was to avoid opening new TCP connections, which need ramp-up time before they're useful. With HTTP/2 the same TCP connection is reused and multiplexed, so the main reason for bundling goes away. As for producing smaller payloads, a bundle might be somewhat smaller, but not so much smaller that it's worth it, since everything travels over the same connection anyway. Unbundled, the files arrive as separate streams and the client can parse and process them as they come in, without having to download the whole bundle and process it at once, which makes the site a bit more responsive.
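
For example, with native ES modules each file is its own request, but over HTTP/2 they all ride the same multiplexed connection, so there's no per-file connection cost (module names below are made up):

    // main.ts: each import is fetched as a separate stream on the one
    // HTTP/2 connection and can be parsed as soon as it arrives, instead
    // of waiting for one big bundle to finish downloading.
    import { renderHeader } from "./header.js";
    import { initRouter } from "./router.js";

    renderHeader();
    initRouter();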



That is the theory. The real world, however, can be very messy. More than two years ago Khan Academy measured how this would work for them - http://engineering.khanacademy.org/posts/js-packaging-http2.... - here was their summary:

> The reality is not so rosy. Due to degraded compression performance, the size of the data download with individual source files ends up being higher than with packages, despite having achieved 'no wasted bytes'. Likewise, the promised download efficiency has yet to show up in the wild, at least for us. It seems that, for the moment at least, JavaScript packaging is here to stay.

My personal experience has been that you have to confirm the theory of how HTTP/2-related changes will perform with actual measurements. I've seen some pages get faster under HTTP/2 by no longer bundling, and in other cases seen them become slower. So far the only way to know for sure is to measure the result.


Well, taking a site that was highly optimized for bundling and serving it unbundled over HTTP/2 is going to perform poorly.

You need to engineer it so you're downloading and using the JavaScript pieces as you go, streaming your code and running it as it arrives. The more important stuff goes first and the stuff that isn't needed right away comes later.

That way you get to document-interactive a lot quicker. The user can start using the page.

With that in mind, HTTP/2 beats the pants off bundled JS.
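
Something like this, where the critical module ships first and the heavier pieces are pulled in after the page is usable (module names are made up for illustration):

    // critical.ts is small and loads first; everything else is deferred.
    import { renderAboveTheFold } from "./critical.js";

    renderAboveTheFold(); // the user can start interacting here

    // pull in the less important pieces once the browser is idle
    const whenIdle = (cb: () => void) =>
      "requestIdleCallback" in window
        ? (window as any).requestIdleCallback(cb)
        : setTimeout(cb, 200);

    whenIdle(async () => {
      const { initComments } = await import("./comments.js");
      const { initAnalytics } = await import("./analytics.js");
      initComments();
      initAnalytics();
    });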


> Due to degraded compression performance

Are there any efforts underway to fix that? Seems like you could solve that problem by sharing the compression dictionary between documents transmitted over the same connection.


There is one (SDCH), but it's basically abandoned and close to being unshipped in Chrome (the only browser that supported it): https://groups.google.com/a/chromium.org/forum/#!topic/blink...

I looked into using SDCH for topojson, since it seemed like a match made in heaven (a lot of repeated bytes across many files that are usually static), but since it never took off, it is being removed. The only major site that used it is LinkedIn.

EDIT: the continuation of this is basically what brotli is: gather a dictionary of the most common byte sequences on the internet, pre-ship it in every client, and use it as the shared dictionary. But it will never be as good for specific use cases.
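
For a rough feel of what a shared dictionary buys you, here's zlib's preset-dictionary option (plain deflate, not SDCH or brotli, and the dictionary contents are invented); both sides have to agree on the dictionary out of band:

    import zlib from "node:zlib";

    // Imagine this holds byte sequences common to all of your JS files.
    const dictionary = Buffer.from('function module.exports require("react")');

    const file = Buffer.from('const React = require("react"); function App() {}');

    const plain = zlib.deflateSync(file);
    const withDict = zlib.deflateSync(file, { dictionary });

    console.log("no dictionary:  ", plain.length, "bytes");
    console.log("with dictionary:", withDict.length, "bytes");

    // the receiver needs the same dictionary to decompress
    const roundTrip = zlib.inflateSync(withDict, { dictionary });
    console.log(roundTrip.toString() === file.toString()); // true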



