Isn't the usage of HTTP/1.0, like, zero? Does anyone (other than maybe some minimal embedded applications that probably won't upgrade to HTTP/2 anyway) actually use HTTP/1.0 instead of HTTP/1.1?
The usage share there referred to stats from Mozilla for Firefox 36, where HTTP/1.0 is seen in 1% of all HTTP responses, compared to 10% for HTTP/2. / Daniel - author of those slides
If the GP post is correct, ironically the HTTP/1.0 traffic that's really happening might be services/appliances and (lib)curl (@bagder is the principal dev of curl, for those who don't know), not Firefox or any consumer browser.
My impression (from seeing IPv6 support projects being regularly deprioritized at a previous employer) is that it's more of a market/deployment problem than a technological one. Layer 3 is a uniquely annoying place in the current networking stack, in that changes to it are useless until everything on the path between you and the other hosts you talk to speaks IPv6.
With upper-layer stuff like HTTP/2, you can get some return on your investment as long as two hosts that want to talk to each other, however distantly separated, speak the new protocol/features. With lower-layer stuff like 802.11ac or gigabit Ethernet, you can get most of the benefits just by upgrading the client hardware and infrastructure in a local setting. If you want to talk over IPv6, though, you need to upgrade clients, local infrastructure (e.g. in-home routers, corporate IT equipment), carrier routers, datacenter routers, and servers. In many cases, each of those things is controlled by a separate organization, and none of those organizations gets any benefit out of implementing IPv6 until everyone else along the chain does so too.
For example - let's say you're providing some kind of web app or service from your own servers. There's non-zero effort involved in setting up an IPv4/6 dual stack throughout your infrastructure, so you first check whether you can even get IPv6 connectivity. Using EC2? Nope - no IPv6. Using a leased server in a colo? Pretty good chance they won't have it available as an option either. So at that point, why even work on this feature that you're not going to get to use, and that, since it's not exercised by real traffic, will probably be full of bugs?
And we can't simply dump the whole v4 range into a sub-section of v6 and call it a night either.
While it may work for the v6 side, the v4 side will have a hell of a time talking to anything on the v6 side without some kind of network-level NAT going on...
This is actually quite widely used (see [1]). The big limitation is that all connections have to be initiated from the IPv6 side, so it's usually used to allow v6-only client networks to access v4 servers, but even that is a big win for bootstrapping - for example, it's let T-Mobile configure all its new Android devices for IPv6-only behind a variant of NAT64.
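If anyone wants to see what that looks like concretely, here's a minimal Python sketch (standard library only) of how a DNS64/NAT64 setup embeds a v4 address into the RFC 6052 well-known prefix 64:ff9b::/96; the IPv4 address below is just a documentation example:

    import ipaddress

    # NAT64 well-known prefix (RFC 6052): the IPv4 address of the real server
    # is carried in the low 32 bits of a synthesized IPv6 address.
    prefix = ipaddress.IPv6Address("64:ff9b::")
    v4_server = ipaddress.IPv4Address("192.0.2.1")   # documentation address, stand-in for a real v4 host

    synthesized = ipaddress.IPv6Address(int(prefix) | int(v4_server))
    print(synthesized)   # 64:ff9b::c000:201 -- what a DNS64 resolver would hand to a v6-only client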
Another cool trick is IPv4-mapped IPv6 addresses [2], which lets userspace software act like the IPv4 internet is just a subnet of the IPv6 internet, and speak IPv6 only, while the network stack of the host system deals with the annoyances of dual-stack. (In fact, I think on Linux systems this will even let you pretend that an IPv4-only configuration is actually IPv6 ;-) ).
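And a minimal sketch of the mapped-address trick in Python, assuming the OS allows dual-stack sockets; the port number and the peer address in the comment are made up:

    import socket

    # Dual-stack listener: one AF_INET6 socket that also accepts IPv4 clients.
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # explicitly allow IPv4-mapped peers
    srv.bind(("::", 8080))                                      # arbitrary example port
    srv.listen()

    conn, addr = srv.accept()
    print(addr[0])   # an IPv4 client shows up as e.g. "::ffff:203.0.113.7"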
Well, kind of... It's less about not being able to grok IPv6 and more about these routers having limited memory available. IPv6 addresses take up 4x as much memory as IPv4 addresses, and that adds up pretty quickly in a routing table. As routers are typically integrated devices, you can't just go "add more RAM", so sticking with IPv4 effectively quadruples your capacity.
Is that mixing up two different classes of devices?
Home embedded routers run into problems due to the overhead of state tracking and buffers for all the simultaneous connections. If they support 10k simultaneous connections, and they track two IP addresses for each, the additional overhead of 2 x 12 bytes per IPv6 connection is 240 kB of memory. Buffers are going to be much, much more than that, and buffer requirements are about the same for IPv4 and IPv6.
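Back-of-the-envelope version of that estimate, using the same assumptions (10k connections, two tracked addresses each, 12 extra bytes per IPv6 address):

    connections = 10_000       # simultaneous connections tracked
    addrs_per_conn = 2         # source + destination
    extra_bytes = 16 - 4       # IPv6 address size minus IPv4 address size
    print(connections * addrs_per_conn * extra_bytes)   # 240000 bytes, i.e. ~240 kB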
Routers that have full BGP routing tables tend to be serious routers with TCAM. That's expensive, and it does put pressure on BGP routing table sizes, to accommodate older and cheaper [real] routers. However, ignoring IPv6 is simply not an option, so there's no point in dreaming about how staying with IPv4 forever would allow quadruple the routing table capacity for a given amount of TCAM.
I was actually referring more to the core routers with TCAM than home routers. And I agree, ignoring IPv6 isn't an option, but it is a viable option to encourage workarounds. Which is exactly what's happened: most ISPs that support IPv6 do 6to4 tunneling at the edge.
It can be even worse than that. A significant number of routers forward IPv4 packets in hardware but fall back to a software implementation for IPv6. So it might happen that the real bottleneck is not memory but throughput.
As for sticking with IPv4 for increased capacity, it depends. Private networks might get away with it for their internal traffic, but no major ISP nowadays will deliberately choose not to route IPv6 for their customers.
The IPv4 routing table needs roughly 3x as many routes as a non-fragmented address space like IPv6 would, so it's a much smaller difference than it sounds like.
During the initial few years, it was also about hardware limitations. The Cisco kit we had in the early 2000s featured hardware accelerated IPv4 handling but fell back to the host CPU for IPv6. Even a simple ping test would show the difference since the software path was seen as a fallback for anomalies and they'd provisioned the CPU accordingly.
It's more complicated than that; nobody with an IPv4 address had an incentive to switch, and the IPv6 committee addressed the wrong problems in trying to encourage adoption. As IPv4 addresses become more valuable in the future, there will be a cost incentive to switch.
For another take on the matter, see http://cr.yp.to/djbdns/ipv6mess.html; in particular, a good summary of one issue is under "Another way to understand the costs".
I don't know what will be said, but I've wondered why they went to 128-bit addresses when 64 should have been "enough for anybody." Note that 64 bits isn't double 32, but rather about 4 billion times more, if my math isn't too far off.
Imagine having to type those addresses out by hand. Though skipping zeros is allowed, hex only helps a little. Not looking forward to that.
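A quick sanity check of the "like 4 billion times more" figure:

    print(2**64 // 2**32)    # 4294967296 -- a 64-bit space is ~4.3 billion times larger than 32-bit
    print(2**128 // 2**64)   # and 128-bit is another factor of ~1.8e19 on top of that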
Back when IPv4 was put in place, they had no idea the net would become so important. Now they know, and so going all the way to 128 helps futureproof.
Also, look at the (admittedly media-overhyped) Y2K thing. That was a case of "good enough" that ended up staying in production way beyond expectations.
Heck, there are companies out there selling devices that allow "old" industrial looms to load patterns from SD cards and such, when they were originally designed to take floppies. This is done via a device that has a floppy cable connector at one end, an SD slot at the other, and fits inside a 5.25" or 3.5" drive bay.
Computational stuff seems to stay in use way longer than anyone imagined back when every year or so seemed to produce whole new architectures.
Right, futureproofing. But this is not like when hard disks were 512 MB, then 2 GB, then 4, and kept hitting very shortsighted limits.
We're surviving now with only minor discomfort on a 32 bit address space, and 64 is billions of times more space.
128 is just showing off for no obvious reason that I can see, besides looking like GUIDs. The number approaches the number of atoms in the universe, I believe. I'm guessing a universe-wide addressing system can wait a few years, haha.
There are 64 bits for the network prefix, and then 64 bits for devices on the network.
So you'll have a network like 2001:f4dc:2110::/64, and then you can allocate ::1, ::2, ::3. Real hard to allocate by hand, right? Not really. Sure, billions of addresses go unused, but who cares? With the :: abbreviation it's fine - you just write 2001:f4dc:2110::2.
If you had to write 2001:f4dc:2110:0000:0000:0000:0000:0002 then I would sympathise. But you don't.
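Python's ipaddress module makes the abbreviation concrete (reusing the made-up prefix from above):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:f4dc:2110::2")
    print(addr.exploded)     # 2001:f4dc:2110:0000:0000:0000:0000:0002 -- the form nobody has to type
    print(addr.compressed)   # 2001:f4dc:2110::2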
The second 64 bits are reserved for future innovation. Ethernet nodes happen to populate it with tedious junk, but that forces ISPs to allocate a /64 per user as a bare minimum.
Over the coming years and decades, people will repurpose those bits to do interesting stuff at the network edge, without forklifting the entire Internet again.
While HTTP/2 will lead to some huge improvements in efficiency it's going to take some time for the web to collectively forget a decade's worth of kludges otherwise known as 'best practices.'
Domain sharding, image spriting, script concatenation, lots of unnecessary intermediate caching, etc. All of these were created with one goal: faster page load times (via connection thrashing) meant more Google PR juice. All of them have hidden costs that add to development complexity and unnecessarily increase load on the server side. IMHO, all should die a fiery death.
If we're really shooting for a faster and more efficient web experience, HTTP/2 solves the greatest constraint on the back end (i.e. multiplexing multiple requests over a single TCP connection).
What we need next is a change in priorities, defined by a new standard of 'best practices' such as:
1) Stop concatenating scripts
Assuming HTTP/2, with no one-request-per-asset constraint, what's the point? Sure, it may lead to a 10-20% improvement in compression overall, but only if the user navigates to every page on your site.
2) Stop minifying scripts
Controversial? Maybe, but why are we intentionally creating human-unreadable gibberish for a modest gain when intermediate gzip compression leads to much greater gains?
I can't count the number of posts I've read where a developer gets excited about shaving 40 KB off of their massive concatenated global.js soup (incl. jQuery, MooTools, Underscore, Angular, etc.) when they would have had much greater gains by optimizing image/media compression.
As a library developer, minifying sucks. For every version I release I have to produce another copy, run a second set of tests (to make sure minifying didn't break anything), and upload/host an additional file. In addition, any error traces I get from users of the minified version are essentially useless unless they take the time to download and test the non-minified version.
3) Quit loading common libraries locally
If I could teach every budding web developer one lesson it would be how a local cache works and how no variation of concatenation/compression will lead to a faster response than a warm cache.
Yes, loading 3rd-party code can potentially lead to an XSS/MITM attack (if you don't link via HTTPS). No, your local copy is not more robust than a globally distributed CDN. No, loading yet another unique copy of jQuery is not going to be faster than fetching the copy from a warm local cache.
The only justification for loading common libs locally is if the site operates on an air-gapped intranet.
Google indirectly encouraged most of these 'best practices' by giving PR juice to sites with faster page load times.
It would be really interesting to see if/how the search engines will adjust their algorithms as new 'best practices' are established. I know they're starting to incentivize sites that use HTTPS.
What I'd really like to see is sites being penalized for making an unnecessarily large number of external domain requests with the exception of a whitelist of common CDNs.
As for TCP/IP: I'd really like to hear a sound technical justification for why the TCP checksum calculation is coupled to the IP header.
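For context, the coupling is the TCP pseudo-header: the checksum is computed over the source and destination IP addresses in addition to the segment itself. A rough sketch of the RFC 793 calculation in Python (the helper name is mine, and the checksum field inside tcp_segment is assumed to be zeroed beforehand):

    import struct
    from socket import inet_aton, IPPROTO_TCP

    def tcp_checksum(src_ip, dst_ip, tcp_segment):
        # Pseudo-header: source IP, destination IP, zero byte, protocol, TCP length.
        # This is why a NAT box that rewrites the IP header must also fix up the TCP checksum.
        pseudo = inet_aton(src_ip) + inet_aton(dst_ip) + struct.pack("!BBH", 0, IPPROTO_TCP, len(tcp_segment))
        data = pseudo + tcp_segment
        if len(data) % 2:
            data += b"\x00"                       # pad to an even number of bytes
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total >> 16:                        # fold carries back into the low 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return (~total) & 0xFFFF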
> Controversial? Maybe, but why are we intentionally creating human-unreadable gibberish for a modest gain when intermediate gzip compression leads to much greater gains?
gzip compression is already widely supported; however, minification does still have savings (e.g. removing comments from code, which really have no purpose being transmitted in the first place).
What is actually counterproductive for compression is renaming all of your JavaScript variable names to single-character tokens. But that isn't something I've seen any JavaScript minifiers do.
I'd also challenge your point about "human-unreadable gibberish": once the data leaves the server it doesn't need to be human-readable. It only needs to be human-readable at the development end - which is why minification only happens on live content hosted in the production environment.
What I was implying was that minification only adds - at best - an additional 20% reduction in file size after gzip (which by itself is more like a 40-60% reduction). Assuming practices #1 and #3 are followed, scripts should remain relatively small and the added benefit of minification would be negligible.
Unless the end goal is obfuscation, I don't see the value of adding additional complexity to the development process for what amounts to a micro-optimization. Plus - from a user perspective - if a site's scripting fails, I'd like to see developers maintain the ability to view the source in production and determine why.
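If anyone wants to test that "20% after gzip" figure against their own bundle, here's a trivial check (the file names are placeholders for an unminified and a minified build):

    import gzip

    # Compare raw vs gzipped sizes of the unminified and minified builds.
    for path in ("app.js", "app.min.js"):        # placeholder paths
        data = open(path, "rb").read()
        print(path, len(data), "bytes raw,", len(gzip.compress(data)), "bytes gzipped")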
On an unrelated note, I'd also like to see ES/CSS updated with new features that render the most common transpiled languages obsolete. I think transpiling is an awesome approach for testing language improvements and possibly building DSLs, but supporting them in the long term (e.g. tooling, backward compatibility) doesn't make sense.
Speaking from the perspective of a library author/contributor. Developing, testing, and stabilizing a widely-used codebase over the long term is hard. Removing additional steps and reducing complexity makes life a bit easier.
20% isn't worthless if you're browsing on a mobile phone on a slow GPRS connection.
From an end-user perspective, I get sick of all the responsive sites that look pretty on all screen sizes but load like a sack of shit on anything slower than WiFi. Which is why the libraries I write auto-minimize live code (they also use smaller dependencies, but that can be mitigated by your public CDN point).
> Yes, loading 3rd-party code can potentially lead to an XSS/MITM attack (if you don't link via HTTPS). No, your local copy is not more robust than a globally distributed CDN. No, loading yet another unique copy of jQuery is not going to be faster than fetching the copy from a warm local cache.
Loading a local copy of jquery is probably not going to be more robust than a globally distributed CDN, but when it fails, it's probably going to be at the same time as the rest of your site, so it's not breaking anything that's not already broken. Who knows what's going to happen to the jQuery CDN in 10 years when jQuery is no longer in style, etc. What if they forget to renew the domain, or their domain registration is hijacked, etc.
I was not aware that my Firefox now uses HTTP/2 when speaking with Google etc. Kind of cool.
You can check it with the Network monitor (Ctrl-Shift-Q). Reload the page, click on a request, and look for "Version" beneath "Status code".
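If you'd rather check from a script than from the devtools, here's a small sketch that assumes the third-party httpx package is installed with its HTTP/2 extra (pip install 'httpx[http2]'):

    import httpx

    # Negotiates HTTP/2 via ALPN when the server supports it.
    with httpx.Client(http2=True) as client:
        resp = client.get("https://www.google.com/")
        print(resp.http_version)   # "HTTP/2" if the upgrade happened, otherwise "HTTP/1.1"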