Browsers made a fundamental change a while back to stop sharing caches between origins, because a shared cache became a side channel for detecting whether a visitor had been to a given site before.
So now if two different websites embed the same remote font then visitors will have to download it separately for both sites.
Surely one of the most popular browsers in the world could ship with some commonly requested fonts; all you'd learn is that the visitor was using Chrome, which the user agent told you anyway.
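To put the cache-key change concretely, here's a toy model in Python (not Chrome's actual implementation; the font URL and site names are made up): the cache used to be keyed by resource URL alone, so a second site embedding the same font got a cache hit, while with partitioning the key also includes the top-level site, so each site downloads its own copy.

```python
# Toy model of HTTP cache partitioning (not Chrome's real implementation).
FONT_URL = "https://fonts.example/roboto.woff2"  # placeholder URL

partitioned_cache: dict[tuple[str, str], bytes] = {}

def fetch_font(top_level_site: str, url: str) -> str:
    key = (top_level_site, url)  # old behaviour: the key was just `url`
    if key in partitioned_cache:
        return "cache hit"
    partitioned_cache[key] = b"...font bytes..."  # pretend we downloaded it
    return "downloaded"

print(fetch_font("https://site-a.example", FONT_URL))  # downloaded
print(fetch_font("https://site-b.example", FONT_URL))  # downloaded again under partitioning
print(fetch_font("https://site-a.example", FONT_URL))  # cache hit (same top-level site)
```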
Per-minute isn't an unreasonable proxy for it, though:
- logs are generally proportional to the length of the job
- other artifacts also usually correlate to an extent
- some of GitHub's cost is incurred for the entire time the job is active, e.g. open connections for log streaming
- it's largely correlated to the value the end-user gets out of it
- it's easy to bill for because they can already do billing that way on the hosted runners
- the costs are easy to predict for end-users
It's not like the rest of the GitHub platform is a per-user cost to run, but that's how GitHub charge for most features.
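As a toy illustration of the "easy to predict" point above (the rate below is a made-up placeholder, not GitHub's actual pricing), per-minute billing means cost scales linearly with runtime:

```python
# Tiny illustration of why per-minute billing is predictable for the end-user.
RATE_PER_MINUTE_USD = 0.01  # assumed example rate, not a real GitHub price

def job_cost(minutes: int) -> float:
    """Billed cost scales linearly with how long the job ran."""
    return minutes * RATE_PER_MINUTE_USD

print(job_cost(12))   # a 12-minute CI run
print(job_cost(120))  # a run 10x as long costs exactly 10x as much
```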
There are the Parliament Acts, which restrict how long a delay the HoL can inflict (down to just one month for money bills). They also cannot amend money bills.
There is, however, the Salisbury convention, under which the HoL shouldn't block legislation that was a manifesto commitment of the governing party. That doesn't mean they can't amend it at all, but they can't substantively change it. It's also just a convention, not a rule.
I believe the removal of the "experimental" nomenclature is just an indication that Rust is "here to stay" in the kernel (which essentially means that developers can have confidence investing in writing Rust-based drivers).
The overall rules haven't changed.
Strictly speaking they've always been obligated not to break the Rust code, but IIRC the R4L developers have agreed to fix it on some subsystems' behalf, so Rust can be broken in those individual subsystem trees. But I think it's been the case all along that you can't send it to Linus if it breaks the Rust build, and you probably shouldn't send it to linux-next either.
LLVM platform support is neither sufficient (rustc still needs to be taught about the platform) nor technically necessary (you could write a rustc backend that targets a platform LLVM doesn't, e.g. by modifying Cranelift, or via the GCC backend once it reaches maturity).
Yes, that's the entrance to the rabbit hole, and the reader is advised to dig their own tunnel.
I remember following the tension for a bit. Yes, there were other disagreements about how things are done, but after reading it, I remember seeing "code quality" as the underlying issue.
In high-stakes software development environments egos generally run high, and when people clash and nobody backs down, sparks fly. If that warning sign is ignored, something has to give.
If I'm mistaken, I can enjoy a good explanation and gladly stand corrected.
This is what happened here. It's probably the second or third time I've witnessed this over 20+ years. The most famous one was over CPU schedulers, namely BFS, again IIRC.
What Let's Encrypt are doing follows from the decision CAs and browser makers made that certificate lifetimes need to be reduced (browsers have been reducing the maximum validity of the certs they will trust).
As for why: it's safer. It shrinks the window in which a leaked private key could be used in a MITM attack, it encourages automation of cert renewal (which is itself more secure), and it makes responding to incidents at certificate authorities more practical.
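As a minimal sketch of the automation angle (example.com is a placeholder host, and this is not Let's Encrypt's tooling), a scheduled check like this can flag renewal well before expiry, which is exactly the workflow short lifetimes push you toward - in practice an ACME client such as certbot handles it for you:

```python
import socket
import ssl
import time

# Sketch only: check how long a site's certificate has left and flag when it's
# time to renew. Short lifetimes make running something like this on a schedule
# (or just using an ACME client) far more practical than renewing by hand.
def seconds_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()

if __name__ == "__main__":
    if seconds_until_expiry("example.com") < 30 * 86400:  # placeholder host
        print("under 30 days left -- kick off the ACME client (e.g. `certbot renew`)")
```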
For a start-up it's much easier to just pay the Cloud tax than it is to hire people with the appropriate skill sets to manage hardware or to front the cost.
Larger companies, on the other hand? Yeah, I don't see a reason not to self-host.
The problem is that Cloudflare do incremental rollouts and loads of testing for _code_, but they don't do the same thing for configuration - they push config changes out globally because they want rapid response.
It's still a bit silly though; their claimed reasoning probably doesn't stack up for most of their config changes - I don't think it's all that likely that a 0.1 -> 1 -> 10 -> 100 rollout over ten minutes would be catastrophically bad for them for _most_ changes.
And to their credit, it does seem they want to change that.
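For what it's worth, a staged config rollout along those lines could look something like this sketch (hypothetical, not Cloudflare's actual tooling): apply the change to a tiny slice of the fleet, watch error rates, and only widen the blast radius if things stay healthy.

```python
import time

# Hypothetical staged rollout, roughly the 0.1 -> 1 -> 10 -> 100 idea above.
STAGES_PERCENT = [0.1, 1, 10, 100]
SOAK_SECONDS = 150  # ~10 minutes total across the four stages

def error_rate(percent: float) -> float:
    """Placeholder: in reality this would query monitoring for the canary slice."""
    return 0.0

def staged_rollout(apply_config) -> bool:
    for percent in STAGES_PERCENT:
        apply_config(percent)           # push the change to this % of machines
        time.sleep(SOAK_SECONDS)        # let metrics accumulate
        if error_rate(percent) > 0.01:  # arbitrary abort threshold
            apply_config(0)             # roll the change back everywhere
            return False
    return True

# usage: staged_rollout(lambda pct: print(f"config now live on {pct}% of fleet"))
```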
Yeah, to me it doesn't make any sense - configuration changes are just as likely to break stuff (as they've discovered the hard way), and both of these issues could have been found in a testing environment before being deployed to production.
https://developer.chrome.com/blog/http-cache-partitioning