You're making a mistake assuming that the push for an HTTPS-only Web is about protecting the content of your site.
The problem is that the mere existence of HTTP is a vulnerability. Users following any insecure link to anywhere allow MITM attackers to inject arbitrary content and redirect to any URL.
The injected content can be a targeted attack against vulnerabilities in the browser. It can turn browsers into a botnet, like the Great Cannon. It can be redirects, popunders, or other sneaky tab manipulation that opens phishing pages for other domains (unrelated to yours) that do have important content.
Your server probably won't even be contacted during such an attack. Insecure URLs to your site are the vulnerability. Don't spread URLs that disable network-level security.
The numbers are sent in this peculiar format because that's how they are stored in the certificates (DER encoding in X.509 uses big-endian binary), and that's the number format that the OpenSSL API uses too.
It looks silly for a small value like 65537, but the protocol also needs to handle numbers that are thousands of bits long. It makes sense to consistently use the same format for all numbers instead of special-casing small values.
For very big numbers (that could appear in these fields), generating and parsing a base 10 decimal representation is way more cumbersome than using their binary representation.
The DER encoding used in TLS certificates uses the big-endian binary format. The OpenSSL API wants big-endian binary too.
The format used by this protocol is a simple one.
It's almost exactly the format that is needed to use these numbers, except JSON can't store binary data directly. Converting binary to base 64 is a simple operation (just bit twiddling, no division), and it's easier than converting arbitrarily large numbers between base 2 and base 10. The 17-bit value happens to be an easy one, but other values may need thousands of bits.
It would be silly for the sender and recipient to need to use a BigNum library when the sender has the bytes and the recipient wants the bytes, and neither has use for a decimal number.
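To make that concrete, here's a minimal sketch in Rust (a hand-rolled base64url encoder, hypothetical code purely for illustration) showing how 65537 ends up as the familiar "AQAB" with nothing but bit twiddling:

```rust
// Minimal sketch: how 65537 becomes "AQAB" when sent as base64url
// over its big-endian bytes. No BigNum library, no division per digit.

const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

/// Base64url-encode without padding: pure bit twiddling.
fn base64url(bytes: &[u8]) -> String {
    let mut out = String::new();
    for chunk in bytes.chunks(3) {
        // Pack up to 3 input bytes into one 24-bit group.
        let group = (u32::from(chunk[0]) << 16)
            | (u32::from(*chunk.get(1).unwrap_or(&0)) << 8)
            | u32::from(*chunk.get(2).unwrap_or(&0));
        // One 6-bit symbol per 6 bits of actual input, rounded up.
        let symbols = (chunk.len() * 8 + 5) / 6;
        for i in 0..symbols {
            out.push(ALPHABET[((group >> (18 - 6 * i)) & 0x3F) as usize] as char);
        }
    }
    out
}

fn main() {
    // 65537 = 0x010001: big-endian bytes with leading zeros stripped.
    let bytes: Vec<u8> = 65537u32
        .to_be_bytes()
        .iter()
        .copied()
        .skip_while(|&b| b == 0)
        .collect();
    assert_eq!(bytes, [0x01, 0x00, 0x01]);
    println!("{}", base64url(&bytes)); // prints "AQAB"
}
```

The same code path handles a 2048-bit modulus and a 17-bit exponent identically, which is the whole point of not special-casing small values.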
OpenStreetMap often has building outlines, but not building height. This would be a nice way to augment that data for visualisations (remember: OSM doesn't take auto-generated bot updates, so don't submit that to the primary source).
It varies. New public APIs or language features may take a long time, but changes to internals and missed optimizations can be fixed in days or weeks, in both LLVM and Rust.
A couple of things that are commonly misunderstood/unappreciated about this:
• Uninitialized bytes are not just some garbage random values, they're a safety risk. Heartbleed merely exposed uninitialized buffers. Uninit buffers can contain secrets, keys, and pointers that help defeat ASLR and other mitigations. As usual, Rust sets the bar higher than "just be careful not to have this bug", and therefore the safe Rust subset requires making uninit impossible to read.
• Rust-the-language can already use uninitialized buffers efficiently. The main issue here is that the Rust standard library doesn't have APIs for I/O using custom uninitialized buffers (only for the built-in Vec, in a limited way). These are just musings on how to design APIs for custom buffers to make them the most useful, ergonomic, and interoperable. It's a debate, because it could be done in several ways, with or without additions to the language.
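For illustration, here's roughly what appending from a reader into a `Vec` looks like in today's safe Rust (a sketch of the status quo, not any proposed API): you zero-initialize the spare bytes first, and that zeroing is exactly the cost the uninit-buffer API designs aim to remove.

```rust
use std::io::Read;

// Sketch of today's safe workaround: read into a Vec's spare capacity
// by zero-filling it first. The zeroing is pure overhead.
fn read_append<R: Read>(
    reader: &mut R,
    vec: &mut Vec<u8>,
    max: usize,
) -> std::io::Result<usize> {
    let start = vec.len();
    // Zero-initialize `max` extra bytes so exposing them as &mut [u8] is sound.
    vec.resize(start + max, 0);
    let n = reader.read(&mut vec[start..])?;
    // Keep only the bytes the reader actually filled in.
    vec.truncate(start + n);
    Ok(n)
}
```

The `resize` writes `max` zeros that the `read` immediately overwrites; in hot I/O loops with large buffers, that's the measurable waste (the alternative today is `unsafe` with `MaybeUninit`).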
> Uninitialized bytes are not just some garbage random values, they're a safety risk.
Only when read. Writing to "uninitialized" memory[1] and reading it back is provably secure[2], but doesn't work in safe Rust as it stands. The linked article is a proposal to address that via some extra complexity that I guess sounds worth it.
[1] e.g. using it as the target of a read() syscall
[2] Because it's obviously isomorphic to "initialization"
Obviously, initialized memory isn't uninitialized memory any more.
There are fun edge cases here. Writing to memory through `&mut T` makes it initialized for T, but its padding bytes become de-initialized (that's because the write can be a memcpy that also copies the padding bytes from a source that never initialized them).
Note that if you have a `&mut T` then the memory must already be initialized for T, so writing to that pointer doesn't initialize anything new (although as you say it can deinitialize bytes, but that only matters if you use transmute or pointer casting to get access to those padding bytes somehow).
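A tiny illustration of that padding edge case (a hypothetical type, just to show where the padding lives):

```rust
#[repr(C)]
#[derive(Clone, Copy)]
struct Padded {
    a: u8, // bytes 1..4 of the struct are padding so that `b` is 4-aligned
    b: u32,
}

fn overwrite(dst: &mut Padded, src: Padded) {
    // This write may compile to an 8-byte memcpy that also copies `src`'s
    // padding bytes, which were never initialized. Afterwards `*dst` is
    // fully initialized *as a Padded*, but its padding bytes count as
    // uninitialized raw memory again.
    *dst = src;
}
```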
ADHD meds contain controlled substances, and there's an annual production quota for them set by the DEA. The quota is intentionally set very tightly, so it's easy to hit when demand increases even slightly above projections.
Most international pharmaceutical companies have some presence in the US, so the US quota has a world-wide effect.
Additionally, prescriptions are for very specific doses of specific variants of the meds. Because it's a controlled substance, pharmacies aren't allowed to use any substitutes (not even something common-sense like dispensing 2x30mg for a 60mg prescription). This makes shortages happen even before all of the quota runs out, because some commonly used doses run out sooner.
Why would they do anything about that? It’s their job to set and enforce quotas, not to ensure access. From their perspective, I’d imagine that tight quotas make them feel reassured that they’ve got a lid on diversion concerns.
It does sound like the quota-setting system was designed for an era where the “legitimate” growth wasn’t on the order of “10% a year for 15 years”.
You're right that the DEA's quota system prioritizes diversion control over access, and it's clearly stuck in a bygone era unfit for today's demand growth. But it's baffling that Big Pharma, with its lobbying muscle, hasn't pushed Congress to modernize this bottleneck. Surely they'd profit from looser quotas.
Instead of hoping for a Trump EO to nuke the DEA (literally or figuratively), why not redistribute Controlled Substance Act enforcement? Agencies like the FBI or HHS already handle overlapping domains. The DEA's rigid gatekeeping, especially on research and quotas, stifles innovation more than it curbs abuse.
Or if the court overturned Wickard v Filburn. The Federal power to regulate substances like this at all is based on a butterfly effect version of the commerce clause.
The US is adopting isolationist policies based on a nationalist ideology. The government is run by anti-intellectuals. The US economic policy is based on xitter rants, and flip-flops every week. The fickle vindictive ruler is personally attacking businesses that don't make him look good. It's clear that in the US the path to success is now loyalty. The president runs a memecoin.
It is not going to happen; this is just day-dreaming. Yes, I saw the news, but you can't compare a few tens of people wanting to leave the US for ideological reasons to the millions of people who stay in the US because they can fare better, make more money, or start new companies overnight because they have a great idea.
The US is not adopting isolationist policies. It's adopting more nationalistic policies, which is no different from how China has been running its economy (and politics in general) for decades. And specifically, the four-year Trump Administration is pursuing heavily nationalistic policies. There's no evidence the Democrats will keep much of Trump's policy direction, as certainly the Biden Admin and Trump Admin could hardly be more different.
Let me know where you see the US military pulling back from its global footprint. How many hundreds of global bases has the US begun closing? They're expanding US military spending as usual, not shrinking. The US isn't shuttering its military bases in Europe or Asia.
The US is currently trying to expedite an end to the Ukraine v Russia war, so it can pivot all of its resources to the last target standing in the Middle East: Iran. That's anything but isolationist.
Also, the US pursuing Greenland and the Panama Canal is the opposite of isolationist. It's expansionist-nationalistic. It's China-like behavior (Taiwan, Hong Kong, South China Sea, Tibet).
I really like the WebGPU API. That's the API where the major players, including Apple and Microsoft, are forced to collaborate. It has real-world implementations on all major platforms.
With the wgpu and Google Dawn implementations, the API isn't actually tied to the Web and can be used in native applications.
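As a taste, a minimal native program using the wgpu crate looks something like this (a sketch; the exact signatures shift between wgpu releases, and `pollster` is just one convenient helper crate for blocking on the async adapter request):

```rust
// Sketch: picking a GPU adapter with the wgpu crate, no browser involved.
// Assumes the `wgpu` and `pollster` crates; details vary by wgpu version.
async fn run() {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no suitable GPU adapter found");
    // Prints the backend actually chosen: Vulkan, Metal, D3D12, ...
    println!("{:?}", adapter.get_info());
}

fn main() {
    pollster::block_on(run());
}
```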
The only reason I like WebGL and WebGPU is that they are the only 3D APIs where major players take managed language runtimes into consideration, because they can't do otherwise.
Had WebAssembly already been there, without being forced to go through JavaScript for Web APIs, most likely these would be C APIs with everyone and their dog writing bindings instead.
Now, it is still pretty much a Chrome-only API, and only available across macOS, Android and Windows.
Safari and Firefox have it as a preview, and who knows when it will ever be stable at a scale that doesn't require "Works best on Chrome" banners.
Support on GNU/Linux, even from Chrome, is pretty much not there, at least for something to use in production.
And then we have the whole drama that, after 15 years, there are still no usable developer tools in browsers for 3D debugging: one is forced to either guess which rendering calls come from the browser and which from the application, fall back to GPU printf debugging, or keep a native build that can be plugged into RenderDoc or similar.
People pick the best option, while a worse option can creep from being awful to a close second, and then suddenly become the best option.
There's a critical point at which there's enough EV infrastructure to overcome objections, available cars become cheap enough, and then there's hardly any reason to pick gas cars that are slower, laggier, noisier, smelly, more expensive to run and can't be refuelled at home.
Sort of. While electric cars are great, the type of person who buys a $3,000 car cannot afford the cheapest electric car until about 10-15 years after that tipping point, even after you account for gas savings. So while new cars are likely to switch suddenly, it will still be a decade before the rest of the fleet catches up. The average car in the US is 12 years old.
Even the type of person who buys a three-year-old car cannot (will not?) afford payments on a new car, even accounting for the gas savings. They will buy what they can get - but they will also influence the market, as they are likely to be sensible (often a new car is not sensible) and so willing to pay extra for the EV. This in turn will put pressure on new cars, since trade-in value is very important to most people who buy a new car (which is sensible, but it is the banks forcing this on the buyers).
Maybe? I can see what you’re saying, but the real world can move as slow as sludge at times. These aren’t smartphones that are relatively easily produced, shipped, and purchased by users.
Second order effects like load on an aging power grid could easily cause speed bumps.
I hope you’re right, but I don’t know I could bet on it