Then again, remote DMA to your memory via a port on the computer, while a great tool for debugging internals, is also a wide-open door to getting hacked if someone ever manages to plug a malicious device into that same port.
It's funny how we've come back around to this, with the M3 Mac Studios allowing you to enable RDMA over Thunderbolt. You have to toggle the setting via a firmware change, but it's there for performance!
FireWire was pretty wild in its day. It just got hampered by the per-port licensing fee, and once USB 2.0 rolled out, its days were numbered for anyone not needing the latency/power features.
One thing could also be that by the time you have 10GE uplinks, shaping is not as important.
When we had 512 kbit links, prioritizing VoIP was a thing, and on asymmetric links like 128/512 kbit it was prudent to prioritize small packets (ssh) and TCP ACKs on the outgoing side or the downloads would suffer. But when you have 5/10/25GE, not being able to stick an ACK packet into the queue is perhaps not the main issue.
At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, and latency budgets are tight. Fat pipes don't remove the need for control, they just make the mistakes more expensive.
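The old slow-uplink trick described above can be sketched with Linux tc. This is a minimal, hypothetical example (interface name and rates are placeholders, and real setups like wondershaper add more filters): cap egress just below the link rate so the queue builds where you control it, and steer small TCP packets (bare ACKs, ssh keystrokes) into a higher-priority class.

```shell
#!/bin/sh
# Hypothetical egress shaping for a slow asymmetric uplink.
# DEV and the rates are placeholders for your actual interface/link.
DEV=ppp0

# HTB root qdisc; unclassified traffic lands in the bulk class 1:20.
tc qdisc add dev $DEV root handle 1: htb default 20

# Cap slightly below the 512 kbit link rate so the modem's buffer stays empty.
tc class add dev $DEV parent 1:  classid 1:1  htb rate 480kbit
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 160kbit ceil 480kbit prio 0
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 320kbit ceil 480kbit prio 1

# Send small TCP packets (total length < 64 bytes, i.e. ACKs and
# interactive keystrokes) to the fast class 1:10.
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff \
    match u16 0x0000 0xffc0 at 2 \
    flowid 1:10
```

Needs root and a real interface, so it's a sketch to adapt rather than run as-is; the point is that at 512 kbit a single bulk upload would otherwise queue ACKs behind full-size frames and throttle the download side.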
Sounds like you would run into E=mc^2 with your "let's just not have mass and everything solves itself". The word "easily", here and there, has a lot of work to do.
For the Alcubierre drive, which also speculates on acquiring negative mass, the quote "the energy requirements still generally require a Type III civilization on the Kardashev scale" says something about how well the word "easily" fits such a solution.
Unfortunately it talks a lot about how they fire the lasers and get the plasma to emit a lot of energy, but less about how they would turn that into electricity, first to feed the reactor itself and keep it going, and then to get a net gain out of it.
A bit too common on the fusion reporting articles.
I think NTFS gets a bit of crap for limitations added by the OS above it. If you read up on what NTFS itself allows, it is far better than what Windows and Explorer let you do with it.
And if you time travel to the 90s, this is what Amiga owners with 1 MB of RAM said about PC/Windows users needing 8, 16, 32 MB of RAM to paint a few icons on the monitor.
But no one listened then, because RAM was cheap and you should not stand in the way of "progress".
So here we are, needing gigs to paint a single pixel. Congratulations everyone that chose bloat, you won.
For some people, when you are not taking over the whole machine (as you would in demos and games), replacing the OS with something that gives you memory protection and virtual memory, uses all the RAM for caches, talks IPv6 and things like that is kind of neat. It will be a somewhat slow Unix box, but it is still the same machine doing both kinds of tasks for you.