And then they'll start feeding in data like gaze tracking and adjusting the generated content in real time, personalizing it to be maximally addictive for each viewer.
The “over 1 TFLOPS” claim for the M1 appears to be for single precision (FP32) floats, whereas FLOPS figures for supercomputers, including the one given for the CRAY-1, are almost always based on double precision (FP64). The M1's double precision performance would be lower, perhaps half of the single precision figure.
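A quick back-of-the-envelope check, assuming the “over 1 TFLOPS” figure is FP32, that FP64 throughput is roughly half of it, and the commonly cited 160 MFLOPS peak for the CRAY-1:

    # Sketch only: takes the "over 1 TFLOPS" claim at face value as FP32,
    # assumes a 2:1 FP32:FP64 ratio, and uses the commonly cited
    # 160 MFLOPS (FP64) peak for the CRAY-1.
    m1_fp32_tflops = 1.0
    m1_fp64_tflops = m1_fp32_tflops / 2
    cray1_fp64_tflops = 160e-6  # 160 MFLOPS expressed in TFLOPS

    print(f"M1 FP64 estimate: ~{m1_fp64_tflops:.2f} TFLOPS")
    print(f"CRAY-1 peak FP64:  {cray1_fp64_tflops:.5f} TFLOPS")
    print(f"Ratio: roughly {m1_fp64_tflops / cray1_fp64_tflops:,.0f}x")

Even with the halving, that's still thousands of times the CRAY-1's peak.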
I had just gone down this rabbit hole for unrelated reasons (looking into yields). Nvidia's 5090 die is 750 mm^2, managing 419 TFLOPS of FP16 compute.
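For anyone else going down the yield rabbit hole, here's a rough sketch of the usual dies-per-wafer approximation plus a simple Poisson yield model; the 0.1 defects/cm^2 defect density is an assumed illustrative number, not a published Nvidia or foundry figure:

    import math

    # Sketch: gross dies per 300 mm wafer (standard approximation) and a
    # simple Poisson yield model. The defect density is an assumption
    # chosen purely for illustration.
    die_area_mm2 = 750.0          # 5090-class die, from above
    wafer_diameter_mm = 300.0
    defect_density_per_cm2 = 0.1  # assumed, not a published figure

    r = wafer_diameter_mm / 2
    gross_dies = (math.pi * r**2 / die_area_mm2
                  - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    die_area_cm2 = die_area_mm2 / 100
    yield_fraction = math.exp(-defect_density_per_cm2 * die_area_cm2)

    print(f"Gross dies per wafer: ~{gross_dies:.0f}")
    print(f"Estimated yield:      ~{yield_fraction:.0%}")
    print(f"Good dies per wafer:  ~{gross_dies * yield_fraction:.0f}")

With those assumptions you get on the order of 70 gross dies per wafer and roughly half of them good, which is part of why huge dies are so expensive.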
The last line about "simplifying approximations within the literature[...] applied outside of their intended context" makes me think the author has an issue with the way other theoreticians are using LIGO data in their analyses.
Fiber is a decades-long investment in hardware, and one I would argue we hardly needed. Google Fiber started with the question: what would people do with super-high speeds? The answer was stream higher-quality video, and that's about it. In fact, by the time fiber became widespread, many people had moved off PCs and were doing the majority of their Internet use on cell phones.
With that said, the fiber will be good for many years. None of the LLMs or the hardware will be useful in more than a few years; everything gets replaced by something newer and better on a continual basis. They're stepping stones, not infrastructure.
We replaced one technology that was used by literally the whole world, paired copper wires, with something orders of magnitude better and future-proof. My PC literally can't handle the bandwidth of my fiber connection.
Where I live (Germany), lots of people have VDSL at advertised speeds of 100 Mbit/s, over those same copper pairs. Not saying that fiber isn't better, it obviously is, and hence the government is subsidizing large-scale fiber buildouts. But as it stands right now, I'm confident that for 99% of consumers, VDSL is indeed enough.
In the 90s and 2000s, I remember our (as in: tech nerds') argument to policy-makers being "just give people more bandwidth and they will find a way to use it", and in that period, that was absolutely true. In the 2000s, lots of people got access to broadband internet, and approximately five milliseconds later, YouTube launched.
But the same argument now falls apart, because we have the hindsight of seeing lots of people with hundreds of megabits or even gigabit connections... and yet the most bandwidth-demanding thing most of them do is video streaming. I looked at the specs for GeForce Now, and it says that to stream the highest quality (a 5K video feed at 120 Hz), you should have 65 Mbit/s downstream. You can literally do that over a VDSL line. [1] Sure, there are always people with special use cases, but I don't recall any tech trend in the last 10 years that was stunted because not enough consumers had the bandwidth required to adopt it.
[1] Arguably, a 100 Mbit/s line might end up delivering less than that, but I believe Nvidia has already factored this into its advertised requirements. They say you need 25 Mbit/s to sustain a 1080p 60 fps stream, but my own stream recordings in the same format are only about 5 Mbit/s. They might encode with higher quality than I do, but I doubt it's five times the bitrate.
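To put the footnote in concrete terms, here's the bits-per-pixel arithmetic; the 5 Mbit/s number is my own recording bitrate from above, and the 25 Mbit/s is Nvidia's quoted requirement:

    # Sketch: how many bits per encoded pixel each bitrate works out to
    # for a 1080p 60 fps stream.
    width, height, fps = 1920, 1080, 60
    pixels_per_second = width * height * fps

    for label, mbit_per_s in [("25 Mbit/s requirement", 25), ("5 Mbit/s recording", 5)]:
        bits_per_pixel = (mbit_per_s * 1_000_000) / pixels_per_second
        print(f"{label}: {bits_per_pixel:.3f} bits per pixel")

Either way, it's a small fraction of a 100 Mbit/s VDSL line.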
In that respect, it's closest to the semiconductor situation.
Few companies and very few countries have the bleeding-edge frontier capabilities. A few more have capabilities that are good enough to be useful in some niches. The rest of the world has to get and use what they make, or do without, which isn't a real option.
California requires a renewal warning a month in advance for any subscription with a term of a year or longer. Pointing out this law has gotten me a few refunds from services that failed to comply and renewed my subscription without telling me.
Currently: domain-specific languages written in YAML. I see these everywhere, from configuring individual utilities to managing giant architectural stacks. People get it into their heads that YAML is more easily written and read than code, and so instead of just writing the code to do something, users have to deal with a bunch of YAML.
The drawbacks of YAML have been well-documented[1]. And I think it's worse now in the LLM era. If I have a system that's controlled via scripts, an LLM is going to be good at modifying those scripts. Some random YAML DSL? The LLMs have seen far fewer examples, and so they're going to have a harder time writing and modifying things. There's also good tooling for linting and checking and testing scripts to ensure LLM output is correct. The tooling for YAML itself is more limited, even before getting into whatever application-specific esoteric things the dev threw in.
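As a made-up illustration of the trade-off: a hypothetical deploy "DSL" in YAML that the application has to interpret itself, next to the same thing written as plain code that linters, tests, and LLMs can work with directly. The YAML keys and the shell commands here are invented for the example:

    import subprocess
    import yaml  # PyYAML, only needed for the DSL variant

    # Hypothetical YAML DSL: the schema ("steps", "run", "retries") is
    # invented here, and the application has to define, document, and
    # validate what every key means.
    PIPELINE_YAML = """
    steps:
      - name: build
        run: make build
      - name: deploy
        run: ./deploy.sh
        retries: 2
    """

    def run_pipeline_from_yaml(text: str) -> None:
        for step in yaml.safe_load(text)["steps"]:
            for _attempt in range(step.get("retries", 0) + 1):
                if subprocess.run(step["run"], shell=True).returncode == 0:
                    break

    # The same behavior as plain code: no custom schema to learn, and
    # ordinary tooling understands it without any extra interpretation.
    def run_pipeline() -> None:
        subprocess.run("make build", shell=True, check=True)
        for _attempt in range(3):
            if subprocess.run("./deploy.sh", shell=True).returncode == 0:
                break

The point isn't that YAML is never appropriate, just that the second version is the kind of thing standard tooling and LLMs already handle well.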
CFIT (controlled flight into terrain) is not necessarily pilot error. For example, if ATC vectored a plane without ground proximity warnings into the side of a mountain, that would also be CFIT.
On the one hand, it sounds very stressful. On the other hand, if you screwed up, you wouldn't even notice because your brain would be obliterated before it would register.