I believe that the major limiting factor in the speed of development of AI is hardware, not software. Now and for the foreseeable future. The most important projects in AI, therefore, are the ones pushing hardware performance as high as it can go, and the people who have the opportunity to write software for that hardware are going to make the fastest progress.
I'm tempted to agree with Carmack here, having previously worked at a now-defunct self-driving project whose cars carried multiple server-grade GPUs and many-core CPUs pulling 1000+ W, yet still couldn't figure out how to maneuver around a car that was parallel parked too far out into the road.
I mean, I partially agree with his opinions in those tweets, but my conclusions are still different. Firstly, the AGI of the future may well be distillable to a form that would fit on a top supercomputer of today, but developing an AGI is going to need a whole lot more computing power than running an optimized version after the fact. And secondly, the flexibility and availability of GPUs are hard to beat, but I think that after a few iterations of datacenter-scale hardware we'll learn what's needed to really make it shine. So maybe the first iteration of Dojo won't be much better than a GPU cluster in practice, but the third or fourth could be spectacular.
Sure, TPUs and other accelerators are also exciting. Dojo is specifically interesting because of the massive off-chip communication bandwidth it has, which I believe is important. But in general, I'm excited about custom AI silicon.
OpenAI seems to believe that scaling transformers can solve many if not most problems, so I'm not alone here.
Then don't require the AI to run in realtime. Just let it interpret the world at 0.001x speed. That way you could easily simulate hardware that won't be available for decades.
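Concretely, a minimal sketch of that idea in Python (slow_model and step_world are hypothetical placeholders, not any real API): the simulated world's clock only advances between control steps, so it is effectively paused while the model thinks, and inference latency never leaks into the agent's reaction time.

    import time

    SIM_DT = 0.001  # seconds of simulated time per control step (a 1 kHz loop)

    def run_episode(slow_model, step_world, obs, n_steps=10_000):
        """Run an agent against a simulated world, decoupled from wall-clock time."""
        sim_time = 0.0
        wall_start = time.monotonic()
        for _ in range(n_steps):
            # The world does not advance while the model computes, so a 1 s
            # inference behaves exactly like a 1 ms inference on faster hardware.
            action = slow_model(obs)
            obs = step_world(obs, action, dt=SIM_DT)
            sim_time += SIM_DT
        wall = time.monotonic() - wall_start
        print(f"simulated {sim_time:.1f}s in {wall:.1f}s wall-clock "
              f"({wall / sim_time:.0f}x realtime slowdown)")
        return obs

The "future hardware" you're emulating is just the ratio of the model's wall-clock latency to SIM_DT; the obvious catch is that this only works against a simulator, not in a real car.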
Why do you think this?