If you were to play Semi-God, where you could pick up and place each individual atom, we still have at least 200x before we hit atomic feature size. But before you get excited about 200x, that is only about 10 full node steps, or roughly 20 years of progress (assuming the rate stays the same, which we all know won't happen). So let's say about ~30 years, putting us around 2050.
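Roughly, the arithmetic I have in mind is napkin-level; treating a full node step as halving the feature size and the cadence as two years per step are both idealised assumptions of mine:

    import math

    # Napkin arithmetic behind the numbers above (idealised assumptions:
    # a "full node" roughly halves feature size, one full node every ~2 years).
    shrink_headroom = 200                      # ~200x left before single-atom features
    steps_needed = math.log2(shrink_headroom)  # ~7.6 full node steps, call it ~10
    years_per_step = 2                         # rough historical cadence
    years_left = 10 * years_per_step           # ~20 years at the current rate

    print(f"{steps_needed:.1f} node steps, ~{years_left} years if the cadence holds")
    # Add slack for the cadence slowing down and you land at ~30 years, i.e. ~2050.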
We have a very decent roadmap all the way to 2030, reaching roughly TSMC 1nm and 0.8nm: 2nm in 2025, 1.4nm in 2027, 1nm in 2029. Intel seems to be executing well so far, so they might have a minor lead by 2025/2026, and may be able to reach 0.8nm by 2030.
We used to question whether there is enough market to sustain the development of leading nodes. (Which was the number one issue behind the potential death of Moore's law, though the media doesn't mention it much, if at all.) But given the current market of multiple trillion-dollar companies, and the geopolitical willingness to invest in silicon, I don't see us having a market/funding problem for the next 10 years.
So yes, you could get a GPU that performs close to an RTX 4090 by 2033, and it would cost you less than $200.
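As a quick sanity check on what that implies (the $1,599 launch MSRP is the real 4090 number; the perf-per-dollar doubling framing is my own):

    import math

    # What "a 4090-class GPU for under $200 by 2033" implies for perf per dollar.
    launch_price = 1599            # RTX 4090 launch MSRP (late 2022)
    target_price = 200
    years_span = 2033 - 2022

    improvement = launch_price / target_price    # ~8x better perf per dollar
    doublings = math.log2(improvement)           # ~3 doublings
    years_per_doubling = years_span / doublings  # ~3.7 years per doubling

    print(f"~{improvement:.0f}x perf/$, one doubling every ~{years_per_doubling:.1f} years")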
> we still have at least 200x before we hit atomic feature size
How so? The size of the silicon atom is 0.2nm, so we're already close to dealing with atomic scales. It seems likely we'll switch to an element like gallium, as a nearby comment mentioned, before we reach even 1nm node size.
At that point the carbon nanotube (CNT) manufacturing process would have matured beyond its current experimental, but promising, stage.[1]
We can only speculate, but I reckon these changes will happen much earlier than 2050.
> So yes, you could get a GPU that performs close to an RTX 4090 by 2033, and it would cost you less than $200.
I sure hope we get to that point much sooner than that. :) NVIDIA and AMD are squeezing this performance at the expense of power, heat and size. Apple has proven it's possible to do this much more efficiently, but their silicon is still incredibly large compared to older generations. If we can get a better wafer yield with another material or process, while also reducing power and heat, then the only drawback for consumers could be cost, which should continue to go down. Though if NVIDIA has any say in the matter, they'll surely continue to price gouge consumers, as they've done this generation.
Because the current "5nm" tech doesn't even have any feature that is a single-digit nm in size. The 200x number is just napkin maths, based on a ~40nm feature size and a ~0.2nm atom, so take it with a pinch of salt (spelled out below).
And before we even hit that point, quantum tunnelling will become a limiting factor sooner or later.
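Spelling that out (same assumptions: ~40nm for the smallest real feature on a current leading-edge node, ~0.2nm for a silicon atom):

    # The "200x" figure is just the ratio of an assumed smallest real feature
    # on a leading-edge node to the rough diameter of a silicon atom.
    smallest_feature_nm = 40   # assumed; the marketing "5nm" is not a real dimension
    silicon_atom_nm = 0.2      # rough covalent diameter of silicon

    print(smallest_feature_nm / silicon_atom_nm)   # 200.0x linear shrink headroom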
> Though if NVIDIA has any say in the matter, they'll surely continue to price gouge consumers, as they've done this generation.
It depends on how you view it. This is the first time Nvidia has introduced a GPU on a leading-edge node. People often like to compare it to previous x090 pricing, but Nvidia has always used mature nodes before, which are far cheaper in both design cost and wafer price. So the 4090, on 4nm, isn't really the consumer price gouging that most mainstream comments and media like to think it is.