NVIDIA keeps adding features like AI super-sampling to games, and machine learning models are improving faster than Moore's law. I expect those fancy features, like tensor cores, to become a must for 4K gaming in the future.
What's funny is that the same strategy that worked extremely well for CPUs (leaving specialized instructions out of consumer-level hardware) won't work for GPUs, in my opinion.
If you look at ray tracing hardware (I have it on the RTX 2070 Max-Q in my laptop), it sucks right now, but it's improving very fast as the machine learning algorithms improve.
I just found this:
https://www.tomshardware.com/news/amd-big_navi-rdna2-all-we-...
One thing I forgot is that AMD could just focus on inference hardware (INT16 operations) and leave out tensor cores... so actually you're right, I'll just stick with NVIDIA GPUs.
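To make the "inference in low-precision integers" point concrete, here's a rough numpy sketch of the general idea (nothing AMD- or NVIDIA-specific, and the sizes are made up): quantize FP32 weights and activations to 16-bit integers, do the matmul in integer math, then rescale. The result stays close to the FP32 reference, which is why inference can run on cheap integer units without tensor cores.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)    # activations
w = rng.standard_normal((256, 128)).astype(np.float32)  # weights

def quantize(a, bits=16):
    # Map the FP32 range onto signed integers of the given width.
    scale = np.abs(a).max() / (2 ** (bits - 1) - 1)
    return np.round(a / scale).astype(np.int16), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

y_fp32 = x @ w  # full-precision reference
# Integer matmul (accumulate in int64 to avoid overflow), then rescale.
y_int = (xq.astype(np.int64) @ wq.astype(np.int64)).astype(np.float32) * xs * ws

print("max abs error vs FP32:", np.abs(y_fp32 - y_int).max())
```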