
Don't blame NVidia for Intel and AMD's failure to do anything meaningful with OpenCL, or for Google completely ignoring it on Android and pushing RenderScript instead.


What on earth is your point? Let me make one thing clear: I could not care less about this team-based mentality. Companies can each be bad at their own things.

NVIDIA deliberately delayed driver support for OpenCL because they wanted CUDA to be the only viable alternative for their users, pushing universities doing ML to invest resources into it.

Your point about AMD not making something out of OpenCL isn't particularly fair... Maybe what you mean is that they should have sponsored their own HPC labs at various universities? Is that "making something out of it"?


Nvidia delivered a good solution with CUDA. You are whining about them not investing as hard in OpenCL. What if focusing on CUDA was what they thought would let them deliver a good result for users? Lack of focus and sustained investment is part of the issue with AMD and the others. If OpenCL was the hot thing that NVIDIA was supposedly delaying on purpose, that was a great opportunity for AMD and Intel to eat their lunch. Where were they?


I stopped believing in the good intentions of NVIDIA a long time ago. Mind you, I didn't stop buying their products. But they repeat the same shitty stuff all the time. I get why they do it, I get why it works; it's just annoying that people defend it as if it were altruistically motivated (not accusing you of this in particular, but in general).

And, as I said, AMD didn't, and still doesn't, do a great job in the compute department. And NVIDIA spending a lot of money developing good solutions with CUDA, their proprietary technology, cannot be anything but a good thing, right?

Invest money to "help" universities. Lock core software to your proprietary solutions. Jack up prices.

If they only did it in this instance, I'd perhaps give them the benefit of the doubt. But, off the top of my head:

- CUDA
- PhysX
- G-Sync
- GameWorks in general
- RTX
- DLSS

There is really nothing uniquely special about any of these technologies, other than being (mostly) software solutions tied to NVIDIA hardware, and being pushed heavily onto both developers and researchers.

It's the same playbook as "give MATLAB/<any Autodesk product> for free to students". These good deeds are not altruistic; they are investments in market capture.


Why haven't Intel and AMD done that, then?


One player playing dirty does not imply the rest are saints?


My point is exactly that: Intel, AMD and Google are the ones to blame for OpenCL's failure, not NVIDIA.

And let's not forget that OpenCL came from Apple, which they gave up on after dealing with Khronos politics. That is basically the reason Metal came to be, before Khronos had even decided what OpenGL vNext was supposed to be. Had it not been for AMD's Mantle, they would probably still be wondering about it.

They were the ones that failed to provide tooling and libraries that would create a valuable ecosystem around OpenCL.

It is more than fair; it is always easy to blame others for our own failures.


Nvidia took a risk and invented the whole "AI on GPU" industry. AMD and Intel ignored it until they saw big $ there and their strategy was to create an "open system" that would cause external pressure (from folks like you) on Nvidia to share the industry with them. Now, Nvidia's OpenCL performance is better than AMD's but strictly inferior to CUDA, so what are you going to do when AMD fails even in that?


NVIDIA absolutely did not "invent AI on GPU"; that's a ridiculous statement. When programmable shaders became commonplace, people used them to do general computation. That was the birth of GPGPU. NVIDIA jumped on this and developed a lot of very good tooling around it, known as CUDA. Then they invested a lot to make that tooling standard in research departments.

Once GPGPU became more commonplace, researchers doing AI, whose problems were a good fit for GPGPU, used the more user-friendly and advanced tooling, which was CUDA.

So, "NVIDIA inventing AI on GPU" suggests you do not know much about the history of AI work on GPUs. But, feel free to correct me.


Nvidia invented HW shaders:

https://en.wikipedia.org/wiki/Shader

https://www.khronos.org/opengl/wiki/History_of_Programmabili...

"The GeForce 3, the first NV20 part, contained the first example of true programmability. Despite NVIDIA being a pioneer of highly configurable fragment processing, its programmability was in its vertex processing. The GeForce 3 was the first GPU that brought programmabilitiy to consumer hardware."

AI on GPGPU was possible only because Nvidia provided a library with matrix and activation functions running on a GPU at speeds far surpassing CPUs, and designed a fairly nice API that anybody could understand. There was nothing like that before, and that's why they got such a foothold with academic institutions. Of course they didn't invent AI, but they made it possible to run on their GPUs and actively helped researchers to do that while Intel and AMD slept (well, Intel at least tried to do it on the CPU with MKL).
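
To give a concrete sense of what that kind of API looked like for researchers, here is a minimal sketch in plain C using the CUDA runtime plus cuBLAS (NVIDIA's GPU BLAS library). The matrix size is arbitrary and the host-side data transfers are elided; it illustrates the call pattern, not code from any real project:

    /* Minimal sketch: one cuBLAS call runs an entire matrix multiply on the GPU.
       Error checking and host<->device copies are omitted for brevity. */
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main(void) {
        const int n = 512;                     /* arbitrary n x n matrices */
        const float alpha = 1.0f, beta = 0.0f;
        float *dA, *dB, *dC;

        cudaMalloc((void **)&dA, n * n * sizeof(float));
        cudaMalloc((void **)&dB, n * n * sizeof(float));
        cudaMalloc((void **)&dC, n * n * sizeof(float));
        /* ... fill dA and dB via cudaMemcpy from host buffers ... */

        cublasHandle_t handle;
        cublasCreate(&handle);

        /* C = alpha * A * B + beta * C, computed entirely on the GPU */
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

        cudaDeviceSynchronize();
        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }

Like the lock-in or not, a researcher could go from a CPU loop to something like that in an afternoon, which is a big part of why it stuck.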


I assume that by replying, you figured out why your original statement was ridiculous?

Tool makers do not get attributed inventions made with said tools.


What's the advantage of OpenCL if it still only works well on Nvidia? Because it has "Open" in the name? You are acting like it was Nvidia holding OpenCL back, but I'm not convinced that was the case.


The thing about the OpenCL support story is that if you look at version support, it's obvious NVIDIA wasn't at fault.

Even with NVIDIA not being particularly interested in strongly supporting it, Intel, AMD and Qualcomm only kept up support for it to OpenCL 2.0. Get to 2.1 and the only vendor bothering to support it is Intel. OpenCL 2.2 is still only available in ROCm, and even that came a full four years after the spec was finalized. It's clear that none of the companies were particularly interested in pushing OpenCL with the effort required for its single-source features from 2.0 onwards.

Then we get to OpenCL 3.0, where Khronos rolled back most of the big mandatory features from 2.2 and suddenly it's once again supported by NVIDIA, Intel and Samsung mere months after the spec is ratified (AMD noticeably still missing 2+ years later).
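
For what it's worth, the version-support gap is easy to observe directly. The sketch below (plain C against the standard OpenCL host API, no vendor extensions assumed) just prints the OpenCL version string each installed platform reports, which is exactly where the vendors' drivers diverge:

    /* Minimal sketch: list every OpenCL platform and the version its driver reports. */
    #define CL_TARGET_OPENCL_VERSION 300
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[16];
        cl_uint count = 0;

        clGetPlatformIDs(16, platforms, &count);

        for (cl_uint i = 0; i < count && i < 16; ++i) {
            char name[256], version[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
            /* The version string (e.g. "OpenCL 3.0 ...") depends entirely on the
               vendor's driver, which is where the gaps discussed above show up. */
            printf("%s -> %s\n", name, version);
        }
        return 0;
    }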


NVIDIA was not holding back OpenCL. Sure, it supported a lower version, but it actually worked. AMD delivered something that was nominally better and factually broken, time after time after time. Intel's implementation was somewhat better, but the hardware was not interesting.



