Hacker News | new | past | comments | ask | show | jobs | submit | login

Possibly naive question: will there ever be a C for GPUs? That is, a minimal, portable "assembly language"? Or will there continue to be heterogeneity?

I think that's what OpenCL and such are trying to be, but I know it is nascent and has competitors. I don't know enough about the hardware to tell whether languages will eventually consolidate like C did (with basically all higher-level languages built on top of C), or if there is some fundamental heterogeneity or evolutionary divergence that means there can't be one language.



There have been a number of attempts to create hardware-agnostic languages for GPGPU programming. Examples are Brook (http://graphics.stanford.edu/projects/brookgpu/lang.html) and Harlan (https://github.com/eholk/harlan/wiki/Language-Overview), although there are many others. As with most new languages, none seem to have taken off, and OpenCL seems like it's here to stay (perhaps because it requires little new learning on the part of C programmers: it is just C with a few restrictions and a few additions). I think OpenCL will be the "assembly" of GPUs for a long time to come, and will generally be the primary target of higher-level languages aiming to make use of the GPU.
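For anyone who hasn't seen it, that "C with a few restrictions and additions" looks like this. A minimal vector-add kernel sketch (the kernel name `vec_add` is arbitrary); the `__kernel` and `__global` qualifiers and the `get_global_id()` built-in are the main additions over plain C:

    // OpenCL C: one work-item computes one output element.
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        size_t i = get_global_id(0);  // this work-item's index
        out[i] = a[i] + b[i];
    }

The host enqueues this over an N-element range, and the runtime fans it out across the device's compute units.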


No, the primary target is PTX on CUDA devices.

This is one of the reasons OpenCL is still playing catchup with CUDA.

Thanks to PTX, you can target CUDA directly with C++ and Fortran, besides a few other languages.
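To illustrate what "targeting CUDA directly with C++" buys you: device code can use C++ templates, which nvcc lowers to PTX for you. A sketch (the `scale` kernel and its names are made up for the example):

    // CUDA C++: templates work in device code; nvcc instantiates
    // each specialization and compiles it down to PTX.
    template <typename T>
    __global__ void scale(T *data, T factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

There's no equivalent of this in OpenCL C, where kernels are plain C and anything generic has to go through string pasting or a higher-level wrapper.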

OpenCL only got SPIR this year.


PTX is certainly more suitable as an intermediate representation than OpenCL's dialect of C, but PTX obviously doesn't and never will fulfill the requirements of being the low-level portable target for high-level languages.

NVidia's early-mover advantage is significant, but software tied to only their hardware will never be able to achieve the kind of status that the netlib stuff has. The only question is whether the gold-standard numerical libraries a decade from now will have multiple backends, or a single non-CUDA backend.


Funny, I had the impression CUDA is winning hands down in HPC.


I never said it wasn't. But NVIDIA can't expect to automatically continue that success while staying as closed and proprietary as CUDA has been so far. Their hardware is not drastically better than their competitors', and they face serious competition (unlike Intel with their CPUs). OpenCL is here to stay, with a market far broader than just AMD GPUs, so it's pretty much inevitable that it will take over as the dominant standard unless it's developed as badly as OpenGL was in the early 2000s. If Microsoft ever ships a CPU-backed OpenCL runtime as part of Windows, it'll be all over for CUDA.


The NVIDIA hardware is not necessarily better, but the toolchain is unmatched. Race checker, memory checker, top-notch debugger and profiler, visual disassembly graph: it all makes for a very smooth experience.

AMD can only compete through HSA and non-standard OpenCL extensions, and only companies whose hardware originates from AMD GPUs are in HSA.

OpenCL suffers from fragmentation and a very fuzzy mapping to real hardware. Even OpenGL compute shaders look more interesting to me, with their vast texture-format access, and OpenCL's multiple queues don't deliver in practice.

There are both SPIR and HSAIL in competition with PTX; NVIDIA can rejoice.



