
What about Mesa for OpenCL? Its support for AMD cards is pretty good in my (limited) experience.

The codegen is shared with the graphics drivers (it all goes through Gallium; llvmpipe is just the CPU fallback), so Mesa's OpenCL implementation should be good.
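
A quick way to see whether Mesa's OpenCL (Rusticl) is actually exposed on your system is to enumerate platforms. This is only a sketch, assuming pyopencl is installed and Mesa's ICD is registered; Rusticl is usually gated behind the RUSTICL_ENABLE environment variable, with "radeonsi" being the driver name for modern AMD cards.

    # Sketch: list OpenCL platforms/devices, enabling Rusticl for radeonsi.
    # Assumes pyopencl + a registered Mesa ICD; adjust RUSTICL_ENABLE for your driver.
    import os
    os.environ.setdefault("RUSTICL_ENABLE", "radeonsi")  # must be set before the ICD loads

    import pyopencl as cl

    for platform in cl.get_platforms():
        print(f"Platform: {platform.name} ({platform.version})")
        for device in platform.get_devices():
            print(f"  Device: {device.name}")

If the rusticl platform doesn't show up in the output, the environment variable was likely set too late or the ICD file isn't installed.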



In principle, if you really want to stick with AMD GPUs, it is probably a better bet than ROCm: Rusticl is slightly faster than ROCm (maybe 3%) and supports SPIR-V. But the problem boils down to this: as long as OpenCL doesn't work well on NVIDIA GPUs, nobody is going to invest the time and effort to port PyTorch to take full advantage of it.
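
To illustrate the portability argument: the same OpenCL kernel source runs unchanged on ROCm's OpenCL, Rusticl, or NVIDIA's implementation. A minimal vector-add sketch with pyopencl (assuming numpy and pyopencl are installed):

    # Minimal OpenCL vector-add; the kernel source is implementation-agnostic.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()       # picks a platform/device, or use PYOPENCL_CTX
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1 << 20).astype(np.float32)
    b = np.random.rand(1 << 20).astype(np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    program = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)

The hard part isn't kernels like this; it's the thousands of tuned operators a framework like PyTorch needs, which is where the investment problem above bites.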

There is a PyTorch dlprim fork, but so far it is just a random GitHub repository. That is better than nothing, but you would expect AMD to just cut some checks so these people can work on the software full time.
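
For context, using an out-of-tree OpenCL backend like that looks roughly like the sketch below. The module name ("pytorch_ocl") and the "ocl:0" device string are assumptions for illustration and may not match what the dlprim fork actually exposes.

    # Hedged sketch of an out-of-tree OpenCL backend for PyTorch in use.
    import torch
    import pytorch_ocl  # hypothetical module name; registers the OpenCL backend with torch

    device = torch.device("ocl:0")   # hypothetical device string for the first OpenCL GPU

    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(64, 1024, device=device)
    y = model(x)                     # forward pass dispatched through the OpenCL backend
    print(y.shape)

The point is that this stays an out-of-tree extension unless someone funds the work to upstream and maintain it.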


I spent some time trying to get them to do this for a previous startup back around 2017, but their view at the time was essentially that a couple of interns and/or "the community" would solve the problem. At a higher level, I think they just didn't see it as an important market to serve. We were getting decent performance on their cards even without ROCm, but it just wasn't something they cared about.


It’s absolutely wild to me that AMD and Intel don’t collaborate to port PyTorch and TensorFlow to OpenCL/Vulkan. I just don’t get it. It seems like the obvious thing to do.


The legacy OpenCL implementation doesn't support modern AMD cards (anything since RDNA). Rusticl seems to be the path forward, but it's still a work in progress for AMD.



