
Yeah, I haven't done PyTorch on AMD hardware, and given the support story I guess that's not surprising.

I've commented elsewhere about AMD building the whole ecosystem around everyone working with and distributing source, compiling only at runtime, and how I think that's kind of a nonstarter for commercial operators or any sort of hobbyist.

But generally it just seems like AMD is utterly uninterested in targeting anyone but HPC. You see the guy in this thread posting the "but look at all 2 of the supercomputers using AMD GPUs!" line (and I've seen the same talking point brought up in other discussions too), and that's the thing: ROCm is exactly the minimum effort they need to get those HPC wins, and not a developer-hour more. ML is maybe a secondary target, but it can't really get first-class support without AMD solving all those other problems. So they've built what they can on ROCm, and if it doesn't run on your hardware, welp, they're not fixing that, sucks to be you.

They really, really need some PTX equivalent (or just outright PTX support à la GPU Ocelot) for any sort of serious adoption. PTX gives NVIDIA an incredible forwards-and-backwards compatibility story, while AMD's answer is basically "lol, compile it for each individual chip in each individual family". No way.
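
To make the contrast concrete, here's a minimal sketch (the file name and the trivial kernel are just for illustration). With nvcc you can embed PTX alongside the machine code, so the driver can JIT the same binary on GPUs that didn't exist when you shipped it; with hipcc you enumerate specific gfx ISAs via --offload-arch and the binary only loads on those (roughly speaking — to actually build this with hipcc you'd run it through hipify first or swap in the HIP runtime calls):

  // kernel.cu -- trivial kernel, only here to show how it gets packaged.
  //
  // NVIDIA: embed both SASS and PTX; the PTX lets the driver JIT for newer GPUs:
  //   nvcc -gencode arch=compute_70,code=sm_70 \
  //        -gencode arch=compute_70,code=compute_70 kernel.cu -o kernel
  //
  // AMD/HIP: you list the exact gfx ISAs; nothing else will run it:
  //   hipcc --offload-arch=gfx906 --offload-arch=gfx90a kernel.cpp -o kernel

  #include <cstdio>

  __global__ void add_one(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
  }

  int main() {
    const int n = 256;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, keeps the example short
    for (int i = 0; i < n; ++i) x[i] = (float)i;

    add_one<<<1, n>>>(x, n);
    cudaDeviceSynchronize();

    printf("x[0] = %f, x[%d] = %f\n", x[0], n - 1, x[n - 1]);
    cudaFree(x);
    return 0;
  }

The point being: the hipcc output is welded to gfx906/gfx90a, so ship it today and it simply won't load on next year's cards, whereas the PTX embedded by the first nvcc command will keep working.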


