
I expect GPU hardware to specialize the way Google's TPU has. In these AI workloads the TPU feels like ARM: once you start running them at scale, the cost/performance tradeoff is what matters for most use cases.

> CPU/GPU share the same RAM AFAIK.

This depends on the GPU. Apple has unified memory, I believe, but most GPUs, in my limited experience writing kernels, have their own memory. CUDA leans pretty heavily on the distinction between device memory and host memory.
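
Roughly what that looks like in practice, as a minimal sketch (kernel and sizes are just illustrative, error handling omitted): you allocate host and device memory separately and copy explicitly between them.

  // Explicit host-vs-device memory model in CUDA.
  #include <cuda_runtime.h>
  #include <cstdio>
  #include <cstdlib>

  __global__ void scale(float *data, float factor, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) data[i] *= factor;
  }

  int main() {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);

      // Host (CPU) allocation.
      float *h_data = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

      // Separate device (GPU) allocation; this pointer is not usable on the CPU.
      float *d_data;
      cudaMalloc(&d_data, bytes);

      // Explicit copies over the bus in each direction.
      cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);
      scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
      cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

      printf("h_data[0] = %f\n", h_data[0]);  // 2.0

      cudaFree(d_data);
      free(h_data);
      return 0;
  }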



On top of that, Nvidia has provided a unified addressing abstraction over PCI for a looooong time via CUDA: https://developer.nvidia.com/blog/unified-memory-in-cuda-6/
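
The gist from that post: one managed allocation that both CPU and GPU can touch, with the driver migrating pages behind the scenes. A minimal sketch along those lines (the kernel and sizes here are just for illustration):

  // CUDA unified memory via cudaMallocManaged (introduced in CUDA 6).
  #include <cuda_runtime.h>
  #include <cstdio>

  __global__ void increment(int *data, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) data[i] += 1;
  }

  int main() {
      const int n = 1024;
      int *data;

      // Single allocation visible to both CPU and GPU; no explicit cudaMemcpy needed.
      cudaMallocManaged(&data, n * sizeof(int));
      for (int i = 0; i < n; ++i) data[i] = i;

      increment<<<(n + 255) / 256, 256>>>(data, n);
      cudaDeviceSynchronize();  // wait for the GPU before the CPU reads the data again

      printf("data[0] = %d, data[%d] = %d\n", data[0], n - 1, data[n - 1]);  // 1, 1024

      cudaFree(data);
      return 0;
  }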

Customers like Pixar could probably push this even further with a more recent Nvidia rack and Mellanox networking. Networking a couple of Mac Studios over Thunderbolt doesn't have a hope of competing at that scale.



