Hacker News

Yup!

> It's interesting to see that Apple is explicitly targeting the M2 Ultra as an alternative to GPUs for machine learning.

For many purposes the limiting factor on local LLMs is the amount of VRAM. Nvidia GPUs with 64GB+ are insanely expensive (unless you're a funded startup). The ability of Apple Silicon GPUs to use system RAM for the GPU is a game-changer.
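To make "VRAM is the limiting factor" concrete, here's a rough back-of-the-envelope sketch (my own numbers, not from the thread) of how much memory just the weights of a model need at different precisions:

```python
# Rough memory estimate for holding an LLM's weights (ignores activations
# and KV cache, which add more on top). Figures are illustrative.
def model_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB needed to hold the weights alone."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model at fp16 (2 bytes/param) needs ~140 GB of memory --
# far beyond a 24 GB consumer GPU, but within 192 GB of unified memory.
print(model_memory_gb(70, 2.0))  # -> 140.0
# 4-bit quantization (~0.5 bytes/param) brings it down to ~35 GB.
print(model_memory_gb(70, 0.5))  # -> 35.0
```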



> The ability of Apple Silicon GPUs to use system RAM for the GPU is a game-changer.

It'd be a lot more of a game-changer if Apple didn't continue its tradition of charging extortionate prices for RAM.


While I agree it's expensive, it's not quite an apples-to-apples comparison. GPU memory is very, very expensive: an A100 80GB costs $10k MSRP (>$15k on eBay), and an RTX 4090 only has 24GB of VRAM at $1,600. Even dual 3090s + NVLink for $1,600 only gets you up to 48GB, nowhere near the up-to-192GB you can access on Apple Silicon. The upgrade from 64GB to 192GB of RAM is $1,600. That's still expensive, of course, but 128GB of 800GB/s memory that your GPU can access for $1,600 is actually not bad value. It's not like just sticking an extra 128GB of RAM into your computer: dual-channel DDR5-6400 tops out around 102 GB/s, nowhere near M2 Ultra or GDDR6X.
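The 102 GB/s figure for dual-channel DDR5-6400 follows from the standard peak-bandwidth formula (transfer rate × 8-byte bus width per channel × channel count); a quick sketch of the arithmetic:

```python
# Peak theoretical bandwidth of DDR memory: MT/s x bytes per transfer on a
# 64-bit (8-byte) channel x number of channels. Real-world throughput is lower.
def ddr_bandwidth_gb_s(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Dual-channel DDR5-6400: 6400 x 8 x 2 = 102.4 GB/s,
# compared with ~800 GB/s of unified-memory bandwidth on the M2 Ultra.
print(ddr_bandwidth_gb_s(6400, 2))  # -> 102.4
```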




