
It's apples to oranges, but here we go (rough cost-per-unit math is sketched after the links):

            Transistors  Memory      Bandwidth  TFLOPS  Price
  M2 Ultra  134bn        192GB RAM    800GB/s    3?      $9,600
  H100       80bn        188GB VRAM   900GB/s   60      $40,000
  A100       54bn         80GB VRAM   600GB/s   20      $10,000
  4090       76bn         24GB VRAM  1000GB/s   82       $1,400
[1] https://www.nvidia.com/en-us/data-center/h100/

[2] https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#A10...

[3] https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...
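
For a ballpark sense of cost per TFLOP and per GB, here is a minimal Python sketch that reuses the table's figures verbatim. Note the M2 Ultra TFLOPS entry is the uncertain "3?" from the table, and a sibling comment quotes $5,599 for the 192GB Mac Studio rather than $9,600, so treat the output as illustrative only.

  # Cost-per-unit comparison using the (apples-to-oranges) figures above.
  specs = {
      # name: (memory GB, TFLOPS, price USD) copied from the table
      "M2 Ultra": (192,  3.0,  9600),   # TFLOPS is the table's uncertain "3?"
      "H100":     (188, 60.0, 40000),
      "A100":     ( 80, 20.0, 10000),
      "4090":     ( 24, 82.0,  1400),
  }

  for name, (mem_gb, tflops, price) in specs.items():
      print(f"{name:9s} ${price / tflops:8,.0f} per TFLOP   ${price / mem_gb:6,.0f} per GB of memory")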



M2 Ultra on a Mac Studio with 192GB is $5,599 (60-core GPU) or $6,599 (76-core GPU).


The H100 has ~16k CUDA cores, but of course that's a completely different kind of core.


I really want to see how that fares against Nvidia for ML development; it's hard to compete with that kind of flexibility at that price.
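
To make the memory angle concrete, here is a back-of-the-envelope sketch using the memory figures from the table above. It assumes plain fp16 weights at 2 bytes per parameter and ignores activations, KV cache, and framework overhead, so real requirements are higher.

  # Which devices can hold a model's weights entirely in memory?
  def weights_gb(params_billion, bytes_per_param=2):
      # 1e9 params * bytes per param / 1e9 bytes per GB
      return params_billion * bytes_per_param

  devices = {"M2 Ultra (192GB)": 192, "A100 (80GB)": 80, "4090 (24GB)": 24}
  for model_b in (7, 13, 70):
      need = weights_gb(model_b)
      fits = [name for name, cap in devices.items() if need <= cap]
      print(f"{model_b}B params ~ {need:.0f}GB fp16 -> fits on: {', '.join(fits) or 'none of these'}")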


Cerebras's Wafer Scale Engine Two (WSE-2) has 2.6 trillion transistors.

https://en.wikipedia.org/wiki/Cerebras

https://www.anandtech.com/show/16626/cerebras-unveils-wafer-...


Somebody please explain why the 4090 looks so close to the H100 here; something seems off.




