
The 4080 is still a 16GB part, and still the 2nd-fastest part Nvidia sells at the consumer level (it's a $1,200 part). The 3090 (for however long it remains on sale) and the 4090 are outliers.

The obvious counterpoint is an M2 MacBook in the 14/16-inch size, where, wallet permitting, you can have up to 96GB of unified memory too...

I simply chose the Air in my earlier example because it's incredible that you can get that level of performance in ostensibly an entry-level, passively cooled machine.



Well, at least in the U.S., a lot of used 3090s are going for less than $800 on eBay, which is a steal for 24GB of VRAM. The 4090 is about $1,600 and it's still a consumer-level card; gamers are buying it more often than the 4080. For ML workloads, it's actually a pretty good deal considering how expensive Nvidia's higher-end, non-consumer offerings are. Lambda Labs did some benchmarking: https://lambdalabs.com/blog/nvidia-rtx-4090-vs-rtx-3090-deep...

But yeah, the M2 MacBooks are incredible for local LLMs at their price. Nvidia doesn't have any consumer-priced accelerators with that much memory.
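For a sense of what those memory numbers buy you, here's a rough back-of-the-envelope sketch (my own numbers, not anything measured; the 1.2x overhead factor for KV cache and activations is a guess):

    # Rough estimate of weight memory for common LLaMA sizes at
    # different quantization levels. Bytes-per-parameter values are
    # standard; the 1.2x overhead factor (KV cache, activations,
    # runtime) is an assumption, not a measurement.
    GIB = 1024**3

    models = {"7B": 7e9, "13B": 13e9, "33B": 33e9, "65B": 65e9}
    bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

    for name, params in models.items():
        line = ", ".join(
            f"{q}: ~{params * bpp * 1.2 / GIB:.0f} GiB"
            for q, bpp in bytes_per_param.items()
        )
        print(f"{name}  {line}")

By that estimate, a 65B model at 4-bit needs roughly 36 GiB: past any 24GB card, but comfortably inside a 96GB unified-memory machine.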


You don't have that level of performance, though. Not even close, really. RAM amount is only one piece of the puzzle.
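To make that concrete: single-stream LLM decoding is largely memory-bandwidth bound, since each generated token streams the full set of weights through memory roughly once. A crude ceiling, using approximate published bandwidth specs (and ignoring compute, software stack, and everything else that also differs):

    # Crude upper bound on single-stream decode speed: each token
    # reads the (quantized) weights once, so tokens/sec can't exceed
    # bandwidth / model_bytes. Bandwidth figures are approximate
    # published peaks; real throughput lands well below these.
    bandwidth_gb_s = {
        "RTX 4090": 1008,
        "RTX 3090": 936,
        "M2 Max": 400,
        "M2 (base)": 100,
    }

    model_bytes = 13e9 * 0.5  # a 13B model at 4-bit ~= 6.5 GB

    for device, bw in bandwidth_gb_s.items():
        print(f"{device}: <= ~{bw * 1e9 / model_bytes:.0f} tokens/sec")

So a base M2 with plenty of RAM still tops out at around a tenth of a 4090's ceiling on the same model: capacity decides what you can load, bandwidth largely decides how fast it runs.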


I don't know if a 24GB Air qualifies as "entry level".



