Well, at least in the U.S., a lot of used 3090s are going for less than $800 on eBay, which is a steal for 24 GB of VRAM. The 4090 is about $1,600 and is still a consumer-level card; gamers are buying it more often than the 4080. For ML workloads it's actually a pretty good deal considering how expensive Nvidia's higher-end, non-consumer offerings are. Lambda Labs did some benchmarking: https://lambdalabs.com/blog/nvidia-rtx-4090-vs-rtx-3090-deep...
But yeah, the M2 MacBooks are incredible for local LLMs at their price. Nvidia doesn't have any consumer-priced accelerators with that much memory.