
> "If you squint a bit and look into the near future it's not so hard to imagine a future Mx chip with a more capable Neural Engine and yet more RAM, and able to run the largest GPT3 class models locally. (Ideally with better developer tools so other compilers can target the NE)"

Very doubtful, unless the user wants to carry around another kilogram's worth of batteries to power it. The hefty processing these models require doesn't come for free (energy-wise), and Moore's Law is dead as a doornail.



Most of the time I have my laptop plugged in and sit at a desk...

But anyway, there are two trends:

- processors do more with less power

- LLMs get larger, but also smaller and more efficient (via quantizing, pruning)
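
To make the quantization point concrete, here's a rough sketch (in Python/NumPy, using a made-up 4096x4096 weight matrix rather than any specific model) of what plain per-tensor int8 quantization does to memory use:

    import numpy as np

    # Hypothetical fp32 weight matrix (size chosen for illustration only)
    w = np.random.randn(4096, 4096).astype(np.float32)

    # Symmetric per-tensor int8 quantization: one fp32 scale + int8 weights
    scale = np.abs(w).max() / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

    # Dequantize back to fp32 when the weights are actually used
    w_hat = w_q.astype(np.float32) * scale

    print(w.nbytes / 1e6, "MB as fp32")    # ~67 MB
    print(w_q.nbytes / 1e6, "MB as int8")  # ~17 MB, ~4x smaller
    print("max abs rounding error:", np.abs(w - w_hat).max())

Real LLM quantization schemes are fancier (per-group scales, 4-bit formats, outlier handling), but the memory-versus-accuracy trade-off is the same idea.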

Once upon a time it was prohibitively expensive to decode compressed video on the fly; later, CPUs (both Intel [1] and Apple [2]) added dedicated decoding hardware. Now watching hours of YouTube or Netflix is part of standard battery-life benchmarks.

[1] https://www.intel.com/content/www/us/en/developer/articles/t...

[2] https://www.servethehome.com/apple-ignites-the-industry-with...


My latest Mac already seems to have about a kilogram of extra battery compared to the previous model.



