This is great work. I've always thought it would be great if running LLMs could be commoditized for regular average-Joe hardware. I had thought that llamafile was like a Dockerfile for llama.cpp, but it looks like that's a misconception?

Will definitely be giving this a try.


