Hacker News
Run Llama locally on CPU with minimal APIs between you and the model (github.com/anordin95)
3 points by anordin95 on Sept 30, 2024

