
Ah yes, this reminds me! I forgot to mention it, but the local NLP models we run are in the range of 100 million parameters, so they can run on CPU (no GPU required!) with pretty low latency.
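
For a sense of scale, here's a minimal sketch of CPU-only inference with a similarly sized model, using the Hugging Face transformers library. The comment doesn't name the actual models or stack, so the model choice here is purely illustrative:

    # Hypothetical example -- the comment doesn't say which models or
    # libraries are actually used. distilbert-base-uncased-finetuned-sst-2
    # is ~66M parameters, in the same general ballpark.
    from transformers import pipeline

    # device=-1 forces CPU inference; no GPU required.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
        device=-1,
    )

    print(classifier("Runs fine on CPU with low latency."))

At this parameter count, single-sentence inference on a modern CPU typically completes in tens of milliseconds, which is consistent with the "pretty low latency" claim.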

Also, a fun tidbit on the connectors: more than half of them are now built by open-source contributors! We just expose an interface that needs to be implemented, and people have generally been able to figure it out.
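
The actual interface isn't shown in the thread, but a connector contract like this often looks something like the following sketch (all names here are hypothetical, just to illustrate the "implement one interface" idea):

    # Hypothetical sketch -- the real connector interface isn't shown in
    # the comment; class and method names are illustrative only.
    from abc import ABC, abstractmethod
    from typing import Iterator

    class Connector(ABC):
        """Contract a contributor implements to add a new data source."""

        @abstractmethod
        def authenticate(self, credentials: dict) -> None:
            """Validate credentials against the upstream service."""

        @abstractmethod
        def fetch_documents(self) -> Iterator[dict]:
            """Yield documents from the source for indexing."""

Keeping the surface area to a couple of required methods is what makes it feasible for outside contributors to add sources without understanding the rest of the system.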


