Hacker News

They explain that they will be releasing longer context lengths in the future.

It’s better to make your RAG system work well on small context first anyway.



That's true when you're dealing with a domain that's well represented in the training data and your return type isn't complicated. But if you're doing anything nuanced, you can burn 10k tokens just getting the model to answer consistently and structure its output.




