
I have been using the API. There are conflicting reports in this thread that seem to indicate it may also be affected. I am not sure.


Did you set the model code to gpt-4-0314?

I did, and I still get the original speed (it produces tokens at roughly the pace you would read aloud), and I haven't noticed any change in quality.
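For anyone unfamiliar with pinning a model version: a minimal sketch of what "setting the model code" looks like with the OpenAI Python client, assuming you have `OPENAI_API_KEY` in your environment (the prompt here is just a placeholder).

```python
from openai import OpenAI

# Reads OPENAI_API_KEY from the environment.
client = OpenAI()

# Requesting the dated snapshot "gpt-4-0314" instead of the "gpt-4" alias,
# so the request is served by that specific model version rather than
# whatever the alias currently points to.
response = client.chat.completions.create(
    model="gpt-4-0314",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```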



