Hacker News

What would be particularly nice is if it became the norm for LLM users to supply not only the log of prompts that produced a given output, but also the specific LLM used, making the entire process reproducible and verifiable.

Of course, this would possibly exclude using SaaS-based LLMs like ChatGPT in places like schools, so it might make sense to require students to use only open ones. Alternatively, OpenAI could provide a verification service whereby a prompt could be checked against the output it supposedly produced at some point in the past (even if the chatbot's behavior had since changed).
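As a minimal sketch of what such a verifiable record might look like: bundle the model identifier, the prompt log, and the output, then hash a canonical serialization so a verification service could compare the digest against its own archived transcripts. Everything here (the function name, the model identifier, the record layout) is illustrative, not any real provider's API.

```python
import hashlib
import json

def provenance_record(model_id: str, prompts: list[str], output: str) -> dict:
    """Build a hypothetical provenance record for an LLM interaction.

    A verification service holding archived transcripts could recompute
    the digest and confirm the output really came from this model and
    prompt log, even if the model's behavior has since changed.
    """
    payload = {"model": model_id, "prompts": prompts, "output": output}
    # Canonical JSON (sorted keys, no extra whitespace) so the same
    # interaction always serializes, and therefore hashes, identically.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    record = dict(payload)
    record["digest"] = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return record

record = provenance_record(
    "example-model-2024-01",           # hypothetical model identifier
    ["Summarize the causes of WWI."],  # full prompt log
    "The main causes were ...",        # model output
)
print(record["digest"])
```

Note this only pins down *what* was said, not that the model would say it again: sampling temperature and ongoing model updates make exact replay unreliable, which is exactly why a provider-side archive check (rather than re-running the prompt) would be needed.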


