
I'm explicitly working on this in my startup's product (a GenAI-for-code tool).

The obvious answers: record the human's intent in the form of their prompt, and record the LLM's raw output (if you use a conversational LLM out of the box, its output almost always restates that intent even if you explicitly prompt it not to, lol). Of course, depending on your UX this may or may not work; for autocomplete there is no obvious user intent.
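
For what it's worth, here is a minimal sketch of what that kind of record could look like, assuming you just append to a JSON-lines log; all the names (GenerationRecord, log_generation, the log path) are made up for illustration, not anything from an actual product:

    # Hypothetical provenance record: keep the prompt and the raw model output
    # next to the code that was actually accepted into the repo.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from pathlib import Path

    PROVENANCE_LOG = Path("genai_provenance.jsonl")

    @dataclass
    class GenerationRecord:
        prompt: str         # the human's stated intent
        raw_output: str     # the LLM's full response, commentary included
        accepted_code: str  # what the human actually kept
        model: str
        timestamp: str
        code_sha256: str    # lets a committed hunk be matched back to this record

    def log_generation(prompt, raw_output, accepted_code, model):
        record = GenerationRecord(
            prompt=prompt,
            raw_output=raw_output,
            accepted_code=accepted_code,
            model=model,
            timestamp=datetime.now(timezone.utc).isoformat(),
            code_sha256=hashlib.sha256(accepted_code.encode()).hexdigest(),
        )
        with PROVENANCE_LOG.open("a") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return record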

There are additional approaches I'm exploring that require more intentional engineering; essentially they involve forcing "structure" so that more of the intent gets explicitly specified.
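
As a rough illustration of what I mean by forcing structure (purely a sketch, field names hypothetical): instead of a free-form prompt, the user fills in explicit fields, and that structured intent is what gets stored next to the generated change.

    # Hypothetical "structured intent" captured instead of a free-form prompt.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class IntentSpec:
        goal: str                                             # what the change should accomplish
        constraints: list[str] = field(default_factory=list)  # e.g. "no new dependencies"
        acceptance_criteria: list[str] = field(default_factory=list)  # how to check it worked

        def to_prompt(self):
            # Expand the structured intent into the prompt actually sent to the model.
            parts = ["Goal: " + self.goal]
            if self.constraints:
                parts.append("Constraints:\n" + "\n".join("- " + c for c in self.constraints))
            if self.acceptance_criteria:
                parts.append("Acceptance criteria:\n" + "\n".join("- " + c for c in self.acceptance_criteria))
            return "\n\n".join(parts)

    spec = IntentSpec(
        goal="Add retry with exponential backoff to the payment client",
        constraints=["no new dependencies"],
        acceptance_criteria=["existing tests still pass", "retries at most 3 times"],
    )
    stored_intent = json.dumps(asdict(spec))  # recorded alongside whatever code the model produces
    print(spec.to_prompt())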



To be clear, you're not working on what I pointed out; you're just doing the same thing. The prompt "may" encode intent, but that has no bearing on what code actually gets written, stored, or changed.

Think about this as well: you're creating processes that carry no responsibility. I am 100% responsible for all the code I write, even if I wrote it wrong, but if the code your tool generates is wrong, it's not my fault; I didn't write it. Multiply this by the hundreds of thousands of times this will happen in a given year, and by each employee.

Frankly, you should re-evaluate whether you even want your product in the world. What kind of future hellscape are you enabling?



