Hacker News

GPT is really bad at optimizing prompts this way because it has no ability to simulate the effects of a change; they're far too complex. Tools like this need to log outputs and A/B test.

GPT can be layered and made into an agent, etc., to do the A/B testing, or to extend prompts by adding more edge cases over time. But the effects of a single-word change are far too complex for GPT's base output to understand anything about.
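A minimal sketch of what logging and A/B testing two prompt variants over a shared set of cases could look like. Everything here is made up for illustration: `call_model` is a deterministic stub standing in for a real LLM API call, and the pass criterion is just a toy string check.

```python
def call_model(prompt: str, case: str) -> str:
    """Stub for an LLM call; a real tool would hit an actual model API here."""
    # Toy behavior: the "concise" prompt fails on longer inputs,
    # simulating how a single-word change can break some cases.
    if "concise" in prompt and len(case) > 20:
        return "FAIL"
    return "OK"

def ab_test(prompt_a, prompt_b, cases, passed):
    """Run both prompt variants over the same cases; return per-variant pass rates."""
    def score(prompt):
        return sum(passed(call_model(prompt, c)) for c in cases) / len(cases)
    return score(prompt_a), score(prompt_b)

cases = ["short input", "a much longer input that exceeds the length limit"]
rate_a, rate_b = ab_test("Be concise.", "Answer fully.", cases,
                         passed=lambda out: out == "OK")
print(rate_a, rate_b)  # fraction of cases each variant passed
```

The point is only the shape: log every (prompt, case, output) triple, score both variants on the same cases, and let the data, not the model's self-assessment, decide which prompt wins.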



I'm sure it could be improved, including telling it to do what you suggest. Have you tried it as is though?


Yes, I used it. The optimized prompt was not better for my use case. The playground was useful, though. I believe prompt optimization only really works by running a prompt through many scenarios and understanding how changing a single word affects things down the line, and then applying a bunch of hardcoded conditions that change the system/assistant messages on demand as an output of the tool.
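A sketch of what those "hardcoded conditions to change the system/assistant messages on demand" might look like, assuming a simple rule table; the rules, messages, and `system_message` helper are all hypothetical.

```python
# Hypothetical rule table: each entry pairs a condition on the user's
# input with the system message to use when that condition matches.
RULES = [
    (lambda q: "code" in q.lower(), "You are a careful programming assistant."),
    (lambda q: len(q) > 200,        "Summarize the request before answering."),
]
DEFAULT = "You are a helpful assistant."

def system_message(question: str) -> str:
    """Return the first matching system message for this input."""
    for condition, message in RULES:
        if condition(question):
            return message
    return DEFAULT

print(system_message("Fix this code snippet"))
```

Instead of one "optimized" prompt, the tool's output is this table: conditions learned from running many scenarios, hard-wired so the right system message is picked per request.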



