
> I actually like the editable format of the chat interface because it allows fixing small stuff on the fly

Fully agreed. This was the killer feature of Zed (and locally hosted LLMs): delete all tokens after the first mistake spotted in the generated code, correct the mistake, and re-run the model. This greatly improved code generation in my experience. I am not sure cloud-based LLM APIs even allow modifying assistant output (I would assume not, since it would be a trivial way to bypass safety mechanisms).



The only issue I can imagine is losing prompt caching, which can increase the cost of API calls, though I am not sure prompt caching is even used in this kind of context in the first place. Otherwise you just send the "history" as JSON with each request; there is nothing mystical about LLM chats. If you use the API directly, you can send whatever context you want and have the model complete it.
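Roughly like this, as a minimal sketch against an OpenAI-compatible chat endpoint (the URL, model name, and message contents below are placeholders, not any particular provider's setup):

    import requests

    # The entire "chat" is just this list of messages, re-sent in full on
    # every request; nothing server-side prevents you from editing any of it.
    messages = [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a function that parses the config file."},
    ]

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # placeholder: any OpenAI-compatible server
        json={"model": "local-model", "messages": messages},
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])

    # Prompt caches (where offered) are prefix caches over this payload, so
    # editing an earlier message changes the prefix and forfeits any cached
    # tokens from that point onward.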


> I am not sure if cloud-based LLMs even allow modifying assistant output.

In general they do. For each request, you include the complete context as JSON, including previous assistant output. You can change that as you wish.
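For instance, Anthropic's Messages API treats a trailing assistant message as a prefill that the model continues from, which is essentially the truncate-and-correct workflow described above. A minimal sketch, assuming the Anthropic Python SDK (the model name and code snippet are placeholders):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Generated code truncated at the first mistake and corrected by hand;
    # sent as the final *assistant* turn, so the model continues from this prefix.
    corrected_prefix = "def parse_config(path: str) -> dict:\n    import json\n    with open(path) as f:\n        return"

    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Write a function that parses the config file."},
            {"role": "assistant", "content": corrected_prefix},
        ],
    )
    print(corrected_prefix + resp.content[0].text)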



