
Same here. A small variation: I explicitly use the website to manage what context it gets to see.


What do you mean by website? An HTML doc?


I mean the websites of the AI providers: chatgpt.com, gemini.google.com, claude.ai, and so on.


I’ve had more success this way as well. I use the model via the web UI, paste in the relevant code, and ask it to implement something. It spits out the code, I copy it back into the IDE, and build. I tried Claude Code, but I find it goes off the rails too easily. I like the chat through the UI because it explains what it’s doing like a senior engineer would.
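That manual paste-in step is easy to script. A minimal sketch (the helper name and file headers are my own invention, not any provider's API) that concatenates only the files you choose into one paste-able block, so the model sees exactly the context you picked:

```python
from pathlib import Path


def build_context(paths):
    """Concatenate hand-picked files, each prefixed with its path,
    into a single string ready to paste into a chat UI."""
    parts = []
    for p in paths:
        p = Path(p)
        # Label each file so the model knows where each snippet lives.
        parts.append(f"### {p} ###\n{p.read_text()}")
    return "\n\n".join(parts)


if __name__ == "__main__":
    import sys
    # Usage: python build_context.py src/app.py src/utils.py | pbcopy
    print(build_context(sys.argv[1:]))
```

Piping the output into the clipboard (pbcopy, xclip, or similar) makes the round trip from editor to web UI a one-liner.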


Well, this is the way we have been able to do it for two years already, but you are basically acting as the transport layer for the process, which cannot be efficient. If you really want tight control over exactly what the LLM sees, then that's still an option. But you only get so far with this approach.



