
> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.

I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need to save.

I get more out of the LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write, suggest improvements, try to spot bugs and so on (which I also find useful).

Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute - and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I have the most success being more granular in what I ask of it.

My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.

Of course, you can also "vibe code" (what an awful term) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code with some minimum quality.



>Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Yep, but this is much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation, all that busy-work. LLMs make mistakes, but with Gemini 2.5 Pro at least, most of these are due to under-specification, and you get better at specification over time. It's like the LLM is a C compiler developer and you're writing the C spec; if you don't specify something clearly, it's undefined behaviour and there's no guarantee the LLM will implement it sensibly.
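
To make that concrete, here's a minimal C sketch of the kind of under-specification the analogy points at (purely illustrative; the exact output depends on your compiler and optimisation flags):

    #include <limits.h>
    #include <stdio.h>

    /* Signed integer overflow is undefined behaviour in the C spec,
       so the compiler is free to assume it never happens. */
    int still_bigger(int x) {
        return x + 1 > x;  /* gcc/clang at -O2 typically fold this to 1 */
    }

    int main(void) {
        /* With optimisations this usually prints 1; without, often 0
           (INT_MAX + 1 wraps on typical two's complement hardware). */
        printf("%d\n", still_bigger(INT_MAX));
        return 0;
    }

The spec never says what should happen here, so every toolchain picks its own answer - much like an LLM filling in the parts of a prompt you left unspecified.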

I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.


> I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.

It's always the easy cop-out for whoever wants to hype AI. You can preface it with "I'd go so far as to say", but that is just a silly cover for the actual meaning.

Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity also takes time.

Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.


GP's main point is that you (need to) learn to specify (and document) very, very well. That has always been a huge factor for productivity, but due to the fucked up way software engineering is often approached at an organisational level, we're collectively very much used to "winging it": documentation is an afterthought, specs are half-baked, and the true store of knowledge about the codebase is in the heads of several senior devs who have to hold the hands of other devs all the time.

If you do software engineering the way you learned you were supposed to do it long, long ago, the process actually works pretty well with LLMs.


> GP's main point is that you (need to) learn to specify (and document) very, very well. That has always been a huge factor for productivity

That's the thing, I do. At work we keep numerous diagrams, dashboards and design documents. They help me and the other developers understand and keep a good mental model of the system, but they don't help LLMs all that much. The LLMs won't understand our dashboards or diagrams. They could read the design documents, but that wouldn't stop them from making the mistakes they make when coding, and it definitely would not reduce my need to review the code they produce.

I said it before, I'll say it again - I find it unacceptable in a professional setting not to properly review the code LLMs produce, because I have seen the sort of errors they make (and I have access to the latest Claude and Gemini models, which I understand to be the top models as of now).

Are they useful? Yeah. Do they speed me up? Sort of, especially when I have to write a lot of boring code (as mentioned before, glue code for APIs, unit tests, scaffolding for new modules, etc). Are the productivity gains massive? Not really, due to the nature of how it generates output, and mainly due to the fact that writing code is only part of my responsibilities, and frequently not the one that takes up most of my time.


Do you have any example prompts showing the level of specificity and task difficulty you usually work with? I oscillate between finding them useful and finding it annoying to get output that is actually good enough.

How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?



