Hacker News

The programmers who will find LLMs most useful are going to be those who, prior to LLMs, were copying and pasting from Stack Overflow and asking questions online about everything they were doing - tasks that LLMs have precisely replaced (they have now memorized all that boilerplate code, consensus answers, and API usage examples).

The developers who will find LLMs the least useful are the "brilliant" ones who never found any utility in any of that stuff, partly because they are not reinventing the wheel for the 1000th time, but instead addressing more challenging and novel problems.



It's very much the opposite.

LLMs free me from the nuts and bolts of the "how". For example, I don't have to manually type out a loop: I just write a comment and the loop magically appears. Sometimes I don't even have to prompt it at all.
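A toy illustration of the kind of comment-to-code completion being described (the function and its contents are mine, not from the comment - just the sort of boilerplate loop an assistant will fill in from a descriptive comment alone):

```python
# Given only a comment like the one below, an LLM assistant will
# typically generate the whole loop body unprompted.

# sum the even numbers in nums
def sum_evens(nums):
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n
    return total

print(sum_evens([1, 2, 3, 4, 5, 6]))  # 12
```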

With my brain freed from the drudgery of everyday programming, I have more mental cycles to dedicate to higher concerns such as overall architecture, and I'm just way more productive.

For experienced programmers this is a godsend.

Less experienced developers lack the ability to mentally "see" how software should be architected in a way that balances the concerns, so writing a loop a bit faster isn't as much of an advantage. They also lack the reflexes to instantly decide whether generated code is correct.

LLMs are limited by the user's decision speed: the LLM generates code for you, but you have to decide whether to accept or reject it. If it takes me 1 second to decide to accept code that would have taken me 10 seconds to physically type, then I'm saving 9 seconds, which really adds up. For a junior developer, LLMs may yield negative productivity if deciding whether the LLM's version is correct takes longer than typing whatever they were going to write in the first place.
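The arithmetic in this argument can be sketched as a toy model (the function and parameter names are mine, purely illustrative, using the 1s/10s figures from the comment):

```python
def net_seconds_saved(decision_time_s, typing_time_s):
    """Toy model of per-suggestion savings: the typing time avoided,
    minus the time spent deciding whether to accept the suggestion.
    A negative result means reviewing cost more than typing would have."""
    return typing_time_s - decision_time_s

print(net_seconds_saved(1, 10))   # experienced dev: 9 seconds saved
print(net_seconds_saved(15, 10))  # junior dev: -5, a net loss
```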


> LLMs are limited by the user's decision speed

This is obviously the critical point. It's not whether the LLM can do something if you give it a go, but whether that actually saves you time. If it takes longer to verify the LLM's code for correctness than to write it yourself, then there is no productivity gain.

I guess this partly also hinges on how much you care about correctness beyond "does it seem to work". For a prototype maybe that's enough, but for work use you should probably check for API "contractual correctness", corner cases, vulnerabilities, etc - anything you didn't explicitly specify to the LLM (or even things you did!). If you are writing the code yourself, these multifaceted requirements are all in your head, but with an LLM you'll need to spell them all out (or iterate and refine), and it may well have been faster just to code it yourself (cf. working with an intern with negative productivity).

If you fail to review the LLM's code thoroughly enough and leave bugs in it to be discovered later, maybe in production, then the cost of that, in both time and money, will far outweigh any saving over just having written it correctly yourself in the first place. Again, this is more of a concern for production code than for hobbyist or prototype stuff, but having to fix bugs is always slower than getting it right the first time.

For myself, it seems that for anything complex it's always the design that takes time, not the coding, and the coding in the end (once the detailed design has been worked out) just comes down to straightforward methods and functions that are mostly simple to get right first time. What would be useful, but of course does not yet exist, would be an AGI peer programmer that operated more like a human than a language model, who I could discuss the requirements and design with, and then maybe delegate the coding to as well.


I like to think I'm more of a "challenging and novel problems" developer than a "copy and paste from Stack Overflow" developer, and I've been finding LLMs extremely useful for over two years at this point.

Notes: https://simonwillison.net/tags/ai-assisted-programming/


Yeah, I was gonna say this is not how I see this going. The copy/paste dev is replaced by the novel dev using LLM for the stuff they used to hire interns and juniors for.

In law, this sort of thing already happened with the rise of better research tools. The work first-year associates used to do a generation ago just does not exist now. An attorney with experience gets the results faster on their own - with all the pipeline and quality-of-life issues that go with that.


That makes some sense, but seems to be answering a different question of whose jobs may be in jeopardy from LLMs, as opposed to who might currently find them useful.

Note though that not all companies see it this way - the telecom I work at is hoping to replace senior onshore developers with junior offshore ones leveraging "GenAI"! I agree that the opposite makes more sense - the seniors are needed, and it's the juniors whose work may be more within reach of LLMs.

I really can't see junior developer positions wholesale disappearing though - more likely them just leveraging LLM/AI-enhanced dev tools to be more productive. Maybe in some companies where there are lots of junior developers they may (due to increased productivity) need fewer in the future, but the productivity gains to be had at this point seem questionable ... as another poster commented, the output of an LLM is only as useful as the skill of the person reviewing it for correctness.


I think we all assume each individual company will need fewer developers to do the same work they're doing now. The question is whether they keep fewer devs or do more work. And if it's fewer devs, will that open the door for more small companies to be competitive, since they need fewer developers and face less competition for talent from people with deep pockets?

I find a lot of the AI discussion seems to land in the "lump of labor" fallacy camp though.


I am a skeptic. What would you say would be the easiest way for me to change my mind?


How many of the latter type are there? In my 15 years of experience, I would say 95%+ of all developers belong to your first category.


95% sounds way high, but maybe I'm wrong. I think it's partly generational - old-school programmers are used to having to develop algorithms etc. from scratch, while the younger generation seem to have been taught in school to be system integrators, assembling solutions out of cut-and-paste code and relying on APIs to get stuff done (with limited capability to DIY if such an API does not exist).

But not all younger programmers can be Stack Overflow cut-n-pasters, because not all (and surely not 95%!) programming jobs are amenable to that approach. There are lots of jobs where people are developing novel solutions, interacting with proprietary or uncommon hardware and software, etc, where the solution does not exist on Stack Overflow (and by extension not in an LLM trained on Stack Overflow).



