
How many of us remember that at the beginning of last year the fear was that programming by programmers would be obsolete by 2024 and LLMs would be doing all the work?

How much has changed?



I remember some people saying things vaguely, but not explicitly, in that direction, but given that OpenAI's stance was "we're not trying to make bigger models for now, we're trying to learn more about the ones we've already got and how to make sure they're safe", I dismissed them as fantasists.

What has happened is that GPT-4 came out (which is certainly better in some domains, but not everywhere), but mainly the models have become much cheaper and slightly easier to run, and people are pairing LLMs with other things rather than using them as a single solution for all possible tasks — which they probably could do in principle if scaled up sufficiently, but there may well not be enough training data, and there certainly aren't computers with enough RAM.

And, as with self-driving cars, we've learned a lot of surprising failure modes.

(As I'm currently job-hunting, I hope what I wrote here is true and not just… is "copium" the appropriate neologism?)


The conclusion in the blog post says it all:

> I regret to say it, but it's true: most of today's programming consists of regurgitating the same things in slightly different forms. High levels of reasoning are not required. LLMs are quite good at doing this, although they remain strongly limited by the maximum size of their context. This should really make programmers think. Is it worth writing programs of this kind? Sure, you get paid, and quite handsomely, but if an LLM can do part of it, maybe it's not the best place to be in five or ten years.


>> I regret to say it, but it's true: most of today's programming consists of regurgitating the same things in slightly different forms.

I wonder how different this would be if software was not hindered by "intellectual property" laws.


I wouldn't call myself an expert, but my gut tells me we're close to a local maximum when it comes to the core capabilities of LLMs. I might be wrong, of course. If I'm right, I don't know when or if we'll get out of that. But it seems the work of putting LLMs to good use is going to continue for the next few years regardless. I imagine hybrid systems combining traditional deterministic IDE features with LLMs could become far more powerful than what we have today. I think for the foreseeable future, any system that's supposed to be reliable and well understood (most software, I hope) will require people willing and able to understand it; that, in my mind, is the core thing programmers are and will continue to be needed for. But anyway: I do expect fewer programmers will be needed if demand remains constant.
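
A minimal sketch of what such a hybrid could look like, in Python. The llm_suggest stub and accept_suggestion helper are hypothetical names made up for illustration, not any real IDE or model API; the point is simply that a deterministic check gates whatever the model produces.

    import ast

    def llm_suggest(prompt: str) -> str:
        # Hypothetical stand-in for a call to an LLM completion API;
        # a real system would query an actual model here.
        return "def add(a, b):\n    return a + b\n"

    def accept_suggestion(prompt: str, max_attempts: int = 3) -> str | None:
        # Ask the model for code, but only accept it if it passes a
        # deterministic gate (here: it must parse as valid Python).
        # A real IDE integration could add type checks, linters, or tests.
        for _ in range(max_attempts):
            candidate = llm_suggest(prompt)
            try:
                ast.parse(candidate)  # repeatable, deterministic check
                return candidate
            except SyntaxError:
                continue  # reject and re-prompt
        return None

    print(accept_suggestion("write an add function"))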

As for demand, that's difficult to predict. I'd argue a lot of the software being written today doesn't really need to be written. Lots of weird ideas were being tried because the money was there, chasing one new hype after another, with an entire sub-industry building ever more specialised tools to fuel all this. And with all that growth, ever more programmers have been thrown at dysfunctional organisations to get a little more work done. My gut tells me we'll see less of that in the next few years, but I feel even less competent to predict where the market will go than where the tech will go.

So, long story short, I guess we'll still need programmers until there's a major leap towards AGI, but fewer than today.


The compiler is still not part of the picture. When LLMs start being able to produce binaries straight from prompts, then programmers will indeed be obsolete.

This is the holy grail of low-code products.


Why is an unauditable result the holy grail? Is the goal to blindly trust the code generated by an LLM, with at best a suite of tests that can only validate the surface of the black box?


Money. Low-code is the holy grail because businesses would no longer need IT folks, or at the very least could reduce the number of FTEs they need to care about.

See all the SaaS products without any access to their implementation, programmable via graphical tooling or orchestrated via Web API integration tools, e.g. Boomi.


Is it no different to you when the black box is created by an LLM rather than a company with guarantees of service and a legal entity you can go after in case of breach of contract?

Where does the trust in a binary spat out by an LLM come from? The binary is likely unique, so your trust can't be based on other users' experience; there likely isn't any financial incentive or risk on the part of the LLM should the binary have bugs or vulnerabilities; and you can't audit it even if you wanted to.


As usual, this kind of thing will get sorted out, and developers will have to search for something else.

QA, acceptance testing, whatever: no different from buying closed-source software.

Only those who never observed the replacement of factory workers by fully robot-based assembly lines can think this will never happen to them.

Here is a taste of the future:

https://www.microsoft.com/en-us/power-platform/products/powe...


Assembly line robots are still a bit different from LLMs directly generating binaries though, right?

An assembly-line robot is programmed with a very specific, repeatable task that can easily be quality-tested to ensure there aren't manufacturing defects. An LLM generating binaries does so as a one-off, meaning it isn't repeatable, and the logic of the binary isn't human-auditable, meaning we have to trust that it does what was asked of it and nothing more.


This is the same line of argument that assembly-language developers made against FORTRAN compilers and the machine code they could generate.

There are ACM papers about it.

It didn't hold up.

Do you really inspect the machine code generated by your AOT or JIT compilers, in every single execution of the compiler?

Do you manually inspect every single binary installed into the computer?


There's a fundamental difference between a compiler and a generative LLM, though. One is predictable, repeatable, and testable. The other will answer the same question slightly differently every time it's asked.

Would you trust a compiler's bytecode if it spat out slightly different instructions every time you gave it the same input? Would you feel confident in the reliability and performance of the output? How can you meaningfully debug or profile your program when you don't know what the LLM did and can't reproduce the issue locally, short of running an exact copy of the deployed binary?

Comparing compilers and LLMs really is apples and oranges. That doesn't mean LLMs aren't sometimes helpful or that they should never be used, but LLMs are fundamentally a bad fit for the requirements of a compiler.
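
To make the reproducibility point concrete, here's a toy Python sketch. Both functions are invented stand-ins for illustration, not real compiler or LLM APIs: a compiler behaves like a pure function of its input, while sampling from a model generally does not.

    import hashlib
    import random

    def toy_compile(source: str) -> bytes:
        # Stand-in for a compiler: a pure function of its input, so the
        # same source always yields byte-identical output.
        return hashlib.sha256(source.encode()).digest()

    def toy_llm_generate(prompt: str) -> bytes:
        # Stand-in for sampling an LLM at nonzero temperature: the output
        # depends on the prompt *and* on random sampling, so runs differ.
        rng = random.Random()  # unseeded: fresh entropy on every call
        return bytes(rng.getrandbits(8) for _ in range(32))

    source = "print('hello')"

    # Two compiler runs are byte-identical, so output can be cached,
    # diffed, and reproduced while debugging.
    assert toy_compile(source) == toy_compile(source)

    # Two "generation" runs almost never match, so a bug seen in one
    # deployed binary may not be reproducible from the same prompt.
    print(toy_llm_generate(source) == toy_llm_generate(source))  # almost always False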


So who is instructing the LLMs on what sort of binaries to produce? Who is testing the binaries? Who is deploying them? Who is instructing the LLMs to perform maintenance and upgrades? You think the managers are up for all that? Or the customers who don’t know what they want?


Just like offshoring today: you take the developers out of the loop and keep the PO, architects, and QA.

Instead of warm bodies somewhere on the other side of the planet, it's an LLM.


Nobody of any interest said this. This is something you are saying now using a thin rhetorical strategy meant to make you look correct over an opponent that doesn't exist.


That's like the people saying, "they said the ice caps would melt, ha, hasn't happened, all fake". Meanwhile, nobody said that.


Can't say I saw anyone thinking programmers would be obsolete by 2024...


I remember 10 years ago when the fear was that cheaper programmers in developing countries (India mostly) would be doing all the programming.

It's just a scam to keep you scared and stop you from empathizing with your fellow workers.


Outsourcing was more of a threat than AI, and a lot of jobs really did move. It's still a real thing; not that many programming jobs moved back to the States.


That's legit. I've managed to dodge it, but many jobs have moved overseas. Many of my coworkers over the past few years have been contractors living in other countries.

This is what happened to America's manufacturing industry. Shouldn't empathizing with fellow workers mean recognizing the pattern instead of dismissing it as FUD?


I don't think it's a scam or a conspiracy. It's human nature to worry, and when given a reasonable-sounding but scary idea, we tend to spread it to others.


> It's just a scam to keep you scared and stop you from empathizing with your fellow workers.

I am quite unconvinced this is the reason. Seems rather conspiratorial.


So far I've seen bad programmers create more (and possibly worse) bad code, and good ones use LLMs to their advantage.



