
This feels achievable in five years.


It always feels achievable in five years. People were saying exactly this 30 years ago.

Sooner or later it may (or may not) be a true statement, but it's awfully hard for me to say that it's any different right now than it has been before.


I've had ChatGPT write code from vague statements that got close enough that it'd take the typical intern days of research to figure out. I've also had it fail spectacularly before being prompted more extensively. But there are already tasks today I'd rather hand off to ChatGPT than to an intern, because it does the job faster and is able to correct misunderstandings and failures far faster.

E.g. I posted a while back how I had it write the guts of a DNS server. It produced a rough outline after the first request, and would fill it out bit by bit as I asked it to elaborate or adjust specific points. The typical intern would not know where to start, and I'd need to point them to the RFCs; they'd go off and read them and produce something overwrought and complex (I've seen what even quite experienced software devs produce when given that task, and I know how much work it took me the first time I did it).
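To give a sense of what "the guts of a DNS server" involves (this is a minimal illustrative sketch of the wire-format parsing described in RFC 1035, not the code ChatGPT actually produced):

```python
import struct

def parse_header(data: bytes) -> dict:
    # DNS header is 12 bytes: id, flags, then four 16-bit counts (RFC 1035 §4.1.1).
    ident, flags, qd, an, ns, ar = struct.unpack("!6H", data[:12])
    return {"id": ident, "flags": flags, "qdcount": qd,
            "ancount": an, "nscount": ns, "arcount": ar}

def parse_qname(data: bytes, offset: int = 12) -> tuple[str, int]:
    # A name is a sequence of length-prefixed labels terminated by a zero byte.
    # (Real servers also have to handle compression pointers, RFC 1035 §4.1.4.)
    labels = []
    while data[offset] != 0:
        length = data[offset]
        labels.append(data[offset + 1:offset + 1 + length].decode("ascii"))
        offset += 1 + length
    return ".".join(labels), offset + 1

# A hand-built query for example.com, type A, class IN:
query = (struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
         + b"\x07example\x03com\x00"
         + struct.pack("!2H", 1, 1))

header = parse_header(query)
name, _ = parse_qname(query)
```

Even this toy version shows why a newcomer gets lost: the format is binary, big-endian, and full of special cases (compression, EDNS, truncation) that only the RFCs spell out.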

So it may not exactly replace an intern, in that there are classes of problems requiring low-level reasoning, or a willingness and ability to go off and research, that it's just not set up for yet and that will be harder to replace. But the problem set will change, in two ways: what gets handed to interns will be the things where LLMs don't produce good results fast enough (I wouldn't ask an intern to do something ChatGPT can do well with little prompting), and interns will be more likely to go off, learn a bit, and then spend more of their time prompting LLMs, and in that sense produce more value than they could before.


What's different is that each subtask now feels like a weekend hackathon. Plus a bunch of engineering to ensure high quality results and build the appropriate UI/UX for human drivers.

You could solve 5% of cases now and over five years drive it up beyond 100% (where you're getting new customers and startups that had never even tried the previous methods).


What software do people envision themselves creating with these AI slaves?

More shitty CRUD apps? Which get easier and easier to pump out every day with the growing number of frameworks, libraries, and snippets copied and pasted from Stack Overflow?

Or will AI really write all the code for all our critical systems, operating systems, financial markets, planes, factories, vehicles, spacecraft? And will they do it so confidently and accurately that humans can safely just forget how to code?

Sure, perhaps. But by then AI will also be so advanced and independent in its problem solving that it will have no need to listen to human prompts.

I don’t really see the point in that.




