
The proper use of these systems is to treat them like an intern or new grad hire. You can give them the work that none of the mid-tier or senior people want to do, thereby speeding up the team. But you will have to review their work thoroughly because there is a good chance they have no idea what they are actually doing. If you give them mission-critical work that demands accuracy or just let them have free rein without keeping an eye on them, there is a good chance you are going to regret it.


What an awful way to think about internships.

The goal is to help people grow, so they can achieve things they would not have been able to deal with before gaining that additional experience. This might include boring dirty work, yes. But then they prove they can overcome such a struggle, and more experienced people should be expected to be able to go through it too - if there is no obviously more pleasant way to go about it.

What you say of interns regarding checks is just as true for any human out there, and the more power they are given, the more important it is to stay vigilant, no matter their level of experience. Not only will humans make errors, but power games are generally very permeable to corruptible souls.


I agree that it sounds harsh. But I worked for a company that hired interns, and this was the way managers talked about them: as cheap, unreliable labor. I once spoke with an intern hoping that they could help with a real task: using TensorFlow (it was a long time ago) to help analyze our work process history. But the company ended up putting them on menial IT tasks and they checked out mentally.


>The goal is to help people grow, so they can achieve things they would not have been able to deal with before gaining that additional experience.

You and others seem to be disagreeing with something I never said. This is 100% compatible with what I said. You don't just review and then silently correct an intern's work behind their back; the review process is part of the teaching. That doesn't really work with AI, so it wasn't explicitly part of my analogy.


What an awful way to think about other people, always assuming the very worst version of what they said.


Certainly such a message demonstrates that a significant amount of effort has been put into not falling into the kind of behavior it warns against. ;)


The goal of internships in a for-profit company is not the personal growth of the intern. This is a nice sentiment, but the function of the company is to make money, so an intern with net negative productivity doesn't make sense when the goals are quarterly financials.


Sure, companies wouldn't do anything that negatively affects their bottom line, but consider the case where an intern is a net zero - they do some free labor equal to the drag they cause by demanding their mentor's attention. Why have an intern in that case? Because long term, expanding the talent pool suppresses wages. Increasing the number of qualified candidates gives power to the employer. The "Learn to Code" campaign, along with the litany of code bootcamps, is a great example: it poses as personal growth / job training to increase the earning power of individuals, but on the other side of it is an industry that doesn't want to pay its workers six figures, so it wants to make coding a blue-collar job.

But coding didn't become a low-wage job; now we're spending GPU credits to make pull requests instead and skipping the labor altogether. Anyway, I share the parent poster's chagrin at all the comparisons of AI to an intern. If all of your attention is spent correcting the work of a GPU, the next generation of workers will never have mentors giving them attention, choking off the supply of experienced entry-level employees. So what happens in 10, 20 years? I guess anyone who actually knows how to debug computers, instead of handing the problem off to an LLM, will command extraordinary emergency-fix-it wages.


I’ve never experienced an intern who was remotely as mediocre and incapable of growth as an LLM.


I had an intern who didn’t shower. We had to have discussions about body odor in an office. AI/LLMs are an improvement in that regard. They also do better work than that kid did. At least he had rich parents.


I had a coworker who only showered once every few days after exercise, and never used soap or shampoo. He had no body odor, which could not be said about all employees, including management.

It’s that John Dewey quote from a parent post all over again.


Was he Asian? Seems like somehow Asians win the genetic lottery in the stink generation department.


Wait, you had Asmongold work for you? Tell us more! xD


I have always been told to expect an intern to be a net loss in productivity for you, and anything else is a bonus, since the point is to help them learn.


What about the coach's ability to improve their instruction?


The point of coaching a junior is so they improve their skills for next time.

What would be the point of coaching an LLM? You will just have to coach it again and again.


Coaching a junior doesn’t just improve the junior. It also tends to improve the senior.


Coaching an LLM seems unlikely to improve you meaningfully


What about it?


Isn't the point of an intern or new grad that you are training them to be useful in the future, acknowledging that for now they are a net drain on resources?


An overly eager intern with short term memory loss, sure.


And working with interns requires more work for the final output compared to doing it yourself.


For this example, let’s replace the word “intern” with “initial-stage expert” or something.

There’s a reason people invest their time with interns.


Yeah, most of us are mortal, that’s the reason.


But LLMs will not move to another company after you train them. OTOH, interns can replace mid-level engineers as they learn the ropes, in case their boss departs.


Yeah, people complaining about accuracy of AI-generated code should be examining their code review procedures. It shouldn’t matter if the code was generated by a senior employee, an intern, or an LLM wielded by either of them. If your review process isn’t catching mistakes, then the review process needs to be fixed.

This is especially true in open source where contributions aren’t limited to employees who passed a hiring screen.


This is taking what I said further than intended. I'm not saying the standard review process should catch the AI-generated mistakes. I'm saying this work is at the level of someone who can and will make plenty of stupid mistakes. It therefore needs to be thoroughly reviewed by the person using it before it is even up to the standard of a typical employee's work, which is what the normal review process generally assumes.


Yep, in the case of open source contributions, for example, the bottleneck isn't contributors producing and proposing patches; it's a maintainer deciding if the proposal has merit, whipping (or asking contributors to whip) patches into shape, making sure they integrate, etc. If contributors use generative AI to increase the load on the bottleneck, it is likely to cause a net negative effect.


This, very much. Most of the time it's not a code issue; it's a communication issue. Patches are generally small; it's the whole communication around them, until both parties have a common understanding, that takes so much time. If the contributor comes with no understanding of his own patch, that breaks the whole premise of the conversation.


I can still complain about the added workload of inaccurate code.


If 10 times more code is being created, you need 10 times as many code reviewers.


Plus the overhead of coordinating the reviewers as well!


"Corporate says the review process needs to be relaxed because its preventing our AI agents from checking in their code"



