The proper use of these systems is to treat them like an intern or new grad hire. You can give them the work that none of the mid-tier or senior people want to do, thereby speeding up the team. But you will have to review their work thoroughly because there is a good chance they have no idea what they are actually doing. If you give them mission-critical work that demands accuracy or just let them have free rein without keeping an eye on them, there is a good chance you are going to regret it.
The goal is to help people grow, so they can achieve things they would not have been able to handle before gaining that additional experience. This might include boring dirty work, yes. But doing it proves they can overcome that kind of struggle, and more experienced people should be expected to be able to get through it too - if there is no obviously more pleasant way to go.
What you say about checks on interns is just as true for any human out there, and the more power they are given, the more vigilance is warranted, no matter their level of experience. Not only will humans make errors, but power games are generally very permeable to corruptible souls.
I agree that it sounds harsh. But I worked for a company that hired interns, and this was the way managers talked about them - as cheap, unreliable labor. I once spoke with an intern hoping they could help with a real task: using TensorFlow (it was a long time ago) to analyze our work process history, but the company ended up putting them on menial IT tasks and they checked out mentally.
>The goal is to help people grow, so they can achieve things they would not have been able to deal with before gaining that additional experience.
You and others seem to be disagreeing with something I never said. This is 100% compatible with what I said. You don't just review and then silently correct an intern's work behind their back; the review process is part of the teaching. That doesn't really work with AI, so it wasn't explicitly part of my analogy.
The goal of internships in a for-profit company is not the personal growth of the intern. That is a nice sentiment, but the function of the company is to make money, so an intern with net-negative productivity doesn't make sense when the goals are quarterly financials.
Sure, companies won't do anything that negatively affects their bottom line, but consider the case where an intern is a net zero - the free labor they do equals the drag they cause by demanding their mentor's attention. Why have an intern in that case? Because long term, expanding the talent pool suppresses wages. Increasing the number of qualified candidates gives power to the employer. The "Learn to Code" campaign, along with the litany of code bootcamps, is a great example: it poses as personal growth / job training to increase the earning power of individuals, but on the other side of it is an industry that doesn't want to pay its workers six figures, so it wants to make coding a blue-collar job.
But coding didn't become a low-wage job; now we're spending GPU credits to make pull requests instead and skipping the labor altogether. Anyway, I share the parent poster's chagrin at all the comparisons of AI to an intern. If all of your attention is spent correcting the work of a GPU, the next generation of workers will never have mentors giving them attention, choking off the supply of experienced entry-level employees. So what happens in 10, 20 years? I guess anyone who actually knows how to debug computers, instead of handing the problem off to an LLM, will command extraordinary emergency-fix-it wages.
I had an intern who didn’t shower. We had to have discussions about body odor in an office. AI/LLMs are an improvement in that regard. They also do better work than that kid did. At least he had rich parents.
I had a coworker who only showered once every few days after exercise, and never used soap or shampoo. He had no body odor, which could not be said about all employees, including management.
It’s that John Dewey quote from a parent post all over again.
Isn't the point of an intern or new grad that you are training them to be useful in the future, acknowledging that for now they are a net drain on resources?
But LLMs will not move to another company after you train them. OTOH, interns can replace mid-level engineers as they learn the ropes, in case their bosses depart.
Yeah, people complaining about accuracy of AI-generated code should be examining their code review procedures. It shouldn’t matter if the code was generated by a senior employee, an intern, or an LLM wielded by either of them. If your review process isn’t catching mistakes, then the review process needs to be fixed.
This is especially true in open source where contributions aren’t limited to employees who passed a hiring screen.
This is taking what I said further than intended. I'm not saying the standard review process should catch the AI-generated mistakes. I'm saying this work is at the level of someone who can and will make plenty of stupid mistakes. It therefore needs to be thoroughly reviewed by the person using it before it is even up to the standard of a typical employee's work that the normal review process generally assumes.
Yep - in the case of open source contributions, for example, the bottleneck isn't contributors producing and proposing patches; it's a maintainer deciding whether the proposal has merit, whipping (or asking contributors to whip) patches into shape, making sure everything integrates, etc. If contributors use generative AI to increase the load on that bottleneck, it is likely to have a net negative effect.
This very much. Most of the time, it's not a code issue, it's a communication issue. Patches are generally small; it's the whole communication around them, until both parties reach a common understanding, that takes so much time. If the contributor arrives with no understanding of their own patch, that breaks the whole premise of the conversation.