A lot of people commenting are referring to hobbyist art, like TFA's piano tuning, or audio cables, or sewing equestrian tack.

The point (that the article alludes to but definitely does not spell out) is that, with the advent of general AI, this style of artistry will die in EVERY field if we allow it to: not just art and media, but engineering, technology, and policy too.

This to me is a HUGE problem with the introduction of these consumer-ready AIs. You may be able to say "agh, I'm an engineer and I wouldn't let this happen in my project!" but some places just want their dang power plants, whether or not they have a host of lifelong nuclear engineers in their region. I worry about a power plant that develops a fault when the only true experts are few and far between.



People can learn things when they need to solve problems, forging "expert-enough people" out of otherwise lower-skilled folks with incredible speed. This only happens in the face of problems that need solving, and so far AI seems to be amazing at generating "problems that need solving" by being just good enough to get going but not so good as to solve all the unforeseen or unknown problems you encounter once you are going. I don't think this pattern will result in a world of only upside or purely better solutions than before AI hit the scene, but I also don't feel like it's quite as apocalyptic as many fear.


This is a good point; it helps that we have a nuanced and realistic (as opposed to pessimistic) view of the situation. As someone else pointed out regarding the Dunning-Kruger effect, the less knowledgeable we are, the worse we are at gauging our own level of knowledge. And as your comment suggests, not understanding the "unforeseen or unknown problems you encounter once you are going" could be catastrophic in certain fields.


I don't think you need AI for this.

The title is actually what Dunning & Kruger found in their paper. Poor performers can't tell the difference between a poor performance and a good performance; if they knew how bad they were, they could tell good from bad. And of course, because of how quantiles work, 50% of people will always be in the bottom half.

(The classic usage of the Dunning-Kruger effect to mean that poor performers rate themselves higher than good performers isn't what the paper is about; you're welcome to read it yourself [1].)

[1]: https://www.hep.ucl.ac.uk/~eo/stuff/unskilled%20and%20unawar...
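
A minimal sketch of that quantile point (plain Python, synthetic numbers, nothing from the paper): if self-estimates are just true skill plus heavy noise, which is roughly what "can't tell good from bad" amounts to, the bottom quartile ends up rating itself near average and the top quartile underrates itself, close to the shape of the paper's famous plot:

    import random

    random.seed(0)
    n = 20_000
    # Synthetic "true skill" percentiles, uniform on 0..100 (illustrative only).
    skill = [random.uniform(0, 100) for _ in range(n)]
    # Assumed model: self-estimate = true skill + heavy noise, clamped to 0..100.
    # The noise stands in for poor performers being unable to judge performance.
    est = [min(100.0, max(0.0, s + random.gauss(0, 30))) for s in skill]

    # Average self-estimate within each quartile of actual skill.
    pairs = sorted(zip(skill, est))
    for q in range(4):
        chunk = pairs[q * n // 4:(q + 1) * n // 4]
        avg_s = sum(s for s, _ in chunk) / len(chunk)
        avg_e = sum(e for _, e in chunk) / len(chunk)
        print(f"quartile {q + 1}: actual {avg_s:4.1f}, self-estimate {avg_e:4.1f}")

In this toy model the bottom quartile comes out overestimating itself and the top quartile underestimating itself, with no psychology involved beyond noisy self-assessment.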


What do you think makes AI special? Technological advances have been killing artisans in every field for as long as we've had them.


Even with the most advanced pre-genAI technology, making sounds and images is a manual craft that requires you to develop not just technical skill and tool knowledge, but the concepts to define and articulate what you actually want: intentionality and vision. The “what you want” becomes more complex and nuanced just as much as the “what you can”, by necessity.

GenAI not only fills in the blanks for “what you want” from a vague text prompt, using the lowest common denominator of its training data; it also provides no means or motive to develop intentionality. In its current form, it’s a cognitive dead end, endlessly providing a statistically smudged shadow of the detailed knowledge and conceptual space that gave birth to the data it was trained on. A picture is worth a thousand words, and today’s cutting-edge models still forget most of them.

It could get better. But it could also stagnate in a very dystopian way if it discourages learning traditional art without becoming a real alternative that continues to produce artists and not just consumers.


These new AI tools have the potential to be more personally comprehensive, and more widely adopted, than slavery and other forms of servitude. The loss of skill (or lack of its development) that the parent commenter is referring to has been discussed everywhere from Hegel's master-slave dialectic to Idiocracy. Individuals and cultures that use these tools to replace the innate capacities they have developed become qualitatively less sophisticated, unless they are pressured by some other external need or by their own pursuit of self-mastery.

I suspect we will see a mix of all these situations across the fields and cultures that adopt these new AI tools, and we will need to focus not only on making the tools better, but on pressuring ourselves to be better. We replaced physical labor with previous tools, but have only partly compensated for that lack of physical development through gym culture, sports, and back-to-the-earth activities like hiking (personal transportation being one of the first things to be replaced by new tools like domesticated animals, bikes, and cars).

If it happens too fast, or we face other catastrophes, we might need to bring back non-military conscription focused on the environment (as in William James's lecture "The Moral Equivalent of War"), in order to prop up the nation state and the technological culture it has provided the foundation for.


This is totally true, but the rate of this death has exploded with the advent of computers, and is accelerating as LLMs become more sophisticated.

At the very least, it seems to me that things are finally happening fast enough for humans to notice a trend. I think I read the phrase "faster than generational rate", meaning that things change at a pace we can observe within our lifetimes (the advent of computers is faster than generational, but agricultural advances really weren't).


> with the advent of general AI

which is still nowhere near reality


It's still worth discussing what the effects may be once it does become available.

It's hard to know exactly when. We're bad at estimating.

"progress is always slower than you think in the short term, and faster than you think in the long term."


I agree with this, but I also think that while we're not at 100% general AI, we are making a gradual, continuous approach towards it. The problems start now, and we have the opportunity to solve them while they're not yet as serious.


I almost agree with you; I'd grant that we do not currently have GAI, but I think we are near it. And regardless of how close we are, we currently have a lesser version of the same thing, which means a lesser version of the same problem.




