Hacker News

> Kids are going to be really screwed when the AI output converges into more Markov-chain-like bullshit as it self-feeds on its own output, creating something like a big but pompous Megahal/Hailo clone.

This isn't going to happen. The literature on model collapse suggests it occurs when a model is trained on majority-synthetic data, which is not how anyone is training models. Even if they were, do you think they're going to ship something that performs noticeably worse than its predecessor?



> do you think they're going to ship something that performs noticeably worse than its predecessor?

Yes. Capitalist software vendors do that all the time. Google search has been getting continually worse over the decades. Windows is more bloated than ever. AI is going to be no different.


You said it: training. But later, most of the input data will be fed back in from the AI's own output.

Because end users will consume far more AI output than they will feed it fresh material from sources close to the original.

Disasters will happen, just wait.


> do you think they're going to ship something that performs noticeably worse than its predecessor?

This happens all the time. Once the first-generation devs are gone, releases tend to alternate: a decline from the existing version, then an improvement that mostly fixes what the prior release made worse. e.g. Windows XP -> Vista -> 7 -> 8 -> 10

The nature of AI, though, may make its failings harder to spot immediately, which could result in an irreversible inflection point.


AIs are not OSes. Once the training data comes mostly from the AI itself instead of human-curated sources, what you'll get over time is the same thing the Megahal/Hailo chatbots parroted a couple of decades ago.
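The degeneration described above can be shown with a toy model. This is a minimal sketch (not any real system's training loop): a word-bigram Markov chain, in the spirit of Megahal-style chatbots, that is repeatedly retrained on its own output. The corpus string and generation count are made up for illustration; the point is that vocabulary diversity can only shrink, never grow, once the model feeds on itself.

```python
# Toy illustration of self-feeding collapse: retrain a bigram Markov
# chain on its own generated text and watch vocabulary diversity fall.
import random
from collections import defaultdict

random.seed(0)  # deterministic sampling for the demo

def train(text):
    """Build a bigram table: word -> list of observed next words."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n=200):
    """Sample a chain of up to n words by following the bigram table."""
    out = [start]
    for _ in range(n - 1):
        nxt = table.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

# Hypothetical human-written seed corpus.
corpus = ("the cat sat on the mat while the dog ran over the hill and "
          "the bird sang in the tree as the sun set over the quiet town")

vocab_sizes = []
text = corpus
for gen in range(5):
    table = train(text)
    text = generate(table, "the")  # next generation trains on this output
    vocab_sizes.append(len(set(text.split())))

print(vocab_sizes)  # vocabulary per generation; non-increasing over time
```

Because each generation can only emit words it saw in the previous generation's output, the vocabulary is monotonically non-increasing, and in practice it usually shrinks as rarer words drop out of the sampled text: a crude analogue of the tail-loss described in the model-collapse literature.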



