> I wonder if in 10 years people will be stuck debugging Rube-Goldberg machines composed of LLM api calls doing stuff that if-statements can do, probably cobbled together with actual if-statements
Sounds like an extension of https://en.wikipedia.org/wiki/Wirth%27s_law.
How many times have I done some simple arithmetic by typing it into my browser's address bar and reading off the Google calculator result, when a generation ago I would have plugged it into a calculator on my desk (or done it in my head, for that matter...)? I would be entirely unsurprised to hear that in another generation we're using monstrously complicated "AI" systems to perform tasks that could be done far more simply and efficiently, just because it's convenient.
My son regularly uses Alexa as a calculator, and also asks Alexa all kinds of things without a thought as to whether his question triggers a simple pattern match and gets fed to a specialised process, triggers a web search, or is processed some other way. It's all conversational anyway. So the day Amazon plugs an LLM into it, it's not a given he'll even notice the difference for some time.