I keep wondering how many things like this need to happen before the other shoe drops and the ring-around-the-rosie investment structure collapses. It's become very obvious that "AI" in its current form isn't going to turn a profit, at least not in the short term.
The "They" here are the folks who are currently investing in 'selling' AI solutions to other companies. OpenAI, Microsoft, Google's Gemini, and a slew of AI-backed startups are good examples.
They don't need AI to turn a profit.
They need AI to be seen as widely adopted and "a part of life".
They need certain categories of folks (CEOs, CIOs, Boards of Directors) to see AI as valuable enough to invest in.
They need to keep up the veneer of success long enough to make their investments attractive to acquisition by Private Equity or to an IPO.
They need to juice the short-term stock price.
Their goal isn't to build a long-term business; their goal is to raise short-term returns to the point that the investors get a nice payoff, and then it becomes someone else's problem.
"How money works" YouTube channel had a nice video about this trend in particular going back to making stock buybacks legal in 1982 I think, which made CEO and execs wealth acquisition driven not by a long successful career with healthy margins and dividends, but a short-tenured local maximum pump-and-dump and a hold-the-bag game funded by endless fiat currency which is printed on the backs of other people. Other people's money , Gordon Gecko, they're not just real, they're celebrity sociopaths running us into the ground because of a fragile ego.
Oh, it's gonna turn a profit for someone, especially when the market cools down into "it's just a service making some things easier/more efficient" (and not "it will replace all the expensive experts the company needs, but never the people pushing for it in the company").
Just not whoever ends up with the bag of now far less valuable stock.
Is "Microsoft Lowers AI Software Growth Targets as Customers Resist Newer Products" really "way different" than sales quotas?
Or more to the point,
a statement from Microsoft PR spinning it as "growth targets" doesn't prove they haven't also lowered sales quotas in some divisions.
Even if the Microsoft spokesperson is being completely honest,
lower growth targets are still evidence of weakness in the AI bubble.
Yes, that's why we all do our meetings in the metaverse, and then return home on our Segways to watch some 3D TV, while the robotic pizza-making van delivers robot-made pizza.
Ultimately, you can spend what you want; if the product is bad people won’t use it.
I'm intrigued by this thought, and I'm not sure it's the right way to look at the current situation.
Think about it via a manufacturing analogy. I think we can all agree that modern CNC machining is much better for mass manufacturing than needing to hire an equivalent number of skilled craftspeople to match that throughput.
Imagine we had a massive runup of innovation in the CNC manufacturing industry all in one go. We went from CNC lathes to 2D routing tables to 3-, then 4-, then 5-axis machining, all in the span of three years. Investment was so sloshy that companies were just releasing their designs as open source, with the hope that they'd attract the best designers and engineers in the global race to create the ultimate manufacturing platform. They were imagining being able to design and manufacture the next generations of super advanced fighter jets all in one universal machine.
Now these things are great at manufacturing fully custom one-off products, and the technicians who can manipulate them to their fullest are awestruck by the power they now have at their fingertips. They can design absolutely anything they could imagine.
But you know what people really want? Not fighter jets, but cheap furniture. Do you know what it takes to make cheap furniture? Slightly customized variants of the early iterations that were released as open source. Variants that can't be monetized by the companies that spent millions on designing and releasing them.
The tech might work great, but that doesn't mean the investment pays off with the desired returns.
People keep comparing LLMs (and AI, I suppose) to specialised machines like the printing press or the harvester or something, and often throwing in a Luddite comparison.
The glaring difference is that specialised machines, usually invented to do an existing task better, faster or more safely, do indeed revolutionise the world. As you pointed out, they perform necessary functions better, faster, and / or more safely.
Note that Segways, that weird juice machine, etc., were not built to fill a gap or to perform a task better, faster, or more safely. Neither were pet rocks or see-through phones. Nobody was sitting around before the Metaverse going "man, I wish Minecraft could be pre-made and corporate with my work colleagues", and when these things launched the sales pitches were all about "look at the awesome things this tech can do, isn't it great?!", rather than "look at the awesome things this tech will allow you / help you to do, aren't they great?!".
LLMs are really impressive tech. So are Segways and those colour-changing t-shirts we had in the 80s. They looked awesome, the tech was awesome, and there were genuine practical applications for them, somewhere.
But they do not allow the average person to do anything awesome. They don't make arduous tasks faster, better, or safer without heavily sacrificing quality, privacy, and sanity. They do not fill a gap in anybody's life.
That's the difference.
Most AI is currently a really cool technology that can do a bunch of things, and it's very exciting to look at, just like the Segway and the Metaverse. And, really, an ant, or a Furby.
They are not going to revolutionise anything, because they weren't built to. They weren't built to summarise your emails or to improve your coding (there are many pieces of software that were built to assist with coding, and they are pretty good) or to perform any arduous or dangerous tasks.
They were built to experiment, to push boundaries, to impress, and to sell.
So yes, I 100% agree with you, and I'd take your point a little further: it's not even that LLMs are too high tech and fancy for most people. I don't even think that they're products. They are components, or add-ons, being sold as products, like extension power cables 50 years before the invention of the plug socket, or flexible silicone phone cases being sold in the era of landlines and phone boxes.
And I'm legit still baffled that so many people seem to have jobs that revolve around reading and writing emails or producing boilerplate code, who aren't able to do those things confidently, yet aren't just looking for a new job.
Like, it's a tough market, but if you haven't learned to skim-read an email by now, do yourself a favour and find a job that doesn't involve so much skim reading of emails. I don't get it.
AKA "too big to fail". The interests of major and early AI capital owners will be prioritized over those of the later capital and non-capital-owning public.
> Things that are too big to fail can end up being nationalized when they do fail.
And if that happens, will the taxpayer be on the hook to make investors whole? We shouldn't be. If it is nationalized, it needs to be done at a small fraction of the private investment.
When the government takes your property with eminent domain, they don't give you what you've put into it or what you owe, they give you the market value for the property.
If one or more of the AI companies fail, the government would pay what it deems the market value for the graphics cards, warehouses, and standing desks, and that will surely be way less than what the investors have put in.