What you end up with is a dozen people owning all the wealth and everyone else owning nothing. The robots stop doing anything because no one is buying anything, and the economic system the world uses to operate collapses completely. Mass riots, hunger wars, political upheaval, World War 3. Nuke the opposition before they nuke you.
That’s one scenario, but there are others. There are lots of open-weight models. Why wouldn’t ownership of AI end up being widely distributed? Maybe it's more like solar panels than nuclear power plants?
In terms of quality of life, much/most of the value of intelligence is in how it lets you affect the physical world. For most knowledge workers, that takes the form of using intelligence to increase how productively some physical asset can be exploited. The owner of the asset gives some of the money/surplus earned to the knowledge worker, who can then use that money to effect change in the physical world by paying for food, labor, shelter, etc.
If the physical asset owner can replace me with a brain in a jar, it doesn't really help me that I have my own brain in a jar. It can't think food/shelter into existence for me.
If AI gets to the point where human knowledge is obsolete, and if politics don't shift to protect the former workers, I don't think widespread availability of AI is saving those who don't have control over substantial physical assets.
What are the barriers to entry? Seems like there are lots of AI startups out there?
There is a rush to build data centers, so it seems that hardware is a bottleneck, and maybe that will remain the trend. But another scenario is that it stops abruptly when capacity catches up. I'm wondering why this doesn't become a race to the bottom?
Is the suggestion that AGI (or even current AI) lowers the barrier to entry for making a company so much that regular people can just create a company in order to make money (and then buy food/shelter)? If so, I think there are a lot of problems with that:
1) It doesn't solve the problem of obtaining physical capital. So you're basically limited to just software companies.
2) If the barrier to entry to creating a software product that can form the basis of a company is so low that a single person can do it, why would other companies (the ones with money and physical capital) buy your product instead of just telling their GPT-N to create them the same software?
3) Every other newly-unemployed person is going to have the same idea. Having everyone be a solo-founder of a software company really doesn't seem viable, even if we grant that it could be viable for a single person in a world where everyone has a GPT-N that can easily replicate the company's software.
On a side note, one niche where I think a relatively small number of AI-enabled solo founders will do exceptionally well is video games. How well a video game does depends a lot on how fun it is to humans and on the taste of the designer. I'm skeptical that AIs will have good taste in video game design, and even if they do, I think it would be tough for them to evaluate how fun a mechanic would be for a person.
Your potato RTX that uses a finite amount of power, is already paid for, and does things that you know are useful, versus your supercomputer cluster and circular funding economy that uses infinite power, is 10x more overleveraged than the dot-com bubble, and does things that you think might be useful someday. Challenge:
Can't find it now, but I read an article about someone adding AI help to a large appliance. Can't assume WiFi is set up, so it has to work offline. Frontier model not required.
I don't think we will be building these things ourselves, but I think there will still be products you can just buy and then they're yours.
It would be the opposite of the "Internet of things" trend though.
I don't think it's a short term thing, but in the short term, yes you're exactly right.
But what are the minimum inputs necessary to build a self-sustaining robotic workforce of machines that can (1) produce more power, (2) produce more robots, and (3) produce food? The specifics of what exactly is necessary--which inputs, which production goals--are debatable. But imagine some point where a billionaire like Elon has the minimum covered to keep a mini SpaceX running, or a mini Optimus factory running, a mini SolarCity running.
At this point, it's perfectly acceptable to crash the economy and leave everyone else to their own devices. If they survive, fine. If they don't, also fine. The minimum kernel necessary to let the best of mankind march off and explore the solar system is secure.
Obviously, this is an extreme, and the whole trajectory is differential. But in general, if I were a billionaire, I'd be thinking "8 billion people is a whole lot of mouths to feed, and a whole lot of carbon footprint to worry about. Is 8 billion people (most of whom lack a solid education) a luxury liability?"
I really just don't believe that most people are going to make it to "the singularity" if there even is such a thing. Just more of the same of humanity: barbaric bullshit, one group of humans trying to control another group of humans.
This is the only outcome any economic model predicts: complete market collapse. It will scream before it collapses (meaning it will shoot to the moon, then completely collapse). Way worse than the Great Depression, because instead of 26% unemployment, it will be 80%.
What makes you assume the AI companies actually want to create a superintelligence they can't control? Altman has stated as much. Musk definitely wants to remain in power.
Not yet, I agree, but who is to say they couldn't?
Limiting life to cell-based biology is a somewhat lousy definition, drawn from the only example we know. I prefer the generalized definition in "What is Life?" by Erwin Schrödinger, which currently draws the same line (at cellular biology) but could accommodate other forms of life too.
I sometimes wonder what our governments would do if one of the businesses in their jurisdictions were to achieve AGI or some other such destabilizing technology. If it were truly disruptive, why would those governments respect the ownership of such property and not seize it - toward whatever end authority desires? These businesses have little defense against that beyond lobbying; they simply trust that government will protect their operations.
AGI is an endgame scenario. That is "winning". If a business wins it, then the government may not remain subservient to that business, no matter what free-market conditions it had preserved beforehand, as long as the government has the power to act.
Economies of Scale have been such a huge influence on the last ~300 years of industrial change. In 1725, people sat at home hand-crafting things to sell to neighbors. In 1825, capitalists were able to open factories that hired people. By 1925, those products were built in a massive factory that employed the entire town and were shipped all over the country on railroads. By 2025, factories might be partially automated while hiring tens of thousands of people, cost billions to build, and distribute their products globally. The same trend applies to knowledge work as well, despite the rise of personal computing.
Companies are spending hundreds of millions of dollars on training AI models, why wouldn’t they expect to own the reward from that investment? These models are best run on $100k+ fleets of power hungry, interconnected GPUs, just like factory equipment vs a hand loom.
Open weight models are a political and marketing tool. They’re not being opened up out of kindness, or because “data wants to be free”. AI firms open models to try and destabilize American companies by “dumping”, and AI firms open models as a way to incentivize companies who don’t like closed-source models to buy their hosted services.
I think a lot of people will be okay with paying $20 a month if they're getting value out of it, but it seems like you could just buy an AI subscription from someone else if you're dissatisfied or it's a bit cheaper?
This is not like cell service or your home ISP; there are more choices. Not seeing where the lock-in comes from.
If robots can do all industrial labor, including making duplicate robots, keeping robots the exclusive property of a few rich people is like trying to prevent poor people from copying Hollywood movies. Most of the world doesn't live under the laws of the Anglosphere. The BRICS won't care about American laws regarding robots and AI if it proves more advantageous to just clone the technology without regard for rights/payment.
I don't see how owning a robot helps me with obtaining the essentials of life in this scenario. There's no reason for a corporation to hire my robot if it has its own robots and can make/power them more efficiently with economy of scale. I can't rent it out to other people if they also have their own robots. If I already own a farm/house (and solar panels to recharge the robots) I guess it can keep me alive. But for most people a robot isn't going to be able to manufacture food and shelter for them out of nothing.
Then government comes in and takes over. In the end we will end up with communism. Communism couldn't compete with the free market, but in a world of six companies it can.
> What you end up with is a dozen people owning all the wealth and everyone else owning nothing
Only if the socialists win. Capitalism operates on a completely different principle: people CREATE wealth and own everything they have created. Therefore, AI cannot reduce their wealth in any way, because AI does not impair people's ability to create wealth.