My cynical AI opinion is that if it truly was revolutionary, we would be seeing private companies or governments use it internally to massively boost their productivity or achieve otherwise impossible feats. Instead, all we're seeing are consumer products or half-baked integrations that are another way to further centralize data collection.
Talk of all-powerful, malevolent AI, or requests to halt AI development, just sounds like bait for regulatory capture to me. If AI research or use is deemed dangerous, it becomes even harder for startups or individuals to compete with established companies.
Also, I am not concerned about an all-powerful AI in the slightest. Humanity is excellent at oppressing others, and I have no doubt we'd be equally good at oppressing a true AI as well.
"we would be seeing private companies or governments use it internally "
What world are you living in where this isn't happening? Every private company that can use it is using it; Microsoft themselves must be using Copilot. Governments are lining up to train their own LLMs.
Commercial and consumer use are not mutually exclusive, in any case.
Also "Humanity is excellent at oppressing others, and I have no doubt we'd be equally good at oppressing a true AI as well."
If you actually read history a bit carefully, you'll see this frequently works out badly for the oppressors. The Romans hired and abused their Germanic mercenaries, thinking that with their centuries of political experience they could let the Germanics do all the hard fighting while being paid little.
The contemporary Jin dynasty in China thought the same way: just hire barbarian mercenaries to do all the hard work in their civil wars. Those illiterate barbarians, who up to that point had never achieved much in China, surely wouldn't be a threat.
The empire may no longer be around, but the UK and the Commonwealth still are. They are thriving. France is still around and, despite the protests, is still a wealthy nation. Belgium is still going strong. The US is the richest country in the world. Turkey is still kicking. Despite Nanjing, Japan's economy is to die for.
The former empires are shadows of their former selves and had to undergo significant reforms. Nobody knows or cares who the king of Belgium is anymore.
That was implied, I think, though "achieving otherwise impossible feats" is a bit of a stretch as an expectation.
Private enterprise is certainly leveraging LLMs. OpenAI APIs on Azure are very hot right now, though access is limited to companies that already have contracts with Azure.
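For the curious, this is roughly what calling those Azure-hosted OpenAI APIs looked like at the time, using the openai Python SDK (pre-1.0) in Azure mode; the endpoint, key, and deployment name below are placeholders, not real values:

    import openai

    # Azure mode of the openai SDK (pre-1.0): the endpoint and key come
    # from your Azure OpenAI resource, and you address a *deployment*
    # name you chose when deploying the model, not the raw model name.
    openai.api_type = "azure"
    openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
    openai.api_version = "2023-03-15-preview"
    openai.api_key = "YOUR-KEY"                                  # placeholder

    resp = openai.ChatCompletion.create(
        engine="my-gpt4-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": "Summarise this clause..."}],
    )
    print(resp["choices"][0]["message"]["content"])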
Say, hypothetically, that it were a perfect duplicate of a human brain. That would certainly be called a truly revolutionary accomplishment, but that duplicate wouldn't be expected to massively boost productivity any more than adding another human would.
If a GPT model (plus an associated cheap software wrapper like LangChain etc.) was hypothetically as good/productive as a qualified human engaged in remote work, that would massively boost productivity. The reason is that no matter how much it costs to run such a model at inference, it isn't going to cost anywhere near as much as the ~million dollars required from society to raise a human infant until they're capable of that same level of productivity (in the developed world), plus the ongoing cost of wages. What that means is that once you find a "good worker" model, you don't need to go on an expensive hiring spree; all you need to do is change the number of instances you're running from 1 to whatever number is optimal for you. You could employ thousands of new people at a guaranteed level of quality within a day.
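To make that economics point concrete, here's a rough back-of-envelope sketch; every number in it is an illustrative assumption, not a real price:

    # Back-of-envelope: skilled human worker vs. a hypothetical
    # human-equivalent model instance. All figures are assumptions
    # chosen for illustration only.
    HUMAN_UPBRINGING_COST = 1_000_000  # USD, the "~million dollars" above
    HUMAN_ANNUAL_WAGE = 100_000        # USD/year, assumed skilled wage

    INFERENCE_COST_PER_HOUR = 10.0     # USD/hour, assumed hardware + power
    HOURS_PER_WORK_YEAR = 2_000

    model_work_year = INFERENCE_COST_PER_HOUR * HOURS_PER_WORK_YEAR  # $20,000

    print(f"one model work-year:  ${model_work_year:,.0f}")
    print(f"one human work-year:  ${HUMAN_ANNUAL_WAGE:,.0f}"
          f" (plus ~${HUMAN_UPBRINGING_COST:,.0f} sunk before day one)")

    # The scaling step is the real difference: a known-good model
    # scales by changing one number, not by running 1,000 hires.
    instances = 1_000
    print(f"{instances} model instances: ${model_work_year * instances:,.0f}/year")

Even if the assumed per-hour inference cost is off by an order of magnitude, the scaling argument is unchanged: replication is a multiplication, not a hiring pipeline.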
From the point of view of the organisation building said agents, this would get a lot more extreme. You have all of the above benefits, except you're only paying for electricity and amortised hardware costs rather than someone else's profit. But you can also work on improving the hypothetical human-level agents. If you can optimise their compute costs at runtime, and we're accepting the premise that they're as good as a qualified human, then you can get superhuman performance simply by running them faster. Spin up a group of however many professors and engineers you need, give them four days of real time that's equivalent to a year of subjective time: that's superhuman performance. How long did it take to go from GPT-3 to GPT-4? If these agents are truly human-equivalent for the purposes of work, you can set them to building GPT-5, or whatever the next most powerful model is, as fast as you can run them. I suspect the real limit would be how fast you can download the data (or build the architecture required to gather more), not how fast you can categorise and select it. Once your ducks are in a row with regards to the data, you have an even better foundational platform to do the same thing again, but with smarter agents this time.

If they're human-level in performance, you could also task them with generating data for the model. You could do this directly (e.g. training on the huge amount of text they produce as they work), or you could task them with building and designing consumer products that increase their access to data: for example, a cheap robot that can help people in their home like a robot vacuum cleaner, or something like Tesla's FSD, or a home speaker assistant. Once the model is multi-modal like GPT-4 is, you can get data by acquiring novel images rather than being restricted to just text. Maybe GPT-5 isn't just text and images but also audio, so you can increase the ways you acquire data even further. If they're genuinely at human-level performance, none of this should be impossible. In our current world a major limiting factor on productivity is that skilled human labour is rare; when you can copy-paste a known-good skilled "human" labourer, that becomes completely inverted.
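As a sanity check on the subjective-time arithmetic above (the speedup factor is a pure assumption, implied by the four-days-to-a-year example rather than measured anywhere):

    # "Four days of real time equivalent to a year of subjective time"
    # implies roughly a 91x speedup. Assumed, not measured.
    real_days = 4
    speedup = 365 / real_days              # ~91x faster than wall-clock
    subjective_days = real_days * speedup  # = 365
    print(f"{real_days} wall-clock days at {speedup:.0f}x "
          f"≈ {subjective_days:.0f} subjective days")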
Summing up: if we could get them to reliable human-level performance, that would lead to a massive productivity boost, because it would make the cost of skilled human labour and ingenuity far, far lower while increasing supply to effectively "infinite, limited only by your ability to pay to run them". Agents like these are not at that stage yet; they've still got a significant way to go. But if they get to human-equivalent productivity, that isn't just like adding one more high-quality research scientist or engineer, it's adding millions of them, and that's a massive productivity boost.
> My cynical AI opinion is that if it truly was revolutionary, we would be seeing private companies or governments use it internally to massively boost their productivity
ChatGPT (with GPT-3.5) is, even counting from the preview, only four months old; the paper on the ReAct pattern, a key approach to doing productive work with LLMs, is about a month older.
There’s a lot of work underway in many organizations on exactly what you’re describing, but it takes time to do.
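For reference, the ReAct pattern mentioned above boils down to a loop like the following; a minimal sketch, not the paper's exact prompt format, where `llm` and `tools` are placeholder callables rather than any real API:

    import re

    def react(question, llm, tools, max_steps=5):
        # llm: any callable prompt -> completion string (placeholder).
        # tools: dict of tool names to callables, e.g. {"search": my_search}.
        # The model interleaves free-form "Thought" text with
        # "Action: tool[input]" lines; we run the tool and feed the
        # result back as an "Observation" until it emits a final answer.
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            step = llm(transcript)
            transcript += step + "\n"
            if "Final Answer:" in step:
                return step.split("Final Answer:")[-1].strip()
            m = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
            if not m:
                continue  # no action parsed; let the model think again
            observation = tools[m.group(1)](m.group(2))  # run the chosen tool
            transcript += f"Observation: {observation}\n"
        return None

The point is that reasoning and tool use are interleaved in one transcript, which is what makes the pattern useful for actual work rather than just chat.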
I wouldn’t call this cynical; I’d call it incredibly naive, notwithstanding the fact that we are still feeling out what LLMs can and cannot do. When are established players quick to move? When does disruptive tech NOT have to fight back against organisations composed of individuals who are often incentivised against its success? When does a new player topple the giants overnight? Pretty much never.
Revisit this comment in 2 months, 6 months, a year. It’s mainly a matter of when OpenAI allows widespread access to the GPT-4 API and developers have time to build stuff.
Also, shockingly, go back and look at the history of other game-changing innovations: cars, microcomputers, even electricity. It took companies (and consumers) time to figure out how to use them. This will be no different.
They are. But you can’t expect something to be released and, six months later, we’ve completely exhausted all possible value from it. At most big companies it takes longer than that to agree to even use a technology, and another six months to pass it through infosec, compliance, legal, fifty tech managers, etc. But as a first-party source I can tell you these things are happening, pervasively, at every company, everywhere.