True AI is a long way off according to top ML people like Yann LeCun
We have no idea how to design a computer that comes up with its own goals. We can teach it English, French, and Chinese, or train it to near-perfect photo identification, but the computer cannot decide on its own to learn to write a novel. We could teach it to write novels, but there would still be other tasks it can't do. Even if a computer could self-direct 99% of known possible things in the universe, if there were one thing we could prove it would never learn, then it's not AI.
It may not even be possible, and we could be stuck with humans around forever. Ugh.
I guess we'll see who's eating whose words in time.
Most "basic" AI stuff like voice and image recognition was "a long way off" for like, forever. Every "advancement" in machine learning since the 50s ended up as a dud.
Then BAM! Voice recognition is here. Not perfect but definitely good enough. Image recognition is way, way, way better than it was 10 years ago (which was only marginally better than it was 10 years previous). The current trend of machine learning is accelerating progress, not a plateau.
Just think of the Geth from Mass Effect: their creators made them to perform basic tasks and share data. As their scope widened they were tweaked to handle more work. Eventually they became self-aware before they or even their creators really understood what was going on.
Besides, why use novels as a use case? Why not, I don't know, organized crime drama? They all follow the same arc anyway: someone does something evil or greedy, rivals descend into civil war, leader(s) assassinated, vague hope for better tomorrow.
It'll be like Winston Smith in 1984, writing the same stories over and over again and changing little more than names.
> Most "basic" AI stuff like voice and image recognition was "a long way off" for like, forever. Every "advancement" in machine learning since the 50s ended up as a dud.
That was the general public's perception. Some people in the field of ML knew that advances in application-specific domains like voice and imagery would come at some point.
The difference between those application-specific solutions and general AI is that top researchers don't have any real theories about general AI.
Even the OpenAI folks have toned down the rhetoric on their about page [1]. Their goal is no longer general AI, as Musk and Altman defined it at the outset. It's to contribute to existing developments. They write,
"In the short term, we're building on recent advances in AI research and working towards the next set of breakthroughs."
I can't explain my view any better in text. I've studied ML; ML researchers are the people closest to AI, and they're all saying it isn't around the corner. They don't even have theories on how to make it truly general. Society will continue to see really great advances from ML developments, but general AI isn't going to be one of them.
It seems to me the amount of data available through the internet is what powered the recent advances in ML. Hence data companies like Google have been the ones making the progress.
The domain is also limited: Siri can't play Go, and AlphaGo can't do voice recognition.
I believe the ML researchers if they are saying AI is not close.
> It seems to me the amount of data available through the internet is what powered the recent advances in ML.
Great point. This was part of it. There were other elements, which didn't all come together until a few years ago, yet their result had been expected since at least the 90s, if not the 50s, by some very visionary and optimistic people. The elements were:

(1) having enough labeled data;

(2) fast enough hardware, not just to solve the problems, but also to test repeatedly with different initialization weights to identify good values for those parameters;

(3) good enough libraries for programming the hardware, enabling more developers to easily use GPUs for matrix operations, and distributed computing for problems that benefit from parallelization;

(4) various mathematical advances whose details I couldn't do justice, but things like rewriting the math so problems are more easily parallelizable helped a bunch;

(5) sharing of research: ML researchers shared their advances openly via places like arxiv.org, and this led to faster advances;

(6) the use of patents for defense rather than offense - I believe Google led the way here.
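As a toy illustration of point (2), that cheap compute makes it feasible to retrain the same model from many random initializations and keep the best run, here's a minimal sketch in NumPy. The toy problem and every name in it are made up for illustration, not taken from any particular library:

```python
import numpy as np

# Toy problem: fit y = 2x + 1 with gradient descent.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 2 * X + 1 + rng.normal(scale=0.1, size=100)

def train(w0, b0, lr=0.1, steps=200):
    """Gradient descent on half mean squared error from a given init."""
    w, b = w0, b0
    for _ in range(steps):
        err = w * X + b - y
        w -= lr * np.mean(err * X)  # gradient w.r.t. w
        b -= lr * np.mean(err)      # gradient w.r.t. b
    return w, b, np.mean((w * X + b - y) ** 2)

# The "fast hardware" part: run the whole training loop several times
# from different random initializations and keep the lowest-loss run.
runs = [train(rng.normal(), rng.normal()) for _ in range(5)]
w, b, loss = min(runs, key=lambda r: r[2])
print(round(w, 1), round(b, 1))  # should land close to 2.0 and 1.0
```

On a convex toy problem like this the restarts all converge to the same answer; the point of the pattern only shows up on real networks, where different initializations land in noticeably different minima.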
Now, and going forward, yes, it's believed that those with the most data have the biggest advantage. Even so, researchers are making advances and specializing in all kinds of areas of ML: problems with little, lots of, or no data; varying degrees of labeled data; different classes of algorithms; etc. The field is growing rapidly.
That's the gist. I'm definitely missing a few things.
Yann LeCun [1] is a good person to follow if you're interested in a top ML researcher's opinion about AI and the media's perception of it. He frequently posts articles with which he agrees or disagrees.