This article is a complete waste. The title implies that it's about AI, but it turns out to be a portrait of a man's life: a PR piece. Not only that, but free energy minimization has nothing to do with intelligence beyond vaguely describing one of its most obvious and superficial characteristics.
—-
AI is the most important issue in the world. True general AI is an existential threat to humankind. The economics of general AI lead to the extinction of humans no matter how you slice it. Killer robots are just the tip of the iceberg.
General AI can be thought of as the keystone in the gateway of automation: it allows the automation of the human mind itself. The AI we have now cannot do this, and better ML algorithms most likely never will. So people have a false and dangerous sense of security.
ML experts eagerly correct people like me with a vague notion and a wave of the hand: AI won't be a problem for a long time. As I said, ML is not a threat (as an automation of human thought), because ML has nothing to do with human thought. ML experts know nothing about human thought either, so a complete layman is just as qualified to speculate about general AI as an ML expert is, or a person with a physics degree, or what have you. You might say that laymen tend to be less informed, or some variation on that, but that's beside the point.
There are many reasons to worry about the creation of general AI. First, general AI is much broader than it is given credit for: sentience has many more forms than the human mind, so the attack surface is wider than usually thought. People imagine it as finding the human mind like a needle in a haystack. It is easier than that: the algorithm for the kernel of intelligence is probably much simpler than one would initially imagine, and we don't know when we might stumble on it. Even if I'm wrong and it is relatively complex, we will still discover it if we try, and we are trying. As I said, ML isn't the main threat for general AI; I think brain research is currently the biggest one. The resolution of MRI scanning and probing is increasing, as is the computational power to make sense of the readings and to test the algorithms we discover. I already see people commenting that computers won't be powerful enough to test such algorithms: you won't need a silicon replica of the brain to test them. I guarantee it.
If general AI were to come into existence, it would be able to do any task better than a human. Any group or organization that uses AI for a task will overtake anyone who does not. It is a ratchet effect: each application of AI spreads across the world like a disease and never goes away, and soon everything is done with AI. A market economy's decentralized nature makes it a powder keg for AI in this respect, because each node in the market is selfish and will adopt AI to gain a short-term advantage, and once one node does it, all nodes will. This behaviour has historically fueled the success of markets, but, as we have seen with global warming, it does not always work out well.
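The ratchet dynamic can be sketched as a toy simulation (the firm counts, payoffs, and update rule are my own illustration, not from any real study): as long as adopting gives any strictly positive edge, one selfish defector is enough to pull every other node in, and the process never reverses.

```python
def simulate_adoption(n_firms=20, advantage=1.1, rounds=100):
    """Toy 'ratchet' model of AI adoption in a decentralized market.

    Each round, one non-adopting firm compares its payoff to the best
    adopter's payoff and switches if adopting pays strictly more.
    Adoption is irreversible, so a single early adopter triggers a
    cascade whenever advantage > 1.
    """
    adopted = [False] * n_firms
    adopted[0] = True  # one selfish node starts the ratchet
    for _ in range(rounds):
        payoffs = [advantage if a else 1.0 for a in adopted]
        best_adopter = max(p for p, a in zip(payoffs, adopted) if a)
        for i, a in enumerate(adopted):
            # switching is individually rational for any short-term gain
            if not a and best_adopter > payoffs[i]:
                adopted[i] = True
                break  # even one switch per round still spreads to all
    return sum(adopted)

print(simulate_adoption())               # everyone adopts: 20
print(simulate_adoption(advantage=1.0))  # no edge, no cascade: 1
```

The point of the sketch is that the outcome does not depend on the size of the advantage, only on its sign: any `advantage > 1.0` ends with universal adoption.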
The key here is that human life has value only because humans offer an extremely vital and valuable service that cannot be found anywhere else. Even so, most humans on this planet do not enjoy a high quality of life. It is insane to imagine that once our only bargaining chip is ripped from our collective hands, the number of people with a high standard of living will go up instead of down. There will be mass unemployment. Humans will be cast aside. And that is all assuming that robots are never made to maliciously target human life.
People say that automation pushes people into better, new jobs. In reality, jobs are not an inexhaustible resource; they just seem to be.
The only solution, in one form or another, is the prohibition of AI. I hope that someone reading this will agree with me or suggest another solution. I am interested in forming some kind of group to prevent all this from happening.
I agree with most of what you state, but the prohibition of AI is impossible. How could you stop nations from researching it secretly? How could you stop the Amazons and Baidus?