It is a big mistake to think that most of computability theory, including Gödel’s theorem, applies to AI. People start off on the wrong foot by talking about AI “algorithms.” The term applies more correctly to concepts like gradient descent. But the inference performed by the resulting neural nets is not an algorithm: it is not a defined sequence of operations that produces a defined result. It is better described as a heuristic, a procedure that approximates a correct result but provides no mathematical guarantees.
Another aspect of ANNs that shows Gödel doesn’t apply is that they are not formal systems. A formal system is a collection of defined operations. The building blocks of ANNs could perhaps be assembled into a formal system; Petri nets, for example, have been demonstrated to be computationally equivalent to Turing machines. But that is really a property of the implementation, not of the system itself. It’s the same as your PC implementing a formal system, its instruction set, in order to run a heuristic computation. A formal system can implement informal systems.
I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
AI is most definitely an algorithm. It runs on a computer; what else could it be? Humans didn't create the algorithm directly, but it certainly exists within the machine. The computer takes an input, performs a series of computing operations on it, and spits out a result. That is an algorithm.
As for humans, there is no way you can look at the behavior of a human and know for certain that it is not coming from a Turing machine. With a large enough machine, you could simulate any behavior you want, even behavior that would look, on first observation, like it could not be produced by a Turing machine; this is a form of the halting problem. Any observation that convinces you the behavior is NOT coming from a Turing machine could itself be programmed to be the output of a Turing machine.
> With a large enough machine, you could simulate any behavior you want
This is not exactly true, depending on what you mean by behavior. There are mathematical functions we know for a fact are not computable by a Turing machine, no matter how large. So a system that "behaves" like those functions couldn't be simulated by a TM. However, it's unclear whether such a system could actually exist in physical reality, which gets right back to the question of whether thinking is beyond Turing computability or not.
> But the inferences of the resulting neural nets is not an algorithm.
Incorrect.
The comment above confuses some concepts.
Perhaps this will help: consider a PRNG implemented in software. It is an algorithm. The question of the utility of a PRNG (or any algorithm) is a separate thing.
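To make the PRNG point concrete, here is a minimal sketch (the multiplier and increment are the classic Numerical Recipes LCG constants, chosen purely for illustration):

```python
# A minimal linear congruential PRNG. The point is that the procedure is
# a fixed, defined sequence of operations -- an algorithm -- regardless
# of how "random" or useful its output happens to be.

def lcg(seed, n):
    """Generate n pseudo-random 32-bit values from a given seed."""
    values = []
    state = seed
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32
        values.append(state)
    return values

# Same seed in, same "random" numbers out -- fully deterministic.
assert lcg(42, 5) == lcg(42, 5)
```

Whether the output is useful as randomness is, as the comment says, a separate question from whether the procedure is an algorithm.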
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
1. If an outside-the-system observer cannot detect any flaws in what a RNG outputs, does the outsider have any basis for claiming a lack of randomness? Practically speaking, randomness is a matter of prediction based on what you know.
2. AI just means “non-human” intelligence. An AI system can (of course) incorporate various sources of entropy, including sensors. This is already commonly done.
On one level, yes you’re right. Computing weights and propagating values through an ANN is well defined and very algorithmic.
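To illustrate that “well defined and very algorithmic” level, a single-neuron forward pass is nothing but fixed arithmetic (the weights, bias, and inputs below are invented for the sketch):

```python
# A toy forward pass: propagating values through one neuron.
# Inference here is a fixed, well-defined sequence of arithmetic
# operations, whatever we think the output "means".
import math

def forward(weights, bias, inputs):
    """Compute one neuron's output: sigmoid(w . x + b)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# The same inputs always produce the same output.
y1 = forward([0.5, -0.3], 0.1, [1.0, 2.0])
y2 = forward([0.5, -0.3], 0.1, [1.0, 2.0])
assert y1 == y2
assert 0.0 < y1 < 1.0
```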
On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
I suspect that at that level you can think of it as an algorithm with unreliable outputs. I don’t know what that framing gains over thinking of it as non-algorithmic, just a heuristic approximation.
"Heuristic" and "algorithmic" are not antipodes. A heuristic is a category of algorithm, specifically one that returns an approximate or probabilistic result. An example of a widely recognized algorithm that is also a heuristic is the Miller-Rabin primality test.
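A sketch of that example, since it makes the point nicely: Miller-Rabin is unambiguously an algorithm (a fixed procedure), yet its “probably prime” answer carries a bounded chance of error.

```python
# Miller-Rabin primality test: a bona fide algorithm that is also a
# heuristic, in that "probably prime" has a small (bounded) error rate.
import random

def miller_rabin(n, rounds=20):
    """Return True if n is probably prime, False if definitely composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # no witness found in any round: n is probably prime
```

Each round a composite survives has probability at most 1/4, so 20 rounds make a wrong “probably prime” astronomically unlikely, a mathematical guarantee about the error bound, even though the answer itself is probabilistic.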
“Algorithm” just means something which follows a series of steps (like a recipe). It absolutely does not require understanding and doesn’t require determinism or reliable outputs. I am sympathetic to the distinction that (I think) you’re trying to make but ANNs and inference are most certainly algorithms.
> On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
It is hard to assess the comment above. Depending on what you mean, it is incorrect, inaccurate, and/or poorly framed.
The word “really” is a weasel word. It suggests there is some sort of threshold of understanding, but the threshold is not explained and is probably arbitrary. The problem with these kinds of statements is that they are very hard to pin down. They use a rhetorical technique that allows a person to move the goal posts repeatedly.
This line of discussion is well covered by critics of the word “emergence”.
> But the inferences of the resulting neural nets is not an algorithm
It is a self-delimiting program. It is an algorithm in the most basic sense of the definition of “partial recursive function” (total in this case) and thus all known results of computability theory and algorithmic information theory apply.
> Formal system is a collection of defined operations
Not at all.
> I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
We have zero evidence of this one way or another.
—
I’m looking for loopholes around Gödel’s theorems just as much as everyone else is, but this isn’t it.
Heuristics implemented within a formal system are still bound by the limitations of the system.
Physicists like to use mathematics to model reality. If our current understanding of physics is fundamentally correct, everything that can possibly exist is functionally equivalent to a formal system. To escape that, you would need some really weird new physics. It would also have to be really inconvenient new physics, because it could not be modeled with our current mathematics or simulated with our current computers.
To be fair, I muddled the concepts of formal/informal systems versus completeness and consistency. I think if you start from the assumption that an ANN is a formal system (not a given), you must conclude that it is necessarily inconsistent. The AI we have now hallucinates far too much for us to trust any truth derived from its “reasoning.”
Excuse me, what are you talking about? You think there is any part of computability theory that doesn't apply to AI? With all respect, and I don't intend this in a mean way, I have to call all of this exactly what it is: nonsense. There is a fundamental misunderstanding here of computability theory, Turing machines, the Church-Turing thesis, etc. Any standard text on the subject should clear this up.
But surely any limits on formal systems apply to informal systems? By this, I am more or less suggesting that formal systems are the best we can do, the best possible representations of knowledge, computability, etc., and that informal systems cannot be "better" (a loaded term herein, for sure) than formal systems.
So if Gödel tells us that a formal system will either be consistent yet contain statements it cannot prove, XOR be inconsistent and therefore unreliable to some degree, then surely informal systems will, at best, be the same and, at worst, be much worse?
I suspect that if formal systems were unequivocally “better” than informal systems, our brains would be formal systems.
The desirable property of formal systems is that the results they produce are proven in a way that can be independently verified. Many informal systems can produce correct results for problems without a known, efficient algorithmic solution. Lots of scheduling and packing problems are NP-complete, but that doesn’t stop us from delivering heuristic-based solutions that work well enough.
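A concrete instance of the “good enough” point: first-fit decreasing is a classic heuristic for bin packing, which is NP-hard in general. It offers no optimality proof, yet it is known to use at most roughly 11/9 of the optimal number of bins.

```python
# First-fit decreasing heuristic for bin packing. No guarantee of an
# optimal packing, but every bin it produces respects the capacity, and
# the bin count is provably close to optimal -- "works well enough".

def first_fit_decreasing(items, capacity):
    """Pack item sizes into bins of the given capacity; return the bins."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits; open a new one
    return bins

packed = first_fit_decreasing([7, 5, 4, 3, 2, 2], capacity=10)
# Total size is 23, so at least 3 bins are needed; the heuristic finds 3.
assert len(packed) == 3
assert all(sum(b) <= 10 for b in packed)
```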
Edit: I should probably add that I’m pretty rusty on this. Gödel’s theorem tells us that if a formal system (powerful enough to express arithmetic) is consistent, it will be incomplete. That is, there will be true statements that cannot be proven in the system. Conversely, if the system is complete, that is, every statement can be proven true or false, then it will be inconsistent. That is, you can prove contradictory things in the system.
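Stated slightly more formally (a standard textbook formulation, with $T$ standing for any hypothetical theory, nothing specific to this thread): for any consistent, effectively axiomatized theory $T$ that can express basic arithmetic,

```latex
% First incompleteness theorem: some sentence G_T is neither provable
% nor refutable in T, so a consistent T is incomplete.
\exists\, G_T \;:\; T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
```

and contrapositively, if $T$ proves either $G$ or $\lnot G$ for every sentence $G$ (completeness), then $T$ must be inconsistent.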
AI we have now isn’t really either of these. It’s not working to derive truth and falsehood from axioms and a rule system. It’s just approximating the most likely answers that match its training data.
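The “approximating the most likely answers” step can be sketched as turning scores into a probability distribution and picking the likeliest continuation (the tiny vocabulary and the logit values below are invented for illustration):

```python
# Toy next-token selection: convert raw scores (logits) to probabilities
# via softmax and take the most likely continuation. No axioms, no proof
# rules -- just likelihood matching against training-derived weights.
import math

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["Paris", "London", "banana"]
logits = [4.0, 2.5, -1.0]  # hypothetical scores after "The capital of France is"
probs = softmax(logits)

# Greedy decoding: pick the highest-probability token.
answer = vocab[probs.index(max(probs))]
assert answer == "Paris"
assert abs(sum(probs) - 1.0) < 1e-9
```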
All of this has almost no relation to the questions we’re interested in like how intelligent can AI be or can it attain consciousness. I don’t even know that we have definitions for these concepts suitable for beginning a scientific inquiry.
Yeah I don’t know why GP would think computability theory doesn’t apply to AI. Is there a single example of a problem that isn’t computable by a Turing machine that can be computed by AI?
It does apply to AI in the sense that the computers we run neural networks on may be equivalent to Turing machines, but the ANNs themselves are not. If you did reduce an ANN to a formal system, you would likely find, in terms of Gödel’s theorem, that it is sufficiently powerful to prove a falsehood, thus failing the consistency property we would like in a system used to prove things.