
Suppose the algorithm could update itself with new information. Then you can formulate it in one of two ways:

1. The process of updating the algorithm can, itself, be described by a Turing machine. Think of it like an emulator / debugger (since a universal Turing machine exists, i.e., you can write an evaluator for a Turing machine as a Turing machine). The outer Turing machine can run the inner one partially, realize it needs changes, update it, and then continue running it and return its output.

But then the outer Turing machine, which is self-contained, is itself subject to the halting problem, and you can construct the counterexample based on it. You've just made the machine bigger (see the sketch at the end of this comment).

2. The process of updating the algorithm cannot be described by a Turing machine. Perhaps it involves some sort of higher form of intelligence which itself cannot be represented as a Turing machine (so, if you want to say "a human updates it," you're working on the assumption that the human brain cannot be implemented/emulated - however slowly - on a Turing machine, and the decisions of how to update the algorithm require this supra-Turing computational power).

But then, as before, the actual machine is this combination of the extra intelligence and the Turing machine. You've made some stronger non-Turing machine to solve the problem, and the statement "No Turing machine can determine whether any arbitrary Turing machine halts" still holds true. You've introduced a new class of machines with additional power over Turing machines. And the halting problem now applies to these machines when evaluating whether a machine of their own class halts, even though they can determine whether simpler machines halt. Your human-Turing mechas can solve the halting problem for Turing machines, but they cannot solve it for other human-Turing mechas. See https://en.wikipedia.org/wiki/Turing_jump

In fact, this model, applied to Gödel's theorem, was explored in Turing's own Ph.D. thesis: https://en.wikipedia.org/wiki/Systems_of_Logic_Based_on_Ordinals
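To make point 1 concrete, here's a tiny runnable Python sketch. The inner "checkers" are deliberately silly guesses, hypothetical stand-ins rather than real halting checkers; the point is only that the run-inspect-update-resume loop is itself ordinary computation, so the composite is one fixed machine:

    # Toy sketch of case 1. The checkers below are placeholder guesses,
    # not real halting deciders; what matters is that running, inspecting,
    # updating, and resuming the inner machine is all ordinary computation,
    # so the outer program is itself a single fixed Turing machine.

    def checker_v0(program):
        return "don't know"                # inner machine H_0

    def checker_v1(program):
        # arbitrary revised guess H_1 (intentionally meaningless)
        return "halts" if len(program) % 2 == 0 else "loops"

    def update(checker):
        # the "updater": deterministically produces the next revision
        return checker_v1

    def outer(program):
        checker = checker_v0
        verdict = checker(program)         # partial run of the inner machine
        if verdict == "don't know":        # "realize it needs changes"
            checker = update(checker)      # rewrite the inner machine
            verdict = checker(program)     # resume with the new one
        return verdict                     # the outer machine's final answer

    print(outer("while True: pass"))       # the composite is just one program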



Thanks for the response! I will definitely read about Turing jumps :)

I'm not sure I understand your point 1. Reading the halting problem undecidability proof, it goes like this:

Suppose machine H can solve the halting problem. We construct a special machine M which calls H(M) and negates the output: M loops forever if H says it halts, and halts if H says it loops. Then M halts if and only if H(M) says it doesn't, a contradiction, hence H doesn't exist.
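In Python-flavored pseudocode, the construction looks roughly like this (h is the hypothetical decider, which is exactly what the proof shows cannot exist):

    # Sketch of the diagonal argument. Assume h(machine) returns
    # True iff machine() halts; h is the hypothetical decider H.

    def h(machine):
        ...  # cannot actually be implemented; that's the theorem

    def m():
        if h(m):             # ask the decider about ourselves...
            while True:      # ...and do the opposite: loop if it says "halts"
                pass
        # otherwise halt: do the opposite of "loops"

    # Whatever h(m) returns, it's wrong: True means m() loops,
    # False means m() halts. So no such h exists.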

On the other hand, if H could self-modify, we would have a sequence H_0, H_1, ... of machines based on its modifications. I.e., M is written against H_i, but later we're calling H_j(M). There's no guarantee that our negation still works.


How do you generate the sequence H_0, H_1, ...?

Specifically, is there a Turing machine G that can generate H_0, H_1, ..., onwards, perhaps taking in some input along the way?

If so, then you can construct a self-contained Turing machine Z that takes a machine M as input, which works as follows: it calls G to generate H_0, runs H_0 for a bit on M, then calls G to generate H_1, runs H_1 for a bit on M, and so forth. This is a single Turing machine which does not need external modification. Internally, it works by having an inner Turing machine that it modifies. But Z itself doesn't change.

Then you can define M = "if Z(M): loop", call Z(M), and get your contradiction back.

If not, then you've built a more-powerful-than-a-Turing-machine machine of some sort.
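Here's a rough Python sketch of the "if so" branch. G and run_for_steps are hypothetical helpers standing in for "generate the i-th revision H_i" and "run a machine for a bounded number of steps" (a universal TM can do both), so this isn't runnable as-is:

    # Sketch of Z: a single fixed machine that internally cycles through
    # the revisions H_0, H_1, ... produced by G. Nothing outside Z ever
    # modifies it, so Z is an ordinary Turing machine.

    def Z(machine):
        i = 0
        while True:
            H_i = G(i)                # hypothetical: generate revision H_i
            # hypothetical: run H_i on `machine` for at most 2**i steps,
            # returning True/False if it decides, or None if it runs out
            verdict = run_for_steps(H_i, machine, budget=2 ** i)
            if verdict is not None:   # H_i reached a decision
                return verdict
            i += 1                    # otherwise move on to H_{i+1}

    def M():
        if Z(M):          # the same diagonal trick, now aimed at Z itself
            while True:
                pass
        # halt if Z says "loops": Z is wrong about M either way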


Thanks so much! This is fun.

I think, even with your explanation, there's still a big handicap placed on the halting machine's code.

To illustrate: if I were to ask you, "is the correct answer yes or no, given that I will later reverse whatever you answer, and that you can't change your answer afterwards?" Okay, of course, with that setup, you can never win. I don't think that reveals any limitation of your understanding; you just got pulled into a game that you can't win.

The limitation put on the code, in your example too, is that although the machine can take in extra input along the way, at a certain point the mathematician gets the last word.

If you have G(H,M) which at each point gives you [0|1]:H', a more interesting question is what happens if you define M = "if H(M)[0]: print H(M)[1] and then loop". Then, does the printed-out H(M)[1](M) produce any contradiction?

Hopefully I'm making sense here! Thanks again for engaging.


I mixed up some things here... If you have G(G,M) which at each point gives you [0|1]:G', it's interesting to think of what happens if you define M = "if G(G,M)[0]: print G' = G(G,M)[1] then loop"... does G'(G',M) always produce a contradiction?



