
If I understand this article, it’s about a 500-page book that is devoted to one proof of one theorem in topology. I find that amazing.

And cheers to Quanta Magazine for regularly publishing popularizations of research mathematics. I know many, at times, take issue with their simplifications and framing, but they're trying, where almost no one else with their reach covers these areas at all.



I hope they have a quality editor. Would hate to have a situation where they made a mistake on page 5, rendering the subsequent 495 pages worthless gibberish without a correction. (I know there's a bit from a TV show or movie like this, but I can't recall where. Maybe Good Will Hunting?)

On a serious note, I'm wondering how approachable this book is to non-mathematicians. I mean, I've had more formal math education than your average lay person, but I've also hardly used most of it since college. I'm curious if it'd be a worthwhile read or just something that'd end up sitting on my shelf, mostly unread...


It's the Peppa Pig episode where Mummy Pig writes a book and George overflows the buffer of her computer by scoring very high in the chicken game.


It gets better:

it is used in at least one, and I think two or possibly more, later episodes.

The one that pops into my mind is the one where the kids dress up as something from their favourite book, and the elephant kid dresses up as the very long number from Mummy Pig's book.

For some reason, even while I'd much prefer silence, Peppa Pig doesn't annoy me as much as certain other kids' TV shows. Maybe it is the lack of lessons to learn, or that everyone messes up, or something like that?


Oh my god that's hilarious

Here's the link: https://youtu.be/IMAav163SFQ?t=42


As a mathematician myself, I highly doubt it would be readable for non-mathematicians. Typically reading math books is work, even for mathematicians, and not so much something you do to relax in the evening.


Was the movie you remember maybe "Proof"?

https://en.m.wikipedia.org/wiki/Proof_(2005_film)

PS: it had a mathematician with a band that did a song about an imaginary number in it.


It will only cost you a little over a Benjamin to find out.


I wonder how many other results in maths require an entire book length treatment (or multiple book length treatments)? The one that jumps out to my mind is Gödel's Incompleteness Theorem(s)[1]. I know there are at least a couple of complete books dealing exclusively with this result.

[1]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


Godel's theorem is actually not that complex. Sure, you may think of "Godel, Escher, Bach", but that book is stuffed with sooo much material that is only tangentially related to the theorem.

Of course the consequences of that theorem are legion, and they do warrant many books worth of discussion. But the proof itself can fit in a long-ish blog post, I think.

Fermat's last theorem, on the other hand...


For those interested:

Assume a formal language that can model the natural numbers.

Consider the set of all provable statements. This is countable because it’s a subset of the list of strings of a finite number of characters, which is countable.

Consider the statement "Statement N is not provable", constructed so that N is its own index in the list. This is a valid statement that cannot appear in the list: if it were provable it would be false, so (in a sound system) it is true but unprovable.

Godel proved significantly more than that, but that’s the basic result. It’s basically the same as the diagonal argument of real numbers being uncountable and it’s also basically the halting problem.
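The shared diagonalization trick is easy to sketch. Below, infinite binary sequences are modeled as Python functions from index to bit; this is a toy illustration of Cantor's argument, not a formal proof, and the names are invented:

```python
# A minimal sketch of the diagonal trick shared by all three results.
# Represent each "infinite binary sequence" as a function n -> 0/1.
# Given any enumeration of such sequences, build one not on the list.

def diagonal(sequences):
    """Return a sequence differing from the k-th sequence at index k."""
    return lambda k: 1 - sequences[k](k)

# A few sample sequences: all zeros, all ones, alternating.
seqs = [lambda n: 0, lambda n: 1, lambda n: n % 2]

d = diagonal(seqs)
# d differs from the k-th sequence at position k, so it is not in the list.
for k, s in enumerate(seqs):
    assert d(k) != s(k)
```

The same "flip the k-th answer of the k-th entry" move is what powers both Gödel's sentence and Turing's negator machine.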


Since these three results are so similar, are they special cases of a more general concept?


Arguably the halting problem is just a subset of Godel where every theorem is "This does or does not halt". Someone's written a paper going the other way: https://www.andrew.cmu.edu/user/avigad/Teaching/halting.pdf

The crucial trick is that you can plug in an entry as data to another "program/theorem". This is extremely interesting, because there are limited logics and limited computation models where Godel and the Halting Problem do not apply.

Not so sure about the diagonal argument.


GEB is a book about intelligence/consciousness. The incompleteness theorem is incidental — it’s only there to illustrate the notion of self-referentiality, and I’m not sure it does a particularly good job at it.


If you could prove the ABC conjecture then Fermat's last theorem becomes much less than a page of proof :)


Speaking of ABC, some years ago HN was all abuzz with inter-universal Teichmüller theory; did that fizzle out, or what?



The incompleteness theorem can be proved in a few lines given the right definitions up front. It's basically the same as the undecidability of the halting problem, which has a proof that is written as a poem: https://introcs.cs.princeton.edu/java/54computability/haltin...

There is a wonderful book by Torkel Franzén about the incompleteness theorem that is 100+ pages, but it is not purely about the proof. It's a popularization that discusses bogus ways philosophers have interpreted the theorem, and stuff like that. It is called "Gödel's Theorem: An Incomplete Guide to its Use and Abuse".

Meanwhile Aschbacher and Smith's proof of the classification of quasithin groups really is a book of 1200+ pages: https://books.google.com/books?id=SbO2irSQHMEC

And of course the proof of the classification of finite simple groups has afaik never been published all in one place, though there are some book-length expositions of it referring back to the original papers. It is a set of papers spanning around 20 years and adding up to around 10,000 pages. It is sometimes called "the enormous theorem": https://en.wikipedia.org/wiki/Enormous_theorem


The most recent case similar to this one is Mochizuki's work on the ABC conjecture, which has still not reached the point where it's accepted by the majority of the community due to its complexity. There are lots of parallels between Mochizuki and Freedman. They both had to break a lot of new ground, and they both had trouble communicating their results to enough people to become part of the official canon of mathematics that can actually be taught to others.

Freedman at least convinced a few key people, but AFAIK Mochizuki's work has never been understood by anyone not named Mochizuki. It may be complete BS, it may be earthshakingly brilliant, but either way it will evidently take someone's entire career to prove it.


I believe the latest news on Mochizuki’s work is that Peter Scholze and Jakob Stix identified a fatal flaw, and Mochizuki has not convincingly rebutted it. See e.g. the first item here: https://www.math.columbia.edu/~woit/wordpress/?p=12429

Both the claimed proof and the disproof have now been published in journals, and neither has been retracted! That’s a rather unusual situation to say the least. But I think people in the field generally agree that the proof is flawed and unfixable.


Experts have been skeptical for quite some time. You can see discussion on the exact point Scholze (PS) brings up in the following blog post (the post and many comments are by prominent number theorists)

https://www.galoisrepresentations.com/2017/12/17/the-abc-con...

One quote from the post I find worth repeating:

> This post is not about making epistemological claims about the truth or otherwise of Mochizuki’s arguments. To take an extreme example, if Mochizuki had carved his argument on slate in Linear A and then dropped it into the Mariana Trench, then there would be little doubt that asking about the veracity of the argument would be beside the point. The reality, however, is that this description is not so far from the truth.


I would say that most hallmark results in theory-heavy mathematics (number theory, algebraic geometry, topology, …) could stretch to at least a book length treatment if you wanted a mostly self-contained proof.

It sounds like the main difference with this theorem is that the writeup of the original proof was so complicated, terse, and error-prone, that very few experts even understood it.


I don’t think the incompleteness theorems are nearly as wordy - this translation of Gödel’s original paper is under 40 pages after subtracting the introduction (not part of the original).


The classification of finite simple groups [1] is apparently ~11-12 books long (when it's finished). Yes, several books for a single theorem.

[1] https://en.wikipedia.org/wiki/Classification_of_finite_simpl...


I don't wanna know how many books the classification of finite complex groups is gonna take.


Not very many, actually. Finite simple groups are kind of like prime numbers: any finite group can be broken down into simple composition factors, similar to how any natural number can be broken down into a multiset of prime factors.

That's why the classification of finite simple groups is so important: it's essentially a classification of the building blocks of all finite groups.

https://en.wikipedia.org/wiki/Composition_series#Uniqueness:...
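The number-theory half of the analogy can be made concrete with a toy factorizer. This is only an illustration of the Jordan-Hölder parallel (the integer side, not the group side):

```python
# Toy illustration of the analogy: just as 360 decomposes into the
# multiset of primes {2, 2, 2, 3, 3, 5}, every finite group decomposes
# (by the Jordan-Hölder theorem) into a multiset of simple
# "composition factors". This code shows only the integer side.

def prime_factors(n):
    """Return the multiset of prime factors of n, via trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]
```

Classifying the finite simple groups is the analogue of writing down the list of primes: it tells you what the indivisible pieces are, though (unlike multiplication) reassembling groups from their factors is itself a hard extension problem.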


I suspect someone could write a book (or an HN comment at least) about the popular confusion about Godel's 1st incompleteness theorem. :-)

In fact we (my brother and I) agreed that it bears a striking resemblance to Turing's halting problem: They both seem to show what computers or math can't do, but in fact both theorems only reveal limitations of certain models of computers and/or math. We both have undergrad degrees in math fwiw.

Both theorems use a clever logical trick. For Gödel's theorem, it is an encoding of "This sentence is not provable". For the halting problem proof, Turing first fixes the algorithm, asks the algorithm what its result will be, then switches the correct answer, forcing the algorithm to produce the incorrect answer.

Would the proof completely fall apart if the algorithm could update itself to handle the new information? The only thing stopping it is that in the formulation of the problem, the algorithm is assumed to not be able to change with new information (ie it's not an "online" algorithm). Similarly, what if the model of math had mechanisms for recognizing paradoxes, or circular logical requirements, etc. I don't think either of those theorems would work anymore.

Anyway, I'm not saying the Halting problem would be easy to compute with the right model. If we had an efficient algorithm to solve the halting problem, we could solve tons of math problems easily. We could write some code to enumerate any countably infinite sequence, checking for the presence of some condition, and make inferences based on if it halts or not. For example, we could solve the Collatz conjecture by just writing code to halt if any number's sequence doesn't eventually go to one, and checking whether that loop ever halts.
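The Collatz example can be sketched concretely, in a slightly different but equivalent framing: for each n, the loop below halts exactly when n's sequence reaches 1, so asking a halting oracle (which, per the theorem, cannot exist as an ordinary program) about `reaches_one(n)` answers the conjecture for that n:

```python
# Sketch of the Collatz example. For a given n, this loop halts exactly
# when n's Collatz sequence reaches 1 -- so a halting oracle (which the
# theorem says cannot exist as a program) would decide the conjecture
# one starting value at a time.

def reaches_one(n):
    """Halts (returning True) iff the Collatz sequence from n hits 1."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
    return True

# Every number small enough to check quickly does halt:
assert all(reaches_one(k) for k in range(1, 1000))
```

Note that `reaches_one` really does loop forever on a counterexample; the assertion only passes because all small starting values are known to reach 1.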

My punchline is, I think it's beautiful that even the undecidable, uncomputable problems are only that way because we are lacking a better model. Or at least that's my current understanding of the situation :-)


> Would the proof completely fall apart if the algorithm could update itself to handle the new information? The only thing stopping it is that in the formulation of the problem, the algorithm is assumed to not be able to change with new information (ie it's not an "online" algorithm).

Simply put, the halting problem means that you cannot make a program (a static piece of code) that determines, in a finite amount of time, whether another program halts when run on some input. I.e., program P takes program H and input I and answers whether H run on I halts. I honestly don't know what part of the computational model could be changed to make that untrue (besides oracles, or the Church-Turing thesis being wrong).

Also, I don't think "online algorithm" means what you think it means. Online algorithms have no more computational power than offline ones; they just can get started with only partial information.

> Similarly, what if the model of math had mechanisms for recognizing paradoxes, or circular logical requirements, etc. I don't think either of those theorems would work anymore.

I don't know what a model of recognizing paradoxes would look like. The 1st incompleteness theorem is based on only a few simple axioms and reasonable rules of deduction. There isn't much wiggle room to make a mathematical model that isn't bound by it, but is useful.


> The halting problem means that you cannot make a program (a static piece of code) that can determine in a finite amount of time if another program halts operating on some input.

Interesting. My first thought was, couldn't we just create programs that are designed to keep growing? Isn't that both completely possible and more powerful than the model of computing where programs are static?

I realized it may not be entirely possible to build an actual machine which hosts endlessly growing code. There are probably laws of physics which put some type of limit on how big things we build can get.


> Interesting. My first thought was, couldn't we just create programs that are designed to keep growing? Isn't that both completely possible and more powerful than the model of computing where programs are static?

What does it mean to "create programs that are designed to keep growing"? If they keep going by executing a finite program (if modified by a non-static program, what created that program?) on their code and resuming, that would not be more powerful (Turing machines can do that).

> I realized it may not be entirely possible to build an actual machine which hosts endlessly growing code. There are probably laws of physics which put some type of limit on how big things we build can get.

So long as you have enough memory, even an Intel 8086 would be able to compute arbitrarily large programs. It could just emulate a universal Turing machine.


> What does it mean to "create programs that are designed to keep growing"? If they keep going by executing a finite program (if modified by a non-static program, what created that program?) on their code and resuming, that would not be more powerful (Turing machines can do that).

To me, your question is like asking this: Do all the programs that we humans ask computers to compute come from a finite program? Is life itself powered by a finite program?

A program which keeps growing can make use of all the code it has seen before, all the results it has seen from running code previously. Although any of its inputs are finite strings, there are no theoretical limitations regarding the source of the inputs it can receive. They come from outside.

The core idea of the undecidability of the halting problem is that we can construct a machine M that can ask the halting machine for a bit, flip the bit, give it back to the same static halting machine code, thus forcing it to produce the wrong bit.

Any human in that situation would change their understanding of the situation, and we would tend to afford them a chance to change their answer based on their new understanding. Being able to adjust execution strategy after seeing new information is core to human life. Why do we neglect to provide computers opportunities which are available to all humans?

A somewhat trivial reply to Turing's original negator machine M would be to emit a new version of the halting code which checks for the exact code given by M and then returns the opposite of what was returned when M calls out to H. Then, in the argument, we would need to address the possibility that the H machine called out to in M may provide a different answer when we pass it all back through H again. The argument wouldn't work as-is if we afforded H the ability to respond to the situation.

What would the statement be? "Given the information the halting code just received, it is not possible for it to change its answer in response to the contradiction of its previous answer." A child can figure out this simple trick - we all understand why it's a trick - so why do we not allow a computer the same opportunity in our theory?

> So long as you have enough memory, ...

True enough. My point was in fact that if the inputs our machine receives are all needed, we would run out of memory eventually.


Your "growing program" can (presumably, since we don't have a formal definition) be interpreted by a finite program; i.e. which can treat unlimited memory as instructions to execute. This means, in a very strict and formal sense, that it is not a more powerful model of computing. Therefore it is subject to the halting problem.

But even if it were a more powerful model, it would still be subject to an analogous halting problem. This is well-trodden ground. It is a misunderstanding to believe that the halting problem is fundamentally tied to the exact power or nature of Turing Machines.


Thanks, I read the proof [1] again a lot more carefully last night, and really failed to formalize my point.

I tried to reason about it in different ways. Such as, for case 1, considering Z(Z) halts. I don't think it's unreasonable to say that H(Z,Z) could in fact return true. It could read the code for Z, determine that Z(Z) will loop since H(Z,Z) would return false, return true, and then Z(Z) would halt since H(Z,Z) returned true. No immediate contradiction.

Problem is that doesn't completely eliminate the contradiction, it just introduces another one. Even if Z(Z) halts and H(Z,Z) returns false, you could argue backwards and say that the only way for Z(Z) to halt is if H(Z,Z) returns true, but H(Z,Z) can't return two different results.

Part of me still doesn't like the negator program. Too many contradictions. Apparently there are constructive proofs for the halting problem too, which I don't understand yet, so the rabbit hole continues.

Thanks again for engaging.

[1]: https://www.comp.nus.edu.sg/~cs5234/FAQ/halt.html


> Such as, for case 1, considering Z(Z) halts. I don't think it's unreasonable to say that H(Z,Z) could in fact return true. It could read the code for Z, determine that Z(Z) will loop since H(Z,Z) would return false, return true, and then Z(Z) would halt since H(Z,Z) returned true. No immediate contradiction.

This doesn't make sense. You said in the same case both that "Z(Z) halts" and "Z(Z) will loop" and then "Z(Z) would halt". So there is a contradiction here.

One assumption that you may not realize without the necessary background is that (a) "Z" is fixed in any particular case, we don't consider counterfactual "Z"; and similarly (b) "Z(Z)" must either always halt or always not halt, by the relevant definition of "program".


Thanks for the response. I was trying to illustrate why my argument doesn't work, but you did a much better job at that.

I may have found what I've been looking for in a related topic called oracle machines for the halting problem. Part of the definition is that an oracle machine can't solve the problem for machines equivalent to itself, which, I believe, would exclude Z-type programs from the domain.

To put my plea into some formal jargon: "Just because the halting problem is undecidable doesn't mean oracle Turing machines for the halting problem don't exist." Kind of a simple and obvious point in hindsight, but I'm happy with it.


Gödel's Incompleteness Theorem and the undecidability of the halting problem can in fact be proved using each other, so there is good reason to think that there is a deeper connection.

What you suggest isn't actually possible, and to explain why, it's worth stepping back a little bit. Originally, there was a program to formalize mathematics that came to be known as naïve set theory. This theory ran into a major roadblock when Russell proposed a paradox: does the set of all sets that do not contain themselves contain itself? The solution to this, as found in modern set theory, is to very carefully construct sets in such a way that no set can contain itself in the first place, and so the very question isn't expressible in the logic of modern set theory.

The underlying point of both Gödel and Turing is, as you say, constructing a similar statement akin to "this statement is false." But more specifically, what they actually did was to show that, once you meet some relatively low barriers, it is impossible to make a system that omits that construct. In effect, what is done is to encode (respectively) proofs and programs into integers, and then using this encoding, shows that you can always create a self-reference that blows up. Yes, if you change the encoding, it requires a potentially different number to create the self-reference, but the point is that so long as there is a bijection to the natural numbers, then the problematic self-reference must exist.

And, as you may be able to reflect upon, everything we as humans can write or speak can be reduced to a finite-length string of a finite-symbol alphabet, so everything we create must be bijectable with the natural numbers if infinite.
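That reduction of finite texts to natural numbers is easy to make concrete. Below is a toy sketch using bijective base-K numeration over a small alphabet; this is an invented illustrative scheme, not Gödel's actual prime-power encoding:

```python
# A toy "Gödel numbering": a bijection between strings over a finite
# alphabet and natural numbers (bijective base-K numeration). Invented
# for illustration -- not Gödel's prime-power scheme -- but it makes
# the point that every finite text maps to a unique number.

ALPHABET = "abcdefghijklmnopqrstuvwxyz ()=+-*'"
K = len(ALPHABET)

def encode(s):
    """Map a string to a unique natural number (empty string -> 0)."""
    n = 0
    for ch in s:
        n = n * K + ALPHABET.index(ch) + 1  # digits 1..K: bijective base-K
    return n

def decode(n):
    """Inverse of encode."""
    chars = []
    while n > 0:
        n, d = divmod(n - 1, K)
        chars.append(ALPHABET[d])
    return "".join(reversed(chars))

s = "this statement is not provable"
assert decode(encode(s)) == s
assert encode("") == 0 and encode("a") == 1
```

Once statements (or programs) are numbers, statements can quantify over statements, and the self-referential construction goes through.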


I am honoured to have nerd-sniped you, jcranmer. I won't respond at length as I have in some of the child threads here.

As far as I can tell in my other expositions, apparently I object to the very fact that mathematicians like to fix things and make inferences about them! I think some things, such as computation, humans, animals, etc, should be allowed to be sensitive to time and prior inputs.. and the very act of fixing now and inferencing later seems to infringe upon that right. That may be the lowest barrier of all of mathematics!

Cheers my friend, thank you for engaging.


"apparently I object to the very fact that mathematicians like to fix things and make inferences about them": that's not really what's going on; time can be present in mathematics (e.g., defining a sequence iteratively) and the like.

What you do have to fix is the definition of things: your reasoning is very shaky, and once you start formalizing things you will find you cannot "defeat" the halting problem or the incompleteness theorem.


Thanks. I have a math degree, I know that I'm handwaving haha. All the comments are encouraging me to formalize my ideas. Maybe I will make a blog post after some further consideration.


Suppose the algorithm could update itself with new information. Then you can formulate it in one of two ways:

1. The process of updating the algorithm can, itself, be described by a Turing machine. Think of it like an emulator / debugger (since a universal Turing machine exists, i.e., you can write an evaluator for a Turing machine as a Turing machine). The outer Turing machine can run the inner one partially, realize it needs changes, update it, and then continue running it and return its output.

But then the outer Turing machine, which is self-contained, is itself subject to the halting problem, and you can construct the counterexample based on it. You've just made the machine bigger.

2. The process of updating the algorithm cannot be described by a Turing machine. Perhaps it involves some sort of higher form of intelligence which itself cannot be represented as a Turing machine (so, if you want to say "a human updates it," you're working on the assumption that the human brain cannot be implemented/emulated - however slowly - on a Turing machine, and the decisions of how to update the algorithm require this supra-Turing computational power).

But then, as before, the actual machine is this combination of the extra intelligence and the Turing machine. You've made some stronger non-Turing machine to solve the problem, and the statement "No Turing machine can determine whether any arbitrary Turing machine halts" still holds true. You've introduced a new class of machines with additional power over Turing machines. And the halting problem now applies to these machines when evaluating whether a machine of their own class halts, even though they can determine whether simpler machines halt. Your human-Turing mechas can solve the halting problem for Turing machines, but they cannot solve it for other human-Turing mechas. See https://en.wikipedia.org/wiki/Turing_jump

In fact, this model, applied to Gödel's theorem, was explored in Turing's own Ph.D. thesis: https://en.wikipedia.org/wiki/Systems_of_Logic_Based_on_Ordi...


Thanks for the response! I will definitely read about Turing jumps :)

I'm not sure I understand your point 1. Reading the halting problem undecidability proof, it goes like this:

Suppose machine H can solve the halting problem. We construct a special machine M which calls H(M) and negates the output: M halts if H says it doesn't halt, and loops forever if H says it halts. Either way H answers wrongly; hence H doesn't exist.
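The negation trick can be demonstrated against any concrete candidate decider. Here is a minimal Python sketch with an invented, deliberately wrong candidate `h` (a single-argument version asking "does prog(prog) halt?"); any other candidate fails symmetrically:

```python
# The negator construction, run against a concrete (and of course
# wrong) candidate decider. Any candidate `h(prog)` that claims to
# answer "does prog(prog) halt?" is defeated the same way.

def h(prog):
    """A hypothetical halting decider -- here, one that always answers
    'does not halt'. Any other candidate fails on m symmetrically."""
    return False

def m(prog):
    """The negator: halts iff h says prog(prog) does not halt."""
    if h(prog):
        while True:      # h said "halts", so loop forever
            pass
    return "halted"      # h said "doesn't halt", so halt immediately

# h claims m(m) does not halt...
assert h(m) is False
# ...but m(m) plainly halts, so h answered wrongly on this input.
assert m(m) == "halted"
```

Had `h` answered True instead, `m(m)` would loop forever, and `h` would be wrong in the other direction.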

On the other hand, if H could self-modify, we would have a sequence H_0, H_1, ... of machines based on its modifications. I.e., M is written against H_i, but later we're calling H_j(M). No guarantee that our negation still works.


How do you generate the sequence H_0, H_1, ...?

Specifically, is there a Turing machine G that can generate H_0, H_1, ..., onwards, perhaps taking in some input along the way?

If so, then you can construct a self-contained Turing machine Z that takes a machine M as input, which works as follows: it calls G to generate H_0, runs H_0 for a bit on M, then calls G to generate H_1, runs H_1 for a bit on M, and so forth. This is a single Turing machine which does not need external modification. Internally, it works by having an inner Turing machine that it modifies. But Z itself doesn't change.

Then you can define M = "if Z(M): loop", call Z(M), and get your contradiction back.

If no, then you've built a more-powerful-than-a-Turing-machine machine of some sort.


Thanks so much! This is fun.

I think, with your explanation, there's still a big handicap for the halting machine code.

To illustrate: if I were to ask you, "Is the correct answer yes or no, considering that later I will reverse my answer, and that you can't later change yours?" It's like, okay, of course, with that model, you can never win. I don't think it reveals any limitation of your understanding; you just got pulled into a game that you can't win.

The limitation placed on the code, and in your example too, is that although the machine can take in extra input along the way, at a certain point the mathematician gets the last word.

If you have G(H,M) which at each point gives you [0|1]:H', a more interesting question is what happens if you define M = "if H(M)[0]: print H(M)[1] and then loop". Then, does the printed-out H(M)[1](M) produce any contradiction?

Hopefully I'm making sense here! Thanks again for engaging.


I mixed up some things here... If you have G(G,M) which at each point gives you [0|1]:G', it's interesting to think of what happens if you define M = "if G(G,M)[0]: print G' = G(G,M)[1] then loop"... does G'(G',M) always produce a contradiction?


I found this nice presentation about Godel's theorems. https://youtu.be/HeQX2HjkcNo?t=895

"There is no proof for the statement with Godel-number g".

What I'm struggling with is trying to understand if self-reference can be formally represented by some symbol. Can it?

Godel's proof seems to rely on a trick that allows a formal system to talk about itself but is that possible? There would need to be a symbol for that and thus a symbol that refers to itself?

A picture of a pipe is not a pipe. Similarly, a representation of a proof is not a proof, is it? Not saying I believe I'm right and Gödel wrong, but it just feels strange to me that a formal system could ever "prove" anything about itself.

What does it mean that something is "true"? That it can be proven?


Not every picture of a pipe is a pipe but it's at least conceivable that an actual working pipe can be constructed out of something which happens to be a picture of a pipe. Similarly Godel isn't proving that all statements in a system are paradoxically self-referential. Only that given any system of sufficient complexity, one can conceivably be constructed.

In any event, all of our mathematical proofs are predicated upon the idea that the representation of a mathematical object can be treated in terms of proof as equivalent to the mathematical object. Is "2" the number 2 or is it just a glyph that non-representationally depicts the successor of the first counting number? And if you believe the latter, then how do you go about proving anything about the actual successor of the first counting number without resorting to glyphs?


> given any system of sufficient complexity,

That's my question. It seems that adding "self-reference" to the system makes it more complicated than it would be otherwise. So it's not just a system with enough complexity to let you do arithmetic; you need a system with enough complexity to also do self-reference. And I would say that self-reference is complicated.

Do we need self-reference in any other areas of mathematics, than Godel's and related proofs?


I like to think of the halting problem as similar to the inability to predict the future. In order to predict the future you'd need a model of the whole universe, including the model. Once you think of it like that you realize that you can't beat halting in the classical universe.


The halting problem has that same style of thinking, but the universe doesn't necessarily play by such rules. For one, it's still an open question whether the universe could properly contain a model of itself (hence, needing a model of the universe plus the model isn't a guaranteed absurdity).


It is an absurdity though. You would need a material that can encode more information than it takes to physically represent it.


> Would the proof completely fall apart if the algorithm could update itself to handle the new information? The only thing stopping it is that in the formulation of the problem, the algorithm is assumed to not be able to change with new information (ie it's not an "online" algorithm).

First, an online algorithm is an algorithm that processes a stream instead of the full input. It's not an algorithm that changes itself.

Regarding the halting problem: let's assume you give me a program that can change itself and that you claim solves the halting problem. Apply Turing's proof to it, and it shows you have not solved the halting problem for all inputs, because the proof just gave you one for which it produces the wrong answer.


> I suspect someone could write a book (or an HN comment at least) about the popular confusion about Godel's 1st incompleteness theorem. :-)

That book has already been written: https://www.ams.org/notices/200703/rev-raatikainen.pdf


> reveal limitations of certain models of computers

Modulo weird stuff with black holes or whatever (not that I am qualified to understand whether they count), there are no known counterexamples to the Church-Turing thesis.


Your comment made me break down and finally make an HN account. I think I may have the perfect answer for you. Your comments remind me so much of myself, when I was in 7th grade and we just learned that 0.999... would exactly equal 1. Not almost, but exactly. I know that I asked my math teacher tricky questions, and left really unsatisfied, because "obviously it's not true", probably my teacher does not understand infinity. I kind of forgot about it, and it only really clicked a couple years later once I learned about series and limits, because I finally gained an intuition for working with infinity.

A very important notion about infinity is that "for any arbitrarily large N" and "for infinity" are not the same thing.

I actually think that the cause of the confusion is similar. Maybe your mental model of computation includes an "arbitrarily large N" where you actually need an "infinitely large N".

Note that the halting problem (and a whole bunch of other interesting undecidable problems) is actually "semi-decidable". This means we are guaranteed to get an answer to the question, but only if the answer is actually "yes". (Before you ask: taking the complement of a problem does not preserve semi-decidability, so no trickiness here.)

We need some intermediate results here. I am not going to prove them for brevity (and because I will probably be unable to produce them right now), so you'll have to trust me for them. Sorry!

A Turing machine P which gets input i produces output P(i). We can construct a machine P_i which ignores its input and instead always produces P(i) for a fixed i. We can also construct, for any P, an encoding Enc(P) (the "source code", so to say; formally a Gödel numbering) of the Turing machine. We can also show that there is a universal Turing machine U which, given Enc(P), produces the same outputs as P for all possible inputs i. Finally, we can construct a step-limited universal Turing machine U_j which halts after at most j steps.

So how does semi-decidability work in practice? Let us simulate a Turing machine P on U_1. If it halts, we return 1; if it has not halted after 1 step, we return 0. This machine solves the halting problem for all Turing machines that halt within 1 step. We can then look at U_2, U_3, and so on; each solves the halting problem for a strict superset of the machines the previous one handled. So the set of Turing machines for which our ever-growing family of halting-problem deciders has no answer keeps shrinking.
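A toy version of U_j in Python (a hypothetical `halts_within` of my own, with generators standing in for Turing machines and each `yield` counting as one step):

```python
def halts_within(machine, j):
    """Step-limited simulator U_j: run `machine` (a generator function)
    for at most j steps; report True only if it halts within budget."""
    gen = machine()
    for _ in range(j):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within j steps
    return False          # no verdict: may halt later, may run forever

def halts_in_three():     # a machine that halts after 3 steps
    for _ in range(3):
        yield

def runs_forever():       # a machine that never halts
    while True:
        yield

print(halts_within(halts_in_three, 2))   # False: budget too small
print(halts_within(halts_in_three, 5))   # True: U_5 decides this one
print(halts_within(runs_forever, 1000))  # False, and no finite j helps
```

Note the asymmetry: a True answer is definitive, while a False answer only means "not yet", which is exactly the semi-decidability described above.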

The halting problem now says: even if all machines of {U_j : j is a natural number} run in parallel, there will always be Turing machines P for which we never get an answer. It also tells us there will never be a U_infinity that would give us the answer.

If you don't like the self-referential proof, you can check out the diagonalization proof, which is more rigorous.

Now, you have given examples of, say, humans adapting to additional input. We know for a fact that humans, in their lifetime, can only perform a finite number of computational operations and parse a finite number of inputs. If I were a physicist I could probably cite some theorem of thermodynamics giving an upper bound on that number: astronomically huge, but finite. The same applies to the computing power of the universe. Huge, but finite.

This is all the halting problem says: we can solve a lot of problems by brute force, but never all of them. We can also solve all problems approximately by just giving an arbitrary (but fixed) answer once we get bored of our long-running Turing machine.

The halting problem matters because, for all we know and care, an infinite computer cannot physically exist, as the universe itself is finite. For example, the halting problem can be solved by a theoretical computer which speeds up by a factor of 2 on every step, and probably also by a machine with an exact encoding of every real number, which cannot exist in the universe as we currently understand it, for similar reasons. https://en.wikipedia.org/wiki/Hypercomputation

It's not that we cannot think of those models; it is just that they are completely useless and uninteresting once they get too powerful, and also impossible to build. You could approximate them, but the approximation is useless, as it will again be a convoluted Turing machine.


It's a bit late to add this, but I realize I wasn't entirely clear above. When I say "results in maths (that) require an entire book length treatment" I don't mean a proof that is inherently book length. I mean a proof that justifies (one or more) book-length "downstream" treatments that explain, clarify, expand, etc., the base proof.

Going back to Gödel's Incompleteness Theorem(s), for example: yes, the original proof is not that long, but for whatever reason people have felt compelled to write entire books on the topic after Gödel, to say more about the proofs. That kind of thing is more what I was thinking of, FWIW.


I did a university module on Gödel's Incompleteness theorem, and the complete lecture notes, which gave all the lead-up, explanation, and the proof itself along with lots of other related material, were only about 50 pages in total, if I recall correctly.

It was a pleasure to do that module, though!


"Incompleteness ex Machina" by Sebastian Oberhoff, is a nice paper-length exposition https://arxiv.org/abs/1909.04569.


Others have already mentioned it, but Gödel's incompleteness theorem doesn't need a book-length treatment. My undergraduate mathematical logic class covered the proof in a little more than a week.


It is pretty common these days for pure mathematics papers to be more or less book length anyway, say 80 pages or more.


It's amazing how math can require so much work for what appears to be a single result. But I once spent an entire semester proving a single theorem, and to do that I had to work through an entire textbook, so I don't think it's that unusual.


I am not a mathematician, though I do very applied math (ML). This semester I took a course intended for pure math MSc students, called Advanced Vector Spaces, having only done some linear algebra and calculus at the undergraduate level, plus some abstract algebra and some geometric algebra.

I am consistently in awe of how well mathematicians have stacked layers of abstraction one on top of the other, and how many different ideas end up being closely related to one another. Maybe I am romanticising it, coloured by my regret at not going for pure math, but there is beauty in all those abstractions that fit together so nicely.


As a former mathematician, I would say the stacked layers get a lot messier closer to the cutting edge.


May I ask why former?

Defining better abstractions is part of the process that got us this far. Even in ML, we are starting to define some very good abstractions for neural networks through the lens of symmetries and geometry.


The academic job market is brutal compared to tech, especially the “two-body problem”: https://slate.com/human-interest/2013/10/academia-s-confound...

I completely agree we should be defining better abstractions! I more meant it gets a lot harder to do so near the cutting edge.


I meant that it takes time to figure out the correct ones; modern algebra is the outcome of nearly 300 years of 'modern' mathematics!

It's unfortunate that such issues exist! But considering the problems within academia, you are most likely better off now than you would be otherwise, with respect to free time and salary.


Kevin Hartnett's popularizations of mathematical research are amazing; I've never seen anything like them before. (Said as a complete amateur with sketchy coverage of undergraduate pure math.)


Hurray for obsession, because without it humanity would be forever skirting around the dark corners of our understanding.


I realize I have little chance of actually being able to understand all of it. Still, I'd gladly have bought it at a lower cost. :(



