
It might not be able to do it for an arbitrary Turing machine / a universal Turing machine, but it might quickly figure out what a given Turing machine will do without executing all of its steps.


It might get it right some of the time, but it will necessarily be wrong some of the time. It's also very possible that for many (possibly even most) TMs, the most efficient algorithm for predicting the output is the TM itself.


> It might get it right some of the time, but it will necessarily be wrong some of the time

Yes, but the proof of the halting problem relies on diagonalization, i.e. on a very exotic and carefully crafted input.
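
For anyone who hasn't seen it, the diagonal construction is short enough to sketch in Python. The `halts` oracle below is hypothetical - the whole point is that no such function can exist:

    # Sketch of the diagonalization behind the halting-problem proof.
    # Suppose, for contradiction, we had a perfect oracle:
    def halts(program, data):       # hypothetical - cannot actually exist
        ...

    def diag(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:             # oracle says "halts", so loop forever
                pass
        else:
            return                  # oracle says "loops", so halt now

    # Does diag(diag) halt? Either answer contradicts the oracle, so
    # `halts` cannot exist. Note how contrived the input is: a program
    # fed its own source - the "exotic" input mentioned above.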

I would also note that, analogously, modern SAT solvers quickly solve most practical instances of NP-hard problems. Even though there exist hard cases they cannot solve in polynomial time (assuming P != NP), in practice these solvers are exceedingly useful.
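
To illustrate, the skeleton of such a solver is just backtracking search; everything that makes real solvers fast on practical instances (unit propagation, clause learning, heuristics) is layered on top. A minimal sketch, not a production solver:

    # Tiny backtracking SAT solver (the bare skeleton of DPLL). Clauses
    # are lists of nonzero ints; -x means "not x".
    def solve(clauses, assignment=()):
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                  # clause already satisfied
            rest = [lit for lit in clause if -lit not in assignment]
            if not rest:
                return None               # clause falsified -> backtrack
            simplified.append(rest)
        if not simplified:
            return assignment             # every clause satisfied
        lit = simplified[0][0]            # branch on some unassigned literal
        return (solve(simplified, assignment + (lit,))
                or solve(simplified, assignment + (-lit,)))

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(solve([[1, 2], [-1, 3], [-2, -3]]))  # e.g. (1, 3, -2)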

> It's also very possible that for many (possibly even most) TMs, the most efficient algorithm for predicting the output is that TM itself.

That could very well be true, but might still not hold for the subset of inputs (i.e. programs) that we actually care about in practice.

Relatedly, Kolmogorov complexity makes for some very interesting further reading (which essentially formalizes the problem we're discussing here): https://en.wikipedia.org/wiki/Kolmogorov_complexity
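
(Rough intuition for that link: K(x) is the length of the shortest program that outputs x, and compression gives a crude, computable upper bound on it. A toy sketch:)

    # Regular strings have short descriptions; "random" ones don't.
    # Compressed size crudely upper-bounds Kolmogorov complexity.
    import random
    import zlib

    regular = "01" * 500_000       # a million chars, ~15-char description
    noise = "".join(random.choice("01") for _ in range(1_000_000))

    print(len(zlib.compress(regular.encode())))  # small: simple pattern
    print(len(zlib.compress(noise.encode())))    # much larger: no short description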


I think the problem is you're assuming that general AI = Turing machine, but there's no indication that needs to be the case.

"General AI" to me means human intelligence running on an artificial system (silicon, simulated brain, etc), so the optimization I'm thinking of is more akin to having an assembly expert translate your code into assembly than a compiler optimization pass.

Given that I have optimized a fair bit of code by removing abstraction layers or simplifying it, by definition a general AI should be similarly capable and should handle even ambiguous tasks like "refactor this codebase in this way". Obviously this gives up accuracy, but humans make mistakes writing code as well, and it would be much easier to say "I've observed a fault with these properties; figure out the problem". It should do an even better job than I can on problems like that, because for complicated problems it should be able to follow complex codebases with greater ease than I can.

Again, I'm using a tautological definition of "general AI": one that's capable of doing all that. If it's not capable of doing that, then it's not general AI.


I understand your points, but it's important to understand that humans ARE Turing Machines. We don't know of anything that CAN be computed but can't be computed by a Turing Machine, so general AI would be a Turing Machine.

The Turing Machine model is specifically designed to abstract what a human (mathematician) does: you have a notebook (tape) and some kind of working memory inside your head, and at any one time you can either read something from the notebook and change the state in your head, or you can write something new in the notebook. This is what a TM does - it is an extremely abstract description of what it means to think, basically.
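
To make the abstraction concrete, a full simulator fits in a dozen lines of Python (the transition-table encoding here is my own - any equivalent one works):

    # Minimal Turing machine: a table maps (state, symbol) to
    # (new_state, symbol_to_write, head_move). The tape is a dict,
    # so it is unbounded in both directions, defaulting to blank.
    from collections import defaultdict

    def run_tm(transitions, tape_input, start="q0", accept="halt", blank="_"):
        tape = defaultdict(lambda: blank, enumerate(tape_input))
        state, head = start, 0
        while state != accept:
            state, tape[head], move = transitions[(state, tape[head])]
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example: flip every bit of a binary string, then halt at the blank.
    flip = {
        ("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("halt", "_", "R"),
    }
    print(run_tm(flip, "1011"))  # -> 0100_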


The problem for me with that line of reasoning is that it's based on philosophy, not on mathematical proof or any clear evidence.

For example, [1], [2], and [3] all describe classes of computation beyond Turing machines. So if we agree there are computations outside of Turing machines, the question is where the human brain falls and, relatedly, whether non-Turing machines can run Turing machines. I suspect the answer to the latter must be yes, given the simplicity of a Turing machine (i.e. pen and paper are sufficient).

Thus, the fact that a human can execute a Turing machine doesn't prove anything meaningful to me. If you could show that a Turing machine can execute a human brain, then the human brain would 100% be a Turing machine, since a core property of a universal Turing machine is that it can simulate any other Turing machine.

Even if we build "general AI" on a Turing machine, all we've shown is that there is a class of intelligence that is, at its core, a complicated Turing machine. It might suggest that a human brain is also a Turing machine (and I'd shift the weight of my prior from, let's say, 30% that we're not Turing machines toward 70% that we are), but I think the only way to definitively prove it would be to mathematically model the human brain, and then perhaps also to clone a human brain onto a Turing machine to prove the model correct.

I think until that happens the argument remains philosophical, and whichever side you take, I find it uninteresting. The only purpose of the debate is to show that the question itself is important.

[1] https://en.wikipedia.org/wiki/Hypercomputation
[2] http://faculty.poly.edu/~jbain/physinfocomp/Readings/94Hogar...
[3] https://www.sciencedirect.com/science/article/pii/S030439750...


I do not have access to the third paper, which may be of somewhat more interest. Otherwise, all the models of hypercomputation in [1] and [2] rely on performing an infinite number of steps in finite time OR on precisely knowing the solution to an uncomputable problem. These are about as interesting as trying to solve human flight by assuming anti-gravity exists - they are obviously non-physical, absurdist models.

In fact, we don't know of a single physical process that is not Turing computable, at least if we add randomness. Even with quantum wave-function collapse, we already know that quantum computers are still Turing equivalent.

This all suggests to me that the prior for the human mind being Turing equivalent can only be taken to be very close to 1.


That’s a solid counterargument. I assume you’re saying we don’t know of a single physical process that’s not Turing computable because all of the models we’ve built to simulate them are Turing computable? That’s a strong point; maybe I should readjust my prior on this. It sounds like you’re better versed in this topic than I am. I like to learn by asking questions, so if you’re amenable to answering, please do - and if not, that’s fine; it can be overwhelming, and I’m not trying to prove you wrong. I admit I rushed into a position in haste without being well prepared on the topic, and I had way too much confidence in my own opinion.

I have some devil’s-advocate arguments, but maybe they’re all bad. Genuinely, my technical grounding here has atrophied to what feels like a laughable point, and it didn’t start high to begin with: undergrad engineering teaches a very different kind of math, and I limited how seriously I studied beyond the core engineering topics - even there, doing the bare minimum to cram through exams rather than actually learning and understanding the topics throughout the year.

First, we have only very primitive models of how the world works. We can simulate how the smallest atoms behave, and maybe some more complicated chemical interactions, but as we get to biology our ability to simulate things breaks down rapidly. I don’t mean at a performance level, but at a “we’re waaaaaaay off in the applied aspects” level. Drug discovery is one example I’m thinking of. Or predicting someone’s facial features from their DNA (that last one always feels extremely dubious but news-friendly). Or general AI. Those all seem to have hit a wall in results. That’s close to a god-of-the-gaps argument, so let me know if it’s a bad-faith one - is that too small a gap for non-computability to live in, versus us just not being smart enough yet as a species? I could see that.

Or what about the fact that we don’t know of anything actually infinite at a physical level (thermodynamics and the heat death of the universe put an upper bound there - I admit I’m reaching for this now to justify my position rather than having considered it beforehand)? So is anything really a Turing machine in reality, or are Turing machines just a useful model of the universe that isn’t 100% accurate - the world works slightly differently, and the error comes from non-computability rather than randomness? Let me know if that’s just a kooky supposition on my part, in case the math of Turing machines already shows that finite things are always Turing computable and that’s a basic result I’ve forgotten or never learned.

What if the Turing machine model is an easy tool for simulating parts of the universe, but the universe itself is not computable on a Turing machine? As far as I know, something like that was claimed in the past few years: a proof that if we are living in a simulation, then the physical rules of whatever is running the simulation would have to be very different and not look like our universe. Doesn’t that indicate there’s a limit to the size of any Turing machine we could build to model the universe itself, and thus that the infinite tape a theoretical Turing machine requires doesn’t map to how reality works? I’ll freely admit this one may rest on an improper grounding in Turing machines even at a popular level - I only read the abstract of that paper, and I may be misreading or misremembering something whose technical nuance I’m missing.

Has it been investigated on the theoretical side whether a network of interacting Turing machines (intelligent or not) is itself Turing computable? Or would that simulation be an instance of hypercomputation or some other non-computable area of research? I don’t understand the basics well enough to interpret that research myself, so I’m hoping you do. I started reading the Wikipedia article and my eyes just glazed over.

I would think these would all be very significant problems, but I’ve admittedly not kept up with how this stuff works at a deep mathematical level (and have let those skills atrophy over the years), so maybe it’s all been answered, or I have significant flaws in my intuition. I really and sincerely appreciate you taking the time to respond and talk through this. I don’t like studying, but I love learning the higher-level intuitions of people who do study this stuff deeply.


Oh, and I’m going to read [1] now. I thought the heading was timely and lovely given what we’ve been discussing, and I look forward to it (I haven’t read it yet, so I don’t know what the article says).

[1] https://news.ycombinator.com/item?id=28167835


This tidbit was interesting:

> No method of computing carried out by a mechanical process can be more powerful than a Turing machine. Although widely adopted, as there is no clear way to prove or disprove its validity the proposition still remains a conjecture.

I think that’s basically what we’re discussing, right? Still, the way that’s phrased puts it into the P != NP camp for me, so I think you may be right.


Humans can't solve the halting problem either, so there is no contradiction. The halting problem is a theoretical problem that needn't apply in real life. If, for example, you restrict your AI to generating optimal assembly only for programs whose source code fits in 100PB, the halting problem no longer applies (in fact, since the set of inputs is finite, you could now implement this AI using a regular expression).


It might only be wrong for programs that have never existed and never will exist.


Sure, that's possible, but the opposite is also possible: it might be wrong for most programs we actually write. Well, to be fair, there is some upper bound for any program running on a real CPU.


> Sure, that's possible, but the opposite is also possible: it might be wrong for most programs we actually write.

We could confirm this, though! It's not like we can't find out whether a given program halts or is inconsistent. Gödel talks about this in his letter to von Neumann.


There are programs for which we can check this, but there is no general procedure to check whether an arbitrary program halts. Even ignoring the halting problem itself, say we analyze a program and realize it halts iff P = NP, or iff pi^e is transcendental, or iff the digit of pi at position Graham's number is divisible by 3. Will that program halt? It might be very hard to say.
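
For a concrete instance, here is the standard trick (sketched in Python): a program that halts iff Goldbach's conjecture is false. Deciding whether it halts means settling an open problem in number theory:

    # Halts iff some even number > 2 is NOT the sum of two primes,
    # i.e. iff Goldbach's conjecture is false.
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

    n = 4
    while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
        n += 2   # this even number checks out; try the next one
    print(n, "is a counterexample to Goldbach's conjecture!")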

More promisingly, there are ways to construct programs such that they are guaranteed to halt, using total languages (though not every problem can be solved under that restriction).


We can, however, write a general procedure to check whether any given program halts within n steps.
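
Right - bounded halting is trivially decidable. A minimal sketch, assuming a `step` function that advances one machine configuration and returns None once the machine has halted (both names are mine):

    # Bounded halting: just simulate for at most n steps.
    def halts_within(step, config, n):
        for _ in range(n):
            config = step(config)
            if config is None:
                return True       # halted within the budget
        return False              # still running - says nothing about later

    # Toy machine: a counter that halts when it reaches zero.
    step = lambda k: None if k == 0 else k - 1
    print(halts_within(step, 5, 10))  # True  (halts on the 6th step)
    print(halts_within(step, 5, 3))   # False (budget too small)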



