Hacker News

It's harsh to say it, but Wolfram tried his best at a computational theory of everything and failed.

We didn't know this in the eighties, when the first cellular automata ideas were conceived. So it was a worthy thing to explore in earnest. But it did not work. There is nothing to show for it. It did not strike a vein. These things happen. All the time. You have a great startup idea but no market fit. In this case the market is the Universe. And you can't fake it till you make it with the Universe.
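For readers who never played with these: an elementary cellular automaton is about as simple as a computational model gets, which is exactly why it was tempting as a candidate substrate for physics. A minimal sketch (Rule 30, one of the rules Wolfram studied; the rule number's binary digits are the update table):

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton with wraparound edges.

    Each cell's next state is looked up from the rule number: the 3-bit
    neighborhood (left, center, right) indexes a bit of `rule`.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and evolve a few generations.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Rule 30 already produces the famously chaotic triangle patterns from this one-line update; the open question was always whether anything so simple underlies physics, not whether the rules themselves are rich.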

The universe most certainly has a mysterious affinity with mathematics. And computation is a mathematical concept. So it's a decent hypothesis. But there are a lot of mathematical concepts that don't manifest in any shape or form in physical reality.

From the simple geometric thinking of ancient cultures to Newton's and Leibniz's calculus and then all the subsequent glories of 19th and 20th century physical theory, when new mathematical concepts "fit" the way the universe works there is just an avalanche of prediction, verification, learning, refinement, further prediction etc.

It's wrong to think we have reached the end of "mathematical physics". So new ideas are needed, and computation is as good an inspiration as a falling apple. But pruning dead-end ideas is a faster way to get closer to the truth.



Here's the fundamental problem with Wolfram's approach: he 1) came up with a model of physics (which is fine), 2) noticed that it reproduced many of the normal things that are necessary in a viable theory of everything, like the basic results of quantum physics and relativity (also fine), and 3) declared success, taking 2) as an indication that he's absolutely right and this is the real theory of everything and we're done. It's 3) that's absolutely not fine, and I'm not convinced that Wolfram fully gets why.

It's extremely easy to come up with models that reproduce most of modern physics if you at all know what you're doing. String theory does it, loop quantum gravity does, and so on. There are deterministic models that avoid the "God playing dice" aspects of quantum mechanics yet still reproduce all the classic results. There are rods + gears models of electromagnetism that give the right numbers even though the mechanisms are ludicrous.

The fact that it is so easy to come up with models that match modern physics is in itself a meaningful and not at all obvious thing, but it ultimately derives from the fact that the real universe seems to operate on laws that spring directly from symmetry principles. It turns out that most of the physics that matters is extremely "natural" and can be derived as a consequence of much simpler assumptions than you'd expect, even if the math that gets you from those assumptions to the resulting mechanics can be intense. If you're unfamiliar with this concept but understand calculus, you owe yourself a very deep dive on Noether's theorem; the way that symmetry radiates into every aspect of physics is one of the most profound things you can study.
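For the calculus-literate reader, the simplest case (one degree of freedom, time-translation symmetry) is short enough to quote. If the Lagrangian L(q, q̇) has no explicit time dependence, the energy E = q̇ ∂L/∂q̇ − L is conserved along any solution of the Euler-Lagrange equation:

```latex
% Time-translation symmetry => energy conservation, one degree of freedom.
% Assume \partial L / \partial t = 0 and define E = \dot q\,\frac{\partial L}{\partial \dot q} - L.
\frac{dE}{dt}
  = \ddot q\,\frac{\partial L}{\partial \dot q}
  + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q}\,\dot q
  - \frac{\partial L}{\partial \dot q}\,\ddot q
  = \dot q \left( \frac{d}{dt}\frac{\partial L}{\partial \dot q}
  - \frac{\partial L}{\partial q} \right) = 0,
% where the last bracket vanishes by the Euler-Lagrange equation.
```

Noether's theorem generalizes this: every continuous symmetry of the action yields such a conserved quantity, which is why any model that bakes in the right symmetries inherits the right conservation laws "for free."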

The upshot of Noether's theorem and the ubiquity of its applications in modern physics is that it's very easy to create a theory that matches the predictions of e.g. special relativity: you just need to sneak it in by, for instance, defining your "foliations" in such a way that you have Lorentz symmetry, then everything else comes for free. If you want general relativity, then you (mostly) just need invariance under diffeomorphisms, which is really frickin easy to build into the limit of any graph-based model since you're basically redefining space altogether. I still don't entirely understand how Wolfram gets quantum theory in there. I don't doubt that his model does actually do it at a mathematical level; I just can't stand the verbose writing style and have too little interest in his particular theory to work through it. But once you start talking about constantly branching and recombining state graphs and stuff like that, it's not at all hard to imagine that you could pick your definitions in such a way that Hilbert spaces pop out, and then define observers/observations in a way that makes it cleanly match a many-worlds interpretation of quantum mechanics.

But the fact that you have a model that reproduces all of known physics doesn't mean anything. We already have several of those. And people rightly criticize even the top contenders on the basis that they all tend to suffer from the same defect: they're overparameterized and could predict a lot of universes that don't work the way ours does, and there are very few experiments that would rule the models out altogether (rather than merely constrain the parameters). To the extent that their predictions differ from what current theory would predict, their parameters could be easily tuned to match almost any result, which makes it tough to have any faith that the goalposts wouldn't be moved when results did come in that could test, say, the extreme conditions where quantum gravity would be relevant. Wolfram's is no different, except that as far as I can tell he hasn't gone anywhere near as far as e.g. the string theorists in working out what the different predictions would even be for his theory. He's just blindly declaring it correct.

Models are great, and I think there is something useful in Wolfram going down the rabbit hole in terms of showing what a model that reproduces quantum effects looks like; I feel like that is underexplored (the rules of quantum mechanics are usually taken as a given, even in theories of "everything"). But his breathless declarations of having solved physics are ludicrous, and I feel like his ideas might actually be taken much more seriously if he had a more realistic understanding of what he was working with.


I'm sympathetic to your take on how overly grandiose the language is, but I also think you're being too harsh here.

The idea that the universe is discrete/computational is a fine idea, but underspecified and useless on its own. There's an infinite array of computable rules to choose from. But the fact that with a few assumptions on the rules you can then limit to both GR and QM is very non-trivial and, in my opinion, pretty surprising.

To your point, does it prove that this is _the_ correct theory? Definitely not, and metering language around the claims is important. Still, the result feels novel, surprising, and worthy of further investigation, alongside the other popular models being explored. I think it's a shame that Wolfram's demeanor turns people off from the work.


> But the fact that with a few assumptions on the rules you can then limit to both GR and QM is very non-trivial and, in my opinion, pretty surprising.

Perhaps you're not familiar with the literature here, but GP isn't exaggerating: using e.g. Noether's theorem, you can derive the expected laws of physics from very simple symmetry principles. This means that any model with these symmetries will produce these behaviours.

If you make up a new model of Newtonian mechanics that doesn't depend explicitly on time, so that your laws are the same tomorrow as today, then it's proven that such a model will conserve "energy". You could point at this as an indication of the correctness of your theory, but it's really unavoidable. You can play a similar trick for the fundamental forces if you have the patience to work through the derivation.
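That claim is easy to illustrate numerically (my own toy sketch, not from the thread): a pendulum whose equation of motion has no explicit time dependence, integrated with a symplectic leapfrog scheme. The energy stays put to within integration error, no matter what other details you invent.

```python
import math

def simulate(theta=1.0, omega=0.0, dt=1e-3, steps=10000, g=9.81, length=1.0):
    """Leapfrog integration of theta'' = -(g/length) * sin(theta).

    Returns the energy per unit mass at each step; since the law has no
    explicit time dependence, this should be (nearly) constant.
    """
    energies = []
    for _ in range(steps):
        # kick-drift-kick leapfrog step
        omega += 0.5 * dt * (-(g / length) * math.sin(theta))
        theta += dt * omega
        omega += 0.5 * dt * (-(g / length) * math.sin(theta))
        # E/m = (1/2) * length^2 * omega^2 + g * length * (1 - cos(theta))
        energies.append(0.5 * length**2 * omega**2
                        + g * length * (1 - math.cos(theta)))
    return energies

E = simulate()
print(f"relative energy drift: {(max(E) - min(E)) / E[0]:.2e}")
```

The point being: observing that "energy is conserved" in your pet model is not evidence for the model; it's an automatic consequence of building in time-translation symmetry.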

A better test of these models is whether they're predictive, and I haven't seen such a result about this CA-physics outside of Wolfram's blog.


I of course agree with most of what you say. The thing that impresses me about this whole ruliad business is that it seems to operationalize a computational version of Tegmark's mathematical universe hypothesis: all sets of mathematical axioms plus their computable consequences equally have the secret fire of existence; our SU(3) x SU(2) x U(1) world is not the only realized one.

But it's also slightly different; in Tegmark's description of the MUH there's not a meaningful connection between the universe that realizes (let's say) Euclid's axioms and our universe. They're just separate places in the Platonic realm; the way we learn about Euclidean geometry is by computing, using some little Turing-complete region to simulate geometry. If I understand correctly, the ruliad says no, it is possible, in principle, to navigate through the hell of a mess and actually find the place in the hypergraph, not disconnected from the place that describes our lived experience, that is Euclidean geometry. It's sort of the ultimate reading of the Copernican principle: the laws we see around us are not particularly special and aren't privileged over other laws.

I find that to be a pretty beautiful philosophical idea while also thinking it's not a very practical one for doing actual science. If it contains representations of all possible consistent axioms, well, how would you ever make a prediction about an actual experiment nearby? In the framework of relativistic QFTs we can make a bunch of different models and test them, settle on one, and use it to make predictions. Or find that actually it was just a low-energy EFT all along, falsifying our model. But the ruliad can never be falsified; the claim is that every possible universe is in there. How do I use it to make predictions about physics beyond the standard model? Or even just SM physics? Unclear.


I wish I could fail as well as him


You might really enjoy All Watched Over By Machines of Loving Grace. He wasn’t the first to try this, and many failed before him. Most people don’t know that the entire concept of the “ecosystem” comes from a computational view of ecology.

It’s a bigger problem than just this. It’s that we’ve based everything off the Club of Rome style mindset of society and it’s all failed. But we haven’t figured out another way. So climate change, politics, democracy and marketing all continue to try to figure out the computational stable state of society.


> figure out the computational stable state of society

Hahahaha!! That's so "psychohistory" ... it suggests such, almost 'individual'-like intention.

Also, none of it has failed. That's an absurd interpretation of where we are. The real problem is that everything we do ... all the day-to-day problems we solve, all the systems we build that fit and do a 'good job' helping us, say, in some respect, ... set us up to need even more of the same basically. As you wrote elsewhere ... increasingly complex etc.

I'd flesh out more, but, must run now.

The reality is simply that we WON'T outrun reality. There is no failure, nor is there success ... that dualistic thinking really tends to obscure rather than clarify ... by anchoring some 'conclusion' based on some specific perspective and cutting off wider views. We will follow 'the laws' of other organisms ... our own specific path, but the same basic fundamental arc ... game theoretic and in less ... abstract framings.


It’s failed if it doesn’t accomplish its stated goal. Don’t get lost in deconstructionism then declare all goals meaningless. It failed to create a stable state. That’s all.


> It’s that we’ve based everything off the Club of Rome style mindset of society

What is this supposed to be a reference to?


Possibly the "Limits to Growth" report (1972) by Meadows, et al. They used a simple model of resources, population, economy, pollution, etc. Some would say an over-simplified model. The report sparked a lot of controversy. I am not aware of significant changes to their World3 model that might be useful for trying to integrate climate change models with economic-resource and population models.


I am very aware of what the Club of Rome is/was. I don't understand what the sentence means.


The last sentence said something about a computationally stable (model/state) of society. I suppose such stability would be helpful in looking for a sustainable economy or society. But I'm not clear either, on what that sentence means.


We keep trying to measure, calculate, forecast, then offset our world by building increasingly complex systems to keep it in a “stable state,” which we then errantly call “natural.”


What a fantastic one sentence summary of taoism and certain schools of anarchism.



> The universe most certainly has a mysterious affinity with mathematics.

Is that really true?

Could it be more fair to say that mathematics has a (not so) mysterious affinity with the universe?

Specifically, where do our 'axioms' come from? Why did people spend centuries trying to prove the parallel postulate?

Partly, I'm being rhetorical, but, also, partly I'm really not. I would certainly not categorically dispute what you wrote, but I'd also not embrace it 'out-of-hand'.

... So, 'the floor is open', so-to-speak... if any have other perspectives on math-universe connection, rebuttals, etc. :)


I think it's more just the fact that our universe seems to fit extremely rigid, unbreakable rules definable by math. If we lived in a simulation for example, you could have phenomena that "break" these rules at any given time.


But wouldn't we just interpret that "break" as more rules for us to learn and study and build world models around? Why would we think "the simulation has broken" and not "physics is weird huh? Especially in edge cases like absurdly high energies or absurdly tiny scales."

(I don't think the universe is a simulation, to be clear.)


In this case though there'd be nothing to learn, since math couldn't explain this phenomenon.


But that happens all the time in science, and we come up with different math that does explain it. E.g. the timing of the eclipses of Jupiter's moons couldn't be explained; the math didn't add up, so we came up with the idea that light had a finite speed and that Jupiter was far enough away that the light travel time impacted our calculations.

If you think about quantum mechanics, it's something that could sure look like a bug. Systems that you expect to be deterministic are stochastic if you look closely enough? If it were a program I was writing, I'd start wondering if there were rounding errors and/or concurrency issues. But we've come up with math to understand it.

Math is very general; I'm not sure there's a process that you couldn't describe with a complex enough mathematical system, and thereby be forced to conclude it had its origins outside of our universe.


The difference is that there's no possible model for this kind of hypothetical phenomenon; it's by definition undefined behavior. Imagine a universe where the formula 𝜏=rF would just randomly be violated for absolutely no reason, such as a person sometimes accidentally throwing a baseball that leaves the atmosphere, or a child accidentally lifting a house. Even the randomness of quantum mechanics can be explained using models that are very consistent and testable, but the only theory we could come up with for this wouldn't be based in math; it'd be the equivalent of blaming it on magic, and no amount of advancement in science would ever come closer to explaining it.


How do you know the difference between a problem that cannot be solved with math, and a problem you haven't solved with math yet?

Let's say you throw a baseball into orbit. That's very strange, a profound mystery. You tell me that we must live in a simulation. I contend something strange happened which we don't understand, because we only have one datapoint, but that a satisfactory explanation does exist. How do you know I'm wrong?



