I think you have it wrong. Wolfram's claim is that for a wide array of small (s,k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing machine equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that (s=2,k=3) was TME.
The `s` stands for states and `k` for colors, without talking at all about tape length. One way to say "principle of computational equivalence" is that "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.
If true, this probably means that you can make up for the clunky computational power of a small (s,k) machine by conditioning large swathes of the input tape. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s,k) machine to run how you want.
So, if finite-size scaling effects were actually in play, they would only work in Wolfram's favor. If there's a profusion of small TME (s,k) machines, one would expect computation to only get easier as (s,k) increases.
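To make the (s,k) bookkeeping concrete, here's a minimal Python sketch of a 2-state, 3-color machine. The rule table is an arbitrary one I made up (not the machine the 2007 prize was about); the point is just how tiny the description is and how much of the behavior you can steer by what you pre-load onto the tape.

```python
from collections import defaultdict

# A hypothetical (s=2, k=3) rule table -- NOT the prize-winning 2,3 machine,
# just an arbitrary example to show how small the machine description is.
# (state, symbol) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (2, -1, "A"),
    ("A", 2): (1, -1, "A"),
    ("B", 0): (2, -1, "A"),
    ("B", 1): (2, +1, "B"),
    ("B", 2): (0, +1, "A"),
}

def run(initial_tape, steps):
    """Run the machine on a two-way-infinite tape; unwritten cells read as 0."""
    tape = defaultdict(int, enumerate(initial_tape))
    state, head = "A", 0
    for _ in range(steps):
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(str(tape[i]) for i in range(lo, hi + 1))

# "Conditioning the input tape": the same 6-entry rule table can be steered into
# quite different behavior purely by what you pre-load onto the tape.
print(run([0, 0, 0, 0], 200))
print(run([2, 1, 0, 2, 1, 2], 200))
```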
I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos", and I think that's pretty clearly wrong.
Random k-SAT is, from what I understand, effectively almost surely polynomial-time solvable. Below the critical threshold, where instances are almost surely satisfiable, I think something as simple as WalkSAT will work. Well above the threshold, where they're almost surely unsatisfiable, it's easy to answer in the negative, i.e., certify unsatisfiability (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
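For a sense of how simple the positive side is, here's a toy WalkSAT sketch (not a tuned solver, and note it can only ever find satisfying assignments; it says nothing when the instance is unsatisfiable):

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
    """Toy WalkSAT. Clauses are tuples of nonzero ints (DIMACS-style literals)."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                        # satisfying assignment found
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))     # noise move: random variable in the clause
        else:
            def total_unsat_if_flipped(v):       # greedy move: fewest unsatisfied clauses
                assign[v] = not assign[v]
                count = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=total_unsat_if_flipped)
        assign[var] = not assign[var]
    return None                                  # gave up; NOT a proof of UNSAT

# Random 3-SAT at clause density alpha = m/n; the hard region is near alpha ~ 4.27.
n, alpha = 150, 3.0
clauses = [tuple(random.choice((-1, 1)) * v
                 for v in random.sample(range(1, n + 1), 3))
           for _ in range(int(alpha * n))]
print("found a model" if walksat(clauses, n) else "gave up")
```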
k-SAT is a little clunky to work with, so you might take issue with my take on it being solvable, but consider something like Hamiltonian cycle on (Erdős–Rényi) random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-complete problems), yet there is a provably almost-sure polynomial-time algorithm to determine Hamiltonicity, even at the critical threshold [1].
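And the reason the threshold isn't a barrier there is that, as I understand the hitting-time results, the obstruction is local: around p = (ln n + ln ln n + c)/n, G(n,p) is almost surely Hamiltonian exactly when its minimum degree reaches 2, which is the kind of structure the algorithm in [1] exploits. A quick sanity-check sketch of the threshold window (just the min-degree proxy, not the algorithm from the paper):

```python
import math, random

def min_degree_at_least_2(n, p):
    """Sample G(n, p) and check the min-degree-2 condition, which is (whp) the
    only obstruction to Hamiltonicity in this regime."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                deg[i] += 1
                deg[j] += 1
    return min(deg) >= 2

n, trials = 1000, 20
for c in (-2.0, 0.0, 2.0):                 # sweep across the critical window
    p = (math.log(n) + math.log(math.log(n)) + c) / n
    hits = sum(min_degree_at_least_2(n, p) for _ in range(trials))
    print(f"c = {c:+.1f}  ->  min degree >= 2 in {hits}/{trials} samples")
```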
There's some recent work on choosing "random" k-SAT instances from different distributions, and I think that's more promising for producing genuinely difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].

[0] https://arxiv.org/abs/cs/0212002
[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf
[2] https://arxiv.org/abs/1706.08431
Yeah, Stephen Wolfram is too often grandiose, thereby missing the hard edges.
But in this case, given how hard P=NP is, it might create wiggle room for progress.
Ideally it would have gone on to say something like "in view of lemma/proof/conjecture X, sampling enumerated programs might shine light on ..."; no doubt that'd be better.
But here I'm inclined to let it slide if it's a new attack vector.
I think your approach is pretty much fundamentally flawed.
Put it this way: let's say someone recorded themselves typing in the paragraph you presented, saving the keystrokes, pauses, etc. Now they replay it, with all the pauses and keystrokes, maybe with `xdotool` as above. How could you possibly know the difference?
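To make the replay concrete, here's a sketch (the log format is something I'm making up here: one `delay keysym` pair per line, as some keylogger might have captured it):

```python
import subprocess, time

def replay(log_path):
    """Replay a recorded typing session through xdotool (X11 session assumed).
    Each line of the log: "<delay_in_seconds> <keysym>", e.g. "0.131 t"."""
    with open(log_path) as f:
        for line in f:
            delay, keysym = line.split()
            time.sleep(float(delay))                      # reproduce the human pause
            subprocess.run(["xdotool", "key", keysym], check=True)

replay("typing_session.log")
```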
Your method is playing a statistical game of key presses, pauses, etc. Anyone who understands it will probably be able not only to create a distribution that matches what you expect, but, in theory, to create something that looks completely inhuman and still sneaks past your statistical tests.
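And the other half, matching whatever distribution the detector expects, is just as cheap. A sketch with a made-up log-normal "human" timing profile:

```python
import random, subprocess, time

def humanlike_delay():
    # Made-up profile: log-normal inter-key delays with a mean around 135 ms.
    # If the detector fits a distribution to real typing, sample from that instead.
    return random.lognormvariate(-2.1, 0.45)              # seconds

def type_text(text):
    for ch in text:
        subprocess.run(["xdotool", "type", "--delay", "0", ch], check=True)
        time.sleep(humanlike_delay())

type_text("This was not typed by a human, but the timing will say otherwise.")
```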
I'm no expert but, from what I understand, the idea is that they found two 3D shapes (maybe 2D skins in 3D space?) that have the same mean curvature and metric but are topologically different (and aren't mirror images of each other). This is the first (non-trivial) pair of finite (compact) shapes that have been found.
In other words, if you're an ant on one of these surfaces and are using mean curvature and the metric to determine what the shape is, you won't be able to differentiate between them.
The paper has some more pictures of the surfaces [0]. Wikipedia has already been updated, even though the result is only from Oct 2025 [1].
Is it the case that 'they' are simply two ways of immersing the same torus in R^3, such that the complements in R^3 of the two images are topologically different?
If so, isn't this just a new flavor of higher-dimensional knot theory?
They don't appear to care about the images of the immersions or their complements, aside from them not being related by an isometry of R^3. They're not doing any topology with the image.
In other words, they have two immersions from the torus to R^3 whose induced metric and mean curvature are the same, and whose images are not related by an isometry of R^3. I didn't see anything about the topology of the images per se; that doesn't seem to be the point here.
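Concretely, the claim as I read it (notation mine, so take it as a paraphrase):

```latex
% Two immersions of the torus with identical induced metric and mean curvature,
% yet not congruent: no rigid motion of R^3 carries one image onto the other.
\[
  f,\, g \colon T^2 \to \mathbb{R}^3, \qquad
  f^{*}\langle\cdot,\cdot\rangle = g^{*}\langle\cdot,\cdot\rangle, \qquad
  H_f = H_g,
\]
\[
  \text{but } g \neq \Phi \circ f \ \text{ for every isometry } \Phi \text{ of } \mathbb{R}^3 .
\]
```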
As others mentioned, tool use wasn't restricted to Homo sapiens. I think this makes sense, no? We didn't spontaneously start using tools; it must have evolved incrementally in some way.
I think we see shades of this today. Bearded capuchin monkeys chain together a complex series of tasks and use tools to break nuts. From a brief documentary clip I saw [0], they first take the nut and strip away the outer layer of skin, leave it to dry out in the sun for a week, then find a large, soft-ish rock as the anvil and a heavier, smaller rock as the hammer to break the nut open. So they not only had to figure out that nuts need to be pre-shelled and dried, but also that they needed a softer rock for the anvil and a harder rock for the hammer. They also need at least some kind of bipedal ability to carry the rock in the first place and use it as a hammer.
Apparently some white-faced capuchins have figured out that they can soak nuts in water to soften them before hammering them open [1].
No, we could have had something that previous species didn't, something that unlocked the use of tools. Otherwise, if no species could ever be the first without it being deemed spontaneous, no new skills could ever be unlocked.
I will blame overlong copyright terms: 70 years after the author's death, or 95 years after publication, which means most recent work will only enter the commons a century or more from now [0].
This is the rare case when Europe is even worse. Metropolis, the 1927 Fritz Lang film, is out of copyright in the United States but will still be in copyright in Germany until 2047: 120 fucking years.
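The arithmetic, assuming I have the rules right (life + 70 in Germany; 95 years after publication for US works of that era; Lang died in 1976):

```python
publication_year, death_year = 1927, 1976       # Metropolis; Fritz Lang

de_public_domain = death_year + 70 + 1          # German term: life + 70, protected through end of 2046
us_public_domain = publication_year + 95 + 1    # US term for that era: 95 years after publication

print(de_public_domain, us_public_domain)       # 2047 2023
print(de_public_domain - publication_year)      # 120 years under copyright in Germany
```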
It’s preposterous, and offensive to anyone’s intelligence, to claim that this is about incentivizing production; does anyone seriously believe there is a potential artist out there who would forgo making their magnum opus if it could only be under copyright for 119 years?
The problem is, copyright law is no longer about artists, if it ever was: it’s about corporations, i.e. maximizing the value corporations can extract from intellectual property.
"Why prediction markets aren't popular" [0] gives some compelling arguments (to me) about why prediction markets haven't caught on and probably never will.
As I understand it, the main argument is that for prediction markets that aim to incentivize the thing they're predicting, it's better to invest in the thing directly. Otherwise, "prediction markets" are successful precisely where they can't influence the outcome, like sports betting.
I remember finding the election betting interesting last presidential election, but I also remember that it was spiked when Musk invested to change the odds.
Musk, being the world's richest person, is something of an outlier. He can afford to give free money to the market for longer than anyone else, and the size of the market might not be big enough to handle the imbalance.
There's a level of irrational spending which only institutional investors can counterbalance, and they might not have the risk appetite to get into a single market on a relatively less regulated platform that could rug pull them.
My understanding is that unchecked wealth only remains that way until its owner acts irrationally on a stock exchange, at which point it is quite rapidly checked and becomes someone else's unchecked wealth.
Which is to say that Elon Musk can inflate any market he wants, but only by losing sums of money that will become increasingly significant as more and more people find out about the free cash giveaway.
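As a toy model of why that gets expensive: under a logarithmic market scoring rule (which is not how every real platform prices trades, just the standard textbook market maker), pushing the price further costs more and more, and if the true odds haven't moved, most of that outlay is up for grabs by better-informed counterparties. Rough sketch with made-up numbers:

```python
import math

def lmsr_cost_to_move(p_from, p_to, b):
    """Cost of single-handedly pushing the YES price from p_from to p_to
    in a binary LMSR market with liquidity parameter b (toy model)."""
    return b * math.log((1 - p_from) / (1 - p_to))

b = 10_000                                     # made-up liquidity parameter
for target in (0.6, 0.7, 0.8, 0.9, 0.99):
    cost = lmsr_cost_to_move(0.5, target, b)
    print(f"0.50 -> {target:.2f}: cost ~ {cost:,.0f}")
```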
There’s no functional difference in how markets work when 99% of wealth is owned by a handful of kings vs 99% of wealth being owned by a handful of oligarchs.
I don't really think so. You just swapped the term "king" for "oligarch". In fact, the oligarchs are even worse, because people think they have freedom when they might not in fact have it in the first place.
I don’t have an opinion on whether it’s worse or not because some people mistakenly think they are free.
I meant that from the perspective of how market forces play out, hyper-concentration of wealth into a few actors looks the same whether the title of those actors is “king” or “oligarch”.
You start losing the wisdom-of-the-crowds effect the market gives you if you have a handful of people making the decisions for the entire market.
I've also created a slightly modified version that includes graphics for the moon phases and different highlighting parameters, depending on whether it's a new or full moon [0].
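For anyone curious, the phase itself needs very little code. A sketch of one common approximation (a fixed reference new moon plus the mean synodic month; fine for picking a glyph, not for astronomy, and not necessarily what the linked version does):

```python
import datetime

SYNODIC_MONTH = 29.530588853                          # mean synodic month, in days
REF_NEW_MOON = datetime.datetime(2000, 1, 6, 18, 14)  # a known new moon (UTC)
GLYPHS = "🌑🌒🌓🌔🌕🌖🌗🌘"                              # new -> waxing -> full -> waning

def moon_glyph(when):
    """Return an approximate phase glyph plus a flag for new/full-moon highlighting."""
    age = (when - REF_NEW_MOON).total_seconds() / 86400 % SYNODIC_MONTH
    index = int(age / SYNODIC_MONTH * 8 + 0.5) % 8
    return GLYPHS[index], index in (0, 4)             # highlight new and full moons

print(moon_glyph(datetime.datetime.utcnow()))
```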