This seems a strange prediction to me. Can you elaborate on why you think this? Do you think the reals will be replaced by surreals in common usage by laypeople, by working mathematicians or in mathematical pedagogy? While I certainly think surreal numbers are really nifty for tying together different number systems into one rigorous foundation, your claim sounds like someone saying that the algebraic numbers would surpass the rational numbers.
I think they'll be important in AGI research for one thing. Take reinforcement learning (RL) for example: the rewards are arbitrarily constrained to be real numbers (or rational numbers). But why? There is a rich variety of RL environments one can construct using non-real number reward systems, and there's nothing special about the reals (or rationals) that suggests any connection to RL. The real numbers, recall, are the unique complete ordered field. What do complete ordered fields intrinsically have to do with reinforcement learning? Nothing, as far as I can tell (if there were some such connection, it would certainly be interesting to professors of real analysis).
As you point out, surreal numbers do a great job of tying together many different number systems, and that's inherently appropriate for studying generalized reinforcement learning where rewards come from many different number systems.
I remain skeptical because all practical computing necessarily uses finite representations. We don't compute with real numbers; we use floats or doubles of a given precision. While this very occasionally leads to weird errors (from the perspective of pure mathematics), it's usually good enough.
Having briefly read your paper, I think it makes a mistake analogous to arguments that machine intelligence is impossible by Gödel's incompleteness theorems. Consider the statement:
> This statement is not true if written by xamuel
Despite the fact that you cannot consistently assert that statement, it doesn't strike me as a strong argument against your general intelligence.
By analogy, why don't your arguments in section 3 prove the impossibility of Human General Intelligence? You are replacing the usual meaning of AGI (a machine that can execute a wide variety of tasks in human-to-superhuman fashion) with a far stronger one (a machine that can complete arbitrary tasks perfectly). A machine that (as in example 5) has an "incorrect" loss function may still be an AGI.
I'd challenge the notion that non-Archimedean tasks are relevant to AGI. Here is another example of your non-Archimedean tasks. Pick a pair of integers. The reward for this task is such that for points p and q, p < q if p comes before q lexicographically, and |p - q| >= 1 if p != q.
All the complexity is baked into the structure of the task, but it doesn't seem to be a compelling barrier to an AGI any more than the search task "Find the largest integer" is.
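To make the example above concrete: Python compares tuples element by element, which is exactly the lexicographic ordering described. A minimal sketch (the variable names are mine, not from any RL library) of its non-Archimedean flavour:

```python
# Lexicographic comparison of integer pairs: Python compares tuples
# element-by-element, which matches the ordering described above.
p = (0, 1_000_000)
q = (1, 0)

assert p < q  # the first coordinate dominates, no matter the second

# Non-Archimedean flavour: no amount of growth in the second
# coordinate ever overcomes a deficit in the first.
for n in (10, 10**6, 10**12):
    assert (0, n) < (1, 0)
```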
I'd expect AGIs to be able to approach non-Archimedean tasks at least as well as humans, but I expect they'd do it in an analogous way -- by loading surreal numbers or the equivalent into their software rather than their hardware. That is to say, an AGI should be able to reason about these concepts despite not being built out of them.
>You are replacing the usual meaning of AGI (a machine that can execute a wide variety of tasks in human to superhuman fashion) with a far stronger one (a machine that can complete arbitrary tasks perfectly).
No, I never make any assumption about AGIs being capable of completing arbitrary tasks perfectly. Indeed, that would be quite impossible; no agent could do that. The point is rather that the traditional RL agent cannot even comprehend environments with non-Archimedean rewards, whereas a genuine AGI would be able to comprehend them.
Interesting. Most ML models presumably operate over IEEE double-precision floating-point values. These have a couple of interesting properties like INF and NaN, but not infinitesimals.
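For instance (Python floats are IEEE doubles on all mainstream platforms), infinities and NaN behave as special absorbing values, but there is nothing infinitesimal below the smallest positive double:

```python
import math

inf = math.inf
nan = math.nan

assert inf + 1 == inf          # infinity absorbs finite additions
assert nan != nan              # NaN compares unequal even to itself
assert math.isnan(inf - inf)   # indeterminate forms produce NaN

# No infinitesimals: the smallest positive double is an ordinary
# finite number, and anything below it underflows to exactly 0.0.
tiny = 5e-324                  # smallest positive subnormal double
assert tiny > 0 and tiny / 2 == 0.0
```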
We're still not comfortable checking if rationals are equal in any major language... there's a long way to go.
I could see a future high-level programming language (think Python design goals) using the rigorous foundation of surreals as the underlying numeric system. Sure, 95% of the time it'll look the same to the programmer, and the performance will be worse, but you get inherent stability around infinities, irrationals, NaNs, floating-point comparisons, etc.
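As far as I know no mainstream language has a surreal-number type, but the kind of stability I mean can be illustrated with Python's exact `fractions.Fraction` standing in for an exact numeric foundation:

```python
from fractions import Fraction

# The classic binary floating-point comparison pitfall:
assert 0.1 + 0.2 != 0.3

# With exact arithmetic, the same comparison behaves as expected:
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
```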
It's reasonable to think that ~200 years from now, high schoolers will learn that numbers are all "really" drawn from this structure of surreal numbers, and that most computers, physics processing, etc. use them behind the scenes.
> We're still not comfortable checking if rationals are equal in any major language...
This seems to be confusing the issue to me. Checking for rational equality is trivial (just cross-multiply). What we don't have is major languages deciding that a rational number type is useful enough to get first-class support. And even that is overstating the issue; some languages do care and make representing rationals easy. In Julia, rational numbers can just be written like 2//3.
If people try to use floats to represent rationals and run into errors, the problem isn't that languages are incapable of representing rationals.
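The cross-multiplication check really is a one-liner, and Python's standard library already ships exact rationals (analogous to Julia's `2//3`). The helper name `rat_eq` here is mine, purely for illustration:

```python
from fractions import Fraction

def rat_eq(a, b, c, d):
    """Exact test of a/b == c/d by cross-multiplying (b, d nonzero)."""
    return a * d == b * c

assert rat_eq(2, 3, 4, 6)            # 2/3 == 4/6
assert not rat_eq(1, 3, 333, 1000)   # 1/3 != 0.333

# Python's stdlib also offers first-class exact rationals:
assert Fraction(2, 3) == Fraction(4, 6)
```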
I don't see any upsides to using surreal numbers as a datatype. If you restrict it to a finite size, you're still going to have problems analogous to those of float or double, and I don't see any advantages in stability or anything else. If you allow arbitrary-size (but obviously still finite) representations, you can still only represent the dyadic rationals, so you can't even express 1/3 exactly. The infinitesimals and infinities would require infinite-size representations, so they're never practical.
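The dyadic-rational limitation is easy to verify: every finite binary representation has a power-of-two denominator, so the double nearest to 1/3 is a dyadic rational and never equals 1/3 itself. A quick check using only the stdlib:

```python
from fractions import Fraction

# The IEEE double nearest to 1/3, written as an exact fraction:
f = Fraction(1/3)

# Its denominator is a power of two (a dyadic rational), so it
# cannot equal the true 1/3.
assert f.denominator & (f.denominator - 1) == 0
assert f != Fraction(1, 3)
```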
Yes, it's instructive to recall that the Pythagorean Cult believed all numbers were rational (and this was a religious sort of belief). There's a legend (albeit probably not historically accurate) that when a member of the cult proved that sqrt(2) was irrational, they put him to death... https://en.wikipedia.org/wiki/Hippasus