I actually ran into this once, not as a joke. I was using some software which had an option to change the default base it used for all numbers. After typing <command> 16 and fiddling about in hexadecimal, I tried switching back to decimal using <command> 10, yet I was still in hex mode. It took me a few seconds longer than it should have to realise what had happened.
It can actually represent all integers just fine. It works like any other base, except that whereas in base 10, positions are multiplied by 10^0 = 1, 10^1 = 10, 10^2 = 100 and so on, in base 1 they are multiplied by 1^0 = 1, 1^1 = 1, 1^2 = 1, and so on. The result is that each digit adds 1 to the number, like tally marks. So we have, for example, 5 written as 11111.
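A minimal sketch of the idea: the same positional-weight loop that works for base b >= 2 collapses into tally marks when every weight is 1 (the function names here are my own, just for illustration).

```python
def to_base(n, b):
    """Positional digits of n in base b >= 2, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

def to_unary(n):
    """In 'base 1' every positional weight is 1**k == 1, so the
    representation degenerates into n tally marks."""
    return "1" * n

print(to_base(13, 10))   # [1, 3]
print(to_base(255, 16))  # [15, 15]
print(to_unary(5))       # 11111
```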
Yes, the word for base 10 is “decimal” [1] and digits do not have to be decimal digits. There are also binary digits “bits” [2], ternary digits “trits” [3], and so on.
Whether the thumb and big toe count as fingers and toes depends on definitions, but in the strictest sense they are excluded, and so by this definition humans generally have 20 digits: eight fingers, eight toes, two thumbs, and two big toes.
Different primes will be delicate in different bases (you can trivially demonstrate this by considering a prime n in base n-1) but they exist in all bases.
The fact that the existence of a type of number can be proven despite not finding a single example is very interesting and unintuitive to me as a non-mathematician.
A (small) minority of mathematicians find it unintuitive too. Intuitionistic and constructivist mathematics say that if you can't construct an example of something, you can't prove it exists: they don't accept a proof that "some X exists such that P(X) is true" as valid unless you can exhibit such an X. It's called "intuitionist" because it is based on a widely shared intuition, but it is mostly rejected by modern mathematics.
An example described in the article, and which comes from the same branch of analytic number theory, is Daniel Shiu's result that you can find arbitrarily long strings of consecutive primes in "any" arithmetic progression.
(By "any" I mean any arithmetic progression a (mod m) where a and m don't share any common prime factor.)
So, for example, there is a string of a billion consecutive primes, each of whose decimal expansions ends in a billion ones.
In principle you could use the method to find it, possibly the first example would be around 10^(10^10) or so.
The gateway to all of this is the quantitative form of Dirichlet's theorem on prime numbers in arithmetic progressions: given coprime a and m, there are infinitely many primes which are a (mod m). (So, in other words, Shiu's result with "just one prime".) In this case, we know asymptotically how many such primes there are, and we can get upper bounds on the error. The proof is highly non-constructive, and uses complex analysis applied to the so-called "L-functions".
Using a variety of trickery ("sieve methods", "the circle method", etc.) this can then be leveraged into other interesting results.
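Dirichlet's theorem is easy to poke at numerically even though its proof is non-constructive. A minimal brute-force sketch (trial-division primality, names of my own choosing), assuming nothing beyond gcd(a, m) = 1:

```python
from math import gcd

def is_prime(n):
    """Trial division; fine for small demonstration values."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def primes_in_progression(a, m, count):
    """First `count` primes congruent to a (mod m). Dirichlet's
    theorem guarantees this loop terminates when gcd(a, m) == 1."""
    assert gcd(a, m) == 1
    out, n = [], a if a > 0 else a + m
    while len(out) < count:
        if is_prime(n):
            out.append(n)
        n += m
    return out

print(primes_in_progression(1, 4, 5))  # [5, 13, 17, 29, 37]
```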
1. We first prove there are infinitely many primes.
2. Assume the number of primes is finite.
3. Thus, there exists a number that is the product of all primes.
4. Thus, there exists a number that is one more than the product of all primes, let us call this number A.
5. A is larger than any prime, thus different from any prime, thus not a prime.
6. Thus, there is at least one prime that A can be divided by.
7. But by its very definition as the product of all primes + 1, the remainder of dividing this number by any prime is 1.
8. Thus, A is divisible by no prime, contradicting 6 (equivalently, A would have to be prime itself, contradicting 5).
9. Contradiction follows from assumption 2, thus assumption is false.
10. There are thus infinitely many primes.
11. There are finitely many numbers below 10^10^10^10^10.
12. Thus, there must exist infinitely many primes above 10^10^10^10^10.
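Steps 3-8 above can be watched in action. Taking a finite list of primes, the product plus one need not be prime itself, but its smallest prime factor is always outside the list (a sketch; helper name is my own):

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n (returns n itself if n is prime)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

# Product of a finite list of primes, plus one (step 4's "A").
primes = [2, 3, 5, 7, 11, 13]
A = 1
for p in primes:
    A *= p
A += 1                       # 30031, which is 59 * 509

# Step 7: A leaves remainder 1 modulo every prime in the list,
# so its smallest prime factor must be a *new* prime.
q = smallest_prime_factor(A)
print(A, q, q in primes)     # 30031 59 False
```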
I don't know any of these numbers, but showing that a contradiction can be derived from assuming that such a number does not exist is the common way to do it.
In a way numbers are much worse than that. Imagine writing a number down as a decimal fraction, like 2.0 or 4.86. If when you do this, the digits go on forever and no digit (such as a zero or a three) appears much more or less often than you'd expect at random, that number is called "normal in base 10". If that's true in every base, it's just called "normal".
That doesn't sound very normal; probably no numbers you usually think about have these properties, so why do mathematicians call them "normal"? Well, mathematicians have proved that almost all real numbers are normal. It just turns out that we haven't the faintest idea how to find them, and we never needed them for anything, so we didn't notice how abundant they apparently are.
We aren't even sure whether some weird numbers we do know about are normal: Pi looks pretty normal, but we can't prove it is.
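You can at least eyeball the phenomenon. Here is a small sketch counting digit frequencies in the first couple of thousand decimal digits of sqrt(2), which, like Pi, is conjectured but not proven normal; `math.isqrt` gives the digits exactly, with no floating-point error:

```python
from collections import Counter
from math import isqrt

N = 2000  # number of decimal digits to inspect (arbitrary choice)

# floor(sqrt(2) * 10**N) computed exactly with integer square root.
digits = str(isqrt(2 * 10 ** (2 * N)))

freq = Counter(digits)
for d in "0123456789":
    # For a normal number each frequency tends to 0.1 in the limit.
    print(d, round(freq[d] / len(digits), 4))
```

This is evidence, not proof: no amount of digit counting can establish normality.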
I am unsure whether or not GPUs are good at work involving arbitrary precision numbers. My intuition was that they excelled at doing massively parallel operations for which accuracy is not a high concern.
I would probably look into using some sort of large integer sieve if I were trying to search for an example. Thinking about it though, I am even unsure whether it is decidable that N is widely digitally delicate.
Your number is broken up into 32-bit limbs, i.e. coefficients of powers of 2^32 (the value is the sum over a_i * 2^(32 i)) -- and then you can do arithmetic as tensor(ish) operations, if you interleave carry/mod steps to keep the coefficients in range and the partial results properly aligned.
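A minimal CPU-side sketch of the limb representation (pure Python standing in for what would be vectorised kernels; the explicit carry loop is exactly the normalisation step mentioned above):

```python
BASE = 2 ** 32

def to_limbs(n):
    """Little-endian base-2**32 limbs: n == sum(a_i * 2**(32*i))."""
    limbs = []
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs or [0]

def add_limbs(a, b):
    """Schoolbook addition with explicit carry propagation -- the
    kind of fix-up pass you'd interleave between parallel ops."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)
        carry = s // BASE
    if carry:
        out.append(carry)
    return out

def from_limbs(limbs):
    return sum(d << (32 * i) for i, d in enumerate(limbs))

x, y = 2 ** 100 + 12345, 2 ** 90 + 6789
assert from_limbs(add_limbs(to_limbs(x), to_limbs(y))) == x + y
```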
For a prime to be delicate as per the definition given, all numbers at distance 1 (a single digit replaced, i.e. Hamming distance 1) must be composite. This puts a lot of conditions on what the prime can be: for p = a_0 + a_1 b + a_2 b^2 + ... + a_n b^n, if p is delicate then (p - a_0 + d) must be composite for every digit d ≠ a_0, and so must (p - a_1 b + d b) for every digit d ≠ a_1, and so forth.
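These conditions are mechanical to check in base 10. A minimal sketch (trial-division primality; note that replacing the leading digit with 0 simply shortens the number, which matches the usual convention; 294001 is the first prime on the list):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def is_digitally_delicate(p):
    """True if p is prime but every single-digit replacement in its
    base-10 expansion yields a composite (or non-prime) number."""
    if not is_prime(p):
        return False
    s = str(p)
    for i in range(len(s)):
        for d in "0123456789":
            if d != s[i] and is_prime(int(s[:i] + d + s[i + 1:])):
                return False
    return True

print(is_digitally_delicate(294001))  # True
print(is_digitally_delicate(29))      # False: e.g. 23 is prime
```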
This is equivalent to proving that certain slices of some arithmetic progressions contain no primes. I'm not a professional mathematician and I didn't read the paper, but I suspect this is related to the work on prime gaps by Tao, who is cited in the article.
Yes, but if we turn it around: if digitally delicate primes are not the majority, then the majority of primes are one digit's edit away from another prime, i.e. they share all but one digit with another prime.
Napkin math (possibly with gross errors, and it's a statistical heuristic, so making it rigid cannot lead to a rigorous proof):
A prime with N digits has 9N ‘neighbors’.
There are about 10^N/ln(10^N) - 10^(N-1)/ln(10^(N-1)) primes with N digits, so the probability of an N-digit number being prime is about 1/ln(10^N), and of it not being prime, 1 - 1/ln(10^N).
So, the question is about the behavior of (1 - 1/ln(10^N))^(9N) as N → ∞. Since ln(10^N) = N·ln 10, this tends to e^(-9/ln 10) ≈ 0.02: not zero, but a small positive constant. So by this heuristic a positive proportion of primes, though not most of them, should be digitally delicate.
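A quick numerical check of this expression (same heuristic assumptions, treating each neighbor's primality as an independent coin flip):

```python
import math

def napkin(N):
    """Heuristic chance that all 9N single-digit neighbors of an
    N-digit prime are composite."""
    return (1 - 1 / (N * math.log(10))) ** (9 * N)

for N in (10, 100, 1000, 10_000):
    print(N, napkin(N))

# Closed-form limit: (1 - x/N)^(cN) -> e^(-cx), here e^(-9/ln 10).
limit = math.exp(-9 / math.log(10))
print("limit:", limit)  # ~0.02, a small but positive constant
```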
> Here, his sweatshirt lists the first 20 digitally delicate primes.
Isn't the first digitally delicate prime simply 2? Or are single-digit primes excluded? Or am I missing something terribly fundamental?
edit: Or is it that, because we can replace 2 by 3 and still get a prime, it doesn't count? Which is to say: replacing any digit in the number by any other digit must always result in a non-prime?
Proving there is an infinite number of some subset of primes, for any sort of non-trivial subset of primes, seems noteworthy at first. But considering the implications of there being infinitely many primes, there being a finite limit seems far more noteworthy. Is there any finite set of primes that isn't trivial?
I'm considering trivial to be something like all primes less than 100 or all primes divisible by some prime p, which itself is just a much fancier generalization of the claim that 2 is a noteworthy prime for being the only even prime.
I think this is the only set of non-trivial primes proven to be finite, and even then it is technically an infinite set of finite sets of primes.
Wouldn't it be more interesting the other way around, with all transformations leading to other primes? Obviously impossible, since changing the last digit to 0 makes any multi-digit number composite.
The way I understand this concept of a 'widely digitally delicate prime' is that you can take such a prime, prepend it with any sequence, and receive a prime in return.
If that is the case, then finding such a number immediately trivialises the search for the 'largest known prime' - take the largest known prime, tack this number on the back of it, and you now have a new largest prime. I guess you'd get bonus points if somehow it's also Mersenne.
> this concept of a 'widely digitally delicate prime' is that you can take such a prime, prepend it with any sequence, and receive a prime in return.
This makes no sense. Let X be a number that satisfies your condition and consider the number XX formed prepending X to itself. You're asserting that XX is prime, but it is obviously composite, because it is divisible by X (just like say 2929 is 29*101).
You can rephrase that to say: given any sequence of digits ending in 1, 3, 7, or 9, is there a prime number that ends in that sequence? That's an interesting question, and in fact it's true by Dirichlet's theorem: a k-digit suffix ending in one of those digits is coprime to 10^k, so infinitely many primes are congruent to it mod 10^k.
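A brute-force sketch of that observation: for a digit suffix whose value is coprime to 10^k (equivalently, ending in 1, 3, 7, or 9), searching along the progression s, s + 10^k, s + 2·10^k, ... must eventually hit a prime, by Dirichlet's theorem (function names are my own):

```python
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def prime_with_suffix(s):
    """Smallest prime whose decimal expansion ends in digit string s."""
    k = 10 ** len(s)
    a = int(s)
    assert gcd(a, k) == 1, "suffix must be coprime to 10^k"
    n = a
    while not is_prime(n):
        n += k  # step by 10^k keeps the suffix fixed
    return n

print(prime_with_suffix("91"))  # 191
print(prime_with_suffix("33"))  # 233
```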
Okay. So they put lots of zeroes in front of the primes. Then they start changing some of the zeroes and prove that the result will always be composite. This feels like a very distant cousin of Mersenne primes to me. Could it be used for a faster algorithm for finding primes?
Wouldn’t it be more interesting to find a prime number where changing any single digit (maybe besides the least significant one) would still be a prime? Call it a sturdy prime.
No such numbers exist, except for a small number of trivial cases.
In any base b, n ≡ (sum of the digits of n) (mod b-1); in particular, in an odd base, n and its digit sum have the same parity.
If there existed infinitely many 'semi-sturdy' primes, this would imply that certain prime 2-tuples occur infinitely often, but results in that area are limited AFAIK.
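The digit-sum congruence above is the casting-out-nines trick generalised, and is easy to verify mechanically (a sketch; the key fact is b ≡ 1 (mod b-1), so every positional weight b^k collapses to 1):

```python
def digit_sum(n, b):
    """Sum of the base-b digits of n."""
    s = 0
    while n:
        s += n % b
        n //= b
    return s

# n ≡ digit_sum(n, b) (mod b-1) for every base b >= 2.
# In an odd base b, b-1 is even, so this also fixes n mod 2.
for b in (3, 7, 10, 16):
    for n in (5, 1234, 98765):
        assert n % (b - 1) == digit_sum(n, b) % (b - 1)
print("ok")
```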
I am wondering what implications this has for encryption. Would prime factorization be easier to break if the factors were digitally delicate primes?
I would expect it to be irrelevant. If being able to factor slightly different numbers was very helpful in factoring a particular number it would already be something we can do; generating known-factorable numbers from a prime is trivial.
>“The story of mathematical research is that you don’t know beforehand if you can solve a challenging problem or whether it will lead to something important,” Pomerance said. “You can’t decide in advance: Today I’m going to do something valuable. Though it’s great, of course, when things turn out that way.”
In other words: 'Maybe someone, someday, will figure out if this matters. It'll be neat if it does'