I don’t understand, and I hope it’s just bad writing.
Certainly you can build a branch of mathematics without an axiom of infinity, and that's fine: it's math over finite sets.
However, the axiom of infinity is independent: it doesn't contradict anything in the standard formalizations, so it doesn't make sense to say "infinity is wrong".
He may think the axiom of infinity isn't satisfied by our real physical world, but that's not a math question! There's nothing logically inconsistent about infinite sets or their axiomatizations.
It's an interesting read. I don't think it's bad, but it's not rigorous or really aimed at anything in particular. It's basically like asking a discrete mathematician whether he needs continuity: no. It seems reasonable that we might need separate paradigms to think about different kinds of problems (e.g., "is there a physical size of the universe?" vs. "is there a biggest prime number?") because we don't know yet whether there is a theory of everything or whether there are innate boundary layers.
It's a fun thinking prompt, and you can go down the rabbit hole of information theory and quantized spacetime. Like you suggest, it's perfectly fine to say "infinity does not exist" and also contemplate and operate on a slice at a time.
It's rarely understood that infinity isn't something mathematicians made up to make things more complex; it's an abstraction that makes a lot of ideas vastly simpler.
This is alluded to in the article; it's challenging to prove a+b=b+a without infinity (though if you do modular/wraparound arithmetic it becomes straightforward).
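To see why the wraparound case is straightforward: with a finite domain you can verify a+b=b+a by brute-force enumeration, where over all the naturals you would need induction. A minimal Python sketch (the modulus 2**8 is an arbitrary choice for illustration):

```python
# Exhaustively verify a + b == b + a in arithmetic modulo n.
# With a finite domain, "proof by checking every case" is available;
# over the infinite set of all naturals it would require induction instead.
def addition_commutes_mod(n: int) -> bool:
    return all(
        (a + b) % n == (b + a) % n
        for a in range(n)
        for b in range(n)
    )

print(addition_commutes_mod(2**8))  # checks all 65536 pairs
```

The enumeration is the whole point: it is a proof technique that simply isn't available once the domain is infinite.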
It seems to me (not an expert in this area by any stretch) that ultrafinite mathematics could basically be a branch of theoretical computer science, in the sense that people seem interested in procedures to generate the numbers. In that regard, it's a bit surprising that TCS wasn't mentioned in the article.
What people might not be understanding is that mathematics is inherently built... ZFC was pored over for years and eventually the community concluded it was a good system to (a) preserve most, if not all, of the mathematics that had already been done and (b) build more mathematics.
You can have gripes over whether or not pure math is compatible with the physical world but we're not exactly close to solving that problem... if we were, then physicists would have a much easier time lol
> However, an axiom of infinity is independent, it doesn’t contradict anything in standard formalizations, and so it doesn’t make sense to say “infinity is wrong”.
Suppose we start with ZFC - Infinity as our base system. Then the negation of Infinity is consistent with this system. But adding Infinity itself makes the system strictly stronger, since ZFC proves the consistency of ZFC - Inf; in particular, by Gödel's second incompleteness theorem, in ZFC we cannot prove that Infinity is consistent with ZFC - Inf, since that would amount to ZFC proving its own consistency.
In other words, in principle, it might be the case that ZFC - Inf is consistent, yet ZFC itself has a contradiction. In practice, most people believe that ZFC is also consistent, but we have no way to prove it a priori without accepting even more new axioms.
> But in the late 1800s, Georg Cantor and other mathematicians showed that the infinite really can exist.
As I understand it, the objection is to the proposition that infinity is "real", i.e., that there actually are infinitely many (not just very many) things.
I don't think it's bad writing. These people actually get angry at the idea that other people do math that might not connect to the real world. And they especially have it out for infinity.
I say do whatever math you like. It is helpful to know what math you are doing. For instance, while I don't have a "problem" with the Axiom of Choice per se, I do like clean specifications of when we are using it and when we are not, because it is another example of when we detach from reality as we know it. I don't have a problem with detaching from reality as we know it; I just like there to be awareness that we have done so.
But plenty of math is detached from reality. Honestly we don't observe very many "mathematical entities" at all; I've never seen a graph. I've never seen hyperbolic space. I'm aware of the many places aspects of them seem to map to reality, but I've never actually seen a literal graph in the real world.
Personally I am reminded of the way that we model our computers with Turing-complete formalisms, despite the fact that they are observably not Turing complete and are technically just finite state machines. However, the observation that they are "just" finite state machines doesn't move us closer to an understanding of how our computers work; it moves us farther away.

Even though computers are completely real-world phenomena, if you want to understand the issues raised by things like undecidability in the real world, you're going to be exponentially better off using Turing-machine formalisms and simply noting that you may run out of memory or practically available computational resources before a calculation completes, rather than trying to build a new set of formalisms around finite state machines. We can be in an engineering context where we are well aware of the finite nature of everything we are doing, because it all comes back to real, physical machines, but it's still easier to model with infinity than without it.
In that context, the real utility of "infinity" is less "an infinite number of things" than "you will never reach for another X [byte of RAM, byte of disk, CPU cycle, incrementing counter, etc.] and be told you're out of resources". Basically we write our proofs, formal or informal, ignoring "what if I reach for this resource and it's not there?" for every such resource, every time we reach for one, which is quite often. You could go through a system and add a "what if" check for every such instance, but it's way cheaper to just buy another stick of RAM, or tweak the program to take fewer resources, than to try to deal with the exponential-with-a-large-exponent explosion of states this causes mathematically.
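The contrast can be sketched in a few lines of Python. The names and the explicit "budget" model here are illustrative inventions, not anything from a real system; the point is only how the finite view forces an extra failure branch at every allocation:

```python
# Contrast: the "infinite" idealization vs. explicit finite-resource checks.
# The budget model is a toy stand-in for "bytes of RAM", "disk", etc.

def count_up(n):
    """Idealized version: assumes the next counter value is always available."""
    values = []
    for i in range(n):
        values.append(i)  # never asks "is there room for one more?"
    return values

def count_up_checked(n, budget):
    """Finite version: every allocation consults an explicit resource budget,
    multiplying the success/failure states the caller has to reason about."""
    values = []
    for i in range(n):
        if len(values) >= budget:  # the "what if it's not there?" branch
            raise MemoryError("out of budget")
        values.append(i)
    return values

print(count_up(5))                    # [0, 1, 2, 3, 4]
print(count_up_checked(5, budget=8))  # [0, 1, 2, 3, 4]
```

Every caller of the checked version inherits that extra failure mode, and so do its callers in turn, which is where the state explosion comes from.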
The problem with infinity is that it's a hack. It is basically the NULL pointer of mathematicians: an instance of a number with a special meaning that breaks the abstraction of numbers.
If you want to do things with infinity, fine, but then do it properly and write things like lim x->inf (your expression with x here)
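For what it's worth, IEEE-754 floating point ships exactly such a special value, and it breaks the usual algebra in the way the NULL-pointer analogy suggests. A quick Python illustration:

```python
# float("inf") behaves like a number until ordinary identities stop holding.
inf = float("inf")

print(inf + 1 == inf)  # True: infinity absorbs addition
print(inf - inf)       # nan, not 0: the identity x - x == 0 breaks
print(inf > 1e308)     # True: compares above every finite float
```

Whether that's a useful sentinel or an abstraction leak is basically this same argument replayed inside a programming language.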
I think you can reframe this and better understand the point these mathematicians are making.
The vast, vast majority of mathematics DOES use infinities. That's the standard perspective. The question is whether there is good, interesting, useful mathematics to be explored by disallowing that concept.
The way I see it, Gödel's and Turing's work and complexity theory came out of this line of thinking about _effective_ computation. This is an argument for exploring the mathematics that arises when you treat actual computer math not as an imperfect approximation of the real numbers, but as a mathematical object in its own right.
I would guess (?) that it's more interesting for floating-point math and related areas than for integer math, because integer math is already well explored in group theory.
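A concrete reason floating point deserves its own theory: float addition isn't even associative, so the real numbers' field axioms can't be the right description of it. A quick Python check:

```python
# Floating-point addition is not associative, unlike addition on the reals.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c
right = a + (b + c)

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Modular integers, by contrast, form a genuine ring, which is why that side is already well-trodden algebra.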