This explanation is rather reductive in its criticism of computational geometry.
The thing with computational geometry is that it's usually someone else's geometry, i.e. you have no control over its quality or intent. In other words, whether two points, planes, or lines actually align, or align within 1e-4, is no longer really a mathematical question; it's all about the intent of the user: does the user think these planes overlap?
This is why most geometry kernels (see Open CASCADE) sport things like "fuzzy boolean operations" [0] that lean into epsilons. These epsilons mask the error-prone supply chain of the meshes that arrive in your program by allowing some tolerance.
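To make that concrete, here is a rough sketch of the kind of toleranced classification a fuzzy boolean leans on. The names and the 1e-4 default are mine, purely for illustration; this is not Open CASCADE's actual API:

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;

    double dot(const Vec3& a, const Vec3& b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    enum class Side { On, Above, Below };

    // Classify a point against a plane (unit normal n, offset d) with a
    // caller-supplied tolerance instead of an exact comparison with zero.
    // The epsilon encodes intent: how far apart two entities may be and
    // still count as coincident.
    Side classify(const Vec3& p, const Vec3& n, double d, double eps = 1e-4) {
        double signed_dist = dot(n, p) - d;
        if (std::fabs(signed_dist) <= eps) return Side::On;  // "fuzzy" coincidence
        return signed_dist > 0.0 ? Side::Above : Side::Below;
    }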
Finally, the remark "There are many ways of solving this problem" is also overly reductive. Everyone reading this should understand that the topic is under active research right now, in 2026; there are currently no blessed solutions, otherwise that research would not be needed. More than that, the problem is to some extent fundamentally unsolvable, depending on what you mean by "solvable": because your input is inexact, not all geometric operations yield topologically valid results, so an "exact", let alone "correct along some dimension", result cannot be achieved for every (combination of) inputs.
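A tiny, self-contained illustration of why inexact input forecloses "exact" answers: merely rotating a point by an angle and back already perturbs it, so on most platforms bitwise equality is gone and a toleranced comparison is the only question that still means anything. Nothing kernel-specific here, just plain C++:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double theta = 0.1;  // rotate by 0.1 rad, then rotate back
        double x = 1.0, y = 2.0;

        double xr =  x * std::cos(theta) - y * std::sin(theta);
        double yr =  x * std::sin(theta) + y * std::cos(theta);
        double xb =  xr * std::cos(theta) + yr * std::sin(theta);
        double yb = -xr * std::sin(theta) + yr * std::cos(theta);

        // Bitwise equality typically fails after the round trip; the residual
        // is on the order of 1e-16, so only a toleranced check is meaningful.
        std::printf("exact match: %d\n", x == xb && y == yb);
        std::printf("residual:    %.3e %.3e\n", xb - x, yb - y);
    }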
[0] https://dev.opencascade.org/content/fuzzy-boolean-operations
> This is why most geometry kernels (see Open CASCADE) sport things like "fuzzy boolean operations" [0] that lean into epsilons. These epsilons mask the error-prone supply chain of the meshes that arrive in your program by allowing some tolerance.
They don’t just lean into epsilons: the session-context tolerance is used for almost every single point classification operation in geometric kernels, and many primitives carry their own accumulating error component for downstream math.
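Roughly what that looks like structurally, as a made-up sketch of the idea (this is not how Parasolid or ACIS actually spell it): the effective tolerance for a classification is the session tolerance widened by whatever error bound the primitive carries with it.

    #include <cmath>

    // Hypothetical structures, only to illustrate the idea; real kernels
    // have their own representations for all of this.
    struct Session {
        double linear_tolerance;   // session-wide coincidence tolerance
    };

    struct SurfacePrimitive {
        double accumulated_error;  // error bound inherited from upstream operations
        // ... actual geometry would live here ...
    };

    // Distance-based point classification: the effective tolerance is the
    // session tolerance widened by the error the primitive has accumulated.
    bool point_on_surface(const Session& s, const SurfacePrimitive& surf,
                          double distance_to_surface) {
        double effective_tol = s.linear_tolerance + surf.accumulated_error;
        return std::fabs(distance_to_surface) <= effective_tol;
    }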
Even then, the current state of the art (in production kernels) is tolerance expansion, where the kernel goes through up to 7 expansion steps, retrying the point classification, until it just gives up. Those edge cases were some of the hardest parts of working on a kernel.
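The shape of that retry loop, as a toy sketch (the 7 steps and the growth factor are illustrative, and the "contradictory evidence" model is hugely simplified compared to a real kernel):

    #include <cmath>
    #include <optional>

    enum class Classification { Inside, Outside, On };

    // One attempt at a given tolerance: test the point against two adjacent
    // faces; if they disagree about coincidence the attempt is inconclusive
    // (a toy model of the contradictory evidence a real kernel runs into).
    std::optional<Classification> try_classify(double d_face_a, double d_face_b,
                                               double tol) {
        bool on_a = std::fabs(d_face_a) <= tol;
        bool on_b = std::fabs(d_face_b) <= tol;
        if (on_a != on_b) return std::nullopt;          // inconclusive at this tol
        if (on_a) return Classification::On;
        return (d_face_a > 0.0 && d_face_b > 0.0) ? Classification::Outside
                                                  : Classification::Inside;
    }

    // Tolerance expansion: widen the band and retry a bounded number of
    // times before giving up.
    std::optional<Classification> classify_with_expansion(double d_face_a,
                                                          double d_face_b,
                                                          double base_tol) {
        const int    kMaxSteps = 7;
        const double kGrowth   = 10.0;
        double tol = base_tol;
        for (int step = 0; step < kMaxSteps; ++step) {
            if (auto result = try_classify(d_face_a, d_face_b, tol)) return result;
            tol *= kGrowth;                             // expand and retry
        }
        return std::nullopt;                            // kernel gives up
    }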
This is a fundamentally unsolvable problem with floating point math (I worked on both Parasolid and ACIS in the 2000s). Even the ray-box intersection example TFA gives is a long-standing thorn; raytracing is one of the last fallbacks for nasty point classification problems.
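For anyone who hasn't run into it, and assuming the article's example is the standard slab test, the thorn looks roughly like this: when a ray direction component is zero the reciprocal is +/-inf, and if the origin also sits exactly on a slab plane, 0 * inf yields NaN, which std::max/std::min then silently drop, so that slab's constraint just vanishes. Sketch (naive form, not from any particular codebase):

    #include <algorithm>
    #include <cmath>
    #include <utility>

    bool ray_hits_box(const double origin[3], const double dir[3],
                      const double box_min[3], const double box_max[3]) {
        double tmin = -INFINITY, tmax = INFINITY;
        for (int i = 0; i < 3; ++i) {
            double inv = 1.0 / dir[i];                    // +/-inf when dir[i] == 0
            double t0  = (box_min[i] - origin[i]) * inv;  // 0 * inf -> NaN if the
            double t1  = (box_max[i] - origin[i]) * inv;  // origin lies on the plane
            if (inv < 0.0) std::swap(t0, t1);
            tmin = std::max(tmin, t0);  // std::max/min drop a NaN second argument,
            tmax = std::min(tmax, t1);  // so that slab's constraint is lost
        }
        return tmax >= std::max(tmin, 0.0);
    }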