Well, this is embarrassing. I just realised I had misinterpreted the result in [1]. I made an error about how Shor's algorithm encodes numbers, wrongly assuming that numbers are encoded into the quantum state space (of size 2^(2^n)), whereas in fact one bit is encoded into one qubit, which is also more practical.
The result should instead be interpreted directly: the required error rate for logical qubits decreases as ~n^(-1/3) in the bit length n. This, in turn, means that factorising a 10000-bit number requires a logical error rate only 1/10th of that needed for a 10-bit number, since (10000/10)^(-1/3) = 1/10. This is practical, given that one can build a quantum computer with around 100k qubits and correct errors on them.
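To make the scaling concrete, here is a quick back-of-envelope check in Python (the reference rate of 1.0 for the 10-bit case is just a placeholder normalisation, not a number from the paper):

    # Required logical error rate scales as ~n^(-1/3) with bit length n.
    def required_logical_error_rate(n_bits, ref_bits=10, ref_rate=1.0):
        # ref_rate is a placeholder for whatever rate suffices at ref_bits.
        return ref_rate * (n_bits / ref_bits) ** (-1 / 3)

    # A factor of 10 stricter for a 10000-bit number than for a 10-bit one:
    print(required_logical_error_rate(10000))  # -> ~0.1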
On the other hand, a sibling comment already mentioned the limited connectivity of the quantum computers we have now. This requires repeated application of SWAP gates to bring qubits together for the interactions one needs. I guess this adds a linear overhead in the noise; hence the required error rate for logical qubits scales as around ~n^(-4/3). This, in turn, makes 10000-bit factorisation require a logical error rate 1/10000th that of 10-bit factorisation, since (10000/10)^(-4/3) = 1/10000. Assuming 10 physical qubits are needed per logical qubit to suppress the error by one order of magnitude, four orders of magnitude take 40 physical qubits per logical qubit, which for 10000 logical qubits comes out to around 400k physical qubits (see the sketch below).
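The same sketch with the assumed linear SWAP overhead folded in; the constants here (10 physical qubits per order of magnitude, one logical qubit per bit) are my assumptions, not numbers from [1]:

    import math

    # With an assumed linear SWAP overhead, the scaling becomes ~n^(-4/3).
    def required_rate_with_swaps(n_bits, ref_bits=10, ref_rate=1.0):
        return ref_rate * (n_bits / ref_bits) ** (-4 / 3)

    n = 10000
    rate = required_rate_with_swaps(n)       # -> ~1e-4
    orders = -math.log10(rate)               # ~4 orders of magnitude
    physical_per_logical = 10 * orders       # assume 10 physical qubits per order
    total = n * physical_per_logical         # assume one logical qubit per bit
    print(total)                             # -> ~400,000 physical qubits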
[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3