Nice, thanks for linking that paper; I linked it below as well.
The authors argue (e.g., in the first comment here: https://scottaaronson.blog/?p=8310#comments) that by their definition, Google still has only a fraction of one logical qubit. Google's logical error rate is of order 1e-3, whereas this paper considers a logical qubit to have an error rate of order 1e-18. Google's breakthrough is showing that the logical error rate can be suppressed exponentially as the system is made larger, but there is still a lot of scaling work to do to reach 1e-18.
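To make the size of that gap concrete, here is a rough back-of-the-envelope sketch. It assumes the surface-code picture where the logical error rate is suppressed by a roughly constant factor Λ each time the code distance d grows by 2, and uses illustrative numbers (Λ = 2, a 1e-3 logical error rate at d = 7, and ~2d² physical qubits per logical qubit) that are my own assumptions rather than figures from the paper:

```python
# Illustrative only: assumes p_logical ~ p0 * LAMBDA**(-(d - 7) / 2), with
# LAMBDA, P_AT_D7, and the ~2*d**2 qubit-count estimate all assumed numbers.

LAMBDA = 2.0        # assumed suppression factor per distance-2 increase
P_AT_D7 = 1e-3      # assumed logical error rate at code distance d = 7
TARGET = 1e-18      # error rate this paper uses to define a logical qubit

def logical_error(d: int) -> float:
    """Illustrative logical error rate at odd code distance d >= 7."""
    steps = (d - 7) // 2                # number of distance-2 increments
    return P_AT_D7 * LAMBDA ** (-steps)

d = 7
while logical_error(d) > TARGET:
    d += 2

print(f"need distance ~{d}, i.e. roughly {2 * d * d} physical qubits "
      f"per logical qubit, to reach p_logical <= {TARGET:g}")
```

The point of the exercise is the direction of the scaling: growing the code distance increases the physical-qubit count only polynomially, while each fixed increase in distance divides the logical error rate by another factor of Λ, which is why "exponential suppression" makes the 1e-3 → 1e-18 gap look closable rather than hopeless.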
So according to this paper, we are still on roughly the same track that they laid out, and therefore might expect to break RSA between 2040 and 2060. Note that there are likely a lot of interesting things one can do before breaking RSA, which is among the hardest quantum algorithms to run.
How should one understand the claim that the logical error rate can decrease exponentially as the system gets larger? Does it mean each physical qubit generates more logical qubits as the total number of physical qubits increases?