The error rates given are still horrendous and nowhere near low enough for the Quantum Fourier Transform used by Shor's algorithm. Taking qubit connectivity into account, a single CX between 2 qubits that are 10 edges apart gives an error rate of about 1.5%.
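That figure is easy to sanity-check with SWAP-based routing. A minimal sketch, assuming an illustrative per-two-qubit-gate error of 0.05% (my number, not one from any announcement):

```python
p_cx = 5e-4   # assumed error per physical CX; swap in your chip's datasheet value
edges = 10    # graph distance between the two qubits

# Routing across d edges costs (d - 1) SWAPs at 3 CXs each, plus the CX itself.
n_cx = 3 * (edges - 1) + 1

# Per-gate errors compound multiplicatively.
p_total = 1 - (1 - p_cx) ** n_cx
print(f"{n_cx} physical CXs -> combined error ~{p_total:.2%}")  # ~1.39%
```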
Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses. Exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.
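The "exponentially so" part is the same compounding carried out to a full circuit: if each gate succeeds with probability (1 - p), whole-circuit fidelity decays as (1 - p)^g in the gate count g. Same assumed p as above:

```python
p = 5e-4
for g in (100, 1_000, 10_000, 100_000):
    print(f"{g:>7} gates -> circuit fidelity ~{(1 - p) ** g:.2e}")
# ~100 gates keep ~95% fidelity; by 10,000 gates you're below 1%.
```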
About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.
Re: AI, it's a long way off still. The big limitation to anything quantum is always going to be decoherence and T-time [0]. To do anything with ML, you'll need a whole circuit (more complex than Shor's) just to initialize the data on the quantum device; the algorithms to do this are exponentially complex [1]. So you have to run a very expensive data-initialization circuit, and only then can you start to run your ML circuit. All of this needs to be done within the machine's T-time limit. If you exceed that limit, then the measured state of a qubit will have more to do with outside-world interactions than with your quantum gates.
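To put rough numbers on that initialization cost: generic amplitude encoding of N = 2^n classical values into n qubits takes on the order of 2^n gates. A sketch under assumptions I'm making up for illustration (constant factor of 2, ~30 ns per gate):

```python
gate_ns = 30                # assumed time per gate
for n in (10, 20, 30):
    gates = 2 * 2 ** n      # O(2^n) synthesis cost; the constant is illustrative
    total_us = gates * gate_ns / 1_000
    print(f"{n} qubits ({2 ** n:,} amplitudes): ~{gates:,} gates, ~{total_us:,.0f} µs")
```

Even the 10-qubit case eats tens of microseconds of circuit time before the actual ML circuit has run a single gate, which matters given the T1 figures below.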
Google's Willow chip has T-times of about 60-100 µs. That's not an impressive figure -- in 2022, IBM announced their Eagle chip with T-times of around 400 µs [2]. Google's angle here would be the error correction (EC).
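Treating T1 as a soft depth budget makes the comparison concrete. A sketch, again with an assumed ~30 ns per gate layer (and keeping in mind T1 is a 1/e decay constant, not a hard wall):

```python
layer_ns = 30   # assumed duration of one layer of parallel gates
for chip, t1_us in (("Willow (~100 µs)", 100), ("Eagle (~400 µs)", 400)):
    layers = t1_us * 1_000 // layer_ns
    print(f"{chip}: ~{layers:,} sequential gate layers per T1")
```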
The following portion from Google's announcement seems most important:
> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.
Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.
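For anyone unfamiliar with how RCS runs are scored: the usual metric is linear cross-entropy benchmarking (XEB), which estimates fidelity from how often the device hits the ideal distribution's heavy outputs. A toy model (a Haar-random state standing in for the random circuit's output, depolarizing noise as the error model; both are my simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
dim = 2 ** n

# Stand-in for a random circuit's ideal output distribution.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
p_ideal = np.abs(psi) ** 2

for f in (1.0, 0.5, 0.0):                  # "true" device fidelity
    noisy = f * p_ideal + (1 - f) / dim    # depolarizing-noise output
    samples = rng.choice(dim, size=100_000, p=noisy)
    f_xeb = dim * p_ideal[samples].mean() - 1   # linear XEB estimator
    print(f"true fidelity {f:.1f} -> F_XEB ~ {f_xeb:.2f}")
```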
I'm curious if EC can mitigate the sub-par decoherence times.
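That's essentially Willow's headline claim: below threshold, surface-code logical error is suppressed exponentially in code distance, roughly eps_d ≈ eps_3 / Λ^((d-3)/2), and they reported Λ of about 2 (each distance step halving the logical error rate). A sketch with illustrative numbers:

```python
lam = 2.0      # assumed suppression factor per distance step (d -> d + 2)
eps_3 = 3e-3   # assumed logical error per cycle at distance 3
for d in (3, 5, 7, 9, 11):
    eps = eps_3 / lam ** ((d - 3) / 2)
    print(f"distance {d:>2}: logical error/cycle ~{eps:.1e}")
# Physical T1 still sets the clock, but the logical qubit can outlive it
# as long as each added distance step keeps buying a constant factor.
```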
[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...
[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068
[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...
> Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses.
Was this actually measured and published somewhere?
In addition to that, the absolutely enormous domains that the Fourier Transform sums over (essentially, one term in the sum for each possible answer), and the cancellations which would have to occur for that sum to be informative, mean that a theoretically-capable Quantum Computer will be testing the predictions of Quantum Mechanics to a degree of precision hundreds of orders of magnitude greater than any physics experiment to date. (Or at least dozens of orders of magnitude, in the case of breaking Discrete Log on an Elliptic Curve.) It demands higher accuracy in the probability distributions predicted by QM than could be confirmed by naive frequency tests which used the entire lifetime of the entire universe as their laboratory!
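To make "enormous" concrete, the arithmetic is short (the register width is the textbook choice for Shor's period-finding step):

```python
from math import log10

n = 2048            # bits in an RSA-2048 modulus
register = 2 * n    # standard QFT register width in Shor's algorithm
print(f"QFT sum: 2**{register} ~ 10**{int(register * log10(2))} terms")
# ~10**1233 amplitudes that must interfere just right, versus
# roughly 10**80 atoms in the observable universe.
```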
Imagine a device conceived in the 17th century, the intended functionality of which would require a physical sphere which matches a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., the atomic basis of matter, background gravitational waves). I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.
We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each other's secrets.