I’m a quantum dabbler so I’ll throw out an armchair reaction: this is a significant announcement.
My memory is that 256-bit keys in non-quantum-resistant algorithms need something like 2,500 qubits or so, and by that I mean generally useful, programmable qubits. Showing a bit over 100 qubits with stability, meaning the information survives long enough to be read, and with enough generality to run some benchmarks, is something many people thought might never come.
There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just prefer to assume it won’t happen. Too much in computing and data security would change -> let’s not worry about it.
Combined with the slow pace of physical research progress (Shor's algorithm for quantum factoring dates to the mid-'90s) and the snake-oil sales companies, it's easy to ignore.
Anyway seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long tail strategizing I’d say.
The number of qubits required to execute Shor's algorithm is far larger than 2,500, because the acceptable error rate per logical qubit must keep shrinking as logical qubits and gates are added if you want meaningful results. Hence repeated rounds of error correction, or a larger surface-code distance, would be required. That significantly blows up the number of physical qubits needed.
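A back-of-envelope sketch of that blow-up (the 2*d^2 - 1 layout is the standard rotated surface code; every other number is my own illustrative assumption, not a figure from the paper):

```python
# How many physical qubits per logical qubit for a target logical error rate.
# All numeric inputs here are illustrative assumptions.

def physical_qubits_per_logical(d: int) -> int:
    # Rotated surface code: d^2 data qubits plus d^2 - 1 measurement qubits.
    return 2 * d**2 - 1

def logical_error_per_cycle(d: int, eps_d3: float, lam: float) -> float:
    # Each +2 in code distance divides the logical error rate by lam
    # (the suppression factor the paper calls lambda, reported > 2).
    return eps_d3 / lam ** ((d - 3) / 2)

target = 1e-12   # assumed per-cycle error budget for a Shor-scale circuit
eps_d3 = 3e-3    # assumed distance-3 logical error rate per cycle
lam = 2.0        # assumed suppression factor, matching "lambda > 2"

d = 3
while logical_error_per_cycle(d, eps_d3, lam) > target:
    d += 2
print(d, physical_qubits_per_logical(d))
# -> d = 67, ~9,000 physical qubits per logical qubit; a few thousand
#    logical qubits then cost tens of millions of physical qubits.
```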
The error rates given are still horrendous and nowhere near low enough for the quantum Fourier transform used by Shor's algorithm. Taking qubit connectivity into account, a single CX between two qubits that are 10 edges apart gives an error rate of about 1.5%.
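For intuition, here's roughly how an error that size compounds out of individually small gate errors (the per-gate error rate and SWAP-routing assumptions are mine, for illustration only):

```python
# Toy model of how two-qubit gate errors compound when qubits aren't adjacent.

def compound_error(per_gate_error: float, n_gates: int) -> float:
    # Probability that at least one of n_gates independent gates fails.
    return 1 - (1 - per_gate_error) ** n_gates

cx_error = 5e-4      # assumed physical CX error rate
edges = 10           # qubits 10 edges apart on the coupling graph
gates_per_swap = 3   # a SWAP decomposes into 3 CX gates

# Route with ~(edges - 1) SWAPs, then the final CX itself:
n_cx = (edges - 1) * gates_per_swap + 1
print(f"{n_cx} CX gates -> {compound_error(cx_error, n_cx):.2%} total error")
# 28 CX at 0.05% each -> about 1.4%, the right ballpark for the 1.5% above.
```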
Also, the more qubits you have and the more instructions in your program, the faster the quantum state collapses. Exponentially so. And qubit connectivity is still ridiculously low (~3 neighbors per qubit) and does not seem to be improving at all.
About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.
How can I, a regular software engineer, learn about quantum computing without having to learn quantum theory?
> Worth spending a little time doing some long tail strategizing I’d say
any tips for starters?
How long until this can derive a private key from its public key in the cryptocurrency space? Is this an existential threat to crypto?
They showed a logical qubit that can stay entangled for an hour, but to do that they had to combine their hundred or so physical qubits into a single logical one. So in some sense what they have, right now, is a single (logical) qubit.
> AI and data security will be unalterably different if so
Definitely agree with the latter, but do you have any sources on how quantum computers make "AI" (i.e. matrix multiplication) faster?
> AI and data security will be unalterably different if so
So what are the implications, if so?
> Worth spending a little time doing some long tail strategizing I’d say.
What do you mean by this?
I think some element of it might be: Shor’s algorithm has been known for 30 years, and hypothetically could be used to decrypt captured communications, right? So, retroactively, I will have been dumb for not having switched to a quantum-resistant scheme. And dumb in a way that a bunch of academic nerds have been pointing out for decades.
That level of embarrassment is frankly difficult to face. And it would be devastating to the self-image of a bunch of “practical” security gurus.
Therefore any progress must be an illusion. In the real world, the threats are predictable and mistakes don’t slowly snowball into a crisis. See also, infrastructure.
Edit after skimming arxiv preprint[1]:
Yeah, this is pretty huge. They achieved the result with surface codes, which are general error-correcting codes. The repetition code was used to further probe the quantum ECC error floor. "Just a POC" likely doesn't do it justice.
(Original comment):
Also a quantum dabbler (coincidentally, I dabbled in bit-flip quantum error correction research). Skimmed the post/research blog. I believe the key point is the scaling of error correction via repetition codes; would love someone else's viewpoint.
Slightly concerning quote[2]:
"""
By running experiments with repetition codes and ignoring other error types, we achieve lower encoded error rates while employing many of the same error correction principles as the surface code. The repetition code acts as an advance scout for checking whether error correction will work all the way down to the near-perfect encoded error rates we’ll ultimately need.
"""
I'm getting the feeling that this is more about proof-of-concept than near-practicality, but this is certainly one fantastic POC if true.
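For anyone who hasn't seen a repetition code: here's a toy, purely classical simulation of the majority-vote idea (my own illustration, not the paper's decoder) showing why the encoded error rate falls as the code grows, as long as physical errors are rare enough:

```python
import random

# Distance-d repetition code against bit flips only -- the "advance scout"
# idea from the quote: ignore phase errors entirely and just majority-vote.

def encode(bit: int, d: int) -> list[int]:
    return [bit] * d                      # repeat the logical bit d times

def noisy_channel(codeword: list[int], p_flip: float) -> list[int]:
    return [b ^ (random.random() < p_flip) for b in codeword]

def decode(codeword: list[int]) -> int:
    return int(sum(codeword) > len(codeword) / 2)   # majority vote

def logical_error_rate(d: int, p_flip: float, trials: int = 100_000) -> float:
    fails = sum(decode(noisy_channel(encode(0, d), p_flip)) != 0
                for _ in range(trials))
    return fails / trials

for d in (3, 5, 7):
    print(d, logical_error_rate(d, p_flip=0.05))
# Below threshold, each distance step suppresses the logical error rate --
# the same qualitative behavior the surface-code experiment demonstrates.
```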
[1]: https://arxiv.org/abs/2408.13687
[2]: https://research.google/blog/making-quantum-error-correction...
Relevant quote from the preprint (end of section 1):
"""
In this work, we realize surface codes operating below threshold on two superconducting processors. Using a 72-qubit processor, we implement a distance-5 surface code operating with an integrated real-time decoder. In addition, using a 105-qubit processor with similar performance, we realize a distance-7 surface code. These processors demonstrate Λ > 2 up to distance-5 and distance-7, respectively. Our distance-5 quantum memories are beyond break-even, with distance-7 preserving quantum information for more than twice as long as its best constituent physical qubit. To identify possible logical error floors, we also implement high-distance repetition codes on the 72-qubit processor, with error rates that are dominated by correlated error events occurring once an hour. These errors, whose origins are not yet understood, set a current error floor of 10^-10. Finally, we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor's fast 1.1 µs cycle duration.
"""
> Worth spending a little time doing some long tail strategizing I’d say.
Yup, like Bitcoin going to zero.
https://en.wikipedia.org/wiki/BGP_hijacking#Public_incidents
A long-term tactic of our adversaries is to capture network traffic for later decryption. The secrets in the mass of packets China presumably has in storage, waiting for quantum tech, are a treasure trove that could lead to crucial state, corporate, and financial secrets being used against us or made public.
AI being able to leverage quantum processing power is a threat we can't even fathom right now.
Our world is going to change.
You need to distinguish between "physical qubits" and "logical qubits." This paper creates a single "first-of-a-kind" logical qubit out of about 100 physical qubits (using surface-code quantum error correction). A 2019 paper from Google estimates needing ~20 million physical qubits ("How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" - https://arxiv.org/abs/1905.09749), though recent advances have probably brought this number down a bit. That's because to run Shor's algorithm at a useful scale, you need a few thousand very high quality logical qubits.
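As a rough consistency check on those two numbers (the code distance here is my assumed ballpark in the spirit of that paper, not a quoted figure):

```python
d = 27                            # assumed code distance (ballpark)
per_logical = 2 * d**2 - 1        # rotated surface code: d^2 data + d^2 - 1 ancilla
print(per_logical)                # -> 1457 physical qubits per logical qubit
print(20_000_000 // per_logical)  # -> 13726 logical qubits at most
# A naive upper bound, since real layouts also spend physical qubits on
# routing and magic-state factories -- but it lands within an order of
# magnitude of the "few thousand high-quality logical qubits" figure.
```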
So despite this significant progress, it's probably still a while until RSA is put out of a job. That being said, quantum computers would be able to retroactively break any recorded traffic whose public keys were stored, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based schemes) sooner rather than later.