Hacker News

dmurray · today at 12:04 AM

But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem (Shor's) which is meant to scale better than any known algorithm on a classical computer.

So far it seems to take an exponentially bigger device to factor 21 than 15, then 35 than 21, and so on; but if I understand right, at some point this levels out, and it's only a little harder, relatively speaking, to factor say 10^30 than 10^29.
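To make the asymptotics concrete, here's a minimal Python sketch (my own illustration, not from the thread): it compares the standard heuristic GNFS run-time exponent for classical factoring against a rough (log N)^3 gate count for Shor's algorithm, with all constant factors dropped, so the numbers only show the shape of the curves.

    import math

    def gnfs_cost(n):
        # Heuristic GNFS run time: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).
        # Subexponential but superpolynomial in the bit length; constants dropped.
        ln_n = math.log(n)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def shor_cost(n):
        # Shor's gate count scales roughly as (log n)^3; constants dropped again.
        return math.log(n) ** 3

    # The "levels out" claim: one more decimal digit barely moves Shor's cost...
    print(f"Shor, 10^30 vs 10^29: x{shor_cost(10**30) / shor_cost(10**29):.3f}")
    # ...while at cryptographic sizes the gap between the two curves is enormous.
    print(f"GNFS / Shor at 2048 bits: {gnfs_cost(2**2047) / shor_cost(2**2047):.1e}")

Under these idealized formulas, the quantum cost ratio between 10^29 and 10^30 is about 1.11, which is the "levels out" behaviour described above.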

Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?


Replies

pclmulqdq · today at 11:31 AM

The algorithm in question is a hypothetical algorithm for a hypothetical computer with certain properties, and those properties are assumed to be cheap.

In the case of quantum algorithms in BQP, though, one of those properties is the signal-to-noise ratio (SNR) of analog calculations, which is assumed to be infinite. SNR, as a general principle, is known to scale really poorly.
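As a toy illustration of that scaling problem (a deliberately naive model, not how any real device or error-correction scheme behaves): if every gate independently succeeds with probability 1 - p, the success probability of the whole circuit decays exponentially in the gate count.

    # Naive noise model: each gate fails independently with probability p,
    # so a g-gate circuit succeeds with probability (1 - p)**g.
    def success_prob(p, gates):
        return (1.0 - p) ** gates

    for gates in (10**3, 10**4, 10**5):
        print(f"{gates:>7} gates at 1e-3 error per gate: "
              f"success ~ {success_prob(1e-3, gates):.2e}")

Quantum error correction exists precisely to escape this exponential decay, but at the cost of large qubit and gate overheads.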

bawolff · today at 9:53 AM

> Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

I don't think we have any "real" experience scaling from 15 to 21, or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.

We haven't even done 15 in a "real" way yet. I suspect the time it takes to get to factoring 15 on fault-tolerant qubits will be a lot longer than the time to go from 15 to 21.
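For a sense of scale, here's a sketch of logical-qubit counts using the 2n+3-qubit Shor circuit from Beauregard (2003); these are logical qubits only, and the physical-qubit overhead of error correction multiplies them by a large, scheme-dependent factor.

    # Logical qubits for Shor's algorithm under Beauregard's 2n+3 construction,
    # where n is the bit length of the number being factored.
    def logical_qubits(number):
        n = number.bit_length()
        return 2 * n + 3

    for label, num in [("15", 15), ("21", 21), ("a 2048-bit RSA modulus", 2**2047)]:
        print(f"{label}: {logical_qubits(num)} logical qubits")

Note that 21 needs only one more bit (two more logical qubits) than 15, which is why getting any fault-tolerant implementation running at all is the dominant cost.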