They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request, is there a way of handling that with try{} predestined{} blocks? Or do I need to use the Bootstrap Paradox library?
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse
I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.
Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?
In the past five years I participated in a project (with Yosi Rinott and Tomer Shoham) to carefully examine Google's 2019 "supremacy" claim. A short introduction to our work is here: https://gilkalai.wordpress.com/2024/12/09/the-case-against-g.... We found statistically unreasonable predictions in that experiment (predictions that were "too good to be true"), indicating methodological flaws. We also found evidence of undocumented global optimization in the calibration process.
In view of these and other findings my conclusion is that Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.
I wonder if anyone else will be forced to wait on https://scottaaronson.blog/ to tell us if this is significant.
The slightly mind-blowing bit is detailed here: https://research.google/blog/making-quantum-error-correction...
“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”
Achieving this turns the normal problem of scaling quantum computation upside down.
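The "exponentially better" behavior maps onto the textbook surface-code scaling relation: below the error threshold, each increase in code distance multiplies the suppression of logical errors. A rough sketch (the error rate `p` and threshold `p_th` here are made-up illustrative numbers, not Willow's measured values):

```python
def logical_error_rate(p, p_th, d):
    # Standard surface-code scaling: for physical error rate p below the
    # threshold p_th, the logical error rate per round falls roughly as
    # (p / p_th) ** ((d + 1) / 2) for odd code distance d.
    return (p / p_th) ** ((d + 1) // 2)

p, p_th = 0.001, 0.01  # hypothetical physical error rate and threshold
for d in (3, 5, 7):
    print(d, logical_error_rate(p, p_th, d))
```

With p below threshold, every step up in distance (3 → 5 → 7) cuts the logical error rate by the same constant factor, which is the exponential suppression the blog post is describing; above threshold the same formula shows errors getting worse as the code grows.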
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time a given calculation takes? We need to cool the quantum chip in our universe; how are the (n-1)verses cooling on their end?
If we are in a simulation, this seems like a good path to getting our process terminated for consuming too much compute.
You learn a lot by what isn't mentioned. Willow had 101 qubits in the quantum error correction experiment, yet only a mere 67 qubits in the random circuit sampling experiment. Why did they not test random circuit sampling with the full set of qubits? Maybe when they turned on the full set of 101 qubits, qubit fidelity dropped.
Remember that macroscopic objects have roughly 10^23 ≈ 2^76 particles, so until 76 qubits are reached and exceeded, I remain skeptical that the quantum system actually exploits an exponential Hilbert space, rather than the state being classically encoded by the particles somehow. I bet Google is struggling right at this threshold and isn't announcing it.
I really wish the release videos made things a ~tad~ bit less technical. I know quantum computers are still very early, so the target audience for this kind of release is technical, but I can't help wondering how many more people would be excited and pulled in if they made the main release video more approachable.
>the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes
That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.
I met Julian touring UCSB as prospective grad students. We sat together at dinner and he was really smart, kind, and outgoing. Great to see him presenting this work!
Take the announcement with a grain of salt. From German physicist Sabine Hossenfelder:
> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca 50 qubit chip. In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made and a few years later a group said they did it on a conventional computer in a similar time.
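The quoted description of random circuit sampling can be made concrete with a toy brute-force simulation (a sketch, not Google's method; gate choices here are arbitrary). A classical computer must track all 2^n amplitudes, which is exactly why the task becomes intractable at ~50+ qubits:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # tiny on purpose; classical cost grows as 2**n

def apply_1q(state, gate, q):
    # Apply a 2x2 gate to qubit q of an n-qubit statevector.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cz(state, a, b):
    # Controlled-Z: negate amplitudes where qubits a and b are both 1.
    idx = np.arange(2 ** n)
    both = ((idx >> (n - 1 - a)) & 1) & ((idx >> (n - 1 - b)) & 1)
    out = state.copy()
    out[both == 1] *= -1
    return out

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
for _ in range(8):                  # layers of the "random circuit"
    for q in range(n):              # random single-qubit rotations
        t = rng.uniform(0, 2 * np.pi)
        gate = np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]], dtype=complex)
        state = apply_1q(state, gate, q)
    for q in range(0, n - 1, 2):    # entangling layer
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
probs /= probs.sum()                # guard against float drift
samples = rng.choice(2 ** n, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

The quantum chip just runs such a circuit and measures; "supremacy" is the claim that no classical machine can reproduce the output distribution once n and the circuit depth are large enough.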
The main part for me is reducing errors faster as they scale. Getting "below threshold" was a major roadblock, and crossing it is a major achievement.
I am not sure about RCS as the benchmark, as I'm not sure how useful it is in practice; it just produces really nice numbers. If I had a few billion in pocket change lying around, would I buy this to run RCS really fast? Nah, probably not. I'll get more excited when they factor numbers at a rate that would break public-key crypto. For that I would spend my pocket change!
Link to the actual article: https://www.nature.com/articles/s41586-024-08449-y
What benchmark is being referred to here?
>>Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe
Is anyone else even close to Google in this space? (e.g. on the "System Metrics" the blog defines)
With this plus the weather model announcement, I’m curious what people think about the meta-question of why corporate labs like Google DeepMind etc. seem to make more progress on big problems than academia?
There are a lot of critiques of academia, in particular that it’s so grant-obsessed you have to stay focused on your next grant all the time. This environment doesn’t seem to reward solving big problems, but rather paper production to prove the last grant did something. Yet ostensibly we fund fundamental public research precisely for fundamental changes. The reality seems to be that the traditional funding model creates incremental progress within existing paradigms.
Genuinely curious: does this make US regulators second-guess breaking up Google? Having a USA company be the first to develop quantum computing would be a major national security advantage.
Am I oversimplifying in thinking that they’ve demonstrated that their quantum computer is better at simulating a quantum system than a classical computer is?
In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.
Do Americans still want to break up the big US tech companies like Google? With proper regulation, it feels like their positive externalities, like this, are good for humanity.
Newbie's question: how far is the RCS benchmark from a more practical challenge such as breaking RSA?
The article concludes by saying that the former does not have practical applications. Why are they not using benchmarks that have some?
Great time to learn the mathematical foundations of quantum computing rigorously: https://quantumformalism.academy/mathematical-foundations-fo.... All self-paced with problem sheets (proof challenges) and worked out solutions, etc.
Trying to understand how compute happens in Quantum computers. Is there a basic explanation of how superposition leads to computing?
From ChatGPT: "with n qubits a QC can be in a superposition of 2^n different states. This means that QCs can potentially perform computations on an exponential number of inputs at once"
I don't get how the first sentence in that quote leads to the second one. Any pointers to read to understand this?
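A toy statevector calculation shows where the gap between those two sentences lies (a sketch in NumPy; nothing here is from the article). With n qubits you really do get 2^n amplitudes, but a measurement reveals only one outcome, so useful algorithms must arrange interference so that wrong answers cancel:

```python
import numpy as np

# n qubits -> a statevector of 2**n complex amplitudes.
# A Hadamard gate on every qubit puts the register into an equal
# superposition of all 2**n basis states: that's the "exponentially
# many inputs" in the quote.
n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = np.array([1.0])                    # start in |0...0>
for _ in range(n):
    state = np.kron(state, H @ np.array([1.0, 0.0]))

print(len(state))           # 8 amplitudes for 3 qubits
print(np.round(state, 3))   # each amplitude is 1/sqrt(8)
```

The catch the second ChatGPT sentence glosses over: you cannot simply read out all 2^n results. Algorithms like Grover's or Shor's work because they choreograph the phases of these amplitudes so the answer you want ends up with high probability when you measure.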
This is yet another attempt to posit NISQ results (Noisy Intermediate Scale Quantum) as demonstrations of quantum supremacy. This does not allow us to do useful computational work; it's just making the claim that a bathtub full of water can do fluid dynamic simulations faster than a computer with a bathtub-full-of-water-number-of-cores can do the same computation.
If history is any guide we'll soon see that there are problems with the fidelity (the system they use to verify that the results are "correct") or problems with the difficulty of the underlying problem, as happened with Google's previous attempt to demonstrate quantum supremacy [1].
[1] https://gilkalai.wordpress.com/2024/12/09/the-case-against-g... -- note that although coincidentally published the same day as this announcement, this is talking about Google's previous results, not Willow.
Some of these results have been on the arxiv for a few months (https://arxiv.org/abs/2408.13687) -- are there any details on new stuff besides this blog post? I can't find anything on the random circuit sampling in the preprint (or its early access published version).
They renamed quantum supremacy to "beyond-classical"? That's something.
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
Can someone explain to me how he made the jump from "we achieved a meaningful threshold in quantum computing performance" to "the multiverse is probably real"?
So one of the interesting comparisons between quantum and classical computing in the video: 5 minutes vs 10^25 years. Are there tradeoffs, or specific cases where quantum computing works, or is this generic across "all" computing use cases? If the latter, this will change everything and change the world.
Relatives were asking for a basic explainer. Here's a good one by Hannah Fry: https://youtu.be/1_gJp2uAjO0
Can't wait for Google to release a breakthrough paper in 5 years just for the authors to leave and build OpenQuant
105 qubits
Interesting; it might be time for me to load up a quantum simulator and start learning how to program these things.
I've pushed that off for a long time since I wasn't completely convinced that quantum computers actually worked, but I think I was wrong.
Can anyone comment on how this chip is built? What does the hardware look like?
What does it mean when they say that the computations are happening in the multiverse? I didn't know we were that advanced already :)
This is a great technical achievement. It gives me some hope to see that the various companies are able to invest into what is still very basic science, even if it were mostly as vanity projects for advertising purposes.
Quantum computing will surely have amazing applications that we cannot even conceive of right now. The earliest and maybe most useful applications might be in material science and medicine.
I'm somewhat disappointed that most discussions here focus on cryptography or even cryptocurrencies. People will just switch to post-quantum algorithms and most likely still have decades left to do so. Almost all data we have isn't important enough that intercept-now-decrypt-later really matters, and if you think you have such data, switch now...
Breaking cryptography is the most boring and useless application (among actual applications) of quantum computing. It's purely adversarial, merely an inconsequential step in a pointless arms race that we'd love to stop, if only we could learn to trust each other. To focus on this really betrays a lack of imagination.
How much support infrastructure does this thing need? (eg. Cryogenic cooling?) How big is a whole 'computer' and how much power draw?
More misleading random circuit sampling benchmarks. All it proves is that Google has built a quantum computer that does quantum things.
If errors are getting corrected, doesn’t that mean lower entropy? If so, where else is entropy increasing, if that's even a valid question to ask?
This is weird. I got this pop up halfway through reading:
> After reading this article, how has your perception of Google changed? Gotten better / Gotten worse / Stayed the same
Seems to coincide with Google's annual quantum announcement, which has happened each fall since 2014
In other words, get off the cloud so nobody has your encrypted data which they will be able to crack in a few minutes five or ten years from now?
There is so much skepticism about quantum computing that, instead of inflated marketing language, one should always start by stating what the biggest problems are and how they remain unsolved, and only then introduce the new improvement.
Otherwise there is no way of knowing whether the accomplishment is really significant.
We need to seriously think if our systems/society are even remotely ready for this.
>It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
Makes sense, or doesn't it? What's your take on the multiverse theory?
what is this actually useful for?
Every time this comes up, people say they're not actually useful for ML. Is that true? And if not, what would they be useful for?
I’m a quantum dabbler so I’ll throw out an armchair reaction: this is a significant announcement.
My memory is that 256-bit keys in non-quantum-resistant algorithms need something like 2,500 qubits or so, and by that I mean generally useful, programmable qubits. To show a bit over 100 qubits with stability (meaning the information survives a while, long enough to be read) that are general enough to run some benchmarks on is something many people thought might never come.
There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just like to assume it won’t happen: too much in computing and data security will change -> let’s not worry about it.
Combined with the slow pace of physical research progress (Shor's algorithm for quantum factoring dates to the mid-'90s) and snake-oil sales companies, it’s easy to ignore.
Anyway, it seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long-tail strategizing, I’d say.
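On the factoring point: the quantum computer's only job in Shor's algorithm is finding the period r of a^x mod N fast; everything else is classical number theory, simple enough to sketch (period found by brute force below, standing in for the quantum step):

```python
from math import gcd

def factor_via_period(N, a):
    # Classical skeleton of Shor's algorithm. The quantum subroutine's
    # sole job is the period-finding loop marked below; the rest runs
    # on any laptop.
    g = gcd(a, N)
    if g != 1:
        return g                   # lucky guess already shares a factor
    r, x = 1, a % N
    while x != 1:                  # <-- quantum period finding goes here
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None                # odd period: pick another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                # trivial square root: pick another a
    return gcd(y - 1, N)           # nontrivial factor of N

print(factor_via_period(15, 7))    # -> 3, a factor of 15
```

The brute-force loop takes time exponential in the bit length of N; the quantum Fourier transform does the same job in polynomial time, which is the entire threat to RSA.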