My memory is that 256-bit keys in non-quantum-resistant algorithms need something like 2,500 qubits, and by that I mean generally useful, programmable qubits. Demonstrating a bit over 100 qubits with stability, meaning the information survives long enough to be read and the system is general enough to run some benchmarks on, is something many people thought might never come.
There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just prefer to assume it won’t happen. Too much in computing and data security would change, so let’s not worry about it.
Combined with the slow pace of physical research progress (Shor’s algorithm for quantum factoring dates to the mid-90s) and the snake-oil companies, it’s easy to ignore.
Anyway seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long tail strategizing I’d say.
So despite this significant progress, it's probably still a while until RSA is put out of a job. That said, quantum computers would be able to retroactively break anything encrypted under public keys whose traffic was recorded, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.
Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.
This.
People seem to think that because something is end-to-end encrypted it is secure. They don't seem to grasp that traffic and communications being dumped/recorded now in encrypted form could be used against them decades later.
Use a key exchange that offers perfect forward secrecy (e.g. ephemeral Diffie-Hellman) and you don’t need to worry about your RSA private key eventually being discovered.
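For intuition, here's a toy sketch of the ephemeral Diffie-Hellman idea behind forward secrecy. The prime and generator are illustrative placeholders, not real cryptographic parameters; the point is only that the per-session private keys are generated fresh and thrown away.

```python
# Toy sketch of ephemeral Diffie-Hellman, the idea behind forward
# secrecy: both sides derive a shared secret from throwaway keys, so a
# later compromise of long-term keys doesn't expose past sessions.
# Small numbers for illustration only -- not real cryptographic params.
import secrets

P = 0xFFFFFFFB  # a small 32-bit prime; real DH uses ~2048-bit groups
G = 5           # generator (illustrative)

# Each side generates a fresh ephemeral private key per session...
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1

# ...exchanges only the public halves...
A = pow(G, a, P)
B = pow(G, b, P)

# ...and both arrive at the same shared secret.
alice_secret = pow(B, a, P)
bob_secret = pow(A, b, P)
assert alice_secret == bob_secret

# Forward secrecy: a and b are discarded after the session, so recorded
# traffic can't be decrypted later even if long-term keys leak.
del a, b
```

(Note the commenter's point is about classical key compromise; Diffie-Hellman itself is still breakable by a large quantum computer, which is why PQC key exchanges exist.)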
ETA: Wikipedia says 2330 qubits, but I'm not sure it's citing the most recent work: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#ci...
The result can be interpreted directly: the required error rate for logical qubits decreases as ~n^(-1/3). This, in turn, means that factorising a 10,000-bit number would only require an error rate 1/10th of that needed for a 10-bit number. This is practical given that one can make a quantum computer with around 100k qubits and correct errors on them.

On the other hand, a sibling comment already mentioned the limited connectivity that these quantum computers currently have. This, in turn, requires repeated application of SWAP gates to get the interactions one needs. I guess this adds a linear overhead to the noise; hence, the scaling of the required error rate for logical qubits is around ~n^(-4/3). This, in turn, makes 10,000-bit factorisation require a logical error rate 1/10,000th that of 10-bit factorisation. Assuming that 10 physical qubits are used per order of magnitude of error reduction, this works out to around 400k physical qubits.
[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3
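The arithmetic above can be sketched in a few lines. This is only a back-of-the-envelope model; the exponents and the 10-physical-qubits-per-order-of-magnitude assumption are taken from the comment, not from the paper.

```python
# Back-of-the-envelope sketch of the scaling argument above.
# Assumes the required logical error rate scales as ~n^alpha relative
# to a 10-bit baseline problem.
import math

def relative_error_requirement(n_bits, baseline_bits=10, alpha=-1/3):
    """Required logical error rate, relative to the baseline problem size."""
    return (n_bits / baseline_bits) ** alpha

# Ideal connectivity: ~n^(-1/3)
print(relative_error_requirement(10_000, alpha=-1/3))   # ~0.1, i.e. 1/10th
# With SWAP overhead on limited connectivity: ~n^(-4/3)
print(relative_error_requirement(10_000, alpha=-4/3))   # ~1e-4, i.e. 1/10,000th

# Rough physical-qubit cost: 10 physical qubits per order of magnitude
# of error suppression, per logical qubit.
orders = -math.log10(relative_error_requirement(10_000, alpha=-4/3))  # ~4
physical_per_logical = round(10 * orders)                             # 40
print(physical_per_logical * 10_000)  # ~400k physical qubits overall
```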
Also, the more qubits you have and the more instructions are in your program, the faster the quantum state collapses; exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.
About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.
Imagine a device conceived in the 17th century, the intended functionality of which would require a physical sphere which matches a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., atomic basis of matter, background gravitational waves.) I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.
We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each others' secrets.
Google's Willow chip has T1 times of about 60-100 µs. That's not an impressive figure; in 2022, IBM announced their Eagle chip with T1 times of around 400 µs [2]. Google's angle here is the error correction (EC).
The following portion from Google's announcement seems most important:
> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.
Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.
I'm curious if EC can mitigate the sub-par decoherence times.
[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...
[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068
[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...
Was this actually measured and published somewhere?
> Worth spending a little time doing some long tail strategizing I’d say
any tips for starters?
https://quantum.microsoft.com/en-us/tools/quantum-katas
The first few lessons do cover complex numbers and linear algebra, so skip ahead if you want to get straight to the 'quantum' coding, but there's really no escaping the math if you really want to learn quantum.
Disclaimer: I work in the Azure Quantum team on our Quantum Development Kit (https://github.com/microsoft/qsharp) - including Q#, the Katas, and our VS Code extension. Happy to answer any other questions on it.
You don't need to know quantum theory necessarily, but you will need to know some maths. Specifically linear algebra.
There are a few YouTube courses on linear algebra.

For a casual set of videos:

- https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...

For a more formal approach:

- https://youtube.com/playlist?list=PL49CF3715CB9EF31D

And the corresponding open courseware:

- https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010...
Linear Algebra Done Right comes highly recommended.
1. Kaye, LaFlamme, and Mosca - An Introduction to Quantum Computing
2. Nielsen and Chuang - Quantum Computation and Quantum Information (The Standard reference source)
3. Andrew Childs's notes here [1]. Closest to the state-of-the-art, at least circa ~3 years ago.
The model of quantum mechanics, if you can afford to ignore any real-world physical system and just deal with abstract |0>, |1> qubits, is relatively easy. (This is really funny given how incredibly difficult actual quantum physics can be.)

You have to learn basic linear algebra with complex numbers (you can safely ignore anything really gnarly).

Then you learn how to express Boolean circuits in terms of matrix multiplications, to capture classical computation in this model. This should be pretty easy if you have a software engineer's grasp of Boolean logic.
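As a minimal illustration of that step, here's a NOT gate and a CNOT gate written as matrices acting on basis-state vectors (a NumPy sketch; the two-qubit basis ordering |00>, |01>, |10>, |11> is the usual convention):

```python
# Sketch: classical Boolean logic as matrix multiplication. Basis states
# |0> and |1> are unit vectors; reversible gates are permutation matrices.
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

X = np.array([[0, 1],
              [1, 0]])          # NOT gate: swaps |0> and |1>

# CNOT on two bits: flips the target iff the control is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(X @ ket0)                    # [0 1], i.e. |1>
state_10 = np.kron(ket1, ket0)     # |10>: control=1, target=0
print(CNOT @ state_10)             # [0 0 0 1], i.e. |11>
```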
Then you can learn the basic ideas about entanglement, and a few of the weird quantum tricks that make algorithms like Shor's and Grover's search work. Shor's algorithm may be a little mathematically tough.

Realistically, you probably will never need to know how to program a quantum computer even if they become practical and successful. The applications are powerful but very limited.
"What You Shouldn't Know About Quantum Computers" is a good non-mathematical read.
I would not worry about hardware at first. But if you are interested and like physics, the simplest to understand are linear optical quantum circuits. These use components which may be familiar from high school or undergraduate physics. The catch is that the space (and component count) is exponential in the number of qubits, hence the need for more exotic designs.
I prefer his explanation to most other explanations because he starts, right away, with an analogy to ordinary probabilities. It's easy to understand how linear algebra is related to probability (a random combination of two outcomes is described by linearly combining them), so the fact that we represent random states by vectors is not surprising at all. His explanation of the Dirac bra-ket notation is also extremely well executed. My only quibble is that he doesn't introduce density matrices (which in my mind are the correct way to understand quantum states) until halfway through the notes.
But the key thing to know about quantum computing is that it is all about the mathematical properties of quantum physics, such as the way complex probabilities work.
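A tiny, purely illustrative sketch of that probability analogy: classical probabilities combine linearly as vector entries, while quantum amplitudes also combine linearly but yield probabilities via their squared magnitudes.

```python
# Sketch of the probability analogy described above: a random state is a
# vector of probabilities, and mixing states is a linear combination.
# Quantum states work the same way, except the entries are complex
# amplitudes whose squared magnitudes give probabilities (Born rule).
import numpy as np

heads = np.array([1.0, 0.0])
tails = np.array([0.0, 1.0])
fair_coin = 0.5 * heads + 0.5 * tails        # classical: probabilities add
print(fair_coin)                              # [0.5 0.5]

plus = (1 / np.sqrt(2)) * np.array([1.0, 1.0])  # quantum: amplitudes add
probs = np.abs(plus) ** 2                       # Born rule
print(probs)                                    # each outcome ~0.5
```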
Short enough that it's reasonable to start R&D efforts on post-quantum crypto.
https://www.deloitte.com/nl/en/services/risk-advisory/perspe...
Definitely agree with the latter, but do you have any sources on how quantum computers make "AI" (i.e. matrix multiplication) faster?
So what are the implications, if so?
What do you mean by this?
Honestly, there's not that much to discuss here, though. The only things you can do from this strategizing are to treat even encrypted data as unsafe to store, unless you're using quantum-resistant encryption such as AES, and to budget time for switching to PQC as it becomes available.
So the commenter is saying that Cybersecurity needs to be planning for a near-world where traditional cryptography, including lots of existing data at rest, is suddenly as insecure as plaintext.
That level of embarrassment is frankly difficult to face. And it would be devastating to the self-image of a bunch of “practical” security gurus.
Therefore any progress must be an illusion. In the real world, the threats are predictable and mistakes don’t slowly snowball into a crisis. See also, infrastructure.
Yeah, this is pretty huge. They achieved the result with surface codes, which are general ECCs. The repetition code was used to further probe the quantum ECC error floor. "Just a POC" likely doesn't do it justice.
(Original comment):
Also a quantum dabbler (coincidentally, I dabbled in bit-flip quantum error correction research). I skimmed the post/research blog. I believe the key point is the scaling of error correction via repetition codes; would love someone else's viewpoint.
Slightly concerning quote[2]:
"""
By running experiments with repetition codes and ignoring other error types, we achieve lower encoded error rates while employing many of the same error correction principles as the surface code. The repetition code acts as an advance scout for checking whether error correction will work all the way down to the near-perfect encoded error rates we’ll ultimately need.
"""
I'm getting the feeling that this is more about proof-of-concept, rather than near-practicality, but this is certainly one fantastic POC if true.
[1]: https://arxiv.org/abs/2408.13687
[2]: https://research.google/blog/making-quantum-error-correction...
Relevant quote from the preprint (end of section 1):
"""
In this work, we realize surface codes operating below threshold on two superconducting processors. Using a 72-qubit processor, we implement a distance-5 surface code operating with an integrated real-time decoder. In addition, using a 105-qubit processor with similar performance, we realize a distance-7 surface code. These processors demonstrate Λ > 2 up to distance-5 and distance-7, respectively. Our distance-5 quantum memories are beyond break-even, with distance-7 preserving quantum information for more than twice as long as its best constituent physical qubit. To identify possible logical error floors, we also implement high-distance repetition codes on the 72-qubit processor, with error rates that are dominated by correlated error events occurring once an hour. These errors, whose origins are not yet understood, set a current error floor of 10^-10. Finally, we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor's fast 1.1 µs cycle duration.
"""
However the main scaling of error correction is via surface codes, not repetition codes. It's an important point as surface codes correct all Pauli errors, not just either bit-flips or phase-flips.
They use repetition codes as a diagnostic method in this paper more than anything, it is not the main result.
In particular, I interpret the quote you used as: "We want to scale surface codes even more, and if we were able to do the same scaling with surface codes as we are able to do with repetition codes, then this is the behaviour we would expect."
Edit: Welp, saw your edit, you came to the same conclusion yourself in the time it took me to write my comment.
Goodbye not just to Bitcoin, but also Visa, Stripe, Amazon shopping, ...
Yup, like Bitcoin going to zero.
As to faking signatures and, e.g., stealing Satoshi's coins or just fucking up the network with fake transactions that verify: there is some concern, and there are some attack vectors that work well if you have a large, fast quantum computer and want to ninja in. Essentially you need something that can crack a 256-bit ECDSA key in the window between when a public key is revealed and when a block spending from it is confirmed. That's definitely out of the reach of anyone right now, much less persistent threat actors, much less hacker hobbyists.
But it won't always be. The current state of the art plan would be to transition to a quantum-resistant UTXO format, and I would imagine, knowing how Bitcoin has managed itself so far, that will be a well-considered, very safe, multi-year process, and it will happen with plenty of time.
More likely that other critical-infrastructure failures will happen within trad-finance, which has a much larger vulnerability footprint; being able to trivially reverse engineer every logged SSL session is likely to be a much more impactful turn of events. I’d venture that there are significant ear-on-the-wire efforts going on right now in anticipation of a reasonable bulk SSL de-cloaking solution. Right now we think it doesn’t matter who can see our “secure” traffic. I think that is going to change, retroactively, in a big way.
If the cryptography securing Bitcoin is broken, say goodbye to the banking system.
A long-term tactic of our adversaries is to capture network traffic for later decryption. The secrets in the mass of packets China assumedly has in storage, waiting for quantum tech, is a treasure trove that could lead to crucial state, corporate, and financial secrets being used against us or made public.
AI being able to leverage quantum processing power is a threat we can't even fathom right now.
Our world is going to change.
A sort of quantum commenting conundrum, I guess.
The first time I was just watching, determined not to press the button, but when I received the response, I was startled into pressing it.
The second time, I just stepped back from my keyboard, and my cat came flying out of the back room and walked on the keyboard, triggering the request.
The third time, I was holding my cat, and a train rumbled by outside, rattling my desk and apparently triggering the switch to send the request.
The fourth time, I checked the tracks, was holding my cat, and stepped back from my keyboard. Next thing I heard was a POP from my ceiling, and the request was triggered. There was a small hole burned through my keyboard when I examined it. Best I can figure, what was left of a meteorite managed to hit at exactly the right time.
I'm not going to try for a fifth time.
For a brief moment I thought this was some quantum-magical side effect you were describing and not some API error.
It's a bit like Jeopardy, really.
If you auth with the bearer token "And There Are No Friends At Dusk." then the API will call you and tell you which request you wanted to send.
Ah. Newbie mistake. You need to turn OFF your computer and disconnect from the network BEFORE sending the request. Without this step you will always receive a response before the request is issued.
I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.
Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?
I have no idea who put it there, but I can assure you the actual paper contains no such nonsense.
I would have thought whoever writes the Google tech blogs is more competent than bottom-tier science journalists. But in this case I think it is more reasonable to assume malice, as the post is authored by the Google Quantum AI lead, and makes more sense as hype-boosting buzzword bullshit than as an honest misunderstanding that was not caught during editing.
No sign of a Heisenberg cut has been observed so far, even as experiments involving entanglement of larger and larger molecules are performed, which makes objective-collapse theories hard to consider seriously.
Bohmian theories are nice, but require awkward adjustments to reconcile them with relativity. But more importantly, they are philosophically uneconomical, requiring many unobservable — even theoretically — entities [0].
That leaves either many-worlds or a quantum logic/quantum Bayesian interpretations as serious contenders [1]. These interpretations aren't crank fringe nonsense. They are almost inevitable outcomes of seriously considering the implications of the theory.
I will say that personally, I find many-worlds to focus excessively on the Schrödinger-picture pure state formulation of quantum mechanics. (At least to the level that I understood it — I expect there is literature on the connection with algebraic formulations, but I haven't taken the time to understand it.) So I would lean towards quantum logic–type interpretations myself.
The point of this comment was to say that many-worlds (or "multiverses", though I dislike the term) isn't nonsense. But it also isn't exactly the kind of sci-fi thing non-physicists might picture. Given how easy it is to misinterpret the term, however, I must agree with you that a self-aware science communicator would think twice about whether the term should be included, and that there may be not-so-scrupulous intentions at play here.
Quick edit: I realise the comment I've written is very technical. I'm happy to try to answer any questions. I should preface it by stating that I'm not a professional in the field, but I studied quantum information theory at a Masters level, and always found the philosophical questions of interest.
---
[0] Many people seem to believe that many-worlds also postulates the existence of unobservable parallel universes, but this isn't true. We observe the interaction of these universes every time we observe quantum interference.
While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function. They don't discretely separate from one another, either — it depends on your choice of basis. That choice is where decoherence comes in.
[1] And of course, there is the Copenhagen "interpretation" — preferred among physicists who would rather not think about philosophy. (A respectable choice.)
If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.
Also, your comment does not add any useful information. You assumed hype as the reason they mentioned parallel computing. That's just a bias in how you look at the world. Hype does help explain a lot of things, so it can be tempting to use it as a placeholder for anything you don't accept based on your current set of beliefs.
Let me add a recommendation for David Wallace's book The Emergent Multiverse - a highly persuasive account of 'quantum theory according to the Everett Interpretation'. Aside from the technical chapters, much of it is comprehensible to non-physicists. It seems that adherents to MW do 'not know how to refute an incredulous stare'. (From a quotation)
People call it "many worlds" because we can interact only with a tiny fraction of the wavefunction at a time, i.e. other "branches" which are practically out of reach might be considered "parallel universes".
But it would be more correct to say that it's just one universe which is much more complex than what it looks like to our eyes. Quantum computers are able to tap into this complexity. They make a more complete use of the universe we are in.
A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[85]
Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[86]
In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[87] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[88]
A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[89]
A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[90] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[90]
[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation#Pol...
Reminds me of the Aorist Rods from The Hitchhiker's Guide to the Galaxy.
Science is about coming up with the best explanations, irrespective of whether a large chunk of people believe them.

And the best explanations are the ones that are hard to vary, not the ones that are most widely accepted or easiest to accept given the current world view.
It could be that we are borrowing qubit processing power from Russell's quantum teapot.
or you mean specifically the parallel computation view?
The success on the random (quantum) circuit problem is really a validation of Feynman's idea, not Deutsch's: classical computers need 2^n bits to simulate n qubits, so we will need quantum computers to efficiently simulate quantum phenomena.
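A quick sketch of that cost: a full classical statevector simulation of n qubits stores 2^n complex amplitudes, so memory doubles with each added qubit.

```python
# Sketch of Feynman's point above: a classical statevector simulation of
# n qubits needs 2**n complex amplitudes, so memory doubles per qubit.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):  # complex128
    """Memory needed to hold a full n-qubit statevector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, "qubits:", statevector_bytes(n) / 2**30, "GiB")
# 30 qubits is already 16 GiB; around 50 qubits exceeds any classical machine.
```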
Maybe A wasn't the most efficient algorithm for this universe to begin with?
That's in line with a religious belief. One camp believes one thing, other believes something else, others refuse to participate and say "shut up and calculate". Nothing wrong with religious beliefs of course, it's just important to know that is what it is.
A simple counterexample is superdeterminism, in which the different measurement outcomes are an illusion and instead there is always a single pre-determined measurement outcome. Note that this does not violate Bell's inequality for hidden variable theories of quantum mechanics, as Bell's inequality only applies to hidden variables uncorrelated to the choice of measurement: in superdeterminism, both are predetermined so perfectly correlated.
Just to be clear, where in the Schrödinger equation (iħψ̇ = Hψ) is the "multiverse"?
The Copenhagen interpretation is just "easier" (like: oops, all our calculations about the universe don't seem to fit, let's invent "dark matter"), whereas the correct explanation makes any real-world calculation practically impossible (thus ending most further study in physics), as any atom depends on every other atom at any time.
Doesn't this also mean that other universes have civilizations that could potentially borrow capacity from our universe, and if so, what would that look like?
Tangentially related, but there's a great Asimov book about this called The Gods Themselves (fiction).
That being said, I think the two most commonly preferred interpretations of quantum mechanics among physicists are 'Many Worlds' and 'I try not to think about it too hard.'
I don't know much about multiverse, but we need something external to explain the magic we uncover.
Energy and quantum mechanics are really cool but dense to get into. Like Planck, I suspect there's a link between consciousness and matter. I also think our energy doesn't cease to exist when our human carcass expires.
If it's not, what would be your explanation for this significant improvement then?
In view of these and other findings my conclusion is that Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.
On the other hand, the question is what does "real QC" mean? Current QCs perform very limited and small computations; they lack things like quantum memory. The large versions are extremely impractical to use, in the sense that they run for thousandths of a second and take many hours to set up for a run. But that doesn't mean that the physical effects they use/capture aren't real.
Just a long long way from practical.
- quantum physics are real, this isn't about debating that. The theory underpinning quantum computing is real.
- quantum annealing is theoretically real, but not the same "breakthrough" that a quantum computer would be. D-Wave and Google have made these.
- All benchmark computations have been about simulating a smaller quantum computer or annealer, which these systems can do faster than a brute-force classical search. These are literally the only situations where "quantum supremacy" exists.
- There is literally no claim of "productive" computation being made by a quantum computer. Only simulations of our assumptions about quantum systems.
- The critical gap is "quantum error correction": proof that they can use many error-prone physical qubits to simulate a smaller system with a lower error rate. There isn't proof yet that this is actually possible.
The result they are claiming here, that they have achieved critical error correction, would be the single most groundbreaking result we could have in quantum computing. Their evidence does not satisfy that burden of proof. They also only claim to have one logical qubit, which is intrinsically useless on its own, and they don't examine the costs of simulating multiple interacting qubits.
https://scottaaronson.blog/?p=8310#comments
and here
https://scottaaronson.blog/?p=8329
though I bet he will have more to say now that the paper is officially out.
“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”
Achieving this turns the normal problem of scaling quantum computation upside down.
The scaling problem is multifaceted. IMHO the physical qubits are the biggest barrier to scaling.
In theory, theory and practice are the same.
Google's announcement is legit, and is in line with what theory and simulations expect.
Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time it takes to do a given calculation? We need to cool the quantum chip in our universe; how are the (n-1)verses cooling on their end?
What if it's already happening to our universe? And that is what black holes are? Or other cosmology concepts we don't understand?
Maybe a great filter is your inability to protect your universe from quantum technology from elsewhere in the multiverse ripping yours up?
Maybe the future of sentience isn't fighting for resources on a finite planet, or consuming the energy of stars, but fighting against other multiverses.
Maybe the Dark Forest Defence is a decision to isolate your universe from the multiverse, destroying its ability to participate in quantum computation, but also extending its lifespan.
(I don't believe ANY of this, but I'm just noting the fascinating science fiction storylines available)
DE is some sort of entropy that is being added to our cosmos in an exponential way over cosmic time. It began at a point a few billion years into our history.
I found it an interesting read and hadn't heard the term before, but it's exactly the kind of nerdy serendipity I come to this site for!
I think string theory's ideas about extra curled-up dimensions are far more likely places to look. You've already got an infinite instantaneous-energy problem with multiverses, let alone entropy-transfer considerations.
[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation
I've followed Many worlds & Simulation theory a bit too far and I ended up back where I started.
I feel like the most likely scenario is we are in an AI (kinder)garden, being grown for future purposes.
So God is real, heaven is real, and your intentions matter.
Obviously I have no proof...
How do you reach that conclusion?
Characters in The Sims technically have us human players as gods, but that doesn't mean that when we uninstall the game those characters get to come into our earthly (to them) heaven, or face any consequences for actions performed during the simulation.
If you imagine simulations we can build ourselves, such as video games, it's not hard to add something at the edge of the map that users are prevented from reaching and have the code send "this thing is massive and powerful" data to the players. Who's to say that the simulation isn't actually focussed on earth, and everything including the sun is actually just a fiction designed to fool us?
- https://thomasvilhena.com/2019/11/quantum-computing-for-prog...
That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.
For a rough but well-sourced overview, see Wikipedia: https://en.wikipedia.org/wiki/Threshold_theorem
For a review paper on surface codes, see A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, “Surface codes: Towards practical large-scale quantum computation,” Phys. Rev. A, vol. 86, no. 3, p. 032324, Sep. 2012, doi: 10.1103/PhysRevA.86.032324.
Not sure why you would say that? This sort of exponential suppression of errors is exactly how quantum error correction works and why we think quantum computing is viable. Source: have worked on quantum error correction for a couple of decades. Disclosure: I work on the team that did this experiment. More reading: lecture notes from back in the day explaining this exponential suppression https://courses.cs.washington.edu/courses/cse599d/06wi/lectu...
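To make that suppression concrete, here's a toy model of below-threshold scaling: each increase of the code distance d by 2 divides the logical error rate by a factor Λ. The Λ value and base error rate below are illustrative placeholders, not numbers from the experiment (the preprint only reports Λ > 2).

```python
# Toy model of exponential error suppression in below-threshold QEC:
# increasing the code distance d by 2 divides the logical error rate by
# a suppression factor Lambda (> 1). Placeholder numbers, not the
# experiment's measured values.
def logical_error_rate(d, eps_d3=3e-3, lam=2.0):
    """Logical error per cycle at odd distance d >= 3 (toy model)."""
    return eps_d3 / lam ** ((d - 3) / 2)

for d in (3, 5, 7, 11, 15):
    print(d, logical_error_rate(d))
# Linear growth in d buys exponential suppression of logical error,
# which is why scaling up below threshold is so significant.
```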
Remember macroscopic objects have ~10^23 ≈ 2^76 particles, so until 76 qubits are reached and exceeded, I remain skeptical that the quantum system actually exploits an exponential Hilbert space, instead of the state being classically encoded by the particles somehow. I bet Google is struggling right at this threshold and they just don't announce it.
I am not sure about RCS as the benchmark, as I'm not sure how useful it is in practice. It just produced really nice numbers. If I had a few billion in pocket change lying around, would I buy this to run RCS really fast? Nah, probably not. I'll get more excited when they factor numbers at a rate that would break public key crypto. For that I would spend my pocket change!
The error correction is producing a single logical qubit of quantum memory, i.e. a single qubit with no gates applied to it.
Meanwhile, the random circuit sampling uses physical qubits with no error correction, and is used as a good benchmark in part because it can prove "quantumness" even in the presence of noise.[1]
[1] https://research.google/blog/validating-random-circuit-sampl...
> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca 50 qubit chip.

In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made, and a few years later a group said they did it on a conventional computer in a similar time.
The RCS is a common benchmark with no practical value, as is stated several times in the blog announcement as well. It's used because if a quantum computer can't do that, it can't do any other calculation either.
The main contribution here seems to be what they indeed put first, which is the error correction scaling.
She doesn't even say that this isn't a big leap (she says it's very impressive - just not the sort of leap that means that there are now practical applications for quantum computers, and that a pinch of salt is required on the claim of comparisons to a conventional computer due to the 2019 paper with a similar benchmark).
This was a fascinating watch, and not the kind of content that is easy to find. Besides videos like that one, I enjoy her videos as a fun way to absorb critical takes on interesting science news.
Maybe she is controversial for being active and opinionated on social media, but we need more science influencers and educators like her, who don't just repeat the news without offering us context and interpretation.
And I can't blame her for adopting this trend, in many cases it is the difference between surviving or not on YouTube nowadays.
> Of course, as happened after we announced the first beyond-classical computation in 2019, we expect classical computers to keep improving on this benchmark
As IBM showed, their estimates of classical computer time are pulled out of their a**es.
As far as I'm aware, problems that benefit from quantum computing have their own formal complexity class, so it's also not like you have to consider Sabine's or any other person's thoughts and feelings on the subject - it is formally demonstrated that such problems exist.
Whether the real world applications arrive or not, you can speculate for yourself. You really don't need to borrow the equally unsubstantiated opinion of someone else.
On the other hand it's actually not completely necessary to have a superpolynomial quantum advantage in order to have some quantum advantage. A quantum computer running in quadratic time is still (probably) more useful than a classical computer running in O(n^100) time, even though they're both technically polynomial. An example of this is classical algorithms for simulating quantum circuits with bounded error whose runtime is like n^(1/eps), where eps is the error. If you pick eps=0.01 you've got a technically polynomial-runtime classical algorithm, but its runtime is going to be n^100, which is likely very large.
> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.
Does anyone know?
But I don't really have a feel for what's going on, really. How many quantum computers are there? Is there anything actually capable of doing more than being an ongoing research prototype? Any educated guesses about how far along some non-public projects could be by now? Like, is it possible that some secret CIA project is further ahead than what we know, or is it even more unlikely and farther away than fusion power? Or maybe it's more comparable to cold fusion?
I know, that this kinda exists as an idea, and apparently somebody's working on it, but that's pretty much it.
Not sure if they are close in terms of specs, but it looks like they are a viable solution and have seen increased utilization over the last year... Seems both are pretty interesting to keep an eye on.
Would love it if someone could weigh in.
> Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe
There are a lot of critiques of academia, in particular that it’s so grant-obsessed you have to stay focused on your next grant all the time. This environment doesn’t seem to reward solving big problems so much as paper production to prove the last grant did something. Yet ostensibly we fund fundamental public research precisely for fundamental changes. The reality seems to be that the traditional funding model creates incremental progress within existing paradigms.
Around 50% of our time was spent working in Overleaf making small improvements to old projects so that we could submit to some new journal or call-for-papers. We were always doing peer review or getting peer reviewed. We were working with a lot of 3rd-party tools (e.g. FPGAs, IBM Q, etc). And our team was constantly churning due to people getting their degrees and leaving, people getting too busy with coursework, and people deciding they just weren't interested anymore.
Compare that to the corporate labs: They have a fully proprietary ecosystem. The people who developed that ecosystem are often the ones doing research on/with it. They aren't taking time off of their ideas to handle peer-review processes. They aren't taking time off to handle unrelated coursework. Their researchers don't graduate and start looking for professor positions at other universities.
It's not surprising in the slightest that the corporate labs do better. They're more focused and better suited for long-term research.
Because in product development, there can be short-sighted industry decisions based on quarterly returns. I've also seen a constant need to justify outcomes based on KPIs etc, and constantly justifying your work, etc.
In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.
I’ve seen lots of people dismissing this as if it isn’t impressive or important. I’ve watched one video where the author said in a deprecating manner “quantum computers are good for just two things: generating random numbers and simulating quantum systems”.
It’s like saying “the thing is good for just two things: making funny noises and producing infinite energy”.
(Also, generating random numbers is pretty useful, but I digress)
A quote from the article is especially ludicrous:

> to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges
You don't need a quantum computer to do this. We can solve housing and food scarcity today, arguably our greatest challenges. Big tech has been claiming that it's going to solve all of our problems for decades now and it has yet to put up.
If you want this type of technology to be made and do actual good, we need publicly funded research institutions. Tech won't save us.
If history is any guide we'll soon see that there are problems with the fidelity (the system they use to verify that the results are "correct") or problems with the difficulty of the underlying problem, as happened with Google's previous attempt to demonstrate quantum supremacy [1].
[1] https://gilkalai.wordpress.com/2024/12/09/the-case-against-g... -- note that although coincidentally published the same day as this announcement, this is talking about Google's previous results, not Willow.
Can someone explain to me how he made the jump from "we achieved a meaningful threshold in quantum computing performance" to "the multiverse is probably real"?
What computation would that be?
Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?
To me that sounds a bit like saying my "sand computer" (hourglass) is way faster than a classical computer, because it'd take a classical computer trillions of years to exactly simulate the final position of every individual grain of sand.
Sure, it proves that your quantum computer is actually a genuine quantum computer, but it's not going to be topping the LINPACK charts or factoring large semiprimes any time soon, is it?
Ongoing research.
The main idea of quantum machine learning is that qubits make an exponentially high-dimensional space with linear resources, so they can store and compute on a lot of data easily.
However, getting the data in and results out of the quantum computer is tricky, and if you need many iterations in your optimization, that may destroy any advantage you have from using quantum computers.
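To make that data-loading bottleneck concrete, here's a minimal numpy sketch of amplitude encoding (the function name is mine, purely illustrative): n qubits can hold a 2^n-dimensional vector of amplitudes, but a single measurement only hands back one basis index, with probability given by the squared amplitude.

```python
import numpy as np

def amplitude_encode(data):
    """Encode a length-2^n classical vector as the amplitudes of an n-qubit state."""
    data = np.asarray(data, dtype=complex)
    n = int(np.log2(len(data)))
    assert len(data) == 2 ** n, "length must be a power of two"
    state = data / np.linalg.norm(data)  # quantum states must be normalized
    return n, state

# 3 qubits hold an 8-dimensional vector; 30 qubits would hold ~10^9 amplitudes.
n, state = amplitude_encode([1, 2, 3, 4, 5, 6, 7, 8])
print(n, len(state))  # 3 qubits, 8 amplitudes
```

The catch is exactly the in/out problem: preparing `state` on real hardware and reading anything useful back out can each cost enough to erase the exponential storage win.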
AI is quite good in producing the meaningless drivel needed for quantum computing related press releases.
AI is limited in part by the computation available at training and runtime. If your computer is 10^X times faster, then your model is also "better". That's why we have giant warehouses full of H100 chips pulling down a few megawatts from the grid right now. Quantum computing could theoretically allow your phone to do that.
Are there AI algorithms that would benefit from quantum?
The article concludes by saying that the former does not have practical applications. Why are they not using benchmarks that have some?
I'm not sure how to put it quantitatively, but my impression from listening to experts give technical presentations is that the breaking-rsa-type algorithms are a decade or two away.
This is very soon from a security perspective, as all you need is to store current data and break it in the future. But it is not soon enough to use for benchmarking current systems.
It's not something that new, I like it.
A much simpler explanation is that your benchmark is severely flawed.
But to put it into context: these numbers are likely accurate, but represent the time it would take for a very naive classical algorithm (possibly brute force, I am unsure).
For example, the previous result claimed it would take Summit 10,000 years to do the same calculation as the Sycamore quantum chip. However, other researchers were able to reproduce results classically using tensor-network-based methods in 14.5 days using a "relatively small cluster". [1]
[1] G. Kalachev, P. Panteleev, P. Zhou, and M.-H. Yung, “Classical sampling of random quantum circuits with bounded fidelity,” arXiv.org, https://arxiv.org/abs/2112.15083 (accessed Dec. 9, 2024).
From ChatGPT: "with n qubits a QC can be in a superposition of 2^n different states. This means that QCs can potentially perform computations on an exponential number of inputs at once"
I don't get how the first sentence in that quote leads to the second one. Any pointers to read to understand this?
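The jump the ChatGPT quote glosses over is that the superposition exists, but you can't read it all out: a measurement gives one random outcome, so useful algorithms need interference to boost the right answers. A toy statevector simulation (plain numpy, no quantum library assumed) shows what the superposition actually looks like:

```python
import numpy as np

# Statevector of n qubits: 2^n complex amplitudes. Start in |00...0>.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0

# One-qubit Hadamard gate; applying it to every qubit yields a uniform
# superposition over all 2^n basis states ("all inputs at once").
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    # Build I x ... x H x ... x I acting on qubit q, then apply it.
    op = np.eye(1)
    for i in range(n):
        op = np.kron(op, H if i == q else np.eye(2))
    state = op @ state

probs = np.abs(state) ** 2
print(probs)  # every outcome has probability 1/8
```

So all 2^n inputs are "there", but sampling this state just returns a uniformly random 3-bit string; the art of quantum algorithms is arranging the amplitudes so the answer you want dominates.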
Only asymmetric cryptography is threatened. There is no realistic threat to symmetric encryption like AES.
If you are encrypting your cloud data with ed25519 or RSA, then yes, a quantum computer could theoretically someday crack them.
aka everything we use daily
The other tradeoff is that quantum computers are much noisier than classical computers. The error rate of classical computers is exceedingly low, to the extent that most programmers can go their entire career without even considering it as a possibility. But you can see from the figures in this post that even in a state of the art chip, the error rates are of order ~0.03--0.3%. Hopefully this will go down over time, but it's going to be a non-negligible aspect of quantum computing for the foreseeable future.
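A back-of-the-envelope sketch of why those error rates bite (assuming independent per-gate errors, which is a simplification):

```python
# With per-gate error probability p, the chance a circuit of m gates
# runs with no error at all is roughly (1 - p)**m.
def no_error_prob(p, m):
    return (1 - p) ** m

for p in (0.003, 0.0003):  # ~0.3% and ~0.03%, the range mentioned in the post
    print(p, round(no_error_prob(p, 1000), 3))
```

At a 0.3% error rate, a mere 1000-gate circuit succeeds only ~5% of the time, which is why error correction (trading many physical qubits for one much better logical qubit) is the headline result here.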
Also of note: P is in BQP, but it is not proven that BQP != P. Some problems like factoring have a known polynomial time algorithm, and the best known classical algorithm is exponential, which is where you see these massive speedups. But we don't know that there isn't an unknown polynomial time classical factoring algorithm and we just haven't discovered it yet. It is a (widely believed) conjecture, that there are hard problems solved in BQP that are outside P.
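To put rough numbers on that exponential-vs-polynomial gap for factoring, here's a sketch comparing the GNFS heuristic work factor with a generic (log N)^3 polynomial; constants and the o(1) term are ignored, so treat the outputs as orders of magnitude only:

```python
import math

def gnfs_cost(bits):
    """Heuristic GNFS work factor exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)),
    ignoring constants and the o(1) term."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(bits):
    """Shor's algorithm is polynomial; a rough stand-in is (log N)^3 gates."""
    return bits ** 3

for b in (512, 1024, 2048):
    print(b, f"{gnfs_cost(b):.1e}", f"{shor_cost(b):.1e}")
```

For 2048-bit keys the classical work factor comes out around 10^35 versus roughly 10^10 for the polynomial stand-in, which is the asymmetry the "massive speedup" claims rest on.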
I've pushed that off for a long time since I wasn't completely convinced that quantum computers actually worked, but I think I was wrong.
Also, my alma mater made the Quantum Enigmas series, which is appropriate for high-school students (it's also interesting if you have no prior knowledge of quantum computing): https://www.usherbrooke.ca/iq/quantumenigmas/ (it also uses IBM's online learning platform)
Quantum computing will surely have amazing applications that we cannot even conceive of right now. The earliest and maybe most useful applications might be in material science and medicine.
I'm somewhat disappointed that most discussions here focus on cryptography or even cryptocurrencies. People will just switch to post-quantum algorithms and most likely still have decades left to do so. Almost all data we have isn't important enough that intercept-now-decrypt-later really matters, and if you think you have such data, switch now...
Breaking cryptography is the most boring and useless application (among actual applications) of quantum computing. It's purely adversarial, merely an inconsequential step in a pointless arms race that we'd love to stop, if only we could learn to trust each other. To focus on this really betrays a lack of imagination.
As best I understand, it’s not clear yet whether quantum computing will ever have any practical applications.
Furthermore, there has already been a great deal of work identifying potential applications for a quantum computer, so I’d say we’ve got a fair idea of what you could do with one if it ever exists.
> After reading this article, how has your perception of Google changed? Gotten better Gotten worse Stayed the same
Otherwise there is no knowing if the accomplishment is really significant or not.
" Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s "
Standard benchmark in what sense. Well, it was chosen for task where quantum computer would have better performance.
I am not saying this is nothing. Maybe, use more reserved words, e.g. "special quantum oriented benchmark" or something.
When I think of standard benchmark, I am thinking more common scenarios, e.g. searching, sorting, matrix multiplication.
Idk enough about quantum computing to even understand this... but a technology that makes, say, AES or Blowfish suddenly trivial to crack would very likely change the world
I'm far more scared when tech-bros like Musk land on Mars and contaminate stuff we might not even be able to detect yet.
Makes sense, or doesn't it? What's your take on the multiverse theory?
but on a quantum computer, Grover's Algorithm allows such a search to be performed in O(N^0.5) time.
So Quantum Computing, could bring us a future where, when you perform a Google search for a word, the web pages returned actually contain the word you searched for.
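Search jokes aside, Grover's algorithm is simple enough to simulate with a statevector. This toy sketch (plain numpy, everything here is illustrative) finds 1 marked item out of N = 8 in about pi/4 * sqrt(N) ≈ 2 iterations, versus ~N/2 classical probes on average:

```python
import numpy as np

def grover(n_qubits, marked):
    """Statevector simulation of Grover search over N = 2^n items."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: flip the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return np.abs(state) ** 2                 # measurement probabilities

probs = grover(3, marked=5)
print(int(np.argmax(probs)))  # 5: the marked item dominates the distribution
```

Note the quadratic (not exponential) nature of the speedup: this is why Grover halves effective symmetric key lengths rather than breaking them outright.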
Lol! I'm not gonna put a kagi plug here...
Wait... what? Google said this and not some fringe crackpot?
Current complexity theory suggests that BQP, the class of problems solvable by quantum computers, does not encompass NP-complete problems. Quantum computers may aid in approximations or heuristics for NPC problems but won’t fundamentally resolve them in polynomial time unless NP ⊆ BQP, which remains unlikely.
I do think ai algorithms could be built that quantum gates could be fast at, but I don’t have any ideas off the top of my head this morning. If you think of AI training as searching the space of computational complexity and quantum algorithms as accessing a superposition of search states I would guess there’s an intersection. Google thinks so too - the lab is called quantum ai.
Note that BQP is not "efficient" in a real-world sense, but for the theoretical study of quantum computing, it's a good first guess
"What is their mission? Cure cancer? Eliminate poverty? Explore the universe? No, their goal: to sell another fucking Nissan." --Scott Galloway
Yet another example of why Google is essentially not going anywhere or ‘dying’, as many have been proclaiming these days.
In this day and age, I feel an immediate sense of distrust to any technologist with the "Burning Man" aesthetic for lack of a better word. (which you can see in the author's wikipedia profile from an adjacent festival -> https://en.wikipedia.org/wiki/Hartmut_Neven, as well as in this blog itself with his wristbands and sunglasses -> https://youtu.be/l_KrC1mzd0g?si=HQdB3NSsLBPTSv-B&t=39)
In the 2000s, any embrace of alternative culture was a breath of fresh air for technologists - it showed they cared about the human element of society as much as the mathematics.
But nowadays, especially in a post-truthiness, post-COVID world, it comes off in a different way to me. Our world is now filled with quasi-scientific cults. From flat earthers to anti-vaxxers, to people focused on "healing crystals", to the resurgence of astrology.
I wouldn't be saying this about anyone in a more shall we say "classical" domain. As a technologist, your claims are pretty easily verifiable and testable, even on fuzzy areas like large language models.
But in the Quantum world? I immediately start to approach the author of this with distrust:
* He's writing about multiverses
* He's claiming a quantum performance for something that would take a classical computer septillions of years.
I'm a layman in this domain. If these were true, should they be front page news on CNN and the BBC? Or is this just how technology breakthroughs start (after all the Transformer paper wasn't)
But no matter what I just can't help but feel like the author's choices harm the credibility of the work. Before you downvote me, consider replying instead. I'm not defending feeling this way. I'm just explaining what I feel and why.
I guess something to think about is that amongst a group like the "burners" there is huge variety in individual experience and skill. And even within a single human mind it's possible to have radically groundbreaking thoughts in one domain, and simultaneously be a total crackpot in another. Linus Pauling and the vitamin C thing comes to mind. There's no such thing as an average person!
I guess we'll see what the quantum experts have to say about this in the weeks to come =)
> * He's writing about multiverses
> * He's claiming a quantum performance for something that would take a classical computer septillions of years.
> I'm a layman in this domain
I think your skepticism is well-founded. But as you learn more about the field, you learn what parts are marketing/hype bullshit, and what parts are not, and how to translate from the bullshit to the underlying facts.
IMO:
> He's writing about multiverses
The author's pet theory, no relevance to the actual science being done.
> He's claiming a quantum performance for something that would take a classical computer septillions of years.
The classical computer is running a very naive algorithm, basically brute-force. It is very easy to write a classical algorithm which is very slow. But still, in the field, it takes new state-of-the-art classical algorithms run on medium size clusters to get results that are on-par with recent quantum computers. Not even much better, just on-par.
> Or is this just how technology breakthroughs start (after all the Transformer paper wasn't)
You could say that. It's not truly a breakthrough, but it is one more medium-size step in a rapidly advancing field.
The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.
Yes, Hartmut has a style, yes, he enjoys his lifestyle, no, he's not a quack. You don't have to believe me, and I don't expect that you will, but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.
Your comment here says far more about you than it says about Hartmut Neven.
I don't want to put you on the spot too much, but can you speak to why he included the part about many-worlds in this blog post?
I don't know enough about Google to say if maybe someone else less technical wrote that, or if he is being pressured to put sci-fi sounding terms in his posts, or if he believes Google's quantum computer is actually testing many-worlds, or some other reason I can't think of.
I picked my words very carefully and I would appreciate if you responded to what I said, not what you think I implied.
I specifically called out - I'm having feelings of bias. That in a field full of quack science and overpromises and underdelivery, I am extraordinarily suspicious of anyone who I feel might be associated with a shall we say "less than rigorous relationship with scientific accuracy". This person's aesthetic reminds me of this.
> The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.
You couldn't be more wrong. I'm a repeat Burner throughout the 2000's (though it's been a decade), and I've been to a dozen regional Burner events. I know many Burners both in the tech industry and outside of it.
So I actually speak with some experience. I know wonderful people who are purely artists and are not scientifically/technologically inclined - and they're great. I also know deep technologists for whom Burning man is purely an aesthetic preference - a costume not an outfit. Something to pretend to be for a little while but that otherwise has no bearing on their outside life.
And I unfortunately know those whose brainrot ends up intertwining: crypto evangelists who find healing crystals just as groundbreaking as the blockchain. It's this latter category that I am most suspicious of, and what worries me when I see a person presented as an authoritative leader in the quantum computing domain echo it in their external presentation.
I led with an acknowledgement that I am judging a book by its cover, which one ought never to do. But I think it is worth pointing out, because respectability in a cutting-edge field is important, lest you end up achieving technological breakthroughs that don't actually change society at all (as already happened with Google Glass).
> You don't have to believe me, and I don't expect that you will,
Why would you expect that I wouldn't?
> but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.
That's fantastic to hear! You have direct evidence contradicting the assumptions generated by my first impression. This is all that matters, and all you had to say.
So really what is being claimed is that classical computers can't easily simulate quantum ones. But is that really surprising?
What would be surprising would be that kind of speedup vs classical on some kind of general optimization algorithm. I don't think that is what they are claiming though, even if it does kind of seem like it's being presented that way.