From the paper:
> For lithium hydride, LiH, we were not able to reproduce closely the ground state energy with the currently available hardware. When accounting for 3 orbitals and using a scaling factor of r = 4, we already had to use 1558 qubits, which is a large fraction of available qubits. To summarize: the investigated method in general works, but it might be difficult to apply it to larger systems.
It would be great if the article said they solved one and made good progress on techniques for the second that will likely work on next generation hardware, but the article didn't say that. I don't see how it being a matter of scale makes this acceptable. It's like the U.S. national labs unveiling the world's first exaflop supercomputer, with a footnote indicating that in fact the computer is only 100 petaflops at the moment.
Can you add context?
EDIT: according to the paper, the initial energy at each point was found using the Hartree-Fock method with a minimal STO-3G basis set. This is one of the simplest and oldest approaches to this sort of calculation on a conventional computer. For these starting calculations they used Psi4 [1] by way of OpenFermion [2]. For the H2 molecule, their additional DWave calculations improved the accuracy of the distance-energy curve over the baseline Hartree-Fock/STO-3G calculations. For LiH, there was no improvement (Figure 3). The total runtime of their approach was therefore that of the conventional approach plus an additional series of calculations that did not yield improvements in the case of LiH.
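For anyone curious what that baseline looks like in code, here's a minimal sketch (not the paper's actual code) of a Hartree-Fock/STO-3G scan for H2 using OpenFermion's Psi4 plugin. Import paths differ across OpenFermion versions, and the bond lengths below are an illustrative grid, not the paper's:

```python
# Minimal sketch (not the paper's code): Hartree-Fock/STO-3G baseline
# for H2 via OpenFermion's Psi4 plugin. Requires openfermion, openfermionpsi4
# and Psi4 itself; import paths differ across OpenFermion versions.
from openfermion.chem import MolecularData
from openfermionpsi4 import run_psi4

for bond_length in [0.5, 0.7414, 1.0, 1.5, 2.0]:  # angstroms, illustrative grid
    geometry = [('H', (0.0, 0.0, 0.0)), ('H', (0.0, 0.0, bond_length))]
    molecule = MolecularData(geometry, basis='sto-3g', multiplicity=1, charge=0)
    molecule = run_psi4(molecule, run_scf=True)    # SCF = Hartree-Fock
    print(bond_length, molecule.hf_energy)         # energy in hartrees
```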
I’d be curious: is there a simple formula for calculating the “effective universal qubits” of the D-Wave?
2,048 indeed sounds like a lot of qubits, based on my extremely limited knowledge of quantum computing, but with only ~6k connections versus the n(n-1)/2 ≈ 2.1 million of a fully connected device, is it just a marketing gimmick?
Why is it useful to push the bit count so high if the connectivity is so limited?
In gate-model architectures, limited connectivity forces the insertion of SWAP gates to move qubit states around, which increases the circuit depth.
Circuit depth, with imperfect qubits and gates, is currently the limiting factor: we don't have practical error correction for chips of today's size. Hence one can only perform a certain number of operations before the computation decoheres and becomes useless.
As for full connectivity: this is actually a benefit over gate-model hardware, since chains of physical qubits can simulate fully connected logical qubits; on the order of 2*sqrt(Q/2) of them, i.e. about 64 on the existing 2,048-qubit hardware (see the sketch after this comment).
As for the high bit count: sparse, structured problems can make very good use of the existing connectivity. Simulations of a cubic lattice are nearly competitive with modern classical hardware, for example. IIRC they can factor 16-bit numbers too; these problems are "quasiplanar" and have relatively low connectivity requirements.
Full connectivity actually brings some huge engineering challenges; D-Wave's strategy is more "this is what's actually possible today" than "we're going to make universal quantum computers, don't ask us about error correction, crosstalk, or calibration of large-scale microwave circuits".
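To make the connectivity point concrete, here's a small sketch using dwave-networkx and minorminer. It uses a toy C4/K_16 instance rather than the full 2,048-qubit chip so the heuristic embedder finishes quickly:

```python
# Sketch of the connectivity claim above, using dwave-networkx and minorminer.
# A Chimera graph with Q physical qubits can host a complete graph on roughly
# 2*sqrt(Q/2) logical qubits: 2*sqrt(2048/2) = 64 for the full C16 chip.
# A small C4 instance (128 qubits -> K_16) is used here so the heuristic is fast.
import networkx as nx
import dwave_networkx as dnx
import minorminer

target = dnx.chimera_graph(4)      # C4: 4x4 grid of K_{4,4} unit cells, 128 qubits
source = nx.complete_graph(16)     # K_16 = 2 * sqrt(128 / 2) logical qubits

# Each logical qubit is represented by a "chain" of physical qubits;
# an empty dict means this heuristic run failed and should be retried.
embedding = minorminer.find_embedding(source.edges, target.edges)
if embedding:
    print(max(len(chain) for chain in embedding.values()))  # longest chain
```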
I find this somewhat surprising.
If you think of AI code designed for GPUs, there I can see "yeah you can practice on a CPU". It'll suck but it'll work.
For quantum tech the entire sales pitch is that it's fundamentally different...doing what's near impossible on conventional hardware.
Yes, I realise he's talking about the library, so simulated annealing on a CPU I guess, but it still seems like a very strange comment in this context.
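For context, D-Wave's Ocean stack does make the "CPU as debugger" workflow explicit; a minimal sketch with a made-up toy QUBO:

```python
# Toy sketch of "annealing on a CPU": dwave-neal samples the same QUBO
# locally that one would submit to the hardware. The QUBO below is made up;
# it just rewards x0 != x1 (minimum energy -1).
import neal

Q = {('x0', 'x0'): -1, ('x1', 'x1'): -1, ('x0', 'x1'): 2}
sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample_qubo(Q, num_reads=100)
print(sampleset.first.sample, sampleset.first.energy)
# Swapping in EmbeddingComposite(DWaveSampler()) from dwave.system would
# target the real annealer with the same problem definition.
```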
The "quantum annealers" that D-Wave sells are not known to be more powerful than classical computers. For the moment, at best, they are interesting analog computers.
Quantum computers (either the circuit model or the equivalent quantum adiabatic model) are conjectured to be much more powerful than classical computers, but they are quite a bit different from D-Wave's quantum annealers (for starters, they are supposed to keep all their qubits in a pure entangled state, which D-Wave definitely cannot do). There are various experimental hardware platforms able to keep a handful of qubits entangled, but we will need thousands (if not millions) before being able to do anything useful with them.
But if the performance of quantum machines is theoretically going to increase exponentially every X months for the next two decades, and certain problems really do shift from exponential- to polynomial-time solutions, then yes, eventually the "debugger" will not be useful for actually trying out your solver.
When people talk about "Quantum Computers" that can factor the products of large primes, they are referring to a Universal Gate Quantum Computer. As of today, the largest Universal Gate Quantum Computer has 72 qubits.
The D-Wave Quantum Annealers are essentially special-purpose devices that perform Quantum Annealing. What they refer to as 'Qubits' are very different from the entangled Qubits of a Universal Gate Quantum Computer. To the best of my knowledge (and the paper explicitly distinguishes between the two types), it has yet to be demonstrated that Quantum Annealing is equivalent to a Universal Gate Quantum Computer (it is generally suspected not to be), whether it sits in its own complexity class, or even whether it provides any speedup over classical computers.
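For the curious: the native problem a quantum annealer minimizes is an Ising objective, E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j with spins s_i in {-1, +1}. A toy sketch with dimod, where the couplings are made up and the brute-force exact solver stands in for the hardware:

```python
# Toy sketch: the annealer's native objective is an Ising energy
#   E(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j,  s_i in {-1, +1}.
# Couplings here are made up; dimod's brute-force ExactSolver checks the result.
import dimod

h = {0: 0.0, 1: 0.0, 2: 0.0}
J = {(0, 1): -1.0, (1, 2): -1.0}   # ferromagnetic couplers: neighbouring spins align
bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
print(dimod.ExactSolver().sample(bqm).first)  # all -1 or all +1, energy -2.0
```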
2) Marketing buzz from being seen as a forward thinking research company instead of a place that engineers their way out of costly regulatory compliance.
Given that this came from their SF group, it seems like a low-cost, high-buzz exercise, and, if it had worked out, they might have gained some interesting methodologies for chemistry/structural improvement.
It seems like a win-win-win thing to me.
I mean, right now, the D-Wave computers seem too weak to be worth the expense compared to classical computers. So, it seems unreasonable for them to expect near-term practicality from this sort of investigation.
But in principle, if D-Wave systems greatly improve to the point that they're competitive with classical optimizers, then it'd be good for VW engineers to be able to leverage them to do stuff like, in this case, predict chemical properties.
This seems to be how the article sells it:
> “Our present work was a first field study of quantum chemistry problems on quantum annealing devices,” he says. “Our goal was to get a feeling for the bottlenecks of the problem. This in the end helps [us] to understand the underlying problems, and find new solutions or suitable subproblems.”
I mean, the controversy about D-Wave (if I recall correctly) was largely related to claims about it having achieved [quantum advantage over classical computing methods](https://en.wikipedia.org/wiki/Quantum_supremacy), along with misunderstandings about what kind of "quantum computer" it is.
However, it's generally accepted that the D-Wave machine is a working computer that exploits quantum effects; that much is uncontroversial.
In this paper, they reported using that uncontroversial optimization capability on some physics problems recast as optimization. And it worked basically as expected.
Probably better to look to other commentators on this one, at least until he has enough time to emotionally process the situation and come around to it. It's not so easy admitting you're wrong in the shtetl.
But your second sentence says "process the situation and come around to it" as if they've already proven anything.
Nobody has ever seriously claimed that the machine can't calculate things. Proof of it doing a calculation doesn't change the status quo.