In the short term, though, there's a big place for languages like C, C++, and Rust for work like simulation, which still needs to be done.
- In the NISQ era [1], circuits have limited depth and size. It doesn't matter so much which language (or even algorithm!) you use when N<1000.
- Simulating a circuit is expensive, but all the heavy lifting can be delegated to highly optimized C code. The most expensive part of Cirq's simulation is (or soon will be) a call to `numpy.einsum` [2]. (A toy sketch of that kind of contraction follows the footnotes below.)
1: https://arxiv.org/abs/1801.00862
2: https://github.com/quantumlib/Cirq/blob/24638f234704686c4bb6...
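For a sense of what that looks like under the hood, here's a minimal sketch (my own toy code, not Cirq's actual implementation) of applying a single-qubit gate to a state vector with `numpy.einsum`:

    import numpy as np

    def apply_gate(state, gate, target, n_qubits):
        # Toy sketch: reshape the flat state vector into a rank-n tensor and let
        # numpy.einsum (optimized C under the hood) contract the 2x2 gate against
        # the target qubit's axis. Not Cirq's actual code.
        psi = state.reshape([2] * n_qubits)
        labels = [chr(ord('b') + i) for i in range(n_qubits)]  # one index per qubit
        out = list(labels)
        out[target] = 'a'                                       # gate's output index
        spec = 'a' + labels[target] + ',' + ''.join(labels) + '->' + ''.join(out)
        return np.einsum(spec, gate, psi).reshape(-1)

    # Example: Hadamard on qubit 0 of |00> gives (|00> + |10>) / sqrt(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = np.zeros(4, dtype=complex); state[0] = 1
    print(apply_gate(state, H, target=0, n_qubits=2))

The point is just that the per-gate work boils down to one big tensor contraction, which numpy dispatches to fast compiled code.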
Merely pointing out this dichotomy doesn't do anything to help people understand when they do, or don't, need those things.
How about a comment along the lines of "Here's how to know when you need to use quantum computing, and how to know when you shouldn't"?
First, the fact that most of us don't need quantum computers doesn't mean we shouldn't feel inspired to learn about them. It's a thoroughly interesting subject. I don't believe I'll live to see practical quantum computers for most of the use cases they're hyped about now, but that didn't stop me from making them my research focus in graduate school.
Second, in many exceptionally well-moderated forums for critical discussion (e.g. /r/AskHistorians), there is a mandate in place that requires commenters to engage with their source material. This means it's not enough to link to something that's ostensibly accurate; you also need to critically clarify that material to make it accessible to other readers and contextually relevant. When a link is posted without that engagement, you force others to click through to decide for themselves not only why it's relevant, but why it's accurate.
Finally, it's not a novel insight. There are scores of comments repeating the same point for any number of hyped topics, from machine learning to blockchain to JavaScript frameworks to quantum computing. It's essentially a meme. But it's more insidious than a meme, because memes are obviously low effort and insufficiently novel. This is a middlebrow dismissal precisely because it appears intellectual, yet contributes no insight.
And this is the result: instead of discussing what might be a very interesting Python library for quantum computing, we're litigating the appropriateness of a dismissive top comment. What have we achieved?
A simple getting started post on a similar topic is here: https://medium.com/rigetti/how-to-write-a-quantum-program-in... which might be useful for comparison.
It doesn't take much (LOC) to implement a simulator. You can understand how one works by reading the source: https://github.com/adamisntdead/QuSimPy
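As a rough illustration of how small "not much LOC" can be, here is a toy sketch of the general idea (not QuSimPy's actual code): a naive state-vector simulator fits in a screenful of Python.

    import numpy as np
    from functools import reduce

    I = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])

    class ToySimulator:
        # Deliberately naive: builds the full 2^n x 2^n operator for every gate.
        def __init__(self, n_qubits):
            self.n = n_qubits
            self.state = np.zeros(2 ** n_qubits, dtype=complex)
            self.state[0] = 1.0                  # start in |00...0>

        def apply(self, gate, target):
            # Full operator is I (x) ... (x) gate (x) ... (x) I (Kronecker products).
            ops = [gate if i == target else I for i in range(self.n)]
            self.state = reduce(np.kron, ops) @ self.state

        def probabilities(self):
            return np.abs(self.state) ** 2

    sim = ToySimulator(2)
    sim.apply(H, 0)                    # Hadamard on qubit 0
    print(sim.probabilities())         # ~[0.5, 0, 0.5, 0]

Real simulators avoid materializing the full operator, but the core idea really is that small.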
- Drag-and-drop what-you-see-is-what-you-get UI instead of script-based. Smooths out the learning curve.
- Supports putting state displays in the middle of the circuit, so you can directly view normally-inaccessible information instead of inferring it from experience or algebra.
- Fast. It updates all displays interactively, as you edit the circuit. Very easy to experiment, e.g. just drag a gate around and see what it does in different places.
Definitely not the end of the world.
edit:// And they cannot solve any NP-complete problem in polynomial time. That is a common misconception and not supported by the facts.
edit2:// Researchers working on quantum computers actually don't believe they will go mainstream (partly because of their complexity, like needing to be cooled to near absolute zero), but instead expect them to be specialized systems available for rent over the internet, or something similar. More on the side of predicting complex systems like the weather than powering your smartphone. Then again, who thought the PC would go mainstream.
Current public-key crypto (both RSA and elliptic curve) happens to be one of those problems. However, there are cryptosystems that we don't know how to break with quantum computers, and breaking them probably isn't possible. These aren't in wide use yet, but they have been tested in production, e.g. by Google. If it becomes a problem, people can switch.
Second, actual existing quantum computers are too small to do much of anything. We are just hitting the point where they could start to become interesting. There are still engineering and theoretical challenges in making them really work.
All these quantum programming languages let you simulate a quantum computer, but doing so demands exponentially more resources as you add qubits. The advantage of a real quantum computer is that this would not be the case.
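To make the exponential cost concrete, here's a quick back-of-the-envelope calculation (assuming one complex128 amplitude, i.e. 16 bytes, per basis state):

    # Memory for a dense n-qubit state vector, 16 bytes (complex128) per amplitude.
    for n in (20, 30, 40, 50):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: 2**{n} amplitudes, ~{gib:,.2f} GiB")

Around 30 qubits you're at ~16 GiB, at 40 it's ~16 TiB, and at 50 it's ~16 PiB; that's the wall a real quantum computer sidesteps.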
You are confusing https://en.wikipedia.org/wiki/BQP with NP.
A quantum computer can efficiently solve certain problems that fall into a complexity class (BQP) believed to be distinct from the classes a classical computer can handle efficiently!
Given enough time (and memory), a classical computer can calculate everything a quantum computer can.
Both binary computers and quantum computers are equivalent in power to Turing machines.
Simulating Physics with Computers, Richard P. Feynman — https://www.researchgate.net/publication/254705307_RICHARD_F...
You can think of unitary matrices as the complex analogue of rotation matrices (https://en.wikipedia.org/wiki/Unitary_matrix), so what quantum logic gates are doing is "rotating" these vectors around in a complex space.
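A quick numeric sanity check of that intuition (plain numpy, not tied to any particular library): a unitary gate preserves the length of the state vector, just as a rotation would.

    import numpy as np

    # The Hadamard gate is unitary (U†U = I), so it preserves norms --
    # the "rotation in complex space" picture from the comment above.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    print(np.allclose(H.conj().T @ H, np.eye(2)))          # True
    psi = np.array([0.6, 0.8j])                            # a normalized state
    print(np.linalg.norm(psi), np.linalg.norm(H @ psi))    # 1.0 1.0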
Should we expect more abstractions from quantum computing models in the future? Quantum data structures, common quantum operations etc? Or are quantum algorithms too different from one another, or are "the quantum parts" of most quantum algorithms very compact? (Or do we try to keep them as compact as possible because of hardware constraints?)
I think there will be, though. I think 99% of the problem is just that we have no hardware yet. It's historically just been too hard to develop algorithms for things we don't have the ability to run yet. I've seen it personally in evolutionary computation [1], and the recent renaissance in AI and ML I believe was largely driven by getting to the point we could actually run these computations in some reasonable period of time. The breakthroughs probably could have happened sooner, except even if you theorized about Deep Learning, nobody would have even been able to use it.
[1]: A professor of mine told a heartbreaking (to me) story of carrying around a deck of cards between several institutions as he progressed through his early academic career, running an evolutionary computation in whatever spare time he could get over the literal years. My crappy, grad-student-grade personal laptop, a cheap piece of crap even for the time (I had to permanently clock the nominally 1GHz CPU down to 500MHz just to keep the thing from burning itself up), could have done the whole thing in minutes, if not seconds. Per Dijkstra, computer science may be about computers as much as astronomy is about telescopes, but I would observe astronomy is pretty hard to work on without telescopes in the end.
The first practical applications will be those that can be readily represented by simple operations on a modest number of qubits. As people practice and learn, sophistication will grow.
We need new representations and much better abstractions to get away from the low-level thinking we are currently promoting.