## Quantum Uncertainty

Technology Review has a pair of articles on D-Wave's adiabatic quantum computer. Quantum pioneer Seth Lloyd writes in "Riding D-Wave" about quantum computing in general, adiabatic quantum computing, and D-Wave's efforts to show that they've actually built a quantum computer.

Linked to that is Scott Aaronson’s article, “Desultory D-Wave,” in which Lloyd’s nail-biting is made a bit more plain. I hate giving away the punch line, but here’s what Aaronson sums up with:

> Let me be clear: I think that quantum computers are possible in principle, and that D-Wave's approach might even get us there. I've also met people from D-Wave; I don't think they're frauds. But the human capacity for self-deception being what it is, scientists train themselves to look for red flags, and D-Wave is pretty much a red-flag factory.

Beyond that, there's a new paper that finds problems not just in one *implementation* of quantum computing, but at its theoretical core. In "Operator Imprecision and Scaling of Shor's Algorithm," authors C. Ray Hill and George F. Viamontes claim that Shor's algorithm doesn't work at any interesting scale.

The reason is that errors in the quantum Fourier transform accumulate faster than quantum error-correcting codes can remove them, particularly when factoring the sort of numbers a sane person might use for a public key. Hill and Viamontes argue that it is not possible to factor a key much longer than 256 bits. Most important, the errors accumulate linearly with the number of quantum operations, and the number of operations grows polynomially with the size of the integer. My peeks at their error-rate graph lead me to guess that a hard limit is reached before you get to a 512-bit number, a size no longer considered interesting anyway, since conventional sieve methods can already factor it.
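The shape of that argument can be sketched in a few lines. This is a toy model only: the per-gate error rate and the cubic gate-count exponent are illustrative assumptions, not figures taken from Hill and Viamontes.

```python
# Toy model of the scaling claim: errors add linearly with the number
# of quantum operations, and the operation count grows polynomially
# (here assumed cubic) with the bit-length of the integer.
def total_error(n_bits, per_gate_error=1e-9):
    # Assumed gate count for factoring an n-bit integer with Shor's
    # algorithm; the 1e-9 per-gate error is likewise an assumption.
    gate_count = n_bits ** 3
    return per_gate_error * gate_count

for n in (256, 512, 1024):
    print(n, total_error(n))
```

Under these made-up constants the accumulated error grows eightfold with each doubling of the key size, which is the qualitative behavior the paper's argument turns on: polynomial growth in operations means any fixed per-gate imprecision eventually swamps the computation.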

> Shor's algorithm (SA) is a quantum algorithm for factoring integers. Since SA has polynomial complexity while the best classical factoring algorithms are sub-exponential, SA is cited as evidence that quantum computers are more powerful than classical computers. SA is critically dependent on the Quantum Fourier Transform (QFT) and it is known that the QFT is sensitive to errors in the quantum state input to it. In this paper, we show that the polynomial scaling of SA is destroyed by input errors to the QFT part of the algorithm. We also show that Quantum Error Correcting Codes (QECC) are not capable of suppressing errors due to operator imprecision and that propagation of operator precision errors is sufficient to severely degrade the effectiveness of SA. Additionally we show that operator imprecision in the error correction circuit for the Calderbank-Shor-Steane QECC is mathematically equivalent to decoherence on every physical qubit in a register. We conclude that, because of the effect of operator precision errors, it is likely that physically realizable quantum computers will be capable of factoring integers no more efficiently than classical computers.
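For readers who haven't seen Shor's algorithm up close: the quantum part (the QFT the paper attacks) is only used to find the period r of a^x mod N. Turning that period into factors is ordinary classical number theory, sketched below for the textbook case N = 15.

```python
from math import gcd

# Classical post-processing in Shor's algorithm: given the period r of
# a^x mod N (the part a quantum computer would supply via the QFT),
# recover nontrivial factors of N.
def factors_from_period(a, r, N):
    if r % 2 != 0 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky choice of a; try another base
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

# The period of 7^x mod 15 is 4, so:
print(factors_from_period(7, 4, 15))  # → (3, 5)
```

If the QFT cannot deliver the period reliably at interesting sizes, as the paper claims, this classical half never gets its input, which is why the whole algorithm's polynomial scaling is at stake.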

Hill and Viamontes also claim that this raises a serious question about quantum computing in general. Take a deep breath and read this:

> It is natural to ask whether these results have wider implications about the power of quantum computers relative to classical computers. While the results presented in this paper do not answer this question definitively, it is important to note the singular stature of Shor's algorithm as the only quantum algorithm that appears to efficiently solve a classically intractable problem. The fact that Shor's algorithm is not more efficient than classical algorithms removes the only strong evidence for the superior computational power of quantum computers relative to classical computers.

Wow. This is by no means the last word, but it means that quantum computing is going to get much more interesting as a spectator sport. And perhaps this fall's Post-Quantum Cryptography workshop will be a little less interesting.

sounds like we have Schrödinger's quantum computer: it is one and it isn't one. Depends if you are in BC or MA

The paper is rubbish and ignores pretty much everything we've learned since 1995 in quantum error correction (i.e., the circuits they use aren't fault-tolerant, so of course they don't deal with gate errors).

I recognize the attitude of the authors (who are from Lockheed) from similar experiences I had at Bell Labs: some people there decided in 1994 or 1995 that quantum computing won’t work and subsequently ignored all further developments in the field.

As Dave Bacon notes, the authors only consider circuits that are well known not to be fault-tolerant. How stupid! And this despite the fact that they even cite papers and books that do discuss fault tolerance *properly*.
