
Great question. The device is fully programmable. Arbitrary one-qubit operations and arbitrary two-qubit operations between adjacent qubits can be performed. Theoretically these are 'universal for computation', meaning that a large enough device could compute anything computable. You can't program Quantum Tetris or whatever on a bouncy ball :).
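
To make that concrete, here's a rough sketch (in Cirq, purely as illustration; I'm not claiming this is the device's native gate set) of what 'arbitrary single-qubit gates plus two-qubit gates between neighbours' looks like, including the SWAP routing you need when the qubits you want to couple aren't adjacent:

    import cirq

    # Four qubits on a line; two-qubit gates only between neighbours.
    qubits = cirq.LineQubit.range(4)

    circuit = cirq.Circuit(
        # Arbitrary single-qubit rotations (any angles you like).
        cirq.rx(0.3)(qubits[0]),
        cirq.rz(1.1)(qubits[2]),
        # Two-qubit gates between adjacent qubits.
        cirq.CZ(qubits[0], qubits[1]),
        cirq.CZ(qubits[2], qubits[3]),
        # Qubits 0 and 3 aren't adjacent, so interactions between them
        # get routed with SWAPs until the states sit next to each other.
        cirq.SWAP(qubits[0], qubits[1]),
        cirq.SWAP(qubits[1], qubits[2]),
        cirq.CZ(qubits[2], qubits[3]),  # now entangles the original q0 and q3 states
    )
    print(circuit)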

But nevertheless, many of these 'beyond-classical' demonstrations feel a bit arbitrary in the way you describe, and there's good reason for this. Logical operations are still quite noisy, and the more of them you apply, the more the output quality degrades. To get the most 'beyond-classical' result, you run the thing that maps most readily to the physical layout and limitations of the hardware.
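
The arithmetic behind that is brutal. Assuming, say, 99.5% fidelity per gate (a made-up but representative number), overall fidelity decays exponentially with gate count:

    # Back-of-the-envelope: fidelity compounds per gate.
    gate_fidelity = 0.995
    for n_gates in (10, 100, 500, 1000):
        print(f"{n_gates:>4} gates -> circuit fidelity ~ {gate_fidelity ** n_gates:.3f}")
    # 10 gates ~ 0.95, 100 ~ 0.61, 500 ~ 0.08, 1000 ~ 0.007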

As things improve, we'll see more and more demonstrations of actually useful computations. Google and others have already performed lots of quantum simulations. In the long run, you will use quantum error correction, which is the other big announcement this week.
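
If you haven't seen quantum error correction before, the classical intuition mostly carries over: encode redundantly, then vote. A toy bit-flip repetition code (in Cirq; nothing like the surface codes used in practice, just the core idea):

    import cirq
    from collections import Counter

    data = cirq.LineQubit.range(3)
    circuit = cirq.Circuit(
        # Encode logical |1> redundantly across three physical qubits.
        cirq.X(data[0]),
        cirq.CNOT(data[0], data[1]),
        cirq.CNOT(data[0], data[2]),
        # Inject a single bit-flip error on the middle qubit.
        cirq.X(data[1]),
        cirq.measure(*data, key='m'),
    )

    result = cirq.Simulator().run(circuit, repetitions=100)
    # Majority vote recovers the logical value despite the error.
    decoded = Counter(int(sum(bits) >= 2) for bits in result.measurements['m'])
    print(decoded)  # every shot decodes to logical 1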



So isn't this the same as turning a classical computer on, letting it run on whatever garbage is in the RAM at the time, and, when some nonsense shows up on the screen, breathlessly exclaiming that it would take several millennia to get the same result with an abacus, despite the fact that something was "computed" only by strict adherence to the definition of the word? It's not like it takes a quantum computer to produce a meaningless stream of data.


That's a great analogy, and I basically agree with it. But there would be some ancient, abacus-wielding mathematicians who would be impressed by this fast-but-useless computer. One might take it as a sign that once/if the computer can be controlled properly, it might be quite useful.

This might have been part of the history of classical computers as well, except that it turns out to be pretty easy to do classical operations with very high fidelity.


Yeah... But since the device is not doing anything meaningful, there's no way to tell whether it's actually computing anything, rather than being a very expensive and very complicated random number generator. Since you don't need a quantum computer to generate a stream of meaningless numbers, a machine's ability to generate one doesn't demonstrate that it's computing quantumly.


Furthermore, how do you distinguish successful runs from malfunctions?


That's a good question. They run the system on a small scale and validate there. The assumption is that no new error mechanism magically switches on when the simulation gets large enough, but if it did, there would be no way to know.
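
Roughly, the small-scale validation works like this (a toy sketch in Cirq; the actual experiments use cross-entropy benchmarking on random circuits, but the idea is the same: while the circuit is small, the ideal distribution is still classically computable, so you can check the device's samples against it):

    import numpy as np
    import cirq

    # Small hand-built circuit standing in for the real (random) test circuits.
    q = cirq.LineQubit.range(3)
    circuit = cirq.Circuit(
        cirq.H.on_each(*q),
        cirq.CZ(q[0], q[1]),
        cirq.T(q[1]),
        cirq.CZ(q[1], q[2]),
        cirq.H(q[1]),
    )

    # At this size the ideal output distribution is cheap to compute classically.
    state = cirq.Simulator().simulate(circuit).final_state_vector
    ideal = np.abs(state) ** 2

    # Sample the same circuit -- a stand-in for running the hardware.
    run = cirq.Simulator().run(
        circuit + cirq.Circuit(cirq.measure(*q, key='m')), repetitions=20_000
    )
    bits = run.measurements['m']
    idx = bits.dot(1 << np.arange(bits.shape[1])[::-1])

    # Compare empirical frequencies with the ideal distribution.
    empirical = np.bincount(idx, minlength=len(ideal)) / len(idx)
    tvd = 0.5 * np.abs(empirical - ideal).sum()
    print(f"total variation distance: {tvd:.3f}")  # near 0 => device matches theory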

Hopefully large-scale, verifiable demonstrations become viable in the near future. But currently they're just too hard to implement.


Re: Noise

There are some probabilistic programs that not only don't need determinism, but are actively harmed by it.

For example, deep learning training would probably work fine with 1% destructive noise, as long as it came with a massive increase in compute.
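
Something like this toy experiment, if I'm reading the claim right (numpy sketch; 'destructive noise' interpreted here as ~1% multiplicative noise on every gradient update):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear-regression SGD with ~1% noise corrupting every gradient.
    X = rng.normal(size=(1000, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w

    w = np.zeros(10)
    for step in range(20_000):
        i = rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]
        grad *= rng.normal(1.0, 0.01, size=grad.shape)  # the "1% destructive noise"
        w -= 0.01 * grad

    print(f"parameter error: {np.linalg.norm(w - true_w):.4f}")  # still tiny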


Thank you!



