>it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

This claim makes little sense. There are many problems that are much easier to verify than to solve. Why isn't that approach ever used to validate these quantum computing claims?



That's what the author is saying. Researchers in this field should, for credibility reasons, be solving test problems that can be quickly verified. As to why this isn't done:

(1) They're picking problem domains that are maximally close to the substrate of the computation device, so they can hit maximum problem sizes (hence figures like 10^25). For many (all?) fast-verifiable problems they can't currently handle impressively large problem sizes. In the same way that GPUs are only really good at "embarrassingly parallel" algorithms like computer graphics and linear algebra, these quantum chips are only really good at certain classes of algorithms that don't require too much coherence.

(2) A lot of potential use cases are NOT easy to validate, but are still very useful and interesting. Weather and climate prediction, for example. Quantum chemistry simulation is another, as are nuclear simulations for the Department of Energy. Cryptography is kinda exceptional in that it provides easily verifiable results.


I would add one more to this, which I would argue is the main reason:

(0) For a quantum algorithm/simulation to be classically verifiable, it needs additional structure; something that leads to a structured, verifiable output despite the intermediate steps being intractable to simulate classically. That additional structure necessarily adds complexity beyond what can be run on current devices.

To pick an arbitrary example I'm familiar with, this paper (https://arxiv.org/abs/2104.00687) relies on the quantum computer implementing a certain cryptographic hash function. This alone makes the computation way more complex than what can be run on current hardware.


Isn't weather prediction extremely easy to validate? What am I missing other than looking out the window tomorrow?


Maybe a working quantum algorithm for weather prediction would outperform currently used classical simulations, but I wouldn't expect it to be bang on every time. Inputs are imperfect. So at best you could benchmark it, and gain some confidence over time. It could very well be good enough for weather prediction though.

Also I doubt that a quantum algorithm is possible that provably solves the Navier-Stokes equations with known boundary and initial conditions. At the very least you need some discretization, and maybe you can get a quantum algorithm that provably converges to the real solution (which alone would be a breakthrough, I believe). Then you need some experimental lab setup with well-controlled boundary and initial conditions that you can measure against.

In any case the validation would be at a very different standard compared to verifying prime factorization. At most you can gain confidence in the correctness of the simulation, but never absolute certainty.


At scale, yes. But this would still be solving toy problems with fewer variables, fewer dimensions.

And they’re not actually solving weather problems right now, I think. That was just an example. What they are actually solving are toy mathematical challenges.


To the extent that they are useful, they will be easier to validate.

For example: "Calculate whether we'll have El Niño this year"

The validation will not need to be run on a machine, but on the real world. Either we have El Niño or we don't.


I think he was making that exact point in the blog post.


Because we don't currently know of a problem like this that both has a quantum algorithm we can run on this type of device with an expected exponential speedup and has a fast classical verification algorithm. That's exactly the author's point; he has been advocating for quite a while the importance of finding such an example, which would be better to use.


Depends on what you mean by "this type of device." Strictly speaking there are many efficiently verifiable quantum algorithms (including Shor's algorithm, the one that breaks RSA). But if you mean "this particular device," then yes, none are simple enough to run on a processor of this scale.
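
To make the verify-vs-solve asymmetry concrete, here's a minimal Python sketch (toy numbers of my own choosing, nothing from the paper): checking a claimed factorization is a single multiplication, while finding the factors classically is the expensive part, and Shor's algorithm is what would find them quickly on a large enough quantum computer.

    # Verifying a claimed factorization: one multiplication, fast for any size of n.
    def verify_factors(n, p, q):
        return p > 1 and q > 1 and p * q == n

    # Finding the factors classically (naive trial division here); for
    # cryptographic-size n even the best known classical algorithms blow up.
    def factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None  # n is prime

    n = 3233                          # toy semiprime, 53 * 61
    print(factor(n))                  # slow in general -> (53, 61)
    print(verify_factors(n, 53, 61))  # always fast     -> True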


They likely mean any device of the current NISQ era (https://en.wikipedia.org/wiki/Noisy_intermediate-scale_quant...), like this one or quantum annealers.


Do we have any proof or evidence that such a problem even exists?


> Why isn't that approach ever used to validate these quantum computing claims?

Hossenfelder’s linked tweet addresses this head on [1]. We need four orders of magnitude more qubits before a QC can simulate anything real.

In the meantime, we’re stuck with toy problems (absent the sort of intermediate test algorithms Aaronson mentions, though the existence of such algorithms would undermine the feat’s PR value, as it would afford cheap takedowns about the QC lacking supremacy).

[1] https://x.com/skdh/status/1866352680899104960


In terms of physical simulations, surely you can limit the size of the system to fit your exact complexity needs?

For example, isolate two molecules in a vacuum and predict their future state. Now make it 5, now 100, etc...


That's pretty much the kind of problem they're using here. The problem is that to verify the simulation, you need to run the simulation on the classical device, and that requires time that is exponential in the number of particles being simulated. They've verified that the classical and quantum simulations agree for n = 1, 2, 3, ...; now here's a quantum simulation for an n that would take 10^25 years to do classically.
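
As a rough illustration of that exponential blow-up (my own back-of-the-envelope, using the simplest classical method, a full statevector): checking an n-qubit simulation by brute force means storing 2^n complex amplitudes, so memory alone becomes impossible long before run time is even the issue.

    # Memory for a full n-qubit statevector, assuming one complex128
    # amplitude (16 bytes) per basis state: 16 * 2**n bytes.
    def statevector_gib(n_qubits):
        return 16 * 2**n_qubits / 2**30

    for n in (10, 30, 50, 70):
        print(n, statevector_gib(n))
    # 10 -> ~1.5e-05 GiB (trivial)
    # 30 -> 16 GiB       (a beefy laptop)
    # 50 -> ~1.7e+07 GiB (~16 PiB, no single machine)
    # 70 -> ~1.8e+13 GiB (hopeless)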


If what you are simulating is a physical system, then to verify it you only need to replicate the physical system, not rerun the simulation on another device.

I suggested simulating the experiment of n molecules in a vacuum; another experiment might be a chaotic system like a double pendulum. Although there would need to be a high level of precision in setting up the physical parameters of the experiment.


That's not true; there are plenty of interesting problems for NISQ. Quantum chemistry, for example.


Could it be that it's not a coincidence that these kinds of problems are chosen? Somehow we can get from a quantum system an amount of computation that goes well beyond what a classical system can perform, but we can't seem to extract any useful information from it. Hmm.


Right, factoring and discrete logs both come to mind; is Google's quantum computer not able to achieve measurable speedups on those versus classical computation?


That's exactly correct; this chip cannot do either of those things yet, which is why they use this toy-ish random circuit sampling problem.


Not a large enough speedup to factor something that couldn't be done on a classical computer.

Well, really it can't run them at all, but even a more general computer of this size that could run them still wouldn't be large enough.


Google's chip is not general enough to run an arbitrary quantum algorithm.


It is perfectly general, but the error rate is too high to operate all qubits simultaneously for more than a few tens of gates without error. This is why error correction is needed, but then you need orders of magnitude more physical qubits to deal with the overhead.
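
To put rough numbers on that overhead (my own back-of-the-envelope, assuming a surface-code-style encoding where a distance-d logical qubit costs on the order of 2*d^2 physical qubits; exact constants vary by scheme and error rate):

    # Illustrative physical-qubit cost of quantum error correction,
    # assuming ~2*d^2 physical qubits per logical qubit at code distance d.
    def physical_qubits(logical_qubits, distance):
        return logical_qubits * 2 * distance**2

    for d in (7, 15, 25):
        print(d, physical_qubits(100, d))
    # 7  ->   9,800 physical qubits for 100 logical qubits
    # 15 ->  45,000
    # 25 -> 125,000  (versus ~10^2 physical qubits on today's chips)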


Then why can they perform an RCS evaluation but not some other algorithm? Does RCS require the fewest qubits by a huge margin?


No, not quite; it's about the error per gate. RCS has very loose requirements on the error per gate, since all they need is enough gates to build up some arbitrary entangled state (a hundred or so gates on this system). Other algorithms have very tight requirements on the error per gate, since they must perform a very long series of operations without error.
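
A crude way to see the difference (my own illustration, treating gate errors as independent, which is a simplification): if each gate fails with probability p, the chance the whole circuit runs error-free is roughly (1-p)^g for g gates. That's tolerable at RCS depths but vanishes at the gate counts something like Shor's algorithm needs.

    # Probability a circuit of g gates runs with zero errors,
    # assuming an independent error probability p per gate.
    def success(p, g):
        return (1 - p) ** g

    p = 0.002                       # 0.2% error per gate (illustrative)
    print(success(p, 100))          # ~0.82: most runs come out clean
    print(success(p, 10_000_000))   # ~0.0: hopeless without error correction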


For reference, Aaronson is widely regarded as one of the foremost members of the field.

Try "I don't understand this claim"?


The question is not whether a problem is easier to verify than to solve, but whether there is a problem that is provably faster (in the complexity sense) to solve on a quantum computer than on a classical computer and is also easy to verify on a classical computer.


He argues this point exactly.



