Not sure if I understand all the assumptions, but here is the gist of it. The debate is about the interpretation of quantum states |φ> (unit vectors, under the L2 norm, in a complex vector space). Is |φ> something physical, or is it simply an outcome-probability-calculating method for predicting quantum evolution and measurement outcomes?
View 1: |φ> is real.
View 2: |φ> is just a model of our knowledge of what is real, plus rules for calculating outcome probabilities.
Define λ = { the physical state of a system }, i.e., that which //really// exists in the world. The question is whether:
λ = {|φ> and other stuff} (1)
or
λ = {other stuff not including |φ> } (2)
If (2) holds, then |φ> was really just a mathematical bookkeeping and calculation device -- it is not real; it is just something we use for calculation.
If (1) is true then the quantum state is part of the real world,
and in particular if quantum theory is complete, then λ = |φ>.
Point of view (2) says that |φ> is not necessary to obtain the complete physical state of the system.
The paper shows that point of view (2) is untenable.
Suppose only λs mattered for physics and (2) is true.
Suppose further that a single λ could correspond to two different quantum states
|φ0> and |φ+>.
Then an apparatus that only knows λ would get "confused" sometimes and say that impossible outcomes occurred. If you knew |φ>, you wouldn't get confused about these outcomes. Therefore, they conclude that |φ> is a necessary part of the physical description of the system.
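To make the "confused apparatus" step concrete, here is a small numpy sketch of the two-system measurement at the heart of the argument. The four entangled basis vectors follow the construction in the paper (as I read it); the script and variable names are my own. Each of the four outcomes has probability zero for exactly one of the four preparations, which is what makes overlapping λs untenable:

    import numpy as np

    s0 = np.array([1, 0])                       # |0>
    s1 = np.array([0, 1])                       # |1>
    plus  = (s0 + s1) / np.sqrt(2)              # |+>
    minus = (s0 - s1) / np.sqrt(2)              # |->

    kron = np.kron
    preps = {"|0>|0>": kron(s0, s0),   "|0>|+>": kron(s0, plus),
             "|+>|0>": kron(plus, s0), "|+>|+>": kron(plus, plus)}

    # The entangled measurement basis from the paper:
    xis = [(kron(s0, s1)      + kron(s1, s0))      / np.sqrt(2),
           (kron(s0, minus)   + kron(s1, plus))    / np.sqrt(2),
           (kron(plus, s1)    + kron(minus, s0))   / np.sqrt(2),
           (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2)]

    for name, prep in preps.items():
        probs = [abs(np.dot(xi, prep)) ** 2 for xi in xis]
        print(name, np.round(probs, 3))
    # Each row contains exactly one zero: outcome i never occurs for
    # preparation i. If one λ could underlie both |0> and |+>, some runs
    # would have to produce an outcome that is impossible for the actual
    # preparation -- the contradiction described above.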
Great timing with last night's Nova, The Fabric of the Cosmos: Quantum Leap, which covers the context of this paper at a layman's level quite well (streamable for free @ http://www.pbs.org/wgbh/nova/). It's a fantastic introduction to this sort of material for the uninitiated, highly recommended.
Thanks for this, enjoyed watching it this morning. I took the intro to Modern Physics a few years back and this was a great refresher! So fascinating...
I didn't read the paper yet and it really might contain some valuable insights into the nature of quantum mechanics (I'll take a look at it later today), but I find this debate about whether the wave function is a real object or not hilarious: it completely fails to take into account that the wave function is not defined on space-time, but rather on an abstract configuration space. It confuses the mathematical model with physical reality.
The general problem with QM is that it's an algebraic and not a geometric theory, i.e., you know the rules but have no real concept of what they mean. However, QM strongly resembles the Hamilton-Jacobi formulation of classical mechanics, and the geometry of that is known (see e.g. http://arxiv.org/abs/math-ph/0604063 ).
The quantum numbers basically select a leaf of the foliation of our phase space, and the wave function is some sort of probability distribution on that (but not really ;)), which can be represented by a function on configuration space (which is, in general, NOT space-time, in particular if multiple particles are involved).
Its closest classical analogue is the action function S (or rather, its differential dS), and you don't see many people arguing that dS is a real object: it doesn't make much sense to do so, and the same holds true for the wave function.
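For concreteness, here is the correspondence being invoked, in the standard WKB/Madelung polar form (my notation, not taken from the linked paper). Write the wave function as

    \psi(q,t) = R(q,t)\, e^{i S(q,t)/\hbar}

Substituting this into the Schrödinger equation and taking \hbar -> 0, the phase S satisfies the classical Hamilton-Jacobi equation

    \frac{\partial S}{\partial t} + H(q, \nabla S) = 0

so in that limit the wave function's phase carries essentially the same information as the action S, which is the sense in which dS is its nearest classical relative.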
I just read the paper. The authors actually make no claim about the reality of the wave function, so disregard anything I said above (which is still true, but not relevant).
However, they do claim to have disproven the statistical interpretation of QM.
This, however, is impossible: It can't be disproven from within QM, as all it says is the following:
"We have no idea what happens when performing individual measurements. However, let me tell you what you'll see when you repeat the experiment often enough."
All the authors have shown is that their λ mechanism is flawed.
A very interesting paper. (I used to work in quantum computing, so I think I understand the paper pretty well after reading it a few times. Like most foundations papers, it is deceptively short!)
The basic idea is fairly simple. From quantum states (the wave function) we can determine the probabilities of outcomes when these states are measured in different manners. A major problem in quantum foundations is how to understand how these probabilities arise, i.e., where do these probabilities come from?
For example, it could be that the probabilities arise from our ignorance of the full state of the system (this is what people traditionally call hidden variables). Imagine a hard disk with a bunch of information on it. If you don't know all of the bits on this hard disk, then your computer can act in ways you can't explain, because you don't know all of the bits. Each time you set up your system and run a program you may get different results, because that extra information on the hard drive could be different; and because you don't know all of the information, you will see probabilities of outcomes. Mystery solved. Coming up with such a theory, however, currently always runs into a problem with locality. But that's a story for a different comment thread...
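A toy version of that hard-drive picture, as a quick sketch (entirely my own construction, not from the paper): the outcome is a deterministic function of a hidden variable, and the apparent randomness comes purely from never seeing that variable.

    import random

    # Hidden variable set at preparation time; we never get to read it.
    def prepare():
        return random.random()          # lam: a real number in [0, 1)

    # The outcome is fully determined by the hidden variable -- no
    # randomness anywhere in the "physics".
    def measure(lam):
        return 0 if lam < 0.5 else 1

    counts = [0, 0]
    for _ in range(10000):
        counts[measure(prepare())] += 1
    print(counts)   # roughly [5000, 5000]: 50/50 statistics from pure ignorance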
Here is what the authors ask. They say: well, it could be that all of the information specified in a quantum wave function is all that matters; that is, for each quantum state there is a one-to-one correspondence with a configuration of the hidden information (they allow for extra information, but that's basically irrelevant). That is, literally, the quantum state is written on your hard drive in all its glorious detail, and each quantum state is distinguishable from any other quantum state. Contrary to this, it could be that the information overlaps in some way: for different quantum states, some of the bits, say, will be in the same configuration. The authors then go on to show that this latter assumption isn't compatible with the predictions of quantum theory (and they show an experiment that can be done to verify that the latter interpretation is not correct. If the experiment fails, then quantum theory is wrong, and all hell will break loose. Assuming quantum theory holds, this shows the second interpretation isn't viable). Very neat.
There are a couple of places where the argument seems a bit odd to me. For example, it is really not clear to me why the measurement device in their system has to depend on the portion of the information that is shared between different prepared wave functions. If this information is ignored by the measuring device, I don't see how their contradiction will arise. Of course, this itself tells us something kind of interesting, because it puts a limit on how shared information is revealed to a measurement device (this seems almost Kochen-Specker-like).
If I understood you correctly, the researchers show that a many-to-one relationship between quantum representation and physical reality is untenable, so the relationship must be one-to-one.
Isn't the conventional assumption that it's one-to-many? If so, this argument isn't very interesting to me.
It sounds like it doesn't rule it out, but it also doesn't provide a way of testing it against other interpretations of QM.
I don't think it would even rule out non-local hidden-variable interpretations of QM such as de Broglie–Bohm theory <http://en.wikipedia.org/wiki/Bohmian_mechanics>, since these accept the wave function as real but posit that there is also an actual configuration of the universe, which evolves in a manner that is guided by the "pilot wave".
"On a related, but more abstract note, the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real pa- rameters needed to specify a quantum state is exponen- tial in the number of systems n."
This is an oblique way of saying that (quantum) reality consists of many (classical) worlds.
No it isn't, it's an explicit way of saying that the number of real parameters needed to specify a quantum state is exponential in the number of systems.
Why do you think those are mutually exclusive? "the number of real parameters needed to specify a quantum state is exponential in the number of systems" is just a rewording of "the quantum state function is an exponentially complicated object." But when you combine that statement with the main point of the paper, namely, that this exponentially complicated object is physically real, it is not unreasonable to infer that this physically real exponentially complicated object comprises multiple copies of classical reality.
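To see the scaling directly, here is a minimal numpy sketch (my own illustration): each additional system doubles the number of complex amplitudes in the joint state vector.

    import numpy as np

    qubit = np.array([1, 0], dtype=complex)   # a single two-level system in |0>
    state = qubit
    for n in range(1, 6):
        print(n, "system(s):", state.size, "complex amplitudes")
        state = np.kron(state, qubit)         # adjoin one more system
    # prints 2, 4, 8, 16, 32: the parameter count grows as 2^n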
In fact, there are logically only three possibilities:
1. The quantum wave function comprises zero classical realities, i.e. what we call classical reality is not real but some kind of illusion. This is actually a scientifically tenable point of view. See http://arxiv.org/abs/quant-ph/9605002, particularly section VI.
2. The quantum wave function comprises one classical reality, i.e. Copenhagen. This is no longer a scientifically tenable point of view.
3. The quantum wave function comprises multiple classical realities, i.e. many-worlds is (are?) physically real.
From what I gather, quantum "particles" travel in a way that's wave-like, but as soon as they interact with another quantum particle, the wave collapses into a single point, and then starts traveling as a wave again starting from the point of collapse. (or something along these lines).
This wave-like form is basically the "wave function"; it describes how the quantum particle moves, and where we might find it.
If this is right, then I don't see anything "mysterious" in quantum entanglement. If anything, quantum entanglement seems to be a proof that the quantum-particle (aka wave function) has an internal state that can be known without necessarily having to interact with it directly.
Somebody please throw some sense into me if I'm spewing nonsense.
Wave function collapse is not tenable as part of quantum theory. The "decoherence" process currently holds ground as a quantum-mechanical description of the measurement process, and it doesn't make absurd distinctions like "observer versus observed".
The problem is that the "collapse" (from wave to particle) occurs as a result of somebody measuring. Google the double-slit experiment: when not measuring individual photons, you get wave-like interference. When measuring individual photons, you get particle behavior.
The conundrum is what is it about measuring that collapses the waveform? It is quite a mystery, and, honestly, very exciting.
Not exactly. The collection of particles, even if you detect them one at a time, displays an interference pattern. However, if you make an effort to detect which slit each particle goes through, you see that each one behaves as a particle, like you would expect, and the interference pattern vanishes.
And this is true whether you detect which slit it passes through before or after its point of impact has been detected.
That's what's weird. A particle can be particle-like or wave-like, but not both, and what determines which it is is whether or not you look.
It's not about humans looking at it. It's about collapsing the wave function. The only way to "look" at a particle is to make it interact with something (trigger some chain reaction that ultimately sends a signal to one of your senses, e.g. a photomultiplier). The moment it interacts with that thing, the wave function collapses.
This part is not mysterious.
The mysterious part is, what in the world could be the thing that acts as a wave function and then collapse to a single point when it interacts with something else? What underlying reality does this hint at?
You would think this, but the quantum erasure experiment appears to demonstrate that a photon which has been marked so as to determine its path and then subsequently randomly unmarked so as to irretrievably destroy that information once again displays interference. The delayed choice quantum eraser appears to demonstrate that this decision to either record or destroy path information may be delayed until after the reference photon has arrived at the detector, with corresponding absence or presence of interference.
It's not just about interaction causing collapse, or, if it is, a particle which has been interacted with and collapsed may be subsequently uninteracted with and uncollapse back into a wave.
That part is not mysterious either - the collapse simply doesn't occur.
Say you have a photon in a superposition of state A and state B - think of position. If it is in state A it hits a detector; if it is in state B it does not. If the detector is hit, it displays 'HIT' on its screen - otherwise it does not. If 'HIT' is displayed, the researcher thinks the photon was in state A; otherwise the researcher thinks it is in state B.
The result of the experiment, without any wavefunction collapse, is a superposition of two states of the entire system:
State 1: Photon is in A, Detector was Hit, Researcher thinks photon is in A
+
State 2: Photon is in B, Detector was not Hit, Researcher thinks photon is in B.
As you can see, the fact that the researcher never sees a photon in a superposition of states A+B (he doesn't ever see the detector both lit up and not) is explained without any wavefunction collapse.
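A small numpy sketch of that account (the three two-state systems and their encoding are my own toy model, not from any paper): unitary evolution just correlates photon, detector, and researcher, and the only branches with nonzero amplitude are the two self-consistent ones.

    import numpy as np

    A, B      = np.array([1, 0]), np.array([0, 1])       # photon position
    HIT, MISS = np.array([1, 0]), np.array([0, 1])       # detector screen
    SEES_A, SEES_B = np.array([1, 0]), np.array([0, 1])  # researcher's belief

    # After the interaction, the whole system is a sum of two correlated branches:
    joint = (np.kron(np.kron(A, HIT), SEES_A) +
             np.kron(np.kron(B, MISS), SEES_B)) / np.sqrt(2)

    print(np.nonzero(joint)[0])   # [0 7]: only (A, HIT, sees-A) and (B, MISS, sees-B)
    # No basis state pairs 'HIT' with 'sees-B', and no branch contains a
    # researcher seeing the detector both lit up and not.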
This is what I was wondering about: how does a particle 'know' it is being observed? Something has to interact with it physically, and once affected, its physical properties change. Is that right?
I’m a total layman here but so fascinated with all this. Trying to get a better understanding.
Particles don't know anything. The way we see anything, even with our own eyes, is to analyze reflected photons. You can't see something without hitting it with something first and analyzing what bounces back. Obviously, if you're bouncing photons off something, you're going to affect its state. Usually this effect is negligible, but in the case of trying to watch individual particles, well, it's no longer negligible, it affects the experiment.
The problem is people tend to assume observation is passive; they think seeing something doesn't affect it. This simply isn't true at all. Eyes only work because the sun is continually spraying and bouncing photons off everything. Sight isn't passive at all; you need a photon source spraying photons everywhere, like a flashlight.
What's strange about the double-slit experiment isn't that watching forces the particles to act like particles; that's the expected behavior all the time, the same as with a single slit. What's strange is that when you don't watch, you get the interference pattern, indicating a wave that isn't acting like a particle. That's not at all expected and indicates the particle is going through both slits and interfering with itself. I'm a total layman as well, but that's my take on it.
That is very wrong. A particle doesn't 'know' it is being observed. However, when it interacts with other particles, the new state of both particles depends on the old states of both particles.
I get that particles don’t know anything, that’s why I put it in quotes. But then again, how do we know they don’t know anything? Could elementary particles possess a form of consciousness?
> The conundrum is what is it about measuring that collapses the waveform? It is quite a mystery, and, honestly, very exciting.
Isn't it obviously the fact that it's interacting with another object?
- wave-function interacts with your wall/measuring-device -> wave-function collapses into a point (or: disappears and spawns a new wave-function at said point (which we end up interpreting as the "position" of the particle))
- wave-function doesn't interact with anything -> happily keeps propagating as a wave
Two quantum particles can interact with each other without either's wavefunction collapsing. That's pretty much what makes the double slit experiment work. It seems that generally, interacting with a "macroscopic" object tends to collapse the wave function. You can find an interesting discussion of one way scientists try to explain this phenomenon here: http://en.wikipedia.org/wiki/Quantum_decoherence
> as soon as they interact with another quantum particle, the wave collapses into a single point, and then starts traveling as a wave again starting from the point of collapse.
What does "collapse" mean? Is it the overlapping or "sum" of different wave functions? Isn't the measuring instrument itself a wave function?
Collapse means that any future measurements/interactions of the wave function can be assumed to have evolved from the measured state. Most measurements only give you information about the state with high probability, but that's fine.
If you have a wave function which can be in two different states X1 and X2, then its full state is going to be some linear combination of the states, X = a X1 + b X2. It can, over time, evolve deterministically, with a and b potentially becoming larger and smaller over time.
When a measurement occurs, such as the detection of a photon of a certain frequency indicating that the particle made a state transition, you then set all but one of the coefficients to zero (or approximately zero, depending on the measurement). If the particle is in a compatible potential (one in which the measured state has a specific energy), the coefficients will stay at zero.
To answer your question, "Isn't the measuring instrument itself a wave function?": yes, it is. Unfortunately, if you have two particles, then to describe the system with both particles in it, as opposed to just one, you don't just add the particle wave functions together; you have to take the cross (well, tensor, but it's similar) product of the two vector spaces of the particles (remember, function spaces are vector spaces). This is true _even if they do not interact with each other_.
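Here is a minimal numpy sketch of both points (a toy example of mine, not from the paper under discussion): a state as a linear combination whose coefficients give outcome probabilities, "collapse" as zeroing all but one coefficient, and the tensor product for a two-particle system.

    import numpy as np

    # A state as a linear combination a*X1 + b*X2 with |a|^2 + |b|^2 = 1:
    X1 = np.array([1, 0], dtype=complex)
    X2 = np.array([0, 1], dtype=complex)
    a, b = 0.6, 0.8j
    X = a * X1 + b * X2

    probs = np.abs(np.array([a, b])) ** 2   # Born rule: [0.36, 0.64]

    # "Collapse" after measuring outcome 1: zero the other coefficient
    # and renormalize what remains.
    X_after = (a * X1) / abs(a)

    # Two particles: the joint space is the tensor (Kronecker) product,
    # 2 x 2 = 4 dimensional, even if the particles never interact.
    Y = (X1 + X2) / np.sqrt(2)
    joint = np.kron(X, Y)
    print(probs, X_after, joint.size)       # joint has 4 amplitudes, not 2 + 2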
In some sense, yes: if you know the starting conditions, one could just crank it through to the outcomes (assuming the strongest form of their result). OTOH, one can't measure everything, since you'd be collapsing wavefunctions everywhere, changing the state of the universe as you go.
OK, this might be a stupid/ignorant question, but what do they mean by "[it seems very unlikely to be true that] if a quantum wavefunction were purely a statistical tool, then even quantum states that are unconnected across space and time would be able to communicate with each other."? Why is this less likely than the alternative?
It reminded me of the single-electron universe hypothesis by Wheeler (http://en.wikipedia.org/wiki/One-electron_universe), and I wonder if anyone can speak to whether this changes perceptions of the seriousness/testability of that idea.
Caveat, ignorant answer coming. What I understand is as follows: we know that entangled quantum states can "communicate" through space-time (what Einstein called "spooky action at a distance"). The paper therefore proves that either unconnected (not entangled) quantum states can communicate, or the wave function is real. They proceed to conclude that the first member of the alternative is less likely (justifiably, I suppose).
Question for the more knowledgeable: does this paper support, say nothing about, or deny the view that the "particle property" we're looking at is a projection, into the "space" we sense at the moment, of a more complex entity uniquely described by its wave function?
If you want to think about quantum state faster-than-light action-at-a-distance spooky entanglement, look at it this way...
There are two gloves (a right and a left hand glove) and two lockboxes (box 1 and box 2).
Each lockbox contains one of the gloves. Someone other than you has placed the gloves inside.
You open box #1 and discover that it contains the right hand glove. You now know that box #2 contains the left hand glove (this is amazing to some).
The rational theory is that the states already existed before you looked into either box.
The irrational theory is that neither of the states existed as "well-defined" until after you looked into one of the boxes.
Quantum physics is just a way of abstracting the underlying reality. And it's gone so far that it has completely de-materialized everything and replaced it with math no one can understand.
Can anyone explain the proof of the irrational theory without more baseless conjecture?
Like many classical metaphors for quantum mechanics, your explanation is intuitive, convincing, and demonstrably wrong.
There is simply no way to describe the underlying mechanisms of this world using classical physics. However, since our brains are used to the classical picture, the metaphors seem true at a glance.
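For the "demonstrably wrong" part, here is a standard CHSH-style check in numpy (the angles and encoding are my choices; this is the textbook construction, not anything specific to the paper). Any glove-style theory, where both outcomes are fixed before anyone looks, obeys |S| <= 2; the entangled singlet state does not:

    import numpy as np

    sz = np.array([[1, 0], [0, -1]])   # Pauli Z
    sx = np.array([[0, 1], [1, 0]])    # Pauli X
    spin = lambda t: np.cos(t) * sz + np.sin(t) * sx   # spin measured at angle t

    # Singlet state (|01> - |10>)/sqrt(2): the quantum version of the two boxes.
    psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

    def E(ta, tb):
        # Correlation between the two sides' measurement results.
        return psi @ np.kron(spin(ta), spin(tb)) @ psi

    a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
    print(S)   # about -2.828; |S| = 2*sqrt(2) > 2, which is impossible if the
               # "glove contents" were fixed before anyone opened a box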
AFAIK it's not non-determinism that makes quantum computing efficient; it's the ability to compute on superpositions, which effectively means computing all outcomes at the same time.
Ah, right. I meant non-determinism in the sense of a non-deterministic automaton that calculates all outcomes at the same time, not in the sense of introducing randomness.
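A quick numpy sketch of what "computing on superpositions" buys you (my own illustration, with the usual caveat): n Hadamard gates put n qubits into an equal superposition of all 2^n classical inputs, but a readout still returns only one of them, which is why useful algorithms also need interference.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

    n = 3
    state = np.array([1.0, 0.0])                   # one qubit in |0>
    Hn = H
    for _ in range(n - 1):
        state = np.kron(state, [1.0, 0.0])         # adjoin another |0> qubit
        Hn = np.kron(Hn, H)                        # a Hadamard on every qubit

    superposed = Hn @ state
    print(superposed)   # all 2^n amplitudes equal 1/sqrt(8): every classical
                        # input is present at once in the superposition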
So this is not a new idea; it's been around for decades and has just fallen in and out of vogue. And now they present their opinion piece on the topic.
I hardly think "shakes foundations" in the HN title, and more of the same hyperbole in the article, is warranted.
They've found a theorem that can subject the physical reality of the wavefunction to experimental tests. That's new, and it means this potentially won't be a matter of opinion any longer.
"Shakes foundations" is the title of the Nature summary, and it reflects the enthusiasm of the distinguished theoretical physicists quoted in the article.
This article is about a new theorem that was uploaded to the preprint server earlier this week[1], and the article quotes several quantum theorists who describe this new result as important (or even ‘seismic’). On what basis do you think it’s been around for decades?
The actual formal theorem is new and great, but the general battle of asserting the wave function's existence vs. non-existence (whatever either are supposed to mean) vs. saying "I don't know tell me more about the wavefunction and its experimental predictions working/failing" has been around for quite some time. Here's a nice write-up of the ridiculousness of someone definitively saying "I don't know why this works and therefore it doesn't exist!" http://lesswrong.com/lw/q5/quantum_nonrealism/
I believe mindstab is talking about debates as to whether a quantum wavefunction possesses any objective reality. That is an old argument. For example, Everett's Many Worlds Interpretation from the 1950s posits a real universal wave function. It is also deterministic.
The interpretations this paper purports to disprove are the ones which say wave functions are not real but a probability distribution representing your lack of knowledge of the total system state, where collapse from measurement is not so different from choosing to condition on the probability distribution. I am sympathetic to the view of quantum mechanics as a Bayesian complex probability theory, and so will wait until more knowledgeable people critique the paper.
"The wavefunction is a real physical object after all, say researchers."
Mathematics is just a representation of reality, not the reality itself.
No single law in any scientific field can state that the law itself is the physical object, or a part of the Universe. One may assert that General Relativity can explain lots of observations we can make about this universe. However, even if the observations match the way the universe seems to move 100%, that doesn't indicate that the universe itself is GR. GR is just a successful model of reality that can assert some important relationships among macroscopic observations in this universe.
Assume that someone forged an ultimate theory that can integrate every possible physical observation we can make about this universe. Even if that's the case, he can't assert that "this universe IS the theory." It's merely that the theory is a successful model of reality, not that reality is the model itself.
No one can dare claim that a set of mathematical descriptions of observations equals reality. No wavefunction can be a physical object. That's simply idiotic.
I can sympathize with the side claiming that the differentiation between interpretative probability and physical probability can be a huge thing, so that "the wavefunction is a real physical object" makes a lot of sense. Yes, finding the inconsistency in the interpretative model is great, but insisting that the alternative model is itself a physical reality is another issue.
All probabilistic objects derive their existential validity from the identity of indiscernibles. We employ probabilistic tools from the belief that what we observe at the quantum level are indeed identicals. However, indiscernibility is not equal to identity, even if the tools based on that assumption succeed in describing many aspects of how the supposed identicals interact. We would never know whether each fundamental physical unit is truly identical unless we uncover the fundamental totality of quantum particles. Before we grasp the totality of the physical fundamentals, no probability is physical yet. And I think humanity will never know it. There's a limit to our knowledge, and the true identity is beyond our reach.
Not everybody is convinced that reality deserves to be on a pedestal separate from mathematics. Perhaps the only thing distinguishing our reality from mathematical structures is that we are part of it. All mathematical structures with self-aware substructures (such as ourselves) appear to be "reality" from the point of view of its inhabitants.
Gödel's incompleteness theorem, presented in the criticism sections of the Wikipedia articles you mention, stands as a strong rejection of the mathematics = reality argument. No mathematics is self-contained, which means it is incomplete as a description of the totality of reality.
We shouldn't forget that mathematics is an invention of mortal human beings. For dwellers in quantum space, the mathematical fundamentals might be totally different from ours.