The paradox seems to rest on magical thinking about consciousness, and if one simply accepts that a conscious observer can be in a superposition like any other piece of matter, the paradox is resolved.
And, from the article:
For Wigner, this was an absurd conclusion. Instead, he believed that once the consciousness of an observer becomes involved, the entanglement would “collapse” to make the friend’s observation definite.
But what if Wigner was wrong?
Well, obviously Wigner is wrong, sorry for being flippant.
In this paper (preprint here: https://arxiv.org/pdf/1907.05607.pdf), the magical thinking about consciousness seems to be transmuted into "Absoluteness of Observed Events" (i.e. that every observed event exists absolutely, not relatively). The authors (it appears to me) mean by this that if an observer performs an experiment while this observer is himself in a superposition, we have to regard the outcome of the experiment as absolute (i.e. not in a superposition) because it was made by a (conscious) observer.
In my opinion, both "Absoluteness of Observed Events" and the equivalent from the layman's article, "When someone observes an event happening, it really happened", are disingenuous and confusing ways of talking about observers who are in a superposition. We have crossed over from "quantum mechanics is weird" to "these superficially intuitive but clearly false assumptions about quantum mechanics are weird".
We're veering away from physics here, but isn't consciousness actually quite "magical"? It's beyond remarkable to me that stimulating nervous tissue, composed of quite mundane things like protons and electrons, yields a subjective experience. I know many people reject the hard problem of consciousness, but to those who don't, the implications of thought experiments like Wigner's friend, like superpositions or entanglement of subjective experiences, are truly paradoxical. You are calling certain viewpoints obviously wrong or false because you adhere to certain philosophical positions. There's nothing wrong with that, and they're all justifiable, but none of them is a complete consensus among philosophers.
Coming back to physics, there's an assumption you and other commenters are silently making, which is that quantum mechanics is even applicable to macroscopic objects like humans. The largest objects which have been shown to act wave-like are a few thousand atoms large. While it's indisputable that at the lowest level the universe is fundamentally quantum mechanical, it's a little brazen to extrapolate that over more than 20 orders of magnitude. As a physicist myself, I'll believe it when I see it, and I'm looking forward to getting results from proposed experiments like FELIX and its successors.
>Coming back to physics, there's an assumption you and other commenters are silently making, which is that quantum mechanics is even applicable to macroscopic objects like humans.
It follows from the Schrödinger equation, which provides no exception for macroscopic objects. You can say quantum effects indeed happen at the lowest level, and macroscopic behavior follows from what happens at the lowest level. It's a question of reducibility.
> The largest objects which have been shown to act wave-like are a few thousand atoms large.
Large objects do have wave-like behavior: e.g. you can't determine their size with femtometer resolution, they don't suffer from the ultraviolet catastrophe, and they exhibit macroscopic quantum effects like superconductivity.
Pretty sure I’m Dunning-Kruger-ing this, but is consciousness really that magical?
The nervous system evolved from the ability to respond to external stimuli, to centralized control of various processes, to evaluation of the risks/benefits of actions, and finally to predictive modeling of external processes. A rudimentary concept of “self” and understanding of the surrounding environment exists in various animals, so why is it strange that the animal with the most complex brain has the most sophisticated concept of “self” and its placement in the ultimate surrounding environment - the universe?
Of course, apart from how the actual implementation works, I don't think many people have a problem with that. And if it were just that, Wigner's friend wouldn't be nearly as interesting: it would suffice to say that if quantum mechanical effects really keep working all the way up to the macroscopic level (which again is anything but obvious), his friend's body and brain would simply be in a superposition of both states.
But this understanding of the surroundings and capability to react to stimuli comes with a subjective experience, which is what is actually meant by "consciousness". This is the hard problem of consciousness[0].
One of the brain’s responsibilities is to model the environment, which means modeling all its parts. Since each brain is unique, those models are different and “personal”. Physical stimuli evoke some of those models, and that’s what we call “blue color” or whatever else we’re experiencing.
I don’t think I understand the problem enough to understand why it’s a problem.
By inner experience, we mean that there's a subject which gets to have perception of such models.
There's no need for such a subject to exist.
For example, most people assume that, so far, computers do not have any inner experience.
A computer could, conceivably, execute the same functions as our brain, yet have no inner experience of anything. Numbers get in, numbers get out, without any inner experience being needed.
Sure: just like a thermostat doesn’t experience the feeling of temperature, there is no need for a human to feel it, so I understand that we need to explain why it is that we still feel the feeling rather than just observe the signal and react.
So why is it wrong to explain this by the necessary recursiveness of predictive modeling that includes modeling “self”? We observe the temperature but we also observe ourselves observing the temperature. First is the signal, second is the introspection of the model evoked by that signal - the feeling.
Yes but where does this observer come from in the first place? Some theories do presume the thermostat experiences the temperature, just in a less-sophisticated form of consciousness. If an "observer" is nothing more than neurons firing in response to stimuli, then it's not fundamentally different from the thermostat.
It is understandable that having a model of an observer can be useful to a brain.
But how/why that opens a window to an actual observer, and not just a model, is the question.
And we only know that an actual observer exists through first-hand experience. It is our most immediate and certain knowledge (Cogito, ergo sum); everything else can be questioned. Yet there's nothing in physics or computer science that gives a hint of this being the case.
Ok, so first hand experience tells you an observer exists.
Are you sure that observer has your personality, mind and memories? Are you sure that observer is involved in any way with the world, other than observing it?
Or are those other things just part of the machinery, and quite illusory? For example, our perception of time, coherent thought and personality aren't all that consistent, as we know from various experiments and observations.
Here is the crux of my point:
If there's an actual observer (call it "primal consciousness") that observes the world through the lens of a mind - a complex, self-referential, reactive process running on a brain and body - then we don't need to say that any particular physical process "creates" consciousness. We can settle for: physical processes create something complex and interesting, which runs models of itself and the world, and which "primal consciousness" observes. The "mind machine" running on the physics does not contain the observer; it's observed by the observer.
That doesn't "solve" the hard problem, but it's a model with different properties and consequences than some of the other models.
> So why is it wrong to explain this by the necessary recursiveness of predictive modeling that includes modeling “self”?
It's not "wrong", it's just not parsimonious. A system can model itself without being conscious, any time you have state in a program you are doing this.
I like this idea. From my limited reading and understanding, the nervous system/brain and body seem to function as a huge number of feedback loops, where the nerves predict the body's response to a nerve firing event, do the event, and then compare what happened to the prediction. Moving your hand is a huge number of feedback loops. Something similar probably happens for abstract things like words. Building up all these sub-models into a model of the self seems like a natural progression and could be very evolutionarily beneficial, although with drawbacks too (paralyzing self-doubt, depression, neuroticism, etc.)
It's only wrong because it's not really an explanation. At best, it's a weak one.
Saying that an active model of self "is" what we experience as consciousness doesn't tell us why we have that experience.
We can just as easily imagine a complex machine with an active self-model that isn't conscious, as one that is.[1] So an active self-model doesn't tell us about consciousness. This shows them to be different concepts, not different names for the same concept. Which means neither "is" the other, and "is" is not an explanation.
[1] (To be a little more picky, we can't imagine that if we insist they are the same thing, but that leads to circular reasoning here. Our questioner can imagine both, and for an explanation to explain it needs to address the question, not wave it away by offering something circular.)
It all sort of falls apart when we only talk about whether an object other than ourselves is conscious or not.
As far as we know[2], we can't distinguish consciousness of other objects by observation. A hypothetical non-conscious machine might tell us it is conscious; we will never know if it's GPT-3000 talking or if it's another being like ourselves. So eventually we'll probably decide that it's moot, and treat it as conscious if it behaves convincingly and consistently like it is.
[2] That could change, it's not ruled out.
But that doesn't deal with the "hard problem" of consciousness, which is ourselves.
For ourselves, we are in no doubt about the direct experience of our own consciousness. We might convince ourselves that it's just an active self-model, processing, because of how we think of data processing machines these days. But we shouldn't, for one because that's a weak explanation that doesn't explain, and for two because there are other active self-models in the universe, and also in the much larger abstract realm of "unexecuted" self-models that could exist (pick an RNG seed and set of rules of your choice). We don't experience those, so the one(s) we do experience are notably distinct, for no obvious reason.
lots of interesting replies, thanks. i'll aggregate my thoughts into single comment to keep this discussion more focused.
it seems to me the only rebuttal is "sure, self-models can exist, but it's conceivable that they can exist without an observer, so why is there an observer?" and to me it sounds similar to "sure, an eye can exist without abiogenesis, so why do we only find it in organisms that resulted from abiogenesis?"
an eye is just a collection of amino acids; nothing prevents an eye from spontaneously assembling in a primordial soup, and yet we recognize that's essentially impossible in practice. however, i would posit that due to the configuration of physical interactions in our universe, it's virtually guaranteed for an eye to develop in any life-form that is exposed to a star's radiation in earth-like conditions.
similarly, just because we can think of a p-zombie doesn't mean it's a simpler system to occur naturally. we don't have an understanding of the building blocks of consciousness like we do with chemistry and biology, but the answer to the why question seems to be quite simple: we observe ourselves because we evolved to. and we can find more and more primitive examples of self-observation in more and more primitive animals, so it's not some binary phenomenon.
Commenting a bit late since I had this tab open for a while, but I find this discussion interesting. Even more magical than consciousness itself, are people denying that there even is a hard problem of consciousness to begin with (if you assume the standard model) ;) You started with asking if consciousness is really that magical and finished with basically admitting that we don't have an answer for how it works yet.
The hard problem of consciousness is not just about the why, it's also about the how. That's exactly the magical part: how subjective, non-physical experiences (supposedly) come from physical interactions. Brushing it off as "evolution" is not sufficient to explain the how.
You’re right, I have no idea how and that’s hard to figure out. Probably I misunderstood the statement of the problem by only focusing on the why part.
Coming from an engineering background, I would say we need to be looking for self-perpetuating loops of neuronal activity (“strange loops” may be quite an appropriate concept), but how we would go about looking for them, I have no idea, because I’m not up to date on modern brain scanning tech.
Why is the hard problem even here? Consciousness isn't magic, because the Chinese room is conscious, and it has nothing to do with quantum mechanics, because of paradoxes.
That still doesn't explain why I'm conscious, something that I have direct, first-hand experience of.
> Consciousness isn't magic, because the Chinese room is conscious
That doesn't explain anything.
Does that explain to the person experiencing consciousness why they are? No it doesn't.
It just says "something else is conscious so you are too". Which is not an explanation, it's circular.
Is it relevant if the Chinese room is conscious as well? Not really.
I am curious, though. Do you consider a system (such as a Chinese room) to be conscious if it's only implicit, by writing down the rules it should run, without actually performing any of the rules? What if it's so implicit that we don't even write down anything, we just refer to it by name, and assume we would create the rules if we needed to as the first steps in execution? Is it conscious when nothing happens at all, but it could happen? If yes, does that mean every possible thing that could occur is conscious even if it doesn't occur? Every physical possibility is conscious? The whole world of abstract mathematics is conscious? If the answer to any of those is no, where do you draw the line between conscious things (Chinese room) and not-conscious things?
The hard problem suggests that consciousness is magical, in which case it could mess with physics. But if consciousness isn't magic, then the hard problem is a problem of understanding, not a problem of physics.
>without actually performing any of the rules?
The Chinese room works like a human mind, so it should run to be conscious.
You might consider reading the original paper. Because the experiment doesn't rely on conscious observers at all. (Unless you think photons are conscious, which I'm assuming you don't.)
Well, that's the point. The experiment shows that one of three assumptions has to be false. But one of these (AOE) being false is not surprising or concerning at all, when dealing with non-conscious observers (such as photons, indeed). With conscious observers it would be concerning to some (not to me, but to Wigner and presumably the authors), because of the supposition that there's something special about conscious observers (i.e., magical thinking, IMHO). So what has this paper really shown? Absoluteness of Observed Events is false when the observers are photons, which (my layman's assumption) should surprise exactly nobody. And then they propose to repeat the experiment with an AI in a quantum computer as the observer. Lol.
> Absoluteness of Observed Events is false when the observers are photons, which (my layman's assumption) should surprise exactly nobody
I'm with you but is this experimental result not new though? It seems like they're on our side and confirming what should be a non-surprising result, which is good work.
Definitely not new, the double-slit experiment in 1801 first hinted that this might be the case, and then Heisenberg calculated exactly how 'not absolute' observed events are based on the wavelength (aka momentum) of the photon you're using (in 1927).
> if one simply accepts that a conscious observer can be in a superposition like any other piece of matter, the paradox is resolved.
It’s not so simple. Suppose you’re in a box and observe a quantum experiment, and then I open the box and observe you. Then before I open the box you’re in a superposed state |x> + |y>, corresponding to the two possible outcomes x and y. Fine, no problem so far. But what is your own subjective experience? You, subjectively, inside the box, will only ever observe yourself to be in |x> or |y>, never |x> + |y>. Even if from the outside your brain can be said to be in a superposed state, your experience of the world is not superposed.
The |y> superposition of you has one experience, and the |x> superposition of you has a different experience. The |y> version has no information about the |x> experience, and vice versa, because those two states are orthogonal in Hilbert space. This is essentially the prediction that results in the Everett many worlds interpretation.
If you somehow had an experience of "both outcomes simultaneously", that would violate the quantum mechanical prediction that there is no mutual information between the two superpositions. There are two brain states, an x state and a y state, and they know nothing about each other.
To each brain state (with the limited information available to it) it would appear that something "definite" had happened, even though in the global picture, a superposition still exists.
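The "no mutual information between the branches" claim can be made concrete with a tiny state-vector sketch (my own illustration, not from the thread): two orthogonal "observer" states and their equal superposition. The zero overlap <x|y> = 0 is what "the branches know nothing about each other" cashes out to mathematically.

```python
from math import isclose, sqrt

x = [1.0, 0.0]                  # brain state: "I saw outcome x"
y = [0.0, 1.0]                  # brain state: "I saw outcome y"
s = [1 / sqrt(2), 1 / sqrt(2)]  # the superposed global state (|x> + |y>)/sqrt(2)

def inner(u, v):
    """Inner product of two real state vectors."""
    return sum(a * b for a, b in zip(u, v))

overlap = inner(x, y)       # 0.0: the branches are orthogonal
p_x = inner(x, s) ** 2      # 0.5: Born-rule weight of the x branch
p_y = inner(y, s) ** 2      # 0.5: Born-rule weight of the y branch

assert isclose(overlap, 0.0, abs_tol=1e-12)
assert isclose(p_x, 0.5) and isclose(p_y, 0.5)
```

Each branch gets Born-rule weight 1/2, but the projection of one branch onto the other is exactly zero, which is the formal sense in which neither "version" of the observer can access the other's experience.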
Either you believe in collapse, so there are two of you, who each observed different things, and one of those will at some point cease to exist, or you embrace the Everett interpretation, and so there are two of you who each observe different things and go on to live their separate lives.
Both paths you laid out are built upon the idea that superposition happens. Is there any serious consideration these days that superposition itself is a dead end?
When people compare QM theories they often do so on the basis of the original experiments that were used to develop them. But as a discipline, physics has moved well past those. Superposition has been proven and tested to exist, and we have long since moved past that phase and started building things, like quantum computers, on top of it. At this point, pretty much the only way superposition doesn't exist is if, every time a physicist does something that needs superposition to work, a devil figures out what kind of result is needed to fake it and does that.
Trying to challenge superposition would get the same kind of reaction from physicists as trying to challenge the existence of electrons would get from people who build circuits. Like, if you have some interesting new theory, I am intrigued, but do understand that to get people to follow it you will need to explain how it replicates the results of superposition in so many different cases that you won't be able to enumerate them in a week.
I didn't know that. If you have more information I'd love to learn about it. I was under the impression that collapse wiped out all information about any other states.
Many-worlds interpretation takes care of this. Anyway, how do you know your experience of the world cannot be superposed? Have you ever been in superposition? We make up stories to maintain consistent histories all the time.
> how do you know your experience of the world cannot be superposed?
For your experience of the world to be superposed it would mean that you carry out a quantum experiment with two mutually exclusive outcomes |x> and |y> and you actually experience the superposed result |x>+|y>. This would be like opening the box in the Schroedinger's cat experiment and actually observing the cat to be |alive>+|dead>, instead of either alive or dead. Maybe it's possible, but such an experience has never been reported.
It's "possible", but not really possible. The larger the object is, the harder it is to maintain superposition. Even the slightest nudge will tend to push it all into one state or other. Seeing the superposition directly would be like standing twenty octillion strands of spaghetti on end.
So it's "possible" but would never happen in a trillion lifetimes of the universe. You can tell the difference between that and "impossible" by watching the quantum mechanics work for a few isolated particles, and observing what it means for them to fall out of superposition equilibrium. But in practical terms, it's equivalent to impossible.
Nonlocality basically means that the universe has a global RNG state. If you write some code that uses a global RNG and no other global variables, that code will still be "local" in the sense that functions cannot communicate information between each other using the RNG. But the outcomes of the functions might still be correlated in interesting ways (corresponding to entanglement).
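A minimal sketch of that analogy (names and structure are my own): two measurement functions read one piece of hidden shared randomness and never exchange a message, yet their outputs always agree.

```python
import random

class SharedCoin:
    """One hidden random draw, readable from two 'distant' places."""
    def __init__(self, seed):
        # the "global RNG state of the universe" makes one draw
        self._value = random.Random(seed).choice(["up", "down"])

    def measure_a(self):
        return self._value   # reads shared state, receives no message

    def measure_b(self):
        return self._value   # same shared state, so the same result

coin = SharedCoin(seed=7)
assert coin.measure_a() == coin.measure_b()   # perfectly correlated
```

The caveat is that this is a classical hidden-variable picture: Bell's theorem shows real entanglement produces correlations that no pre-shared value can reproduce across all measurement settings, so the global-RNG picture is an analogy for the "no signalling" part, not a full model.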
No, non-locality simply means that effects happen at infinite speeds, not that they carry information.
For example, the Copenhagen interpretation of QM abandons both locality and realism - particles don't have definite states, and they also communicate at infinite speed (but in a way that can't carry information).
Basically when you measure the state of two particles that are entangled, you find that their state is correlated in some way. For example, they may be entangled in such a way that their spins are the same. So if you measure one to be spin up (when measured along any axis), the other will also be spin up (when measured along the same axis). This happens regardless of the separation distance between the two particles.
By carefully adjusting the axes along which you do your measurements, you can prove that the spins that you get are not predetermined (this is Bell's Theorem), and yet the particles are not communicating at the speed of light with each other either (the correlation remains even if you move the particles far apart and then do the measurements at the same time). Note that the actual spin value itself is random, and once you do the measurements the entanglement is broken, so you can't use this to transfer information faster than light.
Hence what the parent poster means by "communication without information": each particle individually appears to be completely random, and yet when you compare them you see both particles are random _in the same way_ (or in the opposite way, particles can be anti-correlated too).
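The arithmetic behind this can be sketched exactly (my own toy code; real-valued amplitudes suffice here). For the "spins the same" state (|00> + |11>)/√2, the probability that two detectors agree depends only on the angle between their measurement axes, and equals 1 when the axes coincide:

```python
from math import cos, sin, sqrt, isclose, pi

def basis(theta):
    """(plus, minus) measurement basis along angle theta in the x-z plane."""
    c, s = cos(theta / 2), sin(theta / 2)
    return ([c, s], [-s, c])

def amp(u, v, state):
    """Amplitude <u|<v|state> for a 2-qubit state vector [00, 01, 10, 11]."""
    kron = [u[0]*v[0], u[0]*v[1], u[1]*v[0], u[1]*v[1]]
    return sum(k * a for k, a in zip(kron, state))

def p_same(a, b):
    """P(both detectors agree) for the entangled state (|00>+|11>)/sqrt(2)."""
    state = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]
    (pa, ma), (pb, mb) = basis(a), basis(b)
    return amp(pa, pb, state) ** 2 + amp(ma, mb, state) ** 2

assert isclose(p_same(0.3, 0.3), 1.0)                 # same axis: always agree
assert isclose(p_same(0.0, pi), 0.0, abs_tol=1e-12)   # opposite axes: never agree
assert isclose(p_same(0.0, pi / 2), 0.5)              # 90 degrees apart: coin flip
```

The resulting cos²((a−b)/2) dependence on the axis angles is exactly the piece that classical "predetermined spins" models cannot match for all angle pairs, and it is what Bell-test experiments measure.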
Sorry but there is nothing obvious about any of it. It all boils down to the measurement problem and entanglement. While I don't like psi-epistemic interpretations either, it's not like there is anything clearly inconsistent with them, or like psi-ontic and psi-complete interpretations make more sense or are devoid of issues. They all have problems and make assumptions that are hard to swallow, and if you believe any of them and start ridiculing others you are pretty much picking sides with no good evidence. So while you might have already pledged your faith, other people with probably as much understanding of the issues or more haven't.
Translating the thought experiment to an actual physical experiment is something worth appreciating.
A possible lack of absoluteness of observed events has implications for what observations may or may not be reconcilable using existing scientific methods.
> "From the point of view of the friend, the measurement result was determined long before Wigner had asked about it, and the state of the physical system has already collapsed. When now exactly did the collapse occur? Was it when the friend had finished their measurement, or when the information of its result entered Wigner's consciousness?"
This, to me, shows just how hand-wavy the whole superposition/quantum-collapse stuff really is. I have to say I side with Einstein in his view that everything is already in one state or the other. We don't cause the system to choose a state when we measure it; we simply discover the state it was already in. It is one thing to assign a probability to which state we will find it in. It is quite another to build an entire theory around quantum measurement.
There is nothing mysterious about this paradox. The friend performs the measurement and discovers the state of the particle/system. Wigner doesn't know about it, but that doesn't mean it's not in that state; he just isn't aware of it yet. Treating this as a paradox just seems like silly mind games.
> how hand-wavy the whole superposition/quantum collapse stuff really is
Superposition isn't particularly hand-wavy. It's the basis of various technologies, and of precise numerical models we use to build technologies.
Quantum collapse is hand-wavy, and that's because it is not well understood. It might not exist (in which case superposition of mind-states is a thing), or it might exist (in which case mind-states may be definite). Quantum mechanics that we can calculate doesn't give an answer either way, even though it gives lots of other answers very precisely and correctly.
> Wigner doesn't know about it, [...] just seems like silly mind games.
When only applied to a single measurement it does seem like silly mind games, you're not wrong.
The motivation for those thought experiments was to try to reason back from the consequences found from more complex behaviour observed with multiple measurements, back to a simple system with only one measurement.
But when you have many measurements with entanglement, of which there are numerous physical experiments and confirmations by now, the idea that there are just multiple, correlated probabilistic states ready to be observed that we're not yet aware of is not consistent with physically measured behavior. Those measurements aren't hand-waving; they are hard data, even though my explanation here is hand-waving and doesn't go into it.
> I side with Einstein with his view that everything is already in one state or the other
Numerous experimental results now contradict this view, or require other strange things to be true about the world (for example superdeterminism was mentioned in the article).
Your position is more or less defensible, but you will have to accept some compromises to your description of the universe that most physicists find untenable.
These experiments were really the nail in the coffin for Einstein's position for most physicists. Having to admit non-local hidden variables is a pretty distasteful result.
QM is not hand-wavy and gives precise results for many practical problems. Your theory is ruled out by one of the simplest quantum experiments: the single-electron double-slit experiment.
The variation of the double-slit experiment that supposedly proves that observation "changes" the outcome is only a thought experiment, and therefore can be flawed due to incorrect thinking. Obviously, the basic double-slit experiment, which shows the wave/particle duality of light, is done in practice and therefore empirical.
I, like most on here, have followed your thinking at some point, but have had to ditch our intuition along with common sense and a good part of our sanity when we found out we were lost down the rabbit hole after Schrödinger's cat ate our breadcrumbs.
An observer changing the outcome is a very real, repeatable experiment with non-wavy results.
It's similar to the problem that has been highlighted here by OP. To quote the article I posted:
"Indeed, the results of both Truscott and Aspect’s experiments shows that a particle’s wave or particle nature is most likely undefined until a measurement is made. The other less likely option would be that of backward causation – that the particle somehow has information from the future – but this involves sending a message faster than light, which is forbidden by the rules of relativity. "
It's interesting the way some scientists have been biased into this kind of magical thinking, probably because the integrity of conscious observation is so central to their practice/worldview. Nobody is free from the pressure of cognitive dissonance.
Whenever I hear about weird shit from the realm of quantum physics (this theory, the double slit, etc.) I can't help but think:
Why would I hardcode values for imperceptible objects, when it would take an enormous amount of RAM and CPU time to constantly update those values on the off chance they're needed?
Much more efficient to optimize for what the _player_ can see from their perspective. Oh, and I should probably code in some error handling for the fluke event that one of these particles is detected; I'll just calculate their position retroactively, the user will never be able to tell, and we can host way more players due to the reduced memory.
It would seem to me that god is a junior dev and no one reviewed his pull requests.
That model of efficiency (RAM and CPU time) is based on classical computation.
When your underlying model of computation is full quantum computation, it's simpler to just run everything at once. It takes no energy if you run everything without picking out a scenario (in a kind of "tree falling in a forest" way), and more energy if you select specific scenarios to look at what happened (I/O is expensive). Counter-intuitively, the computation part of quantum systems is free in ways that classical computers are expensive: it's reversible and doesn't consume any energy.
That may seem like it's avoiding the point; after all, what does it take to run the "underlying model of computation"?
But what I'm trying to say is that "quantum all the way down" (see also turtles) is as much a valid model as "mechanical computation all the way down", which your picture relies on. Neither of them is more fundamental.
It may seem like quantum-all-the-way-down is a bit artificial, because we can in principle run quantum simulations on classical computers, which seem simpler. But it turns out we can't: there is a fundamental intractability barrier above a very small system size, which means we can only simulate interesting quantum systems using other quantum systems. It really is quantum-all-the-way-down.
If god came up with the quantum-all-the-way-down version, I'd say that's pretty clever, because it's way more efficient than anything you would implement, with your old-school classical RAM and classical CPU.
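The intractability barrier is easy to make quantitative (back-of-envelope numbers of my own): a dense classical simulation of n qubits needs 2^n complex amplitudes, so memory alone explodes long before anything macroscopic.

```python
def state_vector_bytes(n_qubits):
    """RAM for a dense n-qubit state vector at 16 bytes per complex amplitude."""
    return (2 ** n_qubits) * 16

assert state_vector_bytes(30) == 16 * 2 ** 30   # ~17 GB: a large workstation
assert state_vector_bytes(50) == 16 * 2 ** 50   # ~18 PB: beyond any machine's RAM
assert state_vector_bytes(300) > 10 ** 80       # more amplitudes than atoms in the observable universe
```

Clever tricks (sparsity, tensor networks) push the wall out a little for special states, but for generic entangled states the exponential scaling stands, which is the sense in which "quantum all the way down" can't be swapped for a classical substrate.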
One problem with this view is that “unobserved” particles still take up a large (possibly larger?) amount of computation. Rather than being in one position, a particle acts like a wave, being a little bit in every allowable position. These probability waves also interact with each other (which is what the double slit experiment demonstrates).
That being said, if it were demonstrated that uncollapsed wave functions are somehow more efficient to calculate, that would definitely give credence to the simulation hypothesis.
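The "waves interacting" part can be sketched numerically (a toy two-slit model; the dimensions are made-up but plausible): the screen intensity comes from adding the two path amplitudes before squaring, so the pattern is not the sum of two one-slit patterns.

```python
from cmath import exp
from math import sqrt, isclose, pi

def intensity(x, slit_sep=1e-4, wavelength=5e-7, distance=1.0):
    """Toy screen intensity at position x (meters) for two slits."""
    amps = []
    for slit_x in (-slit_sep / 2, slit_sep / 2):
        r = sqrt(distance ** 2 + (x - slit_x) ** 2)   # path length to screen
        amps.append(exp(2j * pi * r / wavelength))    # unit amplitude per path
    return abs(sum(amps)) ** 2                        # add amplitudes, THEN square

# Center: paths are equal, constructive interference -> 4x a single slit's 1.0
assert isclose(intensity(0.0), 4.0, rel_tol=1e-6)
# First dark fringe (~ wavelength * distance / (2 * slit_sep)): near zero,
# even though each slit alone would contribute intensity 1.0 there.
assert intensity(2.5e-3) < 1e-3
```

The dark fringe is the interesting bit for this thread: adding the second slit *reduces* the intensity at that point, which a "particles with fixed but unknown positions" bookkeeping scheme cannot reproduce.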
> That being said, if it were demonstrated that uncollapsed wave functions are somehow more efficient to calculate, that would definitely give credence to the simulation hypothesis.
Well said. I can certainly imagine a few functions that may prove to be more efficient at generating waves than fixed known positions + velocity for every subatomic particle in the universe.
Here's the part that really takes us off the rails: if we assume for a minute that we are in a simulation, that the parent world has godlike resources compared to our own, and that they likely have similar hardware concepts (RAM, CPUs, GPUs, maybe even ASICs), then which functions are more efficient would depend on which hardware they have less of.
If ram is plentiful why not have fixed known values for every particle? Store it in memory and let the gpu detect collisions.
If gpus are plentiful (my guess) and we're bound by ram limitations, best to only store positions of things that are visible and clear the rest out of ram for more important calculations.
Imagine how inefficient it would be to render and simulate black holes colliding on the other side of the universe if the players will never even notice. Just queue up the function and run it during off-peak hours. Save the extra server resources for other simulations running in parallel.
If I'm right (a big if), we're likely an anomaly or early prototype among the simulations, one in which the dev team never imagined a race would evolve and progress enough to measure the bounds of its container. And if I'm right on this last point too, we're likely being monitored closely so they can decide whether subsequent patches should spend more resources on simulating things completely, to keep players from realizing they're in a simulation, or whether players will accept it as a remote possibility and move on.
If we're in a simulation, I somewhat doubt whoever made it is really worried about "resources" and "efficiency". Whatever real universe exists may not even follow our laws of physics. Maybe energy is actually unlimited and free in the "real world". Maybe matter can be made from nothing, effortlessly.
Why would someone build a simulation of a world utterly unlike their own?
When we build simulations for ourselves, they're always attempting to approximate reality as closely as possible. The goal is to learn useful things about our own world or society and to try out many forking paths, in a simplified representation of reality.
If we're in a simulation it stands to reason that whoever is running it is somewhat human-like, and exists in a world that has basically the same physical laws ... or at least, similar enough that sociological and technological development would be the same. For instance, the speed of light barrier is pretty damn inconvenient for us but would be great at blocking an arbitrarily large population and state space explosion. And why are these magic physical constants so arbitrary anyway?
If we are in a simulation, and our simulators did want to limit their resource consumption, adding in a few physical laws that are never really a problem in daily life and which block us from colonising the galaxy would be a nice way to do it.
Conversely, I would wonder why anyone would simulate something so similar to their own reality. We've already seen with humans that history repeats itself endlessly. Humans haven't really fundamentally changed in thousands of years. And humans do studies of the sociology of other species all the time. I think it'd be much more entertaining to simulate a world whose laws are nearly the opposite of our own. I would want to see what a species is capable of when the restraints are lifted. Limiting them to a single planet seems boring.
As an aside, I think one of my personal arguments that we're in a simulation is that we live in such an interesting time. We're beyond a world with 95% farmers. Technology is advancing faster than ever before. It's such a critical time in human history and the 20th century is personally where I'd choose to start a human simulation. It's convenient that this is our shared spot in time.
disagree on the junior dev part and on the god part.
what you’re describing is called “simulation theory” and it has been proposed and discussed at length
i think lazy evaluation makes sense in that context. i also think that having a few basic rules and then applying them consistently across your simulation space makes sense. if your basic space unit of reality is way smaller than the sims in it can perceive and measure, they're gonna start making stuff up
> disagree on the junior dev part and on the god part.
Good lord, some people here are definitely on the spectrum. FYI that was an attempt at humor. In case you want to mimic human social behavior in the future: you don't _disagree_ with a joke, you either find it funny or you don't.
> what you’re describing is called “simulation theory” and it has been proposed and discussed at length
I'm aware of the theory, just like everyone else who's seen The Matrix or read Elon's Twitter feed. For the record, I was merely illustrating a point: both our universe and the code we write share certain optimizations that seem too coincidental to be random.
> Good lord, some people here are definitely on the spectrum. FYI that was an attempt at humor. In case you want to mimic human social behavior in the future: you don't _disagree_ with a joke, you either find it funny or you don't.
Maybe he was disagreeing with your entire assessment, and your joke was neither funny nor unfunny, but incorrect.
> god is a junior dev and no one reviewed his pull requests.
> some people here are definitely on the spectrum. FYI that was an attempt at humor. In case you want to mimic human social behavior in the future ..
_Your_ failed attempt at humour doesn't justify labeling and gaslighting the person who didn't "get it", if anyone can ever consider that to be a joke.
> For the record, I was merely illustrating a point
Illustrating a point or making a joke? In either case, it doesn't justify your "human social behavior" of attacking the personality traits of the person disagreeing with you. My suggestion to you: in the future, accept that someone can have a different opinion, and that you could be wrong, and try to counter that without resorting to labeling and gaslighting.
Being patronizing immediately after suggesting people here are on the autism spectrum is kind of insensitive to people who are autistic.
"God is a junior dev", is a mildly funny trope that has existed as long as I've been on the internet. I believe there's an xkcd about it where God says "we hacked most of it together with Perl".
Anyway, it's hard to tell what is and isn't a joke on the internet due to lack of vocal inflection. Maybe consider going easier on people when this happens in the future.
I always wondered about the "computability of the universe" and its compressibility.
If the universe is actually compressible, then object permanence may actually be a trick, like it is in video games. Objects are generated on demand and deleted to save space and processing.
If it's not compressible, which is where I lean, then the full universe must be fully computed every time. No savings can be made, and any delay in computing an object might propagate and cause recalculation cascades for other objects. I lean this way because, from my layman's understanding, quantum mechanics acts like a random seed, which increases entropy by a lot.
If it's not compressible, then the smallest computer able to simulate the universe is at least as big as the universe itself.
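The "object permanence as a trick" idea from the compressible case can be sketched in a few lines; this is how procedural games fake permanence by deriving everything from a seed rather than storing it (the function name and layout here are made up for illustration):

```python
import hashlib

def chunk_state(seed: int, x: int, y: int) -> int:
    """Deterministically derive a chunk's contents from the world seed
    and its coordinates -- nothing is stored between visits."""
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return int.from_bytes(h[:4], "big")

# "Object permanence" without storage: revisiting a chunk regenerates
# exactly the same state, so it *looks* persistent to an inhabitant,
# even though it was deleted the moment they stopped observing it.
first_visit = chunk_state(42, 10, -3)
second_visit = chunk_state(42, 10, -3)
assert first_visit == second_visit
```

The incompressible case is precisely the claim that no such seed-plus-function shortcut exists for our universe.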
The article lists three assumptions that one might intuitively think hold. The problematic assumption is this one:
> When someone observes an event happening, it really happened.
Given the experiment from the paper, I would rephrase this assumption as "it's not possible to rewind things". But obviously inside a computer simulation made up of operations that are all individually reversible, it's trivial to rewind things.
The experiment sets up a situation where certain information is reversibly recorded, and then the records are unmade by temporarily rewinding. The rewinding is obfuscated by hiding it inside of a measurement that is incompatible with the presence or absence of the record. I guess the authors might disagree about the measurement implying rewinding, but as a bit of evidence I'll note that Scott Aaronson and Yosi Atia have shown that performing the measurement in question over a simulated agent is at least as expensive as rewinding the simulated agent [1]. Whatever is being done, it is doing some seriously expensive screwing around with the agent's state. It's like the experiment has a step where you feed the human through a giant meat processing plant, and for some reason everyone is pretending that's not somehow important.
Basically, the authors are appealing to the intuition that humans are big complicated in-practice-irreversible things, so clearly a record is permanent if it has affected the state of a human. But then they imagine instantiating the human's state inside a ridiculously powerful computer capable of reversibly simulating time advancing and of performing operations that have been engineered specifically to mess with the presence or absence of the record's effects on the human's state. Surprise surprise, the record gets messed with. Then for the actual experiment the big complicated ball of dependent spaghetti that is a human is replaced by a nice simple photon going along one path or another path.
Based on my very limited understanding it seems that anything that interacts with a system can be treated as an "observer", and thus a chain of observations can occur as one particle interacts with the next, enlarging the system that is in superposition for the next observer.
Do physicists axiomatically treat a "conscious observer" as an atomic unit? It seems the case based on neurological and psychological research that the mind is actually not a single unit but many units working together, and the sense of a singular self is an illusion, which would correlate with the fact that we are made up of a conglomeration of atoms and systems made of these atoms.
When we say that a person is observing a quantum system, could this actually be a cascade of observations of various parts of the mind rather than a singular event, since the mind is not monolithic?
On a side note (and perhaps unrelated), if different parts of the brain/mind could be in different quantum states, could this be a factor in how the mind operates?
I don't know to what degree the metaphor is accurate but as a non-physicist I find thinking about quantum mechanical experiments easier if I think of them as running code on a vast decentralized computing infrastructure.
The concept of "measurement" is just executing code on one node that populates local registers: "creating", not "measuring". Yes, by running code you are pulling on a reality fabric that we still know almost nothing about. But I find it easier to visualize than relying on my colloquial/non-QM intuition about words like "measuring", which implies something like a single centralized reality database, not a creative decentralized infrastructure. QM experiments seem to create things, not measure them.
Would love to learn metaphors of folks in the space.
I’m just now teaching myself QM (20 years after school). I can tell you that in QM, seemingly innocuous phrases like “locality” have highly unintuitive, nonstandard meanings. I’m just now getting a glimpse at how amazingly fucking awful pop-sci QM articles are.
If you’re interested in QM, read “The Theoretical Minimum”; the second book gets into QM. There are video lectures of the same material that complement the series as well.
This perspective is interesting and potentially valuable, but it must be emphasized that quantum mechanics has linear evolution and hence “no cloning”. That means, when the registers are quantum objects, there is no such thing as copying a value into a register — there is only one version of each object, and you’re always manipulating it directly. That’s one aspect which leads to very difficult questions when thinking of “creating” at local nodes — because it’s the same object/copy that’s accessed at different nodes by various observers — so how can we ensure consistency of observations?
Thanks, understood- the metaphor wasn't intended to imply the registers themselves were quantum objects, more that the process of observation / measurement of some quantum phenomenon- metaphorically- could be a creative/derivative process. That metaphor/mental model would conflict, to some degree, with the centralized/cache coherency metaphor you cite, that aligns more with colloquial understanding of "measure."
Whether one is more appropriate than another I don't know. But I find the decentralized model produces more interesting naive questions for me to puzzle over. Of course those questions likely lead in a completely wrong direction. Cheers, thanks for engagement.
An alternative way to view measurement is that you're just entangling yourself with the quantum system, and are as much in superposition as the system itself.
Presumably, nothing was in superposition when the universe began. Everything after that evolved in time according to the Schrodinger equation, with everything becoming more superposed (i.e. their exact properties less certain) and entangled with everything else to at least some degree. Things can be more or less entangled with other things, and when you measure something you become very strongly entangled with it.
The part that seems the most dubious to me is the "choice":
> 2. It is possible to make free choices, or at least, statistically random choices.
I don't see what would be really compelling in that.
If it's false it would just mean that our actions are all kind of mechanistically decided by other events, and that does not shock me.
Other point:
> 1. When someone observes an event happening, it really happened.
What if it really happened in the reality of the observer (and maybe not in other realities)? What is meant by "reality"? Some insight is to be gained by making precise what we call reality, in relation to the corpus of all the observations in what we believe is our universe.
The main problem of quantum interpretation is the supposed, artificial, distinction between observer and observed system. In a quantum world this distinction is not possible, every event capable of "observing" a system is itself part of the system, modifying it. That's why such interpretations (postulating this separation) give all kinds of strange paradoxes that are impossible to solve. In the quantum world, there is no such thing as an observer, it is just all part of the system.
Consider for example the system composed of a pool table. In classical physics we can assume we can always measure the position and momentum of all balls in the table. But what would happen if measurement was not possible without big changes in position and velocity? Suddenly the whole notion of "observation" would lose its meaning, and the only way to describe the pool table is as a probabilistic mesh of balls moving around in uncertain directions.
Yes, I think you put your finger on it. As the observer is part of the system, his "reality" is as much superposed, entangled, etc., as the quantum events that compose it (himself and his observation too).
That would give credit to the multiverse interpretation of quantum mechanics then.
Maybe the observation process of a quantum phenomenon by a human observer is akin to the orthogonalization of a matrix:
as the observer becomes entangled with the observed particle, each possible observer-eigenstate (i.e. any observer in "his reality") ends up observing an eigenstate of the observed quantum phenomenon, instead of a dirty superposition.
To call it "orthogonalization of the observer-observed system" would be more descriptive than "wave-collapse", but the main difference is that the orthogonalization treats in parallel all possible outcomes, each linked to a different state (outcome) of the observer too, whereas the "collapse" view insists on the fact that the observed eigenstate is unique. And moreover, at least in mathematics, there are conditions for the orthogonalization to be possible at all, so that should be interesting.
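For what it's worth, the standard mathematical version of this "orthogonalization" is the Schmidt decomposition, which is just an SVD of the joint amplitude matrix; here's a minimal sketch with a made-up two-level observer and system:

```python
import numpy as np

# Joint state of observer (rows) and system (columns): C[i, j] is the
# amplitude of |observer_i>|system_j>. A toy entangled state:
C = np.array([[0.6, 0.0],
              [0.0, 0.8]])

# Schmidt decomposition = SVD of C: each singular value s[k] pairs one
# orthonormal observer state u[:, k] with one system state vh[k, :],
# i.e. each "observer eigenstate" sees a definite system eigenstate.
u, s, vh = np.linalg.svd(C)

branch_probs = s ** 2   # weight of each observer/system branch; sums to 1
```

The SVD always exists, which answers the existence worry at least for finite dimensions; the interpretive question of whether one branch is "the" outcome is exactly the collapse-vs-parallel disagreement described above.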
Well, the problem is that the reality we observe daily is not at all like quantum reality. We know exactly where our instruments are, and at the same time we know that they are completely stationary. There is no particle/wave duality to tennis balls.
This is the origin of the problem of observation - the predictions of quantum mechanics break down at some point, and we have no idea why. In particular, this happens very clearly when our measurement instruments directly interact with a quantum system, so we call this 'the measurement problem'.
Of course, we can postulate that in fact, despite all of our observations, both subjective and objective, the classical world also behaves quantum mechanically but we just can't notice it happening (e.g. the Many-Worlds interpretation does this).
We can also say that the quantum world does NOT in fact behave that way, but that the universe is perfectly deterministic and just set up in such a way that we can never observe events that contradict this vision of the universe. Superdeterminism claims this: for example, you will always happen to choose the right dial on a measurement apparatus to observe entanglement, but there is no fundamental reason why that should happen. It could theoretically happen otherwise, but in this universe it has been decided, ever since the Big Bang, that at some point you will set that dial in the particular way that will make you observe the entanglement result.
There are other theories that postulate that quantum mechanics is in fact deterministic but very hard to predict, with non-local effects (the de Broglie-Bohm pilot-wave theory is such an example).
We can also ignore the whole hope of trying to understand all this, and simply use the math if and when we need it to predict particle phenomena.
Even in the classical world we measure that pool table by interacting with it. If I stand next to the table and see it, it's because light hit it and then hit my eyes; but I've absorbed whatever light might otherwise have continued through me as an observer and perhaps eventually hit the table again (attenuated), and I also happen to emit some light myself. My body being present and warm, my breathing of the air in the room, etc., creates convection currents of air, which affect the fluid mechanics of the same volume of air that the balls are in.
My brain can't really perceive much of any of that and does a lot of filtering of information, and we've designed the pool table to be something we grasp and play with easily so the relative inertia of the balls will mostly overcome all the more minor terms to take into account wrt fluid mechanics, but all of that is classical physics already. "Closed systems" might only exist conceptually.
You may say that it's not surprising, but in fact all statistical-style reasoning relies on it. If the universe is superdeterministic, then no kind of statistical reasoning makes any sense, since everything is already set in stone; there is no probability to any event, it has either happened already or will never happen. Since there is no freedom or even randomness, there are no statistically independent events or measurements.
> no kind of statistical reasoning makes any sense
Probabilities are useful when you don't have perfect knowledge. But yes, they are an illusion. In truth, every event has a 100% chance of happening. And things that didn't happen aren't real events, they just seemed to be possible, but we were wrong.
Sure, but please stop requiring clinical trials for new medicine, as they would have no more relevance than throwing a coin. Oh, and stop throwing coins, as the result is correlated with the future decision you will make anyway.
In general, all of modern science, starting with quantum mechanics, depends on the idea that statistical independence exists. To truly abandon this idea, we would abandon almost all of the mathematical apparatus that has given us QM and all experimental observations of QM.
No principal investigator of a clinical trial has perfect knowledge of the current state of reality, nor the means to calculate subsequent states. Therefore statistics are very useful.
They may or may not have perfect knowledge, but if they don't have free will or any ability to randomly choose participants, then there is no reason to trust that the patients they choose are not correlated to the medicine they are testing, such that all of the patients they chose will have the maximum response possible to the medicine, with the least amount of side effects.
The whole premise of superdeterminism is that things which we deem to be uncorrelated are in fact correlated in such a way as to reproduce the observed violations of the Bell inequalities while still preserving local realism.
Bell-inequality violations have been observed even when the measurement 'decision' was left to signals coming from a pulsar some untold number of millions of light years away, implying that the particle you just generated today has its hidden state correlated with the state of that pulsar that many millions of years ago. So it seems that nature must be much more highly correlated than we would expect, and I don't know what you would trust as pseudo-random anymore.
Perhaps at some point we will be able to do the same triggered by fluctuations in the cosmic background radiation, proving that either quantum entanglement is not both local and real, or that the state of the two particles you just generated was decided at the Big Bang in such a way as to make the Bell-inequality violations appear genuine.
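As a sanity check on what these experiments actually measure, the CHSH form of the Bell inequality can be simulated directly from the singlet state: local hidden variables cap |S| at 2, while the quantum prediction reaches 2*sqrt(2). This is the standard textbook setup, nothing specific to the pulsar experiment:

```python
import numpy as np

# Spin measurement along angle t in the X-Z plane: cos(t) Z + sin(t) X.
def obs(t):
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(t) * Z + np.sin(t) * X

# Singlet state (|01> - |10>) / sqrt(2).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def E(a, b):
    """Correlation <A(a) x B(b)> for the two entangled particles."""
    return psi @ np.kron(obs(a), obs(b)) @ psi

# Standard CHSH angle choices for the two sides.
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
# |S| comes out to 2*sqrt(2) ~ 2.83, beating the classical bound of 2.
```

Superdeterminism's move is to deny that the angle choices are independent of the hidden state, so this gap never forces you to give up local realism.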
Nice video. Now I wonder if God uses structurally shared data structures in his implementation of the world. I also wonder if there can be a kind of continuous description of branching that is more amenable to calculus than the discrete tree-like branching.
I think MWI actually predicts continuous branching. The prescription for MWI is just to run the Schrodinger equation forwards in time. So the branching would look like one blob of probability gradually becoming bimodal, and then splitting into two components with less and less overlap. (There might also be 3-or-more-way branches, not just binary splits.)
Great video. I have a question though: is there a non-zero probability that any two versions of the same person in many worlds could interact with each other?
> if the "friend" is a human-level artificial intelligence running in a massive quantum computer.
Well, there's your problem right there. No AI can run on a quantum computer. Intelligence is inherently a classical phenomenon because it requires copying information, and quantum states cannot be copied, only classical states can be copied.
The classical world "emerges" from the quantum world when you take a quantum system and choose to consider only a subset of its degrees of freedom. When you do that, what results is a "mixed state" which behaves classically. This is not a reflection of any (meta)physical reality, it's just a consequence of your point of view. But here's the thing: you yourself are a subset of a quantum system, and so you yourself have no choice but to take this point of view. You cannot ever take the "god's-eye view" and "see" the whole system. That is fundamentally impossible because "seeing" requires copying information, quantum states can't be copied yada yada yada. Even God cannot "see" the god's eye view!
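The "mixed state from ignoring degrees of freedom" step is easy to demo numerically: trace out one half of a Bell pair and the remaining qubit is an incoherent 50/50 mixture, i.e. it looks classical (a minimal sketch of the standard partial-trace calculation, not tied to any interpretation):

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2) and its full density matrix.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reshape to indices (i, j, i', j') and trace over the second qubit
# (sum the repeated j index): this is "considering only a subset of
# the degrees of freedom".
reduced = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))
# reduced == diag(0.5, 0.5): a 50/50 mixture, no off-diagonal coherence.
```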
I'm not convinced that intelligence requires copying information in the way that the no-cloning theorem forbids. Approximate cloning is not forbidden for example.
The problem is not copying internal states, the problem is I/O. A given instance of a quantum computation can only be queried once before it decoheres and goes "poof" because the querying process itself decoheres the system. So a quantum computer can never pass the Turing test because it cannot even participate in the Turing test without destroying itself.
> The problem is not copying internal states, the problem is I/O. A given instance of a quantum computation can only be queried once before it decoheres and goes "poof" because the querying process itself decoheres the system. So a quantum computer can never pass the Turing test because it cannot even participate in the Turing test without destroying itself.
You seem to be saying that a computer can only be called a quantum computer if it never decoheres. Or perhaps that it is only usable in some sort of batch-mode, like mainframes with punch-card decks, with each calculation cycle being a laborious one-shot affair.
I have little doubt that many near-future quantum computers will be used in exactly that mode, but also that at some point thereafter their coherence/decoherence cycles will start to be measured in Hz, kHz, etc.
Therefore there should in principle be no barrier to a quantum computer participating in a Turing test, or any other use-case that requires interaction.
> You seem to be saying that a computer can only be called a quantum computer if it never decoheres.
Not quite. I'm saying that in the context of using a quantum-computer to play the role of Wigner's friend, it must be isolated from the ultimate observer. If you're not going to require that, you might as well just do the experiment with an actual human.
Requiring that the quantum algorithm have literally no interaction with the outside world to count as quantum is a strawman.
An AI could use Grover search when trying to find a good enough response. That would be like part of the AI going into and out of superposition for an efficiency boost.
Many quantum tasks are more efficient if certain catalyst states are present [1][2]. That would be like a part of the AI staying permanently in superposition while still ultimately contributing to classical results, again for an efficiency boost.
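The Grover speedup mentioned above is easy to sketch with a plain statevector; this toy version searches 8 items and concentrates nearly all the probability on the marked one in about sqrt(N) steps (a minimal illustration, not tied to any real AI architecture):

```python
import numpy as np

# Toy Grover search over N = 8 items, simulated with a plain statevector.
N, marked = 8, 5
state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all items

def oracle(s):
    s = s.copy()
    s[marked] *= -1                  # flip the phase of the marked item
    return s

def diffuse(s):
    return 2 * s.mean() - s          # inversion about the mean

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~2 iterations for N = 8
    state = diffuse(oracle(state))

probs = state ** 2                   # ~94% of the probability on item 5
```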
Let's not lose the plot here: we're talking about a Wigner's-friend-type thought experiment. If you're not going to require that the quantum computer be isolated, then you may just as well use an actual human.
Agreed. The measurement at the end of the experiment requires the agent's state to stay in superposition with no leaks (or perhaps only trivial leaks that can be patched over).
> Intelligence is inherently a classical phenomenon because it requires copying information, and quantum states cannot be copied, only classical states can be copied.
Ok, so make classical pointers to quantum states and copy the pointers instead of the quantum states :)
A good point, but quantum computation can do anything that reversible classical computation can. If you do your entire computation while keeping all the qubits in the {|0>, |1>} basis, then performing a CNOT onto a qubit initialized to |0> will work as a classical copy. (It will fail to truly copy the quantum state, but that's fine.) You just need a large enough supply of qubits initialized to |0> for all the copies you plan to do.
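That CNOT-as-classical-copy trick, and the way it fails on superpositions, can be checked in a few lines of linear algebra (a minimal sketch of the standard result):

```python
import numpy as np

# CNOT on (control, target), in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Basis states copy perfectly: |1>|0> -> |1>|1>.
assert np.allclose(CNOT @ np.kron(ket1, ket0), np.kron(ket1, ket1))

# A superposition does NOT copy: (|0>+|1>)|0>/sqrt(2) becomes the
# entangled state (|00>+|11>)/sqrt(2), not two independent copies.
plus = (ket0 + ket1) / np.sqrt(2)
out = CNOT @ np.kron(plus, ket0)
assert np.allclose(out, (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2))
assert not np.allclose(out, np.kron(plus, plus))
```

So as long as the computation stays in the computational basis, "copying" is fine, which is exactly the point being made above.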
> quantum computation can do anything that reversible classical computation can
Well, yeah, but that "reversible" hedge is quite significant. Reversible means you have to retain all of your intermediate states, and it is far from clear that you could build a reversible AI that would run for more than a few milliseconds before consuming all of the available resources in its light cone.
> You just need a large enough supply of qubits initialized to |0> for all the copies you plan to do.
Yes, and that produces exactly the same problem: it is far from clear that there are enough qubits in the universe to build a quantum AI this way.
You also have a more fundamental problem, which is that neither quantum computers nor reversible classical computers can do I/O, so how are you going to demonstrate that this thing is in fact an AI? It can't even participate in a Turing test, let alone pass one.
If your goal is human level AI, there are definitely enough qubits. Humans can subsist for quite a while on just a sandwich and a large room full of air. And that is with very inefficient usage of the available negentropy.
I guess one would have to demonstrate that the program implements an AI by running it on a computer with I/O, then move the same program over to an isolated quantum computer once it's passed that test. Maybe not completely philosophically satisfying, but should work in practice.
> Intelligence is inherently a classical phenomenon because it requires copying information, and quantum states cannot be copied, only classical states can be copied
I don't agree with this. The recent research in quantum biology[1] suggests that biology at least can read from quantum states -- birds may use it to navigate, our sense of smell may be a quantum interaction, and plants appear to use quantum properties to achieve almost perfect efficiency for photosynthesis. The human eye may be able to detect a single photon.[2] General quantum activity also seems possible, since lithium-6 and lithium-7 appear to produce different pharmacological results despite being only one neutron apart[3].
I also fall into the camp of supporting the Orch-OR theory of consciousness[4], which claims that our interpretation of a classical system for brains is incorrect, and that neurons have internal structures called microtubules which can maintain quantum states. Microtubules have been known to exist for decades, but it was thought that their purpose was purely structural. Anirban Bandyopadhyay demonstrated[5] that these structures appear capable of storing information at the quantum level.
Orch-OR would explain how anesthesia works, and how organisms with classically simple brains are capable of extremely complex lifecycles and self-organization. I don't know enough about neuroscience to have any real insight into its validity, but there seems to be a huge disconnect between where we think we are with AI and the results so far. I have watched a butterfly navigate through a chain-link fence on a windy day, and that kind of computation doesn't seem possible in a classical model, from a space or an energy perspective.
> Even God cannot "see" the god's eye view!
The strongest argument I see for a concept of God is the fact that the universe doesn't break. Quantum brains provide some interesting possibilities for the privacy of thought co-existing with some sort of perfect originator. In any case, given the amount of computation it would take just to model the biology of one person and the underlying quantum states, I am almost positive that we are not in a simulation.
For the purposes of being a stand-in for Wigner's friend, no, obviously not. The whole point of the thought experiment is that the "friend" observer exists in a superposition.
That depends on what you mean. If you mean using a simple system (like a single photon) as a stand-in for Wigner's friend, that has already been done. That's what the OP was about. If you mean using a complex entity like an actual human or an AI that can pass a Turing test, I'll give long odds against.
The latter... I ask because every time I read about some thought experiment, it turns out to be an impossible experiment.
In a recent story I asked in general, and others told me that there were thought experiments that became proper experiments. But it seems that they were somewhat different, more like pre-experiments. When something gets stuck in the thought-experiment category, it's more like a no-experiment.
I have no problem admitting that this kind of lucubration is useful to anticipate problems. But extracting conclusions from an impossible setup? That's like asking what would happen if an irresistible force meets an immovable object.
Entanglement means that there are systems with highly (more-than-classically) correlated state - that the combined state is something that can’t be represented by considering the systems independently - not that the states are the same. The no-cloning theorem prohibits a process that takes the state of a system and replicates it on another system. It’s certainly possible to have or to generate entangled sets of systems that have identical separable state, but not to copy an arbitrary state.
> Take a look at these three statements [...] they cannot all be true
Okay, so we have to kill one. The easiest one to kill is
> It is possible to make free choices.
In other words, quantum mechanics is incompatible with free will. This is clearly shocking to any physicist who believes in Free Will (as defined by philosophers thousands of years ago). But hasn't Free Will been recognized as nonsense for at least two centuries by now?
> But hasn't Free Will been recognized as nonsense for at least two centuries by now?
Among some, sure, and it's not just physicists [1].
Acknowledging that the concept we've called free will is not likely to be real has some troubling implications for how we've structured our society, which is likely one of the reasons it's not talked about very often.
The members of a system can 'experience' free will from their perspective, but it's still a deterministic line of causation; their free will is part of the determinism.
In fact, all statistical experiments, from clinical trials to tests of Bell's Inequality, depend on the idea that there exists either free will or true randomness (and that we can base our decisions on that source of randomness). If neither exists, then there is no such thing as statistically independent events, and no way to make statistical observations (since that would imply that your choice of what to measure would always be correlated with the result of your measurement).
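To make the Bell-test part concrete, here is a small sketch (my own example, not from the thread): assuming measurement settings are chosen independently of the particles, local hidden variables bound the CHSH combination by 2, while quantum mechanics, using the singlet-state correlation E(a, b) = -cos(a - b), predicts up to 2√2. The independence assumption is exactly what goes away if there is neither free will nor true randomness.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin measurements at angles
    # a and b (radians) on the two halves of a singlet pair.
    return -math.cos(a - b)

# Standard CHSH angle choices: a, a2 for one side; b, b2 for the other.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local hidden variables (with independent settings)
# require S <= 2, but the quantum prediction reaches 2*sqrt(2).
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828, i.e. 2*sqrt(2) > 2
```

If the settings could be correlated with the hidden state of the particles (no independent choice), a purely classical model could reproduce S > 2, which is the superdeterminism loophole the comment is gesturing at.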
Has it ever happened to you that you read "a pillar of reality" and it turned out to be more than clickbait?
Me neither.
(To make it clear: the content is interesting, even if clearly aimed at a popular-science audience. Some statements about physics are false, e.g. "These are all intuitive ideas, and widely believed even by physicists." The title is worthy of a tabloid.)
this is not convincing at all. we pretend that we can have perfectly isolated quantum systems (a particle, a few particles) that we can reason about and extrapolate findings from. we cannot have isolated particles - they can be regarded as isolated for experimental purposes, but in reality the whole universe is probably one massive wave function that encompasses everything, and we can only approximate it in some really special cases (i.e. the systems above).
we also pretend that consciousness is something special and that altering the state of some somewhat complicated wetware has consequences for “reality”. it doesn’t.
the measurement problem is an extremely good example of how we think. a measurement does not make sense to a cow. it’s abstract. we made it up. we have conventions that help make it useful, but those conventions break down when our senses (or the things we use to amplify our senses) cannot measure what we want to measure.
One thing I always notice missing from discussions of quantum physics is the effect of scale. Do differences in scale have relativistic effects, in a similar manner to differences in inertial frames, and if so, at what point do events in one scale's frame of reference become simultaneous for an observer in another?
So, Schrödinger's cat has been alive the whole time, and it's the rest of us who are in a state of superposition, because the cat is conscious but can't observe anything outside the box? Or, because we are conscious and the cat is conscious, do we all retain our states regardless of whether we are in or out of the box?
Think of it as LOD generation in a game. As something observes or is near enough to observe, atomic matter is generated on demand. The Matrix; our simulation.
The paradox seems to rest on magical thinking about consciousness, and if one simply accepts that a conscious observer can be in a superposition like any other piece of matter, the paradox is resolved.
And, from the article:
For Wigner, this was an absurd conclusion. Instead, he believed that once the consciousness of an observer becomes involved, the entanglement would “collapse” to make the friend’s observation definite.
But what if Wigner was wrong?
Well, obviously Wigner is wrong, sorry for being flippant.
In this paper (preprint here: https://arxiv.org/pdf/1907.05607.pdf), the magical thinking about consciousness seems to be transmuted into "Absoluteness of Observed Events" (i.e. that every observed event exists absolutely, not relatively). The authors (it appears to me) mean by this that if an observer performs an experiment, while this observer is himself in a superposition, we have to regard the outcome of the experiment as absolute (i.e. not in a superposition) because it was made by a (conscious) observer.
In my opinion, both "Absoluteness of Observed Events" and the equivalent from the layman's article, "When someone observes an event happening, it really happened", are disingenuous and confusing ways of talking about observers who are in a superposition. We have crossed over from "quantum mechanics is weird" to "these superficially intuitive but clearly false assumptions about quantum mechanics are weird".
(by the way, I read the article here: https://theconversation.com/a-new-quantum-paradox-throws-the... because that site doesn't think I'm a robot and then redirects me to the homepage after filling out the captcha)