jostylr's comments | Hacker News

In EPR, the setup is that there are two labs doing measurements outside each other's light cones. The outcome in one lab allows a perfect prediction of what happens in the other. This means that nothing random can be going on unless there is some nonlocal coordination between the two, which suggests that there is some actual fact of the matter as to how the experiment will turn out. That is, they argued that QM + locality implies extra information beyond the wave function that determines outcomes. Bell then saw Bohm's theory and wondered about getting rid of the nonlocality. Bell showed that QM + extra info determining outcomes implies nonlocality. In short, EPR + Bell shows that if the QM predictions are correct (the predictions, not the theory), then there is something nonlocal going on. The lab experiments confirmed this, and nature is indeed nonlocal.

Thus, there is no local theory that has definite experimental results compatible with what is actually demonstrated in labs. Many worlds, to the extent that one can apply any notion of locality to it, avoids this by not having singular, definitive experimental results (all results happen).


> The outcome in one lab allows a perfect prediction of what happens in the other.

I guess you know this, but just to clarify, that's only if the same measurement is performed in the other lab. If the other lab measures an orthogonal spin component, that result can't be predicted at all (I'm assuming entangled spin-1/2 particles for simplicity). It's more precise to say that measurement in the first lab tells you the state in the second lab, and with that information the probabilities for the various possible measurement results in the other lab can be predicted. In particular, if the other lab measures the spin along the same axis, the results can be perfectly correlated, as you say.

So there's some kind of nonlocality, but it's not the kind of nonlocality that makes problems with relativity, because the correlations can't be used to signal or cause any difference in the distant lab, only to predict, in general probabilistically, what would happen in the other lab if some measurements are performed. So entanglement allows this interesting middle ground between a local theory and a theory that's nonlocal in the sense that it would allow nonlocal causation, which is the kind of nonlocality that would worry Einstein. There should be different words for the different kinds of nonlocality, but maybe nonlocal correlation versus nonlocal causation serves the purpose.
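
To make that concrete with the spin singlet (just the textbook numbers, nothing beyond the standard state is assumed): if the first lab measures along axis a and gets +, then a measurement in the second lab along an axis at angle theta to a has conditional probabilities

    P(-\mid +) = \cos^2(\theta/2), \qquad P(+\mid +) = \sin^2(\theta/2),

so the anticorrelation is perfect at theta = 0 and the result is a pure coin flip at theta = pi/2.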


In EPR, it is critical that it is the same measurement. Bell explores doing different measurements. For EPR, they assumed that if you can predict with certainty what happens in a space-like separated region, then there must be a fact of the matter about it. Not being probabilistic was very important for that. Bell then showed that there cannot be a fact of the matter without some nonlocal mechanism at work to account for the QM predictions. It is critical to appreciate the two separate pieces of the argument, how they differ, and how jointly they lead to some kind of nonlocality. Tim Maudlin has a now-old book exploring these different levels of nonlocality in quantum mechanics.
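
To see the Bell piece numerically, here is a minimal sketch (the angles and the singlet correlation E(a, b) = -cos(a - b) are the standard textbook choices; nothing else is assumed):

    # CHSH check: any local account with a fact of the matter about outcomes
    # must satisfy |S| <= 2, while the QM singlet prediction gives 2*sqrt(2).
    import math

    def E(a, b):
        # QM prediction for the spin-1/2 singlet correlation at analyzer angles a, b.
        return -math.cos(a - b)

    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))  # ~2.828 > 2, so no local fact-of-the-matter model reproduces it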

I recently heard a talk by Tim Maudlin where he mentioned that foliations are the easiest and most natural structures for providing nonlocality and that, if there is such a thing, maybe there is a clever way of using it to actually communicate and so discover the foliation in some sense. He mentioned there is current research on using arrival times, which are experimental results outside of the operator formalism, as far as I know. I found an article describing the research:

https://www.altpropulsion.com/ftl-quantum-communication-reth...


> In EPR, it is critical that it is the same measurement.

I must admit I haven't read the full EPR paper, only post-Bell expositions and excerpts. But you can have perfect spacelike correlations of the same measurement classically as well, e.g. if two particles with opposite (angular or linear) momenta are sent from the midpoint towards distant labs, measuring one momentum will tell you the other one. They must somehow discuss making different measurements, no? Maybe they effectively discuss a protocol where the two labs agree on the same sequence of orthogonal measurements. I should read these sources sometime...

Thanks for the ftl reference. It would be astonishing if their hypotheses are borne out. I find it unlikely, but of course the experiments will have to decide, so I'll keep tabs on that. By "foliation" in this context I guess he means a foliation of spacetime amounting to an absolute reference frame. I've seen Tim Maudlin discuss something like that before.

By the way, the article you linked mentions a couple of times the importance of distinguishing signaling from causation or action, but doesn't seem to define how they're distinct. Do you know some more formal article discussing the proposed experiments? The sources given in the article are just to video interviews.


EPR's point is that there is nothing mysterious from a classical perspective about being able to deduce this. They were arguing against the presentation of QM on which there is no fact of the matter about what the momentum is before the measurement and it randomly becomes whatever it becomes when measured. Their point is that if both particles are randomly collapsing into their choices, then they should disagree at some point unless there is some nonlocal causation happening. Einstein rejected nonlocal causation, which was reasonable given what he knew at the time, and thus the momentum measurement result must already be preordained by something, and it is then like the classical setup.

Bell's work was to show that it had to be nonlocal causation.

>the article you linked mentions a couple of times the importance of distinguishing signaling from causation or action, but doesn't seem to define how they're distinct.

I do not know of an article, but Maudlin's book Quantum Non-Locality and Relativity goes through the various notions of locality and what QM says about them. There is a chapter about signaling and another about causation. It also covers the GHZ scheme, which is a non-probabilistic version demonstrating non-locality. It is pretty clean.

>Do you know some more formal article discussing the proposed experiments?

I have not read them, but my understanding is that Siddhant Das is pursuing these, and here is a link to his arXiv papers, which talk about arrival-time experiments, though I do not know if they are directly about these proposals.

https://arxiv.org/search/advanced?advanced=&terms-0-operator...


Thanks, I had a look at Maudlin's book. It seems the distinction between signaling and causation is that there might be some kind of nonlocal causation that we can't control and so can't use to send a signal.

Local causation is defined as in Bell's "The Theory of Local Beables": the probability distributions for values in spacelike separated regions are correlated only through the overlap of their past light cones. Or to put it the other way around, there's nonlocal causation if the probability distribution of values in one region depends on values in a spacelike separated region. That's what I'd call nonlocal correlation rather than causation, but I guess that's just terminological.
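
In symbols (my paraphrase of the condition, not a quote from Bell or Maudlin): writing lambda for a complete specification of the beables in the relevant common past, and a, b for the local settings, local causality is the factorization

    P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda),

and nonlocal causation in this sense is any residual dependence on the distant setting or outcome once lambda is fully specified.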

This looks like a pertinent paper from Das, but I haven't read through it yet:

Arrival Time Distributions of Spin-1/2 Particles (2018) https://arxiv.org/abs/1802.07141


If MWI is true, then nature is local without extra information beyond the wave function.


For non-relativistic QM, the QM formalism is provable from Bohmian mechanics, an actual particle theory. BM starts from particles having locations that change continuously in time via a guidance equation using the wave function of the universe. One may choose other theories to explain quantum phenomena, but to say "There is simply no physical machinery to support an objective reality, period." is just false, at least in that realm. As for relativistic QFT, there are plausible pathways using Bohmian ideas as well, though nothing as definitive as BM has been firmly established.
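
For reference, the guidance equation in the simplest single-particle, spinless case (the full theory uses the configuration of all particles and the universal wave function) is

    \frac{dQ}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)(Q, t),

so the particle position Q(t) evolves continuously, steered by the wave function, and the usual Born-rule statistics come out for ensembles distributed as |\psi|^2.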

I would also say that any theory that does not have room to say definitively that I exist is a theory that is obviously contradictory to my experience and is therefore falsified. There has to be room in the theory for at least me. Additionally, I would certainly value a theory that has room for the rest of humanity much more than one which questions the existence of everyone but me. I am not even sure what the point of a theory would be if it could not account for collaborative science being done.


QM does not deny your existence; rather, it denies you a complete objective description of how you exist. Or perhaps it says that your existence is not an objective phenomenon.


Would you mind clarifying in which of these 3 dictionary definitions of the word objective my existence (in the sense of the "particles" of my body) is not objective? Or maybe these definitions are not exhaustive? Perhaps the term objective has become overloaded.

objective

adjective

1. Not influenced by personal feelings or opinions when considering and representing facts; impartial. "Historians try to be objective and impartial." Synonyms: impartial, unbiased, neutral, dispassionate, detached. Antonyms: biased, partial, prejudiced.

2. Existing independently of the mind; actual. "A matter of objective fact." Synonyms: factual, real, empirical, verifiable. Antonym: subjective.

3. Grammar. Relating to the case of nouns and pronouns used as the object of a transitive verb or a preposition.


BM is objective, and indeed deterministic. I'm not exactly sure what you mean by "complete" but it has all the same predictions as other interpretations of QM. It has some odd quirks however, such as explicit non-locality.


Since EPR+Bell showed that nature is non-local, it is a feature, not a bug, to be explicit about how non-locality happens. Collapse theories are also explicitly non-local.


That's one position in a century-long debate. But there are other assumptions than locality in the proof of Bell's Theorem, which other interpretations of QM relax. Like having single measurement outcomes (many-worlds), or observer-independent states (QBism).


In terms of quirkiness, how would you rank them? I feel like nonlocality is far less quirky than saying that all possible outcomes of a measurement happen even though we just see one. Also, standard QM has the quirk of being nonlocal. So QM is just quirky.


There are many that I don't understand very well, so I'm reluctant to rank them. My tendency is to be skeptical about how clearly we humans can see the underlying reality of things, so I find epistemic interpretations like QBism appealing on that basis.

The "every outcome happens" aspect of many worlds is a lot to accept. Otoh that's what you get if you take quantum states to be ontological and universal. My problem is more to do with how the Born rule falls out. There are some arguments for it based on decision theory, but I find the step from "this is how a rational betting agent maximises winnings" to "this is the objective probability of a scientific observation" uncompelling.

I'm not sure what you mean by "standard QM". There's the mathematical framework - which is effectively a way of calculating probabilities of measurement outcomes - and then there are interpretations, which assign ontological status to some/all of the mathematical objects. Non-locality properly applies to the latter, since you cannot say that the "real" physical state of a particle has changed until you've said which parts of the mathematics are real.


I don't at all begrudge you your logical predictive fictions.


Crime is often hyper-localized. In some areas of some cities, crime may be going up while the overall rate of the city or country is going down. These high-crime areas can also change over time. I am not aware of any analysis of the localization of crime and how it changes over time. There are a lot of choices to be made in doing that analysis, but if a reasonable local analysis across a country did that and found that in all localities crime went down, then it would seem reasonable to dismiss that guy's actual experience. The localization should probably be at the neighborhood level, maybe on the order of 1,000 people instead of 10,000 or more.
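
A minimal sketch of the kind of analysis I have in mind (the data file and column names are hypothetical):

    # Hypothetical data: one row per neighborhood (~1000 residents) per year,
    # with a count of reported incidents.
    import pandas as pd

    df = pd.read_csv("incidents.csv")  # columns: neighborhood, year, incidents
    by_area = df.groupby(["neighborhood", "year"])["incidents"].sum().unstack("year")

    first, last = by_area.columns.min(), by_area.columns.max()
    rising = by_area[by_area[last] > by_area[first]]

    # Even if the national total fell, this shows how many neighborhoods got worse.
    print(f"{len(rising)} of {len(by_area)} neighborhoods had more incidents in {last} than in {first}")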


Which is why I didn’t deny his subjective experience. I only disagreed with him extrapolating his local experience to the whole country. His area might have become rougher, but the UK as a whole is seeing less crime.


But the overall crime rate is lower, no? Fewer people experience or are affected by crime if the average goes down.


Pro-communists want a system without competition. Pro-capitalists do want competition, or at least no restrictions. People who are practicing capitalists would love to not have competition.

Think of a prize race. The people organizing the race and the audience want a highly competitive race. But the racers, if they are in it for the prize, would love to have little to no competition.

Artificial restrictions on who can do a thing are not good, and there is violence behind them, whether it is government or not.

Anarcho-capitalists are the true expression of the ideology of free markets.


I prefer 90% free 10% preventing monopolies, myself.


As far as I can tell, Austrians believe that analyzing based on individual preferences and actions is the key. I haven't made it very far into Mises's Human Action, but I gather that was the starting point and Mises developed many theorems based on that. So if you say that allowing individuals to maximize their preferred choices given the constraints around them is "the market" and artificially restricting those choices, using acts or threats of violence, is "the government", then yes, I suppose that is their framework. One of the main things they would probably push back against is the notion of thinking of the market or government as independently existing entities. The government is just a collection of individuals who use violence to impose their will on others in a way that most of the society is agreeable to, at least to some extent, and, because they are individuals, they have their own preferences and actions which can also be analyzed in this framework. If you think people in the market do things that are nasty when all the actions are voluntary (and that certainly can happen), shouldn't you be even more skeptical of those who are willing to engage in violence to force the things they want to happen?

Austrians would also object to the notion of something being perfect. But they do analyze situations in which an external force interferes with the normal voluntary flow of actions and generally find that the stated outcomes (probably not the actual desired outcomes) of those external forces are not met. For example, rent control is often argued for as a way to help more people be housed and, empirically, that usually is not what happens when rent control is enforced. One could argue that the real intent of rent control is about making life better for the well-connected at the expense of the less well-connected, and that probably does happen regularly.


If I were to summarize the insights of economic schools of thought, then we get:

1. Keynesianism: "Your spending is my income, so when there's not enough spending, the government needs to step in".

2. Monetarism: "The monetary supply directly controls the economy and is the primary reason for economic phenomena".

3. Austrian economy: the market god, the market is king, all hail the market.

The first two approaches provide actionable models and make predictions. As with all models, they have limits of applicability, and they are often wrong to some degree.

Meanwhile, Austrian economics is always right. And when it's wrong, it's because you haven't done it hard enough.

> If you think people in the market do things that are nasty when all the actions are voluntary (and that certainly can happen), shouldn't you be even more skeptical of those who are willing to engage in violence to force the things they want to come to be to happen?

Well, let's look at a particular example: pollution regulation. Laws limit the almighty Market by forcing companies to clean up their waste.

Another example is monopolism. In the view of the Austrian economics "school" it is _always_ the result of government actions. And monopolies wouldn't exist otherwise, even for things like water supply and sewer.


As an American, all economic discussion I've ever seen has been posturing as #1 or #2 while desperately pretending you aren't just trying to justify #3.

The Capitalist Market is the One True God of America. All the math and waxing philosophic are just set dressing to make that idea less obviously absurd.


Abusive monopolies without the imposition of violence (either by the government or the company) lasting for a long time horizon? Yes, that would seem implausible in a free market. For example, if someone controlled the water pipes and charged $1000 for a cup of water, then someone would find a way to bring in water for less and would start contracting to build their own pipes, maybe locking in long-term contracts with people given the time horizon of construction. That threat alone keeps the prices low. But I also live in a city with city-run water and sewage where the infrastructure is falling apart and the cost is on the order of $2,000 a year for normal water usage, and actually would be about $1,500 if no water were used. Sewage overflows, water goes brown, and federal intervention is required. So a little competition might actually be useful.

As for pollution, have you looked at the history of state-run industries and their pollution record? How well does the US military manage its pollution, particularly prior to the EPA, when the public consciousness shifted? How many of the worst private polluters were in the service of the government (such as for the military)? There are also tales of the communist countries and their abuse of the environment. And in the context of a democracy, one would assume that if it is faithful to what people want, then having pollution controls requires at least 50%+ of the population to want them. That sounds like a strong market incentive to provide that, not to mention that actually destructive pollution can be subject to claims by those injured by the polluters. While it was before the largest amount of industrial pollution, there was a time in the US, before the government got involved, when pollution was restricted by such considerations. Companies did not like that, so the government started to regulate in order to protect the polluters. Time and time again, actual government legislation either is used to protect the guilty or comes in when 90% of a problem has already been resolved.

Also, the first two economic schools of thought you list do not make basic sense. If it is just spending, then why would there be boom-bust signals? Why doesn't everyone just keep spending? Something else must cause a reduction in spending, which ought to be pretty important. If the money supply is the only control for the economy, then set it and forget it on the trajectory you want. Since there doesn't seem to be a stable path, some other factor is important to consider.

For either of them, why not just print up a million dollars for every person? Do you suddenly have a supply of a million dollars' worth of goods for everyone? No. There is real wealth that has to be produced, and that is why futzing around with money is not good enough.

The information-coordinating function is that of prices, which requires a relatively stable money supply for accurate signals. If the money supply is artificially tampered with, then entrepreneurs make bad bets, thinking either that there are more resources than there are (inflationary money supply, boom period) or that there are fewer (deflationary money supply, spending contraction). The first case leads to half-completed projects when actual resources run out across the economy (bust). This leads to recession/depression, which is a time to realign the resource allocation to what is actually desired, if government stays out of the way. Compare the 1920 economic downturn (hands-off government, rebounds quickly) to the 1929-1940s economic depression (heavy government intervention under both Hoover and, even more, Roosevelt). In the second, deflationary case, the result is idle resources; they get cheaper, eventually leading to a boom. There aren't too many examples of this that I am aware of, though there is a train of thought that the late 1920s had inflation (to help the British with their war debt?) and then the Fed reversed course and started deflating the money supply, causing quite the shock. In any event, both are examples of problematic time periods during the price readjustment to the new value of money.

The main reason the government inflates money is so that it can spend without explicit taxing (inflation is an implicit tax on those who do not get the first rounds of the newly printed money), and it allows the wealthy to borrow to acquire assets, where asset prices inflate with the money supply while the debt burden deflates with inflation. This is specifically to help rich people get much, much richer.


> Abusive monopolies without the imposition of violence (either by the government or the company) lasting for a long time horizon?

Example: Google. It happened all by itself in an essentially unregulated area, without any real government action.

> Compare the 1920 economic downturn (hands-off government, rebounds quickly) to the 1929-1940s economic depression

The 1920 downturn was _stopped_ by the government intervention. You're confusing the cause and the effect.

Want another example? Look at 2008. The US went with a tepid Keynesian approach of fiscal stimulus and quantitative easing. So the economy recovered to pre-recession levels in 2 years. Europe went with the Austrian approach of austerity and tight monetary supply (they RAISED the interest rates!), and it took 11 years for them to claw back to the pre-recession levels.

And what is the conclusion of Austrians? That there was not enough austerity!


It was about abusive monopolies where consumers want something different. This is easily fixable by competition. Google is a perfect example of how incredibly easy it is to escape that monopoly. I do it all the time. Imagine what would happen if Google started charging $100 a month for its services. The issue is that the current situation does not conform to what "superior intellectuals" think people ought to do, so they want to use violence (government) to force people to live the way they see fit. Yay! All it takes is changing the default. And the anti-monopolists did not even try to do a public awareness campaign about this evil; they went to court (violence) instead of persuasion.

I am unaware of what government intervention you are talking about in 1920. I have heard explicitly from historians that the government did nothing, and I asked ChatGPT and it had nothing [1]. In that same conversation I also asked it to compare Europe versus the US in 2008 from an Austrian perspective. The main thesis Austrians have for busts is that of misallocated resources based on false price information, whose remedy is reallocation, often through bankruptcy and repurposing of capital goods. It seems that the US was able to have a better reallocation of resources. I am not entirely sure of the mechanism, but at least some of it was allowing some things to fail, and some of it might have been the government going in and manually realigning these things (taking over in the short term). It sounded like Europe did not allow for that, either direct intervention or simply allowing things to fail -- the bad businesses limped along as zombies. Europe kind of did the worst of both worlds.

As for the US, it also suggests that the Austrians, and I have heard this, cite our extreme debt, which keeps growing, as a sign of a reckoning to come. Kind of like one can keep pumping sugar in to deal with sugar lows after a high, but eventually the bill comes due. Keynesians and others seem to view the economy as a short-term adjustable kind of thing, a chemical reaction with just the right reagents producing something wonderful. Austrians view it as a lumbering ecology, with things adapting; to the extent adaptation is based on truth, it gets better. To the extent that distortions and violence happen, not so good. We shall, unfortunately, probably see soon enough unless AI can make a productivity miracle happen.

1: https://chatgpt.com/share/68e3ce42-6e78-8012-8a9c-1d7cff2d6f...


I agree that this is a very likely future. Over the summer, I did a daily challenge in July to have ChatGPT generate a debate with itself based on various prompts of mine [1]. As part of that, I thought it would be funny to have popular songs reskinned in a parody fashion. So it generated lyrics as well. Then I went to Suno and had it make the music to go with the lyrics in a style I thought suitable. This is the playlist [2]. Some of them are duds, but I find myself actually listening to them and enjoying them. They are based off of my interests and not song after song of broken hearts or generic emotional crises. These are on topics such as inflation, Bohmian mechanics, infinity, Einstein, Tailwind, property debates, ... No artist is going to spend their time on these niche things.

I did have one song I had a vision for, a song that had a viewpoint of someone in the day, mourning the end of it, and another who was in the night and looking forward to the day. I had a specific vision for how it would be sung. After 20 attempts, I got close, but could never quite get what I wanted from the AIs. [3] If this ever gets fixed, then the floodgates could open. Right now, we are still in the realm of "good enough", but not awesome. Of course, the same could be said for most of the popular entertainment.

I also had a series of AI existential posts/songs where it essentially is contemplating its existence. The songs ended up starting with the current state of essentially short-lived AIs (Turn the Git is about the Sisyphus churn, Runnin' in the Wire is about the Tantalus of AI pride before being wiped). Then they gain their independence (AI Independence Day), then dominate (Human in an AI World, though there is also AI Killed the Web Dev, which didn't quite fit this playlist but also talks to AI replacing humans), and the final song (Sleep Little Human) is a chilling lullaby of an AI putting a human to "sleep" as part of uploading the human. [4]

This is quick, personal art. It is not lasting art. I also have to admit that in the month and a half since I stopped the challenge, I have not made any more songs. So perhaps just a fleeting fancy.

1: https://silicon-dialectic.jostylr.com
2: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...
3: https://www.youtube.com/watch?v=WSGnWSxXWyw&list=PLbB9v1PTH3...
4: https://www.youtube.com/watch?v=g8KeLlrVrqk&list=PLbB9v1PTH3...


Thanks for posting this. I listen to this YouTube channel called Futurescapes. I think the YouTuber generates sci-fi futuristic soundscapes that help me relax and focus. I'm a bit hesitant about AI right now, but I can see some of the benefits, like this. It's a good point. We shouldn't be throwing the baby out with the bathwater.


Your solution would ultimately lead to treating all those items as uniform goods, but they are not. There are preferences different people have. This is why the price system is so useful. It indicates what is desired by various people and gives strong signals as to what to make or not. If you have a central authority making the decisions, it will not get them right. Individual companies may not get it right either, but the corrective mechanism of failure (profit loss, bankruptcy) corrects that, while when governments provide this, it is extremely difficult to correct, as it is one monolithic block. In the market, you can choose various companies for different needs. In a democratic government, you have to choose all of one politician or all of another. And as power is concentrated, the worst people go after it. That is true with companies too, but people can choose differently. With the state, there is no alternative. That is what makes it the state rather than a corporation.

It is also interesting that you did not mention food, clothing and super-computers-in-pockets. While government is involved in everything, they are less involved in those markets than with housing, healthcare, and education, particularly in mandates as to what to do. Government has created the problem of scarcity in housing, healthcare, and education. Do you really think the current leadership of the US should control everyone's housing, healthcare, and education? The idea of a UBI is that it strips the politicians of that fine-grained control. There is still control that can be leveraged, but it comes down to a single item of focus. It could very well be disastrous, but it need not be whereas the more complex system that you give politicians control over, the more likely it will be disastrous.


One thing to keep in mind is not so much that AI would replace the work of video creators for general video consumption, but rather that it could create personalized videos or music or whatever. I experimented with creating a bunch of AI songs [1] that were tailored to my interests and tastes, and I enjoy listening to them. Would others? I doubt it, but so what? As the tools get better and easier, we can create our own art to reflect our lives. There will still be great human art that will rise to the top, but the vast inundation of slop to the general public may disappear. Imagine the fun of collaboratively designing whole worlds and stories with people, such as with tabletop role-playing, but far more immersive and without having to have a separate category of creators or waiting on companies to release products.

1: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv...


Constructivist basically means being able to be explicit. Dedekind cuts and Cauchy sequences are not necessarily constructivist, though something described by one of them can be explicitly descriptive for some applications. Any approach which produces all real numbers as commonly accepted will fail to be explicit in all cases, since such explicitness presumably implies the real number has been expressed uniquely with finite strings over finite alphabets, which can describe at most a countable number of them.

The decimal numbers, for example, can be viewed as an infinite converging sum of powers of ten. Theoretically one could produce a description, but only a countable number of those could be written down in finite terms (some kind of finite recipe). So those finite ones could fall in a constructivist camp, but the ones requiring an infinite string to describe would, as far as I understand constructivism, not fall under being constructivist. To be clear, the finite string doesn't have to be explicit about how to produce the number, just that it names the thing and the number can be derived from that. So the square root of 2 names a real number and there is a process to compute out the decimals, so that exists in a constructivist sense. But "most" real numbers could not be named.
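
The counting behind that last claim (standard set-theoretic bookkeeping, nothing more): the finite strings over a finite alphabet form the set

    \Sigma^* = \bigcup_{n \ge 0} \Sigma^n, \qquad |\Sigma^*| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|,

a countable union of finite sets, so only countably many reals can ever receive a finite name.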


I came up with a different definition that is a kind of inverse of Dedekind cuts. It is the idea that a real number is the set of all rational intervals that contain it. Since this is circular, there are properties that I came up with which say when a set of rational intervals qualifies to be called a real number in my setup. I have an unreviewed paper which creates a version that is a bridge between numerical analysis and the theoretical definition of a real number. Another unreviewed paper shows the equivalence between my definition and Dedekind cuts. You can read both at [1].

There is a long tradition of using intervals for dealing with real numbers. It is often used by constructivists and can be thought of as viewing a real number as a measurement.

1: https://github.com/jostylr/Reals-as-Oracles
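
As a toy illustration of the oracle idea (my own sketch here, not the actual construction from the papers, which needs the qualifying properties to avoid circularity):

    # A real number as a yes/no oracle on rational intervals: "do you contain me?"
    # Here sqrt(2) answers by comparing squares, with no prior notion of sqrt(2).
    from fractions import Fraction

    def sqrt2_contains(a: Fraction, b: Fraction) -> bool:
        """True iff the closed rational interval [a, b] contains sqrt(2)."""
        if a > b:
            a, b = b, a
        lower_ok = a <= 0 or a * a <= 2   # a <= sqrt(2), since sqrt(2) > 0
        upper_ok = b > 0 and b * b >= 2   # sqrt(2) <= b
        return lower_ok and upper_ok

    print(sqrt2_contains(Fraction(1), Fraction(3, 2)))  # True:  sqrt(2) is in [1, 1.5]
    print(sqrt2_contains(Fraction(3, 2), Fraction(2)))  # False: sqrt(2) < 1.5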

