Reading these articles about light and quantum theory is always a frustrating experience for me. They are all invariably hand-wavy and never seem to explain things from first-principles. Does anyone have a good reference for a book that does justice to the subject?
On a related note, Simon Singh's book on the Big Bang Theory (http://www.amazon.com/Big-Bang-Universe-Simon-Singh/dp/00071...) was the first book I read that explored the Big Bang in great detail without being hand wavy. I wish he'd write a book dealing with quantum theory!
>They are all invariably hand-wavy and never seem to explain things from first-principles.
For first principles, you'll need to talk to the designer - scientists only have runtime profilers and guesswork. Even the guesswork isn't complete - there's no unified theory (https://en.wikipedia.org/wiki/Theory_of_everything) so it is kind of weird to ask everything to trickle down from first principles, IMO.
I think this article does a decent job of trying to describe the first principles we hope we can use to find why the speed of light in vacuum is what it is, but the reality is we still don't know. Plus, the article format doesn't allow for detailed explanation that a book would.
I'm not a physicist, but from my understanding, light doesn't move at any speed from its own point of view. It's actually instant. From creation to absorption (or dissolution?), there's no delay from its point of view. Fast or slow has no meaning for light. We view things as having speed because mass slows things down from being instant. Am I correct?
(Disclaimer: I am a physicist, but this is not my area of expertise at all. Happy to be corrected here).
Relativity is, like quantum mechanics, a fairly simple concept with incredibly complex implications and edge cases.
The easiest intuitive explanation of relative time I have found is the following:
- Assume our space time/universe is a 4D vector space. Three spatial dimensions, one time.
- Everything has a 4D vector which describes movement through this 4D space-time {x, y, z, t}
- The length/magnitude of this vector is fixed at the speed of light, c
- Normalize the dimensions of our 4D space as follows: if we are not moving at all (x = y = z = 0) the vector points entirely in the time direction, so our vector is {0, 0, 0, c} and, in our frame of reference, time moves as normal - "one second per second".
- Now, as we go faster and faster this vector no longer points purely in time. In the extreme case where the vector sum of the x, y, z (3D spatial component) is c, the time component t will be zero. This is why our photon does not experience time passing - it moves through space at c, so cannot move through time at all.
For the 3D velocities we normally experience we do not move in space fast enough to affect motion through time appreciably. Even the GPS corrections for relativity mostly arise due to Earth's gravity well distorting local space-time geometry, although their high velocity does contribute.
Objects with mass cannot be accelerated to c under our current understanding of relativity because kinetic energy diverges (it grows with the Lorentz factor 1/sqrt(1 - v^2/c^2)); reaching the speed of light would require infinite energy.
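To make the picture above concrete, here's a toy numerical sketch (Python, values purely illustrative) of the fixed-magnitude idea: whatever spatial speed you have, the leftover "speed through time" is sqrt(c^2 - v^2), so your clock ticks at a fraction sqrt(1 - v^2/c^2) of the rest rate.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def clock_rate(v):
        """Fraction of 'one second per second' experienced at spatial speed v,
        assuming the 4-velocity magnitude is pinned at c."""
        time_axis_speed = math.sqrt(C**2 - v**2)  # what's left over for the time axis
        return time_axis_speed / C

    for label, v in [("walking, 1.4 m/s", 1.4),
                     ("GPS satellite, ~3.9 km/s", 3.9e3),
                     ("half light speed", 0.5 * C),
                     ("99.99% light speed", 0.9999 * C)]:
        print(f"{label:26s} clock rate = {clock_rate(v):.12f}")
    # As v -> c the rate -> 0: the photon case, where no proper time passes.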
I'm not a physicist either. As a student, some years ago, I asked my physics teacher whether this conception was right, but I only got pretty elusive answers. It turns out that general relativity and quantum mechanics are not 100% coherent on this point. I suggest you read this short, a-bit-technical-but-not-too-much essay on the problem: http://quasars.org/photon.txt
> "Observations of astronomical bodies under gravity do not show this decrease, and so far there is no sign that G varies in space"
Correct me if I'm wrong, but observations do show there's a significant problem with the equations, which led to the invention of "dark matter" and "dark energy".
Aside from that, is there any connection between the universal constants of physics and mathematical constants? It would be nice to be able to say, "The Universe is the way it is because circles."
Dark matter and dark energy are placeholders for different problems. Dark energy is tied to the cosmological constant, a "value" that Einstein introduced to make his general theory of relativity work with what was back then accepted as a static universe; basically it's the energy density of the vacuum of space.
This constant was then "abandoned" after Hubble discovered that the universe is expanding; it was still there, but its value was (assumed to be) zero.
When the scientific consensus reached the point of agreeing that the universe's expansion is accelerating, the cosmological constant came back into play, because with it you can explain the accelerated expansion. The problem is that we can't account for the additional energy needed to drive the acceleration, so dark energy was introduced as a concept.
Dark matter was introduced to solve another issue, one that can be seen on much smaller (our own galaxy) scales: if we look at the motion of stars and the rotation of galaxies, we seem to be lacking mass, and quite a bit of it (although not nearly as much as dark energy). So again a placeholder device was introduced, in the form of dark matter.
Now, normally you would say that when you need to introduce additional doodads to account for something in your theory, not in one instance but in two and maybe in many others, your theory sucks.
The problem is that we've sorta proven that gravity works, for the most part, as we think it does. And if we go into weird stuff like localized gravitational constants, well, they don't account for the observations that required dark matter or dark energy, and they kinda break the universe.
The force of gravity, and hence the gravitational constant, has been shown to be "correct" from the smallest (even atomic-level) observations and calculations up to macro and universal scales. While it's true that we might be wrong, if we are, we are off not by a mile but probably by several parsecs; it would mean that relativity and the other currently accepted gravitational theories are more wrong than the "Ether Theory" was.
They are locally correct, just like Newton's theories were correct up until we observed Mercury.
That being said I'm 99% in agreement with mainstream science: Einstein's work just makes too much sense to be wholly incorrect. Where I'm in strong disagreement with mainstream science is outright hacks where they turn dark matter/energy into real things (e.g. WIMPs). Non-constant G is a more worthwhile avenue than that IMO.
The curious thing is the trouble spots have been mapped. There are filaments and voids where the gravitational effects of unseen matter/stuff are plainly observable: something is there pulling on stuff. MOND is not enough to answer that.
I'm not saying that MOND is correct at all; I'm saying that classical Newtonian dynamics was seen as correct at one point, just like SR is seen as correct now. My opinion is that it's highly unlikely that SR is incorrect (or partially incorrect); however, I do see it as a remote possibility.
> There are filaments and voids where the gravitational effects of unseen matter/stuff are plainly observable
Another way of looking at it is a map of the sum of things that we don't know. Again, using Mercury as an example, we were able to calculate the [sum of] force that was "wrong."
The dark matter map is incredibly cool because it's the same thing done on an unbelievably massive scale. There were certainly people who believed that this unaccounted-for force on Mercury was a single force with a trivial explanation. We now know that this was not the case.
That is, in my opinion, the same mistake we are now making - simply because we can calculate a map of the effect does not mean that it's actually a map of real stuff.
Although I'm in disagreement with MOND, I do think it does one thing right: the μ function, which effectively says "this is something that we don't know." In the same way, it is my opinion that dark matter and dark energy are merely unknown functions and not "real matter" and "real energy." I certainly think that Turner and Zwicky had the same thought pattern when they coined the terms.
MOND is quite interesting indeed (I've been to a couple of Mordechai lectures on this subject); the only problem is that it can't be used to construct a functional universe: under TeVeS, stable stars couldn't form.
Sadly Jacob Bekenstein died just a month ago and I'm not sure how much work will be done on MOND now.
They also work on universal scales, again for the most part.
The universe having more mass or more energy makes more sense than random gravity, or than some completely un-understood phenomenon on the same scale as space-time curvature; the latter mostly because we should then also observe it locally.
Non-constant G just breaks the universe. It's also not exactly "non-constant" as far as dark energy goes; it's still constant, just at a different value.
But considering that we calculate the value correctly on a local scale, it makes very little sense that it would suddenly change, especially considering that it would have to change by the same amount everywhere, including within our galaxy.
Dark matter is there to fix another, very different picture, because without it you'd have an almost completely random gravitational constant even on small scales (star systems and galaxies), and this simply can't be the case.
Even without going into relativity, if you have a classical system (stars) whose visible mass doesn't account for the motion, you are missing mass somewhere.
Now, there are other attempts to explain some of these effects, e.g. the gravitational waves from supermassive black holes slamming into each other. New experiments, predominantly those measuring pulsars, will attempt to discover and measure those with much greater accuracy, which might account for some of the randomness, but won't account for the fact that we need more mass and energy in the universe to make things work.
Science has always used placeholders and mechanisms for things that should be there but weren't yet discovered. This is the great thing about good science: it's predictive, and more often than not it works, even when you don't understand the entire picture.
A great example of this is the periodic table. It was constructed by pure observation of the attributes of various elements; it predated the standard model and the discovery of protons, neutrons, isotopes and the like. However, it was so correct that it didn't matter, because the principle worked and the extrapolated attributes of the elements were enough to build a system that didn't have to be changed once we figured out how all the internal gadgets work.
Now, the precession of Mercury is actually a great example: Newtonian laws of motion account for just under 95% of Mercury's perihelion precession; relativity (the fact that mass curves space-time) accounts for the missing 5%.
Before we understood relativity we had planet Vulcan which was the doodad that science constructed to explain the orbital mechanics of Mercury.
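For the curious, that "missing 5%" piece can be checked on the back of an envelope: the standard GR formula for the extra perihelion shift, Δφ = 6πGM/(a(1−e²)c²) per orbit, comes out at roughly 43 arcseconds per century for Mercury. A quick sketch in Python (standard published orbital values, rounded):

    import math

    GM_SUN = 1.32712440018e20   # m^3/s^2, gravitational parameter of the Sun
    C      = 299_792_458.0      # m/s
    A      = 5.7909e10          # m, Mercury's semi-major axis
    E      = 0.2056             # Mercury's orbital eccentricity
    PERIOD = 87.969             # days, Mercury's orbital period

    dphi = 6 * math.pi * GM_SUN / (A * (1 - E**2) * C**2)   # radians per orbit
    orbits_per_century = 36525.0 / PERIOD
    arcsec_per_century = math.degrees(dphi * orbits_per_century) * 3600

    print(f"{arcsec_per_century:.1f} arcsec/century")   # ~43, the part Newton can't explain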
The big difference between Mercury and dark matter/energy is that we need to find something that works on all scales: the need for dark matter and dark energy comes from relatively small scales as well as very big ones. Now, there could be some other force or phenomenon we completely don't understand, but you also can't go looking for something completely unknown. You would be hard pressed to look at something like the orbit of Mercury and come up with relativity as the explanation; that just won't work, you'll never end up with that as a result. Dark matter and dark energy, on the other hand, are much wider placeholders than a Planet X.
On top of that, dark matter is not only needed to make relativity fit the observations, it is also predicted by other theories, mainly Big Bang nucleosynthesis. It may also be needed to account for (or could explain) various observations of gravitational microlensing, since MACHOs don't seem to be able to account for all of those.
And as far as WIMPs go, well, they aren't hacks; these are predicted by quite a few nucleosynthesis theories that go beyond the standard model. We already have weakly interacting particles, but they all tend to be extremely light; however, mass doesn't seem to play a role in their nature, so WIMPs could very well exist. People didn't think neutrinos were real either.
Thanks for the very detailed response - very informative. The core of my argument is that we really need more of this:
> Now there are other attempts to explain some of these effects e.g. the gravitational waves from super massive black holes [...]
We just need to be looking at other options. It seems as though we are spending a lot of thinking time on one solution and it's frustration with that which likely leads me to words such as "hacks."
> Mercury
I wasn't suggesting it was the same thing, merely making an analogy: a somewhat similar type of mistake we made in the past. WIMPs could be real, but they could just be the modern-day Vulcan. I watched a great talk by Prof. Terry Rudolph[1] in which he points out that our monkey brains have problems visualizing things smaller than "banana." Possibly our brains have problems visualizing forces bigger than "moon" and therefore we create these physical phenomena (Vulcan/WIMP) in a desperate attempt to conceptualize them.
While WIMPs may very well be the correct solution, I'd just like to see a little more energy spent exploring other solutions. Science is very much about being correct nowadays: getting published, getting the Nobel Prize, etc. I think it's far more rewarding to be wrong and science needs to briefly contemplate the philosophical roots that it was born of.
I agree that biological limitations could impose very real limits on what we can come up with; for that same reason, I have real doubts that we could understand or formulate (working) physics covering multiple dimensions.
But we are looking, both dark matter and dark energy have multiple candidates.
Maybe Einstein was correct and we have a (small but positive) cosmological constant, AKA space-time's property tax.
Various other proposed fields, like quintessence, and some views from QFT could also be an answer to that.
So you have a lot of different theories and proposals, from small hacks like the cosmological constant to completely new theories, but they are pretty much all aimed at answering the same thing and are labeled under "dark energy".
Dark matter is the same thing: most candidates for dark matter are quite different from each other, and they all attempt to explain, through different mechanisms, why we are all fatter than we think we are.
We also have some other efforts to create completely different explanations of gravity, like the MOND that was mentioned.
So I don't really see that we are limiting ourselves. But as I said before, you have to start searching for something at a known point or go completely out of left field, and usually you need a little (or, well, a lot) of both, because rarely (at least these days) are we off by so much that we need to rewrite the textbooks.
The problem with "the universe is the way it is because circles" is that our understanding/development of mathematics is based on ways of perception formed by seeing the universe around us. It becomes a chicken-and-egg issue.
> Are some [constants] more fundamental than others? [...] one useful choice has been just three: h, c and G, collectively representing relativity and quantum theory. In 1899, Max Planck, who founded quantum physics, examined the relations among h, c and G and the three basic aspects or dimensions of physical reality: space, time, and mass.
I thought there were 5 "Planck units" (h, c, G, the electron charge, and Boltzmann's constant) matching 5 dimensions of physical reality (space, time, mass, also charge and temperature). But I don't understand two things:
* Why is Boltzmann's constant there, temperature being a statistical property more akin to the fine structure constant than the other 4 Planck units?
* Why do physicists nowadays prefer to use the electron charge instead of Coulomb's constant as Planck did? Coulomb's constant and the gravitational constant G slot into similar looking formulas for electromagnetism and gravity respectively, and pairing them seems more "fundamental" than using the electron charge.
> [...] matching 5 dimensions of physical reality (space, time, mass, also charge and temperature).
I'm not quite sure I understand that, but I'll try to answer your questions anyway:
Yes, Boltzmann's constant is more or less just a conversion factor between temperature and energy. If we measured temperature in joules, we wouldn't need Boltzmann's constant. But then the same is true for c; we could measure time in meters or distance in seconds.
If you have defined the basic units of mechanics (time, distance, and mass), the value of G is fixed. But not the value (and even the dimension!) of Coulomb's constant and of electric charges. You can use a system of units where Coulomb's constant is 1 and electric charge then has dimension mass^(1/2)·length^(3/2)·time^(-1), while in SI units it has dimension current·time. Basically, you can make Coulomb's constant have any value and any dimension you want. Using just Coulomb's constant or just an electric charge in such a formula wouldn't make much sense. On the other hand, something like the fine structure constant (which is practically the square of the elementary charge measured in natural units) is a quantity that makes sense independently of your system of units.
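As a small illustration of that last point, here's a quick computation (Python, CODATA values, rounded) showing that α = e²/(4π·ε₀·ħ·c) comes out as the same dimensionless ~1/137 regardless of the SI clutter that goes into it:

    import math

    e    = 1.602176634e-19    # C,   elementary charge
    eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
    hbar = 1.054571817e-34    # J*s, reduced Planck constant
    c    = 299_792_458.0      # m/s, speed of light

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)   # ~0.0072973..., ~137.036: a pure number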
I guess the reason for that is that mass has meaning outside of gravity (in Newton's second law), but electric charge is confined to electromagnetism. There's also the thing with electric charge being quantized to integer multiples of the elementary charge e (apart from quarks for which it is an integer multiple of e/3, but they usually don't appear isolated), but mass not being quantized.
I think the similarities between electromagnetism and gravity are at least partly superficial. Coulomb's law looks a lot like Newton's law, but that's because both the electrostatic and the classical gravitational potential obey a Poisson equation (in the static limit and to lowest order). And that's about the simplest equation such a potential can obey. I don't think there's much more to it than that (perhaps apart from both being long-ranged because their symmetries are not broken).
> both being long-ranged because their symmetries are not broken
I guess this is the property of both that appears similar, and it just seems more than coincidental that two of the four (or five) fundamental constants of nature describe these at least partly similar interactions.
When doing calculations, the electric charge of a thingy is usually placed into some variable, and when using electron-charge versions of constants you can usually cancel them out. So less writing, typing, and calculating; scientists are lazy.
> But some constants involve no dimensions at all. These are so-called dimensionless constants – pure numbers, such as the ratio of the proton mass to the electron mass. That is simply the number 1836.2 (which is thought to be a little peculiar because we do not know why it is so large). According to the physicist Michael Duff of Imperial College London, only the dimensionless constants are really ‘fundamental’, because they are independent of any system of measurement. Dimensional constants, on the other hand, ‘are merely human constructs whose number and values differ from one choice of units to the next’.
Say what?
Unitless constants are exactly the same stuff as dimensional constructs; it's just that the units canceled out. You measure a proton's mass in some arbitrary units, the electron in the same units and when you divide the two, the units go away. This does not create a philosophically distinct category of constant.
It's about the numerical value of these constants. The 1.673E-27 kg of the proton and the 9.109E-31 kg of the electron are completely arbitrary, but their ratio of 1836 isn't. Besides, you can't really measure dimensional values. An experiment to measure the mass of the proton really is a complex way to measure the ratio of the mass of the proton to the mass of the international prototype kilogram; and that's a dimensionless value.
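A two-line illustration of the point (CODATA values): the individual masses are whatever your unit system says they are, but the ratio is the same pure number either way.

    m_p_kg, m_e_kg   = 1.67262192369e-27, 9.1093837015e-31   # kilograms
    m_p_mev, m_e_mev = 938.27208816, 0.51099895              # MeV/c^2

    print(m_p_kg / m_e_kg)    # ~1836.15
    print(m_p_mev / m_e_mev)  # ~1836.15, same dimensionless number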
I thought "units canceled out" was the entire point. The units we use say everything about our assumptions such as the definition of distance and time (concepts, for example, closely tied to our understanding of light) and dimensionless constants allow us to safetly hide some of those assumptions.
Philosophically this is significant because these constants provide us our most basic hypothetical methods of communication with other technologically advanced species in the future, provided they share a relatively similar linear number system (which can be based on the ratio of protons between different elements).
I think it does. Whatever our choice of units (provided you use the same choice of units in all calculations), the ratio of mass of the proton to the electron will be the same. This is what makes the constant fundamental: it is independent of any way we choose to measure it, so should be the same to anybody and everybody. If we were to meet an alien civilization and study their physics, as long as we know their number system, we would immediately recognize the number 1836.2.
> If we were to meet an alien civilization and study their physics, as long as we know their number system, we would immediately recognize the number 1836.2
Only if that number really is a constant, so that it matches in their pocket of the universe. For sure-fire mutual recognition of numbers, I would stick with something mathematically defined, like some large-ish prime numbers, pi, e, and so forth.
> the ratio of mass of the proton to the electron will be the same.
In effect, the electron's mass is then the unit of measure. A proton's mass is 1836.2 electrons, and so many kilograms, so many ounces, etc.
You are correct, the distinction here is between unitless constants and constants expressible only in terms of arbitrary units.
ex:
The fine structure constant 7.29735257×10^−3 is independent of any measurement system and takes the same value when computed in any of them. The speed of light, however, cannot be expressed without prior definition of units of time and distance. 299792458 m/s is equivalent to 1.8026175×10^12 furlongs per fortnight.
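(If you want to check that conversion yourself, both units are exactly defined: a furlong is 201.168 m and a fortnight is 14 × 86400 s.)

    c_si      = 299_792_458.0     # m/s
    furlong   = 201.168           # m, exact
    fortnight = 14 * 86_400       # s, exact

    print(c_si * fortnight / furlong)   # ~1.8026e12 furlongs per fortnight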
Do things 'go through' the vacuum of space or is it like newton's cradle where the fundamental building blocks just bump shoulders yet do not move?
So for example when light travels from a million light years away, is it one fundamental particle traveling through space. Or are there trillions of interactions passing energy along?
One thing to consider is that, given the rate of change approaches zero as velocity approaches c, what does motion mean when there is no time for it to happen in, whether as things moving or bumping?
Seriously, though, what does it mean to ask "Why is light so fast?" - are you looking for some purpose in light traveling fast? You won't find one. It just DOES.
In fact, given that the much simpler alternative to there being a speed limit, c, on the universe would be that there is NO speed limit, then the question really becomes, why is light so SLOW?
Certainly a lot of physicists are content with leaving it at "because it is." They're not searching for purpose, they're searching for explanation. The article doesn't just ask the question either, it explores a possible avenue for seeking the answer.
Experiments over the past few years seem to indicate that the speed of light can be derived by measuring electric and magnetic properties of the vacuum. This leads to the question: is it possible that the speed of light is the value it is because the vacuum restricts it to this value? This would mean the speed of light is not a fundamental constant, but an observable parameter of the vacuum.
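The classical version of this idea is Maxwell's relation c = 1/√(μ₀ε₀), which already ties the speed of light to the measured electric and magnetic properties of the vacuum (the Urban et al. proposal goes further and tries to derive ε₀ and μ₀ themselves from vacuum fluctuations). A quick check:

    import math

    mu0  = 1.25663706212e-6    # N/A^2, vacuum permeability
    eps0 = 8.8541878128e-12    # F/m,   vacuum permittivity

    print(1 / math.sqrt(mu0 * eps0))   # ~2.99792458e8 m/s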
The article is light on links, but here is a paper that I think the article was referring to, by Marcel Urban and colleagues at the University of Paris-Sud:
"When a real photon propagates in vacuum, it interacts
with and is temporarily captured by an ephemeral pair.
As soon as the pair disappears, it releases the photon to
its initial energy and momentum state. The photon continues to propagate with an infinite bare velocity. Then the photon interacts again with another ephemeral pair and so on. The delay on the photon propagation produced by these successive interactions implies a renormalisation of this bare velocity to a finite value.
This “leapfrog” propagation of photons, with instantaneous leaps between pairs, seems natural since the only length and time scales in vacuum come from fermion pair lifetimes and Compton lengths."
The paper claims that the speed of light might fluctuate as a consequence, and proposes possible experiments to test this.
"The propagation of a photon being a statistical
process, we predict fluctuations of its time of flight of the order of 0.05fs/√m. This could be within the grasp of
modern experimental techniques and we plan to assemble
such an experiment."
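To get a sense of scale for that prediction, the quoted jitter of ~0.05 fs per √metre grows only with the square root of the path length, so even long baselines stay in the femtosecond regime:

    import math

    SIGMA = 0.05e-15   # seconds per sqrt(metre), the figure quoted in the paper

    for dist_m in (1.0, 1e3, 1e6):   # 1 m, 1 km, 1000 km
        print(f"{dist_m:>9.0f} m : ~{SIGMA * math.sqrt(dist_m) * 1e15:.2f} fs jitter")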
This could answer the question, why is light so slow?
Relative to matter? Photons are massless, so light doesn't get slowed down by interacting with remote particles as it travels.
Relative to humans? Humans are made of matter, and in addition to that, they are immensely complex bio-chemical mechanisms and depend a lot on slow stochastic processes such as diffusion, so a lot of other processes will appear fast to them.
From reading the article (and not enjoying it very much - see msravi's comment about its hand-waviness), the question it's failing to answer isn't "why is light faster than X" but rather "why is the speed of light what it is".
In a lot of ways, that question is just about as meaningful as "why are there electrons?" Looked at from the right angle, you can say that there is only one speed at which everything always moves through space-time; the only difference, really, among speeds is how much of that speed extends timewards from the moving thing's perspective. (What space-time itself gets up to, and what it is apart from something we experience, is a different set of questions altogether.) The idea that we ought to be able to understand everything in principle is a horse long out of the barn.
It's definitely an interesting question. Something that limits everything, moves uniformly no matter the point of view, doesn't interact with itself but does with everything else; it's a singular feature of physics.
If I understand you right, one can wonder things like, maybe 'light' is really 'time' in some sense, not moving uniformly but instead the actual clock that drives the universe. Stuff like that is, at the least, interesting fodder for bull sessions.
The question you might ask is why doesn't EVERYTHING travel at the speed of light. The speed of light is one in some units. We just happened to develop the meter and the second before we could measure the speed of light. These units are totally arbitrary.
Everything does travel at the speed of light. It's the only speed that exists. When you aren't moving relative to the space around you, you are traveling at the speed of light along the time axis. As your speed increases through space, your speed along the time axis decreases to compensate, ensuring that your total speed in space-time is the speed of light.
So if the vacuum contains quantum fluctuations in which particles are being produced, I wonder what the mathematics looks like. How is the creation of a singular point-like element (or a pair of them) modeled mathematically? Roughly sketched of course :)
Search for creation and annihilation operators in second quantization. It's quite simple: I give you a field with some particles populating some modes, and applying the creation (annihilation) operator gives you a field with one more (one fewer) particle in a mode. That's how the game starts. If you're a particle physicist you typically develop this further into a system in terms of an action, so you can use a path-integral formulation; the action is then written in terms of a Lagrangian density, where you write the fields and their couplings. You then typically talk in the language of propagators (look for electron and photon propagators) and Feynman diagrams (which may be obtained from the Hamiltonian or Lagrangian).
Condensed matter people and quantum opticians typically stick with the Hamiltonian (creation/annihilation) picture. Theories like superconductivity (BCS theory) are phrased like this. The standard model, though, is typically written in terms of the Lagrangian.
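If you want to play with this without any QFT machinery, the single-mode version fits in a few lines of numpy: build the annihilation operator as a matrix on a truncated Fock space and the textbook relations (a†a counting particles, a†|0⟩ = |1⟩, [a, a†] = 1 away from the truncation edge) fall right out. Just a toy sketch of the operator algebra, not field theory proper.

    import numpy as np

    N_MAX = 6                                         # keep Fock states |0> ... |5>
    a = np.diag(np.sqrt(np.arange(1, N_MAX)), k=1)    # annihilation operator
    a_dag = a.T                                       # creation operator

    number_op = a_dag @ a                  # counts particles in the mode
    print(np.diag(number_op))              # [0. 1. 2. 3. 4. 5.]

    vac = np.zeros(N_MAX); vac[0] = 1.0    # the vacuum |0>
    print(a_dag @ vac)                     # |1>: one particle created

    comm = a @ a_dag - a_dag @ a           # identity, except at the truncation edge
    print(np.allclose(comm[:-1, :-1], np.eye(N_MAX - 1)))   # True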
But it is. I can have devices that use single electrons. I can measure shot noise. Theories in terms of particles work. Turns out that theories in terms of off-mass-shell particles describing higher and higher terms of perturbation theory work well too.
I like to think of light as a something we made up in our minds.
A light bulb that is illuminating a wall is actually more like the light bulb touching the wall, rather than it sending out light that hits the wall.