> [...] the paper advances the prediction that quantum computation will never be possible with more than 3 or 4 qubits. [...] I wonder: before uploading their paper, did the authors check whether their prediction was, y’know, already falsified? How do they reconcile their proposal with (for example) the 8-qubit entanglement observed by Haffner et al. with trapped ions [...]
(Note: that's a critique of the previous paper, not the linked one. Although the linked post mentions quantum computers not working, the linked paper does not touch the subject.)
Furthermore, the "incompressible fluid" they postulate sounds like it enables non-local behavior (which it must, in order to match current versions of the Bell test), so it is unable to help reconcile GR with QM.
So this does rather less than they claim, even assuming that their claimed result is correct.
"In 1746 Euler modelled light as waves in a frictionless compressible fluid; a century later in 1846, Faraday modelled it as vibrations in ‘lines of force’ … Fifteen years later Maxwell combined these approaches, proposing that a magnetic line of force is a ‘molecular vortex’…"
They basically updated Maxwell's model. From their conclusion:
"We brought Maxwell’s 1861 model of a magnetic line of force up to date using modern knowledge of polarised waves and of experiments on quantised magnetic flux. Our model obeys the equations for Euler’s fluid and supports light-like solutions which are polarised, absorbed discretely, consistent with the Bell tests, and obey Maxwell’s equations to first order."
What's nice is that their model is classical. Even if it "just" makes exactly the same predictions as other models, it's nice to have a model where physical intuition can be brought to bear.
A classical model can also be simulated on a classical computer, so if it produces the same results as QM then quantum computation would be... fiction or just redundant?
If their lines of force could be shown to have an independent physical effect, of the kind the vector potential was shown to have via the Aharonov-Bohm effect, then this whole approach to quantization would become extremely interesting. Otherwise, you're right: it's just another interpretation of QM, and not a very interesting one at that. (Despite their claims, as I explained in a separate comment, they can't reproduce the experimental violations of the CHSH inequalities in Aspect's and other experiments, which introduce time variation precisely to rule out the kind of prior communication they are arguing for.)
It says compressible not incompressible.
In "normal" quantum mechanics we have wave-particle duality: a particle also behaves like a wave, whatever that means.
In Bohm's hidden variable theory, also known as "pilot wave theory", the wave and the particles are separate. The pilot wave is a wave of unknown making (the theory does not say what it is made of), and it follows the normal quantum mechanical behaviour. Particles then "ride" on this wave: all the quantum mechanical wave effects happen in the pilot wave, and the classical particle-like particles just follow their paths, already laid out by the wave.
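For concreteness, this is how the "riding" works in the standard formulation (textbook material, not something from the comment above): the wavefunction \psi evolves under the Schrödinger equation as usual, and each particle position Q_k is carried along by the guidance equation

    \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{(Q_1,\ldots,Q_N,\,t)}

so all the interference lives in \psi, and the particles simply follow the flow it defines.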
Although "spooky action at a distance" is also experimentally verified, explaining it by postulating a wave that fills the whole universe and explicitly reacts spookily over distances makes the spookiness uncomfortably explicit in this theory, so it doesn't appeal to most physicists.
"The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal."
http://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory#...
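For reference, the CHSH form of Bell's constraint that these tests check (standard textbook material, not from the linked article): any local hidden variable theory must satisfy

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2

while quantum mechanics predicts |S| = 2\sqrt{2} \approx 2.83 at the optimal analyzer angles, and the Bell test experiments come out on the quantum side.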
Yes. Bell's theorem doesn't forbid hidden variable theories (in fact, he has a publication preceding his famous result explicitly demonstrating a hidden variable theory that could reproduce QM measurements on a two-level system), just local hidden variable theories. In his collected papers, Bell often remarks that he feels Bohm's interpretation should be more widely studied.
It's a good question why it's not. My understanding, though I'm not well versed in the subject, is that there are some pretty severe shortcomings to it, particularly when you start considering systems with many particles. I've read some of Bohm's writings to try to understand it and as far as I got I found it pretty underwhelming; it seemed more like a bookkeeping trick than any sort of real insight.
I'm personally a fan of the Everett interpretation, also called many-worlds, which is what you get if you assume that quantum mechanics applies to the observer as well. The act of observation then throws the observer into a superposition of states in which different things were observed, and those states cannot meaningfully interact later for thermodynamic reasons.
Unless someone comes up with good reason to believe that quantum mechanics does not describe humans, I see no reason not to accept it. And if quantum mechanics is replaced by something different, to the extent that quantum mechanics is an accurate description of us, that interpretation remains correct.
How long would I have to study physics to be able to understand everything in this sentence?
Just start reading David J. Griffiths's Introduction to Electrodynamics, a very well written textbook. The catch is that if you don't know vector calculus you may not be able to follow it, so you might need to learn some vector calculus first.
Then read Introduction to Quantum Mechanics, also by Griffiths, the best introductory QM book that I know of. If you managed to get through Electrodynamics, you should by then know enough calculus for this book too, but you will also need to know about complex numbers.
The "inviscid compressible fluid" is about fluid mechanics. I don't know any splendid textbook on that.
For what it's worth, volumes two and three cover electrodynamics and quantum mechanics, respectively.
They are basically saying that the quantum mechanical 'strangeness' of light can be explained with classical, deterministic physics; it is not necessary to posit a separate level at which quantum mechanics predominates and trumps classical mechanics.
It can all be understood as the movement of 'particles' of light (photons) on an underlying wave.
http://en.wikipedia.org/wiki/Magnetic_flux_quantum
http://simple.wikipedia.org/wiki/Magnetic_flux
http://en.wikipedia.org/wiki/Flux_tube
http://en.wikipedia.org/wiki/Quantum_vortex
http://en.wikipedia.org/wiki/Wave_packet
http://en.wikipedia.org/wiki/Maxwell%27s_equations
http://en.wikipedia.org/wiki/Differential_equation
http://en.wikipedia.org/wiki/Polarization_%28waves%29
http://en.wikipedia.org/wiki/Bell_test_experiments
http://en.wikipedia.org/wiki/Fluid_dynamics
I'm pretty sure this would all be accessible to an undergrad physics major who passes my math criterion above. It would probably be beyond them to do the work, but they should be able to follow it.
If it's true, it would be even more awesome than it sounds. It would be the biggest breakthrough in physics in 100 years. I'll give you long odds against it turning out to be true, but I won't bet my entire life savings on it.
[EDIT] I have now read the paper and I'm ready to bet my life savings that it's bogus. There's just nothing new here, just a hand-wavy argument that classical mechanics can violate the Bell inequalities because "lines of force." It's possible that QM will be overturned some day, but when it happens it won't look like this.
What fascinates me is that we have achieved so much in the "quantum age" of the past century using models derived from a quantum mechanical approach to physics. That the bedrock [or lack of one] beneath all that could be removed and replaced by a better, more consistent approach seems so counter-intuitive. But then one recalls how long the Newtonian or Aristotelian approaches [or any other such system] stood.
Also, would this be a return to universal models with an aether? I wonder how Michelson-Morley works with "flux tubes".
http://bayes.wustl.edu/etj/articles/cmystery.pdf
>While it is easy to understand and agree with this on the epistemological level, the answer that I and many others would give is that we expect a physical theory to do more than merely predict experimental results in the manner of an empirical equation; we want to come down to Einstein's ontological level and understand what is happening when an atom emits light, when a spin enters a Stern-Gerlach magnet, etc. The Copenhagen theory, having no answer to any question of the form "What is really happening when - - -?", forbids us to ask such questions and tries to persuade us that it is philosophically naive to want to know what is happening. But I do want to know, and I do not think this is naive; and so for me QM is not a physical theory at all, only an empty mathematical shell in which a future theory may, perhaps, be built.
...and maybe chapter 10 of his book, "Probability Theory: The Logic of Science".
>We are fortunate that the principles of Newtonian mechanics could be developed and verified to great accuracy by studying astronomical phenomena, where friction and turbulence do not complicate what we see. But suppose the Earth were, like Venus, enclosed perpetually in thick clouds. The very existence of an external universe would be unknown for a long time, and to develop the laws of mechanics we would be dependent on the observations we could make locally.
>Since tossing of small objects is nearly the first activity of every child, it would be observed very early that they do not always fall with the same side up, and that all one’s efforts to control the outcome are in vain. The natural hypothesis would be that it is the volition of the object tossed, not the volition of the tosser, that determines the outcome; indeed, that is the hypothesis that small children make when questioned about this. Then it would be a major discovery, once coins had been fabricated, that they tend to show both sides about equally often; and the equality appears to get better as the number of tosses increases. The equality of heads and tails would be seen as a fundamental law of physics; symmetric objects have a symmetric volition in falling.
>With this beginning, we could develop the mathematical theory of object tossing, discovering the binomial distribution, the absence of time correlations, the limit theorems, the combinatorial frequency laws for tossing of several coins at once, the extension to more complicated symmetric objects like dice, etc. All the experimental confirmations of the theory would consist of more and more tossing experiments, measuring the frequencies in more and more elaborate scenarios. From such experiments, nothing would ever be found that called into question the existence of that volition of the object tossed; they only enable one to confirm that volition and measure it more and more accurately...
http://www.med.mcgill.ca/epidemiology/hanley/bios601/Gaussia...
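Jaynes's scenario is easy to play with numerically. A minimal sketch (mine, not Jaynes's) of "the equality appears to get better as the number of tosses increases":

    import random

    # Toss a fair coin n times for increasingly large n and watch the
    # heads frequency settle toward 1/2 -- the "fundamental law" that
    # Jaynes's cloud-bound physicists would discover and keep confirming.
    for n in [10, 100, 1000, 10000, 100000, 1000000]:
        heads = sum(random.random() < 0.5 for _ in range(n))
        print(f"{n:>8} tosses: heads frequency = {heads / n:.4f}")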
The question I have, though, is this: does this model actually help model phenomena that we can't already model? Quantum gravity is the big spectacular example, but there are many others.
For instance, the Standard Model is very successful at predicting the anomalous magnetic moment of the electron. But it is not successful at predicting the same quantity for the muon. There are many other issues with the Standard Model that aren't so high-flung as quantum gravity.
Are classical models like these, if they can be shown to incorporate multiple particles interacting simultaneously, capable of going beyond the Standard Model or merely replicating it?
It sounds like they are using quantum mechanics to explain quantum mechanics.
Well, obviously.
Aspect's work is one of the most beautiful pieces of careful and precise experimental testing of an idea in the past half-century, and while it has been attacked from many perspectives it is still a very robust argument for the non-locality of reality. One of the important things about it is that the polarization direction was switched in a quasi-random way after the photons had left the source. Variations on this trick have been performed since, and they all agree with the predictions of quantum theory.
The authors say in this paper: "The CHSH assumption is not true in Faraday's model. Instead there is prior communication of orientation along phase vortices such as (4), communication which the CHSH calculation excludes by its explicit assumption."
In experiments like Aspect's, prior communication is ruled out because the experimental setup is varied in one arm of the apparatus outside the forward light cone of the other photon. Each photon gets detected before the other one could possibly know (based on signalling at the speed of light) what polarizer orientation it should be lined up with.
So this is an interesting bit of work that might be useful for creating photonic quasi-particles in magnetic fluids, which would allow the study of photon properties that might otherwise be difficult to get an experimental handle on. But the claim that they have a classical model that violates Bell's inequalities in a way that is relevant to the actual experimental work done in this area is considerably overblown.
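To put rough numbers on the quantum side of this, here's a minimal sketch (mine, not from the paper or from Aspect's analysis) that samples photon-pair outcomes from the quantum joint probabilities for polarization-entangled pairs, where the two outcomes agree with probability cos^2(a - b), and evaluates the CHSH quantity at the standard angles:

    import math, random

    def E(a, b, n=200000):
        # Quantum prediction for polarization-entangled photons:
        # outcomes agree with probability cos^2(a - b), so the
        # correlation is E(a, b) = cos(2 * (a - b)).
        same = sum(random.random() < math.cos(a - b) ** 2 for _ in range(n))
        return (2 * same - n) / n  # +1 when they agree, -1 when they differ

    a, a2 = 0.0, math.pi / 4              # Alice's two analyzer settings
    b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two analyzer settings

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"S = {S:.3f} (local bound 2, quantum value {2 * math.sqrt(2):.3f})")

Note that the sampler draws each pair using both angles at once; that is exactly the nonlocality at issue, and no program that computes each outcome locally from a shared hidden variable can reach S near 2.83 without exploiting a loophole.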
"Chaotic Ball" model, local realism and the Bell test loopholes
http://arxiv.org/abs/quant-ph/0210150
...any thoughts?
While it is easy to imagine selective-detection effects that mess up the results enough to invalidate the test at the level of the inequality, it is very, very difficult to maintain all the physics required for precise, detailed agreement between theory and experiment of the kind that Aspect and others have shown. Here is an example of a "local realistic model" that reproduces the quantum mechanical results for in-time coincidences, but completely messes up any number of auxiliary measurements: http://www.tjradcliffe.com/?p=590
So while I'd love to see a modern version of Aspect's work using state-of-the-art entangled photon sources and the like, the likely reason it hasn't been done is that the odds of it revealing anything new and different are trivially small (but not zero, of course!)
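To illustrate the kind of model the link above describes, here is a toy of my own (much simpler than that model, and it does not reproduce the quantum curve): each station computes its outcome strictly locally from its analyzer angle and a shared hidden polarization, but detection probability also depends on the hidden variable, and post-selecting on coincidences pushes S well past the local bound of 2:

    import math, random

    def station(theta, lam):
        # Strictly local rule: outcome and detection depend only on
        # this station's analyzer angle and the shared hidden angle.
        c = math.cos(2 * (theta - lam))
        outcome = 1 if c >= 0 else -1
        detected = random.random() < abs(c)  # alignment-dependent efficiency
        return outcome, detected

    def E_coinc(ta, tb, n=200000):
        tot = hits = 0
        for _ in range(n):
            lam = random.uniform(0, math.pi)  # shared hidden polarization
            A, dA = station(ta, lam)
            B, dB = station(tb, lam)
            if dA and dB:  # keep coincidences only, as the experiment must
                hits += 1
                tot += A * B
        return tot / hits

    a, a2 = 0.0, math.pi / 4
    b, b2 = math.pi / 8, 3 * math.pi / 8
    S = E_coinc(a, b) - E_coinc(a, b2) + E_coinc(a2, b) + E_coinc(a2, b2)
    print(f"S on coincidences = {S:.2f} (local bound 2)")

Run it and S comes out around 3.5, but the model mangles the singles rates and the shape of the correlation curve, which is the point above: faking the inequality alone is easy; faking all of the auxiliary physics at once is what nobody has managed.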