“The first principle is that you must not fool yourself–and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.” –Richard Feynman
Last week, a collaboration of 160 scientists working in Italy wrote a paper claiming that their experiment shows neutrinos traveling faster than the speed of light. Although I wrote up some initial thoughts on the matter, and there are plenty of other excellent takes, we’re ready to go into even deeper detail.
First, a little background on the humble neutrino.
Early in the 20th century, physicists learned in great detail how radioactive decay worked. An unstable atomic nucleus would emit some type of particle or radiation, becoming a lower-mass nucleus in the process. Now, if you added up the total energy of the decay products — the mass and kinetic energy of both the outgoing nucleus and the emitted particle/radiation — you had better find that it equals the initial energy (from E = mc²) that you started off with!
For two of the three common types of decay — alpha and gamma decay — this was, in fact, observed to be true. But for beta decay, the kind shown in the diagram above, there’s always energy missing! No less a titan than Niels Bohr considered abandoning the law of conservation of energy on account of this. But Wolfgang Pauli and Enrico Fermi had other ideas.
In the early 1930s, they theorized that there is a new type of particle emitted during beta decay that was very low in mass and electrically neutral: the neutrino.
Because they’re uncharged, neutrinos are only detectable through the same nuclear interaction that causes radioactive decay: the weak nuclear force. It took more than two decades to begin detecting neutrinos, because their interactions are so mind-bogglingly weak. It wasn’t until 1956 that the first detections — based on neutrinos (okay, technically antineutrinos) from nuclear reactors — occurred.
And since then, we’ve detected naturally occurring neutrinos, from the Sun, from cosmic rays, and from radioactive decays, as well as man-made ones from particle accelerators. The reason I’m telling you all this is because we’ve been able to conclude, based on this, that each of the three types of neutrino — electron, muon, and tau — has a mass less than one-millionth the mass of the electron, but still not equal to zero!
So whether these neutrinos are created in stars, in supernovae, by cosmic rays or by particle accelerators, they should move at a speed indistinguishable from the speed of light. Even over distances of thousands of light years. That is, of course, assuming special relativity is right.
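Just how indistinguishable from the speed of light? Here’s a minimal back-of-the-envelope sketch. For an ultra-relativistic particle, 1 − v/c ≈ (mc²)² / (2E²). The numbers below — a neutrino rest energy of around 0.1 eV and a beam energy of around 17 GeV — are my assumed illustrative values, not figures from this article:

```python
# How far below c does a massive but ultra-relativistic neutrino travel?
# Assumed numbers (not from the article): rest energy ~0.1 eV, beam energy ~17 GeV.
m_c2 = 0.1     # neutrino rest energy, in eV (assumed)
E = 17e9       # neutrino total energy, in eV (assumed)

# For E >> m c^2:  v/c = sqrt(1 - (m c^2)^2 / E^2)  ≈  1 - (m c^2)^2 / (2 E^2)
deficit = m_c2**2 / (2 * E**2)
print(f"1 - v/c ≈ {deficit:.1e}")  # ≈ 1.7e-23
```

A speed deficit of one part in 10²³ is utterly unmeasurable over any baseline we have, which is why a massive neutrino still "moves at the speed of light" for all practical purposes.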
So, as I told you last week, CERN made a neutrino beam by slamming ultra-high-energy protons into other atoms. This makes a whole slew of unstable particles (mesons), which decay into, among other things, muon neutrinos.
Now, here’s where it gets fun. Since they started taking data, CERN’s beam has slammed upwards of — are you ready? — 100,000,000,000,000,000,000 protons into other atoms to create these neutrinos!
These neutrinos then pass through more than 700 km of Earth before arriving in the neutrino detector. This distance, they claim (with justification), is so well-measured that the uncertainty in it is just 0.20 meters! Over the three years the OPERA experiment has been running, they’ve finally managed to collect around 16,000 neutrinos, which is a mind-bogglingly small percent — something like 10⁻¹⁴ % — of the neutrinos created!
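You can check that detection fraction yourself in one line of arithmetic (assuming, roughly, one neutrino per proton on target):

```python
# Back-of-the-envelope check of the quoted detection fraction.
protons_on_target = 1e20      # ~10^20 protons slammed into the target
neutrinos_detected = 16_000   # events collected by OPERA over ~3 years

fraction = neutrinos_detected / protons_on_target
print(f"detected fraction ≈ {fraction:.1e}")          # ≈ 1.6e-16
print(f"as a percentage   ≈ {fraction * 100:.1e} %")  # ≈ 1.6e-14 %
```

Which is indeed the "something like 10⁻¹⁴ %" quoted above.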
Now, you may ask, how did they conclude that these neutrinos are arriving about 60 nanoseconds too early?
The first thing you go and do is measure all of your initial stuff — all 10²⁰ protons — and figure out what you’re starting out with. In this case, the protons were created in pulses that lasted for just over 10,000 nanoseconds each, of roughly constant amplitude (but with some variation in there). You sum all the pulses together, and you get the distributions shown below.
Now, in a perfectly simple, idealized world, you would measure each individual proton and match it up with the neutrino you end up with in your detector. But of course, we can’t do that; it’s too difficult to detect neutrinos. For example, if you sent one proton at a time, with just 100 nanoseconds in between each proton, you would have to wait thirty years just to detect one neutrino! This will not do, of course, so here’s what you do instead.
You take this very precisely measured distance from your proton source to your neutrino detector, divide it by the speed of light, and that would give you the time delay you expect in between what you started with and what you receive. So you go and record the neutrinos you receive, subtract that expected time delay, and let’s see what we get!
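For scale, here’s that expected delay worked out, along with what the claimed 60-nanosecond early arrival means as a fractional speed excess. I’m using 730 km as a round number for the baseline (the article only says "more than 700 km"):

```python
# Expected light-travel time over the CERN-to-detector baseline,
# and the fractional speed excess implied by a 60 ns early arrival.
c = 299_792_458.0           # speed of light, m/s
distance = 730e3            # baseline in meters (assumed round number)

t_expected = distance / c   # seconds
print(f"expected flight time ≈ {t_expected * 1e3:.3f} ms")  # ≈ 2.435 ms

early = 60e-9               # reported early arrival, in seconds
print(f"fractional speed excess ≈ {early / t_expected:.1e}")  # ≈ 2.5e-5
```

So the whole controversy hinges on a 60 ns offset riding on top of a roughly 2.4 millisecond flight time — a few parts in 100,000.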
“Oh my,” you might exclaim, “this looks terrible!”
And you’d be right, this is a terrible fit! After you re-scale the neutrino amplitude to match the initial proton amplitude, you’d expect a much better match than this. But there’s a good reason that this is a terrible fit: your detector doesn’t detect things instantaneously!
So you need to understand to extraordinary precision and accuracy what your detector is doing at every stage in the detection process. And the OPERA team does this, and comes up with an expected additional delay, due to their equipment, of 988 nanoseconds. They also try to quantify the uncertainty in that prediction — what we call systematic error — by identifying all the different sources of their measurement uncertainty. Here’s what they come up with.
Well, fair enough, only 7.4 nanoseconds worth of error on that, assuming that there are no unidentified sources of error and that the identified sources of error are unconnected to one another. (I.e., that you can add them in quadrature, by taking the square root of the sum of the squares of all the individual errors.)
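Adding in quadrature is simple to do; here’s a sketch with made-up per-source errors (these are illustrative placeholders, not OPERA’s actual error budget — only the method matters):

```python
import math

# Combining independent error sources in quadrature:
# total = sqrt(sum of squares). The numbers below are hypothetical.
errors_ns = [2.0, 5.0, 3.0, 2.5, 1.0]  # placeholder per-source errors, in ns

total = math.sqrt(sum(e**2 for e in errors_ns))
print(f"combined systematic error ≈ {total:.1f} ns")  # ≈ 6.7 ns
```

Notice that the quadrature sum is dominated by the largest individual term — and that the whole procedure silently assumes the error sources really are independent.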
So they do a maximum likelihood analysis on their recorded neutrino data and see how much they actually have to delay their data by to get the best fit between the initial protons and the final neutrinos. If everything went as expected, they’d get something consistent with the above 988 ns, within the allowable errors. What do they actually get?
A craptacular 1048.5 nanoseconds, which doesn’t match up! Of course, the big question is why? Why don’t our observed and predicted delays match up? Well, here are the (sane) options:
- There’s something wrong with the 988 nanosecond prediction, like they mis-measured the distance from the source to the detector, failed to correctly account for one (or more) stage(s) of the delay, etc.
- This is a very, very unlikely statistical fluke, and if they run the experiment for even longer, they’ll find that 988 nanoseconds is a better fit than the 1048.5 nanoseconds measured so far.
- The neutrinos that you detect are biased in some way; in other words, the neutrinos that you detect aren’t supposed to have the same distribution as the protons you started with.
- Or, there’s new physics afoot.
Now, the OPERA team basically wrote this paper saying that the first option is what they suspect, but they don’t know where their mistake is.
We can look at the second option ourselves and see how good a fit a delay of 988 nanoseconds actually is compared to the data. The way they got their 1048.5 nanosecond delay as the best fit was to fit the edges of the distributions to one another. Here’s the data from those portions.
And while this is a very good fit, I’ve gone and generated a version of the graph in the upper-left-hand corner where the data points have only a 988 ns delay instead of a 1048.5 ns delay. Let’s compare.
As you can tell just by eyeballing it, this fit isn’t significantly worse. It is slightly worse, but despite the claimed six-sigma statistical significance, you’d be hard-pressed to find someone who claims that this fit is inconsistent with their data. Their analysis simply says that it isn’t the most likely fit based on the data.
But there’s also the third option, which I haven’t seen anyone else talk about yet. And this is important, because I’ve seen people present at conferences claiming to have achieved superluminal velocities, and they are fooling themselves. Let me explain. (The next four images are mine.)
Imagine you start off with a pulse of “stuff” at an initial time, t_0 — whether it’s light, neutrinos, electrons, whatever — moving at just about the speed of light in vacuum. The pulse, of course, has a finite amplitude, lasts for a certain length of time and exists at a certain location.
What do you expect to happen at some later time, t_f?
You expect that the pulse will move a distance in that time that corresponds to it traveling at the speed of light for that time.
But what if you can’t observe the entire pulse? What if you can only observe some part of the pulse? This might happen because the pulse is hard to detect, because something filters it along the way, etc. Quite often, you’ll see results like this.
If all you did was measure the location of the final pulse (or what’s left of it), you could easily conclude that you went faster than the speed of light! After all, the pulse appears to travel a distance (from x_0 to x_f + ∆x) that’s greater than the distance it would have traveled had it moved at the speed of light (from x_0 to x_f).
But you’re misinterpreting what you’re seeing. You preferentially cut off the rear portion of the pulse!
And really, nothing moved faster than the speed of light. Each particle — each quantum — moved at (or slightly below) the speed of light, but you only measured some of the particles at the end, and so your results got skewed!
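A toy simulation makes the bias concrete. Below, every "particle" in a pulse travels at exactly the same speed, but the detector preferentially misses the rear of the pulse — and the average arrival time of what’s detected shifts earlier. All numbers here are made up for illustration:

```python
import random

# Toy model of selection bias: clip the rear of a pulse and the mean
# arrival time of the *detected* particles shifts earlier, even though
# every particle moved at the same speed.
random.seed(0)
pulse = [random.uniform(0, 10_000) for _ in range(100_000)]  # emission times, ns

full_mean = sum(pulse) / len(pulse)

# Suppose only particles from the front 80% of the pulse get detected.
detected = [t for t in pulse if t < 8_000]
clipped_mean = sum(detected) / len(detected)

shift = full_mean - clipped_mean
print(f"detected pulse looks ≈ {shift:.0f} ns earlier than the full pulse")
```

A rear-clipped pulse looks roughly a microsecond "early" in this toy model — far larger than OPERA’s 60 ns — so even a mild bias in which neutrinos get detected could matter.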
The speculations about how the theory might be wrong are, of course, already abundant. Sascha points out how extra dimensions could be the new physics that accounts for this mystery, Bob McElrath and others have pointed out to me (privately) that the Earth could have an index of refraction less than one for neutrinos, and of course neutrinos could be some highly modified type of tachyon, among other explanations.
But before we can take any of these suggestions seriously on physical grounds, we’ve got to consider the possibility that there are errors either in the experiment, the analysis, or the selection of neutrinos!
Because you must not fool yourself, and you are the easiest person to fool.