However, I beg to differ. All these questions are driven by something we do not understand at the theoretical level: the hierarchy problem, i.e. why the Higgs boson mass is so small compared to the energy scale of quantum corrections. It is a tough concept to elucidate, so allow me to divert you elsewhere for a discussion of what that issue is. Here I wish instead to point out that as experimentalists we are confronted with much more pressing challenges, because we have unfinished business in front of us: measurements we do not understand.
Physicists, in my opinion, can only entertain themselves with theoretical riddles once they have cleared their laundry list of experimental mysteries. And we happen to have one long-standing mystery to solve, but it is not one for which the LHC is going to make a difference. On the contrary, it is something still not attended to well enough by existing experiments.
The mystery is constituted by a set of experimental results in neutrino oscillation experiments, which hint at something fundamental that we have not yet understood. This might be the existence of a fourth kind of neutrino, a so-called sterile one. Or it might be just another collection of odd three-sigma effects which have nothing to do with one another except the common plague of inconclusive statistics and/or insufficient experimental dexterity.
Three or more neutrinos?
In the standard model of particle physics there are 24 particles of half-integer spin called fermions. These are organized in two sets: 18 quarks (six flavours, each coming in three colour states) and 6 leptons. Among the leptons there are the three charged states e, μ, τ, and their three neutral partners ν_e, ν_μ, and ν_τ, the neutrinos.
While a large part of the high-energy physics community was busy trying to discover the Higgs boson, those three neutrinos have provided us with some startling surprises in the last 15 years: the observation of their oscillation phenomena, the evidence that they have non-zero masses, and a complex phenomenology which only today, after a decade of dedicated experiments, we are starting to understand in some detail. Big question marks still linger, but they, too, are being addressed.
In the process of investigating the complex neutrino phenomenology, those experiments have collected a list of discrepancies which might, with some stretch, all fit a very odd picture: the existence of a fourth kind of neutrino, more massive than all the others and "sterile", in the sense that it does not interact with ordinary matter through any of the fundamental interactions except gravity.
The first experiment to report a discrepancy that could be explained by sterile neutrinos was LSND, a neutrino oscillation experiment at Los Alamos. The experiment reported a signal of electron antineutrinos appearing in a muon antineutrino beam. The signal was not confirmed by other experiments, but a similar anomaly appeared in MiniBooNE, which found a 3.8-standard-deviation effect. Reactor antineutrino experiments, on the other hand, report deficits of events at the 3-standard-deviation level, which could somehow fit the same picture. The possibility that sterile neutrinos may also explain (at least in part) the observed amount of dark matter in the universe makes the topic even hotter than it would already be.
So what to do?
The anomalies and experimental riddles mentioned above demand a vigorous effort to settle the matter once and for all. In a preprint that appeared today on the arXiv, Carlo Rubbia, Alberto Guglielmi, Francesco Pietropaolo and Paolo Sala argue that the technology of digital bubble chambers and the use of a double detector collecting data from a CERN neutrino beam could easily reach a definitive answer in a year or so of data taking.
The ICARUS detector is a large tank of liquid argon which acts like an old-time bubble chamber, but which records a digital image of the passage of charged particles (produced when neutrinos interact with the detector mass) using the ionization left by the particles rather than microscopic bubbles along their trail. It currently sits in the Gran Sasso underground laboratory in central Italy, but it could be transported to CERN and, along with a new detector of one quarter the mass but otherwise identical to its larger counterpart, provide simultaneous measurements of neutrino fluxes at two distances from the source. While this solution is not the only one which would provide the sensitivity to conclusively exclude or prove the existence of sterile neutrinos, it is a quite attractive option.
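To get a feel for why measuring the flux at two baselines is so decisive, here is a minimal numerical sketch based on the standard two-flavour oscillation formula. The beam energy, baselines and sterile-neutrino parameters below are illustrative assumptions of mine, not the figures of the actual proposal:

```python
# Back-of-the-envelope sketch of the two-detector idea.
# All numbers below are illustrative assumptions, not the proposal's figures.
import numpy as np

def appearance_prob(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavour nu_mu -> nu_e appearance probability."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

E = 1.0             # beam energy in GeV (assumed)
dm2 = 1.0           # sterile mass splitting in eV^2 (typical of the fits)
sin2_2theta = 0.01  # mixing amplitude (assumed)

for L in (0.3, 1.6):  # near and far baselines in km (assumed)
    p = appearance_prob(L, E, dm2, sin2_2theta)
    print(f"L = {L:.1f} km: P(nu_mu -> nu_e) = {p:.2e}")
```

If the event rates in the two detectors deviate from pure geometric 1/L² scaling by amounts like these, oscillations into a fourth, sterile state are at play; if they follow 1/L² exactly, the sterile hypothesis is excluded.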
Ah, five sigma, five sigma...
One funny feature of the short article is its incipit. It goes as follows:
The recent result at CERN-LHC with the detection of the Higgs particle [1] has demonstrated once again the necessity of a more than "5 sigma" evidence for the definitive assessment of fundamental physics discoveries.
I find this sentence remarkable for its implications. Saying that "the CERN result has demonstrated the need of 5 sigma for a definitive assessment" means, in my book, that the result showed in some way that 3 or 4 sigma were not enough. That is the contrary of what I personally think for the specific case mentioned, while it is exactly what appears to be the line of the CERN management, who claimed discovery only once each experiment had achieved 5.0 standard deviations...
Now why do I think this is misguided? First of all, the results shown at the winter 2012 conferences, claiming significances in excess of three standard deviations in more than one channel from two separate experiments, were conclusive enough that the vast majority of HEP physicists were already certain that the Higgs boson had been found a good four months before the CERN director general said "I think we have it".
Second, there is nothing magical in the number 5. It corresponds to the non-round p-value of 2.9×10^-7, which is to say that observing data at least as discrepant with the model (the non-Higgs hypothesis, or the absence-of-sterile hypothesis) happens less than three times in ten million, if statistical fluctuations are the only cause.
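For the record, that number is a two-line computation with scipy's Gaussian survival function (one-tailed convention assumed):

```python
# One-tailed p-values for n-sigma Gaussian significances.
from scipy.stats import norm

for n_sigma in (3, 4, 5):
    p = norm.sf(n_sigma)  # survival function: P(Z > n_sigma)
    print(f"{n_sigma} sigma -> p = {p:.2e}")
# 3 sigma -> 1.35e-03, 4 sigma -> 3.17e-05, 5 sigma -> 2.87e-07
```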
The "5-sigma = discovery" came in fashion slowly in HEP in the course of the seventies and eighties, following an era where smaller evidences were enough to claim to have observed some new phenomenon. Indeed, many of those claims had ended up in the dust bin, and raising the bar was a good idea. The technical reasons for the tightened requirement were two:
1) the look-elsewhere effect, and
2) the complexity of experiments.
The look-elsewhere effect arises when there is a multitude of places or distinct ways in which a new signal may become manifest. If there is a 0.1% chance that you observe some funny peak in a histogram at a certain location, but there are 100 such locations that you concurrently examine, there is a fat 10% chance that you will observe one of them somewhere (see the sketch below). The "trials factor" here is easy to evaluate; but often it is not as easy, and in some cases it may be much sneakier. For instance, if your experiment has dozens of collaborators doing different analyses, and you are the Spokesperson, you should expect a hot tentative new signal appearing on your desk, and an overexcited colleague doing lots of hand-waving in front of you, about every other Monday morning. How to cope with a hard-to-estimate trials factor? Of course, by raising the required significance for a new observation claim.
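The arithmetic of the easy case above is worth making explicit; a minimal sketch:

```python
# Trials factor in the example above: a 0.1% chance of a fake peak at one
# fixed location, with 100 locations examined concurrently.
p_local = 0.001
n_locations = 100

p_global = 1 - (1 - p_local) ** n_locations
print(f"Chance of at least one fake peak anywhere: {p_global:.3f}")
# -> 0.095, the "fat 10%" quoted above; for small p_local this is
#    approximately n_locations * p_local.
```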
The other problem that a higher significance requirement addresses is that of data analyses becoming more and more complex, with systematic uncertainties of all kinds which experimenters sometimes have a hard time estimating. For some systematic effects one does not usually know the sampling distribution, which means that assuming they distribute as Gaussian densities is a wild guess.
So imagine you measure a count rate of 2500 ± 300 (stat) ± 400 (syst). Let us imagine this comes from subtracting off a large background, so that those 300 events of statistical uncertainty come from a quite Gaussian-like variation of the background. The systematic uncertainty, instead, comes from some obscure variation observed in the counting rate under different running conditions; something for which we have no clear model, but which we can eyeball to be no larger than ±400 in 68% of cases. Are we justified in saying that, since 300² + 400² = 500², i.e. summing the (assumed Gaussian!) uncertainties in quadrature, we are observing an excess which differs from zero by five standard deviations (2500/500 = 5)?
Of course not. The systematic effect could have very non-Gaussian tails, such that, for instance, in 10% of the cases it contributes a variation of 2000 counts, not 400. The probability to obtain the observed excess would in that case not be 2.9×10^-7, but a few percent!
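A toy simulation makes the point concrete. In the sketch below I model the heavy tail as a Gaussian of width 2000 occurring 10% of the time; the exact tail model is my assumption, but any similar choice delivers the same message:

```python
# Toy Monte Carlo: Gaussian statistical term of width 300, plus a systematic
# term that is Gaussian of width 400 in 90% of cases and (by assumption)
# Gaussian of width 2000 in the remaining 10%.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

stat = rng.normal(0.0, 300.0, n)
heavy = rng.random(n) < 0.10
syst = np.where(heavy, rng.normal(0.0, 2000.0, n), rng.normal(0.0, 400.0, n))

p_fake = np.mean(stat + syst >= 2500.0)
print(f"Probability of faking a 2500-count excess: {p_fake:.4f}")
# A percent-level number, nowhere near the 2.9e-7 of a true 5-sigma effect.
```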
Now that I have described the two main reasons why we nowadays use 5 standard deviations as the agreed-upon threshold to claim the discovery of a new physics process, let us look at the case of the Higgs discovery. While those analyses were indeed quite difficult, and unknown or ill-estimated systematic uncertainties were potentially still there, there were two experiments finding the same effect, in two different final states each. This is already a quite strong indication that one should not worry too much about detector-related systematics. As for the look-elsewhere effect, the experiments were already correcting for their trials factor when reporting their significance!
Okay, diatribe mode off. The short article advocating a deeper study of sterile neutrinos at CERN with electronic bubble chambers is worth reading, in my opinion, because of the clarity with which it explains the panorama of neutrino physics and the accessible level of the text. I will only allow myself to note that, four decades after the era when Rubbia ran from one continent to the other proposing experiments at a monthly rate, he is still doing just that: he is one of the strongest advocates of the muon collider, besides being behind the new enterprise described in the linked preprint, and maybe half a dozen other projects... The man has a nuclear reactor hidden somewhere in his chest!