The text below is the sixth and last part of what could have become "Chapter 13" of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab", which I published in 2016. For part 1 see here; for part 2 see here; for part 3 see here; for part 4 see here; for part 5 see here.


Another anomaly bites the dust

If the search for superjets in Run 2 performed by the Illinois group left a few questions on the table, it at least produced an answer relatively quickly. In contrast, it took until 2007 to obtain an answer on the nature of the 7.2 GeV bump in the invariant mass spectrum of muon pairs observed by the Frascati group in Run 1. That result was obtained by the same authors.

Other things being equal, a signal of the decay of a new particle with a significance of barely two standard deviations in Run 1 data would become a towering peak, with significance beyond discovery level (i.e., above five standard deviations), in a ten times larger dataset. This is because the statistical uncertainty on an event count scales with the square root of the count itself, as events come in as a Poisson process. I will spend a few lines describing why that is so.

The search for a new process is often cast as a simple "counting experiment", where one expects data to be produced by background processes at a certain known rate, say B events if the data size is L=100 inverse picobarns, and a new process may add to that a signal rate S. Let us say for simplicity that B is 100 events (one per inverse picobarn): if we observe 120 events we have an excess of 20, but that observation is not very striking, as it is not very unlikely for B to have fluctuated up to 120. The typical fluctuation of 100 events due to a Poisson process (which is one where events come in at random with a fixed average rate) is the square root of 100, that is, 10. So 120 events may be due to a 2-sigma fluctuation of the background, and the observation is not necessarily proof of the existence of a signal contributing an extra rate of events in our data. In statistical parlance, the probability to observe 120 or more events, under the hypothesis that they come from a process with an average rate of 100, is about 3%: not small enough to rule out the background hypothesis, hence the alternative hypothesis (that there is also a signal in the data) need not be accepted as true.
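For the curious reader, the tail probability quoted above is easy to reproduce. Below is a minimal Python sketch (my own illustration, not part of any CDF analysis code) that uses scipy to compute the chance of observing 120 or more events from a Poisson process with a mean of 100:

```python
from scipy.stats import poisson

B = 100      # expected background events
N_obs = 120  # observed events

# Probability of observing N_obs or more events from a Poisson
# process with mean B. poisson.sf(k, mu) returns P(X > k), so we
# evaluate it at N_obs - 1 to get P(X >= N_obs).
p_value = poisson.sf(N_obs - 1, B)
print(f"P(N >= {N_obs} | B = {B}) = {p_value:.3f}")  # about 0.03
```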

Now let us imagine that we sit for ten times longer, and collect 1000 inverse picobarns of data. We duly expect to have collected 1000 background events there. If those 120 events of the previous dataset were really due to 20 signal events on top of 100 background events, we would now see 1200 events in total. The uncertainty on the background prediction, 1000 events, is now the square root of 1000, which is about 31. The 200 events in excess are now much harder to explain with a statistical fluctuation of the background. The test of the background hypothesis now says that we saw 1200 events and expected 1000 ± 31, and the probability to observe 1200 or more events from a process producing on average 1000 is less than one part in a hundred million: this far exceeds the "five-sigma" criterion (in fact, 200 divided by 31 is well above 6 sigma), and we happily reject the background-only hypothesis. The added statistical power of the ten times larger dataset allows a conclusion to be drawn: a signal must be contributing there. Unless, of course, systematic uncertainties are playing tricks on us.
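Repeating the exercise for the ten-times-larger dataset shows the gain in statistical power. The sketch below (again just an illustration of the counting-experiment logic) also converts the tail probability into the equivalent number of Gaussian standard deviations:

```python
from scipy.stats import norm, poisson

B = 1000      # expected background events
N_obs = 1200  # observed events

p_value = poisson.sf(N_obs - 1, B)  # P(N >= 1200 | mean 1000)
z = norm.isf(p_value)               # one-sided Gaussian-equivalent significance
print(f"p-value = {p_value:.1e}, significance = {z:.1f} sigma")
# p-value of order 1e-9: comfortably beyond the five-sigma criterion
```

Note that the exact Poisson computation yields a significance slightly below the naive 200/31 estimate, as the upper tail of the Poisson distribution is somewhat heavier than its Gaussian approximation; the conclusion is the same either way.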

The Run 2 dimuon search

The data analysis required to search for the new particle is straightforward: one selects events containing a pair of good-quality opposite-sign muons, reduces backgrounds with a few simple kinematical requirements, computes the invariant mass of the muon pair, produces a histogram of that quantity, and fits it with a function capable of modelling a Gaussian bump over a smooth background. The fit returns the amount of signal, if one is present; otherwise, standard statistical methods allow one to compute an upper limit on that quantity from the fit parameters and uncertainties. With simulations one may then turn that numerical datum into a cross section (if a signal is there) or an upper limit on the signal cross section (if none can be claimed). As easy as pie.
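To make the procedure concrete, here is a self-contained Python sketch of such a "bump hunt" on a toy dataset. All the numbers below (event counts, background shape, resolution) are invented for illustration; this is a sketch of the fitting technique, not the actual CDF analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# Toy dimuon invariant-mass sample (GeV): a smooth falling background
# plus a small Gaussian bump at 7.2 GeV. All numbers are made up.
bkg = 5.0 + rng.exponential(scale=5.0, size=20000)  # falling spectrum above 5 GeV
sig = rng.normal(loc=7.2, scale=0.1, size=200)      # hypothetical narrow resonance
masses = np.concatenate([bkg, sig])

# Histogram the mass spectrum between 5 and 10 GeV
counts, edges = np.histogram(masses, bins=100, range=(5.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit model: exponential background plus Gaussian signal
def model(m, b0, b1, s_amp, s_mean, s_sigma):
    return b0 * np.exp(-b1 * m) + s_amp * np.exp(-0.5 * ((m - s_mean) / s_sigma) ** 2)

p0 = [550.0, 0.2, 40.0, 7.2, 0.1]  # initial guesses for the fit parameters
popt, pcov = curve_fit(model, centers, counts, p0=p0,
                       sigma=np.sqrt(np.maximum(counts, 1)), absolute_sigma=True)

s_amp, s_amp_err = popt[2], np.sqrt(pcov[2, 2])
print(f"Fitted signal amplitude: {s_amp:.1f} +/- {s_amp_err:.1f} events per bin")
```

From the fitted amplitude and its uncertainty one can either quote a signal yield or, if the amplitude is compatible with zero, derive an upper limit on it, which simulations then translate into a cross section or a cross section limit, as described above.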


Histogram of the reconstructed mass of opposite-sign muon pairs seen in data corresponding to a luminosity of 630 inverse picobarns of 1.96-TeV proton-antiproton collisions recorded by the CDF experiment in Run 2. The three Upsilon resonances are clearly visible on top of a smooth non-resonant background.

Above you can see the mass distribution which the Frascati group extracted from a dataset corresponding to an integrated luminosity (see Chapter 2) of 630 inverse picobarns of Run 2 data. No bump is apparent anywhere in the spectrum below 9 GeV, while above it the beautiful signals of the Y(1S), Y(2S), and Y(3S), the three "bottomonium" hadrons, are unmistakable. Giromini and his group performed a careful scan of the mass region between 6 and 9 GeV. The results of the analysis were unquestionable: there was no signal. The resulting cross section limits, computed as a function of mass, excluded the production of a narrow resonance such as a bound state of two scalar quarks, which the same authors had hypothesized seven years earlier. The Run 1 dimuon bump could not be anything but a statistical fluctuation: case closed.

With the final word on the dimuon bump now carved in stone, compounded by the absence of any excess of superjets in Run 2 as certified by the Illinois group, and with Giromini apparently busy with precision measurements of innocuous B-physics observables, the fears of those CDF collaborators who had objected to the Italian's 2003 proposal to re-join CDF II started to appear ill-founded. However, a surprise was in store for everybody.