A Sino-Italian workshop on Applied Statistics was held today at the Department of Statistical Sciences of the University of Padova. The organizers were Alessandra Brazzale and Alessandra Salvan from the Department of Statistical Sciences, and Giorgio Picci from the "Confucius Institute".
The workshop featured ten presentations, four of which were given by Chinese colleagues. As I have made some friends in that Department thanks to the AMVA4NewPhysics network, I decided it was a good idea to follow at least some of the talks. This gave me a chance to exchange a few words with some of them, and to talk to the student my network hired at the University of Padova, Greg Kotkowski.
Unfortunately, because of other obligations my attendance was limited to the first session in the morning, up to and including the coffee break (where, alas, a clinical exam I must take prevented me from enjoying the rich pastry!). The three talks I could attend were by Lorenzo Finesso, Giorgio Picci, and Guido Masarotto.
Finesso talked about "Factor analysis models via I-Divergence Optimization", Picci discussed "Modeling complex systems by generalized factor analysis", and Masarotto's presentation was titled "Phase-I distribution-free analysis of multivariate data".
I must admit I did not understand very much of the first two presentations, because of my limited knowledge of factor analysis in general, and in particular because of my scarce understanding of the mathematics used in the presentations. But the talks were still interesting to follow: I always like to challenge myself to puzzle things out given insufficient information. After all, isn't that what we ask our machine-learning algorithms to do for us?
The third talk was more interesting to me, as I could follow it more closely. The focus was on monitoring multivariate data in search of location shifts and other alterations that may occur in time series. This is relevant to particle physics searches, of course, so that was an added reason to try and follow the presentation.
Masarotto's talk was clear and rich with simulation examples of his technique. He showed power curves as a function of the properties of the alterations inserted in the data. He considered global shifts in the average of one of the variables (modeled by a step function), spikes, transient shifts (a temporary variation of the average), multiple step-function shifts, as well as linear shifts.
I was a bit surprised to see that the power curves were almost equal for global shifts and multiple shifts, but different for transient ones. This led me to ask the speaker the reason for it. It turned out to be a feature of the kind of simulated shifts he had considered in his power studies.
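Out of curiosity, here is a toy sketch of why the type of shift matters. This is emphatically not Masarotto's procedure (his is distribution-free and far more sophisticated); it is just a naive global-mean test on made-up Gaussian data, with all settings hypothetical, showing that a persistent step shift is easier to catch than a transient one of the same size:

```python
# Toy illustration (not Masarotto's actual method): simulate a time series
# with either a persistent step shift or a transient shift, test whether the
# overall sample mean differs from zero, and estimate the power of that
# naive test by Monte Carlo.
import numpy as np

rng = np.random.default_rng(42)
n, n_trials = 100, 2000          # series length, Monte Carlo repetitions

def power(shift_profile):
    """Fraction of trials in which |mean| exceeds 2 standard errors."""
    rejections = 0
    for _ in range(n_trials):
        x = rng.normal(0.0, 1.0, n) + shift_profile
        se = x.std(ddof=1) / np.sqrt(n)     # standard error of the mean
        if abs(x.mean()) > 2 * se:
            rejections += 1
    return rejections / n_trials

delta = 0.5                                 # size of the inserted shift
t = np.arange(n)
step      = np.where(t >= 50, delta, 0.0)             # persistent step at t=50
transient = np.where((t >= 50) & (t < 60), delta, 0.0)  # shift lasting 10 steps

print(f"power, step shift:      {power(step):.2f}")
print(f"power, transient shift: {power(transient):.2f}")
```

As expected, the transient alteration dilutes into the global average and the naive test loses most of its power against it.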
By the way, do you know what the power of a test is? In hypothesis testing, you usually have to distinguish an "alternative hypothesis" from a "null hypothesis". The latter says that the data conform to some model, while the former describes some departure from it. You test the hypothesis with some data and a suitable "test statistic".
If, for instance, you tested the hypothesis that a variable has zero mean (the "null hypothesis") versus the multiple alternative that the mean is non-null (a composite "alternative hypothesis", which depends on the true non-null mean), one test statistic could be the sample average of your data.
You could decide to accept the alternative hypothesis if you found your average <x> to have a modulus larger than 2σ_<x>, where we denote by σ_<x> the error on the mean. This can be estimated as the RMS of the data divided by sqrt(n-1), where n is the number of observations in the data.
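In code, the recipe above might look like the following minimal sketch; the data points are invented for illustration:

```python
# Minimal sketch of the test just described: reject the null (zero mean)
# when the sample average exceeds twice its estimated standard error.
# The observations below are hypothetical.
import numpy as np

x = np.array([0.8, 1.2, -0.3, 0.9, 1.5, 0.4, 1.1, 0.7])
n = len(x)
mean = x.mean()
sigma_mean = x.std(ddof=0) / np.sqrt(n - 1)   # RMS / sqrt(n-1), as in the text

if abs(mean) > 2 * sigma_mean:
    print(f"mean = {mean:.2f} +- {sigma_mean:.2f}: reject the null hypothesis")
else:
    print(f"mean = {mean:.2f} +- {sigma_mean:.2f}: consistent with zero mean")
```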
Assuming a Gaussian distribution, your test will lead you to discard the null hypothesis in favor of the alternative, when the null is in fact true, about 5% of the time. This is called a "type-I error", whose rate is usually labeled α. The "type-II error" rate (β) is conversely the chance that you accept the null hypothesis when in fact the alternative is true.
If the alternative is true, and the true mean is y, the chance that you accept it as true depends on how different y is from zero. The "power" of your test is defined as 1-β, and it can then be shown to be a curve with a minimum of about 0.05 at y=0, rapidly rising towards one as y gets larger than, say, 3 or 4 in units of the error on the sample mean, σ_<x>. That is because if the true mean is so large, it is unlikely that we measure the average to be smaller than 2σ_<x>.
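For the Gaussian case just described, the power curve can be computed in closed form; the short sketch below does so with scipy, with the true mean y expressed in units of σ_<x>:

```python
# Power of the two-sided 2-sigma test as a function of the true mean y
# (in units of the standard error of the mean), assuming Gaussian errors:
# power(y) = P(x_bar > 2) + P(x_bar < -2) when x_bar ~ N(y, 1).
from scipy.stats import norm

def power_curve(y):
    return norm.sf(2 - y) + norm.cdf(-2 - y)

for y in [0, 1, 2, 3, 4]:
    print(f"true mean = {y} sigma: power = {power_curve(y):.3f}")
# y = 0 gives ~0.046 (the type-I error rate of a 2-sigma cut);
# by y = 4 the power has risen to ~0.98.
```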
Power curves are quite useful to understand whether your test is good at distinguishing hypotheses. But there exists no recipe for choosing what the type-I error rate should be in your case, and consequently what power your test has (for, as it is simple to realize, a smaller type-I error rate always implies a larger type-II error rate, and so a smaller power).
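To see that trade-off numerically, here is a quick computation, again assuming Gaussian errors and picking (arbitrarily, for illustration) a true mean of three standard errors: tightening α visibly lowers the power of the same two-sided test.

```python
# The alpha-beta trade-off for the two-sided mean test, Gaussian errors,
# true mean fixed at y = 3 standard errors (an arbitrary choice).
from scipy.stats import norm

y = 3.0
for alpha in [0.10, 0.05, 0.01, 0.001]:
    cut = norm.isf(alpha / 2)                    # two-sided critical value
    power = norm.sf(cut - y) + norm.cdf(-cut - y)
    print(f"alpha = {alpha:.3f}: cut = {cut:.2f} sigma, power = {power:.3f}")
```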
The topic of where to live in the alpha-versus-beta plane is a long one, and deserves another post... I will stop here for today.