Toponium Found By CMS!

The highest-mass subnuclear particle ever observed used to be the top quark. Measured for the...

The Problem With Peer Review

In a world where misinformation, deliberate or accidental, reigns supreme; in a world where lies...

Interna

In the past few years my activities on this site - but I would say more in general, as the same...

The Probability Density Function: A Known Unknown

Perhaps the most important thing to get right from the start, in most statistical problems, is...

Tommaso Dorigo

Tommaso Dorigo is an experimental particle physicist, who works for the INFN at the University of Padova, and collaborates with the CMS and the SWGO experiments. He is the president of the...

Yes, I know - I have touched on this topic already a couple of times in this blog, so you have the right to be bored and surf away. I am bound to talk about this now and then anyway, though, because this is the focus of my research these days. 
Recently I was on the island of Elba (a wonderful place) for a conference on advanced detectors for fundamental physics, and I presented a poster there on the topic of artificial-intelligence-assisted design of instruments for fundamental physics. Below is the poster (I hope it's readable in this compressed version - if you really want a better picture, just ask).



Neural networks are everywhere today. They are used to drive vehicles, classify images, translate texts, determine your shopping preferences, or find the quickest route to the supermarket. Their power at making sense of small and large datasets alike is enabling great progress in a number of areas of human knowledge, too. Yet there is nothing magical about them, and in fact what makes them powerful is something that has been around for centuries: differential calculus.
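That claim is easy to make concrete in a few lines of code. Here is a minimal sketch (my own toy illustration, with purely illustrative network size and learning rate, not code from any real analysis) of a tiny neural network fit to sin(x) by gradient descent, where the "learning" is nothing but the chain rule of differential calculus applied over and over:

```python
import numpy as np

# Toy example: fit y = sin(x) with a 1 -> 16 -> 1 network trained by
# plain gradient descent. All sizes and the learning rate are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros((1, 1))
lr = 0.05

for step in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)           # hidden-layer activations
    pred = h @ W2 + b2                 # network output
    loss = np.mean((pred - y) ** 2)    # mean squared error

    # Backward pass: nothing but the chain rule (differential calculus)
    d_pred = 2 * (pred - y) / len(x)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = (d_pred @ W2.T) * (1 - h ** 2)   # derivative of tanh
    dW1 = x.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Gradient-descent update of the parameters
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean squared error: {loss:.4f}")
```

Every deep-learning library in use today does essentially this, only with automatic differentiation handling the bookkeeping of derivatives for networks with millions or billions of parameters.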
When I explain to the public (in this blog, or at public conferences or schools) how the Large Hadron Collider operates, I have to gloss over a lot of detail that is unnecessary for grasping the important concepts, which in turn enable other discussions of interesting subnuclear physics. This is good practice, and it also saves me from having to study details I have forgotten along the way - they say that what you are left with when you forget everything is culture, and I tend to agree. I have a good culture in particle physics, and that's all I need to do some science popularization ;-)
As the twenty-three regular readers of this blog know [(c) A. Manzoni], in recent times I have moved the main focus of my research to advanced applications of deep learning to fundamental science. That does not mean that I am not continuing to participate in the CMS experiment at the CERN Large Hadron Collider - that remains a central part of my activities; but it does mean that what remains of my brain functionality is mostly invested in thinking about future applications of today's and tomorrow's computer science innovations.
I was happy to meet Giorgio Bellettini at the Pisa Meeting on Advanced Detectors this week, and I thought I would write here a note about him. At 89 years of age Giorgio still has all his wits about him, and he is still as compelling and unstoppable as anybody who has met him will recall. It is a real pleasure to see that he still attends all sessions, always curious to hear the latest developments in detector design and performance.

Two recent analyses by the CMS experiment stand out, in my opinion, for their suggestive results. They both produce evidence, at the two-to-three-sigma level, of the signals they were after: this means that the probability of the observed data under the no-signal hypothesis is between a few percent and one in a thousand, so nothing really unmistakable. But the origins of the observed effects are probably of opposite nature - one is a genuine signal that is slowly but surely sticking its head up as we improve our analysis techniques and collect more data, while the other is a fluctuation that we bumped into.
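For readers who want to see where those numbers come from: significances in particle physics are usually quoted as the one-sided Gaussian tail probability of the observed data under the background-only hypothesis. Here is a quick sketch of the conversion (my own illustration, not taken from the CMS papers):

```python
from scipy.stats import norm

# Convert a significance in "sigma" units into the one-sided tail probability
# of the no-signal (background-only) hypothesis.
for z in (2.0, 2.5, 3.0):
    p = norm.sf(z)  # survival function: P(X > z) for a standard Gaussian
    print(f"{z:.1f} sigma  ->  p-value = {p:.2g}  (about 1 in {1/p:,.0f})")

# 2 sigma corresponds to p ~ 0.023 (a few percent), while 3 sigma corresponds
# to p ~ 0.0013 (about one in a thousand) - the range quoted above.
```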