Academics badly need objective, meaningful metrics to judge the impact their publications have on their field of expertise. Nowadays any regular Joe can show a long list of authored papers in his CV, and if you are trying to rank Joe among candidates for tenure, or just for a research job at a university, it is impossible to objectively assess the relative merits of each and every one of those papers.
"So let's appeal to the number of citations that those papers get!" has been the general feeling on the matter. Of course! Scientific articles that pass by unnoticed should not count; while articles that everybody read and cite in their future works should be a good indicator of a researcher's impact. Enter the Hirsch index - or H-index: that is the number of articles N, among those that an individual has authored, which have at least a number of citations N.
The Hirsch index is smart, but it is not too smart. It is of course field-specific, like many other metrics: if you work in particle physics for a decade or so in a large experiment, you will by default accumulate a Hirsch index of 100 or so, as you get to sign all the articles of the collaboration, and those get cited (or self-cited) quite a lot by the community. Conversely, if your field is the mating habits of the dung beetle, you will likely be ready to kill to reach an H-index of three or four. Since most comparisons of academic impact are made within a field, though, that is not so much of a worry.
Another flaw of the H-index is that it encourages a regime whereby colleagues cite each other in vicious circles. Theoreticians in particle physics know the phenomenon well: they receive a couple of emails a week from colleagues begging to be cited in the published version of their preprints. They oblige, knowing they will soon get the same treatment. But this skews the statistic and makes it less useful as a true measure of the real impact an academic has on his or her own field.
And then there is the bandwagon effect of "fashionable topics". If you were around at CERN two years ago, you know all too well what I mean: every time a spurious effect is detected by an experiment, theorists jump at it and try to put forth interpretations. Mind you, it is all very fine that this happens, and lots of good ideas flourish this way. But the 700 scientific papers produced in the wake of a 3-sigma effect in the mass distribution of photon pairs found by ATLAS at the end of 2015 are a bit too much. Basically every HEP theorist around got the message: publish a paper on that thing, and it will receive hundreds of citations. Publish seven (as some colleagues did) and your H-index will progress accordingly, no matter whether your articles contain garbage or good ideas.
If we look back at these fashions, we see peaks of popularity: ideas and articles that collected hundreds of citations. For what? For being the first to put forth a beautiful, wrong idea. Those articles will weigh a lot in one's CV, but I argue they should weigh much less than articles that dealt with beautiful, correct ideas. Do you get my point?
I therefore propose the T-index as a more refined metric for researchers. It is computed as the following sum:
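In symbols, with $\Delta t_i$ the weight of the $i$-th citation as spelled out just below:

$$T \;=\; \sum_{i \,\in\, \text{citations}} \Delta t_i$$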
The sum runs over all citations of your articles: if an article is not cited, it does not exist. The weight of each citation is the time $\Delta t$ elapsed from the original publication to the publication of the article citing it. We may agree to count time in months for this purpose.
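In code, a minimal sketch of the computation might look as follows; the data layout and function names are my own invention, purely for illustration, and I assume we know the publication date of each paper and of each paper citing it.

```python
# Each citation contributes the number of months elapsed between the
# cited article's publication and the citing article's publication.
from datetime import date

def months_between(earlier, later):
    """Whole months from `earlier` to `later`."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def t_index(papers):
    """`papers`: list of (publication_date, list_of_citing_dates) pairs."""
    return sum(
        months_between(published, citing)
        for published, citing_dates in papers
        for citing in citing_dates
    )

# A paper still cited ten years on outweighs one cited three times
# within months of publication and then forgotten.
slow_burner = [(date(2010, 1, 1), [date(2012, 6, 1), date(2020, 3, 1)])]
feeding_frenzy = [(date(2010, 1, 1), [date(2010, 4, 1)] * 3)]
print(t_index(slow_burner))     # 29 + 122 = 151 months
print(t_index(feeding_frenzy))  # 3 * 3 = 9 months
```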
As you can see, the T-index has the dimensions of time. That is not a crucial observation in itself, but it does capture the point: articles that stood the test of time, articles which people still remember and cite long after publication, should count much, much more than articles written in the middle of a feeding frenzy whose citation curve quickly decays.
You will also observe, if you think about particle physics, that such a metric suits both theorists and experimentalists. For theorists, publishing something that colleagues will still remember in ten years is a big achievement. For experimentalists it also makes a lot of sense, although some of my colleagues will not like the fact that searches for new physics are bound to be made unimportant by more recent results (where "recent" automatically means "performed with larger datasets", given the typical way of operation of modern-day accelerators), and so will earn little. Setting an upper limit on strambonium decays to whatchamacallitino pairs will not be very valuable for bumping up the T-index, as people will soon stop citing it once it is superseded by more recent, stringent results, while the measurement of a Standard Model parameter is likely to retain its value for many more years.
In the end, no single number can picture the impact of a researcher in his or her field; on this, I think we all agree. But a vector of numbers could probably say something, if each entry tells the story from a different angle. So the total number of authored papers, presented together with the number of papers weighted by the inverse number of co-authors, and along with the H-index and the T-index, is bound to be a good set, IMHO.
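As a sketch, such a vector could be assembled from the quantities above, reusing the h_index and t_index functions sketched earlier; whether the fractional count should divide by the number of authors or of co-authors is a detail I leave open.

```python
# `citations`: per-paper citation counts; `author_counts`: number of
# authors of each paper (each paper weighted by 1/authors here);
# `papers`: the dated citation records consumed by t_index above.
def metric_vector(citations, author_counts, papers):
    fractional_count = sum(1.0 / n for n in author_counts)
    return (len(citations), fractional_count,
            h_index(citations), t_index(papers))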
Please feel free to criticize, insult, or troll in the comments thread below. And this will be even more appreciated if you do it a long time after the original publication date!! Thanks!
----
Tommaso Dorigo is an experimental particle physicist who works for the INFN at the University of Padova, and collaborates with the CMS experiment at the CERN LHC. He coordinates the European network AMVA4NewPhysics as well as research in accelerator-based physics for INFN-Padova, and is an editor of the journal Reviews in Physics. In 2016 Dorigo published the book “Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab”. You can get a copy of the book on Amazon.