The 2024 Nobel Prize for Physics was awarded to John Hopfield and Geoffrey Hinton, whom many have called “the godfather of AI”. The award seems apt for the time we are in. In its press release, the Royal Swedish Academy of Sciences explained its decision to award the prize to the pair, saying that they


“used tools from physics to develop methods that are the foundation of today’s powerful machine learning. John Hopfield created an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures.”
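To make the press release’s description concrete, here is a minimal, illustrative sketch of a Hopfield-style associative memory in Python with NumPy. This is not the laureates’ code; it only shows the basic idea: patterns are stored in a weight matrix via a Hebbian rule, and a corrupted cue is iteratively nudged back toward the nearest stored pattern.

```python
import numpy as np

# Illustrative Hopfield-style associative memory (a sketch, not the laureates' code).

def train(patterns):
    """Build a weight matrix from a list of +/-1 pattern vectors (Hebbian rule)."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Asynchronously update units until the network settles on a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(state.size):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store a simple pattern, then recover it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train([pattern])
noisy = pattern.copy()
noisy[:2] *= -1                      # flip two bits
print(recall(W, noisy))              # typically converges back to `pattern`
```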

Questions About the Award

There has been some dissent about the Nobel Prize going, essentially, to AI researchers. Let’s call this the “But is this physics?” reaction. In a piece titled “Sick of AI hype? I have some bad news. AI is here to stay and the Nobels know it.”, Vox quotes several scientists who expressed bemusement that the award was given for physics. AI researcher Andrew Lansen remarked, “Initially, I was happy to see them recognised with such a prestigious award, but once I read further and saw it was for Physics, I was a bit confused. I think it is more accurate to say their methods may have been inspired by physics research.” Meanwhile, the physicist Jonathan Pritchard tweeted, “I’m speechless. I like ML [machine learning] and ANN [artificial neural networks] as much as the next person, but hard to see that this is a Physics discovery. Guess the Nobel got hit by AI hype.” It doesn’t help that the Nobel Prize for Chemistry was awarded in part to Google DeepMind co-founder Demis Hassabis and his colleague John Jumper for AlphaFold 2, an ML protein-structure predictor. Nature reacted to these wins by saying that “Computer science seemed to be completing its Nobel takeover”. What these researchers seem to have been reacting to is a steady takeover of science by computer science; the prizes were a declaration that computer-science research into other fields is serious and important work.

Yes, It’s Science, a New Kind of Science

In many ways, the awarding of the Nobel Prizes for Physics and Chemistry to AI researchers shows the limits of how we see science. AI and ML are multidisciplinary, drawing on insights and tools from physics, mathematics, neuroscience, computer science and cognitive science; it is impossible to say that they belong to any one science. They are agnostic tools. Today, you will find biology labs with AI and ML researchers who know nothing about biology. Renaissance Technologies, arguably the world’s most successful investment firm ever, relies on AI and ML and hires scientists and mathematicians rather than anyone with a finance background. Yet, if AI and ML are agnostic, the products of that work solve concrete problems in specific domains. It is right for these scientists to win their awards in physics and chemistry, not because they are physicists and chemists, but because their work most impacts physics and chemistry.


There is some anxiety about this shift because of fears that traditional scientists are being pushed aside and may no longer be needed. I think this is wrong. Garry Kasparov, the great chess champion, has often touted a concept he calls the “centaur”: a team of human beings and computers, which he believes is superior to either machines or humans working alone. AI presents a challenge to what we think of as doing science because of its inherent opacity: science has progressed on the understanding that the test of a theory is its ability to make novel predictions and offer explanations, yet AI achieves this without recourse to theoretical postulates, laws or hypotheses, which makes theoretical evaluation difficult. Scientists are therefore forced to trade off the predictive power of AI against the explanatory power they desire, since there is often no way to recover the reasoning behind an algorithm’s output.


What these awards do is show what is possible with AI, but I think they also highlight the increasing importance of centaurs. We need traditional scientists to make sense of these novel predictions and to place them in an appropriate theoretical framework. Without scientists, all we have are results in need of interpretation. We need to see AI not as an adversary but as a force multiplier: something that produces results no human being would be capable of, yet still needs a human scientist to place them in a theoretical context. The great scientists of the future will have a profound understanding of scientific theory and an ability to evaluate experimental results. The actual work of experimentation will increasingly go to machines.