Essays On Emergence Part II

Last time we examined Robert Batterman’s idea that the concept of emergence can be made more precise by the fact that emergent phenomena such as phase transitions can be described by models that include mathematical singularities (such as infinities). According to Batterman, the type of qualitative step that characterizes emergence is handled nicely by way of mathematical singularities, so that there is no need to invoke metaphysically suspect “higher organizing principles.” Still, emergence would remain a genuine case of ontological (not just epistemic) non-reducibility, thus contradicting fundamental reductionism.
This time I want to take a look at an earlier paper, Elena Castellani’s “Reductionism, Emergence, and Effective Field Theories,” dating from 2000 and available on arXiv (in the physics archive). She actually anticipates several of Batterman’s points, particularly his discussion of the role of renormalization group (RG) theory in understanding the concept of theory reduction.
Castellani starts with a brief recap of the recent history of “fundamental” physics, which she defines as “the physics concerned with the search for the ultimate constituents of the universe and the laws governing their behaviour and interactions.” This way of thinking of physics seemed to be spectacularly vindicated during the 1970s, with the establishment of the Standard Model and its account of the basic building blocks of the universe in terms of particles such as quarks.
Interestingly, according to Castellani, this picture of the role of fundamental physics began to be questioned — ironically — because of the refinement of quantum field theory, with the Standard Model coming to be understood as a type of effective field theory (EFT), “that is, the low-energy limit of a deeper underlying theory which may not even be a field theory.” She goes on to explain that, as a result, two positions have emerged: (a) there must be a more fundamental theory than the Standard Model, such as string theory, M-theory, or something like that. This is what Steven Weinberg calls “the final theory.” Or, (b) there is no final theory but rather a layered set of theories, each with its own interlocking domain of application. Roughly speaking, (a) is a reductionist position, (b) an anti-reductionist one.
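A quick gloss for readers who want the “low-energy limit” idea in symbols (this is the standard textbook schema for an effective field theory, not a formula from Castellani’s paper; the cutoff scale \(\Lambda\) and the coefficients \(c_i\) are just conventional notation): the effective theory is written as an expansion in operators suppressed by powers of the heavy scale at which the deeper theory takes over,

\[ \mathcal{L}_{\mathrm{eff}} \;=\; \mathcal{L}_{d \le 4} \;+\; \sum_i \frac{c_i}{\Lambda^{\,d_i - 4}}\, \mathcal{O}_i \,, \]

where each operator \(\mathcal{O}_i\) has mass dimension \(d_i > 4\) and is built only out of the light fields, while the dimensionless coefficients \(c_i\) package up whatever physics lives above \(\Lambda\). At energies \(E\) well below \(\Lambda\) the correction terms are suppressed by powers of \(E/\Lambda\), which is why an effective theory can work beautifully without committing you to any particular view of what the deeper theory is.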
One of the most helpful bits of Castellani’s paper, at least in my mind, is that she clearly distinguishes two questions about the idea of “fundamentality”: (i) is there, in fact, a fundamental theory (i.e., is (a) or (b) above true)? And (ii) what, philosophically, does it mean for a theory to be fundamental with respect to another?
The author is (properly, I think) agnostic about (i), and devotes the latter part of the paper to (ii). Here are her conclusions, verbatim:
“the EFT approach provides a level structure of theories, where the way in which a theory emerges from another ... is in principle describable by using the RG methods and the matching conditions at the boundary. ... [but] The EFT approach does not imply antireductionism, if antireductionism is grounded on the fact of emergence.”
In other words, the concept of effective field theories is a good way to articulate in what sense one theory is more fundamental than another, but it does not settle the issue of whether there is, in fact, a theory of everything, and, if so, what the nature of such a theory might be. The issue of emergence and reductionism remains pretty much untouched as a result.
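For the curious, the “matching conditions at the boundary” from the quote above can also be stated schematically (again in standard textbook terms rather than Castellani’s own notation, with the amplitude symbols below used purely for illustration): one demands that the effective theory reproduce the predictions of the deeper theory at the boundary scale,

\[ \mathcal{A}_{\mathrm{EFT}}(E)\,\Big|_{E=\Lambda} \;=\; \mathcal{A}_{\mathrm{full}}(E)\,\Big|_{E=\Lambda} \,, \]

which fixes the coefficients of the effective theory; the RG equations then evolve those coefficients down to the lower energies where the effective theory is actually used. This is the schematic sense in which one theory in the layered structure “emerges from” the one above it.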
So what are we left with, you might ask? With some interesting insights found in the middle part of Castellani’s paper (particularly section 2, for people who wish to go to the source).
A good one is that the discussion about the existence of a final theory and the concept of fundamental physics is far from purely academic, and far from having generated agreement even among physicists. Castellani brings us back to the famous debate in the late ‘80s and early ‘90s concerning the funding in the United States of the Superconducting Super Collider, which was eventually canceled despite valiant efforts by prominent physicists like Weinberg. As the author points out, a major argument for spending billions on the collider was precisely that high energy physics was presented as “fundamental” to all other physics (indeed, to all other science) and therefore worthy of the expenditure.
But not everyone agreed, even in the physics community. Solid-state physicist Philip Anderson was one of the dissenters, having published a highly influential article back in 1972 opposing “what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws [the reductionist hypothesis], then the only scientists who are studying anything really fundamental are those who are working on those laws.”
Interestingly, Anderson did not reject reductionism, but what he called “constructionism”: the idea that one can work one’s way up from fundamental laws to reconstruct the whole of the universe and its workings. His rejection of constructionism was based on his acceptance of emergent properties: “at each new level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other.” As Castellani summarizes: “Condensed matter physics [according to Anderson] is not applied elementary particle physics, chemistry is not applied condensed matter physics, and so on. The entities of [science] X are emergent in the sense that, although obedient to the laws of the more primitive level [studied by science] Y (according to the reductionist hypothesis), they are not conceptually consequent from that level (contrary to the constructionist hypothesis).” Indeed, Anderson went so far as to write the following provocative statement: “the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for dreams of a final theory of everything.
Another famous objector to Weinberg’s “fundamental physics is all there is” approach was biologist Ernst Mayr, who traded barbs with Weinberg in a famous exchange in the pages of Nature in 1987-88. Weinberg maintained that “there are arrows of scientific explanation that ... seem to converge to a common source ... [And particle physics deals with nature] on a level closer to the source of the arrows of explanation than other areas of physics,” and is therefore where the real action is. Mayr would have none of it, charging Weinberg with bad and confused reductionism. In the philosophical literature there are, broadly speaking, three types of reductionism: constitutive (the method of studying something by looking at its components), theoretical (theories at one level can be shown to be special cases of theories at another level), and explanatory (knowledge of the lower level is sufficient to explain the higher one). Both Weinberg and Mayr agree that explanatory reductionism is out of the question, but for Mayr so is theoretical reductionism, because it is inconsistent with emergence, which in turn was, for Mayr the biologist, an obvious fact about which fundamental physics is simply silent.
To recap the game so far, Castellani’s paper is more neutral about the reductionist/anti-reductionist debate and the role of emergence in it than the paper by Batterman which we examined last time. But it was useful to me because it provides some intriguing historical and even sociological background to the controversy, and also because it clearly separates the issue of effective (i.e., approximate) field theories from the question of whether there actually is a fundamental theory and, if so, what its nature might be. Stay tuned for part III...
Originally appeared on Rationally Speaking, October 15th.