Here is an interesting statistic: if we multiply the (approximate) number of computers currently present on planet Earth by the (approximate) number of transistors contained in those computers, we get 10^18, which is three orders of magnitude larger than the number of synapses in a typical human brain.
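For concreteness, here is a minimal back-of-the-envelope sketch of that arithmetic in Python. The individual figures below (roughly 10^9 computers, roughly 10^9 transistors per machine, and roughly 10^15 synapses per brain) are assumed round numbers chosen to reproduce the totals mentioned above, not figures given in the Slate piece.

```python
import math

# Assumed round numbers (not from the Slate article): ~10^9 computers,
# ~10^9 transistors per computer, ~10^15 synapses in a human brain.
computers = 1e9
transistors_per_computer = 1e9
synapses_per_brain = 1e15

total_transistors = computers * transistors_per_computer      # ~10^18
gap = math.log10(total_transistors / synapses_per_brain)      # orders of magnitude

print(f"total transistors ~ 10^{math.log10(total_transistors):.0f}")
print(f"gap vs. synapses  ~ {gap:.0f} orders of magnitude")
```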
That statistic prompted Slate magazine’s Dan Falk to ask whether the Internet is about to “wake up,” i.e., achieve something similar to human consciousness.
He sought answers from neuroscientist Christof Koch, science-fiction writer Robert Sawyer, philosopher Dan Dennett and cosmologist
Sean Carroll. I think it’s worth commenting on what three of these four had to say about the question (I will skip Sawyer, partly because what he said to Falk was along the lines of Koch’s response, partly because I think sci-fi writers are creatively interesting, but do not have actual expertise in the matter at hand).
Koch thinks that the awakening of the Internet is a serious possibility, basing his judgment on the degree of complexity of the computer network (hence the comparison between the number of transistors and the number of synaptic connections mentioned above). Koch realizes that brains and computer networks are made of entirely different things, but says that that’s not an obstacle to consciousness as long as “the level of complexity is great enough.” I have always found that to be a strange argument, as popular as it is among some scientists and a number of philosophers. If complexity is all it takes, then shouldn’t ecosystems be conscious? (And before you ask, no, I don’t believe in the so-called Gaia hypothesis, which I consider a piece of New Age-y fluff.)
In the interview, Koch continued, “certainly by any measure [the Internet is] a very, very complex system. Could it be conscious? In principle, yes it can.” And, pray, which principle would that be? I have begun to notice that a number of people prone to speculation at the border between science and science fiction, or between science and metaphysics, are quick to invoke the “in principle” argument. When pressed, though, they do not seem able to articulate exactly which principle they are referring to. Rather, the phrase seems to be meant to indicate something along the lines of “I can’t think of a reason why not,” which at best is an argument from incredulity.
Koch went on speculating anyway: “Even today it might ‘feel like something’ to be the Internet,” he said, without a shred of evidence or even a suggestion of how one could possibly know that. He even commented on the possible “psychology” of the ‘net: “It may not have any of the survival instincts that we have ... It did not evolve in a world ‘red in tooth and claw,’ to use Schopenhauer’s famous expression.” Actually, that wasn’t Schopenhauer’s expression (the phrase traces back to a line in Alfred, Lord Tennyson’s poem “In Memoriam A.H.H.,” published in 1850), but at least we have an admission of the fact that psychologies are traits that evolved.
And talk about wild speculation: in the same interview Koch told Slate that he thinks consciousness is “a fundamental property of the universe,” on par with energy, mass and space. Now let’s remember that we have — so far — precisely one known example of a conscious species in the entire universe. A rather flimsy basis on which to build claims of necessity on a cosmic scale, no?
Dennett, to his credit, was much more cautious than Koch in the interview, highlighting the fact that the architecture of the Internet is very different from that of the human brain. It would seem like an obvious point, but I guess it’s worth underscoring: even on a functionalist view of the mind-brain relationship, it can’t be just about overall complexity; it has to be a particular type of complexity. Still, I don’t think Dennett distanced himself enough from Koch’s optimism:
“I agree with Koch that the Internet has the potential to serve as the physical basis for a planetary mind — it’s the right kind of stuff with the right sort of connectivity ... [But the difference in architecture] makes it unlikely in the extreme that it would have any sort of consciousness.”
The right kind of stuff with the right sort of connectivity? How so? According to which well-established principle of neuroscience or philosophy of mind? When we talk about “stuff” in this context we need to be careful. Either Dennett doesn’t think that the substrate matters — in which case there can’t be any talk of right or wrong stuff — or he thinks it does. In the latter case, we need positive arguments for why replacing biologically functional carbon-based connections with silicon-based ones would retain the functionality of the system. I am agnostic on this point, but one cannot simply assume that to be the case.
More broadly, I am inclined to think that the substrate does, in fact, matter, though there may be a variety of substrates that would do the job (if they are connected in the right way). My position stems from a degree of skepticism about the idea that minding is just a type of computing, analogous to what goes on inside electronic machines. Yes, if one defines “computing” very broadly (in terms, for instance, of universal Turing machines), then minding is a type of computing. But so is pretty much everything else in the universe, which means that the concept isn’t particularly useful for the problem at hand.
In other writings I have mentioned an analogy due to John Searle (he of the Chinese room thought experiment) between consciousness as a biological process and photosynthesis. One can indeed simulate every single reaction that takes place during photosynthesis, all the way down to the quantum effects regulating electron transport. But at the end of the simulation one doesn’t get the thing that biological organisms get out of photosynthesis: sugar. That’s because there is an important distinction between a physical system and a simulation of a physical system.
My experience has been, however, that a number of people don’t find Searle’s analogy compelling (usually because they are trapped in the “it’s a computation” mindset, apparently without realizing that photosynthesis also is “computable”), so let’s try another one. How about life itself? I am no vitalist, of course, but I do think there is a qualitative difference between animate and inanimate systems, which is the whole problem that people interested in the origin of life are focused on solving (and haven’t solved yet). Now, we know enough about chemistry and biochemistry to be pretty confident that life as we know it simply could not have evolved by using radically different chemical substrates (say, inert gases, to take the extreme example) instead of carbon. That’s because carbon has a number of unusual (chemically speaking) characteristics that make it extremely versatile for use by biological systems. It may be that life could have evolved using different chemistries (silicon is the alternative frequently brought up), but there is ample room for skepticism based on our knowledge of the much more stringent limitations of non-carbon chemistry.
It is in this non-mysterian sense that, I think, substrate does matter to every biological phenomenon. And since consciousness is — until proven otherwise — a biological phenomenon, I don’t see why it would be an exception. To insist a priori that it is in fact exceptional is, ironically, to endorse a type of dualism: mind is radically different from brain matter, though not quite in the way Descartes thought.
As it turns out, cosmologist Sean Carroll was the most reasonable of the bunch interviewed by Falk at Slate. As he put it: “There’s nothing stopping the Internet from having the computational capacity of a conscious brain, but that’s a long way from actually being conscious ... Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness. ... I don’t think it’s at all likely.” Thank you, Sean! Indeed, let us stress the point once more: neither complexity per se nor computational ability on its own explains consciousness. Yes, conscious brains are complex, and they are capable of computation, but they are clearly capable of something else (feeling what it is like to be an organism of a particular type), and we still don’t have a good grasp of what is missing in our account of consciousness to explain that something else. The quest continues...

Originally appeared on Rationally Speaking.