Another reason for this move is the usual one for bloggers: the blog is a testing ground. You, as a reader, may jump in and point out fallacies or suggest improvements in the comments thread, which would all be very much appreciated!
So here is my text, with the parts that are not relevant here removed.
Thanks to a series of groundbreaking discoveries in exoplanetology over the
past thirty years, we have gradually come to realize that planets similar
to our own, where we may speculate there is a significant chance of the
emergence of carbon-based life forms such as those inhabiting Earth,
are fairly common. Indeed, a handful of such candidates have already been
identified within a few tens of light years of us, and their number keeps
growing as our detection technologies improve. Geological studies, on the
other hand, indicate that planet Earth formed about 4.6 billion years ago
through the gradual, gravitationally driven accretion of a disk of debris
orbiting our Sun.
Fossil evidence indicates that only after more than four billion years did
multicellular life start to flourish on it, with a wealth of different
creatures progressively inhabiting its seas and lands. On such a time scale,
the emergence of biological life forms capable of rational thought, endowed
with self-awareness, and possessing communication and craftsmanship
skills is a very recent phenomenon, and one which might constitute only a
brief parenthesis in the history of our planet.
When considering our Universe from the point of view of its content of
intelligence (irrespective of the exact definition we may agree to give
that noun), armed only with estimates of the number of Earth-like
planets and knowledge of what has happened on our own planet so far,
we are bound to assess which phenomena have the potential to cause mass
extinctions. A number of reasonably well-understood cataclysmic events,
from impacts of asteroids and comets to solar flares, super-volcano
eruptions, and nearby supernova explosions, should be assessed for their
expected rates, which reduce the expected duration of our civilization.
To those phenomena one must then add several potential occurrences of
anthropogenic nature, including nuclear or biological warfare, climate
change, and ecological collapse. Among the relevant anthropogenic threats,
an emerging one is the deliberate or accidental creation of an artificial
general intelligence (AGI) that develops goals of its own which end up
misaligned with the survival of humanity.
Such a scenario is only hypothetical, and it is very hard to speculate on
its likelihood. Yet even in its current form, artificial intelligence (AI)
is already a potential contributor to significant existential risks for
humankind; its malicious use to pollute the information ecosystem with
falsehoods, undermining the basis of trust among individuals and nations,
is an outcome we have already started to cope with. Several other
consequences of the technological advances in applied AI that have already
taken place similarly constitute clear and present dangers today. While
the probability of anthropogenic catastrophes is harder to estimate than
that of natural extinction-level events, the former is generally agreed
to be much larger, and therefore to contribute more significantly to
reducing the expected life span of intelligent life on Earth.
Barring the possible yet arguably very unlikely achievement of a widespread
colonization of nearby habitable planets by human beings, which would make
intelligent life significantly more robust to global existential threats
of both anthropogenic and natural origins, we must therefore come to terms
with the idea that biological intelligence might be a rare occurrence in the
Universe. Billions of Earth-like planets exist, yet intelligent life is
likely only an ephemeral phenomenon on each of them. A different conclusion
could, however, be reached if we consider the
potential diffusion of AGI. Sitting on the initial, linear-looking slope
of a deceptive exponential curve, we have witnessed over the past few
decades what appeared to be a very slow rise of non-biological
intelligence. From the
development of the theoretical underpinnings of AI in the first half of the
twentieth century to the coming of age of powerful computers toward the end
of the millennium, and then from the rise of smartphones to today's
widespread AI systems, we have witnessed each new advancement without
experiencing a real shock. However, things are bound to change very soon, as
the exponential trend in the capabilities of AI, its diffusion, and its
overall impact on society are all becoming manifest. This has also
revitalized discussions that experts have been having for over sixty years
about the possibility of creating AGI systems endowed with self-consciousness
and general capabilities that would quickly transcend human intelligence,
bringing a wealth of new arguments to those who debate the likelihood
of that scenario. The question today appears to be not if, but when, such
systems will be produced. While opinions still differ widely, it appears
relatively uncontroversial today to state that, whether in 20 or 50 more
years, an AGI will arise on Earth: it looks like an evolutionary necessity,
whose fuel is the enormous empowerment and profit it would guarantee to its
owners. And since artificial general intelligence may be able to transcend
many of the existential risks that biological intelligence is subjected to,
and is far better suited to equip itself with the means to become an
interplanetary phenomenon, it is certainly not unreasonable to conclude
that artificial, and not biological, intelligence may be the most common
substrate of intelligence in the Universe. We find this observation quite
significant, and one which in and of itself constitutes yet another concrete
reason to look at the future of artificial intelligence with redoubled
interest and concern. As logical and unavoidable a natural outcome of human
evolution as we believe AI may be, shaping the way this gestation will end
is still partly in our hands: and the consequences of our actions are
undeniably far-reaching.