In recent times, artificial intelligence has become ubiquitous. Besides powering our cellphones, directing what advertisements we see when we browse the internet or read our emails, and creating content in the media, AI-powered hardware is more and more widespread, including self-driving vehicles, home appliances, and a host of other systems for industrial use. Perhaps the tipping point has been the release of large language models such as ChatGPT, with the consequent wave of derived applications. But on such a steep slope of technological development, we have to get accustomed to constant novelty, at an accelerating pace.

All the while, we have come to observe a number of problematic trends in these developments. In this situation it becomes important to identify specific areas of concern where the community - including scientists - should get involved in an effort to reduce the likelihood of potentially negative outcomes. Those outcomes, in some scenarios that many would still consider alarmist today, and that a few are even ready to ridicule, belong to the realm of global existential threats, and thus call for the most serious and careful consideration.

The consequences for humanity of the booming trend of AI technology are hard to predict, but it is clear that disruptions are in store, as our societies are not capable of adapting with the necessary speed to phenomena whose pace of change is itself growing ever faster - which, crucially, should not be mistaken for acceleration, but rather for jolt, the rate of change of acceleration. This should motivate us to be fully aware of its unfolding, to be prepared for what awaits us, and possibly to create effective shock absorbers.

As a scientist, I personally cannot claim for myself a role I do not have, one that scientists never possessed in the first place, and one they are in any case unlikely to ever be granted, save maybe in situations so compromised as to amount, by themselves, to an already certain defeat (in Hollywood disaster movies, the last-ditch attempt of government leaders to "call the scientist" is a common cliché).

However, as a scientist, I still feel the responsibility to give my contribution, which I can frame as bringing these impending threats to the attention of society. It was in that spirit that I added my signature to the recent letter calling for a moratorium on AI developments while we work out protocols for effective safeguards. Though I have little hope it will change the agenda, I see this as a moral obligation for informed scientists. Raising the bar further, and making an effort to bring our concerns to the level at which policy-making changes can be suggested and supported, can only be the result of a higher level of awareness of the threats we are facing.

A Torino scale for AI

A framework for giving proper weight to the different threats connected to the development of new AI technologies and their introduction into our societal systems can be proposed in analogy with the Torino scale.

The Torino scale is a number from 0 to 10 that qualifies the severity of the threat posed by a potential asteroid impact with Earth. It was proposed by Richard Binzel at a United Nations conference in 1995, as a way to gauge how much attention should be paid to the assessments of impact probability for near-Earth asteroids that are routinely provided by dedicated monitoring systems and telescopes.

The probability of impact is a number that may change very significantly over time, depending on how long the object's trajectory has been followed and measured. Since there are thousands of objects to keep track of, resources should be allocated according to the importance of improving the trajectory measurements of the objects with the highest destruction potential. The latter can be quantified by the kinetic energy carried by the object at impact. The Torino scale is shown below.

(Image credit: Wikipedia)
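
To give a feel for the quantity on the scale's vertical axis, here is a minimal Python sketch that estimates the kinetic energy released at impact from an asteroid's diameter, density, and velocity; the specific numbers are purely illustrative assumptions, not tied to any real object or to the official Torino scale computation.

```python
import math

MEGATON_TNT_J = 4.184e15  # joules in one megaton of TNT equivalent

def impact_energy_megatons(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy at impact, in megatons of TNT equivalent.

    Assumes a spherical body: m = rho * (4/3) * pi * r^3, E = m * v^2 / 2.
    """
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3
    energy_joules = 0.5 * mass * velocity_m_s**2
    return energy_joules / MEGATON_TNT_J

# Illustrative values: a 100 m rocky asteroid hitting at 20 km/s
print(f"{impact_energy_megatons(100.0, 3000.0, 20000.0):.0f} Mt TNT equivalent")
```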

I believe that a conceptually similar scale should be developed for AI threats. Although we have no means of quantifying the probability with which any of the hypothesized threats could end up manifesting itself, nor the damage it would cause to humanity (or to smaller-scale environments and systems), it is still a useful exercise to paint a qualitative map where the perceived or assessed likelihood of the outcomes is on the horizontal axis and the severity of the outcomes is on the vertical axis. This may help us start a discussion on the hierarchy of those threats, which would guide us toward devoting more attention and study to the ones that maximize the product of likelihood and severity.

With that purpose in mind, I came up with a list (certainly incomplete, and meant to be little more than an example) of some of the potential threats posed by AI systems, present and future, their scope, a level of likelihood of manifestation within time scales of 10 years (A), 30 years (B), and beyond (C), and a level of severity. This could be the input of a graph like the one shown above.

I think it is important to force ourselves to assess these likelihoods and severities in at least a semi-quantitative way. By disagreeing on the exact numbers we can move the discussion further. I suggest that we use the following scale:

VLL = very low likelihood (0.0001 and below)
LL = low likelihood (0.0001-0.01)
ML = medium likelihood (0.01-0.1)
HL = high likelihood (0.1-0.5)
VHL = very high likelihood (0.5 and above)

For severity, we might initially assess threats with only three levels:

LS = low severity (effects limited geographically or to closed systems or topics, not going to affect humanity as a whole)
MS = medium severity (effects that may cause major disruptions in our society, its functioning, or the well-being of large sectors of the population or of the environment)
HS = high severity (existential threats to humankind).

I propose that HS threats be assigned a threat index of 10 if estimated to correspond to at least a low likelihood (LL, p>0.0001), and 9 if their likelihood is VLL. For MS threats, I will assign a threat index varying from 4 (VLL) to 8 (HL). For LS threats, I will assign threat indices from 0 to 4.
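
As a concrete illustration, the short Python sketch below encodes one possible reading of the rules above. The HS rule follows the text directly; for MS and LS threats only the endpoints are fixed above (4 to 8 and 0 to 4), so the intermediate steps, and the value for VHL, are my own assumed interpolation.

```python
# One possible encoding of the proposed AI threat index (0-10).
# The HS rule follows the text; the MS and LS intermediate steps are assumed.

LIKELIHOOD_LEVELS = ["VLL", "LL", "ML", "HL", "VHL"]

# Text fixes VLL -> 4 and HL -> 8; the steps in between and VHL are assumed.
MS_INDEX = {"VLL": 4, "LL": 5, "ML": 6, "HL": 8, "VHL": 8}
# Text only gives the range 0 to 4; the step assignment is assumed.
LS_INDEX = {"VLL": 0, "LL": 1, "ML": 2, "HL": 3, "VHL": 4}


def threat_index(severity: str, likelihood: str) -> int:
    """Map a (severity, likelihood) pair to a threat index between 0 and 10."""
    if likelihood not in LIKELIHOOD_LEVELS:
        raise ValueError(f"unknown likelihood level: {likelihood}")
    if severity == "HS":
        return 9 if likelihood == "VLL" else 10
    if severity == "MS":
        return MS_INDEX[likelihood]
    if severity == "LS":
        return LS_INDEX[likelihood]
    raise ValueError(f"unknown severity level: {severity}")


# Example: a medium-severity threat judged to have high likelihood
print(threat_index("MS", "HL"))  # -> 8
```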

Here is a short list of threats connected with the development of AI technologies, with proposed levels of likelihood and severity for the short term (A), medium-long term (B), and long term (C):

I will now comment on the various threats listed above, and on how I came to these assessments.

1 Disruption of the stock market by ultra-effective prediction of trends:

AI has been used in a number of ways to improve the performance of stock trading by large companies. Until now, the typical use has been in analysis and as assistance to human decisions, but in the future its use may become more autonomous. I am not aware of systems achieving performances so high that they may disrupt the system, yet such a scenario cannot be excluded in the long term. Stock markets worldwide have built-in mechanisms to react to anomalous speculative activity, e.g. the volatility interruption mechanism, or even stronger protocols that may suspend trading activities altogether, but these may be too slow to cope with a superintelligent system. Again, the issue here seems to be the balance between new developments and countermeasures in a quickly evolving situation. The likelihood of adverse effects may only increase in the short term, and I assess the severity of the outcomes as medium. In the long term the stock markets will become more robust to AI-powered intervention, for the simple reason that most of the trading can be foreseen to become automated itself.

2 Persistent loss of trust in news and manipulation of public opinion:

The creation of fake news and its use to pollute the information market has already begun. Today large language models may easily be customized to produce nocuous and malicious content. Chatbots, the automated distribution of false content on social media, and similar content-producing AI systems have notably been used, e.g., to bias the US electorate in the 2020 elections or to undermine trust in vaccines. Further, image processing systems can today be used to create "deep fakes", i.e. pictures and videos whose appearance is indistinguishable from true footage. While the pollution of the information highways has so far remained mostly confined to specific targets, its use to bias public opinion in the case of wars or other large-scale situations is going to create a general distrust of any source of information and news. The likelihood of this outcome is therefore very high - basically a certainty at the time of writing; yet I believe it is likely that effective countermeasures will become available and see widespread use in the medium term. This threat is therefore probably mostly connected to the transition period in which powerful AI systems start to be available for generic applications, before balancing measures have been developed or agreed upon.

3 Social disruptions due to job automation:

This global threat has been discussed in detail by a number of sources. It is generally agreed that a number of tasks of a repetitive nature - such as the operation and driving of vehicles for the transportation of humans or goods, the delivery of goods to end users, intermediate processing tasks in production chains, and many others - are going to suffer a significant impact from the substitution of human operators with AI-driven expert systems, due to the economic benefit of the latter; this process has already started and will only intensify in the short term. Similarly, the impact of AI on education will become evident in the widespread availability of real-time speech translation, in growing trust in the outputs of large language models, and in the automated production of educational audio/video material. Many experts argue, though, that humans will not be replaced but rather empowered by AI assistance, with a beneficial and qualifying effect on their working conditions. It remains to be seen what the net outcome of the ongoing transition will be, but if we look only at the likelihood and severity of negative outcomes in this area, both have to be assessed as medium. This is also likely to be a transient effect, so I believe the impact will decrease to low severity in the long term.

4 Nocuous influence of new freely available AI tools on society:

When ChatGPT was released, educators and academics around the world immediately recognized the disruptive effect this new, readily available tool was going to have on the grading of student homework. The demonstration that large language models may easily mimic the expressive level, jargon, and content of a high schooler, and can thus be used by the latter to drastically cut the time spent on study material with essentially no risk of being caught, has created a difficult situation in many school systems. I consider this an example of the negative side effects of new technologies that are otherwise a positive development for society.

5 Development of AI-powered armies and weapons:

The topic of AI for military applications is a complex one to deal with, mainly because of our lack of knowledge of the status of research and development in the area. Information is generally unavailable to the public, so it is hard to extrapolate what future developments may be. The production of new weapons and automated offensive devices or automata generates several different risks: the potential for world domination by the country that first acquires a novel technology offering a higher power of destruction or other forms of dominance is only one of them. Other risks, such as that of a drift toward dystopian societies with ultra-authoritarian control by robotic units, have been considered by a variety of science fiction works and movies. Overall, the severity of the risks involved varies from medium to high, but the likelihood is not easy to assess. In this situation I would prefer to err on the side of caution, evaluating the odds as low, medium, and high in the three considered time frames.

6 Yielding control of critical systems to AI algorithms:

The economic benefits of automation create a general tendency to substitute human decisions with algorithms. While the latter may show lower failure rates (as has already happened in some cases, e.g. the diagnosis of pathologies from clinical images) and therefore be overall beneficial in regular operation, here we are concerned with the fact that such a substitution, and the automation of decision procedures, involves risks of misbehaviour or malfunction not considered in the design phase, which may produce catastrophic effects. One example of this kind is the famous failure of the Soviet early-warning system designed to detect the launch of ICBMs by the US toward the Soviet Union. On September 26, 1983, a rare alignment of the detection satellite with the Sun and with the field of view of US territory caused the system to report the launch of five missiles, when in fact the detection was an artifact due to sunlight glare. Only the presence of a "human in the loop", and doubts about an attack involving only five missiles, prevented a retaliation strike that could have started a nuclear war. In general, the risk of overconfidence in automated decisions cannot be quantified unless it is tied to the specific systems under control, but we may still assess its overall potential severity as medium. The likelihood of manifestation of nocuous effects can be assessed as medium in the short term, and low to very low on longer time scales, as we gain confidence in our validation systems and design more powerful and reliable countermeasures.

7 Superintelligence is created and develops goals misaligned with those of humanity:

This scenario has been discussed quite broadly in the past, including in science fiction literature. The development of a system with super-human capabilities in a broad enough range of tasks entails the possibility that the system acquires the skill to reflect on itself and its role, and then develops objectives different from those it was originally designed to address. This might lead it to become hostile to humans, or simply to pursue its goals in ways that conflict with human life on our planet. In both cases this is an existential risk of the highest severity, and therefore, regardless of its perceived or assessed likelihood, it must be assigned very high values on the scale.

What do we do with it?

While the use of the original Torino scale is straightforward - it answers the precise question of which asteroids should be more carefully monitored in the future (telescope time costs resources and money) - the compilation of a Torino scale for AI is of less clear use.

It is mandatory to stress that the scale is based on totally arbitrary assessments of likelihood and on very debatable levels of severity. Hence the values on the scale cannot be the basis of motivated decisions - e.g., the allocation of resources to monitor the situation in the various categories. On the other hand, perhaps precisely the fact that severity levels are hard to define makes the table useful: it forces us to think about the various assessments, and it may be a very good way to start a discussion across the board on the general problem of facing the potential threats from AI in a disciplined and structured way.

The incompleteness of the list is, along the same lines, a motivation to add items to it, and to ask ourselves what we may be missing. In all cases, an open discussion of these issues at all levels - social media, scientific papers, institutions, policy makers - can only be beneficial, as it will at least increase awareness. Given the speed at which new developments keep popping up, we cannot pay too much attention to the associated threats.