Imagine a system that can model what another system thinks, and what that other system thinks it thinks, and so on, ever deeper. Such a system is capable of universal, general modeling, or, say, simulation, and it is called “Turing complete.” So far humans are the only animal that is “Turing complete.” However, what humans, or at least some of us, currently pursue is an artificial intelligence that is also “Turing complete.” When another “Turing complete” intelligence exists, chances are that humans could be destroyed by the more powerful intelligence. This becomes our existential risk. When we discussed this danger in class, it dawned on me that approaching this time of superintelligence is similar to the biological evolution of humans from apes. The difference is that this is a time of intelligence evolution, and when it completes, “humans” might no longer be the humans we perceive now.

The article “Artificial Intelligence – Can We Keep It in the Box?” by Jaan Tallinn and Huw Price, who are well aware of this existential risk, cautiously stakes out a middle position: not being too pessimistic about the risks of developing such a “strong AI,” nor being hasty in building it. So far we have been progressively developing narrow AI, that is, AI designed to tackle specific problems rather than general ones. It is possible that we keep extending the capabilities of narrow AI while avoiding its dangers. It sounds like a choice that offers both safety and benefits.

To illustrate the existential risk more clearly and to understand what our future is likely to be, Jaan Tallinn proposes a model he calls the “intelligence stairway” in the lecture video attached below. The “stairway” looks like this:

[Figure: the intelligence stairway]

As the chart shows, the technological progress that leads to an AGI (Artificial General Intelligence, the AI that is “Turing complete”) is similar to the biological evolution that distinguished humans from other animals. Tallinn points out four differences: 1) design power: technological progress is more powerful than evolution because it can reach out and grab designs from a larger design space, which is what human intelligence offers compared with the natural processes of evolution; 2) control: with technological progress, control over the future passes to the more powerful design rather than staying with its creators; 3) speed: biological evolution takes far longer than technological progress to develop a function (think of the time it took to evolve the eye versus the time it took to invent the camera); 4) environment: nature, the product of biological evolution, changes far more slowly than a city, the product of technological progress. Hence, in the stage of technological progress, we should expect 1) great design power, 2) humans not being in control of the future, and 3) fast environmental changes. In this sense, the intelligence explosion that follows technological progress amounts to an ecological catastrophe for us.

Even if this predicted stairway were true, it would not mean that the future of human beings is fixed. We can still think about how best to survive this potential catastrophe. Tallinn thinks that in the process of humans developing AGI, there are three kinds of people: those not yet capable of designing it, so they are harmless; those who are aware of the existential risk and intend to avoid it in their designs; and, most dangerous of all, those who are capable of designing it without realizing it might kill people. First of all, we still have some control over human AI developers: we can make them all aware of the risk and of the need to develop safe strong AI. Two approaches to AI safety then lie in front of us at the current stage, in which we are only capable of developing narrow AI. One is to aim for a friendly AI that has AGI’s power yet serves human interests and protects human survival, which is still hard for us because it requires us to put human values, and the way they change over time, into code and programs. The other is to avoid AGI by continuing to develop narrow AI until it comes as close to AGI as we can safely go. Tallinn clearly goes for the second choice.

I think Tallinn is very pragmatic given his awareness of the existential risk. The risk of developing AI is similar to that of using nuclear power: its power and its risk grow together, and the question left to us is how to use that power. Tallinn suggests we develop an AI safety protocol. But before that, it is necessary to educate AI developers and keep them from becoming the most dangerous kind of people, perhaps the most dangerous in our species.

 

References:

Huw Price and Jaan Tallinn, “Artificial Intelligence – Can We Keep It in the Box?”, The Conversation, 6 August 2012.

“Jaan Tallinn on the Intelligence Stairway”, a talk for Sydney Ideas at the University of Sydney. YouTube video attached.