Philosophy of Technology

Class Blog

Is agency of AI an illusion?

In this post, I would like to continue our discussion in class about how the lessons of history remind us that the newest frontier of science is easily taken to be magical, and then I'll introduce a follow-up question from Professor Weslake.

In our last class, two videos were shown. The first was about a new AI chess player, built on a particular kind of algorithm, which taught itself from scratch in only four hours and then beat the best existing chess AI by making "the move" (I forget its name) that was thought to be one only a human could make. The second showed a man trying to stop a robot dog from opening a door, while the dog's movements were extremely similar to those of a real dog. These remind me of a video of Sophia, an AI, talking and responding to various questions. (There are many videos about her; this is the most popular one, with nearly 12 million views.)

History closely mirrors this situation. Take the example of electricity, which has been discussed extensively in class. When electricity was first discovered, people were amazed, especially by its effect of stimulating muscle movements. Electricity was then thought to be the mysterious flow that could explain how life works. One piece of evidence is the story Frankenstein by Mary Shelley, in which electricity was thought to be the key to the creation of life. This part of history should always remind us that people tend to be amazed by new products of technology, just as they are when these videos of lifelike AI are shown to them. When viewing these videos we feel the urge to attribute intelligence or agency to these AIs, and people sometimes do go a bit too far. One fact about the AI Sophia is that Saudi Arabia has already granted her citizenship, even though experts deny that Sophia genuinely has agency. (I think this is inappropriate for two reasons. First, since Sophia is not yet an agent, granting her citizenship makes a joke of either the notion of citizenship or of the law. Second, it creates the illusion for the general public that our AI has already reached a magical stage at which we can create agents.) Just like the tempting but false idea that electricity has some power to create life, the idea that we are already able to create genuine agents is also tempting and very likely false.

However, Professor Weslake pointed out a disanalogy between the earlier situations and the case of the chess AI: in the former, the experts knew that the tempting thought held by the general population was false. In the latter, because of the particular nature of the algorithm, even the creators of the chess AI do not know exactly how it reasons from one move to the next, and so they may not be able to give a definite answer to the question of whether this AI has agency.

I strongly agree with this point, and I think the reason our experts fail to know lies in the special nature of the subject matter, namely mind, agency, and so on. One way we used to analyze seemingly magical things was by decomposing them into their material parts; some complicated, huge machine, for example, could be the subject of such an analysis. But when it comes to AI, since we are assuming that we, as purely material beings with nothing interesting left after decomposition, can nevertheless be agents, we cannot run the decomposition argument on AI. We can only look at its algorithms and decide whether it has agency.

I suspect that the only way to determine whether an AI has agency is to combine a human brain with some mechanical part, so that we might know how it feels when the algorithm is carried out. If we cannot understand AI by looking into it, we may do so by trying to be an AI.

I’m looking forward to the part about AI ethics.


Philosophy of Technology at NYU Shanghai, a course by Anna Greenspan and Brad Weslake.