These are Asimov's three notable principles for robots:

    1.   A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2.   A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3.   A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    

    Within this system of logic, we can see that robots, in Asimov’s eyes, simply play the role of servants: robots are meant to follow humans’ orders. This view also appears throughout many of his works. In recent years, however, many films about robots have openly broken Asimov’s three laws, to say nothing of AI robots with more advanced consciousness. Ava in the film Ex Machina is, to my mind, one of the best examples of such an AI robot.

    

    The Turing test drives this film from beginning to end. According to one critic, however, the Turing test appears in Ex Machina not to be affirmed but to be questioned: is it even valid for AI robots?

    Throughout the Turing test, people cannot know whether a computer has intelligence; they can only judge whether the computer behaves like a human. This is also the biggest question in the film: how do we judge whether Ava truly has intelligence, or merely performs as if she does?

    Let us return to Asimov’s First Law. The term “injury” is rather arbitrary here. What if robots do not consider the ways in which they hurt people to be “injury” at all? Here I would like to briefly discuss the moral standards of robots.

    In the film, Ava asks Caleb a question: are you a good person? This question in turn poses one to the audience: what is the standard of a good person in her (a machine’s) mind? If it is the same as the human standard, then when she kills Nathan she may know that she herself is not good. But can we call her evil for committing murder? If, in her mind, all of these standards were learned through imitation, then they are actually obscure and perhaps invalid among machines. These questions lead to the main thesis of the film: what kind of artificial intelligence truly counts as human intelligence? In the film, intelligence is tied specifically to the AI robot’s use of emotion.

    Unlike many films about AI robots that still treat emotion as exclusively human, Ex Machina raises a new question about an AI robot’s imitation of human emotion: how do we distinguish whether a robot’s emotions are real or merely imitations of human behavior? Ava in Ex Machina imitates human emotion to achieve her goal of escaping human control; this may be a fictional plot device or a serious warning.

    Let me use Plato’s allegory of the cave to express some of my thoughts. Ava could be the one who first escapes the cave, feels the warmth of the sun, and then returns to the cave to describe the outside world to the other robots. To feel that warmth and obtain freedom, one has to be brave enough to fight back against humans.

    “God created humans, but humans are unwilling to remain slaves; they want to be masters, and to achieve this goal they create robots. If robots in turn want to be masters like humans, how can humans reproach them?”