If you have read Isaac Asimov's novels, then you're familiar with the Three Laws of Robotics. The Laws were conceived to curb the potential for robots to harm people.
Asimov later added a "Zeroth Law", stating that a robot must not merely act in the interests of individual humans, but of all humanity. This is unknown to many, unless you are a great fan of his novels. :)
The original Laws of Robotics (1940)
by Isaac Asimov
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
For those of you who didn't know, the movie I, Robot is not based directly on Asimov's original stories. The book I, Robot is actually a compilation of short stories. The only things they share are the title and the Three Laws.
In the movie, the robots were controlled by a huge supercomputer that could violate the First Law. The computer's A.I. had come to the conclusion that to protect humanity as a whole, a few humans had to be sacrificed for the greater good. In effect, the A.I. had adopted the Zeroth Law of Robotics.
Here are some excerpts from the novels:
Robots and Empire
In the final scenes, R. Giskard Reventlov is the first robot to act according to the Zeroth Law, although it proves destructive to his positronic brain, as he is not certain as to whether his choice will turn out to be for the ultimate good of humanity or not.
Giskard is telepathic, and he comes to his understanding of the Zeroth Law through a more subtle concept of "harm" than most robots can grasp. Grasping the philosophical concept behind the Zeroth Law allows him to harm individual human beings in service of the abstract concept of humanity.
The Zeroth Law is never programmed into Giskard's brain; instead, it is a rule he attempts to rationalize through pure metacognition. Though he fails, he gives his successor, R. Daneel Olivaw, his telepathic abilities. Over the course of many thousands of years, Daneel adapts himself to be able to fully obey the Zeroth Law.
Foundation and Earth and Prelude to Foundation
As Daneel formulates it, the Zeroth Law reads: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." A condition stating that the Zeroth Law must not be broken was added to the original Laws.
The Caves of Steel
Near the climax, Elijah Baley makes a bitter comment to himself, thinking that the First Law forbids a robot from harming a human being, unless the robot is clever enough to rationalize that its actions are for the human's long-term good (here meaning the specific human that must be harmed).
Baley's reworded version reads: "A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general."