Asimov's Three Laws of Robotics

Authors other than Asimov have often created additional laws. In October 2013, at a meeting of the EUCog,[56] Alan Winfield proposed a revision of the five principles published in 2010 by the EPSRC/AHRC working group, with commentary.[57]

Although Asimov places the origin of the Three Laws at a specific date, their appearance in his fiction happened gradually. He wrote two robot stories with no explicit mention of the Laws, "Robbie" and "Reason", though he assumed that robots would have some inherent safeguards. "Liar!", his third robot story, makes the first mention of the First Law, but not of the other two. All three Laws finally appeared together in "Runaround." When these and other stories were compiled in the anthology I, Robot, "Reason" and "Robbie" were updated to acknowledge the Three Laws, although the material Asimov added to "Reason" is not entirely consistent with the Three Laws as he described them elsewhere.[10] In particular, the idea of a robot protecting human lives when it does not believe those humans actually exist is at odds with Elijah Baley's reasoning, described below.

Rather than laws restricting robot behavior, some argue that robots should be able to choose the best solution for a given scenario. Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in and find a way to do so. Some robots already in use (the Roomba, for example) are too simple to understand when they are causing pain or injury and to know that they should stop. Many are instead equipped with physical safeguards such as bumpers, warning beepers, safety cages, or restricted zones to prevent accidents.
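The safeguards just mentioned are hard-wired interlocks rather than anything resembling the Laws. A minimal sketch of the distinction, with all names hypothetical:

```python
# Hypothetical sketch: the kind of hard-wired safeguard simple robots
# actually use (a bumper switch), as opposed to reasoning about harm.

def drive_step(bumper_pressed: bool, speed: float) -> float:
    """Return the motor speed for the next control step.

    The robot does not 'understand' injury; it simply stops
    whenever the bumper switch closes.
    """
    if bumper_pressed:      # physical interlock triggered
        return 0.0          # stop immediately
    return speed            # otherwise keep the commanded speed

print(drive_step(False, 0.3))  # 0.3 — no contact, keep moving
print(drive_step(True, 0.3))   # 0.0 — bumper hit, stop
```

The point of the sketch is that nothing here models "harm": the behavior is a fixed reflex, which is exactly why such robots fall outside the Laws.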
Even the most complex robots currently in production are incapable of understanding and applying the Three Laws. Doing so would require significant advances in artificial intelligence, and even if AI could reach human-level intelligence, the inherent ethical complexity and cultural/contextual dependence of the Laws make them a poor candidate for framing robotic design constraints.[46] However, as robots become more complex, so does interest in developing guidelines and safeguards for their operation.

[47][48] Advanced robots in fiction are typically programmed to handle the Three Laws in a sophisticated way. In many stories, such as Asimov's "Runaround", the potential and severity of all actions are weighed, and a robot will break the Laws as little as possible rather than do nothing at all. For example, the First Law might forbid a robot from acting as a surgeon, since that action can harm a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" is a notable example). If robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the need to inflict damage during surgery in order to prevent the greater harm that would occur if the surgery were not performed, or were performed by a more fallible human surgeon. In "Evidence," Susan Calvin points out that a robot could even act as a prosecutor, because in the U.S. judicial system it is the jury that decides guilt or innocence, the judge who decides the sentence, and the executioner who carries out capital punishment.[43]

Ljuben Dilov's 1974 novel Icarus's Way (also known as The Journey of Icarus) introduced a fourth law of robotics: "A robot must establish its identity as a robot in all cases." Dilov justifies this fourth safeguard as follows: "The latest law has put an end to the costly aberrations of designers who gave psychorobots as human a form as possible, and to the resulting misunderstandings."
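The weighing described above for the surgery example, in which a robot breaks the Laws as little as possible rather than freezing, can be read as choosing the action with the lowest estimated harm. A toy sketch, with invented numbers and hypothetical names:

```python
# Hypothetical sketch of "minimal violation" reasoning: each candidate
# action carries an estimated harm (severity weighted by likelihood);
# the robot picks the least harmful option instead of doing nothing
# when no option is entirely harmless.

actions = {
    "do_nothing":    0.90,  # patient likely dies without surgery
    "human_surgeon": 0.20,  # fallible human operates
    "robot_surgeon": 0.05,  # robot operates, small residual risk
}

def least_harmful(options: dict[str, float]) -> str:
    """Return the action whose estimated harm is lowest."""
    return min(options, key=options.get)

print(least_harmful(actions))  # robot_surgeon
```

The hard part, of course, is precisely what the surrounding text says current AI cannot do: producing those harm estimates in the first place.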

[30] For example, "robots" made of DNA and proteins could be used in surgery to correct genetic disorders. In theory, these devices really should follow Asimov's Laws. But in order to track commands via DNA signals, they would essentially have to become an integral part of the human they were working on. This integration would make it difficult to determine whether such a robot was independent enough to fall under the Laws or operated outside of them, and on a practical level it would be impossible for it to determine whether the orders it received would, if carried out, cause harm to the human.

In the later novels, the Laws of Robotics are portrayed as something like a human religion and are referred to in the language of the Protestant Reformation: the set of laws containing the Zeroth Law, known as the "Giskardian Reformation", stands against the original "Calvinian Orthodoxy" of the Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw struggle continually against "First Law" robots, who deny the existence of the Zeroth Law and promote agendas different from Daneel's.[27] Some of these agendas are based on the first clause of the First Law ("A robot may not injure a human being...") and advocate strict non-interference in human politics so as to avoid causing harm unknowingly. Others are based on the second clause ("...or, through inaction, allow a human being to come to harm") and argue that robots should openly become a dictatorial government to protect humans from any potential conflict or catastrophe.

Marc Rotenberg, president and executive director of the Electronic Privacy Information Center (EPIC) and professor of privacy law at Georgetown Law, argues that the Laws of Robotics should be expanded to include two new laws. Roger Clarke (also known as Rodger Clarke) wrote a pair of papers analyzing the complications of implementing the Laws, if systems could ever apply them. He argued that "Asimov's laws of robotics were a very successful literary tool. Perhaps ironically, or perhaps because it was artistically appropriate, the sum of Asimov's stories refutes the claim he began with: it is not possible to reliably constrain the behavior of robots by devising and enforcing a set of rules."[52] On the other hand, Asimov's later novels The Robots of Dawn, Robots and Empire and Foundation and Earth imply that the robots caused their worst long-term harm by obeying the Three Laws perfectly, thereby depriving humanity of inventive or risk-taking behavior.

In the face of all these problems, Asimov's Laws offer little more than founding principles for anyone who wants to create a robot code today. They would need to be followed by a much broader set of laws. Even so, without significant advances in AI, implementing such laws will remain an impossible task.
