Randall Munroe has addressed the Three Laws on various occasions, most directly in his comic "The Three Laws of Robotics", which considers the consequences of each possible ordering of the three existing laws.

Asimov's character Trevize frowned. "How do you decide what is harmful, or not harmful, to humanity as a whole?"

The other big problem with the laws is that significant advances in AI would be needed for robots actually to follow them. The goal of AI research is sometimes described as developing machines that can think and act rationally and like a human. So far, imitating human behavior has not been well studied in AI, and the development of rational behavior has focused on narrow, well-defined problem areas.

In October 2013, at a meeting of the EUCog,[56] Alan Winfield proposed a revision of the five laws that the EPSRC/AHRC working group had published in 2010, together with commentary.[57]

The flaw in the laws is that they assume morality and moral decisions can be made by means of an algorithm, that discrete yes/no answers are enough to "solve" moral dilemmas. They are not enough. (Or, to be sufficient, vastly more "laws" would be needed than the three given, to cover the wide range of "what if" and "but what about" qualifications that still arise.) A short sketch below makes the yes/no problem concrete.

The Laws of Robotics are portrayed as something like a human religion and are described in the language of the Protestant Reformation, with the set of laws containing the Zeroth Law known as the "Giskardian Reformation", as opposed to the original "Calvinian Orthodoxy" of the Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw are in constant conflict with "First Law" robots, who deny the existence of the Zeroth Law and promote agendas different from Daneel's.[27] Some of these agendas rest on the first clause of the First Law ("A robot may not injure a human being…") and advocate strict non-interference in human politics, so as not to cause harm unknowingly. Others rest on the second clause ("…or, through inaction, allow a human being to come to harm") and argue that robots should openly become a dictatorial government to protect humans from any potential conflict or catastrophe.
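To make the yes/no point concrete, here is a minimal sketch (the function name, the actions, and the harm table are illustrative assumptions, not drawn from Asimov or the authors cited) of how a purely boolean reading of the First Law deadlocks on a dilemma in which every option, including inaction, harms someone:

```python
def violates_first_law(action: str, causes_harm: dict[str, bool]) -> bool:
    """The First Law read as a discrete yes/no test: forbidden if it harms a human."""
    return causes_harm[action]

# A trolley-style dilemma: acting and refraining each injure someone.
causes_harm = {"divert_trolley": True, "do_nothing": True}

permissible = [a for a in causes_harm if not violates_first_law(a, causes_harm)]
print(permissible)  # [] -- no lawful action remains under the boolean reading
```

Every option is forbidden, so the law, read as a strict predicate, offers no guidance at all; it can only be rescued by grading harms against one another, which is exactly the kind of weighing described next.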
Authors other than Asimov have often created additional laws.

Advanced robots in fiction are usually programmed to handle the Three Laws in a sophisticated way. In many stories, such as Asimov's "Runaround", the potential and severity of all actions are weighed, and a robot will break the laws as little as possible rather than do nothing at all. For example, the First Law might forbid a robot from acting as a surgeon, since that action can cause harm to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" is a notable example). If robots are sophisticated enough to weigh alternatives, a robot can be programmed to accept the need to cause harm during surgery in order to prevent the greater harm that would occur if the surgery were not performed, or were performed by a fallible human surgeon; a sketch of this weighing follows below. In "Evidence", Susan Calvin points out that a robot can even act as a prosecutor, because in the U.S. judicial system it is the jury that decides guilt or innocence, the judge who pronounces the sentence, and the executioner who carries out capital punishment.[43]

Brendan Dixon chimes in: "It's even worse than he says! 'Laws' are ambiguous, even for a human being. What does it mean, for example, not to 'harm'? That is actually quite sticky to train."
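As a companion to the boolean sketch above, here is a minimal illustration of the "Runaround"-style weighing: estimate the likelihood and severity of harm for each alternative and choose the least harmful option rather than refusing to act. The action names and all numbers are invented for illustration; nothing here comes from Asimov's text:

```python
# Hypothetical harm model: action -> (probability of harm, severity on a 0-10 scale).
CANDIDATES = {
    "perform_surgery":   (0.05, 6),   # small chance of surgical injury
    "refuse_to_operate": (0.90, 9),   # the untreated condition is likely fatal
}

def expected_harm(prob: float, severity: float) -> float:
    """Weigh an action by its expected harm, as the stories describe informally."""
    return prob * severity

best = min(CANDIDATES, key=lambda a: expected_harm(*CANDIDATES[a]))
print(best)  # perform_surgery -- the lesser expected harm wins
```

The choice is only as good as the two numbers attached to each action, which is exactly Dixon's point: the ambiguity of "harm" has not been solved, merely moved into the harm table.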
I wonder whether a logical consequence of the Three Laws of Robotics is that robots must teach people objective moral laws, for example to avoid "harm to a human being caused by inaction". For example, robots would put an end to all wars, abortions, and euthanasia in the world, and mount a massive evangelistic effort to keep people from inflicting infinite harm on themselves by going to hell.

The Three Laws of Robotics are rules developed by science-fiction author Isaac Asimov, who attempted to create an ethical system for humans and robots.