Are AI Morals as Simple as Asimov's Three Laws of Robotics?
Not a day goes by without mention of something AI caused or allowed to happen. The world is now acutely aware of deepfakes, the potential loss of income for graphic artists whose work is displaced by AI image generators, cases where scientific research papers were generated by chatbots built on large language models (LLMs), and even failures by self-driving vehicles to prevent negative outcomes (crashes, getting lost).
As a result, the general public has grown increasingly wary of the potential harm AI could cause.
If the ultimate aim of AI (and robotics) is to safely assist humans in their daily tasks, surely some simple safeguards could be put in place. Why not use the old Laws of Robotics to protect humans?
To refresh the memory of some, or to explain to others: in the world of science fiction there exist the Three Laws of Robotics:
First Law - A robot must not harm a human or allow harm to come to a human through inaction.
Second Law - A robot must obey human orders, unless doing so would conflict with the First Law.
Third Law - A robot must protect its own existence, unless doing so would conflict with the First or Second Law.
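The strict priority ordering of the three laws can be sketched as a simple guard function. This is a hypothetical illustration only: the boolean inputs are stand-ins for judgments ("harm", "human", "order") that, as discussed below, the laws themselves never define.

```python
# Toy sketch of Asimov's Three Laws as a priority-ordered rule check.
# The flags are hypothetical stand-ins for the undefined judgments
# the laws require a robot to make about a proposed action.

def permits(action, harms_human, ordered_by_human, harms_robot):
    """Return True if the Three Laws allow the action."""
    # First Law: never harm a human; this overrides everything else.
    if harms_human:
        return False
    # Second Law: obey a human order (already known not to harm a human).
    if ordered_by_human:
        return True
    # Third Law: self-preservation, the lowest priority.
    return not harms_robot

# A robot must obey an order even at cost to itself:
print(permits("enter reactor room", harms_human=False,
              ordered_by_human=True, harms_robot=True))  # True
```

Note how much work the input flags are doing: the ordering is trivial to encode, but deciding whether an action "harms a human" is precisely the part the laws leave open.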
Origin of Asimov's Three Laws and the later Fourth Law
"Isaac Asimov's Three Laws of Robotics originated from a conversion between the author and his editor, John Campbell, about whether robots should follow human laws. The laws first appeared in Asimov's 1942 short story, Runaround, which was later included in the anthology I, Robot.
Asimov later added a fourth law, called Law Zero, in his 1985 work Robots and Empire. Law Zero states that "A robot cannot harm humanity or allow humanity to be harmed through inaction".
Asimov's laws are considered the ethical and moral basis for developing autonomous systems. They have been used as a blueprint for ethical AI and have been referenced in popular culture."
This section was produced after consulting Google Search Labs | AI Overview.
Weaknesses in Asimov's Laws
First Law: A robot must not harm a human or allow harm to come to a human through inaction.
The first weakness in these laws, starting with the First Law, is the language of the law itself.
In any "law" of this magnitude there can be no doubt about, or misinterpretation of, any word.
The following words are not adequately defined: robot, harm, human, inaction.
In any "law" of this magnitude there cannot be any doubt or misinterpretation of any word.
The following are words not adequately defined:
robot, harm, human, inaction.
Of course, in the stories penned by Asimov, these weaknesses are the very plot devices that drive the themes of the robot stories.
Before we can even discuss the words of the laws, we need to consider that Asimov's robots all contain positronic brains which are hard-wired with the three (later four) laws of robotics. Once the positronic brain is installed, the robot is trained to fulfil whatever task its owner requires.
How can AI be taught morals?
Ethicists and technologists now advocate for:
Human-centered design: Prioritizing dignity, rights, and agency.
Accountability frameworks: Ensuring traceability, oversight, and redress.
Contextual ethics: Adapting moral reasoning to specific domains (e.g., healthcare vs. policing).
Expanded principles: Some propose a “Fourth Law” — that robots must be designed to benefit humanity as a whole.
More reading on this topic:
Why We Should Expand Asimov's Three Laws Of Robotics With A 4th Law
Asimov's Three Laws: Are They Applicable Today? – AI Guv
