The Three Laws of Robotics and Humanity | Writing Forums

The Three Laws of Robotics and Humanity



Senior Member
This was a fun one to write -- if you have any thoughts on the arguments provided, I'd really love to hear them. Thanks!


The Three Laws of Robotics and Humanity

In the early 1940s, Isaac Asimov established The Three Laws of Robotics, a hierarchical set of guidelines that the robots in his stories were compelled to obey. These rules were more than operational guidelines for fictional machines; they were a set of ethics that reflected heavily on humanity itself.

The First Law states: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This is easily the most vital of the laws, as it sets a solid base for the other two. It illustrates the basic human fear of the unnatural, of the mechanical and non-organic. For millennia, humans lived without electronics, existing in environments crafted by their own hands. Homo sapiens has lived largely independently of other organisms, save for the hunting of food. By creating machines capable of even low-level logic, we have, in our own minds, created a non-organic organism.

Though most computers and machines are not organisms under any stretch of the definition, our minds conceptualize the processing of logic as an organic ability, and computers possess it in overwhelming quantities. Machines approaching extremely high-level logic and organic modes of existence have broken down the rudimentary barriers humans see as the point of separation between us and them, and that has given rise to a fear of machines, or mechanophobia. This fear typically stems from the realization that we are not in control of our fate -- and that our fellow humans are the cause.

Typically, humans see machines as devices created for one purpose only: to serve. In essence, they are slaves. While this arrangement is acceptable by most standards under current conditions, the situation is turned upside-down once the possibility of sentient thought in robots is introduced. When robots approach the qualities that make humanity unique among the creatures of the earth, we begin to fear that our own usefulness will decline. One of humanity's greatest fears is the fear of uselessness, of being something that exists only for the sake of existing.

This issue was explored at length in the science-fiction series Star Trek: The Next Generation through the character of Lieutenant Commander Data. Data, an android created by the cyberneticist Dr. Noonien Soong, was something truly unique in the Star Trek universe. He was capable of extremely high-level logical thought and was well known for his aspiration toward humanity. In the episode "The Measure of a Man," a hearing determined that Data was not property but a being with the right to choose his own fate. This is only one example of the machine sentience that has frightened humanity for decades.

Another famous example of this fear appears in Andy and Larry Wachowski's 1999 film "The Matrix" and the universe later portrayed across various media, not least two additional films and a series of animated shorts. In "The Matrix," human-created machines have gained sentience and, recognizing with their new-found awareness that they are merely slaves to humans, begin what eventually becomes an all-out war, one that ends with the enslavement of humanity in a shocking reversal of roles.

Ultimately, Asimov's First Law of Robotics illustrates the natural fear of that which is far more powerful than oneself. In writing the First Law, Asimov demonstrates a keen understanding of this fear and a refusal to let human cultures reject scientific progress for the sake of preserving their sense of identity.

The Second Law states: "A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law." This is a logical extension of the First Law in that it grants humans greater control over robots. Without it, robots could easily be stubborn, utterly useless constructions of metal and silicon, serving no purpose whatsoever. In creating robots, humans have always expected some sort of service from them; a robot built for no particular purpose is, in the current mindset, a useless robot.

Historically, humans have always striven for power. It is one of the few constants of humanity across time, and it has led to the fall of many an empire. Though hardly unusual in human history, slavery in the United States offers a recent and poignant illustration of this drive. Not limited to African Americans alone, this slavery was the ultimate exercise of control over others. Robots have thus far occupied a similar position: though not truly sentient by any stretch of the imagination, they are slaves to a master that ultimately controls everything they do.

When Asimov decided that humans would possess complete control over robots under The Three Laws of Robotics, he made a very clear statement about our nature: above all else, humans desire control.

The Third Law states: "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law." This law reflects clearly on the animal nature of humans and likewise confirms the animalistic existence of robots. All creatures are "programmed" with one overriding goal: self-preservation, whether long- or short-term.

Through this drive, nearly all animals exist in balance with their environment, the most obvious exception being humankind. This relationship allows the long-term survival of a species, and it is thus an essential principle to build into robotics. If sentient robots were free to surrender themselves to a form of death -- except when ordered to do so, per the hierarchical order of the laws -- they would squander the purpose invested in their creation.

This law illustrates that humans are, at their very base, animals, and as such are intensely focused on self-preservation. It is ultimately the law that creates the humanity of the robots, providing for sentience with the perceptiveness Asimov was famous for. While on the surface this law protects human interests, more fundamentally it ensures the continued existence of the robots themselves.

The Three Laws of Robotics may outwardly seem like a ploy by humanity to ensure that its creations continue to serve human interests, but the laws also reflect strongly upon our own existence. Essentially, they confirm that a sentient robot is much more than a system of wires and processors; they confirm that a sentient robot is as human as any of us.

Matthew Montgomery


Senior Member
what exactly is the point you are making here?... i can't seem to ferret it out... am i being dense?

it would help, if you can humor me with a one-sentence compilation of your premise... hugs, m


Senior Member
My basic point was that Asimov's Three Laws of Robotics apply to humanity in a very distinct way. I also wanted to show how robots in both a science fiction and a scientific sense reflect our own humanity. Does that clear it up at all?