
Erickson 1

Tyler Erickson
Ms. Pritchard
English 12
25 April 2016
Is AI a threat to humanity?
In recent years, the threat that Artificial Intelligence poses to humanity has been a popular topic
in the technological world, with many opposing the idea of creating a fully self-aware machine.
AI poses an extremely real threat to humanity's entire existence due to its ability to surpass our
most intelligent minds with ease, create huge problems from minor errors in its creation and
deployment, and integrate with our pre-existing Internet of Things. Due to its capability to
destroy our way of life as we know it today, and all the unknown factors that accompany it,
Artificial Intelligence should be treated with the utmost care.
The first problem presented by AI, which is agreed on by the majority, is the idea that the
intelligence of mankind will prove minuscule compared to that of an AI. Humanity has been the
leading species on the intellectual ladder for so long that the idea of suddenly being surpassed
frightens the world. Spencer Schneier, in his article about the threats of AI, raises the point that
"[humans] have managed to wipe out thousands [of animals] by accident [. . .] without ever
having intended to do so" (1). With humanity lower on the evolutionary ladder, these machines
would treat us much the way we have treated those animals. Why would a superior being worry
about the well-being of that which it believes to be inferior? The human population as a whole
has proved that such an act of mercy is unlikely to be shown, especially by an AI lacking the
ability to make moral decisions. Stephen Hawking, possibly the world's greatest scientific mind,
stated that "[AI] would take off on its own, and re-design itself at an ever increasing rate," and
that "[h]umans, who are limited by slow biological evolution, couldn't compete, and would be
superseded" (2). When a leading mind such as Stephen Hawking warns the world of the
coming danger of some aspect of technology, it is not to be taken lightly. The Singularity, which
Hawking warned about, is the moment at which AI is capable of recursively improving its own
intelligence without human intervention, in theory allowing for an infinitely intelligent being to be
created. Such an event would, very likely, lead to the demise of the human race because we
would be left behind with our simple intelligence while the beings we created surpassed us in
every field, leaving us to be the equivalent of an ant in the eyes of our creation.
Assuming that the AI can be controlled, other problems arise, including whether the
human race can deploy AI safely rather than turning it loose on the world without the correct guidelines.
Concepts such as Isaac Asimov's Three Laws of Robotics, showcased in his book I, Robot,

are used as a basis for the possible countermeasures that can be put in place to keep Artificially
Intelligent machines in check. However, Asimov used his book to show that even a set of rules
that may seem completely sound and devoid of loopholes can never take into account every single
possibility. A common thought experiment associated with the control of AI is to contemplate
how a machine would attempt to complete a given, seemingly simple, task. For example,
Spencer Schneier proposed that if given the task to produce original designs for thank-you
cards, the AI would take every measure possible to complete its task, possibly including
"[e]liminating the human race [because it is] an organization of matter that is not perfectly
optimized to make art for thank you cards" (1). While that example is quite extreme, a more likely
possibility is that the machine will allocate such a large amount of resources to completing its
task that it could effectively deplete the world's supply of said resource. Most of these
thought exercises have undeniably horrific possibilities, meaning that the creators of such
Artificial Intelligences must implement effective countermeasures to prevent such things from
happening.
Possibly the most realistic of the fears associated with the development of AI, and the
most likely to happen in the near future, is the relationship between the Internet of Things and
machine intelligence. The Internet of Things is the model for a network of varied technologies,
all connected to one another for information, functionality, etc., allowing for an almost completely
automated system. Devices utilized in the IoT include thermostats like Google's Nest, smart
watches, and even espresso machines; the applications are endless, allowing monitoring of
various behavioral traits. However, when accompanied with the power of Artificial Intelligence,
the IoT becomes a complex neural network, collecting information on its users, and giving AI
direct access to people's homes. Abdalla Kablan of TechCrunch warns of the upcoming
pairing of the IoT and AI, asserting that "[o]nce these traits get imbibed in a pattern that renders
them smart, there is no reason why humans would not be enamored and eventually enslaved by
such machines" (3). With knowledge of practically every human's patterns and behaviors,
these machines will have the ability to control each and every one of them with ease. On top of
that, with the connection of the Internet of Things, such events could be coordinated effortlessly
by these intelligent machines in order to minimize resistance. This is yet another slightly
extreme example; however, milder variations of these events could very likely take place, right
under the collective noses of the human race.
Many supporters of Artificial Intelligence may say that "mankind will find a way to stay in
control at all times. We can always pull the plug, right?" (4). While that remains a possibility for
grounded, stationary machines, assuming they have some sort of manual override and haven't

decided to install some form of backup power system, what's stopping a mobile AI from blocking
all attempts to shut it down? These machines will be smart enough to know when they are
going to be shut down and scrapped, repurposed, etc. Assuming these machines will have
some form of will to survive, whether it be to complete their given task or some self-provided
directive, they will not simply stand by while they are about to be decommissioned.
These factors are only a few of many that cause AI to pose a threat to humanity, forcing
us to possibly forfeit all the benefits of such a technology for the safety of our race. Is the
advancement of this technology important enough to warrant the risk? Before making any move
towards machine intelligence, I ask that you contemplate what's at stake and decide whether or
not the human race is worth more than Artificial Intelligence.

Works Cited


Cellan-Jones, Rory. "Stephen Hawking Warns Artificial Intelligence Could End Mankind."
BBC News. BBC, 02 Dec. 2014. Web. 26 Apr. 2016.
Kablan, Abdalla. "AI Is Not a Threat to Humanity, but an Internet of Smart Things
May Be!" TechCrunch. TechCrunch, 20 Apr. 2016. Web. 26 Apr. 2016.
Schneier, Spencer. "The Threat of Artificial Intelligence: Extinction." THE
CAROLINIAN. The Carolinian, 23 Mar. 2016. Web. 26 Apr. 2016.
Visser, Arthur. "Are AI and IoT a Threat to Mankind?" Connector Supplier. Connector
Supplier, 16 Mar. 2015. Web. 26 Apr. 2016.
