Context Note
The following document is addressed to the United Nations General Assembly, the only UN body with
universal representation, on the topic of artificial intelligence and its implications for the future of warfare.
The General Assembly is one of the six principal organs of the United Nations and is its main policy-making
and representative body. Within it, all member nations have equal representation, and it is important to note
that some, but not all, members are technical experts in the field of computer science. This proposal will
address the need for the United Nations to establish a specialized agency dedicated to prioritizing human
well-being and promoting ethical decision making with regard to autonomous weapons on the global stage.
THE ETHICS OF ARTIFICIAL INTELLIGENCE: THE NEED FOR GLOBAL STANDARDS

Abstract
The emergence of lethal autonomous weapons in warfare is accompanied by serious ethical implications.
Artificial intelligence is currently the fastest growing technical field in the world [1] and will continue to
grow at a rapid pace. The use of autonomous systems would provide nations with stark advantages and offer
additional protection. But ethical questions remain, such as whether it is ethical for an intelligent system to
take a human's life. The increasing complexity of this topic calls for positive change. Because the
implications of intelligent weaponry are largely unknown, the best solution is for the United Nations to
create a specialized agency dedicated to prioritizing the protection of humankind in the use of artificial
intelligence in warfare. The agency would promote transparency and formalized best practices among
professionals on a global level. Increased awareness would provide professionals and policy-makers the
information necessary to make better-informed decisions regarding the development of lethal autonomous
weapons.

The Problem

Machine learning algorithms have the potential to make an immense impact on society, and the possibilities
of such powerful innovation are still being discovered. One potential application of artificial intelligence is
as lethal autonomous weapons in warfare. The use of such powerful weaponry has the potential to
revolutionize warfare but is accompanied by high risk. Artificially intelligent systems continue to evolve
around the world, but their practical application in high-risk situations is still largely unknown.

Efforts to achieve reliable human-like intelligence have accelerated in the past decade with the exponential
growth of big data and sophisticated algorithms. Advancements in artificially intelligent systems are
accompanied by questions as well as limitations. The same intelligence that allows self-driving cars to avoid
pedestrians could allow future weapons to hunt and attack targets on their own. What actions are being taken
by professionals and policy-makers to protect human well-being in high-risk situations such as warfare?

Background
Machine learning (ML) is defined as a field of computer science, and more specifically of artificial
intelligence (AI), that gives computers the ability to learn without being explicitly programmed. The field
evolved from studies of pattern recognition and computational learning theory and is now the fastest
growing technical field in the world [1]. Machine learning specifically explores the study and construction
of algorithms that learn from and make predictions on data. In computer science and mathematics,
algorithms describe a step-by-step process, or set of instructions, that computers follow to solve problems
such as calculations, data processing, and automated tasks. The goal of machine learning is for programs to
learn from their previous experiences, exhibiting an ability to think critically and draw conclusions that are
not "hard-coded," or static, in their instructions. While groundbreaking achievements have been made,
machine learning algorithms still require extensive further research to reach such high-level goals [1].
Professionals continue to debate the ethical issues raised by society's reliance on machine learning
algorithms in high-risk situations.
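
To make the distinction between hard-coded instructions and learned behavior concrete, consider the
following minimal sketch in Python. It is an illustration only; the scikit-learn library call is real, but the
data and threshold are invented for demonstration purposes.

    # A minimal sketch contrasting a hard-coded rule with a learned model.
    # The data and threshold below are invented purely for illustration.
    from sklearn.linear_model import LogisticRegression

    # Hard-coded approach: the decision rule is fixed by the programmer.
    def hard_coded_classifier(reading):
        return 1 if reading > 5.0 else 0  # static threshold, never adapts

    # Machine learning approach: the rule is inferred from example data.
    examples = [[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]]  # past observations
    labels = [0, 0, 0, 1, 1, 1]                            # known outcomes
    model = LogisticRegression().fit(examples, labels)

    # The learned model generalizes to inputs it has never seen before.
    print(model.predict([[4.5]]))

The hard-coded function will behave identically forever, while the learned model's behavior depends
entirely on the data it was trained on, a property that becomes an ethical concern when the stakes are high.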

AI is typically designed around a model similar to human intelligence known as the perception-cognition-
action information processing loop [4]. This means that a system perceives input from its surroundings,
thinks about an action to take, weighs its options, and then makes a decision to act. In this way, an
artificially intelligent system makes decisions on input and builds a model of the world around it.


This idea is highly complex, as it requires autonomous systems to construct a world model and continually
update that model [4]. This may be simple in low-risk situations, such as those faced by an autonomous
vacuum. Professionals and policy-makers must consider the complexity of such technology in a situation
such as warfare, where there is far more at stake than a machine's immediate physical surroundings.
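
As a rough sketch of this loop, the outline below uses hypothetical placeholder functions; a real system
would read actual sensors and command actual hardware.

    # A minimal sketch of the perception-cognition-action loop from [4].
    # perceive() and act() are hypothetical placeholders for illustration.

    world_model = {}  # the system's internal representation of its surroundings

    def perceive():
        # Perception: gather raw input from the environment (invented here).
        return {"obstacle_ahead": True}

    def decide(model):
        # Cognition: weigh options against the current world model.
        return "stop" if model.get("obstacle_ahead") else "advance"

    def act(action):
        # Action: a real system would command hardware at this step.
        print("executing:", action)

    for step in range(3):  # a real system would loop continuously
        observation = perceive()
        world_model.update(observation)  # continually update the world model
        act(decide(world_model))

Even in this toy form, the loop shows where the difficulty lies: the quality of every decision depends on
how faithfully the world model reflects reality, which is far harder to guarantee on a battlefield than on a
living-room floor.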

Advancements in artificial intelligence have the potential to influence the future of warfare. Throughout
history, advancements in technology have had monumental impacts on warfare and world relations. One
example of a high-stakes technology impacting society is nuclear development and testing, which
contributed to the end of World War II. On August 6, 1945, the United States' atomic bombing of Hiroshima
killed or wounded approximately 130,000 people. Three days later, the bombing of Nagasaki caused 74,000
deaths and injured another 75,000.

Now that professionals understand the potential impacts of nuclear warfare on society, organizations like
the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) strive
to ban nuclear explosions by "everyone, everywhere: on the Earth's surface, in the atmosphere, underwater
and underground" [5]. The CTBTO aims to make it difficult for countries to develop nuclear bombs for the
first time, and for countries that already have them to build more powerful nuclear weapons. It also works
to prevent the damage that radioactivity from nuclear explosions causes to humans, animals, and plants [5].
During World War II, the use of nuclear weapons was unregulated. Today, policy-makers can draw on the
lessons of past advancements in warfare technology and proactively encourage awareness and knowledge
sharing.

Artificially Intelligent Systems – Ethical Issues and Concerns

Lethal autonomous systems have the potential to offer nations measurable advantages in warfare. For this
reason, constraining autonomous weapons internationally is a difficult feat. Asking countries to sign a treaty
that would ban the use of intelligent systems would mean asking them to forgo a tool that has the potential
to guard against major threats and save lives.

As previously mentioned, the world has witnessed various technologies challenge ethics and negatively
affect human well-being, including chemical weapons, nuclear weapons, and blinding lasers. With the rise
of artificial intelligence, professionals and policy-makers must proactively consider how artificial
intelligence may pose similar threats. UN Secretary-General António Guterres stated, "The time has arrived
for all of us – government, industry, and civil society – to consider how AI will affect our future" [6]. Yet
no progress has been made to formalize collaboration on a global scale.

Important ethical questions raised by the use of lethal autonomous weapons remain unanswered. For
example, M.L. Cummings, in her research paper "Artificial Intelligence and the Future of Warfare,"
introduces the question of whether an artificially intelligent system, such as a robot, should be allowed to
take the life of a human being [4]. She further develops her concerns by stressing that "although it is not in
doubt that AI is going to be part of the future of militaries around the world, the landscape is changing
quickly and in potentially disruptive ways." Questions of such high complexity demand attention and
informed decision making among professionals and policy-makers.

Continued concern among professionals is also demonstrated in Colin Allen's scholarly article. Allen
introduces the topic of machine ethics and argues that society must seriously consider the effects of
decisions made by machines. He discusses scenarios where the answer is not so clear. One famous example
he provides is the trolley case: "A runaway trolley is approaching a fork in the tracks. If the trolley runs on
its current track, it will kill a work crew of five. If the driver steers the train down the other branch, the
trolley will kill a lone worker. If you were driving the trolley, what would you do? What would a computer
or robot do?" [2] Allen also discusses the importance of moral agency, a well-developed philosophical
category that outlines criteria for attributing responsibility to humans for their actions. He argues that if
there are clear limits in our ability to develop or manage moral agents, then "we'll need to turn our attention
away from a false reliance on autonomous systems" [2].
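
To illustrate why machine decisions in such scenarios are unsettling, consider the deliberately naive sketch
below. The utilitarian rule is a hypothetical illustration, not a proposed policy: the machine's "ethics"
amounts to whatever the programmer chose to encode.

    # A deliberately naive rule for the trolley case described in [2].
    # Hypothetical illustration only: the moral judgment is reduced to
    # one comparison that a programmer fixed in advance.

    def choose_track(casualties_current, casualties_other):
        # Minimize casualties; otherwise do not intervene.
        if casualties_other < casualties_current:
            return "steer to other branch"
        return "stay on current track"

    print(choose_track(casualties_current=5, casualties_other=1))
    # Output: "steer to other branch". The machine answers instantly,
    # while the underlying human question remains contested.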

Global Standards and United Nations Support

Over the past several years, countries have met through the United Nations to discuss lethal autonomous
weapons, and over 60 non-governmental organizations have called for a treaty banning them. Yet no major
military power has committed to avoiding or limiting their use [4].

The United Nations supports various specialized funds, programs, and agencies. One such agency is the
United Nations Office on Drugs and Crime (UNODC), dedicated to helping member states fight drugs,
crime, and terrorism [3]. Additionally, the United Nations states in its overview that "…the United Nations
can take action on the issues confronting humanity in the 21st century, such as peace and security, climate
change, sustainable development, human rights, disarmament, terrorism, humanitarian and health
emergencies, gender equality, governance, food production, and more" [3]. Thus, it appears that the United
Nations has a duty to address the imminent rise and impact of lethal autonomous systems.

Implementation

With the use of artificial intelligence in increasingly complex decision-making scenarios [4], it is impossible
to know what advancements are in store for the future of lethal autonomous systems. At this time, it is
increasingly difficult to enact strict regulation that nations would agree to enforce. Thus, this proposal calls
for the United Nations to make a commitment to positive change.

Given the relevance of artificial intelligence in the 21st century, the United Nations should establish a
specialized agency dedicated to monitoring for unethical use of artificial intelligence in warfare around the
globe. Artificial intelligence and machine learning are still in their early phases of rapid development, and
the future of these technologies is uncertain.


Andreas Holzinger, a researcher at the Institute for Medical Informatics and Statistics, in his scholarly
editorial "Introduction to MAchine Learning & Knowledge Extraction (MAKE)," addresses his colleagues
with high regard, optimism, and passion as he stresses the importance of the global community working
together to spread knowledge about machine learning. He understands that it will take the expertise of
researchers across disciplines to answer the complex questions surrounding machine learning today.
Holzinger goes as far as saying that the global community must spark this initiative "...for the benefit of the
human. Let's MAKE it!" [1] In saying that the work done by researchers around the globe is for the benefit
of the human, Holzinger suggests that professionals have a moral obligation to pursue these
accomplishments for society, a cause greater than themselves.

The United Nations can support professionals and the development of machine learning by creating a
centralized agency that promotes transparency and knowledge sharing as artificial intelligence is
increasingly developed for warfare. Offices would be established in multiple locations and staffed with
computer science professionals from around the globe, with the intent to help monitor, regulate, and inform
nations of developments in artificial intelligence. The organization's mission of transparency and shared
knowledge would result in more informed decision making regarding the use of lethal autonomous
weapons.

More specifically, the agency would:

1. Work to regulate the rise of artificial intelligence with the motive of prioritizing the safety of
humankind.
2. Hold routine global conferences providing the world's leading computer science professionals an
outlet to discuss findings and policy.
3. Promote and educate nations on the ethical use of artificial intelligence.
4. Raise awareness around the globe of current advancements and their implications.
5. Perform research and analytical work to increase knowledge and understanding of impact.
6. Expand the evidence base for policy and operational decisions.
7. Standardize the knowledge base, learnings, and information across the globe.

Conclusion

With the rise and complexity of lethal autonomous weapons, greater responsibility must be taken to consider
their future implications for society. By doing so, the United Nations would act as a leader of positive
change and raise awareness of the issue. The creation of a specialized agency dedicated to this cause would
serve as the face of standardizing the ethical use of artificial intelligence in warfare. It is vital that action be
taken to ensure the safety of humankind.


Resources
[1] Holzinger, Andreas. "Introduction to MAchine Learning & Knowledge Extraction (MAKE)." Machine
Learning and Knowledge Extraction (2017). MDPI Open Access Journals. Web. 18 January 2018.

[2] Allen, Colin, Wendell Wallach, and Iva Smit. "Why Machine Ethics?" IEEE Intelligent Systems, IEEE
Computer Society, July/August 2006.

[3] "Overview." United Nations, www.un.org/en/sections/about-un/overview.index.html.

[4] Cummings, M.L. "Artificial Intelligence and the Future of Warfare." Chatham House, International
Security Department and US and the Americas Programme (2017).

[5] "Who We Are." CTBTO Preparatory Commission, www.ctbto.org/specials/who-we-are/.

[6] "UN Artificial Intelligence Summit Aims to Tackle Poverty, Humanity's 'Grand Challenges'." UN
News, United Nations, news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-
tackle-poverty-hummanitys-grand.


Reflective Letter

Composing the Unit 3 researched document allowed me to further pursue research in the field of machine
learning within artificial intelligence and to propose a new idea to the United Nations General Assembly.
Through my research I learned about the current concerns among professionals and their expressed need to
increase shared knowledge across the global community. As a result of gathering and analyzing the needs
of today's professionals, I saw an opportunity for positive change in creating an organization that would
promote this transparency and awareness.

The United Nations General Assembly seemed like the appropriate audience, as it is the only body within
the United Nations with universal representation and is the main policy-making organ. In my first draft, I
addressed the UN but failed to address my audience with the appropriate degree of formality. There were
multiple instances where I used the first-person point of view. Additionally, I failed to insert citations for
the researched material I had utilized.

After receiving feedback from Professor Enos and my peers, I was able to identify additional areas for
improvement. From the class-wide comments I made changes regarding audience, scholarship, and persona.
In my draft, I often explained concepts as if the reader had never heard of them. While the General Assembly
is not necessarily composed of computer science professionals, such fine-grained details were not necessary
for them to reach an informed decision on the topic. Moving forward, I either eliminated certain minor
details or presented them in a more effective way. In terms of scholarship, I ensured that I included citations
wherever necessary. Lastly, I continued to read through my proposal to ensure that I eliminated casual
diction and presented a more formal, professional tone.

Finally, I considered the feedback I received from my peers. In addition to comments regarding tone, noted
above, I received a suggestion to elaborate further on my ideas. Thus, I added further detail and tied points
back to the purpose of the proposal.
