Registration IP904(SP)
Year 2014-2015
A well-known argument in machine ethics, as it applies to robot cars, is based on Philippa Foot's Trolley Problem (Appendix 1). The already conflicting answers to this problem only raise further questions, such as "Who takes the decision? The car or the human? Who writes the software?" [1], which complicate the subject further. Although one research paper argues strongly that "the designers of the algorithms responsible should, as far as possible, leave ethical issues to users, and when this is not possible, the ethical assumptions in the algorithm should at least be transparent and easy to identify by users" [2], emergency driving decisions have to be taken swiftly, and most of the time there is not enough time to request permission from, or alert, the driver or passengers.
Essentially, the answer to the problem is an algorithm that gives us a choice between murder and failure to render assistance. But how would this look in practice? Would you like to enter your car and set it to "murder mode" [3]?
My Individual Research Project will not only be a valuable addition to existing research and development, but will also work towards practical and varied solutions that either resolve or avoid this conflict. The basic premise of the project is to find solutions with legal, technical and ethical reasoning by gathering information from several different sources and methods.
Goals
With this project, I plan to gather the legal knowledge needed to judge and categorise every possible road accident scenario. I then intend to create several critical examples, together with the choices a human could physically make in each, supported by the information amassed.
I will then draft the different legal and financial consequences and liabilities, and create an aggregation of values to support a simulation-based assessment in software.
Using the results, I will formulate algorithms that provide utilitarian solutions to the different cases, and simultaneously create a universal procedure that could be applied generally. I will repeatedly test the algorithm under varying extreme conditions for validation.
From these case solutions, the general process that raises the fewest legal and ethical questions will be chosen, and I will design a workaround, or a law specifically for self-driving cars where none yet exists, which, if passed, should render autonomous cars safe on public roads.
I will then construct a robot prototype (or prototypes) as a proof of concept, and formulate further real-world uses to which this procedure could be extended.
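As a first illustration of the kind of utilitarian assessment described above, the following Python sketch scores crash-avoidance options by expected harm. The option names, severity scores and probabilities are invented for the example and are not results of this project.

```python
# Hypothetical sketch: scoring crash-avoidance options by expected harm.
# All option names, severities and probabilities are illustrative
# assumptions, not findings of this research.

def expected_harm(option):
    """Weighted sum of injury severity times probability for one option."""
    return sum(outcome["severity"] * outcome["probability"]
               for outcome in option["outcomes"])

def least_harm_option(options):
    """Pick the option minimising expected harm (a utilitarian rule)."""
    return min(options, key=expected_harm)

swerve = {"name": "swerve",
          "outcomes": [{"severity": 8, "probability": 0.3},   # hit barrier
                       {"severity": 2, "probability": 0.7}]}  # near miss
brake = {"name": "brake",
         "outcomes": [{"severity": 5, "probability": 0.9}]}   # rear impact

best = least_harm_option([swerve, brake])
print(best["name"])  # -> swerve (3.8 expected harm vs 4.5)
```

The simulative assessment proposed above would, in effect, supply realistic severities and probabilities for such a scoring function in place of these invented numbers.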
Subject Content
Robot ethics is one of the more controversial topics. It is more than a set of inflexible rules: a robot's behaviour depends on context and a number of environmental factors. It is most likely that advanced "ethical" robots will not come about through some sort of hardwired ethical system, but instead through "the development of robots that can better communicate and negotiate with the people they encounter to reach mutually agreeable outcomes." [4]
Then there are Asimov's Laws, also known as the Three Laws of Robotics, a set of rules formulated by the science fiction author Isaac Asimov.
The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. [5]
Asimov developed these laws not as a proposal for a rule-based robot code of ethics, but as a literary device to demonstrate the defects of such a system. Robots are extremely literal and possess no common sense, so a system that follows commands based on just three rules will run into varied problems.
For example, the first rule forbids a self-driving car from letting a human come to harm. A literal interpretation of this rule would prohibit sudden braking, even to avoid a collision, because it would cause whiplash for the passengers. As an even more basic instance, a robot surgeon would not be able to make the first incision of a life-saving operation. There exist infinitely many novel scenarios, and modelling the rules to suit each and every one of them is a futile task. [6]
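The literal-interpretation problem above can be made concrete in a short Python sketch: a First Law check that forbids any harm whatsoever rejects every available option, including emergency braking. The actions and harm figures are invented for the illustration.

```python
# Illustrative sketch of a literal First Law check ("may not injure a
# human being or, through inaction, allow a human being to come to harm").
# Actions and harm figures are invented for this example.

def first_law_permits(action):
    """A literal reading: forbid any action or inaction that harms anyone."""
    return action["harm_to_humans"] == 0

options = [
    {"name": "brake hard", "harm_to_humans": 1},  # whiplash for passengers
    {"name": "do nothing", "harm_to_humans": 5},  # collision ahead
]

permitted = [o["name"] for o in options if first_law_permits(o)]
print(permitted)  # -> [] : a literal First Law forbids every option
```

The empty result is precisely the defect Asimov dramatised: a rule with no notion of lesser harm leaves the robot with nothing it may do.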
Proposed Methodology
For this report I will be using a combination of qualitative and quantitative research methodologies to triangulate hypotheses and options across different kinds of data. The data will be collected at an early stage of the project to ensure that all opinions are reflected from the initial part of the project onwards.
The qualitative technique will mainly consist of interviews with open questions, to elicit varied answers; these will be sent only to valued and established members of the research community. The interview responses will be considered carefully in setting the direction and output of the later research. There is a high probability that many invitees will not respond, for various reasons, but an initial conversation with supporting information about this research should give them enough incentive to answer the question list when I send it a little later.
The quantitative method will employ questionnaires with closed questions, preferably multiple choice, and will primarily be used to gather statistical data and a general consensus to support the facts established in the research. The surveys will be circulated through easy-to-use online pages and by physical distribution in lectures and classes. Using the data in a later phase will be easier in the former case, but the latter ensures more feedback because of the environment in which it is conducted, although the responses have to be entered into a computer manually.
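As a small illustration of how the closed multiple-choice responses could be summarised into the statistical consensus described above, a tally such as the following would suffice; the response data here is invented.

```python
# Sketch: tallying closed multiple-choice survey responses into a simple
# statistical summary. The responses below are invented example data.
from collections import Counter

responses = ["swerve", "brake", "swerve", "swerve", "brake", "do nothing"]

counts = Counter(responses)
total = len(responses)
for choice, n in counts.most_common():
    print(f"{choice}: {n} ({100 * n / total:.0f}%)")
```

Online survey tools export exactly this kind of list, which is why the online responses would be easier to use in a later phase than manually transcribed paper forms.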
Appendix
1 Trolley Problem
There is a runaway trolley barrelling down the railway tracks. Ahead, on the tracks, there are five
people tied up and unable to move. The trolley is headed straight for them. You are standing
some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a
different set of tracks. However, you notice that there is one person on the side track. You do not
have the ability to operate the lever in a way that would cause the trolley to derail without loss of
life (for example, holding the lever in an intermediate position so that the trolley goes between the
two sets of tracks, or pulling the lever after the front wheels pass the switch, but before the rear
wheels do). You have two options: (1) Do nothing, and the trolley kills the five people on the main
track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which
is the correct choice?
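Reduced to a pure count of lives lost (the numbers come directly from the scenario above), a strictly utilitarian chooser can be sketched in a few lines of Python, though the ethical question is precisely whether this arithmetic settles the matter.

```python
# The trolley dilemma above, reduced to a body count: each option maps to
# the number of lives lost if it is taken.
options = {"do nothing": 5, "pull the lever": 1}

# A strictly utilitarian chooser minimises lives lost.
utilitarian_choice = min(options, key=options.get)
print(utilitarian_choice)  # -> pull the lever
```

The deontological objection is that this treats the one person on the side track purely as a means to an end, which is why the conflicting answers discussed earlier persist.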
Bibliography
1. Hutton, P., Automated Vehicles, in A Thinking Aloud, A. Dunoyer, et al., Editors. 2014: Thinking Highways.
2.
3. Angler, M.W., The Ethics Of Algorithms: Whom Would You Run Over? 2013.
4. Love, D., A Simple Experiment Shows How Hard It Is To Get Robots To Behave Ethically. 2014 [cited 2014 September 27]; Available from: http://www.businessinsider.com/ethical-robots-elevator-experiment-2014-4.
5.
6. Goodall, N.J., Ethical Decision Making During Automated Vehicle Crashes. 2014.
Further References
8.
9.
10. Millar, J., Robot Car Ethics, in Robot Car Ethics, J. Millar, Editor. 2014.
11. Google, Stop sign no longer a problem. 2014.
Patents
12. Zhu; Jiajun; (Sunnyvale, C.F.D.I.S.F., CA) ; Dolgov; Dmitri A.; (Mountain View, CA), SYSTEM
AND METHOD FOR PREDICTING BEHAVIORS OF DETECTED OBJECTS, USPTO,
Editor. 2011: The United States of America.
13. Prada Gomez; Luis Ricardo (Hayward, C., Fairfield; Nathaniel (Mountain View, CA),
Szybalski; Andy (San Francisco, CA), Nemec; Philip (San Jose, CA), Urmson; Christopher
(Mountain View, CA), Transitioning a mixed-mode vehicle to autonomous mode, USPTO,
Editor. 2011: The United States of America.
14. Montemerlo; Michael Steven; (Mountain View, C.D.D.A.M.V., CA) ; Urmson; Christopher
Paul; (Mountain View, CA), ZONE DRIVING, USPTO, Editor. 2011: The United States of
America.
15. Fairfield; Nathaniel; (Mountain View, C.U.C.M.V., CA) ; Thrun; Sebastian; (Palo Alto, CA),
TRAFFIC SIGNAL MAPPING AND DETECTION, USPTO, Editor. 2010: The United States
of America.
16. Dolgov; Dmitri A.; (Mountain View, C.U.C.P.M.V., CA), DIAGNOSIS AND REPAIR FOR
AUTONOMOUS VEHICLES, USPTO, Editor. 2011: The United States of America.
17. Alberto Broggi, G.S., Christopher K. Yakes, Vision system for an autonomous vehicle,
USPTO, Editor. 2007: The United States of America.
47. Brandom, R. Self-driving cars can navigate the road, but can they navigate the law? 2012
[cited 2014 June 23]; Available from: http://www.theverge.com/2012/12/14/3766218/
self-driving-cars-google-volvo-law.
48. Kilcarr, S., New challenges face V2V and V2I connection efforts. 2014 [cited 2014 June 21]; Available from: http://fleetowner.com/blog/new-challenges-face-v2v-and-v2i-connection-efforts.
49. Shook, H.B., Driverless vehicles: liability and new automotive technologies. 2013, In-house
Lawyer.
50. Shankland, S. US to push for mandatory car-to-car wireless communications. 2014 [cited
2014 June 22]; Available from: http://www.cnet.com/news/us-to-push-for-mandatory-car-to-car-wireless-communications/.
51. Annaswamy, T.S.a.A.M., Vehicle-to-Vehicle/Vehicle-to-Infrastructure Control. The Impact of
Control Technology, 2011.
52. Chatfield, T., When is it ethical to hand our decisions over to machines? And when is
external automation a step too far?, in Aeon. 2014.
53. Goodwin, W.C.a.A., Six reasons to love, or loathe, autonomous cars. 2013 [cited 2014 June]; Available from: http://www.cnet.com/news/six-reasons-to-love-or-loathe-autonomous-cars/.
54. Knight, W. Proceed with Caution toward the Self-Driving Car. 2013 [cited 2014 June 23];
Available from: http://www.technologyreview.com/review/513531/proceed-with-caution-toward-the-self-driving-car/.
Gantt Chart
[Gantt chart: project plan running from September 2014 to May 2015, with phases (including 2) Investigation and 8.4) Testing) of between one and twelve weeks each.]
Mind Map
[Mind map of the project. Branches: Ethical Information (thorough investigation of all ethical confusions/misconceptions); Legal Information (restrictions, advantages/freedoms, key issues, "What is needed to prevent/resolve issues?", "Is it necessary?"); Technology (information gathering of current findings, advances, material); Physical Simulation (components, cost, time, development of a universal testing).]