Using the Programme Guide of the World Science Fiction Convention held in Helsinki, Finland, in August 2017 (which I will be unable to attend until I retire from education), I will jump off, jump on, rail against, and shamelessly agree with the BRIEF DESCRIPTION given in the PDF copy of the Programme Guide. The link is provided below…
Robot Morality
With robot cars soon on our streets and with robots as caretakers, questions of ethics and morals arise. How should a robot car choose to react in an accident (save the passenger or save the most lives)? What kinds of ethical and moral questions arise from using robots as caretakers of our children, elderly, disabled, or ill? What about killer robots that are constructed by the armies of the world? Is it morally right to teach a robot to kill?
Su J. Sokol: social rights activist, writer, lawyer
Tara Oakes: fan with a collection of 330 robots
Lilian Edwards: a UK academic specializing in Internet law and intellectual property
Tony Ballantyne: author of novels and short stories that have appeared worldwide
Isaac Asimov, Jeff Vintar, and Akiva Goldsman should have been there as well. You’ve probably heard of Asimov, the author who practically invented robotic morality with his Three Laws of Robotics:
0) A robot must not harm humanity.
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
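As a playful aside: the Laws read like a strict priority ordering, where each law yields to the ones above it. Here is a minimal sketch of that idea in Python — the `Action` fields and the veto logic are my own hypothetical illustration, not anything from Asimov or the panel:

```python
# Hypothetical sketch: Asimov's Laws as a strict priority ordering.
# An action is vetoed by the highest-priority law it violates;
# lower laws never override higher ones.

from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

# Laws in priority order: (number, short name, predicate flagging a violation)
LAWS = [
    (0, "must not harm humanity", lambda a: a.harms_humanity),
    (1, "may not injure a human being", lambda a: a.harms_human),
    (2, "must obey human orders", lambda a: a.disobeys_order),
    (3, "must protect its own existence", lambda a: a.endangers_self),
]

def permitted(action: Action) -> tuple[bool, str]:
    """Return (allowed, reason): the first violated law, checked in
    priority order, vetoes the action."""
    for number, name, violates in LAWS:
        if violates(action):
            return False, f"violates Law {number}: a robot {name}"
    return True, "no law violated"

# Self-sacrifice to carry out an order: Laws 0-2 pass, so the
# veto comes only from Law 3.
print(permitted(Action(endangers_self=True)))
```

Schroeder’s “twenty or thirty” layers would just be a longer `LAWS` list — the interesting (and hard) part, of course, is writing the predicates.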
The Wikipedia article below looks at dozens of additional laws, some proposed seriously as extensions of Asimov’s Laws and some tongue-in-cheek. In one example, a character in Karl Schroeder’s “Lockstep” reflects that robots “probably had multiple layers of programming to keep [them] from harming anybody. Not three laws, but twenty or thirty.” [Thought: maybe my story from last week should be re-written taking robot morality into consideration; maybe even The 3 Laws…]
The other two names I mentioned above are screenwriters: one a former English professor, the other with screenplays like “Charlie’s Angels” and “Mr. and Mrs. Smith” on his résumé. Just the people to imbue a reflection on robotic morality with a really thoughtful storyline! *sigh*
Hopefully the discussion was well done. The participants seem to have reasonable credentials (the one who collects models has most likely read extensively about each one of her exhibits).
Robots with (or without) morals are fascinating to discuss, partially (I think) because to them, we’ll be “gods” – at the very least, their creators. Being “gods” is something Humans have deeply desired ever since the first Human deified himself or herself. We continue to do it even today – creating robots “in our image” seems to be a given.
But what about robots NOT in our image? Will we imbue (or program, if you prefer) them with laws that govern them? And what about the sixth thing the futurists were supposed to consider (“Will people accept self-driving cars?”)? We already name our cars (can anyone reading this claim that they didn’t name ONE of their cars?), and we’ve already moved away from paying attention to driving so we can focus on our interactions with social media. The mechanization of minimum-wage jobs is already well under way. Cars parking themselves have made the leap from strangeness to “standard option,” and I can’t see insurance companies NOT giving people a discount for the feature.
So we’ve accepted robots into our culture. What about other cultures? How would robots go over in Nigeria? Haiti? India? I can take a guess about the first two cultures, as I’ve spent A LITTLE time in them; I’ve never been to India, but I can guess that the introduction of true robots will have a profound impact there. (Would robots either take the place of, or become, “untouchables”?) Even here, with the boom in the “sex doll” industry, are robotic prostitutes about to take over “the world’s oldest (contested) profession”? If so, what if a robotic prostitute wanted to leave her (its?) profession?
STAR TREK: The Next Generation debated something similar in one of its iconic episodes, “The Measure of a Man,” which “has been considered by critics to be the first great episode of The Next Generation. It has also been included in lists of the best and most groundbreaking episodes of the series.” In it, “[when] Data resigns his commission rather than be dismantled for examination by an inadequately skilled scientist, a formal hearing is convened to determine whether Data is considered property without rights or is a sentient being.” (imdb)
Clearly Data, Robby the Robot, Sonny, and the Lost In Space robot had some sort of programming that included morals. Others, like the dozens of robots featured on the long-running British DOCTOR WHO, have no compunction whatever about doing anything they’re asked to do, many of them killing both Humans and non-Humans. The robotic intelligences of regular science fiction writers Gregory Benford, Iain M. Banks, Keith Laumer, Gene Wolfe, and Alastair Reynolds are possibly monstrous, but unintentionally so – bacteria and viruses would probably consider us monstrous, too.
So – will robots have morals? It’s pretty clear that if they are INTENTIONALLY made to be like us (not always physically, but intellectually), then they most likely will. But if they are not made for that specific purpose, will we bother to imbue them with morality? If we don’t, what will they be like?
VIKI, the AI from “I, Robot” designed to run the U.S. Robotics company, certainly had the Three Laws programmed into it, but it could also choose to ignore those laws. Given no morality (which the faithful believers in The Singularity seem to believe will happen), what kinds of things are powerful computers and artificial intelligences capable of? The company recently purchased by Tesla, once called Perbix, engineered simple manufacturing robots. Tesla (according to the article) plans on taking the company to a whole new level – my question is: will they give their car-manufacturing robots any kind of morality programming? What would a murder by a manufacturing robot look like? How would you investigate it? How would you prevent it?
Much food for thought!
References:
http://www.startribune.com/tesla-buys-perbix-brooklyn-park-based-maker-of-factory-automation-equipment/455669603/
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Additional_laws
http://tvtropes.org/pmwiki/pmwiki.php/Main/MechanicalLifeforms