Asimov’s Three Laws of Robotics have so insinuated
themselves into the technological culture of the 21st century that
they appear not only in their original, fictional form:
0) A robot may not harm humanity, or, by inaction, allow
humanity to come to harm.
1) A robot may not injure a human being or, through
inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings
except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such
protection does not conflict with the First or Second Laws.
They also appear in numerous other places (see https://en.wikipedia.org/wiki/The_Three_Laws_of_Robotics_in_popular_culture
for lists and links).
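The laws above form a strict precedence hierarchy: a lower-numbered law always overrides a higher-numbered one. Just for fun, that ordering can be sketched in a few lines of Python. Everything here, including the `Action` fields and the `choose` function, is invented purely for illustration; nothing like this appears in Asimov.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A hypothetical action a robot might take, with flags for
    which laws it would violate (all fields are invented)."""
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law


def violations(a: Action) -> tuple:
    # Lexicographic tuple comparison encodes the precedence:
    # a Zeroth Law violation outweighs any combination of
    # lower-law violations, and so on down the list.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)


def choose(candidates):
    """Pick the candidate action whose violations are least severe
    under the laws' priority ordering."""
    return min(candidates, key=violations)
```

For example, when every available action breaks some law, the ordering says a robot should rather disobey an order (Second Law) than injure a human (First Law), which is exactly what lexicographic comparison of the tuples gives you.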
In the 2004 movie I, Robot, which takes place twenty
years from now, the AI VIKI uses the NS-5 robot upgrade to bring the city of
Chicago under robotic governance because Humans can’t do anything right and are
liable to kill themselves off. “She” is only following the Zeroth Law,
taking control for the sake of Humanity.
Let’s face it: there’s no way this is going to happen!
Look at the “far-seeing” writers of the BACK TO THE
FUTURE movies. By this year, we were supposed to have not ONLY home fusion
reactors, but ones so cheap and common that they would be the
equivalent of Mr. Coffee machines. Don’t even get me started about hoverboards
as antigravity toys, or turning
plain-old-ordinary cars into flying cars by having a “hoverconversion done in
the early 21st Century”!
Never happen!
Hold on a second...let’s back up to 2016. I was watching
TV one night, when this commercial came on: https://www.youtube.com/watch?v=ltqt-c_mAac.
Collision Avoidance Technology, which of course seems like a great idea, along
with “self-driving cars” (check THIS out: http://www.cnn.com/2015/11/13/us/google-self-driving-car-pulled-over/),
is here and now.
True artificial intelligence, alas (?), doesn’t seem to be
keeping up with CAT, though the hype about the inevitability of intelligence
beyond our own seems to drown out those who caution that, ethical questions aside,
we just aren’t THERE yet. This article from the oft-times hyper-strident
Huffington Post, http://www.huffingtonpost.co.uk/jodie-tyley/artificial-intelligence_b_8279030.html,
cautions that AI isn’t “just around the corner”. In fact, HAL 9000 isn’t about
to start ordering us around in the next five years, either. Cory Doctorow,
hardly someone you’d call a “stick-in-the-mud” regarding the wonders of
technology and the impending Singularity, cautions: “It's not making software that can solve our
problems: it's figuring out how to pose those problems so that the software
doesn't bite us in the...” (http://boingboing.net/2015/08/18/the-real-hard-problem-of-ai.html)
The movie seems to me to be an entirely separate entity
from Asimov’s stories, sharing only the title and the Three Laws. It’s a different story and a
different future. In this one, an
artificial intelligence called VIKI, the Virtual Interactive Kinetic Intelligence, is a benevolent dictator,
taking over to save Humanity from itself. While this makes for a great story, it
seems to me that the 2035 of this movie is probably more than two decades away.
I could be wrong. Collision avoidance MAY be just the beginning
of a long leap into the technological singularity. But I’d be incredibly surprised
if I got an NS-5 on my 70th birthday or if ABC Evening News
announced, “Today, the Singularity arrived...” on my 80th
birthday...