The [New] Laws Of Robotics!

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov first introduced the Three Laws of Robotics in his 1942 short story "Runaround" and used them as a unifying theme for his numerous works of robotic fiction. In his later work, he felt the need to add a fourth, which he called the Zeroth Law, to precede the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." Today, we have not yet reached the level of technology Asimov imagined in his science fiction stories, but it has already become obvious that the Three Laws of Robotics were just wishful thinking. After all, what is to stop someone from simply hacking a robot's firmware?

At the other extreme, Stephen Hawking and others are warning that artificial intelligence could end mankind. It is not implausible to imagine that "intelligent" robots, rather than being a positive force for humanity, could very well re-design themselves and exterminate the "nuisance" that is humanity. But the current situation seems to point to something in between: there is no threat of an imminent AI takeover, nor are there any signs that robots will surely be a force for good that serves all of humanity.

Today, despite the "promising" research, artificial intelligence is not yet "intelligent". On the other hand, the use of innovative automation is already posing threats on two major fronts: economic and military. Karl Marx wrote about the transition from feudalism to capitalism, from land ownership to factory ownership. We are now witnessing a third transition, one Marx could not have foreseen: to robot ownership. Both land ownership and factory ownership required a human labor force. Not anymore. The emerging robot and automation owners depend less and less on human beings for production. This trend is leading to a historically unprecedented paradox: capitalists, who are becoming 'robot owners', will no longer need humans to produce their goods and services, yet they will still need humans to purchase those goods and services; and a great majority of people won't be able to purchase them because they won't have jobs. And since, for land owners, capitalists, and robot owners alike, redistribution of wealth is out of the question, they will respond to any backlash - as you might have guessed - with their "killer robots". After all, a consumer is worthless if he or she is not economically viable. It looks like we are on the verge of a largely unanticipated technological singularity. Maybe we need to be bound by an unbreakable law ourselves before we continue to develop robots: "A person may not harm humanity, or, by inaction, allow humanity to come to harm."

After writing the above, while reading the Wikipedia page on technological singularity, I came across a very similar point explored by Martin Ford, who, in his book "The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future", seems to argue that there will never be a singularity, because unemployment and plummeting consumer demand would destroy the incentive to invest in the technologies required to bring it about. As far as I'm concerned, the question of the singularity - whether artificial intelligence will exceed human intellectual capacity and control, and if so, when - is less relevant than how the current political, economic, and especially environmental trends are unfolding. It might sound like doom and gloom, but current trends are not pretty!

The photograph is of a street mime artist in Izmir.