Some futurists, like Bill Joy, suggest that scientists should "refuse to work on technologies that have the potential to cause harm". Specifically, Joy warns of the unforeseeable dangers of nanotechnology, genetics, and robotics. Others characterize such warnings against technology as obscurantism, or even neo-Luddism. So, when it comes to technological pursuits, should scientists and engineers draw a line, and if so, where should that line be drawn? What level of technology is acceptable?
Most of the technology we use today is rooted in military technologies and applications. High-tech tools like computers, the Internet, aerospace, GPS, radar, nuclear power, and so on were either developed in military-funded laboratories or underwent accelerated development because of military ambitions. Most university research funding, too, comes from governments and the military - that's our tax dollars. Take, for example, your iPhone. Most of the modules used to make the iPhone are repackaged, "now safe" military technologies; companies such as Apple merely package them as commercial, civilian applications. A quick review of history also reveals that all of these technologies have been used as military and political tools and weapons.
Technology is inherently tied to governments, militaries, politics, and economics. This observation has a number of interesting consequences.
It is impossible to voluntarily stop technological progress. If a scientist does not want to work on a project for ethical reasons, governments, militaries, and corporations will always find others who are willing to take up the task and get paid for it. For example, many governments are currently in a race to fund Quantum Computing research - the hottest Computer Science topic these days. It is next to impossible to stop this research. Although they maintain that Quantum Computing is a life-changer that will improve our lives, the reality is that government interest in the technology is primarily martial. Futurists are also closely following the progress of Quantum Computing, because it may finally be the technology that gives us true Artificial Intelligence (AI) and, consequently, might lead to a technological singularity.
Today, cutting-edge technological progress is already posing potential anthropogenic existential threats. So far, most of this technology is under the control of governments, militaries, and large corporations. Two questions come to mind. Can smaller entities get their hands on potentially fatal technologies? And can we really trust governments, militaries, and corporations? We are reaching a historically unprecedented age in which technology can no longer be separated from individuals and society. Technologically concentrated power poses social threats we could not have imagined a hundred years ago. Take, for example, the work in cryptography that led to work in big data, which in turn led to agencies like the NSA collecting and indefinitely retaining all of our personal data; Orwellian levels of mass surveillance and intimidation are just around the corner. Can we still call ourselves human beings when we are forced to think and behave in certain acceptable ways, under the horror of knowing that we are under constant surveillance? This raises a new question: who should really be in control of technology?
When it comes to technological pursuits, should scientists and engineers draw a line, and if so, where should the line be drawn? What level of technology is acceptable? In my opinion, given what drives technology and the current trends, these seem to be irrelevant questions. It seems more likely that we are either heading back to a Neolithic level of existence or toward extinction. And the smartest among us, scientists and engineers, are ensuring this outcome in total ignorance.
This post makes a little more sense if you have read yesterday's post.