...Let's see... I suspect you don't really dispute that this is a plausible scenario. What you really believe (or maybe just hope) is that it will be us, our side, our army that will acquire such marvelous weapons. The enemy won't have them, and so we, with our superior technology, will emerge victorious and live happily ever after, having crushed the barbarians. Yay!
It is typically Americans who display this attitude toward hi-tech weapons. (If you are an American reading this, what I wrote doesn't imply that you necessarily display this attitude; note the word “typically”, please.) American culture has an eerily childish approach to weapons, and an outlandish (equally child-like) disregard for human life. (Once again, you might be an intelligent, mature American who respects life deeply; it is your average compatriot I am talking about.) Here is what an American journalist wrote in the Washington Post on May 6, 2007:
“So where does the air vehicle called the Predator [i.e., a flying robot] fit? It is unmanned, and impressive. In 2002, in Yemen, one run by the CIA came up behind an SUV full of al-Qaeda leaders and successfully fired a Hellfire missile, leaving a large smoking crater where the vehicle used to be.”
Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old blowing up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren't worthy of living, who don't have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without a second thought was precisely the attitude the Nazis displayed toward Jews (and others) in World War II.
[ ... ]
As I explained earlier, it's not just the automation of Bongard problems that's involved; it's the automation of cognition. Anyone who works toward making machines intelligent, and especially anyone who wants machines to “come alive”, must understand the grave ethical issues involved in such an endeavor. Consider the following email message sent by a student at Indiana University (IU, the academic institution where I did my Ph.D.) in 2008 (my emphasis):
Hello everyone,
The IU Robotics Club is having its first meeting this Thursday,
January 17th. We are a group of undergraduates and graduates hailing
from many fields with a common interest in all robotics, automata,
synthetic life and artificial intelligence. We encourage people to make
stuff come alive, whatever they decide that means.
Does anyone at IU realize the ethical issues these kids are toying with? Is it really more important to be concerned with cloning and stem-cell research? Does it not matter at all that these kids, or maybe their children, might be turned into a loose collection of quantum particles some time in the not-so-remote future by the fruits of their own toy-making? Or is it the remoteness of the future that causes the indifference, whereas other ethical issues in science are present here and now? But don't the seriousness of the nuctroid threat and its logical inevitability make any impression on anyone? ...