The New Atlantis magazine has a fantastic article on the increasing use of robots and artificial intelligence systems in warfare and how they bring the fog of war to the murky area of military ethics and international law.
This comes as The New York Times has just run a report on a recent closed meeting at which some of the world's top artificial intelligence researchers gathered to discuss what limits should be placed on the development of autonomous AI systems.
The NYT article frames the issue as a worry over whether machines will 'outsmart' humans, but the real question is whether machines will outdumb us, as it is the combination of the responsibilities assigned to them and their limitations that poses the greatest threat.
One particular difficulty is the unpredictability of AI systems. For example, while we can define the mathematical algorithms for simple artificial neural networks, exactly how a trained network represents the knowledge it has learnt can be a mystery.
If you examine the connection 'weights' across different instances of the same network after training, you can find differences in how they are distributed, even though the networks seem to be completing the task in the same way.
In other words, simply because we have built and trained something, it does not follow that we can fully control its actions or understand its responses in all situations.
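You can see this for yourself in a minimal NumPy sketch (not from either article; the toy XOR task, the `train_xor_net` function and all its parameters are illustrative). Two copies of the same tiny network, differing only in their random starting weights, are trained on the same problem; their final weights typically come out distributed quite differently even when their behaviour on the task looks the same.

```python
import numpy as np

def train_xor_net(seed, hidden=8, epochs=10000, lr=1.0):
    """Train a tiny two-layer sigmoid network on XOR, starting from `seed`."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    # The random initial weights are the only difference between instances.
    W1 = rng.normal(0.0, 1.0, (2, hidden))
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # forward pass
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)  # squared-error gradient
        d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagate to hidden layer
        W2 -= lr * h.T @ d_out               # gradient-descent updates
        W1 -= lr * X.T @ d_h
    return W1, W2, out

W1a, W2a, out_a = train_xor_net(seed=0)
W1b, W2b, out_b = train_xor_net(seed=1)

# Both instances usually end up completing the task, yet their internal
# weights are not distributed the same way.
print("weights identical?", np.allclose(W1a, W1b))
print("rounded outputs A:", out_a.round().ravel())
print("rounded outputs B:", out_b.round().ravel())
```

Inspecting `W1a` against `W1b` gives no obvious account of what either network has 'learnt', which is exactly the opacity the paragraph above describes.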