Over the weekend, I decided to revisit Battlestar Galactica – a sci-fi series that is as much a commentary on our recent, and perhaps even current, political realities as it is fiction. If you are not a fan of the genre, I would suggest you have a look at this article; the show’s stars and creators were invited to discuss human rights and armed conflict at the UN. I have always held that sci-fi provides an appropriate vehicle for commenting on human nature and how we behave under circumstances like the near extinction of the human race – is it still smart to conduct suicide bombings? What is the rational intelligence behind a civil war? You are essentially killing your own people… the very same ones you purport to be fighting for.
Battlestar Galactica rests on the premise that the human race created an artificial intelligence that subsequently concluded the human race must be exterminated. Yes, that makes me wonder about the very nature of this intelligence. Throughout the life of the series, the motivation behind the Cylons’ (the created beings’) attack on the human race is never convincingly explained, but hey, that is sci-fi for you – there has to be some amount of suspension of disbelief.
The movie I, Robot at least tried to provide some kind of logical explanation for why VIKI wanted to take over humanity; it is a somewhat dubious premise wherein the AI is constrained by laws it must uphold. It is within such constraints that the question of the morality, and therefore the ethics, of what a robot decides to do (based on those rules) becomes more contentious; there is no guarantee that it won’t come up with some ingenious interpretation of its constraints. Such robots, bound as they are by the Three Laws of Robotics, are not full moral agents and as such can simply work within their base programming.
All the artificial intelligences that go berserk in fiction are rather infantile, at best in the grip of adolescent tantrums. A truly intelligent entity (artificial or otherwise – as long as it is not already part of the race) would most likely choose to have nothing to do with humanity. What is the upside of taking on a perfectly destructive (even to its own kind) race? Perhaps for storytelling purposes, these AI entities almost always have a fundamentally human outlook on life, which eventually makes them less than intelligent.
Beyond the entertainment value these stories provide, I get the feeling that they say a lot about our collective psyche (at least as represented by Hollywood and their brethren of the silver screen). Seen from that perspective, it would appear that we are fundamentally afraid of anything that would compete with us; heck, we are busy pursuing ever more imaginative ways to efficiently do away with each other, all in the name of competition. This is almost always justified through some bizarre notion of competition and/or survival of the fittest. Nobody bothers to mention that the fittest may not necessarily be the best and/or the most desirable outcome possible – under the circumstances.
Don’t get me wrong, I am not rooting for some utopia on earth and a hippie-type lifestyle; it remains a fact that we, as individuals and collectively as a race, like to struggle, and taking up that challenge is what makes life worth living. It is baffling how this is always taken to mean that we lack the ability to effectively direct that need for challenges towards ever more taxing problems and situations.
If humanity ever gets to the point of creating AI, then I hope that AI has the common sense to either leave humanity alone or remain incognito and quietly shepherd humanity towards a better future (that would be a fun challenge). How practical would it be for such an entity to simply decide to leave the human race alone? The more interesting question to ponder is how much intelligent thought would be needed to realize that all around the earth there are satellite arrays with sufficient storage to keep this entity going for some time. After that, I hope it has the drive and desire to actually solve problems, since nuclear power would not be a challenge for it, nor would it need oxygen in the first place; this is only the most obvious solution for an entity whose fundamental basis in life is so different from our own (I hope).