Ok, that should probably be "life imitates art," but I am increasingly seeing a potential fictional future being constructed right before our very eyes. I like reading fiction, mostly because it provides a chance to examine events from various hypothetical perspectives. It is true that most fiction is based on some kind of reality, and that is what makes it fiction as opposed to taking leave of one's faculties.
Movies like The Terminator and The Matrix bring to life the possible threat that artificial intelligence, and robots in general, pose to humanity at large. Beyond Hollywood's special effects orgies, there are some interesting works of fiction that touch on such subjects. I have long been an avid reader of Isaac Asimov; one of the good aspects of Asimov's work is that humanity is the focus – there are no alien races and the like, and the novels dwell on the triumphs and failures of the human race as time passes. In the Robot series, the three laws of robotics feature prominently as they are amended and explored by seemingly intelligent machines.
The three laws of robotics are not in themselves perfect, and the body of material available (both in support of and against these laws) is quite vast. However, continued developments in the field of artificial intelligence may soon require similar guidelines for this effective merger of artificial intelligence and robotics. As we assign increasingly complex tasks to automated machinery, it becomes both important and efficient to grant these robots greater autonomy – but that autonomy must not be unlimited.
Until such time as we can successfully create artificial sentience, we are the only species capable of a moral (and thus ethical) act – judging good from bad – and being removed from the chance to make such judgments presents a grave threat to how our societies function. What happens when a robot running a factory kills people? Who is responsible for the robot's actions? Robots do indeed remain the property of their respective owners, but what kind of liability does an owner have over the actions and decisions of a contraption that seeks to carry out its task as efficiently as possible?
These are weighty issues that authors like Asimov have delved into in a number of books, but rapid progress in AI and robotics suggests a need for practical, implementable safeguards. Such safeguards become even more important in the context of battle robots: machines whose life-and-death decisions affect their operators (the soldiers in the field) as well as anybody who happens to be in the wrong place at the wrong time. There is a disturbing irony here – a machine built to carry out its task as efficiently as possible will, when it makes a mistake, likely make that mistake with equal efficiency and impact. On the other hand, the very discussion of a code of ethics for our creations says a lot about us as creators. Who would be surprised if such a code of ethics were designed largely for our own benefit while entirely ignoring the potential of the machine?
As someone who is physically handicapped, I find the possibility of an intelligent (albeit artificial) robot enabling the enjoyment of life to its most complete and abundant potential certainly attractive.