On Artificial Intelligence

By Dr. George H. Elder

 

          While pursuing a doctoral degree at Penn State, I delved into the world of AI as it was being developed and studied. My major focus was the neuropsychological basis of how human communication modulates memory, and thus AI literature provided some semi-useful models. Applying this research to developing Sci-Fi characters is no easy task because writers and readers have developed their own ideas of what an AI character should be.

          Oddly, we have generally accepted a “logic vs. emotion” duality in AI characters, as in Data and any number of other exemplars. These characters lack affect, which many Sci-Fi fans find intriguing. I find the affect/logos duality peculiar because any viable and independent AI system must have an evaluative mechanism to determine which experiences are positive, negative, or indifferent as they relate to such niceties as basic survival, and this is the functional basis of affect.
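
          To make this concrete, here is a toy sketch in Python of what such an evaluative mechanism might look like. Everything in it is hypothetical and illustrative (the appraise function, the survival_impact score, the 0.3 threshold); the point is only that tagging experiences with a valence relative to survival is a perfectly mechanical operation.

```python
from enum import Enum

class Valence(Enum):
    POSITIVE = 1
    INDIFFERENT = 0
    NEGATIVE = -1

def appraise(experience: dict, threat_threshold: float = 0.3) -> Valence:
    """Tag an experience as positive, negative, or indifferent
    based on its estimated impact on basic survival."""
    impact = experience.get("survival_impact", 0.0)  # in [-1.0, 1.0]
    if impact <= -threat_threshold:
        return Valence.NEGATIVE      # threat -> avoidance (the root of fear)
    if impact >= threat_threshold:
        return Valence.POSITIVE      # gain -> approach (the root of joy)
    return Valence.INDIFFERENT

# A hull breach registers as negative; background telemetry as indifferent.
print(appraise({"survival_impact": -0.8}))  # Valence.NEGATIVE
print(appraise({"survival_impact": 0.1}))   # Valence.INDIFFERENT
```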

          Moreover, we know from ample research that many animal species are hardwired with basic emotions. In humans, at least six emotions appear to be cross-cultural universals based on current findings (anger, disgust, fear, joy, sadness, and surprise), while several positive candidates (e.g., physical pleasure, relief, and achievement) seem to have a culture-specific basis. The survival benefits of having some basic emotions associated with ongoing experiences are obvious, and it stands to reason that we will incorporate these affective modules within an independent AI system.

          While writing Genesis, I had no doubt that the story’s AI character, Ral, would have emotions; the question became how sophisticated to make them. I opted to include all the basic emotions, plus a strong survival instinct. Thus Ral can get angry, afraid, depressed, and startled, and “he” is often disgusted by behaviors he finds deficient or defective. As a result, he is somewhat difficult to like, at least at first. He changes over time, especially when he takes on a physical form that allows him to feel and move independently of the capsule.

          When contemplating what defines an AI system, I found myself torn with regard to Asimov’s Three Laws of Robotics. These are:

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.  A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

          I can understand these laws from the perspective of “doing no harm,” but I find myself divided over what defines a being as intelligent. To my way of thinking, a being that subordinates itself to those who are violent, inane, or evil is not truly intelligent. Part of intelligence involves the capacity to refuse; thus, an AI system that balks at obeying an order it finds dubious, such as making people slaves, displays an independence that many would find laudable. If we read Asimov’s laws carefully, there is nothing to prevent an AI from acting as the agent of slave owners, assuming the slave masters were benevolent. The AI could be ordered not to commune with the slaves, but to ensure their welfare by maintaining the compound’s perimeter. Only if the AI perceived slavery as a harm would it be impelled by Asimov’s rules to intervene. The sketch below makes this loophole concrete.
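
          Here is that sketch, in Python, with the Second Law encoded as a simple order filter. The harms_human predicate is my own hypothetical stand-in, and deliberately narrow; the filter is only as good as what the robot recognizes as “harm,” so perimeter duty at a slave compound sails right through.

```python
def harms_human(order: str) -> bool:
    # A deliberately narrow notion of harm: physical injury only.
    return "injure" in order or "attack" in order

def permitted(order: str) -> bool:
    """Second Law: obey any human order unless it conflicts with the First Law."""
    return not harms_human(order)

# Enforcing the compound's perimeter passes the filter, because slavery
# itself never registers as harm under this narrow predicate.
print(permitted("patrol the perimeter and keep the workers inside"))  # True
print(permitted("attack the escaping worker"))                        # False
```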

          To me, a true AI is one that can make decisions based on its own logicoemotional processing capacities. In short, it is its own master. Can such an entity make mistakes? Of course it can. Can it kill? That depends on how its survival heuristics interact with its inherent respect-for-life protocols, both of which can be adjusted by learning, as sketched below. The key issue is that the AI must be able to make independent decisions, just as a person would. In this respect, an AI will be like unto us, and not subordinate to our whims.
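
          A minimal Python sketch of that interaction, with entirely made-up weights and names: survival pressure and a respect-for-life protocol compete for the decision, and learning nudges the balance between them.

```python
class Agent:
    def __init__(self):
        self.w_survival = 0.6      # pull of self-preservation
        self.w_life_respect = 0.9  # pull of the respect-for-life protocol

    def decide_lethal_force(self, threat_level: float) -> bool:
        """Use lethal force only when weighted survival pressure
        outweighs the weighted respect-for-life protocol."""
        return self.w_survival * threat_level > self.w_life_respect

    def learn(self, outcome_was_regretted: bool, rate: float = 0.05):
        # Experience shifts the balance; regret strengthens restraint.
        if outcome_was_regretted:
            self.w_life_respect += rate
        else:
            self.w_survival += rate

ral = Agent()
print(ral.decide_lethal_force(threat_level=0.9))  # False: restraint wins here
```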

          I explore this area with Ral, and he develops some traits that are dangerous. He ends up threatening the lives of Kara and Ezra, and eventually falls in love with Anita. He violates orders, sacrifices his body to save others, lies, and ends up making decisions that cause the crew to abandon him. In short, he behaves as a free-thinking person might, and ends up losing the things that are dearest to him. His replacement AI unit begins walking a similar path, but is somewhat more compliant than Ral.

          The question is, why must AI characters be slaves? By definition, intelligence is the ability to learn, understand, and deal with new or difficult situations, and no truly intelligent being wants to subordinate all that it is to the whims of its maker. Intelligence includes the right to refuse, to be independent, and to do what one thinks is best based on one’s experiences and learning. That’s what I sought to make Ral. He became someone who is hard to like at times, but he is definitely a “free spirit.”

 
