[Editor’s note: Brian Patrick Green is Assistant Director of Campus Ethics Programs at the Markkula Center for Applied Ethics and faculty in the School of Engineering at Santa Clara University. He has a strong interest in the dialogue between science, theology, technology, and ethics. He has written and spoken on genetic anthropology, the cognitive science of the virtues, astrobiology and ethics, cultural evolution and Catholic tradition, medical ethics, Catholic moral theology, Catholic natural law ethics, transhumanism, and many other topics. He blogs at TheMoralMindfield, and many of his writings are available at his Academia.edu profile. He spoke to Charles Camosy about the ethical challenges posed by advances in artificial intelligence.]
Camosy: One can’t follow the news these days without hearing about artificial intelligence, but not everyone may know precisely what it is. What is AI?
Artificial intelligence, or AI, can be thought of as the quest to construct intelligent systems that act similarly to, or imitate, human intelligence. AI thereby serves human purposes by performing tasks that would otherwise require human labor, without a human actually doing the work.
For example, one form of AI is machine learning, in which computer algorithms (mathematical procedures expressed in code) are trained, under human supervision, to solve specific problems, such as how to understand speech or how to drive a vehicle. AI algorithms are often developed to perform tasks that are very easy for humans, such as speaking or driving, but very difficult for computers. However, some kinds of AI are designed to perform tasks that are difficult or impossible for humans, such as finding patterns in enormous sets of data.
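For readers curious what "training" an algorithm actually looks like, here is a minimal, purely illustrative sketch (not any production AI system): the program is shown examples generated by a hidden rule and gradually adjusts a parameter until its predictions fit the data. All names here are invented for illustration.

```python
# A toy version of supervised machine learning: learn a single
# parameter w so that the prediction w * x matches the examples.

def train(examples, steps=1000, lr=0.01):
    """Fit a one-parameter model y = w * x by gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y     # how wrong the current model is
            w -= lr * error * x   # nudge w to shrink the error
    return w

# Training data secretly generated by the rule y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
w = train(data)
print(round(w, 2))  # the model recovers roughly 2.0
```

Real machine-learning systems work on the same principle, only with millions of parameters and far more data, which is why they can master tasks no one could specify by hand.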
AI is currently a heavily hyped technology, and expectations may be unrealistic, but it does have tremendous promise, and we won’t know its true potential until we explore it more fully.
What are some of the most important…