Since the Industrial Revolution, nations across the globe have ushered in waves of technological innovation that have drastically altered the global economic landscape. From the steam engine to the telephone to the widespread use of electricity, technology has played a pivotal role throughout history in advancing the capabilities of the human race. Today, the world faces its next great hurdle along the path of technological progress, possibly its largest one yet: artificial intelligence. Our current socio-economic systems trend toward increased automation; by the early 2030s, roughly 38% of US jobs are expected to be at risk of automation, and more than 85% of customer interactions are projected to be managed without a human. Although the integration of machines into the global economy is nothing new, the prospect of machine learning and machine consciousness presents policymakers with a host of new social, political, and ethical concerns. To explore and address these concerns, one must first ask: what exactly is artificial intelligence, and what steps should we take to control it? This article will argue in favor of programming AI with normative philosophy in order to benefit the future of the human race, focusing on German philosopher Immanuel Kant’s theory as a starting point for AI ethics.
We have seen touch-screen soda dispensers and ATMs for years, but the more modern term “AI” describes the study and design of intelligent agents, an intelligent agent being a “system that perceives its environment and takes actions which maximize its chances of success”. Otherwise referred to as computational intelligence or rationality, AI is, put simply, a blanket term for any form of intelligence demonstrated by a machine. The majority of existing AI technology takes the form of simple AI: machines that rely on decision trees, a predetermined set of rules and algorithms, for success. Machine learning, however, is distinct in that it allows machines to learn without being explicitly programmed. This type of technology allows machines to improve their decision-making by incorporating and analyzing large volumes of data about a particular task and the success rate of certain actions. Often referred to as complex AI, deep learning machines work by picking out recognizable patterns and making decisions based on them. Thus, the more data such a system is fed, the smarter it becomes and the better it works. In today’s society, we can already see the benefits of machine learning in many of our most innovative technologies, such as the predictive analytics that generate shopping recommendations or the AI used in security and antivirus applications worldwide. The drawbacks, however, are far less evident.
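To make this distinction concrete, the following minimal Python sketch contrasts the two approaches; the spam-filter scenario, the toy data, and the TinyLearner class are purely illustrative assumptions, not a description of any particular system. The rule-based filter behaves the same way forever unless a human rewrites its rules, whereas the learner adjusts its internal parameters as it is shown more labeled examples.

```python
# Illustrative contrast between a fixed rule set ("simple AI") and a learned model.
# The example scenario and data are hypothetical.

def rule_based_filter(message: str) -> bool:
    """'Simple AI': a predetermined rule that never changes without a code edit."""
    banned = {"winner", "free", "prize"}
    return any(word in message.lower().split() for word in banned)

class TinyLearner:
    """A toy learned classifier: a single weight and bias updated from labeled examples."""
    def __init__(self):
        self.weight = 0.0
        self.bias = 0.0

    def fit(self, scores, labels, lr=0.1, epochs=50):
        # Simple perceptron-style updates: behavior improves as more data arrives.
        for _ in range(epochs):
            for x, y in zip(scores, labels):
                prediction = 1 if self.weight * x + self.bias > 0 else 0
                error = y - prediction
                self.weight += lr * error * x
                self.bias += lr * error

    def predict(self, x):
        return 1 if self.weight * x + self.bias > 0 else 0

# Feature: count of suspicious words in a message; label: 1 = spam, 0 = not spam.
learner = TinyLearner()
learner.fit(scores=[3, 2, 0, 1, 4, 0], labels=[1, 1, 0, 0, 1, 0])

print(rule_based_filter("You are a winner of a free prize"))  # True, by fixed rule
print(learner.predict(3))                                     # 1, by learned weights
```

The contrast is the point in miniature: the first function encodes its author’s judgment once and for all, while the second derives its behavior from whatever data it happens to receive.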
Despite Terminator-esque representations of a war-torn future dominated by robots, the potential dangers of the advancing field of AI research lie in the fundamental lack of control at the heart of the development of autonomous machines. For many experts in the field, the question of whether AI will yield beneficial or harmful results for the human race is simply the wrong one; instead, the focus falls on determining the degree of control with which we execute this line of progress. Most notably, Tesla and SpaceX CEO Elon Musk remains outspoken in his belief that AI’s development will outpace our ability to manage it safely. Musk has gone so far as to claim that AI development poses a greater threat to humanity than the advent of nuclear weapons, citing the machine intelligence that defeated the world champion in the ancient Chinese strategy game Go. Although much of the AI in use today has yet to cross this intelligence threshold, the increasing development of neural networks for complex AI has opened the door to an exponential uptick in the rate of machine learning. Once the cat is out of the bag, warns Musk, the intelligence in question will be unstoppable and has the potential to wreak havoc on all of society: autonomous drone strikes, the release of deadly chemical weapons, violent revolution fueled by mass-media propaganda campaigns. When one considers the degree to which we rely on machines for public health, global military operations, and political communications, the necessity of control over the activity of AI becomes abundantly clear. Therefore, the solution to the dangers of AI development lies in our ability as humans to control its behavior past a certain threshold of growth, through whatever means necessary.
As outlined above, policymakers have a clear incentive to avoid a scenario in which the rate of AI learning exceeds our ability to control it. Yet leading governments have failed to adequately regulate their respective tech industries, leaving them caught in a game of catch-up with AI developers. As evidenced by the 2018 Congressional hearing questioning Facebook CEO Mark Zuckerberg, the government has allowed the tech industry to exceed its reach, with no major value shift or policy agenda in sight. The question then becomes: what should policymakers do to maximize the chance that an AI outbreak would yield positive consequences for society? The answer lies in the study of ethics. At its most basic level, ethics, the branch of philosophy concerned with morality, asks what an agent ought to do. This question is especially relevant when applied to AI; after all, an AI with an intelligence level greater than that of humans would be able to rewrite its own code, effectively making itself anything it wants to be. Yet there is much uncertainty as to whether such an AI would rewrite itself in a hostile form or a peaceful one. This is where ethics comes in. If AI developers could program machines with an ethics that morally prohibited them from harming humans, then the scenario in which they use their neural capabilities for harm becomes much less probable. In this sense, ethics comes into view as the primary method of control, and potentially the last one, that humans could exert over their creations.
Next, we are tasked with determining which system of ethics would yield the best outcome for humanity in the event that AI exceeds our ability to regulate it. Before making this determination, it is important to understand some core distinctions between branches of philosophical thought. Normative ethics is traditionally divided into two major categories: consequentialism and deontology. Consequentialism dictates that the morality of an action be determined purely by its consequences, and that an ethical agent ought to seek to bring about the best state of affairs. Deontology, on the other hand, is a system of ethics that uses universal rules to distinguish right from wrong. A deontologist would not be concerned with the consequences of an immoral action, even if those consequences were positive, since the action is deemed immoral by its very nature. Similarly, a consequentialist would not care whether an action is intrinsically immoral or violates certain rights, so long as it produces good consequences down the road. Put simply, consequentialists are concerned with ends, and deontologists are concerned with means. In the context of AI decision-making, this distinction could make all the difference. For example, a consequentialist AI might decide that killing one particularly evil human would save thousands of lives down the road, thus justifying murder for the greater societal good despite the intrinsic wrongness of such an act. Conversely, a deontological AI would disregard the future benefit of killing that individual in order to avoid violating the individual’s fundamental rights. A third, less prominent branch of normative philosophy, known as virtue ethics, offers an alternative to the more rule-based approaches above. Originating with Aristotle, virtue ethics emphasizes the virtues, or moral character, of an agent rather than the duties and consequences involved in the agent’s action. Under this theory, an AI would be considered “ethical” if it took actions reflective of intuitively desirable character traits such as honesty, courage, or wisdom.
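The following schematic sketch, in which every name, field, and number is a hypothetical illustration rather than a real system, shows how these three frameworks can reach different verdicts on the very same action, using the “kill one to save thousands” scenario described above.

```python
# Schematic sketch of three normative decision procedures for an artificial agent.
# All class names, fields, and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    expected_utility: float            # estimated net good of the consequences
    violates_rights: bool              # does the act itself wrong a moral agent?
    expressed_virtues: frozenset = frozenset()

def consequentialist_permits(action: Action, alternatives: list) -> bool:
    # Ends-focused: permitted iff no alternative promises better consequences.
    return all(action.expected_utility >= other.expected_utility for other in alternatives)

def deontological_permits(action: Action) -> bool:
    # Means-focused: permitted iff the act itself breaks no universal rule,
    # no matter how good its downstream consequences are predicted to be.
    return not action.violates_rights

def virtue_ethics_permits(action: Action, virtues=frozenset({"honesty", "courage", "wisdom"})) -> bool:
    # Character-focused: permitted iff the act expresses the relevant virtues.
    return virtues.issubset(action.expressed_virtues)

kill_one_to_save_many = Action(
    description="eliminate one dangerous individual to prevent future harm",
    expected_utility=1000.0,           # thousands of lives saved, per the agent's forecast
    violates_rights=True,              # the act itself treats a person as a mere means
)
do_nothing = Action("take no action", expected_utility=0.0, violates_rights=False)

print(consequentialist_permits(kill_one_to_save_many, [do_nothing]))  # True  (ends)
print(deontological_permits(kill_one_to_save_many))                   # False (means)
print(virtue_ethics_permits(kill_one_to_save_many))                   # False (character)
```

Nothing in the sketch resolves which verdict is correct; the point is only that the choice of framework, not the facts of the case, determines the machine’s answer.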
Insofar as the goal of programming AI with a system of ethics is to preserve the future wellbeing of the human race, any attempt at formulating a normative theory upon which to program AI must center on the value of humans as moral agents. Otherwise, we run the risk of becoming an obstacle in the way of the machine’s progress. “Kantianism” refers to the deontological theory, derived from the work of Enlightenment thinker Immanuel Kant, that grounds morality in universal principles of human worth. This section lays out the primary arguments for adopting a Kantian system of normative ethics for AI by explaining the theory’s applicability to AI and the advantages this approach holds over alternatives.
First and foremost, artificial intelligence must be programmed with a rule-based (deontological) system of ethics rather than the more calculative and character-based approaches of consequentialism and virtue ethics. Robotics expert Matthias Scheutz argues that the need for a “computationally explicit trackable means of decision making” requires that ethics be grounded in deontology. Since AI has the potential to make incredibly complex moral decisions, it is important that humans be able to identify the logic behind a given decision in a transparent way, so as to accurately determine the morality of the action in question. This necessitates deontology, as theories that rely on the valuation of consequences or on judgments of character are far more subjective and difficult to track in an ordered manner.
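A minimal sketch of what such a “computationally explicit, trackable” procedure might look like appears below; the rule names, the dictionary-based action format, and the audit log are assumptions made purely for illustration, not a reconstruction of Scheutz’s own proposal. The key property is that every verdict names the exact rule that produced it, so a human reviewer can reconstruct the reasoning after the fact.

```python
# Minimal sketch of a trackable, rule-based (deontological) ethical check.
# Rule names and the action format are illustrative assumptions only.
from typing import Callable, NamedTuple

class Verdict(NamedTuple):
    permitted: bool
    rule: str          # which explicit rule produced this verdict
    rationale: str     # human-readable trace for later audit

# Each rule is a named predicate over a proposed action (a plain dict here).
DEONTIC_RULES: list[tuple[str, Callable[[dict], bool]]] = [
    ("do_not_harm_humans", lambda a: not a.get("harms_human", False)),
    ("do_not_deceive",     lambda a: not a.get("involves_deception", False)),
    ("respect_consent",    lambda a: a.get("has_consent", True)),
]

def evaluate(action: dict) -> Verdict:
    """Check the action against every rule; refuse on the first violation."""
    for name, rule_holds in DEONTIC_RULES:
        if not rule_holds(action):
            return Verdict(False, name, f"Refused '{action['description']}': violates {name}.")
    return Verdict(True, "all_rules_satisfied", f"Permitted '{action['description']}'.")

audit_log: list[Verdict] = []
proposal = {"description": "administer medication without patient consent",
            "harms_human": False, "has_consent": False}
verdict = evaluate(proposal)
audit_log.append(verdict)          # the full trace is preserved for human review
print(verdict.rationale)           # Refused '...': violates respect_consent.
```

By contrast, auditing a consequentialist or virtue-based evaluation would require reconstructing a subjective weighing of outcomes or character, which is precisely the opacity the deontological approach avoids.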
Furthermore, Kantianism is uniquely suited to AI programming because of its prioritization of the self-determination and rational capacities of other moral agents. In formulating his moral theory, Kant began by drawing a distinction between the moral status of rational agents and non-agents. According to Kant, humans are morally distinct from other beings in their ability to use their rational capacities to set and pursue ends. This status could arguably extend to AI. Dr. Ozlem Ulgen, a member of the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems, claims that technology may be deemed to have rational thinking capacity if it engages in a pattern of logical thinking from which it rationalizes and takes action. Although Kant reserved this concept for humans, the capacity aspect may be fulfilled by AI’s potential for rational thinking. Not only does this suggest the suitability of Kantian ethics for AI, it also provides built-in advantages when it comes to protecting human interests. Because Kant identifies the source of moral value as individual reason, the rules that follow seek to protect that same capacity. For example, Kantian ethics prohibits harming others, as doing so would fundamentally contradict the capacity for reason within other moral agents. In this sense, a Kantian AI would be far less likely to do harm to humans, as the core tenet of its philosophy would be tied to our shared rational capacities. Thus, Kantian ethics provides a human-centric approach to formulating moral rules.
Lastly, the subjectivity at the heart of consequentialist and virtue-ethical approaches to morality gives Kantian ethics a comparative advantage. Consequentialist theories mandate that we maximize the probability of good consequences but do not tell us what counts as a good consequence. They are therefore incomplete, leaving it to the agent in question to determine what it considers a “moral good”. This poses potential problems when applied to AI, which could very well decide that the extermination of the human race is a good consequence and act to achieve it. Similarly, virtue ethics relies on the notion of “good character”, or the idea that we ought to inculcate certain character traits within society. This commits the same error as consequentialism, as it fails to provide a comprehensive account of the “good person”, leaving room for AI to devise its own conceptions of virtuous character to suit its own needs. Kantianism, however, avoids this pitfall, as it locates “the good” within the agent itself. To a Kantian, actions are good insofar as they respect the right of other moral agents to set and pursue ends, further helping to create a human-centric system of ethics.
In answering the question of which system of ethics would be most suitable for programming AI, a variety of other questions arose, all of which demand further investigation. Evidently, reaching a definitive conclusion on the issue of AI ethics is no easy task; after all, humans have debated different philosophies and modes of thought for hundreds of years, and the conversation does not seem to be ending any time soon. The one thing we, as a society, can agree on despite differences in perspective is that morality is fundamentally subjective. If history is any example, the ethical systems by which people choose (or attempt) to live their lives are heavily contingent on their individual points of view. As such, the mission of determining how to program an ethical AI is complicated by the reality that we, as humans, do not operate under a perfect ethical framework to begin with. According to virtual reality developer and CEO Ambarish Mitra, this concern is surmountable: he argues that AI could help us create one. Many in the field of AI development believe that the potential for AI to reach a level of consciousness “beyond” that of humans gives it the potential to reach the sort of higher ethical truth, sometimes called a “super morality”, for which we have been searching for so long. If moral truth is to be discovered through reflection and deliberation, machines with higher rates of learning and cognition than humans would have a better shot at discovering that truth than we ever will.