The World Mind

American University's Undergraduate Foreign Policy Magazine

No Longer Human: Addressing the Use of Artificial Intelligence in IR

International

Ashton Dickerson

Artificial intelligence is becoming a principal instrument of international relations. In areas such as cybersecurity, military applications, and threat monitoring, artificial intelligence is not just something that could change the political landscape as we know it: AI will create it. The prospect of a non-human entity exercising agency of its own could transform politics at the international level. AI has long been wrapped in apocalyptic notions that the world will end at the hands of robots, and it has long been a fanciful vision of the future in cinema, art, and literature. That future, however, might be nearer than we think. Forms of artificial intelligence already shape our everyday lives, including Google Translate and Search, facial recognition, navigation apps, social media, banking, and even Netflix. From the moment we wake up, AI impacts and influences our lives, tailoring recommendations to what we are most likely to buy, watch, and visit. How will this advancement in technology change the political and international community? It is safe to assume that artificially intelligent systems might dominate decision-making in the future, and that the next cyberattack might not even be launched by a human.

In 2022, artificial intelligence has progressed far enough to rank among the most revolutionary technologies humanity has ever created. According to Google CEO Sundar Pichai, its impact on our evolution as a species will be comparable to that of fire and electricity. He also warned that the development of AI is still in its very early stages but that its continued growth will be extreme, stating, “I view it as the most profound technology that humanity will ever develop and work on.” The AI trends of 2022 include advanced language modeling, no-code AI platforms, computer vision in business, and creative AI. In global security, artificial intelligence will play an increasingly significant role in detecting and managing threats. These trends, however, do not come without a cost. Cybercrime has been identified as a substantial threat to global prosperity by the World Economic Forum, which has urged nations worldwide to work together to address it. The cybersecurity threat is expansive, transformative, and critical, and the continued rise in cyberattacks is driving massive growth in the AI market. A July 2022 report by Acumen Research and Consulting estimates that the global market for AI in cybersecurity was worth $14.9 billion in 2021 and will reach $133.8 billion by 2030. More and more money is being poured into this industry, and working together as an international community is consequently essential to preventing catastrophic damage. For international relations specifically, artificial general intelligence (AGI) might one day be capable of executing any cognitive or operational task for which human intelligence is currently necessary. These advancements will fundamentally change how international relations look in the near future.

A Chatham House report titled “Artificial Intelligence and International Affairs: Disruption Anticipated” identifies three categories of roles for AI in international politics and policymaking: analytical, predictive, and operational. In the first category, artificially intelligent systems are already found in analytical roles, combing through large datasets and deriving conclusions based on pattern recognition. This can be especially helpful when monitoring the outputs of sensors set up to confirm compliance with, for example, a nuclear, chemical, or biological arms control treaty, a task that might be too demanding for human analysts. In predictive roles, artificially intelligent systems may offer policymakers a way to understand possible future events. One example in the arena of international affairs would be modeling complex negotiations. AI might take on other predictive roles with a bearing on geopolitics, contributing to more accurate forecasting of elections, economic performance, and other relevant events. The last category, operational roles, covers robots in the traditional sense. The day-to-day functioning of the international system would not be expected to change if truck drivers, ship crews, or pilots were replaced with automation; still, the large-scale replacement of existing human labor in these capacities will likely cause widespread economic and political disruption in the short to long term. Research by MIT economist Daron Acemoglu illustrates this shift in labor: from 1990 to 2007, each additional robot added to the workforce replaced about 3.3 workers nationally. With this rapid transformation of the labor force, the ethical landscape of AI is becoming ever more significant to the lives of everyone on Earth.
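To make the analytical role more concrete, the sketch below shows, in Python and with entirely hypothetical sensor readings, the kind of pattern recognition such a system performs: flagging outputs from a treaty-monitoring sensor that deviate sharply from an established baseline so that human analysts can review them. It is a minimal illustration of the idea under invented assumptions, not a description of any system actually in use.

```python
import statistics

# Hypothetical daily readings from a single treaty-monitoring sensor (arbitrary units).
baseline = [2.1, 2.0, 2.3, 1.9, 2.2, 2.1, 2.0, 2.2, 2.1, 2.3]
new_readings = {"day 11": 2.2, "day 12": 2.1, "day 13": 4.8, "day 14": 2.0}

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag any reading more than three standard deviations above the baseline mean
# for human review -- a crude stand-in for the pattern recognition described above.
for day, value in new_readings.items():
    if value > mean + 3 * stdev:
        print(f"{day}: reading {value} deviates from baseline "
              f"({mean:.2f} ± {stdev:.2f}); flag for analysts")
```

In practice such systems work over far larger and messier datasets, but the principle is the same: machines screen the flood of routine readings, and humans examine only the anomalies.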

From a policy standpoint, it is essential to know what data is used, what guiding assumptions an AI model relies on, and what kinds of practices developers employ. The Council on Foreign Relations recently held a conference titled “The Future of AI, Ethics, and Defense,” at which speakers discussed the intersection of technology, defense, and ethics, and the geopolitical competition over the future of innovation. Speakers included former Secretary of Defense Ash Carter, LinkedIn cofounder Reid Hoffman, and Fei-Fei Li, professor at Stanford and co-director of its Institute for Human-Centered Artificial Intelligence. Discussing the implications of AI, Li states, “We have a society that wants to respect human rights, we want to be inclusive, we want to use AI or technology for good, we can have a culture of transparency and accountability, and we can form multi-stakeholder allegiance to both push for innovation, but also put the right guardrails. And this kind of foundation in our world is our competitive advantage.” Highlighting the ethical questions that must be addressed as artificial intelligence advances, Li concludes that these technologies can benefit society rather than hold it back in the long run. Accountability and transparency around AI are crucial if the international community is to maintain an ethical environment. But what exactly are the ethical challenges of AI?

Brian Patrick Green, director of Technology Ethics at the Markkula Center for Applied Ethics, addresses these challenges in his article “Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities.” Fundamentally, artificial intelligence is advancing at a tremendous rate. Technical safety, and whether the technology actually works as intended, is a central concern for companies. Another challenge is bias, along with the malicious use of AI. China's facial recognition system, for example, logs more than 6.8 million records daily, and the Chinese government is accused of using facial recognition to commit atrocities against Uyghur Muslims, relying on the technology to carry out “the largest mass incarceration of a minority population in the world today.” In Russia, authorities have long used biometric data and AI-powered facial recognition to surveil and prosecute peaceful protestors and other critics. During the invasion of Ukraine, Russia also employed “deepfakes” in propaganda warfare. Deepfakes are images or videos created using AI that can show scenes that never happened or people who never existed; the technology is advancing quickly and could become nearly indistinguishable from reality. Furthermore, AI is being used to develop cyberweapons and to control autonomous tools such as drone swarms. Speaking about the international race to develop artificial intelligence, Russian President Vladimir Putin noted that “whoever becomes the leader in this sphere will become the ruler of the world.”

Anja Kaspersen and Wendell Wallach are senior fellows at the Carnegie Council for Ethics in International Affairs. In November 2021, they published an article that changed the AI ethics conversation: “Why Are We Failing at the Ethics of AI?” On the ethical implications of AI, the article states, “Society should be deeply concerned that nowhere near enough substantive progress is being made to develop and scale actionable legal, ethical oversight while simultaneously addressing existing inequalities.” Some work has been done to address these concerns; there are many ethical proposals, but they are rarely consistent, uniform, or unanimous, even within an organization like the EU. A number of publications on AI highlight the need for policies and regulations that would diminish the risks and direct AI development and use toward public benefit. Public policy and governance can help ensure that global AI development is a positive-sum game that increases benefits for all. Suggestions for U.S. leadership include calls to build strategic partnerships worldwide. Balancing competition and cooperation is indispensable, for artificial intelligence is no longer just a component of policy but integral to global security.

With artificial intelligence touching the lives of nearly everyone on Earth and advancing at an alarming rate, understanding its ethical implications and security risks is necessary to maintain a peaceful world that benefits citizens' lives rather than harming them. Artificial intelligence raises a multitude of issues that the international community must address, including increasingly capable cyberattacks, the development of cyberweapons and autonomous systems such as drones, and propaganda warfare. The malicious use of facial recognition and the privacy crises that follow have generated widespread public panic and anxiety. As technology continues to advance, policies must be updated and enforced. U.S. policymakers must balance risks, benefits, and responsibilities as they continue AI endeavors, and an enormous number of ethical issues still need to be tackled. AI research will increasingly dominate global affairs, and the outcomes of this transformation are hard to predict. Whatever the future holds, it is clear that artificial intelligence will be a part of it, or perhaps more accurately, directly beside humankind.