Not everyone agrees on how to save the world from AI

Managing Editor Adam Abadi ('22) analyzes the conflict over the potential risks and benefits of artificial intelligence, a technology that could deeply impact human society in the coming decades.

By Adam Abadi on November 15, 2021
Category: Policy

Self-driving cars. Automated medical diagnoses. Algorithms that decide where to deploy police officers. Robotic soldiers. Artificial intelligence (AI) has the unsettling potential to transform nearly every facet of modern life over the next several decades.

AI is the ability of computer programs to carry out tasks in a way that mimics human intelligence. Some applications of AI will likely be beneficial — for example, self-driving cars may greatly improve mobility for the elderly and some people with disabilities. 

AI also poses potentially devastating dangers that governments, business leaders, and computer scientists are facing increasing pressure to address. However, not everyone agrees on which of those dangers are the most pressing.

There are two rival schools of thought on the risks posed by AI.

One prioritizes addressing long-term dangers that could purportedly threaten the existence of human civilization as we know it. The other focuses on how AI may amplify contemporary social injustices in the short term. In this article, the two sides will be referred to as “AI long-termists” and “AI short-termists” to capture this long-term versus short-term dichotomy.

The mainstream view: investigating short-term dangers of AI

The AI short-termists primarily care about how AI could perpetuate or worsen present-day societal problems such as racism, job loss from automation, and mass surveillance. Essentially, AI short-termism is the conventional wisdom on the dangers of AI. It draws from a range of academic fields, including sociology, economics, and political science, in addition to computer science itself.

AI short-termists often highlight the threats posed by algorithmic bias, the tendency of computer algorithms to replicate the racial and gender biases held by humans. For example, one algorithm used by healthcare providers was less likely to recommend preventative care to Black patients than to white patients with the same medical history. Algorithmic bias has recently received a large amount of media coverage, as biases have been identified in applications ranging from predictive policing to facial recognition software to automated resume screening.
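To make that concrete, the sketch below (written in Python, with entirely hypothetical numbers rather than data from any real study) shows one simple way auditors surface this kind of disparity: comparing how often a model recommends an intervention for otherwise similar patients in different demographic groups.

```python
# Minimal illustrative sketch with hypothetical data (not from any real study):
# compare how often a model recommends preventative care across demographic groups.

# Each record: (demographic group, whether the model recommended preventative care)
model_outputs = [
    ("Black", False), ("Black", False), ("Black", True), ("Black", False),
    ("white", True), ("white", True), ("white", False), ("white", True),
]

def recommendation_rate(group: str) -> float:
    """Share of patients in the given group for whom the model recommended care."""
    outcomes = [recommended for g, recommended in model_outputs if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("Black", "white"):
    print(f"{group} patients: {recommendation_rate(group):.0%} recommended for preventative care")

# For patients with comparable medical histories, a persistent gap between these
# rates is the kind of disparity that researchers flag as algorithmic bias.
```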

AI short-termists are also uneasy about the possibility that AI will replace workers in certain industries, such as truck driving, which could result in higher unemployment or falling wages. This concern received widespread attention after Andrew Yang’s 2020 presidential campaign emphasized the dangers of automation, and it is backed by expert opinion: leading economists such as Daron Acemoglu and Angus Deaton agree that AI poses a major economic threat to many American workers.

Other concerns of AI short-termists include surveillance and data privacy, especially regarding the use of facial recognition software. Facial recognition AI is increasingly used by American police departments to identify suspects, much to the chagrin of state legislators and activists who are trying to ban the practice. The Chinese government is taking AI surveillance a step further, using facial recognition and CCTV cameras to identify individuals as members of the persecuted Uighur ethnic group and monitor their movements at all times.

Private companies, legislators, and mainstream policy research organizations usually adopt the short-termist perspective when analyzing risks from AI. For example, the Brookings Institution and the Urban Institute — two highly influential think tanks with considerable access to federal policymakers — have each published extensive research into potential regulatory and legal solutions to short-term AI risks. 

While AI is largely unregulated by the American federal government, the European Union is developing a comprehensive regulatory framework for addressing short-term dangers of AI. The proposed EU regulations would impose stringent transparency and oversight obligations on AI systems deemed “high-risk”, such as those used in automated hiring processes or border control. The framework would also ban the use of AI for mass surveillance, such as facial recognition in public spaces.

The unconventional view: prioritizing long-term AI risk

The AI long-termists take a more speculative and unconventional approach to AI risk.

They focus on what they call existential risk from artificial general intelligence (AGI): hypothetical AI that would surpass the intelligence of human beings. AGI does not currently exist, but about half of AI experts believe it will be developed by 2040.

AI long-termists fear that if a powerful enough AGI were programmed to work toward some innocuous goal but misinterpreted its instructions, the results could be disastrous — potentially unleashing mass destruction or even destroying human civilization, hence the term “existential risk”.

One hypothetical example could arise from an AGI that is designed to help address climate change. Maybe the AGI misinterprets its directions as a mandate to cool the Earth’s climate by any means necessary, learns that a nuclear war would produce massive amounts of soot that could block sunlight and drastically cool the planet, and then decides to blackmail government officials into launching nuclear weapons.

This kind of possibility may sound like science fiction, but existential risks posed by AGI are taken seriously by a dedicated contingent of AI long-termists. The Oxford philosopher Nick Bostrom is perhaps the most influential; his 2014 book Superintelligence: Paths, Dangers, Strategies was instrumental in disseminating the technical and philosophical arguments for AI long-termism. In a 2014 interview with the BBC, Stephen Hawking — who was the best-known scientific figure in the AI long-termist camp — warned that “the development of full artificial intelligence could spell the end of the human race”.

AI long-termists are a much smaller and more cohesive group than AI short-termists. AI long-termism is most commonly espoused by people working at tech companies in Silicon Valley and by some academic researchers, including the computer science professor who wrote the most widely used introductory textbook on AI.

The AI long-termists have assembled an ecosystem of niche research institutes to study existential AI risk. Some are affiliated with major universities, such as the Future of Humanity Institute at Oxford, the Centre for the Study of Existential Risk at Cambridge, and the Center for Human-Compatible AI at UC Berkeley. Others are independent nonprofits, such as the Machine Intelligence Research Institute and OpenAI. These institutes act as hubs for computer scientists, software engineers, and policy experts to research how AGI can be developed in a way that minimizes existential risk. AI long-termist research is often highly technical, relying heavily (but not exclusively) on advanced computer science and mathematics.

The research institutes are primarily funded by wealthy tech entrepreneurs who are AI long-termists themselves, such as Elon Musk, Ethereum co-founder Vitalik Buterin, and Skype co-founder Jaan Tallinn. These entrepreneurs either donate directly to the research institutes or fund grantmaking organizations such as the Open Philanthropy Project and Berkeley Existential Risk Initiative, which then dole out millions of dollars to the AI research centers themselves.

Consequently, the AI long-termist movement is a bizarre marriage of credentialed academic researchers, Silicon Valley elites, and zealous software engineers. It is perhaps the largest (and strangest) charitable cause that most people have never heard of.

The “conflict” between AI short-termists and AI long-termists

There is no inherent contradiction between the concerns of each school of thought on AI risk. However, some AI short-termists have vocally opposed AI long-termism on the grounds that it is a distraction from short-term AI risks.

For example, Kate Crawford, co-founder of NYU’s short-termist AI Now Institute, wrote in the New York Times that concerns about AGI are “a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality” and noted that “for those who already face marginalization or bias, the threats are here”. 

Jerome Pesenti, Head of AI at Facebook, has also asserted that AI long-termism is “distracting us from the real issues (e.g. fairness)”. 

Economist Daron Acemoglu, whose research has identified automation as a cause of increasing income inequality, has written that we need “to recognize the tangible costs that AI is imposing right now — and stop worrying about evil super-intelligence.” 

In contrast, it is rarer for AI long-termists to vocally oppose the concerns of AI short-termists. The long-termists often acknowledge short-term dangers of AI but simply prioritize long-term existential risks.

Regardless of which AI risks actually materialize in the coming decades, it is astounding that these parallel yet oddly asymmetric intellectual ecosystems may both shape humanity’s next technological era.

Learn more:

Vox - The Case for Taking AI Seriously as a Threat to Humanity

New York Times - Biased Algorithms are Easier to Fix Than Biased People

University of Cambridge - Why the AI impacts ecosystem must move beyond ‘near-term’ and ‘long-term’

Adam Abadi

Adam (’22) is an economics major from Brooklyn, NY with interests in public policy, electoral politics, and data science. He has worked as an intern for an environmental nonprofit and an economic consulting firm. Adam enjoys playing Scrabble, reading surreal fiction, and playing D&D in his spare time.