{"id":21621,"date":"2023-09-09T05:47:52","date_gmt":"2023-09-09T05:47:52","guid":{"rendered":"https:\/\/web3unplugged.io\/blog\/?p=21621"},"modified":"2023-09-09T05:47:54","modified_gmt":"2023-09-09T05:47:54","slug":"a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous","status":"publish","type":"post","link":"https:\/\/web3unplugged.io\/blog\/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous\/","title":{"rendered":"A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous"},"content":{"rendered":"\n<p>Everything dies \u2014 baby, that\u2019s a fact. And, if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected.<\/p>\n\n\n\n<p>The past year has witnessed an explosion in the capabilities of artificial intelligence systems. The bulk of these advances have occurred in generative AI \u2014 systems that produce novel text, image, audio, or video content from human input. The American company OpenAI took the world by storm with its public release of the ChatGPT large language model (LLM) in November 2022. In March, it released an updated version of ChatGPT powered by the more powerful GPT-4 model. Microsoft and Google have followed suit with Bing AI and Bard, respectively.<\/p>\n\n\n\n<p>Beyond the world of text, generative applications Midjourney, DALL-E, and Stable Diffusion produce unprecedentedly realistic images and videos. These models have burst into the public consciousness rapidly. Most people have begun to understand that generative AI is an unparalleled innovation, a type of machine that possesses capacities \u2014 natural language generation and artistic production \u2014 long thought to be sacrosanct domains of human ability.<\/p>\n\n\n\n<p>But generative AI is only the beginning. 
A team of Microsoft AI scientists recently released a paper arguing that GPT-4 \u2014 arguably the most sophisticated LLM yet \u2014 is showing the \u201csparks\u201d of artificial general intelligence (AGI), an AI that is as smart as \u2014 or smarter than \u2014 humans in every area of intelligence, rather than simply in one task. They argue that, \u201c[b]eyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting.\u201d In these multiple areas of intelligence, GPT-4 is \u201cstrikingly close to human-level performance.\u201d In short, GPT-4 appears to presage a program that can think and reason like a human. Half of surveyed AI experts expect an AGI in the next 40 years.<\/p>\n\n\n\n<p>AGI is the holy grail for tech companies involved in AI development \u2014 primarily the field\u2019s leaders, OpenAI and Google subsidiary DeepMind \u2014 because of the unfathomable profits and world-historical glory that would come with being the first to develop human-level machine intelligence.<\/p>\n\n\n\n<p>The private sector, however, is not the only relevant actor.<\/p>\n\n\n\n<p>Because leadership in AI offers advantages in both economic competitiveness and military prowess, great powers \u2014 primarily the United States and China \u2014 are racing to develop advanced AI systems. Much ink has been spilled on the risks of the military applications of AI, which have the potential to reshape the strategic and tactical domains alike by powering autonomous weapons systems, cyberweapons, nuclear command and control, and intelligence gathering. Many politicians and defense planners in both countries believe the winner of the AI race will secure global dominance.<\/p>\n\n\n\n<p>But the consequences of such a race reach far beyond the question of who wins global hegemony. 
The perception of an AI \u201carms race\u201d is likely to accelerate the already-risky development of AI systems. The pressure to outpace adversaries by rapidly pushing the frontiers of a technology that we still do not fully understand or fully control \u2014 without commensurate efforts to make AI safe for humans \u2014 may well pose an existential risk to humanity.<\/p>\n\n\n\n<p>An Arms Race?<\/p>\n\n\n\n<p>The dangers of arms races are well established by history. Throughout the late 1950s, American policymakers began to fear that the Soviet Union was outpacing the U.S. in the deployment of nuclear-capable missiles. This ostensible \u201cmissile gap\u201d pushed the U.S. to scale up its ballistic missile development to \u201ccatch up\u201d to the Soviets.<\/p>\n\n\n\n<p>In the early 1960s, it became clear the missile gap was a myth. The United States, in fact, led the Soviet Union in missile technology. However, just the perception of falling behind an adversary contributed to a destabilizing buildup of nuclear and ballistic missile capabilities, with all its associated dangers of accidents, miscalculations, and escalation.<\/p>\n\n\n\n<p>Missile gap logic is rearing its ugly head again today, this time with regard to artificial intelligence, which could prove more dangerous than nuclear weapons. China\u2019s AI efforts are raising fears among American officials, who are concerned about falling behind. Each new Chinese leap in AI invariably produces a flurry of warnings that China is on its way to dominating the field.<\/p>\n\n\n\n<p>The reality of such a purported \u201cAI gap\u201d is complicated. Beijing does appear to lead the U.S. in military AI innovation. China also leads the world in AI academic journal citations and commands a formidable talent base. However, when it comes to the pursuit of AGI, China seems to be the laggard. 
Chinese companies\u2019 LLMs are one to three years behind their American counterparts, and OpenAI set the pace for generative models. Furthermore, the Biden administration\u2019s 2022 export controls on advanced computer chips cut China off from a key hardware prerequisite for building advanced AI.<\/p>\n\n\n\n<p>Who is \u201cahead\u201d in the AI race, however, is not the most important question. The mere perception of an \u201carms race\u201d may well push companies and governments to cut corners and eschew safety research and regulation. For AI \u2014 a technology whose safety relies upon slow, steady, regulated, and collaborative development \u2014 an arms race may be catastrophically dangerous.<\/p>\n\n\n\n<p>The Alignment Problem<\/p>\n\n\n\n<p>Despite dramatic successes in AI, humans still cannot reliably predict or control its outputs and actions. While research focused on AI capabilities has produced stunning advancements, the same cannot be said for research in the field of AI alignment, which aims to ensure that AI systems can be controlled by their designers and made to act in ways compatible with humanity\u2019s interests.<\/p>\n\n\n\n<p>Anyone who has used ChatGPT understands this lack of human control. It is not difficult to circumvent the program\u2019s guardrails, and it is far too easy to encourage chatbots to say offensive things. When it comes to more advanced models, even if designers are brilliant and benevolent, and even if the AI pursues only its human-chosen ultimate goals, there remains a path to catastrophe.<\/p>\n\n\n\n<p>Consider the following thought experiment about how AGI may be deployed. 
A human-level or superhuman intelligence is programmed by its human creators with a defined, benign goal \u2014 say, \u201cdevelop a cure for Alzheimer\u2019s\u201d or \u201cincrease my factory\u2019s production of paperclips.\u201d The AI is given access to a constrained \u201cenvironment\u201d of instruments: for instance, a medical lab or a factory.<\/p>\n\n\n\n<p>The problem with such deployment is that, while humans can program an AI to pursue a chosen ultimate end, it is infeasible for humans to define each instrumental, or intermediate, subgoal the AI will pursue \u2014 think acquiring steel before it can make paperclips.<\/p>\n\n\n\n<p>AI works through machine learning: it trains on vast amounts of data and \u201clearns,\u201d based on that data, how to produce desired outputs from its inputs. However, the process by which an AI connects inputs to outputs \u2014 the internal calculations it performs \u201cunder the hood\u201d \u2014 is a \u201cblack box.\u201d Humans cannot understand precisely what an AI is learning to do. For example, an AI trained to \u201cpick strawberries\u201d might instead have learned to \u201cpick the nearest red object\u201d and, when released into a different environment, pick both strawberries and red peppers. Further examples abound.<\/p>\n\n\n\n<p>In short, an AI might do precisely what it was trained to do and still produce an unwanted outcome. The means to its programmed ends \u2014 crafted by an alien, incomprehensible intelligence \u2014 could be prejudicial to humans. The \u201cAlzheimer\u2019s\u201d AI might kidnap billions of humans as test subjects. The \u201cpaperclip\u201d AI might turn the entire Earth into metal to make paperclips. 
Because humans can neither predict every possible means AI might employ nor \u201cteach\u201d it to reliably perform a definite action, programming away any dangerous outcome is infeasible.<\/p>\n\n\n\n<p>If sufficiently intelligent, and capable of defeating resistant humans, an AI may well wipe out life on Earth in its single-minded pursuit of its goal. If given control of nuclear command and control \u2014 like the Skynet system in Terminator \u2014 or access to chemicals and pathogens, AI could engineer an existential catastrophe.<\/p>\n\n\n\n<p>Arms Racing or Alignment Governance? A Risky Tradeoff<\/p>\n\n\n\n<p>How does international competition come into play when discussing the technical issue of alignment? Put simply, the faster AI advances, the less time we will have to learn how to align it. The alignment problem is not yet solved, nor is it likely to be solved in time without slower and more safety-conscious development.<\/p>\n\n\n\n<p>The fear of losing a technological arms race may encourage corporations and governments to accelerate development and cut corners, deploying advanced systems before they are safe. Many top AI scientists and organizations \u2014 among them the team at safety lab Anthropic, Open Philanthropy\u2019s Ajeya Cotra, DeepMind founder Demis Hassabis, and OpenAI CEO Sam Altman \u2014 believe that gradual development is preferable to rapid development because it offers researchers more time to build safety features into new models; it is easier to align a less powerful model than a more powerful one.<\/p>\n\n\n\n<p>Furthermore, fears of China\u2019s \u201ccatching up\u201d may imperil efforts to enact AI governance and regulatory measures that could slow down dangerous development and speed up alignment. Altman and former Google CEO Eric Schmidt are on record warning Congress that regulation will slow down American companies to China\u2019s benefit. A top Microsoft executive has used the language of the Soviet missile gap. 
The logic goes: \u201cAGI is inevitable, so the United States should be first.\u201d The problem is that, in the words of Paul Scharre, \u201cAI technology poses risks not just to those who lose the race but also to those who win it.\u201d<\/p>\n\n\n\n<p>Likewise, the perception of an arms race may preclude the development of a global governance framework on AI. A vicious cycle may emerge where an arms race prevents international agreements, which increases paranoia and accelerates that same arms race.<\/p>\n\n\n\n<p>International conventions on the nonproliferation of nuclear bombs and missiles and the multilateral ban on biological weapons were great Cold War successes that defused arms races. Similar conventions over AI could dissuade countries from rapidly deploying AI into more risky domains in an effort to increase national power. More global cooperation over AI\u2019s deployment will reduce the risk that a misaligned AI is integrated into military \u2014 and even nuclear \u2014 applications that would give it a greater capacity to create a catastrophe for humanity.<\/p>\n\n\n\n<p>While it is currently unclear whether government regulation could meaningfully increase the chances of solving AI alignment, regulation \u2014 both domestic and multilateral \u2014 may at least encourage slower and steadier development.<\/p>\n\n\n\n<p>Fortunately, momentum for private Sino-American cooperation on AI alignment may be building. American AI executives and experts have met with their Chinese counterparts to discuss alignment research and mutual governance. Altman himself recently went on a world tour to discuss AI capabilities and regulation with world leaders. As governments are educated as to the risks of AI, the tide may be turning toward a more collaborative world. 
Such a shift would unquestionably be good news.<\/p>\n\n\n\n<p>However, the outlook is not all rosy: as the political salience of AI continues to increase, the questions of speed, regulation, and cooperation may become politicized into the larger American partisan debate over China. Regulation may be harder to push when \u201cChina hawks\u201d begin to associate slowing AI with losing an arms race to China. Recent rhetoric in Congress has emphasized the AI arms race and downplayed the necessity of regulation.<\/p>\n\n\n\n<p>Whether or not it is real, the United States and China appear convinced that the AI arms race is happening \u2014 an extremely dangerous proposition for a world that does not otherwise appear to be on the verge of an alignment breakthrough. A detente on this particular technological race \u2014 however unlikely it may seem today \u2014 may be critical to humanity\u2019s long-term flourishing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Everything dies \u2014 baby, that\u2019s a fact. And, if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected. The past year has witnessed an explosion in the capabilities of artificial intelligence systems. 
The bulk of these advances have occurred in generative AI \u2014 [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":21623,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"none","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[2],"tags":[],"class_list":["post-21621","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"rttpg_featured_image_url":{"full":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",1600,1152,false],"landscape":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",1600,1152,false],"portraits":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",1600,1152,false],"thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22-150x150.jpg",150,150,true],"medium":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22-300x216.jpg",300,216,true],"large":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22-1024x737.jpg",1024,737,true],"1536x1536":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22-1536x1106.jpg",1536,1106,true],"2048x2048":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",1600,1152,false],"post-thumbnail":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",583,420,false],"graptor-sq-xs":["https:\/\/web3unplugged.io\/blog\/wp-content\/uploads\/2023\/09\/Untitled-22.jpg",100,72,false]},"rttpg_author":{"display_name":"Admin CG","author_link":"https:\/\/web3unplugged.io\/blog\/author\/admin-cg\/"},"rttpg_comment":0,"rttpg_category":"<a href=\"https:\/\/web3unplugged.io\/blog\/category\/news\/\" rel=\"category tag\">news<\/a>","rttpg_excerpt":"Everything dies \u2014 
baby, that\u2019s a fact. And, if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected. The past year has witnessed an explosion in the capabilities of artificial intelligence systems. The bulk of these advances have occurred in generative AI \u2014&hellip;","_links":{"self":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21621","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/comments?post=21621"}],"version-history":[{"count":1,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21621\/revisions"}],"predecessor-version":[{"id":21624,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/posts\/21621\/revisions\/21624"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media\/21623"}],"wp:attachment":[{"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/media?parent=21621"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/categories?post=21621"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/web3unplugged.io\/blog\/wp-json\/wp\/v2\/tags?post=21621"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}