Stag Hunt Example: International Relations

From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break and the stag will be lost. [49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production.

This subsection looks at the four predominant models that describe the situation two international actors might find themselves in when considering cooperation in developing AI, where research and development is costly and its outcome is uncertain. [23] United Nations Office for Disarmament Affairs, "Pathways to Banning Fully Autonomous Weapons," United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/. Table 7. One final strategy that a safety-maximizing actor can employ in order to maximize the chances for cooperation is to change the type of game being played by using strategies or policies to affect the payoff variables in play. Robert J. Aumann, "Nash Equilibria are not Self-Enforcing," in Economic Decision Making: Games, Econometrics and Optimisation (Essays in Honor of Jacques Dreze), edited by J. J. Gabszewicz, J.-F. Richard, and L. Wolsey, Elsevier Science Publishers, Amsterdam, 1990. Indeed, this gives an indication of how important the Stag Hunt is to international relations more generally. [2] Tom Simonite, "Artificial Intelligence Fuels New Global Arms Race," Wired, September 8, 2017, https://www.wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race/.

Depending on the payoff structures, we can anticipate different likelihoods of, and preferences for, cooperation or defection on the part of the actors. The payoff matrix is displayed as Table 12. A day passes. No payoffs (that satisfy the above conditions, including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability higher than one half.

If both sides cooperate in an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from an AI Coordination Regime consists of the probability that each actor believes such a regime would achieve a beneficial AI, expressed as P_(b|A)(AB) for Actor A's belief and P_(b|B)(AB) for Actor B's, times each actor's perceived benefit of AI, expressed as b_A and b_B.

Rabbits come in the form of different opportunities for short-term gain by way of graft, electoral fraud, and the threat or use of force. In Just War Theory, what is the doctrine of double effect? Absolute gains will engage in comparative advantage and expand the overall economy, while relative gains concern how much one actor gains compared to another. Each model is differentiated primarily by the payoffs to cooperating or defecting for each international actor.

In the stag hunt, what matters is trust: can actors trust that the other will follow through? That depends on what they believe about each other; what actors pursue hinges on how likely the other actor is to follow through. What is game theory? A theory of strategic interaction. The second player, or nation in this case, has the same option. The current landscape suggests that AI development is being led by two main international actors: China and the United States.
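To make the cooperative-payoff expression above concrete, here is a minimal Python sketch of the expected-benefit calculation under mutual cooperation. The function and variable names are illustrative assumptions, not the paper's own code or calibration; the sketch simply multiplies each actor's subjective probability that a Coordination Regime yields a beneficial AI by that actor's perceived benefit, i.e. P_(b|X)(AB) × b_X.

```python
# Illustrative sketch (assumed names and numbers, not the paper's code):
# expected benefit each actor anticipates from mutual cooperation in an
# AI Coordination Regime, computed as P_(b|X)(AB) * b_X for actor X.

def expected_cooperation_benefit(p_beneficial_given_regime: float,
                                 perceived_benefit: float) -> float:
    """Probability the actor assigns to the regime producing a beneficial AI,
    times that actor's perceived benefit of AI."""
    return p_beneficial_given_regime * perceived_benefit

# Hypothetical numbers for two actors A and B.
payoff_A = expected_cooperation_benefit(p_beneficial_given_regime=0.8, perceived_benefit=100.0)
payoff_B = expected_cooperation_benefit(p_beneficial_given_regime=0.6, perceived_benefit=120.0)
print(payoff_A, payoff_B)  # 80.0 72.0
```

The point of the sketch is only that each actor's cooperative payoff is driven by two separate perceptions: how likely the regime is to succeed, and how valuable a beneficial AI would be to that actor.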
If the United States beats a quick path to the exits, the incentives for Afghan power brokers to go it alone and engage in predatory, even cannibalistic behavior may prove irresistible. Despite the damage it could cause, the impulse to go it alone has never been far off, given the profound uncertainties that define the politics of any war-torn country.

Similar to the Prisoner's Dilemma, Chicken occurs when each actor's greatest preference would be to defect while their opponent cooperates. Weiss, Uri, and Joseph Agassi. Deadlock occurs when each actor's greatest preference would be to defect while their opponent cooperates. (1) the responsibility of the state to protect its own population from genocide, war crimes, ethnic cleansing and crimes against humanity, and from their incitement; What is the difference between structural and operational conflict prevention?

The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality. If increases in security cannot be distinguished as purely defensive, this increases instability. Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained.

But what is even more interesting (even despairing) is that, when the situation is more localized and involves a smaller network of acquainted people, most players still choose to hunt the hare rather than work together to hunt the stag. They will be tempted to use the prospect of negotiations with the Taliban and the upcoming election season to score quick points at their rivals' expense, foregoing the kinds of political cooperation that have held the country together until now.

'The "liberal democratic peace" thesis puts the nail into the coffin of Kenneth Waltz's claim that wars are principally caused by the anarchical nature of the international system.' This is the third technology revolution. Artificial intelligence is the future, not only for Russia, but for all humankind. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.[26]

In short, the theory suggests the variables that affect the payoff structure of cooperating with or defecting from an AI Coordination Regime, which determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). The complex machinations required to create a lasting peace may well be under way, but any viable agreement, and the eventual withdrawal of U.S. forces that it would entail, requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism. As a result, concerns have been raised that such a race could create incentives to skimp on safety.

An example of the payoff matrix for the stag hunt is pictured in Figure 2. [44] Thomas C. Schelling & Morton H. Halperin, Strategy and Arms Control. A classic game-theoretic allegory best demonstrates the various incentives at stake for the United States and Afghan political elites at this moment. If participation is not universal, the hunters cannot surround the stag and it escapes, leaving everyone who hunted stag hungry.
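The all-or-nothing character of that last point (the stag escapes unless every hunter holds the line) can be shown with a short Python sketch. The payoff numbers and function names are hypothetical, chosen only to respect the usual stag hunt ordering; they are not taken from the essay.

```python
# Minimal N-player stag hunt sketch (illustrative payoffs, not from the essay).
# The stag is caught only if every hunter cooperates; any defector takes a hare.

STAG_SHARE = 4   # payoff to each hunter if the stag is caught
HARE_PAYOFF = 1  # payoff to any hunter who defects and catches a hare
NOTHING = 0      # payoff to a cooperator when the stag escapes

def payoffs(choices):
    """choices: list of 'stag' or 'hare', one entry per hunter."""
    everyone_cooperated = all(c == "stag" for c in choices)
    return [
        STAG_SHARE if everyone_cooperated else (HARE_PAYOFF if c == "hare" else NOTHING)
        for c in choices
    ]

print(payoffs(["stag", "stag", "stag"]))  # [4, 4, 4] -- stag brought down
print(payoffs(["stag", "hare", "stag"]))  # [0, 1, 0] -- one defector spoils the hunt
```

A single defection leaves every remaining cooperator with nothing, which is exactly why the larger and more fragile the band, the more tempting the rabbit becomes.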
[22] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, "Machine Bias," ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Table 5. What should Franks do? Two players, simultaneous decisions. The metaphors that populate game theory models, images such as prisoners ... The second technology revolution caused World War II. In Game Theory 101: The Complete Textbook, William Spaniel shows how to solve the Stag Hunt using pure strategy Nash equilibrium. I thank my advisor, Professor Allan Dafoe, for his time, support, and introduction to this paper's subject matter in his Global Politics of AI seminar.

Payoff matrix for simulated Chicken game. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits. Actor A's preference order: CC > DC > DD > CD; Actor B's preference order: CC > CD > DD > DC. The game is a prototype of the social contract. The Stag Hunt game, derived from Rousseau's story, describes the following scenario: a group of two or more people can cooperate to hunt down the more rewarding stag or go their separate ways and hunt less rewarding hares. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry. Examples of states include the United States, Germany, China, India, Bolivia, South Africa, Brazil, Saudi Arabia, and Vietnam. A great example of Chicken in IR is the Cuban Missile Crisis. Table 3.

Finally, there are a plethora of other assuredly relevant factors that this theory does not account for or fully consider, such as multiple iterations of game playing, degrees of perfect information, or how other diplomacy-affecting spheres (economic policy, ideology, political institutional setup, etc.) shape the dynamics. [41] AI, being a dual-use technology, does not lend itself to unambiguously defensive (or otherwise benign) investments. Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree on AI, strategic resources should be especially allocated to addressing this vulnerability.

The Stag Hunt: a brief introduction to the stag hunt game in international relations. In this scenario, however, both actors can also anticipate receiving additional harm from the defector pursuing their own AI development outside of the regime. Gray[36] defines an arms race as "two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties." In order to mitigate or prevent the deleterious effects of arms races, international relations scholars have also studied the dynamics that surround arms control agreements and the conditions under which actors might coordinate with one another.

The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base. Payoff variables for simulated Prisoner's Dilemma. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.
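The contrast drawn here, two pure-strategy equilibria in the Stag Hunt versus a single defect-defect equilibrium in the Prisoner's Dilemma, can be checked mechanically. Below is a small Python sketch with a brute-force pure-strategy Nash equilibrium finder; the numeric payoffs are standard textbook values chosen only to respect each game's preference ordering, not figures from this text.

```python
# Brute-force pure-strategy Nash equilibria for 2x2 games.
# Payoff dict maps (row_action, col_action) -> (row_payoff, col_payoff).

def pure_nash_equilibria(payoffs, actions=("C", "D")):
    equilibria = []
    for r in actions:
        for c in actions:
            row_u, col_u = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= row_u for r2 in actions)  # row cannot gain by deviating
            col_ok = all(payoffs[(r, c2)][1] <= col_u for c2 in actions)  # column cannot gain by deviating
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Illustrative payoffs (C = cooperate / hunt stag, D = defect / hunt hare).
stag_hunt = {("C", "C"): (4, 4), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (2, 2)}
prisoners_dilemma = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

print(pure_nash_equilibria(stag_hunt))          # [('C', 'C'), ('D', 'D')]
print(pure_nash_equilibria(prisoners_dilemma))  # [('D', 'D')]
```

The Stag Hunt returns both mutual cooperation and mutual defection as equilibria, which is the whole coordination problem; the Prisoner's Dilemma returns only mutual defection.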
[13] Tesla Inc., Autopilot, https://www.tesla.com/autopilot. As of 2017, there were 193 member-states of the international system as recognized by the United Nations. As discussed, there are both great benefits and harms to developing AI, and due to the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China). Hunting stags is most beneficial for society but requires a great deal of trust among its members.

As will hold for the following tables, the most preferred outcome is indicated with a 4, and the least preferred outcome is indicated with a 1. Actor A's preference order: DC > CC > DD > CD; Actor B's preference order: CD > CC > DD > DC. They can cheat on the agreement and hope to gain more than the first nation, but if they both cheat, they both do very poorly. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. As a result, this could reduce a rival actor's perceived relative benefits gained from developing AI. [12] Apple Inc., Siri, https://www.apple.com/ios/siri/.

What is the 'New Barbarism' view of contemporary conflicts? She dismisses Clausewitz with the argument that he saw war as "the use of military means to defeat another state" and that this approach to warfare is no longer applicable in today's conflicts. Also, trade negotiations might be better thought of as an iterated game: the game is played repeatedly, and the nations interact with each other more than once over time. One hunter can catch a hare alone with less effort and less time, but it is worth far less than a stag and has much less meat. As an advocate of structural realism, Gray[45] questions the role of arms control, as he views the balance of power as a self-sufficient and self-perpetuating system of international security that he finds preferable.

In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies)[29], and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. Payoff variables for simulated Chicken game. [38] Michael D. Intriligator & Dagobert L. Brito, "Formal Models of Arms Races," Journal of Peace Science 2, 1 (1976): 77-88. Based on the values that each actor assigns to their payoff variables, we can expect different coordination models (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) to arise. The paper proceeds as follows. Prisoner's Dilemma, Stag Hunt, Battle of the Sexes, and Chicken are discussed in our text.

Finally, if both sides defect or effectively choose not to enter an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from this scenario is solely the probability that they achieve a beneficial AI times each actor's perceived benefit of receiving AI (without distributional considerations): P_(b|A)(A) × b_A for Actor A and P_(b|B)(B) × b_B for Actor B.
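Since the text distinguishes the four coordination models purely by each actor's preference ordering over the outcomes (DC, CC, DD, CD), a short Python sketch can make the classification explicit. The orderings below are the standard textbook ones for each game (the Stag Hunt row matches the CC > DC > DD > CD ordering given earlier); the helper names are assumptions for illustration, not anything defined in the paper.

```python
# Classify a symmetric 2x2 game by the row actor's ordinal preferences over
# outcomes, written as (own action, other's action) with C = cooperate, D = defect.
# Standard textbook orderings, best to worst:
CANONICAL_ORDERINGS = {
    ("DC", "CC", "DD", "CD"): "Prisoner's Dilemma",
    ("DC", "CC", "CD", "DD"): "Chicken",
    ("DC", "DD", "CC", "CD"): "Deadlock",
    ("CC", "DC", "DD", "CD"): "Stag Hunt",
}

def classify(ordering):
    """ordering: the four outcomes from most to least preferred."""
    return CANONICAL_ORDERINGS.get(tuple(ordering), "unrecognized ordering")

print(classify(["CC", "DC", "DD", "CD"]))  # Stag Hunt: mutual cooperation ranked first
print(classify(["DC", "CC", "DD", "CD"]))  # Prisoner's Dilemma: temptation ranked first
```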
Furthermore, a unilateral strategy could be employed under a Prisoner's Dilemma in order to effect cooperation. Most events in IR are not mutually beneficial, like in the Battle of the Sexes. Why do trade agreements even exist? Hume's second example involves two neighbors wishing to drain a meadow. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). This is visually represented in Table 2 with each actor's preference order explicitly outlined. As a result, there is no conflict between self-interest and mutual benefit, and the dominant strategy of both actors would be to defect.

The article states that the only difference between the two scenarios is that the localized group decided to hunt hares more quickly. One is the coordination of slime molds. David Hume provides a series of examples that are stag hunts. In 2016, the Obama Administration developed two reports on the future of AI. It comes with colossal opportunities, but also threats that are difficult to predict. Because of the instantaneous nature of this particular game, we can anticipate its occurrence to be rare in the context of technology development, where opportunities to coordinate are continuous. Payoff variables for simulated Stag Hunt, Table 14. Hunting stag is successful only if both hunters hunt stag, while each hunter can catch a less valuable hare on his own.

Finally, I discuss the relevant policy and strategic implications this theory has for achieving international AI coordination, and assess the strengths and limitations of the theory in practice. In a security dilemma, each state cannot trust the other to cooperate. [5] Stuart Armstrong, Nick Bostrom, & Carl Shulman, "Racing to the precipice: a model of artificial intelligence development," AI and Society 31, 2 (2016): 201-206. This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies. (e.g., including games such as Chicken and Stag Hunt.) This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed strategy Nash equilibrium.

Combining both countries' economic and technical ecosystems with government pressures to develop AI, it is reasonable to conceive of an AI race primarily dominated by these two international actors. Charisma unifies people supposedly because people aim to be as successful as the leader. [21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more societal realms. What are some good examples of coordination games? These strategies are not meant to be exhaustive by any means, but hopefully show how the outlined theory might provide practical use and motivate further research and analysis. [20] Will Knight, "Could AI Solve the World's Biggest Problems?" MIT Technology Review, January 12, 2016, https://www.technologyreview.com/s/545416/could-ai-solve-the-worlds-biggest-problems/.

International Relations, Classical Realism (Morgenthau): anarchy is assumed as a prominent concern in international relations, with the international stag hunt ... Meanwhile, the escalation of an arms race where neither side halts or slows progress is less desirable to each actor's safety than both fully entering the agreement. Individuals, factions, and coalitions previously on the same pro-government side have begun to trade accusations with one another.
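One standard way to make the earlier point about unilateral strategies and iteration concrete is a conditional-cooperation strategy such as tit-for-tat in a repeated Prisoner's Dilemma. The sketch below is illustrative only: tit-for-tat is a textbook example of such a strategy, not necessarily the specific policy the essay has in mind, and the payoff numbers are the usual textbook values rather than anything from this text.

```python
# Repeated Prisoner's Dilemma sketch: a unilateral conditional strategy
# (tit-for-tat) versus unconditional defection. Illustrative payoffs only.

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history)
        b = strategy_b([(y, x) for (x, y) in history])  # opponent sees the mirrored history
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): conditional cooperators sustain cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mutual defection
```

The design point is that the conditional cooperator commits unilaterally to a rule that rewards cooperation and punishes defection, which is what gives repetition its force.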
She argues that states are no longer the primary actors in war, having been replaced by "group[s] identified in terms of ethnicity, religion, or tribe," and that such forces rarely fight each other in a decisive encounter. Other names for the stag hunt or its variants include "assurance game," "coordination game," and "trust dilemma." Name four key thinkers of the theory of non-violent resistance: Gandhi, Martin Luther King, Malcolm X, Cesar Chavez. Additionally, Koubi[42] develops a model of military technological races that suggests the level of spending on research and development varies with changes in an actor's relative position in a race.

This iterated structure creates an incentive to cooperate; cheating in the first round significantly reduces the likelihood that the other player will trust one enough to attempt to cooperate in the future. This means that it remains in U.S. interests to stay in the hunt for now, because, if the game theorists are right, that may actually be the best path to bringing our troops home for good. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare. How does the Just War Tradition position itself in relation to both Realism and Pacifism? Under this principle, parties to an armed conflict must always distinguish between civilians and civilian objects on the one hand, and combatants and military targets on the other.

Formally, a generic stag hunt requires payoffs ordered a > b ≥ d > c, where a is the payoff when both hunt stag, b the payoff for hunting hare against a stag hunter, d the payoff when both hunt hare, and c the payoff for hunting stag alone. Members of the Afghan political elite have long found themselves facing a similar trade-off. [35] Outlining what this Coordination Regime might look like could be the topic of future research, although potential desiderata could include legitimacy, neutrality, accountability, and technical capacity; see Allan Dafoe, Cooperation, Legitimacy, and Governance in AI Development, Working Paper (2016). If both choose to row they can successfully move the boat. Use integration to find the indicated probabilities. Half a stag is better than a brace of rabbits, but the stag will only be brought down with a concerted, cooperative effort. [43] Edward Moore Geist, "It's already too late to stop the AI arms race - we must manage it instead," Bulletin of the Atomic Scientists 72, 5 (2016): 318-321. [7] Aumann concluded that in this game "agreement has no effect, one way or the other." Together, the likelihood of winning and the likelihood of lagging sum to 1.

By failing to agree to a Coordination Regime at all [D,D], we can expect the chance of developing a harmful AI to be highest, as both actors are sparing in applying safety precautions to development. Payoff variables for simulated Deadlock, Table 10. Table 8. Finally, Jervis[40] also highlights the security dilemma, where increases in an actor's security can inherently lead to the decreased security of a rival state. How do strategies of non-violent resistance view power differently from conventional 'monolithic' understandings of power? Moreover, each actor is more confident in their own capability to develop a beneficial AI than in their opponent's. The ongoing U.S. presence in Afghanistan not only enables the increasingly capable Afghan National Security Forces to secure more of their homeland, but it also serves as a very important political signal.
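For readers who want the mixed-strategy claim spelled out, the symmetric mixed equilibrium can be derived directly from the generic payoffs a, b, c, d just given. This is a standard textbook derivation, not something reproduced from the essay.

```latex
% Symmetric mixed-strategy equilibrium of the generic stag hunt (a > b >= d > c).
% A player is indifferent when the opponent hunts Stag with probability p:
\[
  p\,a + (1-p)\,c \;=\; p\,b + (1-p)\,d
  \qquad\Longrightarrow\qquad
  p^{*} \;=\; \frac{d-c}{(a-b)+(d-c)} .
\]
```

The deviation losses (a - b) and (d - c) are exactly the quantities compared in the risk-dominance test, which is why that condition pins the mixed equilibrium to one side of one half, as the text notes.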
A hurried U.S. exit will incentivize Afghanistan's various competing factions more than ever before to defect in favor of short-term gains, on the assumption that one of the lead hunters in the band has given up the fight. In the context of the AI Coordination Problem, a Stag Hunt is the most desirable outcome, as mutual cooperation results in the lowest risk of racing dynamics and the associated risk of developing a harmful AI. Dipali Mukhopadhyay is an associate professor of international and public affairs at Columbia University and the author of Warlords, Strongman Governors, and the State in Afghanistan (Cambridge University Press, 2014). How can the security dilemma be mitigated and transcended? In the US, the military and intelligence communities have a long-standing history of supporting transformative technological advancements such as nuclear weapons, aerospace technology, cyber technology and the Internet, and biotechnology. [14] IBM, Deep Blue, Icons of Progress, http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/.

We are all familiar with the basic Prisoner's Dilemma. An example of norm enforcement provided by Axelrod (1986: 1100) is of a man hit in the face with a bottle for failing to support a lynching in the Jim Crow South. Julian E. Barnes and Josh Chin, "The New Arms Race in AI," Wall Street Journal, March 2, 2018, https://www.wsj.com/articles/the-new-arms-race-in-ai-1520009261; Cecilia Kang and Alan Rappeport, "The New U.S.-China Rivalry: A Technology Race," March 6, 2018, https://www.nytimes.com/2018/03/06/business/us-china-trade-technology-deals.html. As a result, it is important to consider Deadlock as a potential model that might explain the landscape of AI coordination. Since the payoff of hunting the stag is higher, these interactions lead to an environment in which the stag hunters prosper. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c. Different social/cultural systems are prone to clash. Namely, the probability of developing a harmful AI is greatest in a scenario where both actors defect, while the probability of developing a harmful AI is lowest in a scenario where both actors cooperate. f(x) = (3/32)(4x - x^2) if 0 ≤ x ≤ 4, and 0 otherwise.

It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, one prisoner may seemingly betray the other, but without losing the other's trust. Any individual move to capture a rabbit will guarantee a small meal for the defector but ensure the loss of the bigger, shared bounty. This is why international trade negotiations are often tense and difficult. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach ..." The Stag Hunt represents an example of a compensation structure in theory. As the infighting continues, the impulse to forego the elusive stag in favor of the rabbits on offer will grow stronger by the day. Using game theory as a way of modeling strategically motivated decisions has direct implications for understanding basic international relations issues. [50] This is visually represented in Table 3 with each actor's preference order explicitly outlined.
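The probability-density fragment above appears to come from an unrelated textbook exercise, but as reconstructed it is a valid density, and one probability can be worked as an example (the choice of P(X ≤ 2) is only an illustration, since the exercise's original question is not preserved):

```latex
\[
  \int_{0}^{4} \tfrac{3}{32}\,(4x - x^{2})\,dx
    = \tfrac{3}{32}\Bigl[\,2x^{2} - \tfrac{x^{3}}{3}\Bigr]_{0}^{4}
    = \tfrac{3}{32}\cdot\tfrac{32}{3} = 1,
  \qquad
  P(X \le 2)
    = \tfrac{3}{32}\Bigl[\,2x^{2} - \tfrac{x^{3}}{3}\Bigr]_{0}^{2}
    = \tfrac{3}{32}\cdot\tfrac{16}{3} = \tfrac{1}{2}.
\]
```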
First, I survey the relevant background of AI development and coordination by summarizing the literature on the expected benefits and harms from developing AI and on which actors are relevant in an international safety context. However, anyone who hunts rabbit can do so successfully by themselves, but with a smaller meal. But, at various critical junctures, including the country's highly contentious presidential elections in 2009 and 2014, rivals have ultimately opted to stick with the state rather than contest it. Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. [37] Samuel P. Huntington, "Arms Races: Prerequisites and Results," Public Policy 8 (1958): 41-86. First-mover advantage will be decisive in determining the winner of the race, due to the expected exponential growth in capabilities of an AI system and the resulting difficulty other parties face in catching up. Anarchy in International Relations Theory: The Neorealist-Neoliberal Debate. Payoff matrix for simulated Stag Hunt. [15] Sam Byford, "AlphaGo beats Lee Se-dol again to take Google DeepMind Challenge series," The Verge, March 12, 2016, https://www.theverge.com/2016/3/12/11210650/alphago-deepmind-go-match-3-result.

They suggest that new weapons (or systems) that derive from radical technological breakthroughs can render a first strike more attractive, whereas basic arms buildups provide deterrence against a first strike. [28] Once this Pandora's Box is opened, it will be difficult to close. Interestingly enough, the Stag Hunt can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, or to achieve mutual benefit. [8] If truly present, a racing dynamic[9] between these two actors is a cause for alarm and should inspire strategies to develop an AI Coordination Regime between them.

The heated debate about the possibility of a U.S. troop withdrawal from Afghanistan, prompted by recent negotiations between the U.S. government and the Taliban, has focused understandably on the military value of security assistance. The theory outlined in this paper looks at just this and will be expanded upon in the following subsection. [28] Armstrong et al., "Racing to the precipice: a model of artificial intelligence development." In this model, each actor's incentives are not fully aligned to support mutual cooperation, and thus should present worry for individuals hoping to reduce the possibility of developing a harmful AI. The response from Kabul involved a predictable combination of derision and alarm, for fear that bargaining will commence on terms beyond the current administration's control. This additional benefit is expressed here as P_(b|A)(A) × b_A. Throughout history, armed force has been a ubiquitous characteristic of the relations between independent polities, be they tribes, cities, nation-states, or empires. Not wanting to miss out on the high geopolitical drama, Moscow invited Afghanistan's former president, Hamid Karzai, and a cohort of powerful elites, among them rivals of the current president, to sit down with a Taliban delegation last week. So far, the readings discussed have commented on the unique qualities of technological or qualitative arms races.
Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations in one's own security, perhaps through bolstering one's own AI development program. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. Payoff matrix for simulated Prisoner's Dilemma. [30] Greg Allen and Taniel Chan, Artificial Intelligence and National Security, report for Harvard Kennedy School, Belfer Center for Science and International Affairs, July 2017, https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf: 71-110. It is his argument: "The information that such an agreement conveys is not that the players will keep it (since it is not binding), but that each wants the other to keep it."
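The claim that a large perceived harm pushes the interaction toward a Stag Hunt can be illustrated with a toy calculation. The probabilities and magnitudes below are invented for the example (they are not the paper's numbers), and the scoring rule is a deliberately simplified stand-in for the paper's payoff expressions; the point is only that raising the harm term flips the top-ranked outcome from unilateral defection to mutual cooperation.

```python
# Toy expected-value model (all numbers are assumptions for illustration).
# Each outcome is scored as P(beneficial AI reaches me) * benefit - P(harmful AI) * harm,
# and the resulting ordinal ranking is compared with the canonical game orderings.

P_BENEFIT = {"CC": 0.45, "DC": 0.55, "CD": 0.05, "DD": 0.25}  # chance I capture a beneficial AI
P_HARM    = {"CC": 0.05, "DC": 0.30, "CD": 0.45, "DD": 0.50}  # chance a harmful AI emerges

def ordering(benefit: float, harm: float):
    score = {o: P_BENEFIT[o] * benefit - P_HARM[o] * harm for o in P_BENEFIT}
    return tuple(sorted(score, key=score.get, reverse=True))

print(ordering(benefit=100, harm=10))   # ('DC', 'CC', 'DD', 'CD') -> Prisoner's Dilemma ordering
print(ordering(benefit=100, harm=150))  # ('CC', 'DC', 'DD', 'CD') -> Stag Hunt ordering
```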
