Mathematics in Applied Sciences and Engineering
https://ojs.lib.uwo.ca/mase
Volume 2, Number 2, June 2021, pp. 72-86
https://doi.org/10.5206/mase/11137

THE REPLICATOR DYNAMICS OF GENERALIZED NASH GAMES

JASON LEQUYER AND MONICA-GABRIELA COJOCARU

Abstract. Generalized Nash Games are a powerful modelling tool, first introduced in the 1950s. They have seen some important developments in the past two decades. Separately, Evolutionary Games were introduced in the 1960s and seek to describe how natural selection can drive phenotypic changes in interacting populations. The dynamics of Evolutionary Games are frequently studied using the Replicator Equation; however, there is no general theory about how to derive these kinds of dynamics for more complex games, such as Generalized Nash Games. In this paper we extend and generalize the Replicator Equation by using an analogy with the Projected Dynamical System, and show how this extension can be used to derive a Replicator Equation for a wide range of problems. We also show how this extension enables us to consider the dynamics of a new type of Evolutionary Game where we add shared inter-species constraints.

1. Introduction

Nash Games as we know them were first introduced in 1950 by Nash [17] and have a wide array of applications in applied sciences, most notably economics and engineering. The Generalized Nash Game, the subject of this paper, was described only four years later by Arrow and Debreu [1], but took much longer to unravel and has not yet gained the currency of its precursor. A Generalized Nash Game (GN) seeks to describe a situation where each player's choice of strategy somehow affects the choices available to his/her opponents. However, since all players move simultaneously, this leads to games that cannot be played by ordinary people, at least not in the traditional sense. To illustrate, a classic example of a Nash Game is rock-paper-scissors.
A GN version of this is rock-paper-scissors where ties are prohibited, i.e. if one player picks rock, another cannot also pick rock. But this game is impossible to play with another individual, because a player cannot possibly know in advance what their opponent is going to pick, and thus cannot knowingly adhere to the stated restriction. For this reason, Ichiishi calls such games pseudo-games [13]. Despite this artificiality, there are settings where outside forces ensure the satisfaction of constraints and, moreover, the model has explanatory value even in circumstances where this is not the case [9]. In general, it is difficult to find the equilibrium points of a GN. However, the problem becomes easier if each player is subject to identical constraints (as far as variables under that player's control are concerned). This is known as a GNSC (Generalized Nash Game with Shared Constraints). There are many computational methods for finding some equilibria of a GNSC [9], and recent work gives a method for extracting all such points [15, 6]. Another type of game we consider is the Evolutionary Game, first described by Maynard Smith [19]. Evolutionary Games seek to model the evolution of phenotypes as a function of natural selection, such as in the well-known Hawk-Dove game [12]. The dynamics of these games can often be described by the Replicator Equation of Taylor and Jonker [20].

Received by the editors November 10, 2020; revised March 3 and May 8, 2021, respectively; accepted May 9, 2021; published online May 18, 2021.
2010 Mathematics Subject Classification. Primary 91A22.
Key words and phrases. Evolutionary Games; Generalized Nash Games; Variational Inequalities; Projected Dynamical Systems; Replicator Equation.
Monica-Gabriela Cojocaru was supported in part by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant #400684.
In this paper we build a bridge between the most fundamental type of Evolutionary Game and the GNSC, by establishing a connection between their Nash Equilibria via the Replicator Equation. This bridge allows us to extend the existing model to accommodate new types of problems, as GNSCs are richer and more diverse than population games. We build this bridge by extending the Replicator Equation so that it may be applied to GNSCs, and we derive this extension using an analogy between the Replicator Equation and what is known as the Projected Dynamical System. The Projected Dynamical System (PDS) is a type of discontinuous dynamical system that was first introduced in the 1970s by Henry [11], studied further in the 1990s by Dupuis and Nagurney [8], and extended in the early 2000s by Cojocaru and Jonker [5] to Hilbert spaces of any dimension. PDSs are intimately linked to GNSCs in that the steady state set of a PDS is a subset of the Nash Equilibria of its associated GNSC. The PDS is useful to us in this paper because it gives a known, distinct game dynamic that is already applicable to Evolutionary Games and GNSCs alike. The relationship between the Projection Dynamic and the Replicator Dynamic was first studied by Lahkar and Sandholm [18, 14]. In their papers, the authors elucidate similarities between the revision protocols implied by the two dynamics, and establish some properties of the solution trajectories of these systems for population games. In this paper we aim to expand this analogy beyond just population games, by extending the Replicator Equation and showing that a key theorem still holds relating the rest points of our extended Replicator Equation to those of the Projected Dynamical System. In doing so, we allow for these two dynamics to be considered for GNSCs, which are more varied and general than population games.
To show that our extension of the Replicator Equation is useful, we prove a part of the Folk Theorem of Evolutionary Game Theory, namely that every stable rest point of the Replicator Equation is a Nash Equilibrium of the corresponding Population Game [7, 12]. As such, our Theorem 4.2 generalizes this aspect of the Folk Theorem so that it may be applied to any GNSC defined on a polytope. Then, after associating these two concepts, we show how we can extend Evolutionary Games under our new framework in Section 5. But to accomplish this, we first need a way to frame population dynamics problems as Nash Games, which we illustrate in Section 3 for a standard Nash Game, and in Section 5 for a GNSC.

1.1. Contribution and Significance. In this paper we extend the Replicator Equation for simple evolutionary games, so that it can be applied to GNSCs. We show how, with this extension, we can introduce shared inter-species constraints into Evolutionary Games and recover a corresponding Replicator Equation to study their dynamics. Specifically, our contributions are as follows:
(1) We show how the existing Replicator Equation can be written as a sum of projections for simple Evolutionary Games, in equation (4.1).
(2) We use this new way of writing the Replicator Equation to extend it from simple evolutionary games to GNSCs in (4.3), and we show that a part of the Folk Theorem holds for this representation (Theorem 4.2), which allows us to use this extended Replicator Equation to find Nash Equilibria of the associated GNSC.
(3) We show how we can add shared inter-species constraints to population games and find the corresponding Replicator Equation using this extension (Example 5.2).

2. Brief mathematical background

2.1. Convex Analysis. We will recall some basic definitions used in convex analysis, for ease of reading. Most of these definitions and results are drawn from or based on those found in [3].
Given a finite set of vectors β = {x1, . . . , xm}, where xi ∈ Rn, we say that a vector y is an affine combination of β if we can find λ1, . . . , λm ∈ R such that
y = λ1x1 + . . . + λmxm, λ1 + . . . + λm = 1.
If each λi ≥ 0, then we say that y is a convex combination of β. A set K ⊆ Rn is said to be convex if K contains every convex combination of vectors in K. Given a set K ⊆ Rn, we can construct a convex set by taking all convex combinations of vectors in K, or construct an affine set by taking all affine combinations. We call these the convex hull and affine hull of K respectively, and we formally define them as
conv(K) = {y ∈ Rn : y = λ1x1 + . . . + λmxm, ∑λi = 1, λi ≥ 0, xi ∈ K},
aff(K) = {y ∈ Rn : y = λ1x1 + . . . + λmxm, ∑λi = 1, λi ∈ R, xi ∈ K}.
The affine hull is important for defining the relative interior of a set. In optimization we often work with low-dimensional sets embedded in higher-dimensional spaces, so we need a more general notion of the interior of a set: the relative interior fulfills this role. Given some set K, the relative interior of K, denoted ri(K), is defined as
ri(K) = {x ∈ K : Bε(x) ∩ aff(K) ⊆ K for some ε > 0},
where Bε(x) is the open ball of radius ε, centered at x. We also often consider the normal cone of a convex set. Given some convex set K ⊆ Rn, we define the normal cone of K at some point x ∈ K to be
NK(x) = {y ∈ Rn : 〈y, x∗ − x〉 ≤ 0 ∀x∗ ∈ K}.
In addition to convex sets, we also consider convex functions. Given some convex set K ⊆ Rn, a function θ : K → R is said to be convex if for every t ∈ [0, 1] and x1, x2 ∈ K, we have
θ(tx1 + (1 − t)x2) ≤ tθ(x1) + (1 − t)θ(x2).

2.2. Polytopes. Now we will give a brief review of polytopes. A bounded, convex polytope is defined as the convex hull of a non-empty finite set of real points. For simplicity, we will just call this a polytope for the remainder of our paper. Let P be a polytope. The set s = {x1, . . . , xn} such that P = conv(s) is not unique; however, there does exist a unique minimal spanning set. We call this set ext(P) and its elements are called vertices of P [14]. A convex subset F of P is called a face when for every distinct x, y ∈ P, if the line segment connecting x and y intersects F at some point other than x or y, then F contains the entire line segment. A face is itself a polytope, the convex hull of some subset of the vertices of P; therefore we can denote a face Fr where r ⊆ ext(P). Note that P = Fext(P) is a face of itself. If the line segment connecting two vertices Vi, Vj is a face of P, we call them adjacent vertices and denote this relationship Vi ∼ Vj.

Theorem 2.1. Suppose P is a polytope and Fr is a face of P. Let x∗ ∈ Fr and r = {V1, . . . , Vn}. Then for every Vi ∈ r, we have that span(Fr − x∗) = span{Vi − Vj : Vj ∈ r, Vi ∼ Vj}.

Proof. This follows from Corollary 11.7 in [10]. □

For any face Fr, we define dim(Fr) as the dimension of the vector space associated with the affine hull of Fr. Equivalently, for every x∗ ∈ Fr, we have that dim(Fr) = dim(span(Fr − x∗)).

Theorem 2.2. Suppose Fr is a face of some polytope K and x ∈ Fr. Then Fr is the lowest dimensional face containing x iff x ∈ ri(Fr).

Proof. See [10], Theorem 5.6. □

If Fr is a face of Fr′ and dim(Fr) < dim(Fr′), we say that Fr is a proper face of Fr′. If dim(Fr) = dim(Fr′) − 1, we call Fr a facet of Fr′.

Theorem 2.3. Suppose Fr is a face of some polytope K and dim(Fr) ≤ dim(K) − 2. Then Fr is the intersection of at least two facets of K with K. If dim(Fr) = dim(K) − 1, then Fr is the intersection of exactly one facet of K with K.

Proof. See [10], Theorem 10.4. □

We say that two distinct faces Fr and Fr′ are adjacent if Fr ∩ Fr′ is nonempty and dim(Fr) = dim(Fr′).

Lemma 2.4. If Fr is a facet of K, x ∈ Fr and x ∉ ri(Fr), then x ∈ Fr′ for some facet Fr′ adjacent to Fr.

Proof.
Since x ∉ ri(Fr), by Theorem 2.2 we can find a face Fr∗ with dim(Fr∗) < dim(Fr) such that x ∈ Fr∗. Then dim(Fr∗) ≤ dim(K) − 2, hence by Theorem 2.3, Fr∗ is the intersection of at least two facets of K. Any such facet is adjacent to Fr and contains x. □

We can equivalently describe a convex bounded polytope K in terms of a set of affine functions H = {g1, . . . , gn}, where K = {x : g1(x) ≥ 0, . . . , gn(x) ≥ 0}. We call this the halfspace representation, and just like with the vertex representation, the choice of H is not unique. The fact that we can express bounded polytopes in these two different ways was first proved by Krein and Milman [16]. Any face of K is just K intersected with some combination of hyperplanes gi(x) = 0, where gi ∈ H. Given such a representation of K, we can define the faces based on which functions gi are nonzero somewhere on that face. Specifically, we can denote the faces of K as Fh where h = {g ∈ H : g(x) > 0 for some x ∈ Fh}, and it will hold that if Fh is a face of Fh′, then h ⊆ h′, with h = h′ iff Fh = Fh′. We will refer to the set h as the halfspace identifier of the face Fh with respect to H. We now have both a top-down and a bottom-up representation for a face.

Lemma 2.5. Suppose Fh is a face of some polytope K, where K has halfspace representation H ⊇ h = {g1, . . . , gn}. Then gi(x) > 0 for all x ∈ ri(Fh), gi ∈ h.

Proof. Let gi ∈ h. Suppose we can find x∗ ∈ ri(Fh) such that gi(x∗) = 0. Since gi ∈ h, we can also find x ∈ Fh such that gi(x) > 0. Consider the line segment connecting x and x∗: γ(λ) = (1 − λ)x + λx∗, for λ ∈ [0, 1]. Since x∗ ∈ ri(Fh), we can extend this segment slightly beyond x∗, so that γ(λ) ∈ Fh for all λ ∈ [0, 1 + ε], for some ε > 0. Since gi is affine, this gives us
gi(γ(1 + ε)) = gi((1 − (1 + ε))x + (1 + ε)x∗) = gi(x∗) + ε(gi(x∗) − gi(x)) = −εgi(x) < 0,
a contradiction. □

2.3. Variational Inequalities and Projected Dynamical Systems.
Given a subset K of Rn and a mapping F : Rn → Rn, the Variational Inequality Problem (VI) is to find a solution x to the inequality (see [5])
〈F(x), y − x〉 ≥ 0, ∀y ∈ K. (2.1)
A Projected Differential Equation is defined as follows. Given some closed convex set K ⊆ Rn, we define the projection operator PK at a point x as the unique element PK(x) ∈ K such that
‖x − PK(x)‖ ≤ ‖x − y‖, ∀y ∈ K.
We then define the vector projection operator of a vector v ∈ Rn at a point x ∈ K as
ΠK(x, v) = lim_{δ→0+} (PK(x + δv) − x)/δ.
Note that this is equivalent to taking the Gateaux derivative of the projection operator onto K, in the direction of v. Now, for some vector field −F : Rn → Rn, the class of differential equations known as Projected Differential Equations takes the form
ẋ = ΠK(x, −F(x)), x(0) ∈ K. (2.2)
Last but not least, it is known from [5] that if K is closed and convex, then for any x∗ such that 0 = ΠK(x∗, −F(x∗)) we have that
〈F(x∗), y − x∗〉 ≥ 0, ∀y ∈ K,
and the converse is also true. In general, the projected system we introduced here has a discontinuous right-hand side, though existence and qualitative studies have been known since the first work by Henry in the 1970s [11] and by Aubin and Cellina in the 1980s [2], followed up by many others (see [8, 5], see [4] and the extensive references therein). There is a similar notion of a projected equation (see [2, 21]) defined by
ẋ = PK(x − αF(x)), x(0) ∈ K, (2.3)
for a given real number α > 0. The right-hand side of this equation is continuous and, for suitable values of α, solutions of this projected equation amount to "smoothed out" (differentiable) approximations of the trajectories of (2.2). In general it is up to practitioners to choose one of the two versions of the projected system. If continuity in the equation's vector field is desired, then equation (2.3) can be considered. We take here the point of view of the PDS (2.2).

3. Games and the Replicator Equation

A Generalized Nash Game (GN) is characterized by a finite set of players {P1, . . . , Pn}, where player Pi controls the variable xi and has an objective function θi(x1, . . . , xn). The goal of each player is to minimize their objective function subject to some constraint set xi ∈ Ki(x−i), where x−i denotes (x1, . . . , xi−1, xi+1, . . . , xn). The key feature here is that each Ki depends on variables beyond Player i's control. A Nash Equilibrium is any strategy (x1, . . . , xn) from which no player can lower their objective function by unilaterally altering their own strategy, i.e. for every i ∈ {1, . . . , n} and every x∗i ∈ Ki(x−i), we have θi(xi, x−i) ≤ θi(x∗i, x−i). Here is the basic form of a Generalized Nash Game:
Player 1 (x1): min θ1(x1, x−1) s.t. x1 ∈ K1(x−1),  · · · ,  Player n (xn): min θn(xn, x−n) s.t. xn ∈ Kn(x−n).
For the remainder of this paper we will assume that Ki(x−i) is closed and convex and ∇θi is Lipschitz continuous for each i. If there additionally exists a convex set K such that for each i we have that Ki(x−i) = {xi : (xi, x−i) ∈ K}, then we call the game a GNSC, or Generalized Nash Game with Shared Constraints, so named because, in the case of sets defined by inequalities, this restriction amounts to saying that each player's strategy set Ki(x−i) can be defined by the exact same set of inequality constraints. Now let us turn to a more specific kind of problem, Evolutionary Games. An evolutionary game is a game played by a population of agents whose strategies evolve according to some rule, which may model various adaptation processes. In the simple two-player symmetric case, Evolutionary Games are matrix games where each member of a population has a choice among n pure strategies {e1, . . . , en}.
In the associated matrix A, the entry Aij = π(ei, ej) is the payoff of playing pure strategy ei in a population that exclusively plays strategy ej [7]. All strategies must belong to the simplex ∆n = {(p1, . . . , pn) : ∑ pi = 1, pi ≥ 0}. A Nash Equilibrium is any strategy x ∈ ∆n such that [12]
π(p, x) ≤ π(x, x) for every p ∈ ∆n.
Although these games are usually described in the literature as we did above, it is easy to see that they are essentially solving the following Nash Game:
Player 1 (x): max xTAy s.t. x ∈ K,  Player 1′ (y): min (1/2)(y − x)T(y − x) s.t. y ∈ Rn,
where K = ∆n. In this system y is just a shadow variable that tests strategy x against all other strategies, and its objective function ensures x = y at all solutions. This system is known from [9] to share its Nash Equilibria with the solutions (x∗, y∗) of the Variational Inequality
〈(−Ay∗, y∗ − x∗), (x, y) − (x∗, y∗)〉 ≥ 0, ∀(x, y) ∈ K × Rn. (3.1)
Note that at any solution to (3.1) we must have y∗ − x∗ = 0, otherwise we could always find some y ∈ Rn making the inner product negative. Therefore this variational inequality shares its solutions (in x) with the problem
find x∗ s.t. 〈−Ax∗, x − x∗〉 ≥ 0, ∀x ∈ K, (3.2)
which is known from [8] to share its solutions with the rest points of the Projected Dynamical System ẋ∗ = ΠK(x∗, Ax∗). For simplicity of notation, let us denote x := x∗ going forward, hence we write
ẋ = ΠK(x, Ax). (3.3)
It is also known [7] that the Replicator Equation associated to our game is
ẋi = xi(eTi Ax − xTAx), i ∈ {1, . . . , n}. (3.4)
We therefore have two different dynamics we can use to study evolutionary games, (3.3) and (3.4). In [18, 14], the authors thoroughly study the relationship between these two dynamics and the revision protocol implied by each. We would like to extend these dynamics to the much more general problem of GNSCs. The projection dynamic, of course, is already used extensively in the study of GNSCs.
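The fact that (3.3) and (3.4) share their rest points on the simplex can be checked numerically. The following sketch is our own illustration, not an example from the paper: it uses the standard rock-paper-scissors payoff matrix, approximates ΠK by the difference quotient that defines it, and uses a hypothetical helper `project_simplex` implementing the usual sorting-based Euclidean projection onto ∆n.

```python
import numpy as np

# Rock-paper-scissors payoff matrix: a standard zero-sum illustration,
# not an example taken from this paper.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def replicator_rhs(x, A):
    """Right-hand side of (3.4): xdot_i = x_i (e_i^T A x - x^T A x)."""
    f = A @ x                     # per-strategy payoffs e_i^T A x
    return x * (f - x @ f)

def project_simplex(v):
    """Euclidean projection P_K onto the unit simplex (sorting method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.max(np.nonzero(u - (css - 1.0) / idx > 0)[0])
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def pds_rhs(x, A, delta=1e-7):
    """Right-hand side of (3.3), Pi_K(x, Ax), approximated by the
    difference quotient (P_K(x + delta*v) - x) / delta that defines it."""
    return (project_simplex(x + delta * (A @ x)) - x) / delta

xbar = np.full(3, 1.0 / 3.0)      # the interior Nash Equilibrium of RPS
print(replicator_rhs(xbar, A))    # both right-hand sides vanish here,
print(pds_rhs(xbar, A))           # so xbar is a rest point of (3.3) and (3.4)
```

At non-equilibrium points the two vector fields differ; the sketch only illustrates the shared rest point, which is what the Folk Theorem discussion below relies on.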
However, the Replicator Equation has not, to our knowledge, ever been applied to GNSCs; equation (3.4) only gives us the dynamics of very elementary evolutionary games. Versions of (3.4) have been adapted for more sophisticated kinds of population games (see [7] for an overview), however so far there is no analogue for anything as broad as GNSCs. In the next section we build this analogue, by devising a method to derive a Replicator Equation from a given Projected Dynamical System.

4. Results

4.1. Extending the Replicator Equation. It is known by the Folk Theorem of Evolutionary Game Theory [7] that any stable rest point of (3.4) in K = ∆n must be a Nash Equilibrium. This implies that such a point is also a rest point of (3.3), which raises the question: what is the relationship between the system in (3.4) and the system in (3.3)? Notice that we can rewrite (3.4) in the following way ([·] denotes the Iverson bracket):
ẋ = ∑_{i=1}^{n} ∑_{j=1}^{n} xixj(Ax · (ei − ej))(ei − ej)[i < j], (4.1)
where ei and ej are the coordinates of two adjacent vertices of the simplex ∆n, (Ax · (ei − ej))(ei − ej) is the un-normalized projection of Ax onto the line connecting these two vertices, and xi and xj are exactly the constraints which are not identically zero on that line. This system is similar to (3.3), however we exclusively project onto the edges of the polytope K instead of the tangent cone of the entire set. This system is continuous, but the cost of this continuity is that we generate new rest points which are not necessarily equilibria of the original game. However, we can find some, but not all, of these equilibria via stability tests [7]. In the spirit of this process, let us now try to apply this technique to a more general type of problem. Suppose K is a bounded convex polytope in Rn (see Section 2 for background on polytopes). Then K has some halfspace representation H = {g1, . . . , gk}, where each gi is affine and K = {x : g1(x) ≥ 0, . . . , gk(x) ≥ 0}. Let −F : Rn → Rn be Lipschitz continuous. Consider the Projected Differential Equation
ẋ = ΠK(x, −F(x)). (4.2)
Number the vertices of K as {V1, . . . , Vm}. For each pair Vi ∼ Vj, we can find a halfspace identifier hij ⊆ H for the edge connecting these two vertices. Let gij = ∏_{g∈hij} g. Then, mirroring the procedure we used with the Replicator Equation, we can consider the following classical dynamical system:
ẋ = ∑_{i=1}^{m} ∑_{j=1}^{m} gij(x)((Vi − Vj) · (−F(x)))(Vi − Vj)[i < j and Vi ∼ Vj]. (4.3)
Note that we are essentially performing vector projection onto each edge, without the usual normalizing term. (4.3) is our proposed way of extending the Replicator Equation to GNSCs. It is continuous, just like the original Replicator Equation; however, there is no a priori guarantee that our extension has any relation at all to the associated GNSC, since we do not yet know whether the Folk Theorem holds for this system. While proving the Folk Theorem was trivial for the case of elementary evolutionary games (see [12] for a short and simple proof), there is no obvious way to extend this proof to GNSCs. Therefore the next subsection is dedicated to proving that a result analogous to a part of the Folk Theorem holds for our system in (4.3) (Theorem 4.2). Equipped with this theorem we can then relate the rest points of our system in (4.3) to the Nash Equilibria of our original GNSC, which we do in Section 5.

4.2. Connecting the extended Replicator Equation to GNSCs. For absolute clarity, in the theorems that follow, when we say stable rest point we mean the usual definition: a rest point x∗ is stable if and only if, for every ε > 0, there exists a δ > 0 such that for every solution x(t), if t0 is such that ‖x(t0) − x∗‖ < δ, then ‖x(t) − x∗‖ < ε for every t ≥ t0. Also, when we say face invariant, we mean that if any solution x(t) lies on some face at time t0, then it will remain on that face for all future t ≥ t0 (usually called forward invariance).
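On the simplex itself, (4.3) collapses back to (4.1): with the halfspace representation H = {x1, . . . , xn} (the two halfspaces encoding ∑xi = 1 are identically zero on K and never enter an identifier), the edge between ei and ej has identifier {xi, xj}, so gij(x) = xixj. The following is a minimal numerical sketch of this reduction, assuming −F(x) = Ax with a placeholder rock-paper-scissors matrix; it is our own illustration, not a computation from the paper.

```python
import numpy as np
from itertools import combinations

def extended_replicator_rhs(x, field, vertices, edges, g):
    """Right-hand side of (4.3): the field projected (un-normalized) onto
    each edge V_i - V_j, weighted by the product g_ij of the halfspace
    functions in that edge's identifier."""
    v, out = field(x), np.zeros_like(x)
    for (i, j) in edges:
        d = vertices[i] - vertices[j]
        out += g[(i, j)](x) * np.dot(d, v) * d
    return out

# K = simplex in R^3: vertices e1, e2, e3, every pair adjacent, and the
# identifier of edge (i, j) is {x_i, x_j}, so g_ij(x) = x_i * x_j.
verts = [np.eye(3)[i] for i in range(3)]
edges = list(combinations(range(3), 2))
g = {(i, j): (lambda x, i=i, j=j: x[i] * x[j]) for (i, j) in edges}

# Placeholder payoff matrix and a point on the simplex.
A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x = np.array([0.5, 0.3, 0.2])

lhs = extended_replicator_rhs(x, lambda z: A @ z, verts, edges, g)
rhs = x * (A @ x - x @ (A @ x))   # classical Replicator Equation (3.4)
print(np.allclose(lhs, rhs))      # True: (4.3) reduces to (3.4) on the simplex
```

The agreement holds at every point of the simplex, since the algebraic identity behind (4.1) uses only ∑xi = 1.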
Theorem 4.1. The system in (4.3) is face invariant.

Theorem 4.2. A stable rest point of (4.3) is a rest point of (4.2).

Before we can prove these theorems, we need four more lemmas about polytopes.

Lemma 4.3. Let Fr be a face of some polytope K and x∗ ∈ Fr. We have that span(Fr − x∗) ∩ (K − x∗) = Fr − x∗.

Proof. (⊆) Assume x ∈ span(Fr − x∗) ∩ (K − x∗) and x ∉ Fr − x∗. Now consider any y ∈ ri(Fr − x∗). By convexity we have that λx + (1 − λ)y ∈ span(Fr − x∗) ∩ (K − x∗) for every λ ∈ [0, 1]. Since y ∈ ri(Fr − x∗), for some λ ∈ (0, 1) we must have that λx + (1 − λ)y ∈ Fr − x∗. Therefore, by the definition of a face, Fr contains x, a contradiction. The (⊇) direction is obvious. □

Lemma 4.4. Suppose Fr1 and Fr2 are two adjacent facets of Fr′. Then for every x∗ ∈ Fr1 ∩ Fr2, we have that span(Fr′ − x∗) = span((Fr1 − x∗) ∪ (Fr2 − x∗)).

Proof. Clearly span(Fr1 − x∗) and span(Fr2 − x∗) are subspaces of span(Fr′ − x∗). Since Fr1 and Fr2 are facets of Fr′, we have dim(Fr1 − x∗) + 1 = dim(Fr′ − x∗) = dim(Fr2 − x∗) + 1. Therefore it suffices to show that span(Fr1 − x∗) ≠ span(Fr2 − x∗), which follows easily from Lemma 4.3. □

Lemma 4.5. Suppose Fr is a facet of Fr′. Then for every x∗ ∈ Fr there is a unit vector nr(x∗) ∈ span(Fr′ − x∗), called the inner-normal of Fr at x∗ on Fr′, such that
nr(x∗) · u = 0 for every u ∈ span(Fr − x∗), (4.4)
every v ∈ span(Fr′ − x∗) can be written v = u + knr(x∗) for some u ∈ span(Fr − x∗) and k ∈ R, with k > 0 for v ∈ (Fr′ − x∗) \ (Fr − x∗), (4.5)
and, if x∗ ∈ ri(Fr), then nr(x∗) is a feasible direction. (4.6)

Proof. Clearly span(Fr − x∗) is a subspace of span(Fr′ − x∗). Since Fr is a facet of Fr′, we have dim(Fr − x∗) = dim(Fr′ − x∗) − 1. Then we can take any orthonormal basis of span(Fr − x∗) and extend it to an orthonormal basis of span(Fr′ − x∗) by the addition of a single vector, call it w. Suppose that v1, v2 ∈ (Fr′ − x∗) \ (Fr − x∗). Then from the last paragraph, we know we can write v1 = u1 + k1w and v2 = u2 + k2w, with u1, u2 ∈ span(Fr − x∗) and k1, k2 ∈ R.
If k1 = 0 or k2 = 0, this would contradict Lemma 4.3. Assume that k1 < 0 < k2. Then the line segment connecting v1 and v2 intersects Fr, but contains points that are not in Fr, contradicting the fact that Fr is a face. Therefore k1 and k2 must have the same sign, and so by choosing nr(x∗) = w or nr(x∗) = −w as appropriate, we get that (4.4) and (4.5) hold. Now suppose x∗ ∈ ri(Fr). Then for p > 0 sufficiently small, we will have that −pu1 ∈ (Fr − x∗). Hence by convexity, λv1 + (1 − λ)(−pu1) ∈ (Fr′ − x∗) for all λ ∈ [0, 1], and we can choose λ so that λv1 + (1 − λ)(−pu1) = λk1nr(x∗), proving (4.6). □

Lemma 4.6. Suppose Fr is a polytope, and Fr′ is a proper face of Fr. Then for every Vi ∈ r′ there exists Vj ∈ r \ r′ such that Vi ∼ Vj.

Proof. Let x∗ ∈ Fr′ and Vi ∈ r′. From Theorem 2.1 we know that span(Fr − x∗) = span{Vi − Vj : Vj ∈ r, Vi ∼ Vj}. If there did not exist Vj ∈ r \ r′ with the desired property, then span{Vi − Vj : Vj ∈ r, Vi ∼ Vj} = span{Vi − Vj : Vj ∈ r′, Vi ∼ Vj}, and therefore span(Fr − x∗) = span(Fr′ − x∗), a contradiction. □

Equipped with the above lemmas, we can now prove Theorems 4.1 and 4.2.

Proof of Theorem 4.1. Let Fh = Fr be the lowest dimensional face of K such that x ∈ Fh. Suppose Fh′ = Fr′ = conv(Vi, Vj) ⊄ Fr = Fh. Then h′ ⊄ h, and therefore gij(x) = 0. Taking this together with Theorem 2.2 ensures that we remain on Fh, and therefore on every face to which x belongs. □

Proof of Theorem 4.2. If K is a singleton, then the result is trivial, therefore assume dim(K) ≥ 1. Suppose x∗ is not a rest point of (4.2), but is a rest point of (4.3). Then −F(x∗) ∉ NK(x∗). Let Fr be the lowest dimensional face containing x∗ such that −F(x∗) ∉ NFr(x∗). We have that
ẋ∗ = ∑_{i=1}^{m} ∑_{j=1}^{m} gij(x∗)((Vi − Vj) · (−F(x∗)))(Vi − Vj), (4.7)
where the sum is over all i and j such that i < j, Vi ∼ Vj and Vi, Vj ∈ r. Since x∗ is a rest point of (4.3), we must have that ẋ∗ = 0. If x∗ ∈ ri(Fr), this means that gij(x∗) > 0 for every Vi, Vj ∈ r (Lemma 2.5).
Thus (Vi − Vj) · (−F(x∗)) = 0 for every Vi, Vj ∈ r with Vi ∼ Vj. However, since for any fixed Vi we have that Fr − x∗ ⊆ span{Vi − Vj : Vj ∈ r, Vi ∼ Vj} (Theorem 2.1), this contradicts the fact that −F(x∗) ∉ NFr(x∗). Therefore x∗ ∉ ri(Fr), hence there must exist some facet Fr∗ of Fr such that x∗ ∈ Fr∗ (Theorem 2.2). Now suppose x∗ ∉ ri(Fr∗). Then we can find another facet Fr′, adjacent to Fr∗, such that x∗ ∈ Fr′ (Lemma 2.4). We must have that −F(x∗) ∈ NFr∗(x∗) and −F(x∗) ∈ NFr′(x∗) (otherwise we have found a lower dimensional face that does not have this property). However, by Lemma 4.4 this implies that −F(x∗) ∈ NFr(x∗), a contradiction. Therefore x∗ ∈ ri(Fr∗), and x∗ belongs to only one facet of Fr. Let nr∗(x∗) be the unit inner-normal of Fr∗ on Fr at x∗ (obtained from Lemma 4.5). Since −F(x∗) ∉ NFr(x∗), for some v ∈ Fr − x∗ we have that −F(x∗) · v > 0. We can write v = u + knr∗(x∗) for some u ∈ span(Fr∗ − x∗), k ∈ R (Lemma 4.5). Since −F(x∗) ∈ NFr∗(x∗), then v ∈ (Fr − x∗) \ (Fr∗ − x∗), and therefore k > 0 (Lemma 4.5). We have
−F(x∗) · v = −F(x∗) · u + k(−F(x∗) · nr∗(x∗)) = k(−F(x∗) · nr∗(x∗)) > 0.
Thus −F(x∗) · nr∗(x∗) > 0. For all Vi, Vj ∈ r with Vi ∼ Vj, we have that Vi − Vj = u + knr∗(x∗) for some u ∈ span(Fr∗ − x∗), k ∈ R (Theorem 2.1/Lemma 4.5). Hence
−F(x∗) · (Vi − Vj) = (−F(x∗) · nr∗(x∗))(nr∗(x∗) · (Vi − Vj)).
Thus
(−F(x∗) · (Vi − Vj))(nr∗(x∗) · (Vi − Vj)) = (−F(x∗) · nr∗(x∗))(nr∗(x∗) · (Vi − Vj))²,
which is ≥ 0. Now fix Vp ∈ r∗ and Vq ∈ r \ r∗ such that Vp ∼ Vq (these exist by Lemma 4.6). By Lemma 4.3 we have that nr∗(x∗) · (Vp − Vq) ≠ 0. This implies that (−F(x∗) · nr∗(x∗))(nr∗(x∗) · (Vp − Vq))² > 0. Then by continuity of F, we can find an open ball Bε0(x∗) in Fr such that, for some γ > 0,
γ < (−F(x) · nr∗(x∗))(nr∗(x∗) · (Vp − Vq))² for all x ∈ Bε0(x∗).
We can further constrain ε0 so that Bε0(x∗) ∩ Fr∗ ⊆ ri(Fr∗) and Bε0(x∗) contains no other facets of Fr aside from Fr∗ (we can do this second part because x∗ belongs to only one facet).
Therefore within this neighbourhood we have
d((x − x∗) · nr∗(x∗))/dt = ∑_{i=1}^{m} ∑_{j=1}^{m} gij(x)((Vi − Vj) · (−F(x)))((Vi − Vj) · nr∗(x∗))
= ∑_{i=1}^{m} ∑_{j=1}^{m} gij(x)(−F(x) · nr∗(x∗))(nr∗(x∗) · (Vi − Vj))²
≥ gpq(x)(−F(x) · nr∗(x∗))(nr∗(x∗) · (Vp − Vq))²
≥ gpq(x)γ,
where the sums are over all i and j such that i < j, Vi ∼ Vj and Vi, Vj ∈ r. Consider any ε, δ such that 0 < δ ≤ ε < ε0. We know that nr∗(x∗) is a feasible direction by Lemma 4.5; therefore let y(t) be a solution of (4.3) with y(0) = x∗ + k0nr∗(x∗), where k0 > 0 is sufficiently small so that y(0) ∈ (Bε(x∗) ∩ Fr) ⊆ (Bε0(x∗) ∩ Fr). Consider the set U = {x ∈ (Bε0(x∗) ∩ Fr) : (x − x∗) · nr∗(x∗) ≥ k0}. Clearly U ∩ Fr∗ = ∅; hence U contains no facets of Fr, and therefore U ⊆ ri(Fr) (Theorem 2.2). Thus we can find λ > 0 such that gpq(x) > λ for all x ∈ U (continuity and Lemma 2.5). Since d((x − x∗) · nr∗(x∗))/dt > 0 on ri(Fr), and (4.3) is face invariant (Theorem 4.1), we know that y(t) remains in U. Therefore
d((y(t) − x∗) · nr∗(x∗))/dt ≥ gpq(y(t))γ ≥ λγ,
contradicting Lyapunov stability. □

Note that the converse of Theorem 4.2 is not true (see [12], exercise 7.2.2). With this theorem we have shown that stable rest points of our extended Replicator Equation are rest points of the projection dynamic. Since it is known that the rest points of a PDS are Nash Equilibria, this, taken together with Theorem 4.2, allows us to ultimately say that stable rest points are Nash Equilibria, and hence a key part of the Folk Theorem applies to our extension of the Replicator Equation. While Lahkar and Sandholm [18, 14] could rely on an already established link between their games and the Replicator Equation, we needed to re-prove that this link is still there for our extension. Our hope is that this result potentially paves the way for the type of analyses conducted by Lahkar and Sandholm [18] to be extended into this more general setting.

5. Examples and Extensions

In this section we will consider examples that illustrate how (4.3) achieves three basic purposes.
First, it recovers the standard Replicator Equation, showing that what we have is in fact a generalization of that concept. Second, it allows us to incorporate the shared constraints of Generalized Nash Games into elementary Evolutionary Games. Finally, it enables us to express a given GNSC as a classical dynamical system, regardless of whether that GNSC corresponds to any particular Evolutionary Game. We should recall that our method for finding the extended Replicator Equation associated to a game, and vice versa, consists of several steps. For clarity, and since they are used to solve the examples that follow, we enumerate these steps. First, to find the extended Replicator Equation associated to a Generalized Nash Game we must (call this Method 1):
(1) Find the Variational Inequality associated to the Generalized Nash Game.
(2) Find the Projected Dynamical System associated to this Variational Inequality.
(3) Find the Replicator Equation associated to the Projected Dynamical System (as per (4.3)).
To take an Evolutionary Game and find the extended Replicator Equation we must (call this Method 2):
(1) Express the Evolutionary Game as a Generalized Nash Game (as at the start of Section 3).
(2) Find the Variational Inequality associated to the Generalized Nash Game.
(3) Drop the shadow variables (as in the paragraph after (3.1)).
(4) Find the Projected Dynamical System associated to the Variational Inequality.
(5) Find the Replicator Equation associated to the Projected Dynamical System (as per (4.3)).
We can of course skip the Variational Inequalities and move straight to a Projected Dynamical System; however, we include this procedure for clarity of analysis and to better mirror our exposition in the previous sections. We will now apply these steps to three examples.

Example 5.1. The 1-species evolutionary game of Section 3 can be extended to an arbitrary number of species [7].
In the simplest case we have two species, call them Species A and Species B, whose members can choose between n and m possible pure strategies {e1, . . . , en} and {f1, . . . , fm} respectively. In this case we have two associated matrices A ∈ R^{n×(m+n)} and B ∈ R^{m×(m+n)}, where e_i^T A(p, q) represents π₁(e_i, (p, q)), the payoff of playing pure strategy e_i in a population where Species A adopts mixed strategy p and Species B adopts mixed strategy q; π₂(f_i, (p, q)) has a similar meaning for Species B. The strategies of Species A and Species B must belong to the simplices

    ∆^n = {(p1, . . . , pn) : Σ_{i=1}^n p_i = 1, p_i ≥ 0} and ∆^m = {(q1, . . . , qm) : Σ_{i=1}^m q_i = 1, q_i ≥ 0}.

A Nash Equilibrium is any pair of strategies x ∈ ∆^n and y ∈ ∆^m such that [6]

    π₁(p, (x, y)) ≤ π₁(x, (x, y)) for every p ∈ ∆^n,
    π₂(q, (x, y)) ≤ π₂(y, (x, y)) for every q ∈ ∆^m.

It is known that the Replicator Equations associated to this problem are [7]

    ẋ_i = x_i(π₁(e_i, (x, y)) − π₁(x, (x, y))) for i = 1, . . . , n,
    ẏ_j = y_j(π₂(f_j, (x, y)) − π₂(y, (x, y))) for j = 1, . . . , m.

If each species can choose between two possible strategies, this reduces to

    ẋ1 = x1x2(e1A(x, y) − e2A(x, y)),   ẋ2 = x1x2(e2A(x, y) − e1A(x, y)),
    ẏ1 = y1y2(e1B(x, y) − e2B(x, y)),   ẏ2 = y1y2(e2B(x, y) − e1B(x, y)),   (5.1)

for some A, B ∈ R^{2×4}. First we will derive these dynamics using our extension, proceeding by Method 2 as described at the start of this section. First we express the game as a GN:

    Player 1 (x):   max x^T A(x′, y′)              s.t. x ∈ K1,
    Player 1′ (x′): min ½(x′ − x)^T(x′ − x)        s.t. x′ ∈ R²,
    Player 2 (y):   max y^T B(x′, y′)              s.t. y ∈ K2,
    Player 2′ (y′): min ½(y′ − y)^T(y′ − y)        s.t. y′ ∈ R²,   (5.2)

where K1 = ∆² = K2. Let K = K1 × K2.
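As a numerical aside (not part of the original derivation), the system (5.1) can be integrated directly. In the sketch below we read A and B as 2×4 matrices acting on the concatenated state (x, y), an assumed interpretation of e_iA(x, y) consistent with A ∈ R^{n×(m+n)}; the payoff matrices and initial condition are arbitrary illustrative choices.

```python
import numpy as np

def replicator_rhs(z, A, B):
    """Two-species replicator right-hand side (5.1).

    z = (x1, x2, y1, y2); A, B are 2x4 payoff matrices acting on the
    concatenated state, so (A @ z)[i] plays the role of e_{i+1} A(x, y).
    """
    x1, x2, y1, y2 = z
    u = (A @ z)[0] - (A @ z)[1]   # e1 A(x,y) - e2 A(x,y)
    v = (B @ z)[0] - (B @ z)[1]   # e1 B(x,y) - e2 B(x,y)
    return np.array([x1 * x2 * u, -x1 * x2 * u, y1 * y2 * v, -y1 * y2 * v])

def integrate(z0, A, B, dt=0.01, steps=2000):
    """Plain RK4 integration of the replicator flow."""
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        k1 = replicator_rhs(z, A, B)
        k2 = replicator_rhs(z + 0.5 * dt * k1, A, B)
        k3 = replicator_rhs(z + 0.5 * dt * k2, A, B)
        k4 = replicator_rhs(z + dt * k3, A, B)
        z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((2, 4))
z = integrate([0.3, 0.7, 0.6, 0.4], A, B)
# The two simplices are invariant: x1 + x2 and y1 + y2 remain equal to 1,
# since the components of the right-hand side cancel in pairs.
```

Because ẋ1 = −ẋ2 and ẏ1 = −ẏ2 exactly, the simplex sums are conserved to machine precision by any Runge-Kutta step, mirroring the invariance of K = ∆² × ∆² under the flow.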
Then this problem shares its Nash Equilibria with the solutions (x∗, x′∗, y∗, y′∗) of the Variational Inequality

    ⟨(−A(x′∗, y′∗), x′∗ − x∗, −B(x′∗, y′∗), y′∗ − y∗), (x, x′, y, y′) − (x∗, x′∗, y∗, y′∗)⟩ ≥ 0,
    for all (x, x′, y, y′) ∈ K1 × R² × K2 × R².   (5.3)

Note that at any solution of (5.3) we must have x′∗ − x∗ = 0 = y′∗ − y∗; otherwise we could always find some x′, y′ ∈ R² making the inner product negative. Therefore this Variational Inequality shares its solutions (in (x∗, y∗)) with

    (x∗, y∗) : ⟨(−A(x∗, y∗), −B(x∗, y∗)), (x, y) − (x∗, y∗)⟩ ≥ 0, for all (x, y) ∈ K,   (5.4)

and hence these solutions correspond to the rest points of the Projected Dynamical System

    (ẋ∗, ẏ∗) = Π_K((x∗, y∗), (A(x∗, y∗), B(x∗, y∗))).

For simplicity of notation, we denote x := x∗ and y := y∗ going forward. Now K is a polytope in R⁴ with vertices

    V1 = (1, 0, 1, 0), V2 = (1, 0, 0, 1), V3 = (0, 1, 1, 0), V4 = (0, 1, 0, 1).

Using the halfspace representation H = {x1, x2, −x1 − x2 + 1, x1 + x2 − 1, y1, y2, −y1 − y2 + 1, y1 + y2 − 1}, we can then find the corresponding extended Replicator Equation

    (ẋ, ẏ) = x1y1y2((V1 − V2) · (A(x, y), B(x, y)))(V1 − V2)
            + y1x1x2((V1 − V3) · (A(x, y), B(x, y)))(V1 − V3)
            + y2x1x2((V2 − V4) · (A(x, y), B(x, y)))(V2 − V4)
            + x2y1y2((V3 − V4) · (A(x, y), B(x, y)))(V3 − V4).

Notice that V1 − V2 = V3 − V4 and V1 − V3 = V2 − V4, hence

    (ẋ, ẏ) = (x1 + x2)y1y2((V1 − V2) · (A(x, y), B(x, y)))(V1 − V2)
            + (y1 + y2)x1x2((V1 − V3) · (A(x, y), B(x, y)))(V1 − V3).

But x1 + x2 = 1 = y1 + y2 everywhere on K, therefore

    (ẋ, ẏ) = y1y2((V1 − V2) · (A(x, y), B(x, y)))(V1 − V2) + x1x2((V1 − V3) · (A(x, y), B(x, y)))(V1 − V3).

Simplifying, we get

    ẋ1 = x1x2(e1A(x, y) − e2A(x, y)),   ẋ2 = x1x2(e2A(x, y) − e1A(x, y)),
    ẏ1 = y1y2(e1B(x, y) − e2B(x, y)),   ẏ2 = y1y2(e2B(x, y) − e1B(x, y)),

which is the known Replicator Equation (5.1).

Example 5.2. Now let us use our method to extend the model.
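Before adding constraints, the algebraic reduction in Example 5.1 can be sanity-checked numerically: on the product of simplices, the four vertex-difference terms collapse to (5.1). A small Python sketch, again under our assumed 2×4 shapes for A and B:

```python
import numpy as np

# Vertices of K = simplex x simplex, as points in R^4.
V1 = np.array([1., 0., 1., 0.])
V2 = np.array([1., 0., 0., 1.])
V3 = np.array([0., 1., 1., 0.])
V4 = np.array([0., 1., 0., 1.])

def edge_form(z, A, B):
    """Extended replicator RHS written as a sum over vertex differences."""
    x1, x2, y1, y2 = z
    F = np.concatenate([A @ z, B @ z])   # the field (A(x,y), B(x,y))
    return (x1 * y1 * y2 * ((V1 - V2) @ F) * (V1 - V2)
          + y1 * x1 * x2 * ((V1 - V3) @ F) * (V1 - V3)
          + y2 * x1 * x2 * ((V2 - V4) @ F) * (V2 - V4)
          + x2 * y1 * y2 * ((V3 - V4) @ F) * (V3 - V4))

def direct_form(z, A, B):
    """The known two-species replicator equation (5.1)."""
    x1, x2, y1, y2 = z
    u = (A @ z)[0] - (A @ z)[1]
    v = (B @ z)[0] - (B @ z)[1]
    return np.array([x1 * x2 * u, -x1 * x2 * u, y1 * y2 * v, -y1 * y2 * v])

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((2, 4))
a, b = rng.random(), rng.random()
z = np.array([a, 1 - a, b, 1 - b])   # a random point on the product of simplices
# edge_form(z, A, B) agrees with direct_form(z, A, B) on K, because
# V1 - V2 = V3 - V4, V1 - V3 = V2 - V4, and x1 + x2 = 1 = y1 + y2.
```

The agreement relies on x1 + x2 = 1 = y1 + y2; off the simplices the two expressions generally differ, which matches the derivation above.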
Suppose we want to implement the shared constraint x2 + y2 ≤ 1 (perhaps being a “2-strategist” consumes some finite resource). Proceeding again by Method 2, our Generalized Nash Game becomes:

    Player 1 (x):   max x^T A(x′, y′)              s.t. x ∈ K(y),
    Player 1′ (x′): min ½(x′ − x)^T(x′ − x)        s.t. x′ ∈ R²,
    Player 2 (y):   max y^T B(x′, y′)              s.t. y ∈ K(x),
    Player 2′ (y′): min ½(y′ − y)^T(y′ − y)        s.t. y′ ∈ R²,   (5.5)

where K(y) = ∆² ∩ {x : x2 + y2 ≤ 1} and K(x) = ∆² ∩ {y : x2 + y2 ≤ 1}. Therefore (x, y) ∈ K, where K = (∆² × ∆²) ∩ {(x, y) : x2 + y2 ≤ 1}. Then some of the Nash Equilibria for this game (in (x, y)) are solutions (x∗, y∗) to the Variational Inequality

    (x∗, y∗) : ⟨(−A(x∗, y∗), −B(x∗, y∗)), (x, y) − (x∗, y∗)⟩ ≥ 0, for all (x, y) ∈ K.   (5.6)

Unfortunately, our shared constraint means that although every solution to (5.6) is a solution to (5.5), the set of solutions to (5.5) may be much larger. We can, however, extend this Variational Inequality to a family of Variational Inequalities using the parametric method in [6, 15] to recover many solutions of (5.5). For simplicity we only implement our shared constraint x2 + y2 ≤ 1 on (5.3); this method can be extended to each member of the family of Variational Inequalities in [6]. Now, (5.6) shares its solutions with the rest points of the Projected Dynamical System

    (ẋ∗, ẏ∗) = Π_K((x∗, y∗), (A(x∗, y∗), B(x∗, y∗))).

Notice that K is a bounded polytope with vertices

    V1 = (1, 0, 1, 0), V2 = (1, 0, 0, 1), V3 = (0, 1, 1, 0).
Therefore, again denoting x := x∗ and y := y∗, and using the halfspace representation H = {x2, y2, 1 − x2 − y2, −x1 − x2 + 1, x1 + x2 − 1, −y1 − y2 + 1, y1 + y2 − 1}, we find the extended Replicator Equation

    (ẋ, ẏ) = y2(1 − x2 − y2)((V1 − V2) · (A(x, y), B(x, y)))(V1 − V2)
            + x2(1 − x2 − y2)((V1 − V3) · (A(x, y), B(x, y)))(V1 − V3)
            + x2y2((V2 − V3) · (A(x, y), B(x, y)))(V2 − V3).

If we set u = e1A(x, y) − e2A(x, y) and v = e1B(x, y) − e2B(x, y), then with a bit of algebra, invoking the fact that x1 + x2 = 1 = y1 + y2, we arrive at the differential equation

    ẋ1 = x2(x1u − y2v),   ẋ2 = −x2(x1u − y2v),
    ẏ1 = −y2(x2u − y1v),  ẏ2 = y2(x2u − y1v),

and by Theorem 4.2 we know that the stable rest points of this classical dynamical system are Nash Equilibria of our original game. We can therefore call this the Replicator Equation associated to the game.

Example 5.3. This extension does not just help us with Evolutionary Games; it can also be used as an alternative way to consider the dynamics of any GNSC. For example, consider the following two-player game:

    Player 1 (x): min ½(x − 2)²   s.t. −x ≤ 0, x + y − 1 ≤ 0,
    Player 2 (y): min ½(y − 3)²   s.t. −y ≤ 0, x + y − 1 ≤ 0.   (5.7)

The set of Nash Equilibria for this game is easily calculated to be {(t, 1 − t) : 0 ≤ t ≤ 1}. Notice that this problem is a GNSC, with global constraint set K = {(x, y) ∈ R²_{≥0} : x + y − 1 ≤ 0}. Proceeding by Method 1, we find the corresponding Variational Inequality

    (x∗, y∗) : ⟨(x∗ − 2, y∗ − 3), (x, y) − (x∗, y∗)⟩ ≥ 0, for all (x, y) ∈ K,   (5.8)

and then the corresponding PDS:

    (ẋ∗, ẏ∗) = Π_K((x∗, y∗), (−x∗ + 2, −y∗ + 3)).

Now the vertex representation of K is {(0, 0), (0, 1), (1, 0)} and a halfspace representation is {x ≥ 0, y ≥ 0, 1 − x − y ≥ 0}. Thus we can find the corresponding extended Replicator Equation

    ẋ = x((x − 2)(x − 1) + y(y − 3)),   ẏ = y(x(x − 2) + (y − 3)(y − 1)),

whose only stable rest point on K is (0, 1). This does correspond to a rest point of our PDS and to a Nash Equilibrium of the original game (in fact it is the only rest point of this PDS).
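The behaviour claimed in Example 5.3 is easy to probe numerically. A minimal sketch, with our own (arbitrary) choice of interior starting point and time horizon: forward integration drifts along the equilibrium segment {(t, 1 − t)} toward the stable rest point (0, 1), while (0, 0) and (1, 0) are rest points of the extended Replicator Equation that are not stable.

```python
import numpy as np

def rhs(p):
    """Extended replicator RHS for the GNSC in Example 5.3."""
    x, y = p
    return np.array([x * ((x - 2) * (x - 1) + y * (y - 3)),
                     y * (x * (x - 2) + (y - 3) * (y - 1))])

# RK4 integration from an interior point of K = {x, y >= 0, x + y <= 1}.
p = np.array([0.3, 0.5])
dt = 0.01
for _ in range(20000):
    k1 = rhs(p)
    k2 = rhs(p + 0.5 * dt * k1)
    k3 = rhs(p + 0.5 * dt * k2)
    k4 = rhs(p + dt * k3)
    p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
# p is now close to the stable rest point (0, 1).
```

Note that the approach to (0, 1) is slow in the x-coordinate (the linearization there has a zero eigenvalue, so x decays only quadratically), which is why a long horizon is used.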
6. Conclusions and Future Work

In this paper we generalize the Replicator Equation so that it may be applied to any GNSC defined on a polytope. Theorem 4.2 relates the stable rest points of this extended Replicator Equation to the rest points of a Projected Differential Equation. This connection allows us to expand certain Evolutionary Games by introducing shared inter-species constraints via the GNSC. Currently there are many variations of the standard two-player population game, for example games where the payoff is not given by a matrix, or multiplayer games (see [7] for an overview), each of which has its own version of the Folk Theorem of Evolutionary Game Theory. If further work is done to adapt parts 2 and 3 of the Folk Theorem to our model, we would have a complete general version of the Folk Theorem of which these could all be considered special cases.

In the literature, the Replicator Equation has already been unified with other models such as the Price equation and the generalized Lotka-Volterra equation [14]. With this result we make yet another such connection; it should be noted, however, that Generalized Nash Games are extremely broad and form a superset of all classical Nash Games. We also point out that our connection is reciprocal: we do not just place the Replicator Equation under the umbrella of the Projected Dynamical System, we also gain a new way of looking at GNSCs. This new perspective is an alternative to the Projected Dynamical System, in that the Replicator Equation frames these problems as classical dynamical systems (with right-hand sides of class C¹). Further work could investigate whether our results hold on an arbitrary convex set, not just a convex bounded polytope.
We are optimistic that such a generalization is achievable; it should be noted that the Projected Dynamical System itself was originally shown to work only on polyhedral constraints in [8], before being extended to arbitrary convex sets 11 years later in [5]. Another possible direction would be adapting our method to the much broader class of Generalized Nash Games without shared constraints. This would perhaps be the most ambitious way to continue, since there are still large theoretical obstacles to overcome before these types of games can be solved in general. More specifically, we would need to find a way to determine the Replicator Dynamics of a quasivariational inequality, a much less well understood mathematical object.

References

1. K. J. Arrow and G. Debreu, Existence of an equilibrium for a competitive economy, Econometrica 22 (1954), no. 3, 265–290.
2. J.-P. Aubin and A. Cellina, Differential inclusions, Springer, Berlin Heidelberg, 1984.
3. D. P. Bertsekas, Nonlinear programming, Journal of the Operational Research Society 48 (1997), no. 3, 334–334.
4. M. G. Cojocaru, P. Daniele, and A. Nagurney, Projected dynamical systems and evolutionary variational inequalities via Hilbert spaces with applications, Journal of Optimization Theory and Applications 127 (2005), no. 3, 549–563.
5. M.-G. Cojocaru and L. B. Jonker, Existence of solutions to projected differential equations in Hilbert spaces, Proceedings of the American Mathematical Society 132 (2003), no. 1, 183–193.
6. M.-G. Cojocaru, E. Wild, and A. Small, On describing the solution sets of generalized Nash games with shared constraints, Optimization and Engineering 19 (2018), no. 4, 845–870.
7. R. Cressman and Y. Tao, The replicator equation and other game dynamics, Proceedings of the National Academy of Sciences 111 (2014), Supplement 3, 10810–10817.
8. P. Dupuis and A.
Nagurney, Dynamical systems and variational inequalities, Annals of Operations Research 44 (1993), no. 1, 7–42.
9. F. Facchinei and C. Kanzow, Generalized Nash equilibrium problems, Annals of Operations Research 175 (2009), no. 1, 177–211.
10. B. Grünbaum, Convex polytopes, Springer, New York, 2003.
11. C. Henry, An existence theorem for a class of differential equations with multivalued right-hand side, Journal of Mathematical Analysis and Applications 41 (1973), no. 1, 179–186.
12. J. Hofbauer and K. Sigmund, Evolutionary games and population dynamics, Cambridge University Press, 1998.
13. T. Ichiishi, Game theory for economic analysis, Economic Theory, Econometrics, and Mathematical Economics, Academic Press, New York, 1983.
14. R. Lahkar and W. H. Sandholm, The projection dynamic and the geometry of population games, Games and Economic Behavior 64 (2008), no. 2, 565–590.
15. T. Migot and M.-G. Cojocaru, A parametrized variational inequality approach to track the solution set of a generalized Nash equilibrium problem, European Journal of Operational Research 283 (2020), no. 3, 1136–1147.
16. D. Milman and M. Krein, On extreme points of regular convex sets, Studia Mathematica 9 (1940), no. 1, 133–138.
17. J. F. Nash, Equilibrium points in n-person games, Proceedings of the National Academy of Sciences 36 (1950), no. 1, 48–49.
18. W. H. Sandholm, E. Dokumacı, and R. Lahkar, The projection dynamic and the replicator dynamic, Games and Economic Behavior 64 (2008), no. 2, 666–683.
19. J. Maynard Smith, Evolution and the theory of games, Cambridge University Press, Cambridge, 1982.
20. P. D. Taylor and L. B. Jonker, Evolutionary stable strategies and game dynamics, Mathematical Biosciences 40 (1978), no. 1-2, 145–156.
21. Y. S. Xia and J. Wang, On the stability of globally projected dynamical systems, Journal of Optimization Theory and Applications 106 (2000), no. 1, 129–150.
Corresponding author: Lunenfeld-Tanenbaum Research Institute, Mount Sinai Hospital, Toronto, ON, Canada, Room 1070.
E-mail address: jlequyer@lunenfeld.ca

University of Guelph, 50 Stone Rd E, Guelph, ON N1G 2W1, MacNaughton 549.
E-mail address: mcojocar@uoguelph.ca