bioRxiv
Optimal dynamic incentive scheduling for Hawk-Dove evolutionary games

K. Stuckey, R. Dua, Y. Ma, J. Parker, P.K. Newton
doi: https://doi.org/10.1101/2021.08.15.456406
1 Department of Aerospace & Mechanical Engineering, University of Southern California, Los Angeles CA 90089-1191 (K. Stuckey)
2 Department of Mathematics, University of Southern California, Los Angeles CA 90089-1191 (R. Dua)
3 Department of Physics & Astronomy, University of Southern California, Los Angeles CA 90089-1191 (Y. Ma)
4 Division of Biology and Biological Engineering, California Institute of Technology, Pasadena CA 91125 (J. Parker)
5 Department of Aerospace & Mechanical Engineering, Mathematics, and The Ellison Institute, University of Southern California, Los Angeles CA 90089-1191 (P.K. Newton)
For correspondence: newton@usc.edu

Abstract

The Hawk-Dove mathematical game offers a paradigm of the trade-offs associated with aggressive and passive behaviors. When two (or more) populations of players (animals, insect populations, countries in military conflict, economic competitors, microbial communities, populations of co-evolving tumor cells, or reinforcement learners adopting different strategies) compete, their success or failure can be measured by their frequency in the population (successful behavior is reinforced, unsuccessful behavior is not), and the system is governed by the replicator dynamical system. We develop a time-dependent optimal-adaptive control theory for this nonlinear dynamical system in which the payoffs of the Hawk-Dove payoff matrix are dynamically altered (dynamic incentives) to produce (bang-bang) control schedules that (i) maximize the aggressive population at the end of time T, and (ii) minimize the aggressive population at the end of time T. These two distinct time-dependent strategies produce upper and lower bounds on the outcomes from all strategies since they represent two extremizers of the cost function using the Pontryagin maximum (minimum) principle. We extend the results forward to times nT (n = 1, …, 5) in an adaptive way that uses the optimal value at the end of time nT to produce the new schedule for time (n + 1)T. Two special schedules and initial conditions are identified that produce absolute maximizers and minimizers over an arbitrary number of cycles for 0 ≤ T ≤ 3. For T > 3, our optimum schedules can drive either population to extinction or fixation. The method described can be used to produce optimal dynamic incentive schedules for many different applications in which the 2 × 2 replicator dynamics is used as a governing model.

I. INTRODUCTION

The Hawk-Dove game (aka Chicken or Snowdrift game) is a game-theoretic paradigm for studying the conflict between players (or populations of players) who use two opposing strategies: aggressive (Hawks) and passive (Doves). One way of framing the conflict is to consider competition in the animal world where two different species compete for a limited resource [1–4]. With no Hawks in the population, Doves will share the resources and avoid conflict. With no Doves, the Hawks will fight with each other for resources, taking the risk of injury or death. If Hawks are present in large enough numbers, the Doves will flee without fighting. A sufficient fraction of Doves, on the other hand, can cooperate and expel the Hawks from the population thereby protecting the resource [5]. The challenge is to find conditions for stable co-existence of the two opposing populations. In the context of military conflicts, the game is framed as the game of chicken, thought of as a situation in which two drivers head towards each other in a single lane trying not to be the first to swerve away (Doves), each mindful of the fact that if neither swerves (Hawks), both will die. Key to this game is that the cost of losing is greater than the value of winning. Versions of this (static) game have been analyzed and used extensively in political science communities to study strategies associated with the problem of nuclear brinkmanship [6]. In this set-up, the payoffs are fixed, and the interactions unfold based on the cost-benefit balance determined by these payoffs.

In a more complicated setting, one might want to measure repeated interactions in populations of competitors, x = (x1, x2), where winning and losing is reinforced by the relative frequencies of the two competing populations (frequency-dependent selection, as in Darwinian evolution). For this, the replicator dynamical system is commonly used [7–9]:

dx_i/dt = x_i(f_i − ⟨f⟩), i = 1, 2,  (1)

with x1 + x2 = 1, 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, where each variable has the interpretation of frequency in the population or the alternative interpretation as the probability of randomly picking a member of one of the two subgroups. It is useful to also think of the variables (x1, x2) as strategies (heritable traits) that evolve, with the most successful strategy dominating, as in the context of Darwinian evolution by natural selection [4]. Here, A is the 2 × 2 payoff matrix, f_i = (Ax)_i is the fitness of population i, and ⟨f⟩ = x · Ax is the average fitness of both populations, so (1) drives growth of x_i if population i is above the average and decay if it is below the average. The fitness functions in (1) are said to be population dependent (selection pressure is imposed by the mix of population frequencies) and determine growth or decay of each subpopulation. Because of this, these equations are also used extensively in the reinforcement learning community, where success begets success and failure leads to a downward spiral of frequency in the population [10].
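As a concrete illustration (a sketch, not the authors' code), the replicator system (1) can be integrated by forward Euler; the payoff entries below are illustrative values in the Hawk-Dove region, chosen so that the interior equilibrium sits at x1 = 1/3:

```python
import numpy as np

def replicator_step(x, A, dt):
    """One forward-Euler step of the replicator system x_i' = x_i (f_i - <f>),
    with fitness f = A x and population-average fitness <f> = x . (A x)."""
    f = A @ x                # fitness of each strategy
    phi = x @ f              # average fitness of the population
    return x + dt * x * (f - phi)

def simulate(x1_0, A, T, dt=1e-3):
    x = np.array([x1_0, 1.0 - x1_0])
    for _ in range(int(T / dt)):
        x = replicator_step(x, A, dt)
    return x

# Illustrative Hawk-Dove payoff matrix (a21 > a11, a12 > a22), with interior
# equilibrium x1* = (a12 - a22) / ((a12 - a22) + (a21 - a11)) = 1/3.
A = np.array([[3.0, 1.0], [5.0, 0.0]])
x = simulate(0.9, A, T=50.0)   # converges to the mixed state (1/3, 2/3)
```

Note that the Euler step preserves the simplex constraint x1 + x2 = 1 exactly, since the average-fitness term removes the net growth.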

Using the standard Hawk-Dove payoff matrix [5], with a22 = 0 placed at the origin of the game plane (figure 1) and entries chosen so that the linear growth rate is a12 − a22 = 1:

A0 = (a11 a12; a21 a22) = (3 1; 5 0),  (2)

where the population x1 are the Hawks (aggressive) and x2 are the Doves (passive), the Nash equilibrium is the mixed state x* = (1/3, 2/3), since x* · A0 x > x · A0 x for all x ≠ x*. This implies that the mixed state is also an evolutionarily stable state (ESS) of the replicator system (1), as discussed in [11]. It is also useful to uncouple the two variables in (1) and write a single equation for the aggressor population frequency x1:

dx1/dt = x1(1 − x1)[(a12 − a22) − ((a12 − a22) + (a21 − a11))x1].  (3)

Note also that a single equation for the passive population x2 is easily obtained using the change of variable x1 = 1 − x2 in eqn (3).
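The equivalence between the reduced scalar equation (3) and the full 2 × 2 system (1) is easy to verify numerically (a sketch; the entries a11 = 3, a12 = 1, a21 = 5, a22 = 0 are illustrative values consistent with an ESS at x1 = 1/3):

```python
import numpy as np

a11, a12, a21, a22 = 3.0, 1.0, 5.0, 0.0   # illustrative Hawk-Dove entries

def x1dot_reduced(x1):
    """Right-hand side of the single aggressor equation (3)."""
    return x1 * (1 - x1) * ((a12 - a22) - ((a12 - a22) + (a21 - a11)) * x1)

def x1dot_full(x1):
    """x1-component of the full replicator system (1), with x2 = 1 - x1."""
    x = np.array([x1, 1.0 - x1])
    A = np.array([[a11, a12], [a21, a22]])
    f = A @ x
    return x1 * (f[0] - x @ f)

# The two right-hand sides agree across the whole interval [0, 1]:
max_err = max(abs(x1dot_reduced(s) - x1dot_full(s)) for s in np.linspace(0, 1, 101))
```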

The question we address in this paper is whether it is possible to alter the entries in the payoff matrix A in a time-dependent fashion (dynamic incentives) in order to optimally achieve some pre-determined goal (such as minimizing aggression) at the end of a fixed time T. Dynamically altering the entries of a payoff matrix in an evolutionary game setting has only recently been studied by coupling the entries, for example, to a system that represents an external environment [12, 13]. In the context of nuclear brinkmanship, is it possible to alter the payoff incentives dynamically in order to achieve a goal [6] that would not be achievable with fixed payoffs? Is it possible to offer dynamic economic incentives that optimize some desired outcome across a population of participants [14, 15]? Can one optimally design time-dependent incentive schedules of rewards/punishments to compel groups of people to get vaccinated [16]? For co-evolving microbial populations, is it possible to dynamically schedule selective antibiotic agents in order to steer the evolutionary trajectory in an advantageous direction [17, 18], or even reverse antibiotic resistance? In the context of scheduling chemotherapy treatments, is it possible to design schedules that make best use of the chemotherapy agents administered in order to delay chemotherapeutic resistance [8, 9, 19–21]? Control theory is increasingly being used in a wide range of biological applications [21–27] but, to date, has not been systematically implemented in the context of evolutionary games as far as we know, aside from [8, 9, 21, 28].

One evolutionary context where an apparent Hawk-Dove scenario may require attainment of a quasi-stable equilibrium condition is during the evolution of symbiotic relationships in which one partner is aggressive or predatory. For example, hostile colonies of eusocial insects, such as ants and termites, are plagued by a diversity of solitary arthropods that have evolved to infiltrate the social system and parasitize the nest [30, 31]. The majority of such parasitic species evolved from free-living ancestors without any behavioral specialization [32, 33]. It follows that the initial steps in establishing the symbiosis were contingent on these free-living species (the Doves) entering into equilibrium with their aggressive eusocial hosts (the Hawks). This equilibrium, once attained, may have provided a permissive stepping stone to evolving the essential adaptive traits, such as social behaviors and pheromonal mimicry, that facilitate social parasitism [32].

To address these and related types of settings, we develop a mathematical framework to determine time-dependent incentive schedules for altering the payoff entries of a Hawk-Dove evolutionary game in such a way as to (i) maximize aggression at the end of time T, and (ii) minimize aggression at the end of time T. By considering the bang-bang schedules that produce these upper and lower bounds on the competing frequencies, we can conclude that any alternative payoff schedule will produce a result that lies somewhere between the two bounds, as each is an extremizer of a cost function associated with the Pontryagin maximum (minimum) principle. We then extend the time-period to time nT (n = 1, …, 5) by using an adaptive control method that adjusts the schedule in the (n + 1)st window based on the ending frequency values from the nth window. The schedules produced drive aggression down to an absolute minimum x1^min(T), or drive it up to an absolute maximum x1^max(T), both of which are functions of the cycle-time T. These values provide absolute lower and upper bounds on opposing behavior strategies in an evolutionary setting.

II. OPTIMAL CONTROL THEORY FOR THE REPLICATOR DYNAMICAL SYSTEM

To implement an optimal dynamic incentive strategy, we consider the system:

dx_i/dt = x_i(f_i − ⟨f⟩), i = 1, 2,  (4)
A = A0 + A1(t),  (5)
A1(t) = (0 4u1(t); 6u2(t) 0),  (6)

where A1(t) represents our control with entries in the off-diagonal terms, and A0 is the baseline Hawk-Dove payoff matrix. The time-dependent entries a12(t) = 1 + 4u1(t) and a21(t) = 5 + 6u2(t) are bounded above and below, −1 ≤ u1(t) ≤ 1, −1 ≤ u2(t) ≤ 1, and have a range (−3 ≤ a12 ≤ 5; −1 ≤ a21 ≤ 11) that allows us to traverse the plane along any path depicted in red in figure 1, starting in the Hawk-Dove zone in the uncontrolled (u1 = 0; u2 = 0) case, which is shown in figure 2 in the phase plane (a) and the frequency plane (b). The ESS for the uncontrolled case is x1 = 1/3. The control path chosen, and the time parametrization 0 ≤ t ≤ T, determine both the sequence of games being played and the switching times (the times at which the path crosses over from one region to the next) between games. We denote by C(t) the total control output delivered in time t, with C(0) = 0, and by T the final time at which we implement the control over one cycle. We consider the total output C(T) as a constraint on the optimization problem, and our goal is first to find schedules that minimize and maximize aggression (x1) at the end of one cycle t = T subject to this constraint. For the uncontrolled case, we know x1 → 1/3 as t → ∞, and we compare the controlled cases with the uncontrolled case, both satisfying the constraint. Notice also that the linear growth rate in (3) is (a12 − a22) = 1 − 0 = 1, so we scale T the same way in our computations, as T = 1. We then perform the optimization adaptively over multiple cycles nT, using the end value of cycle nT as the initial condition to compute the optimal schedule for the (n + 1)st cycle. Using this method, we are able to identify absolute maximizers and minimizers as a function of the cycle time T.
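A minimal forward simulation of the controlled system (a sketch, assuming the parametrization a12(t) = 1 + 4u1(t), a21(t) = 5 + 6u2(t) implied by the stated ranges, with baseline a11 = 3, a22 = 0) shows how the two extreme constant controls bracket the uncontrolled endpoint at t = T:

```python
def rhs(x1, u1, u2):
    """Reduced controlled replicator equation for the Hawk frequency x1."""
    a11, a22 = 3.0, 0.0
    a12 = 1.0 + 4.0 * u1     # assumed control parametrization, range [-3, 5]
    a21 = 5.0 + 6.0 * u2     # assumed control parametrization, range [-1, 11]
    return x1 * (1 - x1) * ((a12 - a22) - ((a12 - a22) + (a21 - a11)) * x1)

def integrate(x1_0, schedule, T=1.0, dt=1e-4):
    """Forward-Euler integration over one cycle; schedule(t) -> (u1, u2)."""
    x1, t = x1_0, 0.0
    for _ in range(int(T / dt)):
        u1, u2 = schedule(t)
        x1 += dt * rhs(x1, u1, u2)
        t += dt
    return x1

uncontrolled = integrate(0.5, lambda t: (0.0, 0.0))
suppress = integrate(0.5, lambda t: (-1.0, 1.0))   # payoffs maximally anti-Hawk
promote = integrate(0.5, lambda t: (1.0, -1.0))    # payoffs maximally pro-Hawk
```

Any admissible schedule produces an endpoint between these two extremes, consistent with the bounding property of the bang-bang extremizers.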

FIG. 1.

Twelve regions in the (a12, a21) plane [29] define which game is being played. We choose a22 = 0 at the origin (without loss of generality). Starting at t = 0 in the Hawk-Dove square, what are the paths to travel that minimize and maximize aggression at time t = T?

FIG. 2.

Dynamics of the uncontrolled (u1 = 0, u2 = 0) Hawk-Dove evolutionary game. (a) Phase portrait associated with the aggressor population x1. Both Hawk and Dove dominance (x1 = 1, 0) are unstable fixed points, while the mixed state x1 = 1/3 is the evolutionarily stable strategy (ESS); (b) Hawk dynamics for various initial conditions. T = 1 marks the end of one control cycle, chosen to match the (unit) linear growth rate of the Hawk-Dove system.

A. Optimal control formulation

A standard form for implementing the Pontryagin maximum (minimum) principle with boundary value constraints is:

dx/dt = f(x(t), u(t)),  (11)
x(0) = x0,  (12)

where we would like to minimize or maximize a general cost function:

J = Φ(x(T)) + ∫₀ᵀ L(x(t), u(t)) dt.  (13)

Since the method is standard, we will just briefly describe the basic framework and refer readers to [34–38] for more details on how to implement the approach. Following [37] in particular (see page 62, Theorem 4.2.1), we construct the control theory Hamiltonian:

H(x1, x2, λ1, λ2, u1, u2) = L + λ1 f1 + λ2 f2,  (14)

where λ1, λ2 are the co-state functions (i.e. momenta) associated with x1 and x2, respectively. Assuming that (u1*, u2*) is the optimal control for this problem, with corresponding trajectory (x1*, x2*), the canonical equations satisfy:

dx1*/dt = ∂H/∂λ1,  (15)
dx2*/dt = ∂H/∂λ2,  (16)
dλ1*/dt = −∂H/∂x1,  (17)
dλ2*/dt = −∂H/∂x2,  (18)

with the right-hand sides evaluated along the optimal solution. The corresponding boundary conditions are:

x1*(0) = x1(0),  (19)
x2*(0) = 1 − x1(0),  (20)
λi*(T) = ∂Φ/∂xi |_{t=T}, i = (1, 2).  (21)

Then, at any point in time, the optimal control (u1*, u2*) will minimize the control theory Hamiltonian:

H(x1*, x2*, λ1*, λ2*, u1*, u2*) = min over −1 ≤ u1, u2 ≤ 1 of H(x1*, x2*, λ1*, λ2*, u1, u2).  (22)
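Because the right-hand side of the controlled replicator equation is linear in (u1, u2), the pointwise minimization of H over the admissible box [−1, 1] × [−1, 1] always lands on a corner, which is why the optimal schedules are bang-bang. A small sketch using a reduced one-state Hamiltonian H = λf (the payoff parametrization a12 = 1 + 4u1, a21 = 5 + 6u2 with a11 = 3, a22 = 0 is an assumption consistent with the stated ranges):

```python
import itertools

def f(x1, u1, u2):
    """Reduced controlled replicator right-hand side (linear in u1 and u2)."""
    a12, a21 = 1.0 + 4.0 * u1, 5.0 + 6.0 * u2
    return x1 * (1 - x1) * (a12 - (a12 + a21 - 3.0) * x1)

def argmin_H(x1, lam):
    """Pointwise minimizer of H = lam * f over the four corners of [-1, 1]^2."""
    corners = itertools.product([-1.0, 1.0], repeat=2)
    return min(corners, key=lambda u: lam * f(x1, u[0], u[1]))

u_min = argmin_H(0.5, lam=1.0)    # lam > 0: push x1 down as hard as possible
u_max = argmin_H(0.5, lam=-1.0)   # lam < 0: push x1 up as hard as possible
```

The sign of the co-state thus selects which corner is active at each instant, and the switch times occur where the switching functions change sign.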

The optimization problem becomes a two-point boundary value problem (using (19)–(21)) with unknowns λ1(0), λ2(0), whose solution gives rise to the optimal trajectory (x1*(t), x2*(t)) (from (15)) and the corresponding control (u1*(t), u2*(t)) that produces it [34–37]. We choose our cost function (13) to be the terminal aggressor frequency, J = x1(T), and we solve this problem by standard numerical shooting-type methods [37].
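The shooting mechanic can be sketched in a few lines. This simplified, illustrative version holds the control fixed at one corner, works with the reduced one-state problem, and enforces the terminal co-state condition λ(T) = ∂Φ/∂x1 = 1 for the cost J = x1(T); the payoff parametrization is an assumption consistent with the stated ranges. We guess the unknown initial co-state λ(0), integrate forward, and bisect on the terminal miss:

```python
def f(x1):
    """Reduced dynamics at the fixed corner (u1, u2) = (1, -1): a12 = 5, a21 = -1."""
    return x1 * (1 - x1) * (5.0 - x1)

def dfdx(x1, h=1e-6):
    """Centered finite difference for df/dx1."""
    return (f(x1 + h) - f(x1 - h)) / (2.0 * h)

def shoot(lam0, x1_0=0.5, T=1.0, dt=1e-4):
    """Integrate state and costate forward from a guessed lam(0); return lam(T)."""
    x1, lam = x1_0, lam0
    for _ in range(int(T / dt)):
        x1, lam = x1 + dt * f(x1), lam - dt * lam * dfdx(x1)
    return lam

# Bisect on lam(0) until the terminal condition lam(T) = 1 is met
# (lam(T) is monotone increasing in lam(0) for this linear costate equation).
lo, hi = 1e-6, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)
```

The full problem replaces the fixed corner with the pointwise Hamiltonian minimization at every step and shoots on both unknowns λ1(0), λ2(0).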

III. RESULTS

In this section we show the results of the adaptive optimal control method to minimize and maximize aggression at time T = 1, and then further at the end of multiple cycles t = nT. Figure 3(a)–(i) shows the maximizing (blue) and minimizing (red) trajectories for nine initial conditions. The corresponding bang-bang schedules that produce these trajectories are also shown in each case. It is straightforward to prove that the optimal schedules must be bang-bang, since the controllers enter the governing equations linearly. In each case, we show the uncontrolled (dashed curve) Hawk-Dove trajectory, which ends in between the maximizer and minimizer, as expected.

FIG. 3.

Maximizing (blue) and minimizing (red) trajectories for nine initial conditions. Dashed curve shows the uncontrolled Hawk-Dove trajectory which lands in between the max and min at T = 1. Dark (light) blue bar shows u1 = 1 (u2 = 1), white bar shows u1 = − 1 (u2 = − 1) associated with the maximizing control schedule; Dark (light) red bar shows u1 = 1 (u2 = 1), white bar shows u1 = − 1 (u2 = − 1) associated with the minimizing control schedule. All schedules are bang-bang. (a) x1(0) = 0.01; (b) x1(0) = 0.05; (c) x1(0) = 0.1; (d) x1(0) = 0.3; (e) x1(0) = 0.5; (f) x1(0) = 0.7; (g) x1(0) = 0.9; (h) x1(0) = 0.95; (i) x1(0) = 0.99.

Figure 4 shows the maximizing (blue) and minimizing (red) trajectories over n = 5 cycles. We obtain these adaptively, using the endpoint from the nth cycle to compute the optimal schedule for the following (n + 1)st cycle. Two special initial conditions are shown in figure 5. For x1(0) = 0.08, the minimizing (red) trajectory shown in figure 5(a) ends at x1(1) = 0.08, hence is periodic. This value (and the corresponding schedule) corresponds to an absolute minimizer x1^min for aggression x1. By contrast, for x1(0) = 0.79 shown in figure 5(b), the maximizing (blue) trajectory ends at x1(1) = 0.79, hence is periodic. This value (and the corresponding schedule) corresponds to an absolute maximizer x1^max for aggression. These two special initial conditions are shown in figure 6 over n = 5 cycles, confirming the periodicity of the minimizing (red) trajectory in figure 6(a) and of the maximizing (blue) trajectory in figure 6(b). The sequence of games that the system cycles through to achieve the minimizing sequence is shown in figure 7, while the maximizing sequence is shown in figure 8. These are obtained from eqn (3) and the four equations:

  1. u1 = 1; u2 = 1: dx1/dt = x1(1 − x1)(5 − 13x1);

  2. u1 = 1; u2 = − 1: dx1/dt = x1(1 − x1)(5 − x1);

  3. u1 = − 1; u2 = 1: dx1/dt = x1(1 − x1)(− 3 − 5x1);

  4. u1 = − 1; u2 = − 1: dx1/dt = x1(1 − x1)(− 3 + 7x1).
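The qualitative behavior of the four corner games can be read off from the interior rest point of each vector field, i.e. the root of the linear factor when it lies inside (0, 1). A sketch, under the assumed parametrization a11 = 3, a22 = 0, a12 = 1 + 4u1, a21 = 5 + 6u2:

```python
def interior_equilibrium(u1, u2):
    """Interior rest point of x1' = x1 (1 - x1) (a12 - (a12 + a21 - 3) x1):
    the root of the linear factor, if it lies strictly inside (0, 1)."""
    a12 = 1.0 + 4.0 * u1   # assumed control parametrization
    a21 = 5.0 + 6.0 * u2
    x1 = a12 / (a12 + a21 - 3.0)
    return x1 if 0.0 < x1 < 1.0 else None

eq_1 = interior_equilibrium(1, 1)     # corner 1: stable interior point at 5/13
eq_4 = interior_equilibrium(-1, -1)   # corner 4: unstable interior point at 3/7
eq_2 = interior_equilibrium(1, -1)    # corner 2: none, x1 grows on all of (0, 1)
eq_3 = interior_equilibrium(-1, 1)    # corner 3: none, x1 decays on all of (0, 1)
```

Corners 2 and 3 are the monotone "push up" and "push down" phases, while corners 1 and 4 supply interior rest points that the optimal schedules exploit.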

FIG. 4.

Maximizing (blue) and minimizing (red) trajectories for nine initial conditions over n = 5 cycles, with dashed blue/red curves joining the end values after each cycle. The adaptive schedule for the (n + 1)st cycle is calculated based on the endpoint of the nth cycle. Black dashed curve shows the uncontrolled Hawk-Dove trajectory. (a) x1(0) = 0.01; (b) x1(0) = 0.05; (c) x1(0) = 0.1; (d) x1(0) = 0.3; (e) x1(0) = 0.5; (f) x1(0) = 0.7; (g) x1(0) = 0.9; (h) x1(0) = 0.95; (i) x1(0) = 0.99.

FIG. 5.

Maximizing (blue) and minimizing (red) trajectories for the two special initial conditions x1(0) = 0.08, 0.79. For the larger initial condition, the maximizing schedule (blue) produces an absolute maximum x1^max = 0.79 for T = 1. For the smaller initial condition, the minimizing schedule (red) produces an absolute minimum x1^min = 0.08 for T = 1. Dashed curve shows the uncontrolled Hawk-Dove trajectory. Dark (light) blue bar shows u1 = 1 (u2 = 1), white bar shows u1 = − 1 (u2 = − 1) associated with the maximizing control schedule. (a) x1(0) = 0.08; (b) x1(0) = 0.79.

FIG. 6.

Maximizing (blue) and minimizing (red) trajectories for the two special initial conditions x1(0) = 0.08, 0.79 over n = 5 cycles. Notice that the minimizing trajectory (red) shown in (a) exactly repeats for each cycle (x1(0) = x1(T)) since the schedule is an absolute minimizer, while the maximizing trajectory (blue) shown in (b) exactly repeats for each cycle (x1(0) = x1(T)) since the schedule is an absolute maximizer. Dashed curve shows the uncontrolled Hawk-Dove trajectory. (a) x1(0) = 0.08. Dashed red horizontal line indicates the absolute minimizer at x1^min = 0.08; (b) x1(0) = 0.79. Dashed blue horizontal line indicates the absolute maximizer at x1^max = 0.79.

FIG. 7.

Minimizing sequence of four games (Deadlock; Leader; Prisoner’s Dilemma; Game 9) associated with initial condition x1(0) = 0.08 that produces the absolute minimizer. Red line marks the distance decreased from initial point (unfilled red dot) to final point (filled red dot). Green line marks the distance increased from initial point (unfilled green dot) to final point (filled green dot).

FIG. 8.

Maximizing sequence of four games (Prisoner’s Dilemma; Leader; Deadlock; Game 9) associated with initial condition x1(0) = 0.79 that produces the absolute maximizer. Red line marks the distance decreased from initial point (unfilled red dot) to final point (filled red dot). Green line marks the distance increased from initial point (unfilled green dot) to final point (filled green dot).

In figure 9 we show the minimizing and maximizing values of x1(T) vs. x1(0) through the full range 0 ≤ x1(0) ≤ 1. Notice that at the endpoints, the two values converge (since for the linear system the schedule does not matter, only the total control output C(T)). Figure 9(b) shows the ratio x1(T)/x1(0) − 1 (percentage increase or decrease) vs. initial condition x1(0) through the full range 0 ≤ x1(0) ≤ 1. When the maximizing (blue) curve crosses x1(T)/x1(0) − 1 = 0 (i.e. x1(0) = x1(T)), an absolute maximizer is achieved (at x1(0) = 0.79), while when the minimizing (red) curve crosses x1(T)/x1(0) − 1 = 0, an absolute minimizer is achieved (at x1(0) = 0.08). In figure 10 we show how x1^min and x1^max depend on the cycle-time T. Interestingly, as T → 0, both x1^min and x1^max approach 1/3, which is the ESS for the uncontrolled Hawk-Dove system. For T > 3, x1^min → 0 and x1^max → 1, showing that for large enough cycle times we can drive either of the sub-populations to extinction or to fixation.
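The periodicity condition x1(0) = x1(T) that defines the absolute extremizers can be illustrated as a fixed-point search on a one-cycle map. The schedule below is not one of the paper's optimal schedules, just an arbitrary two-phase bang-bang cycle under the assumed parametrization a11 = 3, a22 = 0, a12 = 1 + 4u1, a21 = 5 + 6u2; bisection then finds an initial condition that returns to itself after one cycle:

```python
def rhs(x1, u1, u2):
    a12, a21 = 1.0 + 4.0 * u1, 5.0 + 6.0 * u2   # assumed parametrization
    return x1 * (1 - x1) * (a12 - (a12 + a21 - 3.0) * x1)

def cycle_map(x1, T=1.0, dt=1e-4):
    """One control cycle: suppress aggression for T/2, then promote it for T/2."""
    n = int(T / dt)
    for k in range(n):
        u1, u2 = (-1.0, 1.0) if k < n // 2 else (1.0, -1.0)
        x1 += dt * rhs(x1, u1, u2)
    return x1

# A periodic point satisfies cycle_map(x) = x; for this schedule the map lies
# above the diagonal near 0 and below it near 1, so bisection brackets a root.
lo, hi = 0.01, 0.99
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if cycle_map(mid) > mid:
        lo = mid
    else:
        hi = mid
x1_periodic = 0.5 * (lo + hi)
```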

FIG. 9.

(a) Hawk initial condition x1(0) versus Hawk frequency at final time x1(T) for maximizing (blue) and minimizing (red) schedules. Vertical dashed line at x1(0) = 0.65 marks the maximum difference between the minimizer and the maximizer; (b) Change in Hawk frequency as a function of initial condition. Points above the line x1(T)/x1(0) − 1 = 0 represent an increase over time and points below this line represent a decrease over time. The two intersection points x1(0) = 0.08 and x1(0) = 0.79 mark the absolute minimizer and maximizer initial conditions for T = 1.

FIG. 10.

x1^max(T) (blue) and x1^min(T) (red) as a function of cycle-time T. Dashed horizontal line at x1 = 1/3 is the ESS for the uncontrolled Hawk-Dove system, where the two curves meet as T → 0.

IV. DISCUSSION

Our goal in this manuscript is to lay out the general mathematical framework for determining optimal dynamic incentive schedules (time-dependent payoff schedules) that maximize/minimize certain behaviors in an evolutionary game theory setting, using the 2 × 2 replicator dynamical system with a Hawk-Dove payoff matrix as our baseline. By changing the payoff entries in a time-dependent manner, subject to constraints, we are altering the payoff-reward structure of the Hawk-Dove interaction as the populations evolve, which is equivalent to selecting a sequence of 2 × 2 evolutionary games in such a way that an optimum is achieved after a fixed passage of time. The determination of these schedules requires a balance between the timescale on which the payoffs change and the timescale of the underlying replicator dynamical system, in such a way that the Pontryagin maximum/minimum principle is satisfied.

As mentioned earlier, there are many settings in which dynamic payoffs can be used to achieve a certain outcome (developing chemotherapeutic schedules that manage chemo-resistance, antibiotic scheduling to avoid and even reverse antibiotic resistance in microbial populations, or the introduction of economic incentive packages to guide behavior). One of the more compelling potential applications of the methods developed in this paper is to frame people's attitudes towards vaccination acceptance as a social contract [39, 40] and to devise dynamic incentive methods to encourage vaccination acceptance, as well as to explore their theoretical limitations. Our method uses the Pontryagin maximum/minimum principle along with the 2 × 2 replicator dynamical system, with constraints, to determine schedules over one cycle time T; we then extend the results adaptively over multiple cycles nT. We show this leads to the identification of an absolute maximizer x1^max and minimizer x1^min for the aggressor population, both of which are functions of the cycle time T. We believe the general framework laid out in the paper can be extended to N × N replicator systems, as well as to discrete (stochastic) models for the interaction of a finite number of participants using a Moran process, and we are currently extending the methods in this paper to include those settings.

ACKNOWLEDGMENTS

We gratefully acknowledge support from the Army Research Office MURI Award #W911NF1910269 (2019-2024).

Footnotes

* kstuckey@usc.edu
† rajvirdu@usc.edu
‡ yongqiam@usc.edu
§ joep@caltech.edu

References

[1] J. M. Smith, On Evolution, 8 (1972).
[2] J. M. Smith and G. Price, Nature 246, 15 (1973).
[3] J. M. Smith, J. Theor. Bio. 47, 209 (1974).
[4] J. Smith, Evolution and the Theory of Games (Cambridge University Press, 1982).
[5] M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Harvard University Press, 2006).
[6] S. Brams, Game Theory and Politics (Dover, 2004).
[7] R. Cressman and Y. Tao, Proc. Nat'l. Acad. Sci. 111, 10810 (2014).
[8] P. K. Newton and Y. Ma, Physical Review E 99, 022404 (2019).
[9] Y. Ma and P. K. Newton, Physical Review E 103, 032408 (2021).
[10] T. Borgers and R. Sarin, J. Econ. Theory 77, 1 (1997).
[11] J. Hofbauer and K. Sigmund, Evolutionary Games and Population Dynamics (Cambridge University Press, 1998).
[12] J. Weitz, C. Eksin, K. Paarporn, S. Brown, and W. Ratcliff, Proc. Nat'l. Acad. Sci., doi: 10.1073/pnas.104096113 (2016).
[13] A. Tilman, J. Plotkin, and E. Akcay, Nature Comm. 11, https://doi.org/10.1038/s41467 (2020).
[14] D. Friedman, Econometrica 59, 637 (1991).
[15] L. Hurwicz and S. Reiter, Designing Economic Mechanisms (Cambridge University Press, 2006).
[16] S. Higgins, E. Klemperer, and S. Coleman, Preventive Med. 145, 106421 (2021).
[17] P. Yeh, M. Hegrennes, A. Aiden, and R. Kishony, Nature Rev. Micro. 7, 460 (2009).
[18] R. Lenski, Internatl. Microbiol. 1, 265 (1998).
[19] J. West, Z. Hasnain, J. Mason, and P. K. Newton, Converg. Sci. Phys. Oncol. 2, 035002 (2016).
[20] J. West, L. You, J. Zhang, R. Gatenby, J. Brown, P. K. Newton, and A. Anderson, Cancer Research 80, 1578 (2020).
[21] M. Gluzman, J. Scott, and A. Vladimirsky, Proc. Roy. Soc. B 287, 20192454 (2020).
[22] B. San Goh, G. Leitmann, and T. L. Vincent, Mathematical Biosciences 19, 263 (1974).
[23] S. Bewick, R. Yang, and M. Zhang, in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE, 2009), pp. 6026–6029.
[24] H. R. Joshi, Optimal Control Applications and Methods 23, 199 (2002).
[25] R. B. Martin, Automatica 28, 1113 (1992).
[26] G. W. Swan, Mathematical Biosciences 101, 237 (1990).
[27] A. J. Coldman and J. Murray, Mathematical Biosciences 168, 187 (2000).
[28] P. K. Newton and Y. Ma, Physical Review E 103, 012304 (2021).
[29] A. Kaznatcheev, Complex Adaptive Systems: Resilience, Robustness, and Evolvability, FS-10-03, 71 (2010).
[30] B. Hölldobler and E. O. Wilson, The Ants (Harvard University Press, 1990).
[31] D. H. Kistner, Social Insects 3, 1 (1982).
[32] J. Parker, Myrmecological News 22, 65 (2016).
[33] M. Maruyama and J. Parker, Current Biology 27, 920 (2017).
[34] L. S. Pontryagin, Mathematical Theory of Optimal Processes (1987).
[35] E. B. Lee and L. Markus, Foundations of Optimal Control Theory, Tech. Rep. (Minnesota Univ. Minneapolis Center for Control Sciences, 1967).
[36] I. Ross, A Primer on Pontryagin's Principle in Optimal Control, 2nd ed. (Collegiate Press, 2015).
[37] K. L. Teo, C. Goh, and K. Wong, A Unified Computational Approach to Optimal Control Problems (Academic Press, 1991).
[38] P. Newton and Y. Ma, Am. J. Phys. 89, 134 (2021).
[39] C. Bauch and D. Earn, Proc. Nat'l Acad. Sci. 101, 13391 (2004).
[40] L. Korn, R. Bohm, N. Meier, and C. Betsch, Proc. Nat'l Acad. Sci. 117, 14890 (2020).
Posted August 15, 2021.