## Abstract

Models of adaptive bet-hedging commonly adopt insights from Kelly’s famous work on optimal gambling strategies and the financial value of information. In particular, such models seek evolutionary solutions that maximize the long-term average growth rate of lineages, even in the face of highly stochastic growth trajectories. Here, we argue for extensive departures from the standard approach to better account for evolutionary contingencies. Crucially, we incorporate considerations of volatility minimization, motivated by interim extinction risk in finite populations, within a finite time horizon approach to growth maximization. We find that a game-theoretic competitive-optimality approach best captures these additional constraints, and derive the equilibrium solutions under various fitness payoff functions.

## 1. Introduction

`‘‘Adversity has the effect of eliciting talents, which in prosperous circumstances would have lain dormant.’’ -- Horace (65BC-8BC)`

Kelly’s work on optimal gambling strategies and the value of side information was arguably the first convincing attempt at applying concepts from information theory in a different field [Kelly, 1956]. This work was the precursor to growth-optimal portfolio theory, which has extended the basic ideas to the realm of capital markets ([Cover and Thomas, 2006]). There has recently been a resurgence of interest in employing insights from optimal gambling theory in models of adaptive bet-hedging under fluctuating environments, where close analogies between the economic and biological settings have been convincingly made apparent ([Bergstrom, 2014]; [Rivoire and Leibler, 2011]; [Donaldson-Matasci et al., 2010]).

Biological bet hedging was originally proposed to explain the observation of un-germinated seeds of annual plants ([Cohen, 1966]). This strategy involves the variable phenotypic expression of a single genotype, rather than genetic polymorphism, although it is difficult to empirically determine whether observed phenotypic diversity in a population arises from randomization by identical genomes or from an underlying polymorphism ([Seger and Brockmann, 1987]). Indeed, evolutionary biologists have long acknowledged that in a stochastically variable environment, natural selection is likely to favor a gene that randomizes its phenotypic expression ([Bergstrom, 2014]). Recent work has revealed a variety of potential instances of bet-hedging populations: delayed germination in desert winter annual plants that meets postulated criteria of adaptive bet hedging in a variable environment ([Gremer and Venable, 2014]), bacterial persistence in the presence of antibiotics that appears to constitute an adaptation tuned to the distribution of environmental change ([Kussell et al., 2005]), flowering times in Lobelia inflata which point to flowering being a conservative bet-hedging strategy ([Simons and Johnston, 2003]), or even bet-hedging as a behavioural phenotype, such as nut hoarding in squirrel populations in anticipation of short or long winters ([Bergstrom, 2014]).

Notwithstanding these empirical findings, identifying actual cases of adaptive bet hedging in the wild remains elusive. As [Seger and Brockmann, 1987] noted more than three decades ago, it is in general difficult to determine whether observed diversity of behavior in a population arises from randomization by genetically identical individuals or from genetic heterogeneity among co-located individuals optimized for different environmental conditions. Moreover, phenotypic heterogeneity can arise within genetically homogeneous populations as a form of specialization in a stable environment through stochastic gene expression, positive feedback loops, or asymmetrical cell division, all processes where bet-hedging is not at play ([Rubin and Doebeli, 2017]). These difficulties provide further impetus for constructing better and more elaborate models to test against the data.

Of particular note in classic bet hedging models is the adoption from economic theory of asymptotic growth rate optimality as the target function for fitness maximization strategies, where growth in wealth is analogous to growth in lineage size. Indeed, since evolution proceeds by shifting gene frequencies over generations, with frequency changes being multiplicative, long-term fitness is commonly measured by geometric mean fitness across generations ([Hopper, 2018]). At the same time, it is also widely acknowledged that long-run growth rate is not a valid measure of fitness under fluctuating environments, such as in the case of bet-hedging populations ([Lande, 2007]).

The resulting intrinsic unpredictability has led some researchers to formulate a probabilistic perspective for natural selection that integrates various effects of uncertainty on natural selection ([Yoshimura et al., 2009]). The applicability of geometric mean fitness has also come into question under finite population models, where the probability of fixation provides additional and sometimes more suitable information than the geometric mean fitness ([Proulx and Day, 2001]), and in periodically cycling selection regimes, where evolutionary success depends on the length of the cycle and the strength of selection ([Ram et al., 2018]). Moreover, both gambling and bet-hedging models targeting optimal growth rate implicitly assume an infinite time horizon in formulating the geometric average, and thereby ignore the finiteness of actual horizons over which both economic and evolutionary processes ultimately act. The problem is further amplified when interim extinction risk is taken into account, especially under finite population models. Lineage growth trajectories which are highly stochastic are at risk of large ‘draw-downs’, which may pull the population below some extinction threshold, despite possessing a high asymptotic growth rate. Here we aim to incorporate considerations of finite evolutionary horizons and extinction risk in the search for adaptive optimality in bet hedging models.

### 1.1 The standard model

Most adaptive bet-hedging models are largely based on the classic horse-race gambling model associated with Kelly (1956), where the biological counterpart is a lineage apportioning bets on several possible environments. Assume that *k* horses run in a race, and let horse *i* win with probability *p*_{i}. If horse *i* wins, the odds pay *o*_{i} for 1. A gambler wishes to apportion his bankroll among the horses in fractions 0 < *f*_{i} ≤ 1, such that ∑ *f*_{i} = 1, and to participate in indefinitely repeated races (*n* → ∞). How best to apportion the bankroll each time? In this setting, wealth is a discrete-time stochastic process over *n* periods,

$$W_n = \prod_{t=1}^{n} W(X_t),$$

where *W* (*X*) = *f* (*X*)*O*(*X*) is the random factor by which the gambler’s wealth is multiplied when horse *X* wins. More explicitly,

$$W_n = \prod_{t=1}^{n} f(X_t)\, o(X_t).$$

Kelly’s first insight was that choosing to simply maximize expected wealth (for any time horizon *n*), arg max_{f} *E*[*W*_{n}(*f*)], implies betting everything on a single horse (the one with the highest expected return *p*_{i}*o*_{i}), with a consequent chance of total ruin once that horse loses a race. Kelly therefore proposed maximizing the asymptotic growth rate instead (the rigorous justification was later provided by [Breiman, 1961]). By the law of large numbers, random wealth may be expressed as

$$W_n \approx 2^{\,n G(f)},$$

where

$$G(f) = E\big[\log_2 f(X)O(X)\big] = \sum_{i=1}^{k} p_i \log_2 f_i o_i$$

is the asymptotic exponential growth rate. If the gambler stakes his entire wealth each time, i.e., ∑ *f*_{i} = 1, then

$$G(f) = \sum_{i} p_i \log_2 o_i - H(p) - D(p\,\|\,f)$$

is maximized (a convex nonlinear optimization) at “proportional gambling” *f* = *p*, where *D*(*p* ‖ *f*) is minimized, regardless of the actual odds offered by the bookie.
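The proportional-gambling optimum can be checked numerically. The following minimal sketch uses hypothetical win probabilities and odds (not from the text) and confirms that no full-stake allocation achieves a higher growth rate than *f* = *p*:

```python
import numpy as np

def growth_rate(f, p, o):
    # G(f) = sum_i p_i * log2(f_i * o_i), the asymptotic growth rate
    return float(np.sum(p * np.log2(f * o)))

p = np.array([0.5, 0.3, 0.2])   # hypothetical win probabilities
o = np.array([2.0, 4.0, 8.0])   # hypothetical o_i-for-1 odds

g_star = growth_rate(p, p, o)   # proportional gambling f = p

# No other full-stake allocation beats f = p, since G(f) differs
# from G(p) exactly by -D(p || f) <= 0
rng = np.random.default_rng(0)
for _ in range(1000):
    f = rng.dirichlet(np.ones(3))
    assert growth_rate(f, p, o) <= g_star + 1e-9
```

Note that the odds *o* shift every candidate’s growth rate by the same constant, which is why the optimum is independent of them.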

Indeed, the notion of *proportional gambling*, made famous by Kelly’s treatment, has found its way into classic models of diversified bet hedging. In such models it is often assumed that “appropriate phenotypes are produced in proportion to the likelihood of each environment” ([Hopper, 2018]) and that consequently “the classical bet-hedging prediction [is] that the optimum probability for employing a strategy is approximately equal to the probability that the strategy will be useful” ([King and Masel, 2007]). Here we follow recent approaches that extend the standard model to non-lethal environments via a full fitness matrix, such that this notion is no longer directly applicable.

[Breiman, 1961] was the first to show that the Kelly solution is optimal in two convincing ways: [a] given a Kelly strategy *ϕ** and any other “essentially different” strategy *ϕ* (not necessarily a fixed fractional betting strategy),

$$\lim_{n \to \infty} \frac{W_n(\phi^*)}{W_n(\phi)} = \infty \quad \text{almost surely,}$$
and [b] that it minimizes the expected time to reach asymptotically large wealth goals. Moreover, this strategy is myopic in the sense that at each iteration of the race one only needs to consider the presently given parameters ([Hakansson, 1971]). However, Kelly strategies may also yield tremendous drawdowns, a problem widely recognized in the gambling community, such that optimal Kelly is often viewed as “too risky”; in practice, gamblers and investors use ‘fractional Kelly’, which deviates from the optimal solution but reduces the effective variance of the stochastic growth (Fig. 1). In the biological framework, this can lead to abrupt extinction events in finite (especially small) populations with highly stochastic lineage growth trajectories. A further complication is that the underlying probability distributions are merely estimated from past data and model assumptions, often leading to over-betting and increased risk ([MacLean et al., 2011]).
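The trade-off exploited by fractional Kelly can be made concrete in Kelly’s original binary even-odds setting. A minimal sketch with a hypothetical win probability *p* = 0.6 (not a value from the text): betting half the Kelly fraction lowers the mean log-growth but also lowers the per-generation variance that drives drawdowns.

```python
import math

def step_stats(f, p):
    # Per-generation log-growth of an even-odds bet of fraction f:
    # gain factor 1+f with prob p, 1-f with prob 1-p
    up, dn = math.log(1 + f), math.log(1 - f)
    mean = p * up + (1 - p) * dn
    var = p * (1 - p) * (up - dn) ** 2
    return mean, var

p = 0.6
f_kelly = 2 * p - 1              # Kelly fraction maximizing mean log-growth
m_full, v_full = step_stats(f_kelly, p)
m_half, v_half = step_stats(0.5 * f_kelly, p)

assert m_half < m_full           # fractional Kelly sacrifices growth...
assert v_half < v_full           # ...but reduces trajectory variance
```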

In this work, we extend the existing models to incorporate both interim extinction risk and finite evolutionary time horizons within a bet hedging framework. This requires re-conceptualizing geometric-mean fitness for such highly stochastic growth scenarios. We ultimately derive fitness functions that better account for such conditions where the fluctuating environment is strongly coupled to both long and short-term growth, and locate optimal stable equilibria.

## 2. Model

### 2.1 The full-fitness matrix model

We assume environments are i.i.d. random events across generations, multinomially distributed (with some results generalized to non-identically distributed environments). Individuals within lineages have a static full fitness matrix [*O*_{ij}] in which nonlethal environments have low but generally non-zero fitness ([Donaldson-Matasci et al., 2010]; [Rivoire and Leibler, 2011]). We adopt a finite-population model where lineages start off with some initial population size *W*_{0}, implicitly assumed higher than some bet-hedging evolutionary threshold ([King and Masel, 2007]). Lineages then evolve strategies to randomize individual phenotypes towards maximizing growth across finite horizons in the face of interim extinction threats. More formally, with *k* environments and phenotypes, let *X*_{t} ∈ {1, …, *k*} denote the environment at generation *t*, with *P*(*X*_{t} = *i*) = *p*_{i}. The general model of lineage growth trajectory across *n* generations under strategy *f* is then a random process

$$W_n(f) = W_0 \prod_{t=1}^{n} \sum_{j=1}^{k} f_j\, O_{X_t j},$$

where [*O*_{ij}] is the fitness matrix, with off-diagonal values reflecting the lower fitness of non-matching environments (*O*_{ij} < *O*_{ii} for *j* ≠ *i*), and where all individuals in a lineage are bet-hedging with the same strategy, ∑_{j} *f*_{j} = 1, *f*_{j} ≥ 0.

Finally, a straightforward formulation of the growth rate is *G*_{n}(*f*) = (1/*n*) log(*W*_{n}(*f*)/*W*_{0}), a random variable for any finite horizon.

We first derive the asymptotic growth-rate optimal “Kelly” solution for this setting (*f*^{Kelly}), with a corresponding bet-hedging region of the environment simplex (Appendix A). Relaxing the assumption of i.i.d. environments, we derive the static Kelly solution for the case of nonstationary environments – where environments are independent but not identically distributed across generations (Appendix B). While under nonstationary environments an optimal growth rate is reached with a dynamic myopic strategy, we focus here on a static strategy, since adaptations effectively stabilize across time spans much longer than single generations, such that dynamic strategies are unlikely to emerge from evolutionary considerations. Alternative models of fluctuating environments, such as Markov chains with underlying switching probabilities (e.g. [Li et al., 2017]), are not pursued here and left for future work. Finally, we identify a ‘reference’ strategy that admits deterministic growth trajectories, namely the “Dutch book” solution (where the variance of the finite-time growth rate is zero), and characterize the consequent loss of growth incurred by exchanging opportunity for certainty (Appendix C).
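The “Dutch book” reference strategy can be illustrated with a small sketch: for a hypothetical 2×2 fitness matrix (values chosen for illustration, not from the appendices), choosing *f* to equalize the per-generation multiplier across environments yields a deterministic trajectory, i.e., a zero-variance growth rate.

```python
import numpy as np

# Hypothetical fitness matrix O[i, j]: environment i, phenotype j,
# with lower off-diagonal (non-matching) fitness
O = np.array([[2.0, 0.5],
              [0.5, 1.5]])

# "Dutch book": solve for f with (O @ f) equal in both environments
# and f summing to 1, making the per-generation multiplier deterministic
A = np.vstack([O[0] - O[1], np.ones(2)])
f_dutch = np.linalg.solve(A, np.array([0.0, 1.0]))

multipliers = O @ f_dutch
assert np.allclose(multipliers[0], multipliers[1])  # same factor either way
```

With these numbers the lineage multiplies by the same factor each generation regardless of which environment occurs, trading growth opportunity for certainty.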

### 2.2 Relative fitness payoff function

We now wish to go beyond the standard approach of targeting the optimization of the asymptotic growth rate, as undertaken in the previous section, to incorporate finite evolutionary horizons and extinction risk considerations. For the sake of simplicity, we confine our model here to the case of *k* = 2 environments and phenotypes (so that the two environments occur with probability *p* and 1 − *p*). To motivate the shift to a finite horizon framework, we first highlight an important property of our stochastic growth model, also known in portfolio theory ([Markowitz, 2006]). We prove that for any two essentially different strategies, the maximal time *n*_{0} for which one lineage “dominates” the other is finite for every realization of the lineage trajectory pair (Appendix D). The exponentially diminishing histogram of last intersection times of two strategies in Fig. 2B demonstrates this phenomenon (with an instance of two growth trajectories for illustration in Fig. 2A).

The sustained variance and high skewness of the growth rate distribution under any finite horizon necessitate a comparative approach in formulating a fitness payoff function (in fact, the growth rate is asymptotically log-normal, as shown in Appendix E). Consider a relative fitness measure for two different lineage strategies *f* and *g*,

$$P\big(W_n(f) > W_n(g)\big), \tag{3}$$

with an induced relation defined by

$$f \succeq_n g \iff P\big(W_n(f) > W_n(g)\big) \ge \tfrac{1}{2}.$$

We may interpret this probabilistic relation between two strategies as relative fitness. Note that since realizations of *W*_{n}(*f*) and *W*_{n}(*g*) stem from the same underlying stochastic environmental sequence, they will generally be highly correlated (with logarithmic growth rates in fact perfectly correlated, as shown in Appendix F). Consequently, the probability in Eq. (3) must be derived from their joint distribution rather than simply from marginal distributions. Fig. 3 depicts realizations of *W*_{n}(*f*) and *W*_{n}(*g*) as histogram distributions for some finite evolutionary horizon.

A few properties of the order induced by this relation are worth highlighting: [a] it is a complete order, since any two *W*_{n} are comparable under the relation; [b] it is transitive for any *n*, and consequently a preorder; and [c] its maximal element is *f*^{Kelly} = arg max_{f} *E*[log *W*_{n}(*f*)], such that both the order induced by *E*[log *W*_{n}(*f*)] and the order induced by the payoff *P* (*W*_{n}(*f*) > *W*_{n}(*g*)) form complete preorders with the same maximal element (Appendix G). Despite these beneficial properties, given any ‘vanilla’ strategy *g* and time horizon *n*, the strategy that maximizes the payoff function,

$$\arg\max_f P\big(W_n(f) > W_n(g)\big),$$

will vary as a function of *g* and *n* (as demonstrated by counterexamples), and in particular will not necessarily be *f*^{Kelly}. This implies that a wildtype lineage with strategy *g* different from *f*^{Kelly} will eventually be overtaken by some mutant invasive lineage with a strategy that maximizes this payoff function, a process that may potentially remain in recurrent flux, with invasive lineages repeatedly replacing the wildtype.
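A sketch of how the relative payoff might be estimated by Monte Carlo, with both trajectories driven by the same environmental sequence so that the comparison is made on the joint distribution (fitness matrix, probabilities, and strategies are hypothetical, not from the appendices):

```python
import numpy as np

rng = np.random.default_rng(1)
O = np.array([[2.0, 0.5],        # hypothetical fitness matrix
              [0.5, 1.5]])
p, n, trials = 0.7, 50, 2000

def log_wealth(f, envs):
    # Both strategies are evaluated on the SAME environment draws;
    # marginal distributions alone would misestimate the payoff
    return np.sum(np.log(O[envs] @ f), axis=1)

f = np.array([0.7, 0.3])         # closer to the log-optimal strategy
g = np.array([0.5, 0.5])
envs = rng.choice(2, size=(trials, n), p=[p, 1 - p])
payoff = np.mean(log_wealth(f, envs) > log_wealth(g, envs))
```

With these toy numbers the strategy nearer the log-optimum wins the vast majority of paired realizations, even at a modest horizon.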

### 2.3 Competitive optimality with risk

To see whether evolutionary stable optima may also emerge we develop a game-theoretic approach. Players are lineages with particular bet-hedging strategies and random initial population size. Lineages interact by competing over a common niche subject to the same environmental fluctuations. This set-up is in some contrast to more standard evolutionary game theory settings, where agents are organisms rather than lineages and where the notion of an iterated strategy is prominent, but maintains the central aspect of interactions formalized in a payoff function (e.g., [Stollmeier and Nagler, 2018]). A lineage survives the competitive encounter by avoiding extinction (defined in what follows) while exceeding its opponent in size over a given time horizon. This outcome is determined by a game-theoretic deterministic payoff function, modified from Eq. (3) to incorporate an extinction threshold and randomized initial lineage size. Ultimately, we are searching for Nash equilibria.

This approach is motivated by the classic work on time-invariant game-theoretic competitive optimality within the scope of growth-optimal portfolio theory ([Bell and Cover, 1980, Bell and Cover, 1988]). Bell and Cover consider a competitive setting for a stock portfolio model under any finite number of investment periods and prove that for any relative wealth payoff *E*[*ϕ*(*UW*_{1}/*VW*_{2})] and portfolio wealths *W*_{1} and *W*_{2}, there are conditions on the function *ϕ* such that the log-optimal Kelly portfolio is a solution to the game, given initial randomizations *U* and *V* (independent and of equal expectation). In particular, *ϕ*(*x*) = *χ*_{[1, ∞)}(*x*) results in the payoff ℙ(*UW*_{1} ≤ *VW*_{2}) with the log-optimal portfolio as a game-theoretic solution, given some initial fair randomizations. This additional fair randomization reduces the effect of small differences in end wealth, thus avoiding unwanted cases where the optimal strategy is beaten by a small amount most of the time ([Cover and Thomas, 2006]).

### 2.4 The payoff function in a game-theoretic setting

For any time-horizon *n* and extinction threshold *d*, we define a (deterministic) payoff function,

$$M_n(f, g) = P\Big(u_0 W_n(f) > v_0 W_n(g)\ \text{ and }\ \min_{t \le n} u_0 W_t(f) > d\Big),$$

with initial population size randomizations *u*_{0} and *v*_{0}, independent and of the same mean but possibly of different distribution classes.

This payoff function induces a symmetric, discrete-valued, non-constant-sum game; although it is conceptually “zero-sum”, extinction events make *M*_{n}(*f, g*) + *M*_{n}(*g, f*) < 1 (Appendix H). Crucially, our payoff matrix is finite, since it reflects the finitely many strategies possible in a finite-population model – there can only be *N* differently sized partitions of a population of size *N* when betting on two environments (under *k* = 2 environments and phenotypes). A low-resolution toy-model instance of the payoff matrix is depicted in Fig. 4.

Our goal is to identify pure-strategy Nash equilibria reflecting the evolutionary solutions to competitive bet-hedging. In particular, we would like to explore the conditions under which a bet-hedging setting admits a symmetric equilibrium and whether it is unique. In Appendix I we prove that for an infinite-size payoff matrix (i.e., continuous strategies) the log-optimal strategy is the solution to this game, invariant with the choice of time horizon. Moreover, any finite matrix representing the *N* strategies possible for a lineage of finite size *N* necessarily also admits a solution, as illustrated in Fig. 5. This solution is the strategy closest to the log-optimal strategy under the finite resolution framework, such that it converges to it asymptotically with *N* (Appendix L). Finally, under a nonstationary environment model the log-optimal strategy again emerges as the equilibrium static strategy – even given short time horizons (Appendix M).

The effect of extinction thresholds on actual rates of extinction of random lineage trajectories is illustrated in Fig. 6A. As would be expected, extinction rates converge quickly to asymptotic values that derive from the threshold values (Appendix N). Numerical simulations indicate that when incorporating low extinction thresholds that result in low extinction rates, the symmetric Nash equilibrium remains stable at the log-optimal strategy. Higher thresholds may result in a number of scenarios: a shift of the symmetric equilibrium away from the log-optimal solution, a complete lack of equilibrium solution, or the emergence of multiple symmetric equilibria; in conjunction, multiple pairs of off-diagonal equilibria may appear (see Fig. 6B for one such scenario).
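The game payoff with extinction and fair initial randomization can likewise be sketched by simulation. All parameter values below are hypothetical, and the payoff estimator follows the verbal description of a competitive encounter: a lineage wins if it avoids the extinction threshold and exceeds its opponent at the horizon.

```python
import numpy as np

rng = np.random.default_rng(2)
O = np.array([[2.0, 0.5], [0.5, 1.5]])   # hypothetical fitness matrix
p, n, trials, d = 0.7, 50, 2000, 0.1     # env prob, horizon, extinction threshold

def payoff(f, g, envs, u0, v0):
    # Estimate M_n(f, g): f's lineage stays above d throughout and
    # exceeds g's lineage in size at horizon n (shared environments)
    wf = u0[:, None] * np.cumprod(O[envs] @ f, axis=1)
    wg = v0[:, None] * np.cumprod(O[envs] @ g, axis=1)
    win = (wf[:, -1] > wg[:, -1]) & (wf.min(axis=1) > d)
    return win.mean()

f, g = np.array([0.7, 0.3]), np.array([0.5, 0.5])
envs = rng.choice(2, size=(trials, n), p=[p, 1 - p])
u0 = rng.uniform(0.5, 1.5, trials)   # fair initial randomizations:
v0 = rng.uniform(0.5, 1.5, trials)   # independent, equal expectation
m_fg = payoff(f, g, envs, u0, v0)
m_gf = payoff(g, f, envs, v0, u0)
assert m_fg + m_gf <= 1.0            # extinction makes the game non-constant-sum
```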

### 2.5 Minimum time to reach a population threshold size

To gain further perspective on optimal strategies under highly stochastic growth, we consider evolutionary competition between lineages where survival is determined by reaching a certain threshold of lineage size in minimal time (e.g. for *K*-selected species, see [Reznick et al., 2002]). In effect, the lineage with growth characteristics that minimize the time to reach a certain population size threshold “wins”, a setting with potential relevance in the context of competitively colonizing a limited niche, as in range expansion scenarios (see [Villa Martín et al., 2019] for a bet-hedging population expanding into an unoccupied space). We follow the classic results of [Breiman, 1961] on the log-optimal portfolio as the optimal strategy minimizing the expected time to reach an asymptotic target wealth, but instead of an infinite target we base the fitness payoff function on finite targets. Initial insight into the effect of strategy choice on the consequent distributions of minimal time (Fig. 7A) is provided by comparing their expectations, where the optimality of Kelly is already apparent (Fig. 7B).

Instead of considering expectations of (highly correlated) minimal time distributions, we devise a more informative fitness payoff function based on the joint distribution. Crucially, this payoff is naturally amenable to a game-theoretic approach, in line with the type of analysis in the previous section with payoff *M*_{n}(*f, g*). As before, we condition the probability on avoiding an extinction threshold. The payoff captures the probability that a trajectory following strategy *f* reaches threshold *c* before a trajectory following strategy *g*, conditioned on avoiding an extinction threshold *d*. If both trajectories reach *c* at the same time (since time is in discrete generations), then the one which overshoots *c* by the greater margin ‘wins’. Denote by *T* (*f, c*) the minimal time to reach target lineage size *c* under strategy *f*,

$$T(f, c) = \min\{\, t \ge 0 : W_t(f) \ge c \,\},$$

the first time the trajectory cuts the threshold *c*, with *T* (*f, c*) = ∞ if and only if the trajectory never cuts the threshold. The payoff matrix *M*_{c}(*f, g*) is then defined as the probability that a trajectory under *f* reaches *c* strictly before one under *g* (or, upon a tie, overshoots *c* by the greater margin), conditioned on both trajectories avoiding the extinction threshold *d*.

We then identify pure-strategy Nash equilibria reflecting the evolutionary solutions with the new relative payoff *M*_{c}(*f, g*). In Appendix J we prove that again Kelly is the solution to the game, invariant to the evolutionary ‘choice’ of target population size *c*, and that under a nonstationary environment regime Kelly emerges as the static equilibrium strategy. Finally, we highlight a deep mathematical link between this probabilistic perspective on minimal time optimality and the competitive optimality setting with payoff *M*_{n}(*f, g*). Formally, *M*_{c}(*f, g*) can be rewritten as a convex linear combination of finite-horizon payoffs over the events {*T* (*f, c*) = *n*} (see Appendix J for more details).
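The minimal-time comparison can be sketched by simulating first-passage times of lineage trajectories to a target size (fitness matrix, strategies, and target are hypothetical); a near-log-optimal strategy reaches the target sooner on average:

```python
import numpy as np

rng = np.random.default_rng(3)
O = np.array([[2.0, 0.5], [0.5, 1.5]])   # hypothetical fitness matrix
p, trials, horizon, c = 0.7, 1000, 400, 100.0

def hitting_times(f, envs):
    # T(f, c): first generation at which the trajectory reaches size c
    # (infinity if it never does within the simulated horizon)
    w = np.cumprod(O[envs] @ f, axis=1)
    hit = w >= c
    return np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, np.inf)

envs = rng.choice(2, size=(trials, horizon), p=[p, 1 - p])
t_f = hitting_times(np.array([0.7, 0.3]), envs)   # near log-optimal
t_g = hitting_times(np.array([0.5, 0.5]), envs)
assert np.mean(t_f) < np.mean(t_g)                # Kelly-like f is faster
```

The full payoff of the text would additionally compare the paired times per realization and condition on an extinction threshold; the sketch only contrasts the expectations, as in Fig. 7B.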

## 3. Discussion

In this work we provide further support for the robustness of the expected-log criterion as an optimality solution for biological bet hedging. We develop a game-theoretic framework inherently invariant to the span of evolutionary horizons while incorporating considerations of interim extinction risk, and use multiple optimality criteria to strengthen our results. This approach goes beyond standard models of bet-hedging, which focus on indefinite ‘long-term’ growth rates and ignore interim risk. Previous work generally upholds that “phenotypes with the greatest long-term average growth rate will dominate the entire population” as “the basic principle” used in optimization ([Yoshimura and Jansen, 1996]), or that a proxy for the likely outcome of evolution is “to think of organisms as maximizing the long-term growth rate of their lineage” ([Donaldson-Matasci et al., 2010]).

Nevertheless, some authors have recently acknowledged the importance of accounting for finite time horizons. For instance, [Rivoire and Leibler, 2011] note in passing that in their model “the growth rate emerges as a unique measure of fitness when considering the long-term limit *T* → ∞, but, if considering a finite “horizon”, there may be a different strategy that outperforms [it]”. Indeed, as some evolutionists have argued, short-term fitness measures are also needed to achieve a full understanding of how evolution works in variable environments, as geometric mean fitness concerns the long-run evolutionary outcome ([Okasha, 2018]). Moreover, long-term fitness metrics are typically formulated without regard to transient short-term population dynamics, in which lineages might come close to extinction. Under more inclusive models with extinction, selection in a fluctuating environment can also favor bet-hedging strategies that ultimately increase the risk of extinction ([Libby and Ratcliff, 2019]). Given such considerations, the benefit of explicitly incorporating extinction considerations in stochastic growth models is clearly evident.

We have opted to focus on symmetric Nash equilibria rather than evolutionary stable strategies (ESS), which are strategies that cannot be beaten if the fraction of the rival invading mutants in the population is sufficiently small, and are generally invoked in settings with iterative match-ups between individuals rather than lineages ([Smith and Price, 1973]). Since the payoff in our game theoretic setting pits one lineage against another (two different strategies) there is no explicit sense of invading mutants (but see [Olofsson et al., 2009] for an ESS approach to bet-hedging). Moreover, some of the classic aspects of Nash’s theorem do not directly apply within our setting. The theorem states that for every two-person zero-sum game with finitely many strategies there exists a mixed strategy that solves the game ([Nash, 1951]). While our framework is indeed “two-person” it is not zero-sum and has finitely many strategies. Crucially, since an implicit goal of theoretical work such as ours may be towards predicting which strategies are likely to evolve, we focus on pure strategies rather than mixed ones, where the uniqueness of the equilibrium solution emerges as especially beneficial (echoing the classic approach of growth rate log-optimality where there is always a unique solution due to convexity).

We are not the first to attempt to model the expected minimal time to reach a finite target, an extension of the seminal result of [Breiman, 1961] on properties of the log-optimal portfolio. [Aucamp, 1977] derived the first such analysis, given some basic assumptions concerning reaching a wealth target exactly vs. “overshooting” it. More recently, [Kardaras and Platen, 2010] find that in a continuous time or asset price model, where a finite target can be exactly reached with no overshooting, the Kelly solution is still optimal; in a discrete time model Kelly is only approximately optimal, but if “time rebates” are introduced (to compensate for overshooting the goal in the last investment period) it becomes exactly optimal. While these results on the expectation of the time distribution are in line with our analysis of stochastic lineage growth optimality, we obtain an even stronger result: given *finite* population size targets, the log-optimal strategy emerges as a Nash equilibrium under a payoff function based on the *joint* distribution of minimal time trajectories.

Interestingly, [Kelly, 1956] anticipated the application of his ideas to biological bet hedging, writing “Although the model adopted here is drawn from the real-life situation of gambling it is possible that it could apply to certain other economic situations… the essential requirements for the validity of the theory are the possibility of reinvestment of profits and the ability to control or vary the amount of money invested or bet in different categories.” It does not require a leap of the imagination to notice the analogies of “economic situations” to evolutionary strategies, of “reinvestment of profits” to biological reproduction and growth, and of the “control” of invested money to evolved adaptive optimality. Of course, this is best appreciated with Shannon’s famous “bandwagon” warning in mind, cautioning against hasty attempts to apply insights from information theory to other fields ([Shannon, 1956]).

### 3.1 Other approaches to optimization under finite horizon and risk

A seemingly straightforward way of introducing finite (albeit still arbitrary) horizons into optimization settings is by considering the expectation of a finite-horizon growth rate. This is the approach adopted in some recent stock portfolio models for finite horizons ([Vince and Zhu, 2013]; [Morgan, 2015]). Within our formalism from Eq. (2), this amounts to finding

$$\arg\max_f E\big[W_n(f)^{1/n}\big].$$

However, this implicitly assumes some arbitrary utility function, in this case the *n*-th root, the maximization of which requires some justification. In contrast, Kelly’s focus on arg max_{f} *E*[log *W*_{n}], while implicitly assuming logarithmic utility, is equivalent to the limit of the above expression as *n* → ∞, and leads to the desired optimality properties famously laid out by [Breiman, 1961].
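The claimed equivalence in the limit can be verified numerically: for i.i.d. races, independence gives E[*W*_{n}^{1/n}] = (E[*S*^{1/n}])^{n} with *S* = *f*(*X*)*o*(*X*), and its logarithm converges to E[log *S*] as *n* grows. A quick check with toy numbers (not values from the text):

```python
import math

# Toy two-horse race: S = f(X) * o(X) takes value s_i with prob p_i
p = [0.7, 0.3]
s = [0.8 * 1.5, 0.2 * 2.0]       # hypothetical f and odds: s = [1.2, 0.4]

g_log = sum(pi * math.log(si) for pi, si in zip(p, s))   # E[log S]

def nth_root_growth(n):
    # log E[W_n^{1/n}] = n * log E[S^{1/n}], by independence of races
    return n * math.log(sum(pi * si ** (1 / n) for pi, si in zip(p, s)))

gaps = [abs(nth_root_growth(n) - g_log) for n in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(gaps, gaps[1:]))   # gap shrinks with n
assert gaps[-1] < 1e-3                              # ~Var(log S)/(2n)
```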

A more convincing approach to maximizing wealth with risk management over finite horizons was proposed in [Rujeerapaiboon et al., 2015] for portfolio construction. The authors consider the optimization of a minimum bound for finite-horizon growth, with a degree of freedom corresponding roughly to a risk-aversion or a choice of certainty parameter.

This formulation allows deriving the portfolio giving the highest minimum bound on wealth for any level of certainty *ε*. While choosing a particular horizon *n* and a risk-aversion parameter is perfectly sensible in an investment setting, the translation to the biological framework is problematic: what would be evolution’s risk aversion in this setting? Or the appropriate time horizon for optimization? Any choice of these two parameters would inescapably be arbitrary. In an alternative approach, [Rujeerapaiboon et al., 2018] reformulate the Kelly gambling setting in terms of the Conservative Expected Value (CEV), a risk-averse expectation for highly skewed distributions. This amounts essentially to a systematic way of constructing fractional Kelly strategies that is strongly coupled with the infimum of the finite-horizon growth rate. Here again, there is an implicit arbitrariness in the choice of horizon length when applied in an evolutionary framework, which we seek to avoid.

Other authors have focused on incorporating risk into the standard Kelly gambling setting with an infinite time horizon. For instance, [Busseti et al., 2016] develop a systematic way to trade off growth rate and drawdown risk by formulating a risk-constrained Kelly gambling problem within the standard setting of growth rate maximization under asymptotic horizons. The additional risk constraint limits the probability of a drawdown to a specified level. Nevertheless, for our purposes, percentage drawdown is arguably not a natural risk metric, as compared with explicit extinction thresholds, especially in scenarios of competing finite-size populations ([Ashby et al., 2017]). Still other approaches may seek to target risk minimization as a primary criterion. In an approach akin to our Dutch book analysis, [Wolf et al., 2005] minimize the growth rate variance and consequently the probability of extinction due to ‘unlucky’ environmental trajectories. However, this comes at the inevitable expense of achieving high stochastic growth rates, a vital aspect of evolutionary fitness.

### 3.2 Game-theoretic competitive optimality of Bell and Cover

The results presented here can also be seen as both a special case and an extension of the classic results of [Bell and Cover, 1980, Bell and Cover, 1988]. There are several important distinctions: [a] their setting is formulated for continuous random variables whereas our environments are discrete events, [b] their payoff implies a zero-sum game whereas our game is non zero-sum (more accurately, non-constant-sum) due to the effect of extinctions, and [c] their payoff function is a straightforward probability while our payoff is effectively a conditional probability (includes considerations of extinction risk). Moreover, implicit in Bell and Cover’s setting is an infinitely sized payoff matrix, whereas our payoff matrix is finite since it reflects a finite number of strategies possible in a finite population. These distinctions have enabled us to show that, at least given the particular payoff function and discrete framework, the emerging symmetric Nash equilibrium is in fact a strict and unique one.

Some authors have generalized or utilized other aspects of the classic competitive optimality results. Most recently, [Garivaltis, 2018] has shown that discrete-time results of [Bell and Cover, 1988] hold equally well for continuous-time rebalanced portfolios in a competitive setting between two investors, each aiming to maximize the expected ratio of one’s own wealth to the other. In an original use of evolutionary ideas in finance, [Lo et al., 2017] and [Orr, 2017] consider a payoff function capturing relative wealth of two competing investors each with some set initial wealth, focusing on finite-period analysis. They analyze optimal strategies of a primary player against a given ‘vanilla’ strategy, a framework consistent with our initial relative payoff non-game-theoretic setting. They find that the particular vanilla strategy chosen plays an important role in the optimal allocation, in conjunction with initial wealth of both players.

Finally, our game-theoretic analysis may hint at a solution to a “coincidence” pointed out in [Bell and Cover, 1980]. They were left perplexed as to why competitive optimality for a finite horizon turned out, by “coincidence”, to have the same solution (namely, Kelly) as the growth-optimal portfolio: “Finally, it is tantalizing that *b** arises as the solution to such dissimilar problems […] The underlying [reason] for this coincidence will be investigated”. Their follow-up 1988 paper suggests a “possible reason for the robustness of log optimal portfolios” or why “log optimal portfolios behave well in the competitive investment game”: namely that the wealth generated from any portfolio is always within “fair reach” of the wealth from the log-optimal portfolio. Indeed, the Kuhn-Tucker conditions and the consequent bound on the wealth ratio ([Cover and Thomas, 2006, Theorem 16.2.2]) already imply that game-theoretic optimality is the driving force behind the asymptotic dominance. Fair randomization of initial wealth then leads to the game-theoretic solution for any increasing function of the wealth ratio. Our investigation of the payoff matrix suggests another perspective on this “coincidence”. Asymptotically with horizon *n*, the payoff matrix becomes maximally ‘contrasted’, with off-diagonal cells converging to probabilities of 0 or 1 (except those on ‘fault lines’), such that the Nash equilibrium emerges naturally. In effect, the ‘saddle-point’ equilibrium, which has been established as invariant with *n*, asymptotically attains maximum curvature (Appendix K).

## 4. Conclusion

In this work we have argued that under fluctuating environments and trait randomization, geometric mean fitness should also encompass considerations of stochastic growth and extinction risk under finite evolutionary horizons. We show that for both the maximal growth rate payoff and the minimal time payoff there is a unique pure-strategy symmetric equilibrium, which is invariant with evolutionary time horizon and robust to low extinction risk. Coinciding with the classic bet-hedging modeling approach, this is the Kelly log-optimal strategy. With higher thresholds of extinction, the equilibrium may shift away from Kelly and possibly branch out to multiple equilibria. Future work will be required to generalize the model to competitive-optimality payoffs beyond pairwise lineages, to Markovian environmental transitions, to random fitness matrices, and to more precisely capture the effect of high extinction thresholds on the optimal evolutionary solutions.

## Acknowledgements

We’d like to thank Alex Garivaltis for illuminating discussions on competitive optimality. We also appreciate the continued support of Jürgen Jost and the Max Planck Institute (MIS). OT would like to further acknowledge the generous support of the Complexity Institute at NTU Singapore and Peter MA Sloot. TDT would also like to thank VIASM for financial support and hospitality during his two-month visit in 2019.

## Appendix A The Kelly solution to the full fitness matrix model

In this section, we derive the Kelly (log-optimal) solution for the full fitness matrix model.

**The case** *k* = 2: We have *W*_{n}(*f*) = (*o*_{11}*f* + *o*_{12}(1 − *f*))^{H} (*o*_{21}*f* + *o*_{22}(1 − *f*))^{n−H}, where *H* ∼ *Binomial*(*n, p*). The Kelly solution is then defined by *f*^{Kelly} = arg max_{*f* ∈ [0,1]} *G*(*f*), where *G*(*f*) := lim_{n→∞} (1/*n*) log *W*_{n}(*f*).

By denoting *ō*_{1}(*f*) := *o*_{11}*f* + *o*_{12}(1 − *f*), *ō*_{2}(*f*) := *o*_{21}*f* + *o*_{22}(1 − *f*), we have *G*(*f*) = *p* log *ō*_{1}(*f*) + (1 − *p*) log *ō*_{2}(*f*).

Therefore, by direct calculation (setting *dG/df* = 0), we obtain the Kelly solution, which depends on *p*:
*f*^{Kelly} = *p* *o*_{22}/(*o*_{22} − *o*_{21}) + (1 − *p*) *o*_{12}/(*o*_{12} − *o*_{11}),
assuming *o*_{11} ≠ *o*_{12}, *o*_{21} ≠ *o*_{22} and that this value lies in [0, 1]; the corresponding optimal value is *G*(*f*^{Kelly}).

**The case of general** *k*: By direct calculation, we obtain
where . This implies that for each **p** ∈ Δ_{k−1} := {(*x*_{1}, …, *x*_{k}) ∈ [0, 1]^{k} : *x*_{1} + … + *x*_{k} = 1}, *G*(**f**) is a continuous, strictly concave function on the compact convex domain Δ_{k−1}. Therefore there always exists a unique Kelly solution **f**^{Kelly} ∈ Δ_{k−1}, which depends on **p**.

If the fitness matrix is diagonal, i.e., (*o*_{ij}) = diag{*o*_{1}, …, *o*_{k}}, then (*f*^{Kelly})_{i} = *p*_{i}; in general, *f*^{Kelly} solves the system
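As a numeric check of the *k* = 2 derivation, the sketch below maximizes *G*(*f*) = *p* log *ō*_{1}(*f*) + (1 − *p*) log *ō*_{2}(*f*) by ternary search, which is valid because *G* is strictly concave on (0, 1). The matrices and probabilities used here are illustrative assumptions; for a diagonal matrix the maximizer reduces to *f* = *p*.

```python
import math

def growth(f, p, o):
    """G(f) = p log ō₁(f) + (1 - p) log ō₂(f) for a 2x2 fitness matrix o."""
    o1 = o[0][0] * f + o[0][1] * (1 - f)
    o2 = o[1][0] * f + o[1][1] * (1 - f)
    return p * math.log(o1) + (1 - p) * math.log(o2)

def kelly(p, o, tol=1e-12):
    """Maximize the strictly concave G on (0, 1) by ternary search."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if growth(m1, p, o) < growth(m2, p, o):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Diagonal fitness matrix: the Kelly fraction reduces to f = p
print(kelly(0.7, [[2.0, 0.0], [0.0, 2.0]]))
```

For a non-diagonal example such as o = [[3, 0.5], [0.5, 2]] with *p* = 0.6, the numeric maximizer agrees with the stationarity condition *dG/df* = 0.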

## Appendix B The solution to nonstationary environments

We model the environment probabilities on a parameterized Beta distribution, such that *p* ∼ *B*(*α, β*), and prove that the Kelly solution (a static *f* that maximizes the asymptotic growth rate) in the asymptotic framework corresponds to the solution of the i.i.d. environment case with a probability equaling the expectation of the Beta distribution.

For the sake of simplicity, we consider only *k* = 2. We have *W* (*f*) = *ō*_{1} (*f*)^{H} *ō*_{2} (*f*)^{n−H}, where *H* ∼ *GB*(*n*, {*p*_{1}, …, *p*_{n}} ∼ *Beta*(*α, β*)), i.e., *H* = *ε*_{1} + … + *ε*_{n} with *ε*_{r} ∼ *Bernoulli*(*p*_{r}) and *p*_{r} ∼ *Beta*(*α, β*). Using the law of large numbers, we have
where *p̄* := *α*/(*α* + *β*) is the expectation of the Beta distribution. Thus, the Kelly solution in this case coincides with that of the i.i.d. case with *p* = *p̄*.
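A quick Monte Carlo sketch of this law-of-large-numbers argument (the growth factors and Beta parameters below are assumed for illustration only): the empirical growth rate under per-round Beta-distributed probabilities matches the i.i.d. formula evaluated at the Beta mean *α*/(*α* + *β*).

```python
import math
import random

def empirical_growth(log_g1, log_g2, alpha, beta, n=200_000, seed=1):
    """(1/n) sum of log growth factors when each round's environment-1
    probability is a fresh draw p_r ~ Beta(alpha, beta)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        p_r = rng.betavariate(alpha, beta)
        total += log_g1 if rng.random() < p_r else log_g2
    return total / n

# Hypothetical fixed growth factors ō₁(f) = 1.8, ō₂(f) = 0.7 for some f
g = empirical_growth(math.log(1.8), math.log(0.7), alpha=2, beta=3)
p_bar = 2 / (2 + 3)  # expectation of Beta(2, 3)
print(g, p_bar * math.log(1.8) + (1 - p_bar) * math.log(0.7))
```

The two printed values agree up to Monte Carlo noise, as the appendix argues.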

## Appendix C The Dutch book solution and the corresponding loss of growth

In this section we derive the Dutch book solution for our model. By definition, the Dutch book solution *f*^{D} satisfies *ō*_{1}(*f*) = *ō*_{2}(*f*) = … = *ō*_{k}(*f*) with positive growth, i.e., *ō*_{1}(*f*) > 1.

**The case** *k* = 2: The Dutch book solution satisfies *o*_{11}*f* + *o*_{12}(1 − *f*) = *o*_{21}*f* + *o*_{22}(1 − *f*).

Therefore, if Δ := *o*_{11}*o*_{22} − *o*_{12}*o*_{21} > *o*_{11} + *o*_{22} − *o*_{12} − *o*_{21} > 0, then we always have a unique Dutch book solution
*f*^{D} = (*o*_{22} − *o*_{12})/(*o*_{11} + *o*_{22} − *o*_{12} − *o*_{21}),
and the corresponding deterministic wealth factor is *ō*_{1}(*f*^{D}) = Δ/(*o*_{11} + *o*_{22} − *o*_{12} − *o*_{21}) > 1.
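The *k* = 2 Dutch book condition *ō*_{1}(*f*) = *ō*_{2}(*f*) amounts to solving one linear equation; a minimal sketch (the matrix values below are hypothetical) is:

```python
def dutch_book(o):
    """Solve ō₁(f) = ō₂(f) for a 2x2 fitness matrix o; returns (f, wealth)
    if a Dutch book with positive growth exists, else None."""
    denom = o[0][0] + o[1][1] - o[0][1] - o[1][0]
    if denom == 0:
        return None
    f = (o[1][1] - o[0][1]) / denom
    wealth = o[0][0] * f + o[0][1] * (1 - f)   # deterministic per-step factor
    if 0 <= f <= 1 and wealth > 1:
        return f, wealth
    return None

# Diagonal matrix diag{3, 2}: f^D = o2/(o1 + o2), wealth = o1*o2/(o1 + o2)
print(dutch_book([[3.0, 0.0], [0.0, 2.0]]))
```

Note that for diag{2, 2} the deterministic factor is exactly 1, so no Dutch book with positive growth exists, consistent with the condition Δ > *o*_{11} + *o*_{22} − *o*_{12} − *o*_{21}.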

**The general case** *k*: We give here some criteria for the existence of a unique Dutch book solution in the general case.

*Given a fitness matrix O* = (*o*_{i,j}). *Denote by α*_{i,j} = *o*_{i,j} − *o*_{k,j} *for all j* = 1, …, *k and i* = 1, …, *k* − 1. *Denote by* Λ = (Λ_{i,j}) *such that*

*If this fitness matrix O satisfies*

(i) *o*_{ii} > *o*_{ji} ≥ 0 *for all i, j* = 1, …, *k*;

(ii) Λ_{i,k} > 0 *for all i* = 1, …, *k*;

(iii) … *for all i* = 1, …, *k* − 1,

*then there exists a Dutch book solution defined by* , *j* = 1, …, *k and the corresponding deterministic wealth is*

*Proof.* We have from Condition (*iii*)

Moreover, from the definitions of *α* and Λ, we have for all *i* ≠ *j*, *i, j* = 1, …, *k*,

*In the case of a diagonal matrix, i.e.,* (*o*_{i,j}) = diag{*o*_{1}, …, *o*_{k}}, *by direct calculation, we obtain* .

*Conditions (i) and (ii) hold true iff o*_{i} > 0 *and condition (iii) holds true iff* .

*For a finite n and assuming* , *there exists a Dutch book solution* f ^{D}.

*Proof.* The conclusion directly follows from the above Corollary 1 (for a diagonal fitness matrix).

## Appendix D Finite last intersection

In this section, we show that for a given pair of strategies (*f, g*) with *G*(*f*) > *G*(*g*), there is a *T* (*f, g*) < ∞ such that *W*_{n}(*f*, **x**) > *W*_{n}(*g*, **x**) for all *n* ≥ *T* (*f, g*) and for all **x** ∈ {0, 1}^{∞}. This means that the last intersection between the two random trajectories {*W*_{n}(*f*, **x**)}_{n} and {*W*_{n}(*g*, **x**)}_{n}
is bounded above by *T* (*f, g*) (a finite number depending only on *f* and *g*).

*Proof.* We first define the excess growth rate

We note that for all **x**

It therefore suffices to prove that there is a *T* (*f, g*) < ∞ such that

Otherwise, for each *k* there exist *n*_{k} ≥ *k* and **x**_{k} ∈ {0, 1}^{∞} such that . Now, there exists a subsequence of {**x**_{k}} which converges to some **x** ∈ {0, 1}^{∞}. Therefore, as *k* → ∞ we have *n*_{k} → ∞ and , in contradiction to (7).

## Appendix E Asymptotic log-normality of the growth rate

In this section, we show that in our discrete model the growth rate (1/*n*) log *W*_{n}(*f*) is asymptotically normal with variance vanishing as *σ*²/*n*, i.e., *W*_{n}(*f*) is asymptotically log-normal.

*Proof.* We rewrite , where *y*_{i} = *x*_{i} log *ō*_{1}(*f*) + (1 − *x*_{i}) log *ō*_{2}(*f*) are independent discrete random variables taking the values log *ō*_{1}(*f*) and log *ō*_{2}(*f*) with probabilities *p* and 1 − *p*, respectively. Thus we have a sequence of i.i.d. random variables {*y*_{i}}_{i} with expectation *μ* = *E*(*y*_{i}) = *G*(*f*) and variance *σ*^{2} = *var*(*y*_{i}) = *p*(1 − *p*)(log *ō*_{1}(*f*) − log *ō*_{2}(*f*))^{2}. By the CLT, we have for large *n* , which is equivalent to
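A simulation sketch of this argument (all parameter values below are assumed for illustration): sampled per-trajectory growth rates (1/*n*) log *W*_{n} concentrate around *μ* = *G*(*f*) with standard deviation approximately *σ*/√*n*.

```python
import math
import random

def sample_growth_rates(p, o1, o2, n, trials, seed=0):
    """Per-trajectory growth rates (1/n) log W_n under i.i.d. Bernoulli(p)
    environments with fixed growth factors ō₁(f) = o1, ō₂(f) = o2."""
    rng = random.Random(seed)
    g1, g2 = math.log(o1), math.log(o2)
    return [sum(g1 if rng.random() < p else g2 for _ in range(n)) / n
            for _ in range(trials)]

p, o1, o2, n = 0.7, 1.4, 0.8, 400
rates = sample_growth_rates(p, o1, o2, n, trials=2000)
mu = p * math.log(o1) + (1 - p) * math.log(o2)                      # = G(f)
sigma = math.sqrt(p * (1 - p)) * abs(math.log(o1) - math.log(o2))   # per-step sd
mean = sum(rates) / len(rates)
sd = math.sqrt(sum((r - mean) ** 2 for r in rates) / len(rates))
print(round(mean, 4), round(mu, 4), round(sd, 5), round(sigma / math.sqrt(n), 5))
```

The empirical mean matches *G*(*f*) and the empirical spread matches *σ*/√*n*, illustrating the vanishing-variance normality.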

## Appendix F Fully correlated log growth rates for the case k=2

In this section we show that for all *f, g* ≠ *f*^{D}

*Proof.* Denote by
where **x** = (*x*_{1}, …, *x*_{n}) is a realization and |*x*| = *x*_{1} + … + *x*_{n}. Because *f, g* ≠ *f*^{D} we have *ō*_{1}(*f*) ≠ *ō*_{2}(*f*) and *ō*_{1}(*g*) ≠ *ō*_{2}(*g*), therefore we can define

We first prove that for any given *m* realizations **x**^{(1)}, …, **x**^{(m)}, we have

Indeed, we note that
and similarly for *g*. This implies (8). Therefore

*Whether the correlation is* +1 *or* −1 *depends on whether λ* > 0 *or λ* < 0. *For f* = *f*^{Kelly}, *the growth factor under environment “1” exceeds that under environment “0”, implying* . *Similarly for g, implying* ; *therefore λ* > 0. *At f*^{D}, , *so f*^{D} *acts as a threshold. In most cases the correlation will be* +1, *since both f and g induce a positive growth rate.*

## Appendix G Kelly is the maximal element in the fitness payoff relation

Here we assume initial randomization of lineage sizes, i.e., *W*_{n}(*f*) ≫ *W*_{n}(*g*) iff
where *W*_{0} and *V*_{0} are random, and we show that the Kelly strategy is the maximal element in this relation.

*Proof.* As a direct consequence of Proposition 1 and Eq. (11) we have
with equality if and only if *f* = *f*^{Kelly}.

## Appendix H Non-constant-sum game, but conceptually zero-sum

In this section we show that

- *For d* = 0, *M*_{n}(*f, g*) + *M*_{n}(*g, f*) = 1 *for all f, g.*
- *For d* > 0, *M*_{n}(*f, g*) + *M*_{n}(*g, f*) < 1 *for all f, g.*
- *Moreover, the game is conceptually zero-sum, but not formally.*

*Proof.*

where *C* = {*W*_{0}*W*_{n}(*f*) > *V*_{0}*W*_{n}(*g*)}, *A* = {*W*_{0}*W*_{i}(*f*) > *d* ∀*i* = 1, …, *n*}, *B* = {*V*_{0}*W*_{i}(*g*) > *d* ∀*i* = 1, …, *n*}.

For *f* = *g* we also have
where *A*_{1} = {*W*_{0}*W*_{i}(*f*) > *d* ∀*i* = 1, …, *n*}, *A*_{2} = {*V*_{0}*W*_{i}(*f*) > *d* ∀*i* = 1, …, *n*}.

Finally, numeric simulations demonstrate that if *M* (*W, V*) > *M* (*U, V*) then *M* (*V, W*) < *M* (*V, U*) for all *W, V, U*, i.e. changing to a strategy with a gain for one player always incurs a loss for the other player.

## Appendix I The symmetric Nash equilibrium solution to payoff *M*_{n}(*f, g*)

*We always have*
*and equality holds if and only if p*_{−} < *p* < *p*_{+}.

*Proof.* For given *f, g*, we denote by . We have

On the other hand, from the formula we have for any pair (*f, f*^{Kelly}), *pα*_{1} + (1 − *p*)*α*_{2} = 1 if *p* ∈ [*p*_{−}, *p*_{+}] and *pα*_{1} + (1 − *p*)*α*_{2} < 1 if *p* ∉ [*p*_{−}, *p*_{+}].

*We consider a game with payoff without extinction*
*where W*_{0}, *V*_{0} *have the same distribution. Then, in this game*, (*f* ^{Kelly}, *f* ^{Kelly}) *is a strict Nash equilibrium.*

*Proof.* First, we note that

where and *A*_{2} = {0, …, *n*} − *A*_{1}. Therefore, for *f* = *g* we have *α*_{1} = *α*_{2} = 1, which implies *A*_{1} = ø, *A*_{2} = {0, …, *n*} and

For any *f* ≠ *f* ^{Kelly}, by using the Cauchy inequality for the second term, we have

From Proposition 2 we have

Therefore (*f* ^{Kelly}, *f* ^{Kelly}) is a strict Nash equilibrium.

*The above Nash equilibrium is the unique one in the game.*

*Proof.* Assume that (*f*_{0}, *g*_{0}) ≠ (*f*^{Kelly}, *f*^{Kelly}) is another Nash equilibrium. Without loss of generality we assume that *g*_{0} ≠ *f*^{Kelly}. By the definition of a Nash equilibrium, we have *M*_{n}(*f*_{0}, *g*_{0}) ≥ *M*_{n}(*f, g*_{0}) for all *f* and *M*_{n}(*g*_{0}, *f*_{0}) ≥ *M*_{n}(*g, f*_{0}) for all *g*. By choosing *f* = *g* = *f*^{Kelly} and using Proposition G we have and . This implies that *M*_{n}(*f*_{0}, *g*_{0}) + *M*_{n}(*g*_{0}, *f*_{0}) > 1, which contradicts Proposition 1. Therefore (*f*^{Kelly}, *f*^{Kelly}) is the unique Nash equilibrium (see Fig. I.1, where the equilibrium lies at the saddle-point of the payoff landscape).
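The equilibrium property can be probed by simulation. The sketch below estimates *M*_{n}(*f, g*) with i.i.d. Uniform(0, 2) initial sizes *W*_{0}, *V*_{0} (one convenient choice of fair randomization) for the assumed fitness matrix diag{2, 2} and *p* = 0.7, for which *f*^{Kelly} = 0.7; these values are illustrative, not from the text. No tested deviation from Kelly earns more than the equilibrium payoff of 1/2.

```python
import math
import random

def M_payoff(f, g, p, n, trials=20_000, seed=42):
    """Monte Carlo estimate of M_n(f, g) = P(W0*W_n(f) > V0*W_n(g)) for the
    diagonal fitness matrix diag{2, 2}: per-step growth factors are 2f and
    2(1 - f).  W0, V0 ~ i.i.d. Uniform(0, 2); the environment sequence
    (h successes out of n) is shared by both strategies."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        h = sum(rng.random() < p for _ in range(n))
        log_wf = h * math.log(2 * f) + (n - h) * math.log(2 * (1 - f))
        log_wg = h * math.log(2 * g) + (n - h) * math.log(2 * (1 - g))
        w0, v0 = rng.uniform(0, 2), rng.uniform(0, 2)
        wins += w0 * math.exp(log_wf) > v0 * math.exp(log_wg)
    return wins / trials

kelly = 0.7  # Kelly fraction equals p for this diagonal matrix
print([round(M_payoff(f, kelly, 0.7, 50), 3) for f in (0.5, 0.7, 0.9)])
```

The middle entry (playing Kelly against Kelly) sits near 1/2, while deviations earn strictly less, consistent with a strict symmetric equilibrium.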

## Appendix J The symmetric Nash equilibrium solution to payoff *M*_{c}(*f, g*)

*We consider a game with payoff defined as* (6) *without extinction*

*Then, in this game*, (*f* ^{Kelly}, *f* ^{Kelly}) *is a strict Nash equilibrium.*

*Proof.* First we note that

Then, from Proposition 3 we have and

Therefore (*f* ^{Kelly}, *f* ^{Kelly}) is a strict Nash equilibrium.

(*f* ^{Kelly}, *f* ^{Kelly}) *is the unique Nash equilibrium.*

*Proof.* We first note that for all *f, g*

The left-hand side is treated as in the proof of Proposition 4.

It is worthwhile here to highlight a link between this payoff and *M*_{n}(*f, g*). Formally, *M*_{c}(*f, g*) can be rewritten as a convex linear combination of the payoffs *M*_{n}(*f, g*):

This has a straightforward interpretation: for each event (*T* (*f, c*) = *n*), [a] the event (*T* (*f, c*) < *T* (*g, c*)) is equivalent to the event (*T* (*g, c*) > *n*) or (*V*_{0}*W*_{n}(*g*) < *c* ≤ *W*_{0}*W*_{n}(*f*)), and [b] the event (*T* (*f, c*) = *T* (*g, c*), *W*_{0}*W*_{T (f,c)} > *V*_{0}*W*_{T (g,c)}(*g*)) is equivalent to the event (*c* ≤ *V*_{0}*W*_{n}(*g*) < *W*_{0}*W*_{n}(*f*)). Consequently the combination of the two events (*T* (*f, c*) < *T* (*g, c*)) and (*T* (*f, c*) = *T* (*g, c*), *W*_{0}*W*_{T (f,c)} > *V*_{0}*W*_{T (g,c)}(*g*)) is equivalent to the event (*W*_{0}*W*_{n}(*f*) > *V*_{0}*W*_{n}(*g*)).

## Appendix K The probability payoff matrix converges with horizon *n* to the expected log matrix

*For any pair* (*f, g*) *with G*(*f*) ≠ *G*(*g*), *we have*

*Proof.* If *G*(*f*) − *G*(*g*) = *ε* > 0, then by an argument similar to that of Appendix D, there exists *n*_{0} < ∞ such that for all *n* ≥ *n*_{0} and all **x**

Therefore, for all *n* ≥ *n*_{0} and all **x**

This implies that

Therefore, *M*_{∞}(*f, g*) = 1. Similarly we obtain *M*_{∞}(*f, g*) = 0 if *G*(*f*) < *G*(*g*).

*For the case G*(*f*) = *G*(*g*) *there are only two cases: g* = *f or* . *If g* = *f, we have* . *If* , *we do not know the value of* .

See Fig. K.1 for a graphical illustration of the convergence.

## Appendix L Nash equilibrium in population size *N*

In this section, we show that the Nash solution in a population of size *N*, denoted by , is the strategy closest to Kelly under the finite-resolution regime, and that it converges to the Kelly strategy as *N* → ∞. Denote by the closest element to *f*^{Kelly} in , i.e., . We show that is the Nash solution for the game with strategies defined only on *I*_{N}. By the definition of , we see that as *N* → ∞. It thus suffices to show that for all *f* ∈ *I*_{N}. Indeed, we already have from (10) that . Moreover, we have for all . Therefore there exists *ε* > 0 such that

Thus, for every we have where . We assume that log *W*_{0} and log *V*_{0} have the same distribution with supp log *W*_{0} ⊃ {*α*(0), …, *α*(*n*)} and |supp log *W*_{0}| = |supp log *V*_{0}| = *r* > 2*nε*. Denote by *A*_{1} = {*s* : *α*(*s*) < 0}, *A*_{2} = {*s* : *α*(*s*) ≥ 0} and . We have

Note that for *s* ∈ *A*_{1} and for *s* ∈ *A*_{2}. Moreover, for *x* ∈ [−1, 0]. Therefore, for every we have

## Appendix M Nash equilibrium in nonstationary environments

*We also consider a game in which the players' payoffs are*
*where W*_{0}, *V*_{0} *have the same distribution. Then, in this game*, (*f* ^{Kelly}, *f* ^{Kelly}) *is the unique strict Nash equilibrium.*

*Proof.* We note that in the non-stationary case we have

where *H* ∼ *GB*(*n*, {*p*_{1}, …, *p*_{n}} ∼ *Beta*(*α, β*)) is a generalized binomial distribution. Therefore the proof is similar to the proof of Proposition 3 and is omitted.

## Appendix N Limit of the extinction rate

*Denote by*
*the probability that extinction does not occur by time n, and P*_{n,d}(*f*) = 1 − *Q*_{n,d}(*f*) *the probability of extinction by time n (see also Fig. 6). We prove that*

*Proof.* For the sake of simplicity, we denote by

Then we rewrite the formula

- If *ō*_{1}(*f*), *ō*_{2}(*f*) > 1: we have *β*_{n,d}(*x*_{1}, …, *x*_{n−1}, 1) = *β*_{n,d}(*x*_{1}, …, *x*_{n−1}, 0) = *β*_{n−1,d}(*x*_{1}, …, *x*_{n−1}), therefore *Q*_{n,d}(*f*) = *Q*_{n−1,d}(*f*) = … = *Q*_{0,d} = ℙ(*W*_{0} > *d*) = 1 for all *n*. Therefore lim_{n→∞} *P*_{n,d}(*f*) = 0.
- If *ō*_{1}(*f*), *ō*_{2}(*f*) < 1: we have , which approaches infinity with *n*. Therefore for *n* large enough, *Q*_{n,d} = 0. This implies lim_{n→∞} *P*_{n,d}(*f*) = 1.
- If *ō*_{1}(*f*) > 1 > *ō*_{2}(*f*): we have *β*_{n,d}(*x*_{1}, …, *x*_{n−1}, 1) = *β*_{n−1,d}(*x*_{1}, …, *x*_{n−1}). Note that *Q*_{n,d} is decreasing and bounded below by 0, therefore the limit of *Q*_{n,d}(*f*) exists, which implies that the limit of *P*_{n,d}(*f*) exists.
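These limiting regimes are easy to reproduce by simulation. The sketch below (the threshold *d*, growth factors, and *p* are illustrative assumptions) estimates *P*_{n,d}(*f*) in each of the three cases:

```python
import random

def extinction_prob(g1, g2, p, n, d, trials=20_000, seed=0):
    """Monte Carlo P_{n,d}(f): probability that lineage wealth (W0 = 1)
    falls to <= d at some step i <= n, where the per-step growth factor
    is ō₁(f) = g1 with prob p and ō₂(f) = g2 with prob 1 - p."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        w = 1.0
        for _ in range(n):
            w *= g1 if rng.random() < p else g2
            if w <= d:
                extinct += 1
                break
    return extinct / trials

# The three regimes: both factors > 1, both < 1, and mixed
print(extinction_prob(1.5, 1.2, 0.7, 100, 0.5))   # wealth never decreases
print(extinction_prob(0.9, 0.8, 0.7, 200, 0.5))   # wealth strictly decreases
print(extinction_prob(1.5, 0.5, 0.7, 200, 0.5))   # intermediate limit
```

The first case gives 0, the second gives 1, and the mixed case settles at an intermediate value, which also increases with *d*.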

*c*_{d}(*f*) *is increasing with d (see Fig. 6).*

*Proof.* If *d*_{1} > *d*_{2} then therefore which implies that .

## Footnotes

† tran{at}math-uni.leipzig.de

Major revision following reviewers' comments (in publication process).