The statistics of k-mers from a sequence undergoing a simple mutation process without spurious matches

K-mer-based methods are widely used in bioinformatics, but there are many gaps in our understanding of their statistical properties. Here, we consider the simple model where a sequence S (e.g. a genome or a read) undergoes a simple mutation process whereby each nucleotide is mutated independently with some probability r, under the assumption that there are no spurious k-mer matches. How does this process affect the k-mers of S? We derive the expectation and variance of the number of mutated k-mers and of the number of islands (maximal intervals of mutated k-mers) and oceans (maximal intervals of non-mutated k-mers). We then derive hypothesis tests and confidence intervals for r given an observed number of mutated k-mers, or, alternatively, given the Jaccard similarity (with or without minhash). We demonstrate the usefulness of our results using a few select applications: obtaining a confidence interval to supplement the Mash distance point estimate, filtering out reads during alignment by Minimap2, and rating long read alignments to a de Bruijn graph by Jabba.


Related work
Here we give more background on how our paper relates to other previous work.
Lander-Waterman statistics: There is a natural analogy between the stretches of mutated k-mers and the intervals covered by random clones in the work of Lander and Waterman [15]. Each error can be viewed as a random clone with fixed length k, and thus the islands in our study correspond to "covered islands" in theirs. However, their focus was to determine how much redundancy was necessary to cover all (or most) of a genomic sequence, which would correspond to how many nucleotide mutations are needed so that most of the k-mers in the sequence are mutated. In particular, they expect average coverage of the sequence by clones to be greater than 1, while in our study we expect the corresponding value, ≈ k(1 − (1 − r)^k), to be much less than 1. Thus, the approximations applied in [15] do not hold in our case.
Alignment-free sequence comparison: In alignment-free analysis, two sequences are compared by comparing their respective vectors of k-mer counts [30]. Two such vectors can be compared in numerous ways, e.g. through the D_2 similarity measure, which can be viewed as a generalization of the number of mutated k-mers we study in this paper. However, in alignment-free analysis, both the underlying model and the questions studied are somewhat different. In particular, alignment-free analysis usually works with much smaller values of k, e.g. k < 10 [38]. This means that most k-mers are present in a sequence, and k-mers will match between and within sequences even if they are in different locations and not evolutionarily related. Our model and questions assume that these spurious matches are background noise that can be ignored (which is justifiable for larger k), while they form a crucial component of alignment-free analysis. As a result, much of the work in measuring expectation and variance in metrics such as D_2 is done with respect to the distribution of the original sequences, rather than after a mutation process [23,5]. Even when mutation processes have been studied, they have typically been very different from the ones we consider here (e.g. the "common motif model" [23]). Later works [20,24] did consider the simple mutation model that we study here, though still with a small k. Sequence similarity has also been estimated using the average common substring length between two sequences [12]. This is similar to the distribution of oceans that we study in our paper, but the difference is that oceans are both left- and right-maximal, while the common substrings considered by [12] and others are only right-maximal.

Preliminaries
Let L > 0 be a positive integer. We use [L] to denote the interval of integers {0, . . . , L − 1}, which intuitively captures positions along a string. Let k > 0 be a positive integer. The k-span at position 0 ≤ i < L is denoted by K_i and is the range of integers [i, i + k − 1] (inclusive of the endpoints). Intuitively, a k-span captures the interval of a k-mer. We think of [L + k − 1] as representing an interval of length L + k − 1 that contains L k-spans. To simplify the statements of the theorems, we will in some places require that L ≥ k (or similar), i.e. that the interval is of length at least 2k − 1. We believe this covers most practical cases of interest, but, if necessary, the results can be rederived without this assumption.
We define the simple mutation model as a random process that takes as input two integers k > 0 and L > 0 and a real-valued nucleotide error rate 0 < r_1 < 1. For every position in [L + k − 1], the process mutates it with probability r_1. A mutation at position i is said to mutate the k-spans K_{max(0,i−k+1)}, . . . , K_i. We define N_mut as a random variable which is the number of mutated k-spans. As shorthand notation, we use q = 1 − (1 − r_1)^k to denote the probability that a k-span is mutated. Figure 1 shows an example. The simple mutation model formalizes the notion of a string S undergoing mutations where there are no spurious matches, i.e. there are no duplicate k-mers in S and a mutation always creates a unique k-mer. This is also closely related to assuming that S is random and k is large enough so that such spurious matches happen with low probability. The simple mutation model captures these scenarios by representing S using the interval [L + k − 1] and a k-mer as a k-span.

Table 1: The expectations, variances, and hypothesis tests derived in this paper. We use q as shorthand for 1 − (1 − r_1)^k. We use f(r_1, k) as a placeholder for some function of r_1 and k that is independent of L; see the theorems for the full expressions.

Figure 1: An example of the simple mutation process, with L = 36 and k = 5. There are 5 nucleotides that are mutated (marked with an x). For example, the mutation at position 10 mutates the k-spans K_6, . . . , K_10 (marked in red). Note that an isolated nucleotide mutation (e.g. at position 10) can affect up to k k-spans (e.g. K_6, . . . , K_10), but nearby nucleotide mutations can affect the same k-span (e.g. mutations of the nucleotides at positions 23 and 27 both affect K_23). There are 2 islands (marked in blue) and 3 oceans, and N_mut = 21. For example, K_19, . . . , K_34 is an island, and K_35 is an ocean.
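As a sanity check, the simple mutation process can be simulated directly. The sketch below is our own illustration (with arbitrary parameter values, not the paper's experiments): it draws the per-position mutations and derives the k-span indicators.

```python
import random

def simulate_mutated_spans(L, k, r1, rng):
    """Return a list of L booleans; entry i tells whether the k-span K_i is mutated."""
    # Each of the L + k - 1 positions mutates independently with probability r1.
    mutated_pos = [rng.random() < r1 for _ in range(L + k - 1)]
    # K_i is mutated iff at least one of positions i, ..., i + k - 1 is mutated.
    return [any(mutated_pos[i:i + k]) for i in range(L)]

rng = random.Random(42)
L, k, r1 = 100_000, 21, 0.01
q = 1 - (1 - r1) ** k        # probability that a single k-span is mutated
n_mut = sum(simulate_mutated_spans(L, k, r1, rng))
```

Since each k-span is mutated with probability q, the empirical fraction n_mut / L should be close to q for large L.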
We can partition the sequence K_0, . . . , K_{L−1} into alternating intervals called islands and oceans. The range i, . . . , j is an island iff all of K_i, . . . , K_j are mutated, and the range is maximal, i.e. K_{i−1} and K_{j+1} are either not mutated or out of bounds. Similarly, the range is an ocean iff none of K_i, . . . , K_j are mutated, and the interval is maximal. We define N_ocean as a random variable which is the number of oceans and N_isl as the number of islands (see Figure 1).
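The island and ocean counts can be read off the indicator sequence by grouping maximal runs of equal values; a minimal sketch (our own helper, not code from the paper):

```python
from itertools import groupby

def count_islands_oceans(spans):
    """spans[i] == True iff K_i is mutated; returns (N_isl, N_ocean)."""
    n_isl = n_ocean = 0
    for mutated, _run in groupby(spans):   # maximal runs of equal values
        if mutated:
            n_isl += 1
        else:
            n_ocean += 1
    return n_isl, n_ocean

# Toy example: the runs are F | TT | F | T | F -> 2 islands, 3 oceans.
print(count_islands_oceans([False, True, True, False, True, False]))  # -> (2, 3)
```

Because islands and oceans alternate, the two counts can never differ by more than one.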
Consider two strings composed of a set of k-mers A and B, respectively, and let s ≤ min(|A|, |B|) be a non-negative integer. The Jaccard similarity between A and B is defined as |A ∩ B|/|A ∪ B|. The minhash sketch C^S of a set C is the set of the s smallest elements in C, under a uniformly random permutation hash function. The minhash Jaccard similarity between A and B is defined as |(A ∪ B)^S ∩ A^S ∩ B^S|/|(A ∪ B)^S|, or, equivalently, |(A ∪ B)^S ∩ A^S ∩ B^S|/s [2]. In order to transplant this to our model, we define the sketching simple mutation model as an extension of the simple mutation model, with an additional non-negative integer parameter s ≤ L. We follow the intuition of [L + k − 1] representing a string S with no spurious matches. For every position i, if K_i is non-mutated (respectively, mutated), we think of K_i as being shared (respectively, distinct) between the strings before and after the mutation process. Formally, let U be a universe which contains an element shared_i for every non-mutated K_i and, for every mutated K_i, contains two elements a-distinct_i and b-distinct_i. Let A be the set of all shared_i and a-distinct_i, and let B be the set of all shared_i and b-distinct_i. The output of the sketching simple mutation model is the minhash Jaccard similarity between A and B, i.e. Ĵ = |(A ∪ B)^S ∩ A^S ∩ B^S|/s. Note that the Jaccard similarity (without sketches) would, in our simple mutation model, be the ratio between the number of shared_i elements and the size of U, which is (L − N_mut)/(L + N_mut).

Given a distribution with a parameter of interest p, an approximate (1 − α)-confidence interval is an interval which contains p with limiting probability 1 − α. Closely related, an approximate hypothesis test with significance level 1 − α is an interval that contains a random variable with limiting probability 1 − α. We will drop the word "approximate" in the rest of the paper, for brevity. We will use the notation X ∈ x ± y to mean X ∈ [x − y, x + y].
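The sketching simple mutation model can be simulated directly by materializing the universe U and a uniformly random permutation; the following is our own illustrative sketch (parameter values arbitrary):

```python
import random

def sketch_jaccard(spans, s, rng):
    """spans[i] == True iff K_i is mutated; returns the minhash Jaccard estimate."""
    A, B = set(), set()
    for i, mutated in enumerate(spans):
        if mutated:                      # two distinct elements, one per string
            A.add(("a-distinct", i))
            B.add(("b-distinct", i))
        else:                            # one element shared by both strings
            A.add(("shared", i))
            B.add(("shared", i))
    order = list(A | B)
    rng.shuffle(order)                   # uniformly random permutation hash
    rank = {x: j for j, x in enumerate(order)}
    sketch = lambda C: set(sorted(C, key=rank.__getitem__)[:s])
    union_s = sketch(A | B)
    return len(union_s & sketch(A) & sketch(B)) / s

rng = random.Random(0)
L, s = 5_000, 500
spans = [rng.random() < 0.2 for _ in range(L)]
j_hat = sketch_jaccard(spans, s, rng)
```

For moderately large s, the estimate j_hat should be close to the exact Jaccard (L − N_mut)/(L + N_mut).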
Given 0 < α < 1, we define z_α = Φ^{−1}(1 − α/2), where Φ^{−1} is the inverse of the cumulative distribution function of the standard Gaussian distribution. Let H(x, y, z) denote the hypergeometric distribution with population size x, y success states in the population, and z trials. We define F_n(a) = Pr[H(L + n, L − n, s) ≥ a]. Both Φ^{−1} and F_n can be easily evaluated in programming languages such as R or Python.
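For example, both quantities can be evaluated in a few lines; this is our own helper code using only the Python standard library (SciPy's `norm.ppf` and `hypergeom.sf`, or R's `qnorm` and `phyper`, are equivalent):

```python
from math import comb
from statistics import NormalDist

def z_alpha(alpha):
    """z_alpha = Phi^{-1}(1 - alpha/2)."""
    return NormalDist().inv_cdf(1 - alpha / 2)

def F_n(a, L, n, s):
    """F_n(a) = Pr[H(L + n, L - n, s) >= a]: hypergeometric with population
    size L + n, L - n success states, and s trials."""
    M, K = L + n, L - n
    total = comb(M, s)
    return sum(comb(K, j) * comb(M - K, s - j)
               for j in range(max(a, 0), s + 1)) / total
```

For instance, z_alpha(0.05) is approximately 1.96, the familiar two-sided 95% critical value.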

Number of mutated k-mers: expectation and variance
In this section, we look at the distribution of N_mut, i.e. the number of mutated k-mers. The approach we take to this kind of analysis, which is standard, is to express N_mut as a sum of indicator random variables whose pairwise dependence can be derived. Let X_i be the 0/1 random variable corresponding to whether or not the k-span K_i is mutated; i.e., X_i = 1 iff at least one of its nucleotides is mutated. Hence, Pr[X_i = 1] = 1 − (1 − r_1)^k = q. We can express N_mut = Σ_{i=0}^{L−1} X_i. By linearity of expectation, we have E[N_mut] = Lq. (1) The key to the computation of the variance is the joint probability of two k-mers being mutated.
Proof. Set δ = j − i. If δ ≥ k, then K_i and K_{i+δ} do not overlap and therefore the variables X_i and X_{i+δ} are independent. Otherwise, consider three events. E_1 is the event that at least one of the positions i, . . . , i + δ − 1 is mutated. E_2 is the event that none of the positions i, . . . , i + δ − 1 is mutated and at least one of the positions i + δ, . . . , i + k − 1 is mutated. E_3 is the event that none of the positions i, . . . , i + k − 1 is mutated. Notice that the three events form a partition of the event space: under E_1, X_i = 1 and X_{i+δ} = 1 with probability q; under E_2, both are 1; under E_3, X_i = 0. Hence we can write Pr[X_i = 1 ∧ X_{i+δ} = 1] = Pr[E_1]q + Pr[E_2] = (1 − (1 − r_1)^δ)q + (1 − r_1)^δ − (1 − r_1)^k = 1 − 2(1 − r_1)^k + (1 − r_1)^{k+δ}. We can now compute the variance using tedious but straightforward algebraic calculations. As we will show in the following section, knowing the variance allows us to obtain a confidence interval or do a hypothesis test based on N_mut.
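The joint probability for overlapping k-spans can also be checked by simulation: for δ = j − i < k, inclusion-exclusion on the complements gives Pr[X_i = 1 ∧ X_j = 1] = 1 − 2(1 − r_1)^k + (1 − r_1)^{k+δ}, since the union of the two spans covers k + δ positions. The Monte Carlo sketch below is our own sanity check, with arbitrary parameters:

```python
import random

def joint_mutation_freq(k, delta, r1, trials, rng):
    """Empirical Pr[X_i = 1 and X_{i+delta} = 1] for overlapping k-spans (delta < k)."""
    hits = 0
    for _ in range(trials):
        pos = [rng.random() < r1 for _ in range(k + delta)]
        x_i = any(pos[0:k])               # K_i covers positions 0..k-1
        x_j = any(pos[delta:delta + k])   # K_{i+delta} covers delta..delta+k-1
        hits += x_i and x_j
    return hits / trials

rng = random.Random(1)
k, delta, r1 = 21, 5, 0.05
# Pr[X_i = 0 or X_j = 0] = 2(1-r1)^k - (1-r1)^(k+delta) by inclusion-exclusion.
exact = 1 - 2 * (1 - r1) ** k + (1 - r1) ** (k + delta)
estimate = joint_mutation_freq(k, delta, r1, 100_000, rng)
```

The empirical frequency should agree with the closed form to within Monte Carlo error.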

Hypothesis test for M -dependent variables
Our derivations of hypothesis tests and confidence intervals follow the strategy used for the Binomial, which we now describe so as to provide intuition. In the case of estimating the success probability p of a Binomial variable X when the number of trials L is known, a confidence interval for p is called a binomial proportion confidence interval [6]. There are multiple ways to calculate such an interval, as described and compared in [4], and we will follow the approach of the Wilson score interval [36]. It works by first approximating the Binomial with a Normal distribution and then applying a standard score. The result is that Pr[|X − Lp| ≤ z_α √Var(X)] = 1 − α + ε(L, p), where Var(X) = Lp(1 − p) and ε(L, p) is a function such that lim_{L→∞} ε(L, p) = 0; recall that z_α = Φ^{−1}(1 − α/2). This can be solved for X to obtain a hypothesis test X ∈ Lp ± z_α √Var(X). This can be converted into a confidence interval by finding all values of p for which X ∈ Lp ± z_α √Var(X) holds. In the particular case of the Binomial, a closed-form solution is possible [36], but, more generally, one can also find the solution numerically.
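For concreteness, the Wilson score interval can be written out as follows (a standard textbook formula, included here only as a reference point for the generalization; the helper is our own):

```python
import math

def wilson_interval(x, L, z):
    """Approximate CI for a Binomial proportion p, given x successes in L
    trials; z should be Phi^{-1}(1 - alpha/2) (e.g. z = 1.96 for alpha = 0.05)."""
    phat = x / L
    denom = 1 + z * z / L
    center = (phat + z * z / (2 * L)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / L + z * z / (4 * L * L))
    return center - half, center + half

low, high = wilson_interval(42, 100, 1.96)
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly at the boundaries x = 0 and x = L.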
Though random variables like N mut are not Binomial, they have a specific form of dependence between the trials, which allows us to apply a similar strategy. A sequence of L random variables X 0 , . . . , X L−1 is said to be m-dependent if there exists a bounded m (with respect to L) such that if j − i > m, then the two sets {X 0 , . . . , X i } and {X j , . . . , X L−1 } are independent [13]. In other words, m-dependence says that the dependence between a sequence of random variables is limited to be within blocks of length m along the sequence. It is known that the sum of m-dependent random variables is asymptotically normal [13] and this was previously used to construct heuristic hypothesis tests and confidence intervals [18]. Even stronger, the rate of convergence of the sum of m-dependent variables to the Normal distribution is known due to a technique called Stein's method (see Theorem 3.5 in [25]). (This technique applies even to the case where m is not bounded, but that will not be the case in our paper.) Here, we apply Stein's method to obtain a formally correct hypothesis test together with a rate of convergence for a sum of m-dependent (not necessarily identically distributed) Bernoulli variables.
As we will see, m-dependence is well-suited for dealing with variables in the simple mutation model. In most natural cases, the error |ε| → 0 when L → ∞, and Lemma 3 gives a hypothesis test with significance level 1 − α.

Hypothesis tests for N_mut and Ĵ and confidence intervals for r_1

There is a natural point estimator for r_1 using N_mut, defined as r̂_1 = 1 − (1 − N_mut/L)^{1/k}. This estimator is both the method of moments and the maximum likelihood estimator, meaning it has nice convergence properties as L increases [35]. In this section, we extend it to a confidence interval and a hypothesis test, both from N_mut and from Ĵ (with and without sketching). In the N_mut setting, Lemma 1 shows that X_0, . . . , X_{L−1} are m-dependent with m = k − 1. Hence we can apply Lemma 3, where |ε| ≤ c/L^{1/4} and c is a constant that depends only on r_1 and k. In particular, when r_1 and k are independent of L, we have lim_{L→∞}(1 − α + ε) = 1 − α. Corollary 4 gives the closed-form boundaries for a hypothesis test on N_mut. To compute a confidence interval for q (equivalently, for r_1), we can numerically find the range of q for which the observed N_mut lies between n_low and n_high. In other words, the upper bound on the range would be given by the value of q for which the observed N_mut is n_low, and the lower bound by the value of q for which the observed N_mut is n_high. These observations are made rigorous in Theorem 5. We will use the notation N^q_mut to denote N_mut with parameter q.

Theorem 5. For fixed k, r_1, and α, for a given observed value of N^q_mut, there exists an L large enough such that there exists a unique q_low such that N^q_mut = Lq_low + z_α √Var(N^{q_low}_mut) and a unique q_high such that N^q_mut = Lq_high − z_α √Var(N^{q_high}_mut). Moreover, q ∈ [q_low, q_high] with probability 1 − α + ε, where |ε| ≤ c/L^{1/4} and c is a constant that depends only on r_1 and k.
In particular, for fixed r_1 and k, we have lim_{L→∞}(1 − α + ε) = 1 − α. Note that this theorem states that for sufficiently large L, there is a unique solution for the value of q for which the observed N_mut is n_high (and similarly a unique solution for the value of q for which the observed N_mut is n_low). For small L, we have no such guarantee (though we believe the theorem holds true for all L ≥ k); to deal with this possibility, our software verifies that the solutions are indeed unique by computing the derivative inside the proof of Theorem 5 and checking whether it is positive. If it is, then the proof guarantees the solutions to be unique; if it is not, our software reports this. However, during our validations, we did not find such a case to occur.
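The numerical inversion can be sketched as follows. Var(N_mut) is computed exactly from the pairwise joint probabilities of overlapping k-spans (inclusion-exclusion gives Pr[X_i = 1 ∧ X_j = 1] = 1 − 2(1 − r_1)^k + (1 − r_1)^{k+|j−i|} for |j − i| < k), and we bisect on q; function names and tolerances here are our own choices, not the paper's implementation:

```python
import math

def var_nmut(L, k, r1):
    """Exact Var(N_mut) as a sum of pairwise covariances of the X_i."""
    q = 1 - (1 - r1) ** k
    var = L * q * (1 - q)                 # diagonal (delta = 0) terms
    for delta in range(1, min(k, L)):     # only overlapping pairs covary
        p_joint = 1 - 2 * (1 - r1) ** k + (1 - r1) ** (k + delta)
        var += 2 * (L - delta) * (p_joint - q * q)
    return var

def ci_for_q(n_obs, L, k, z=1.96, iters=60):
    """Bisection for q_low, q_high with L*q +/- z*sqrt(Var(N_mut)) hitting n_obs."""
    def solve(sign):
        lo, hi = 1e-12, 1 - 1e-12
        for _ in range(iters):
            mid = (lo + hi) / 2
            r1 = 1 - (1 - mid) ** (1 / k)
            val = L * mid + sign * z * math.sqrt(var_nmut(L, k, r1))
            lo, hi = (mid, hi) if val < n_obs else (lo, mid)
        return (lo + hi) / 2
    return solve(+1), solve(-1)

q_low, q_high = ci_for_q(1905, L=10_000, k=21)
```

Note that the overlap covariances make Var(N_mut) considerably larger than the Binomial value Lq(1 − q), which is why a Binomial CI would be too narrow here.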
We want to underscore how the difference between a confidence interval and a hypothesis test is relevant in our case. A confidence interval is useful when we have two sequences, one of which has evolved from the other, and we would like to estimate their mutation rate from the number of mutated k-spans. A hypothesis test is useful when we know the mutation rate a priori, e.g. the error rate of a sequencing machine. In this case, we may want to know whether a read could have been generated from a putative genome location, given the number of observed mutated k-spans. We will see both applications in Section 7.
In some cases, N_mut is not observed but instead we observe another random variable T = f(N_mut), where f is a monotone function. For example, if f(n) = (L − n)/(L + n), then T is the Jaccard similarity between the original and the mutated sequence (in our model). In this case, a hypothesis test with significance level α is to check if T lies between f(n_low) and f(n_high). In addition to the Jaccard, [17] describe 14 other variables that are a function of N_mut, L, and k. These are: Anderberg, Antidice, Dice, Gower, Hamman, Hamming, Kulczynski, Matching, Ochiai, Phi, Russel, Sneath, Tanimoto and Yule. We can apply our hypothesis test to any of these variables, as long as they are monotone with respect to N_mut.
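Transferring the test through a monotone map is mechanical; for the Jaccard map f(n) = (L − n)/(L + n), which is decreasing in n, the endpoints of the acceptance region simply swap. The boundary values below are illustrative numbers of our own, not from the paper:

```python
def jaccard_of(n, L):
    """f(n) = (L - n)/(L + n): Jaccard similarity when n k-spans are mutated."""
    return (L - n) / (L + n)

L, n_low, n_high = 10_000, 1_600, 2_200   # hypothetical test boundaries
# f is decreasing, so the acceptance region for T becomes [f(n_high), f(n_low)].
j_lo, j_hi = jaccard_of(n_high, L), jaccard_of(n_low, L)
```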
We can also use Lemma 3 as a basis for deriving a hypothesis test on Ĵ in the sketching model. The proof is more involved and interesting in its own right, but is left for the Appendix due to space constraints. Theorem 6. Consider the sketching simple mutation model with known parameters s, k, L ≥ k, r_1, and output Ĵ. Let 0 < α < 1 and let m ≥ 2 be an integer.
Then, assuming that r_1 and k are independent of L, and m = o(L^{1/4}), Ĵ lies in the interval [j_low, j_high] with limiting probability 1 − α, where j_low and j_high are defined in terms of the hypergeometric tail function F_n. We can compute a confidence interval for q from Ĵ in the same manner as with Corollary 4. Let j_low(q) and j_high(q) be defined as in Theorem 6, but explicitly parameterized by the value of q. Then we numerically find the smallest value 0 < q_low < 1 for which j_low(q_low) = Ĵ and the largest value 0 < q_high < 1 for which j_high(q_high) = Ĵ. The following theorem guarantees that [q_low, q_high] is a confidence interval for q.
Theorem 7. For fixed k, r_1, α, m, and a given observed value of Ĵ, there exists an L large enough such that the values q_low and q_high above exist and are unique. Moreover, assuming that r_1, k and m are independent of L, q ∈ [q_low, q_high] with limiting probability 1 − α.

Number of islands and oceans
In this section, we derive the expectation and variance of N_isl and N_ocean and the hypothesis tests based on them. For N_isl, we follow the same strategy as for N_mut, namely to express N_isl as a sum of indicator random variables whose joint probabilities can be derived. Let us define a right border as a position i such that K_i is mutated and K_{i+1} is not; we will denote it by an indicator variable B_i. Let us also say that there exists an end-of-string border iff K_{L−1} is mutated; we will denote this by an indicator variable Z. A right border is a position where an island ends and an ocean begins, and the end-of-string border exists if the last island is terminated not by an ocean but by the end of available nucleotides in the string to make a k-mer. The number of islands is then the number of borders, i.e. N_isl = Z + Σ_{i=0}^{L−2} B_i. To compute the expectation, observe that Z is a Bernoulli variable with parameter q. For B_i, observe that the only way that K_i is mutated while K_{i+1} is not is if position i is mutated and the positions i + 1, . . . , i + k are not. Therefore, B_i ∼ Bernoulli(r_1(1 − q)). By linearity of expectation, E[N_isl] = q + (L − 1)r_1(1 − q). Next, we derive the dependencies between border variables and use them to compute the variance.
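Combining the border indicators, linearity of expectation gives E[N_isl] = q + (L − 1)r_1(1 − q); the simulation below is our own sanity check of this identity, with arbitrary parameters:

```python
import random

def count_islands(L, k, r1, rng):
    pos = [rng.random() < r1 for _ in range(L + k - 1)]
    spans = [any(pos[i:i + k]) for i in range(L)]
    # right borders: K_i mutated, K_{i+1} not; plus the end-of-string border.
    borders = sum(1 for i in range(L - 1) if spans[i] and not spans[i + 1])
    return borders + (1 if spans[-1] else 0)

rng = random.Random(7)
L, k, r1, reps = 2_000, 11, 0.02, 300
q = 1 - (1 - r1) ** k
expected = q + (L - 1) * r1 * (1 - q)
avg = sum(count_islands(L, k, r1, rng) for _ in range(reps)) / reps
```

The average island count over the replicates should match the closed form to within sampling error.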
Proof. Observe that when j − i > k, the positions that have an effect on B_i (i.e. positions i, . . . , i + k) and those that have an effect on B_j (i.e. positions j, . . . , j + k) are disjoint. Hence, B_i and B_j are independent in this case. When 1 ≤ j − i ≤ k, B_i and B_j cannot co-occur. This is because B_i = 1 requires the positions i + 1, . . . , i + k to be non-mutated, while B_j = 1 requires position j ∈ {i + 1, . . . , i + k} to be mutated. Lemma 8 also shows that N_isl is m-dependent, with m = k. Therefore, a hypothesis test on N_isl can be obtained as a corollary of Lemma 3.
Corollary 10. Fix r_1 and let 0 < α < 1. Then, the probability that N_isl ∈ E[N_isl] ± z_α √Var(N_isl) is 1 − α + ε, where |ε| ≤ c/L^{1/4} and c is a constant that depends only on r_1 and k. In particular, when r_1 and k are independent of L, we have lim_{L→∞}(1 − α + ε) = 1 − α.
Unlike for Corollary 4, it is not as straightforward to invert this hypothesis test into a confidence interval for r_1, since the endpoints of the interval for N_isl are not monotone in r_1. We therefore do not pursue this direction here. The derivation of the expectation and variance for N_ocean is analogous and left for the Appendix (Theorem 12). Observe that |N_ocean − N_isl| ≤ 1, so, as expected, the expectation and variance are identical to those of N_isl in the higher order terms. Corollary 10 also holds when the observed value is the number of oceans, if we just replace N_isl with N_ocean.
An immediate application of N_ocean is to compute a hypothesis test for the coverage by exact regions (C_ber), a variable earlier applied to score read mappings in [19]. C_ber is the fraction of positions in [L + k − 1] that lie in k-spans that are in oceans. The total number of bases in all the oceanic k-spans is the number of non-mutated k-spans plus, for each ocean, an extra k − 1 "starter" bases. We can then write C_ber = (L − N_mut + (k − 1)N_ocean)/(L + k − 1). We can use the expectations and variances of N_mut (eq. (1) and Theorem 2) and N_ocean (Theorem 12) to derive the expectation and variance of C_ber; Theorem 11 gives the resulting closed-form expressions for E[C_ber] and, for L ≥ k + 3, for Var(C_ber). Then, observing that C_ber is a linear combination of m-dependent variables and hence itself m-dependent, we can apply Lemma 3 and obtain a hypothesis test, when r_1 and k are independent of L.

Table 2: The accuracy of the confidence intervals for r_1 predicted by Corollary 4, for α = 0.05 and for various values of L, r_1, and k (the first three groups) and for the E.coli sequence (the fourth group). NA indicates the experiment was not run; for the first three groups, we only ran on parameters where E[N_mut] < L (otherwise they were not of interest), while for E.coli, we ran with the same range of values of r_1 and k as in the first three groups. In each cell, we report the fraction of 10,000 replicates for which the true r_1 falls into the predicted confidence interval. For the E.coli sequence, we used strain Shigella flexneri Shi06HN159.
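C_ber can be computed directly from the indicator sequence, using the fact that an ocean of ℓ k-spans covers ℓ + k − 1 bases; the helper below is our own illustration, not Jabba's implementation:

```python
from itertools import groupby

def c_ber(spans, k):
    """spans[i] == True iff K_i is mutated; fraction of the L + k - 1
    positions covered by oceanic (non-mutated) k-spans."""
    L = len(spans)
    n_mut = sum(spans)
    n_ocean = sum(1 for mutated, _ in groupby(spans) if not mutated)
    # each ocean of length l covers l + (k - 1) bases
    return (L - n_mut + (k - 1) * n_ocean) / (L + k - 1)

print(c_ber([False] * 10, 5))  # a single all-ocean sequence -> 1.0
```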

Empirical results and applications
In this section, we evaluate the accuracy of our results and demonstrate several applications. A sanity-check validation of the correctness of our formulas for E[N_mut] and Var[N_mut] is shown in Table S1; however, most of the expectation and variance formulas are evaluated indirectly through the accuracy of the corresponding confidence intervals. We focus the evaluation on accuracy rather than run time, since calculating the confidence interval took no more than a few seconds in most cases (the only exception was sketch sizes of 100k or more, for which the evaluation took on the order of minutes). Memory use was negligible in all cases.

Confidence intervals based on N mut
In this section, we evaluate the accuracy of the confidence intervals (CIs) produced by Corollary 4 (other CIs will be evaluated indirectly through applications). We first simulate the simple mutation model to measure the accuracy, shown in the left three groups (i.e. L = 100, 1000, 10000) of Table 2, for α = 0.05. We observe that the predicted CIs are very accurate at L = 1000, and also accurate for smaller k and r 1 when L = 100. Similar results hold for α = 0.01 (Table S2) and α = 0.10 (Table S3). The remainder of the cases had a CI that was too conservative; these are also the cases with some of the smallest variances (Table S1) and we suspect that, similar to the case of the Binomial, the Normal approximation of m-dependent variables deteriorates with very small variances. However, further investigation is needed.
Next, we investigate how well our predictions hold up when we simulate mutations along a real genome, where we can only observe the set of k-mers without their positions in the genome (as in alignment-free sequence comparison). We start with the E.coli genome sequence and, for every position, with probability r_1, flip the nucleotide to one of the three other nucleotides, chosen with equal probability. Let A and B be the set of distinct k-mers in E.coli before and after the mutation process, respectively. We let L = (|A| + |B|)/2 and n = L − |A ∩ B|. We then calculate the 95% CI for r_1 under the simple mutation model (Corollary 4) by plugging in n for N_mut. The rightmost group in Table 2 shows the accuracy of these CIs. We see that the simple mutation model we consider in this paper is a good approximation to mutations along a real genome like E.coli.

Table 3: The confidence intervals predicted by Theorem 6 and their accuracy. For each sketch size and r_1 value, we show the number of trials for which the true r_1 falls within the predicted confidence interval. The reported CI corresponds to applying Theorem 6 with Ĵ = (1 − q)/(1 + q). Here, α = 0.05, k = 21, L = 4,500,000, and the sketch size s and r_1 are varied as shown. The number of trials for each cell is 1,000, and m = 100 for Theorem 6.

Mash distance
The Mash distance [22] (and its follow-up modifications [21,27]) first measures the minhash Jaccard similarity j between two sequences and then uses a formula to give a point estimate for r_1 under the assumptions of the sketching simple mutation model. While a hypothesis test was described in [22], it was only for the null model where the two sequences were unrelated. Theorem 6 allows us instead to give a CI for r_1, based on the minhash Jaccard similarity, in the sketching simple mutation model. Table 3 reproduces a subset of Table 1 from [22], but using CIs given by Theorem 6. For most cases, the predicted CIs are highly accurate, with an error of at most two percentage points. The three exceptions happen when s is small and q is large; in such cases, the predicted CI is too conservative (i.e. too large). In Table S4, we also tested the accuracy with a real E.coli genome by letting A and B be the set of distinct k-mers in the genome before and after mutations, respectively, letting L = (|A| + |B|)/2 and Ĵ = |(A ∪ B)^S ∩ A^S ∩ B^S|/s, and applying Theorem 6 with those values. The accuracy is very similar to that in the simple mutation model, demonstrating that for a genome like E.coli, the simple mutation model is a good approximation.

Filtering out reads during alignment to a reference
Minimap2 is a widely used long read aligner [16]. The algorithm first picks certain k-mers in a read as seeds. Then, it identifies a region of the read and a region of the reference that potentially generated it (called a chain in [16]). Let n be the number of seeds in the read and let m ≤ n be the number of those that exist in the reference region. Minimap2 models the error rate of the k-mers as a homogeneous Poisson process and estimates the sequence divergence between the read and the reference as ε̂ = (1/k) log(n/m) (which is the maximum likelihood estimator in that model). If ε̂ is above a threshold, the alignment is abandoned. [16] observes that, due to invalid assumptions, ε̂ is only approximate and can be biased, but nevertheless maintains a good correlation with the true divergence.
Using our paper, we can obtain a more accurate estimate of r_1. The situation is very similar to estimating r_1 from N_mut, except that only a subset of k-spans are being "tracked." Therefore, the maximum likelihood estimator for q is 1 − m/n and for r_1 is r̂_1 = 1 − (m/n)^{1/k}. Figures 2 and S3 show the relative performance of the two estimators (ε̂ and r̂_1) for sequences of different lengths, with our r̂_1 much closer to the simulated rate than ε̂ in both cases.

Figure 2: Estimates of sequence divergence as done by minimap2 (ε̂) and by us (r̂_1). Reads are simulated from a random 10kbp sequence, introducing mutations at the given r_1 rate. For each r_1 value, 100 reads are used. As in [16], we use k = 15 and, using a random hash function, identify as seeds the k-mer minimizers, one for every window of 25 k-mers. In the case when ε̂ is undefined, we set ε̂ = 1.
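The two estimators are related by r̂_1 = 1 − exp(−ε̂), so they agree to first order for small divergences; a small comparison script (the seed counts are illustrative values of our own, not the paper's data):

```python
import math

def eps_hat(n, m, k):
    """minimap2's divergence estimate (1/k) * log(n/m)."""
    return math.log(n / m) / k

def r1_hat(n, m, k):
    """The estimator 1 - (m/n)^(1/k) derived from the simple mutation model."""
    return 1 - (m / n) ** (1 / k)

k, n, m = 15, 200, 140   # hypothetical seed counts: n seeds, m found in the reference
e = eps_hat(n, m, k)
r = r1_hat(n, m, k)
```

Since 1 − exp(−x) < x for x > 0, ε̂ always exceeds r̂_1, with the gap growing as the divergence increases.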

Evaluating an alignment of a long read to a graph
Jabba [19] is an error-correction algorithm for long read data. At one stage, the algorithm evaluates whether a read is likely to have originated from a given location in the reference. Because Jabba's reference is a de Bruijn graph and not a string, it uses the specialized C_ber score for the evaluation. In this scenario, the mutation process corresponds to sequencing errors at a known error rate r_1 and the question is whether the read is likely to have arisen through this process from the given location of the reference. The authors assume the simple mutation model and derive the expected C_ber score as 1 − r_1 − Σ_{i=0}^{k−1} i(1 − r_1)^i r_1^2. They then give a lower rating to reads with a C_ber score that has a "significant deviation" from this expected value. It is not clear how much of a deviation is deemed significant or how it was calculated.
Theorem 11, which gives E[C_ber] and Var[C_ber], would have allowed [19] to take a more rigorous approach. It shows that the C_ber expectation computed by [19] is correct only in the limit as L → ∞, while our formula is exact and closed-form. More substantially, we can make the determination of "significant deviation" more rigorous. We regenerated Figure 2 from [19], using the same range of values for k (called m in [19]) and an error rate of r_1 = 10% as in [19], and plotted the 95% confidence interval: E[C_ber] ± z_{.05} √Var(C_ber). Figure 3 demonstrates that this range would have done a good job at capturing most of the generated reads. Table 4 gives the number of C_ber values that fall inside of the 95% confidence interval when using a simple mutation process with the same r_1 = 10% for sequences of length 10,000 for 5,000 replicates, with k ranging from 5 to 50 in steps of 5, depicting good agreement between simulation and Theorem 11.

Table 4: A total of 5,000 sequences, each of length 10,000nt, underwent a simple mutation process with mutation probability r_1 = 0.1. The percent of associated C_ber scores that fell inside of the 95% confidence interval, as determined by Theorem 11, is shown.

Conclusion
The simple mutation model has been used broadly to model either biological mutations or sequencing errors. However, its use has usually been limited to deriving the expectations of random variables, e.g. the expected number of mutated k-mers. In this paper, we take this a step further and show that the dependencies between indicator variables in this model (e.g. whether a k-mer at a given position is mutated) are often easy to derive and are limited to nearby locations. This limited dependency allows us to show that the sum of these indicators is approximately Normal. As a result, we are able to obtain hypothesis tests and confidence intervals in this model. The most immediate application of our paper is likely to compute a confidence interval for average nucleotide identity from the minhash sketching Jaccard. Previously, only a point estimate was available, using Mash. However, we hope that our technique can be applied by others to random variables that we did not consider. All that is needed is to derive the joint probability of the indicator variables and compute the variance. Computing the variance by hand is tedious and error-prone but can be done with the aid of software like Mathematica.
We test the robustness of the simple mutation model in the presence of spurious matches by using a real E.coli sequence. However, we do not explore the robustness with respect to violations such as the presence of indels (which result in different string lengths) or the presence of more repeats than in E.coli. This type of robustness has already been explored in other papers that use the simple mutation model [8,22,27]. However, exploring the robustness of our confidence intervals in downstream applications is important future work.
On a more technical note, it would be interesting to derive tighter error bounds for our confidence intervals, both in terms of more tightly capturing the dependencies on L, r_1, and k, and in terms of accurately tracking constants. The error bound ε that is stated in Lemma 3 is likely not tight in either respect, due to the inherent loss when transferring between the Wasserstein and Kolmogorov metrics and due to loose inequalities within the proof of Theorem 3.5 in [25]. Ideally, tight error bounds would give the user a way to know, without simulations, when the confidence intervals are accurate, in the same way that we know that the Wilson score interval for a Binomial will be inaccurate when np(1 − p) is low. For example, it would be useful to better theoretically explain and predict which values in Table 2 deviate from 0.95.
Another practical issue is the implementation of the algorithm to compute a confidence interval for $q$ from $\hat{J}$. Theorem 7 guarantees that the algorithm is correct as $L$ goes to infinity; however, the user of the algorithm will not know whether $L$ is large enough for the confidence interval to be correct. There are several heuristic ways to check this, which we have implemented in the software: a short simulation to check the true coverage of the reported confidence interval, a check that the sets in the definitions of $j_{\mathrm{high}}$ and $j_{\mathrm{low}}$ are not empty, and a check that $j_{\mathrm{high}}$ and $j_{\mathrm{low}}$ are monotonic with respect to $q$ in the range $0 < q < 1$.
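The first of these heuristics, checking the true coverage by a short simulation, can be sketched as follows. This is a simplified stand-in for what the software does, under two stated shortcuts: the interval is a plain normal-approximation interval for $q$ built from $N_{\mathrm{mut}}$ rather than from $\hat{J}$, and the standard deviation of $N_{\mathrm{mut}}$ is estimated by a pilot simulation at the true parameters rather than by the closed-form variance:

```python
import random
import statistics

def n_mut(L, k, r, rng):
    """One draw of N_mut under the simple mutation model."""
    count, last = 0, -1
    for pos in range(L + k - 1):
        if rng.random() < r:
            last = pos
        if pos >= k - 1 and last >= pos - (k - 1):
            count += 1
    return count

rng = random.Random(1)
L, k, r, z = 500, 21, 0.01, 1.96    # z for a nominal 95% interval
q = 1 - (1 - r) ** k

# Pilot simulation: estimate sd(N_mut).  (A simplification; the paper
# uses the closed-form Var(N_mut) instead.)
sd = statistics.pstdev([n_mut(L, k, r, rng) for _ in range(300)])

# Coverage check: how often does the true q fall in q_hat +/- z*sd/L?
trials = 300
hits = 0
for _ in range(trials):
    q_hat = n_mut(L, k, r, rng) / L
    if q_hat - z * sd / L <= q <= q_hat + z * sd / L:
        hits += 1
coverage = hits / trials
print(coverage)  # should be near the nominal 0.95
```

If the reported coverage is far from the nominal level, $L$ is likely too small for the normal approximation to be trustworthy.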

A.1 Missing theorems and proofs
Proof. In the following we will use Lemma 1 and the equality $\ldots$

Theorem 5. For fixed $k$, $r_1$, and $\alpha$, for a given observed value of $N^q_{\mathrm{mut}}$, there exists an $L$ large enough such that there exists a unique $q_{\mathrm{low}}$ satisfying $N^q_{\mathrm{mut}} = Lq_{\mathrm{low}} + z_\alpha\sqrt{\mathrm{Var}(N^{q_{\mathrm{low}}}_{\mathrm{mut}})}$ and a unique $q_{\mathrm{high}}$ satisfying $N^q_{\mathrm{mut}} = Lq_{\mathrm{high}} - z_\alpha\sqrt{\mathrm{Var}(N^{q_{\mathrm{high}}}_{\mathrm{mut}})}$, and where $|\varepsilon| \le c/L^{1/4}$ and $c$ is a constant that depends only on $r_1$ and $k$. In particular, for fixed $r_1$ and $k$, we have $\lim_{L\to\infty}(1 - \alpha + \varepsilon) = 1 - \alpha$.
Proof. Given the result in Corollary 4, we need only show that $q_{\mathrm{low}}$ and $q_{\mathrm{high}}$ are well-defined. As such, it is sufficient to show that $Lq + z_\alpha\sqrt{\mathrm{Var}(N^q_{\mathrm{mut}})}$ and $Lq - z_\alpha\sqrt{\mathrm{Var}(N^q_{\mathrm{mut}})}$ are strictly monotonic in $q$ for sufficiently large $L$. Equivalently, since $q = 1 - (1 - r_1)^k$, it suffices to show that they are strictly monotonic in $r_1$, which we do here. For simplicity, we write $r$ instead of $r_1$ and $N_{\mathrm{mut}}$ instead of $N^q_{\mathrm{mut}}$. Focusing first on $L(1 - (1-r)^k) + z_\alpha\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$, consider the equality in eq. (3). After a (tedious) series expansion of its right side about $L = \infty$, we find that
$$\frac{\partial}{\partial r}\left[L(1 - (1-r)^k) + z_\alpha\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}\right] = kL(1-r)^{k-1} + o(L).$$
As such, $L(1 - (1-r)^k) + z_\alpha\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$ is increasing as a function of $r$ as $L \to \infty$. Showing that $L(1 - (1-r)^k) - z_\alpha\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$ is also increasing proceeds in an entirely analogous fashion.

Theorem 6. Consider the sketching simple mutation model with known parameters $s$, $k$, $L \ge k$, $r_1$, and output $\hat{J}$. Let $0 < \alpha < 1$ and let $m \ge 2$ be an integer. For $0 \le i \le m$, let $n^i_l = Lq - z_{i/m}\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$ and $n^i_h = Lq + z_{i/m}\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$. Let $j_{\mathrm{low}}(q) = \ldots$ and $j_{\mathrm{high}}(q) = \ldots$ Then, assuming that $r_1$ and $k$ are independent of $L$, and $m = o(L^{1/4})$, $\ldots$

Proof. Recall the definitions of $A$ and $B$ from the definition of the minhash Jaccard estimator. First, we argue that an element of $(A \cup B)^S$ is in $A^S \cap B^S$ iff it corresponds to a non-mutated k-span (i.e. iff $x = \mathrm{shared}_i$ for some $i$). Consider an element $x \in (A \cup B)^S$. If $x$ corresponds to a mutated k-span (i.e. $x = \text{a-distinct}_i$ or $x = \text{b-distinct}_i$ for some $i$), then $x \notin A \cap B$ and so $x \notin A^S \cap B^S$. If $x$ does not correspond to a mutated k-span (i.e. $x = \mathrm{shared}_i$ for some $i$), then $x \in A \cap B$ and $x \in A^S \cap B^S$ as well. Next, let $J'$ be the random variable corresponding to $|(A \cup B)^S \cap A^S \cap B^S|$. The minhash Jaccard estimator can then be expressed as $\hat{J} = J'/s$. Note that $J'$ contains randomness due to both the mutation process and the choice of the minhash permutation.
For ease of notation we set $N = N_{\mathrm{mut}}$. We claim that the distribution of $J'$, conditioned on $N = n$, is hypergeometric $H(L+n, L-n, s)$. To see this, recall from the discussion in the previous paragraphs that an element of $(A \cup B)^S$ is in $A^S \cap B^S$ only when it corresponds to a non-mutated k-span and, on the event that $N = n$, there are exactly $L - n$ non-mutated k-spans and a total of $L + n$ k-spans to be hashed. We assume the hash values are assigned by a random permutation. Equivalently, we can generate the hash values by repeatedly assigning the smallest available hash value to an element chosen uniformly at random among those that have not yet been hashed. Then, among the first $s$ elements, the probability that exactly $a$ of those are selected from the set of the $L - n$ non-mutated k-spans is
$$\Pr[J' = a \mid N = n] = \frac{\binom{L-n}{a}\binom{2n}{s-a}}{\binom{L+n}{s}}.$$
Observe that $F_n(a)$ is a non-increasing function with respect to $n$; this is because increasing $n$ in $H(L+n, L-n, s)$ has the overall effect of reducing the probability of success, since the population size is increased and the number of successes is decreased. Using this, we can find upper and lower bounds for $B(a)$ as follows.
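The monotonicity of $F_n(a)$ in $n$ is easy to verify numerically. In the sketch below we take $F_n(a)$ to be the upper-tail probability $\Pr[H(L+n, L-n, s) \ge a]$, which is the reading consistent with the stated monotonicity (the displayed definition is not reproduced here), and compute it from scratch with `math.comb`:

```python
from math import comb

def hypergeom_pmf(a, pop, succ, draws):
    """P[exactly a successes] when drawing `draws` items without
    replacement from `pop` items, of which `succ` are successes."""
    if a < 0 or a > succ or draws - a > pop - succ:
        return 0.0
    return comb(succ, a) * comb(pop - succ, draws - a) / comb(pop, draws)

def F(n, a, L, s):
    """Upper tail P[J' >= a] for J' ~ H(L + n, L - n, s)."""
    return sum(hypergeom_pmf(j, L + n, L - n, s) for j in range(a, s + 1))

L, s, a = 100, 20, 10
values = [F(n, a, L, s) for n in range(0, L + 1)]
# As n grows, successes (L - n) shrink and the population (L + n)
# grows, so the upper tail can only decrease.
print(values[0], values[-1])
```

At the extremes the behavior is clear: with $n = 0$ every drawn k-span is non-mutated, and with $n = L$ none is.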
Similarly, we obtain $B(a) \ge \sum_{i : n^i_l > 0} \ldots$ We now approximate $p^i_h$ and $p^i_l$ as follows. Observe that when $\ldots$ Here, $\varepsilon_1$ and $\varepsilon_2$ are constants whose absolute values are bounded by $\varepsilon_{\max} = c/L^{1/4}$, with $c > 0$ a constant that depends only on $q$ and $k$. Hence, $p^i_h \in 1/(2m) \pm \varepsilon_{\max}$. Analogously, we have $p^i_l \in 1/(2m) \pm \varepsilon_{\max}$. This allows us to further simplify the bounds for $B(a)$: $\ldots$

Theorem 7. For fixed $k$, $r_1$, $\alpha$, $m$, and a given observed value of $\hat{J}$, there exists an $L$ large enough such that there exist unique intervals $[q^-_{\mathrm{low}}, q^+_{\mathrm{low}}]$ and $[q^-_{\mathrm{high}}, q^+_{\mathrm{high}}]$ such that $q^+_{\mathrm{high}} \ge q^-_{\mathrm{low}}$, $j_{\mathrm{low}}(q) = \hat{J}$ if and only if $q \in [q^-_{\mathrm{low}}, q^+_{\mathrm{low}}]$, and $j_{\mathrm{high}}(q) = \hat{J}$ if and only if $q \in [q^-_{\mathrm{high}}, q^+_{\mathrm{high}}]$. Moreover, assuming that $r_1$, $k$, and $m$ are independent of $L$, we have $\ldots$

Proof. Recall from Theorem 6 that $n^i_l = Lq - z_{i/m}\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$ and $n^i_h = Lq + z_{i/m}\sqrt{\mathrm{Var}(N_{\mathrm{mut}})}$. First, observe from Theorem 2 that $\mathrm{Var}(N_{\mathrm{mut}}) = cL + o(L)$, where $c$ is a constant depending only on $k$ and $r_1$. Consequently, $\sqrt{\mathrm{Var}(N_{\mathrm{mut}})} = o(L)$ and so, for fixed $k$, $r_1$, $\alpha$, and $m$, there exists an $L$ sufficiently large such that, for all $i = 0, \ldots, m$, $n^i_h \in [0, L]$ and $n^i_l \in [0, L]$. Therefore, for all values of $q$, there exists an $L$ sufficiently large such that the summations in the definitions of $j_{\mathrm{low}}$ and $j_{\mathrm{high}}$ are over $0 \le i \le m$. Second, in the proof of Theorem 5 we established that $n^i_l$ and $n^i_h$ are increasing in $q$ provided $L$ is sufficiently large. Therefore, the parameters in the subscripts of the $F$ terms of $j_{\mathrm{low}}$ and $j_{\mathrm{high}}$ are also increasing in $q$ when $L$ is sufficiently large. Third, observe that $F_n(a)$ is a non-increasing function of $n$ and of $a$. That it is non-increasing in $n$ we already observed in the proof of Theorem 6; that it is non-increasing in $a$ follows directly from its definition. Combining these three observations, we deduce that $j_{\mathrm{low}}$ and $j_{\mathrm{high}}$ are non-increasing functions of $q$.
Therefore, they take on a given value (i.e. $\hat{J}$) on a unique range of the domain, implying the first assertion of the theorem. The second assertion of the theorem then follows from Theorem 6.
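Because $j_{\mathrm{low}}$ and $j_{\mathrm{high}}$ are non-increasing in $q$, the interval endpoints of Theorem 7 can be located by bisection. A minimal sketch of that inversion follows; the function `j_low` below is a smooth non-increasing placeholder standing in for the real $j_{\mathrm{low}}$ (which is built from the hypergeometric $F$ terms of Theorem 6), so only the search logic is illustrated:

```python
def invert_nonincreasing(f, target, lo=0.0, hi=1.0, iters=60):
    """Bisection: return q with f(q) ~= target, assuming f is
    non-increasing on [lo, hi] and f(lo) >= target >= f(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > target:
            lo = mid          # f(mid) too large -> move right
        else:
            hi = mid
    return (lo + hi) / 2.0

# Placeholder for j_low: any non-increasing function of q illustrates
# the search; the real j_low comes from Theorem 6.
j_low = lambda q: (1.0 - q) ** 3

q_hat = invert_nonincreasing(j_low, 0.125)
print(round(q_hat, 6))  # (1 - 0.5)^3 = 0.125, so q_hat ~= 0.5
```

When $j_{\mathrm{low}}$ is flat at the target over a whole subinterval, as in Theorem 7, the same search applied from both ends brackets $[q^-_{\mathrm{low}}, q^+_{\mathrm{low}}]$.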
Proof. For convenience, we define a random variable $X_i$ for $0 \le i \le L-1$ and let $X_i = B_i$ for $i < L-1$ and $X_{L-1} = Z$. Also, for notational simplicity, we write $r$ for $r_1$. Figure S1 visualizes the joint probabilities of all the $X_i$'s, as given in Lemma 8. Using the figure as a guide, we proceed with the derivation.

Figure S2: Illustration of the joint probabilities of $X_i$, $B_j$, and $Z$, i.e. the terms of the sum in the derivation of $\mathrm{Cov}(N_{\mathrm{ocean}}, N_{\mathrm{mut}})$ in Theorem 11. In this example, $L = 20$ and $k = 5$.
Theorem 12. $E[N_{\mathrm{ocean}}] = Lr_1(1-q) + (1-q)(1-r_1)$ and, for $L \ge k+3$, $\ldots$

Proof. Let us define $Z'$ as an indicator for the event that the first k-span ($K_0$) is not mutated. Hence $E[Z'] = (1-r_1)^k = 1-q$. Observe that every ocean begins either at the start of the interval or at a right border. Therefore, the number of oceans is $N_{\mathrm{ocean}} = Z' + \sum_{i=0}^{L-2} B_i$, whose expectation is $(1-q) + (L-1)r_1(1-q) = Lr_1(1-q) + (1-q)(1-r_1)$. For the variance, the derivation is equivalent to replacing $Z$ with $Z'$ in the derivation of Theorem 9, and we therefore omit it here.
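The identity $N_{\mathrm{ocean}} = Z' + \sum_{i=0}^{L-2} B_i$ used above can be checked directly on simulated mutation patterns. The sketch below counts oceans (maximal runs of non-mutated k-spans) in two ways, directly and via the indicators:

```python
import random

def kspan_mutated(L, k, r, rng):
    """Boolean list: is k-span i mutated?  (simple mutation model)"""
    nt = [rng.random() < r for _ in range(L + k - 1)]
    return [any(nt[i:i + k]) for i in range(L)]

rng = random.Random(2)
L, k, r = 200, 5, 0.05
for _ in range(50):
    X = kspan_mutated(L, k, r, rng)
    # Direct count: each ocean is a maximal run of non-mutated k-spans,
    # identified by its first element.
    oceans = sum(1 for i in range(L)
                 if not X[i] and (i == 0 or X[i - 1]))
    # Via the indicators: Z' = [K_0 not mutated],
    # B_i = [K_i mutated and K_{i+1} not mutated].
    z_prime = int(not X[0])
    b_sum = sum(1 for i in range(L - 1) if X[i] and not X[i + 1])
    assert oceans == z_prime + b_sum
print("identity holds on all 50 simulated sequences")
```

The correspondence is exact because an ocean starting at position $i > 0$ is preceded by a mutated k-span, i.e. $B_{i-1} = 1$, while an ocean starting at position $0$ is counted by $Z'$.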
Proof. Throughout, we write $r$ instead of $r_1$ for simplicity. Recall that $C_{\mathrm{ber}} = (L - N_{\mathrm{mut}} + (k-1)N_{\mathrm{ocean}})/(L+k-1)$. Applying linearity of expectation together with eq. (1) and Theorem 12, $\ldots$ Applying the distributive properties of variance to the definition of $C_{\mathrm{ber}}$, we get $\ldots$ We only need to compute the covariance. We use the same definitions of variables as previously in Sections 3 and 6. In particular, $X_i$ is a random variable indicating that $K_i$ is mutated, $Z$ indicates that $K_0$ is not mutated, and $B_i$ indicates that $K_i$ is mutated and $K_{i+1}$ is not. Note that $Z = 1 - X_0$. Figure S2 visualizes the joint probabilities of all the $X_i$'s, $B_j$'s, and $Z$. We then compute $\ldots$ Observe that when $i < k$ and $Z = 1$, the left-most $k-i$ nucleotides of $K_i$ are not mutated. When $k \le i \le L-1$, $Z$ and $X_i$ are independent, so $E[Z X_i] = q(1-q)$. Thus, calculating the first sum in equation (4), we obtain $\ldots$ For the second sum in equation (4), observe that $B_j = 1$ implies that $X_j = 1$ and $X_{j+1} = 0$. Factoring out the $L$ terms in the numerator, we get
$$\mathrm{Var}(C_{\mathrm{ber}}) = \frac{1-q}{r^2(L+k-1)^2}\Big(L\big(2rq + r^2(-3q - 2k + 4kq) + r^3(k-1)(4kq - 3k - 1) + r^4(1-q)(k-1)^2(-2k-1)\big) - 2q + 2r(q + k - kq) + r^2(k-1)(k-q) + r^3(k-1)(3k - 4kq + 1) + r^4(k-1)^2(1-q)(k^2 + 3k + 1)\Big).$$

Table S4: The accuracy of confidence intervals predicted by Theorem 6 on a real E. coli sequence. For each sketch size and $r_1$ value, we show the number of trials for which the true $r_1$ falls within the predicted confidence interval.
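The expectation of $C_{\mathrm{ber}}$ can be sanity-checked by Monte Carlo: by linearity, $E[C_{\mathrm{ber}}] = (L - E[N_{\mathrm{mut}}] + (k-1)E[N_{\mathrm{ocean}}])/(L+k-1)$, with $E[N_{\mathrm{mut}}] = Lq$ and $E[N_{\mathrm{ocean}}]$ as in Theorem 12. A minimal sketch:

```python
import random

def cber_sample(L, k, r, rng):
    """One draw of C_ber = (L - N_mut + (k-1) N_ocean) / (L + k - 1)."""
    nt = [rng.random() < r for _ in range(L + k - 1)]
    X = [any(nt[i:i + k]) for i in range(L)]   # k-span i mutated?
    n_mut = sum(X)
    n_ocean = sum(1 for i in range(L) if not X[i] and (i == 0 or X[i - 1]))
    return (L - n_mut + (k - 1) * n_ocean) / (L + k - 1)

rng = random.Random(3)
L, k, r = 400, 7, 0.02
q = 1 - (1 - r) ** k
trials = 400
mean = sum(cber_sample(L, k, r, rng) for _ in range(trials)) / trials
# Linearity of expectation with E[N_mut] = Lq and Theorem 12:
expected = (L - L * q
            + (k - 1) * (L * r * (1 - q) + (1 - q) * (1 - r))) / (L + k - 1)
print(round(mean, 3), round(expected, 3))
```

The two printed values should agree up to Monte Carlo noise, which shrinks as the number of trials grows.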

A.2 Experimental results: extra tables and figures
Here, $\alpha = 0.05$, $k = 21$, and the sketch size $s$ and $r_1$ are varied as shown. The number of trials for each cell is 1,000, and $m = 100$ for Theorem 6. E. coli strain K-12 substr. MG1655 was used.

Figure S3: Estimates of sequence divergence as done by minimap2 and by our approach ($r_1$). This is similar to Figure 2 but with sequence lengths of 1kbp instead of 10kbp.