Fast haplotype matching in very large cohorts using the Li and Stephens model

The Li and Stephens model, which approximates the coalescent describing the pattern of variation in a population, underpins a range of key tools and results in genetics. Although highly efficient compared to the coalescent, standard implementations of this model still cannot deal with the very large reference cohorts that are starting to become available, and practical implementations use heuristics to achieve reasonable runtimes. Here I describe a new, exact algorithm ("fastLS") that implements the Li and Stephens model and achieves runtimes independent of the size of the reference cohort. Key to achieving this runtime is the use of the Burrows-Wheeler transform, allowing the algorithm to efficiently identify partial haplotype matches across a cohort. I show that the proposed data structure is very similar to, and generalizes, Durbin's positional Burrows-Wheeler transform.


Introduction
The genetic variation in a population of interbreeding individuals is highly structured, and the canonical model that describes this structure mathematically is Kingman's coalescent [1], later extended to also include recombination [2,3]. Although mathematically elegant, these models are challenging to use directly for statistical inference. In 2003, Li and Stephens introduced a model (LS) that is both a good approximation of Griffiths's ancestral recombination graph (ARG) and computationally tractable [4]. As a result, LS now underpins a large range of key tools and scientific findings [5,6,7,8]. Depending on whether the input sequence is haploid or diploid, LS in its straightforward implementation as a hidden Markov model (HMM) runs in linear or quadratic time in the number of reference haplotypes. While this is orders of magnitude more efficient than algorithms based directly on Kingman's coalescent or the ARG, the recent availability of affordable DNA sequencing technology has provided access to very large reference sets, on which even the LS model is intractable in its standard implementation, so that current implementations of LS use heuristics to cope with data sets encountered in practice [7].
A very different algorithm that is making an impact in genomics is the Burrows-Wheeler transform (BWT). Invented in 1994 [9], this transformation permutes an arbitrary text in such a way that the original text can be recovered, while at the same time improving the compressibility of the transformed text by increasing simple repetitions. In addition, the transformed text, even in compressed form, serves as an index that allows rapid searching in the original text. In genomics this idea has so far been used mainly for fast alignment of short reads against a large and relatively repetitive reference genome [10]. More recently, Durbin introduced a variant of the BWT, termed the Positional Burrows-Wheeler Transform (PBWT), that exploits the additional structure that exists in a set of haplotypes in a population sample [11]. These data, which are usually encoded as a series of 0s and 1s representing the absence or presence in a sample of particular genetic variants along a reference sequence, have a natural representation as a matrix, where rows represent samples and columns represent the particular positions in a reference. Local matches between samples are only relevant at matching positions, and exploiting this restriction leads to improvements over a standard application of the BWT. The resulting data structure again allows for fast haplotype searches against a database, and achieves very high compression ratios [11].
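The effect of the transform on a small text can be shown directly. The sketch below is my own illustration of the classical (non-positional) BWT, using the conventional '$' sentinel; it builds the transform by brute-force sorting of cyclic rotations, which is quadratic and only meant to make the definition concrete.

```python
def bwt(text):
    """Classical Burrows-Wheeler transform by sorting cyclic rotations."""
    text = text + "$"  # sentinel, lexicographically smallest symbol
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)  # last column of the matrix

print(bwt("banana"))  # -> "annb$aa": equal symbols cluster, aiding compression
```

The clustering of equal symbols in the output is what makes the transformed text more compressible than the original.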
There are two main results in this paper. First, I establish a formal connection between the standard and positional BWT, showing how the PBWT as introduced in [11] is a special case of the BWT. This connection also shows how the PBWT can be slightly generalized to cope with the multiallelic case. Besides providing an additional perspective on the positional BWT algorithms, which helps to better understand them, it also provides a mechanical way to "lift" existing algorithms operating on the BWT data structure to their positional equivalent, allowing the large literature on BWT algorithms to be applied to the current data structure. I show how this works by deriving the haplotype search algorithm from the equivalent BWT algorithm.
The second contribution consists of algorithms that implement Li and Stephens' model on top of the BWT. More precisely, I present algorithms that compute maximum-likelihood ("Viterbi") paths through the Li and Stephens hidden Markov model, providing a parsimonious description of a given sequence as an imperfect mosaic of reference haplotypes. The ability to efficiently identify matches in the database of reference haplotypes results in considerable improvements in runtime over the standard implementation, reducing the linear and quadratic asymptotic runtimes to empirical constant time, independent of the number of reference haplotypes. More precisely, for H samples of n loci each, the standard implementation runs in O(Hn) time for a haploid input sequence, and O(H^2 n) for a diploid input sequence, while the proposed algorithms run in empirical O(n) time in both cases. This allows the Li and Stephens model to be used on very large reference panels, without recourse to approximations.
Haplotype matching using the BWT

Let x_0, . . . , x_{H−1} be H haplotype sequences, each consisting of n symbols from the alphabet A representing the possible allelic states at a locus; for simplicity I will often use A = {0, 1} in this paper. A straightforward way of identifying haplotype matches would be to use the BWT on the concatenation x_0 x_1 · · · x_{H−1} of haplotype sequences. It turns out that a more efficient algorithm is obtained, in terms of time and memory use, by embedding this sequence of Hn characters into a sequence of 2Hn characters taken from a much larger alphabet. The increase in sequence length and alphabet size is offset by the additional structure in the BWT that results from the chosen embedding and additional characters. This in turn translates into better compression and a streamlined search algorithm.

Algorithm 1 Calculating BWT(X) (excerpt)
Input: sequences x_0, . . . , x_{H−1}, each of length n; alphabet A
Output: Block permutations j → a^i_j, i = 0, . . . , n − 1
1: i ← n; a^{n−1}_j ← j for j ∈ [0, H)
2: While i > 0:
    [. . .]
    Else:
    [. . .]
8:  [. . .]

I will write x[j] for the jth symbol in the sequence x, and x[j, k) for the subsequence starting at position j and ending at k − 1. I will also use [i, j) to denote the half-open interval {i, i + 1, . . . , j − 1}, and if M_{ij} is a matrix, M_k[i, j) is the subsequence M_{k,i}, M_{k,i+1}, . . . , M_{k,j−1} of the kth row of the matrix. Throughout this paper, all indices start at 0.
Let p_0, . . . , p_{n−1} be n additional symbols in the alphabet, ordered such that p_0 < · · · < p_{n−1} < 0 < 1. Introduce a new sequence X of length 2Hn by inserting a symbol p_j after each symbol x_i[j] and concatenating the resulting sequences into a single sequence of the form

    X = x_0[0] p_0 x_0[1] p_1 · · · x_0[n−1] p_{n−1} x_1[0] p_0 · · · x_{H−1}[n−1] p_{n−1}.

(To impose a particular initial ordering I will later on replace the last symbol p_{n−1} by H symbols p^0_{n−1} < · · · < p^{H−1}_{n−1}, but to avoid cluttering the notation I ignore this detail for now.) Consider all cyclic shifts X_k, k ∈ [0, 2Hn), defined by X_k[i] = X[(i + k) mod 2Hn]. Let M be the matrix obtained by writing X_k on the kth row of a square matrix, and sorting the resulting rows lexicographically. Let π be the permutation that sorts the rows, so that X_{π(0)} < X_{π(1)} < · · · < X_{π(2Hn−1)}, and M_{ij} = X_{π(i)}[j]. The Burrows-Wheeler transform of X is the last column of this matrix: BWT(X)[i] = X_{π(i)}[2Hn − 1]. Note that this is almost the traditional BWT of the sequence X; the difference is that there is no special 'end' character. This character is used to identify the start of the sequence; here, the special structure of X is sufficient to navigate BWT(X).

Now consider how the matrix M may be constructed. The position symbols p_i determine the coarse structure of M, which is independent of the data x_i apart from the haplotype frequencies f^0_i and f^1_i (see Figure 1). The fine-scale structure of M within each "block" of H rows is determined by the data. More precisely, rows in the block starting at index iH are those cyclic shifts of X that start with symbol p_i and end with x_k[i] for some k ∈ [0, H), and these rows are ordered lexicographically within the block. Let j → a^i_j denote the permutation of [0, H) that describes this order within block i, so that row iH + j ends with symbol x_{a^i_j}[i]. Determining M therefore boils down to determining the n permutations a^i_j for i ∈ [0, n), since these determine the top half of M, and the remaining rows follow trivially from that (again, see Figure 1).
The permutations a^i_j are determined recursively, working from i = n − 1 backwards. Because we imposed the special ordering p^0_{n−1} < · · · < p^{H−1}_{n−1} on the final position symbols, the permutation for block n − 1 is given by the identity permutation j → a^{n−1}_j = j. Now suppose the permutation a^i_j for block i has been determined. The sequences in block i − 1 are formed from those in block i by moving two characters from the end to the front. The first character in any sequence of this new block is p_{i−1}, which does not influence the ordering within the block. The second character is an allele marker x_{a^i_j}[i]. To sort the sequences in block i − 1 in lexicographic order, it is therefore sufficient to list those sequences that start with a 0 symbol first, followed by those starting with a 1 symbol (followed by other symbols if the locus is multiallelic), and to otherwise leave the original order undisturbed. Doing this results in Algorithm 1.
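This recursion is compact enough to sketch in code. The function below is my own Python rendering (names are mine, not the paper's) of the biallelic case: starting from the identity permutation a^{n−1}, each a^{i−1} is obtained from a^i by a stable partition on the allele at locus i, working back-to-front as described above.

```python
def block_permutations(X):
    """Compute the block permutations a^i, i = n-1, ..., -1, for biallelic
    haplotypes X (a list of H equal-length 0/1 sequences). Sketch only."""
    H, n = len(X), len(X[0])
    a = list(range(H))            # a^{n-1} is the identity permutation
    perms = {n - 1: a}
    for i in range(n - 1, -1, -1):
        # stable partition of a^i on the allele x_{a^i_j}[i] gives a^{i-1}
        zeros = [j for j in a if X[j][i] == 0]
        ones = [j for j in a if X[j][i] == 1]
        a = zeros + ones
        perms[i - 1] = a
    return perms
```

Note that the output includes a^{−1}, matching the observation below that the very first characters x_j[0] also contribute a permutation.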
To show that the proposed construction is equivalent to the positional Burrows-Wheeler transform, Algorithm 1 is given both for general alphabets A and specialized for the case A = {0, 1}, since in that case the inner loop is precisely Algorithm 1 in [11] (except that the proposed algorithm runs back-to-front, as is usual for BWT algorithms). As in the PBWT algorithm, the permutations a^i_j play the role of the suffix array in the ordinary BWT algorithm. Note that the output includes a permutation a^{−1}_j, which encodes how the very first characters x_j[0] influence the permutation of the cyclic shifts X_k; this permutation is used in Algorithm 5 below.
Following [11] I now define the PBWT of x_0, . . . , x_{H−1} as the first half of BWT(X), which is available implicitly as BWT(X)[Hi + j] = x_{a^i_j}[i]. Figure 1 shows that the second half of BWT(X) is determined by the allele frequencies f^c_i, i ∈ [0, n), which can be computed easily from the relevant block in the first half of BWT(X), so that the PBWT of x_0, . . . , x_{H−1} is in fact equivalent to BWT(X).

Substring searching
Algorithm 1 to calculate BW T (X) in linear time makes use of the special structure of X, and is not a specialization of an existing, general algorithm to calculate the BWT. By contrast, Algorithm 2, which performs a substring search, can be derived directly from its analogous algorithm for a general BWT.
To describe the algorithm, let M be the sorted matrix of cyclic shifts of an arbitrary sequence X of length n, so that BWT(X)[i] = M_i[n − 1], and let R_a(i) (the "a-rank" for row i) be the number of times that a appears in BWT(X)[0, i). This function can be calculated from BWT(X) in O(n) time, and lookups can be reduced to O(1) time using a small additional data structure that stores R_a(i) for i values at regular intervals. Finally, let C(a) (the cumulative symbol frequency) be the number of symbols in X that are less than a. This notation makes it possible to write down Algorithm 2, for substring searching.

Algorithm 2 General subsequence search
Input: Sequence w[0, j), BWT(X) of sequence X[0, n)
Output: Indices s, e such that M_k[0, j) = w for k ∈ [s, e)
1: s ← 0, e ← n, i ← j
2: While s < e and i > 0:
3:     i ← i − 1
4:     s ← C(w[i]) + R_{w[i]}(s)
5:     e ← C(w[i]) + R_{w[i]}(e)
6: Return s, e
To understand the algorithm, consider all rows of M that end with a symbol a. If these rows are cyclically shifted rightward, so that the last symbol becomes the first and all others are moved one position to the right, all rows will now start with a, and the relative order in which they appear in M (which they must, as M contains all cyclic shifts of X) is the same as before the shift, since they were ordered lexicographically to start with. Suppose that M_k is a row that ends with a, and that after right-shifting it ends up as row M_{k′}; then the above observation means that the rank R_a(k) of the symbol a in M_k in the last column of M is the same as the rank in the first column of M of the symbol a in M_{k′}. Because M is sorted lexicographically, the rows that start with a form a contiguous block in M, so that the first-column rank of the symbol a in row M_{k′} is

    k′ = C(a) + R_a(k) =: LF(k, a).   (2)

The function k → LF(k, a), mapping row k to the row k′ corresponding to its right-shifted counterpart, is called the last-to-first mapping because it maps the last (rightmost) symbol of M_k to the corresponding symbol in the first (leftmost) position of M_{k′}. It is repeatedly used to identify the interval of rows corresponding to sequences that match one additional character of w.
Note that the mapping is well-defined whether or not M_k[n − 1] = a. This makes it possible to think of k as representing a possible location between two entries (k and k − 1) in M where a sequence (or sequence prefix) x not necessarily represented in M would be inserted; this is the view taken in the search algorithm. Alternatively, when k is thought of as a particular row in M, that row's initial character a can be obtained from the C(·) function, and since the mapping (2) is invertible when restricted to the set of rows k ending in a, this makes the mapping k → LF(k, M_k[n − 1]) invertible for all k. The existence of this inverse mapping also follows directly from the observation that it corresponds to rotating the sequence one position leftward; it could be called the first-to-last mapping, k → FL(k), and it is used in Algorithm 5 below.
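As a concrete illustration of Algorithm 2 and the LF mapping, here is a minimal backward search in Python (my own sketch): the rank function is computed naively in O(n) rather than with the sampled-rank structure mentioned above, and C is passed in as a precomputed table.

```python
def backward_search(bwt_str, C, w):
    """Find the interval [s, e) of sorted rotations prefixed by w.
    C[a] = number of symbols in the text smaller than a."""
    def rank(a, i):               # R_a(i): occurrences of a in bwt_str[0:i]
        return bwt_str[:i].count(a)
    s, e = 0, len(bwt_str)
    for a in reversed(w):         # match the last character of w first
        s = C[a] + rank(a, s)     # LF-map the lower bound
        e = C[a] + rank(a, e)     # LF-map the upper bound
        if s >= e:
            break                 # w does not occur in the text
    return s, e

# BWT of "banana$" is "annb$aa"; "ana" occurs twice in "banana":
s, e = backward_search("annb$aa", {"$": 0, "a": 1, "b": 4, "n": 5}, "ana")
# e - s == 2
```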
To derive the corresponding algorithm for matching a sequence in the PBWT data structure, it is enough to track the bounding variables for two steps through the standard BWT algorithm acting on the "lifted" sequence X, matching a haplotype character and a position character. The first step identifies the new range depending on the haplotype character to be matched, and points these variables to the second half of the matrix. The next step moves the bounding variables back into the first half by moving a position character in front. Because of the regular form of BWT(X) (see Figure 1), these two steps can be followed algebraically and combined into a single update step. The derivation, which is straightforward but requires additional notation, is presented in the Appendix. The resulting combined update step is given by a modified last-to-first mapping function, which now additionally depends on the current position i:

    LF(k, 0, i) = r^0_i(k),   LF(k, 1, i) = f^0_i + r^1_i(k),   (3)

or for an arbitrary alphabet,

    LF(k, a, i) = Σ_{b<a} f^b_i + r^a_i(k).   (4)

Here r^a_i(k) is the positional analogue of R_a(i), and counts how often a appears in the first k rows of the ith block of BWT(X), and f^a_i is the (haplotype) frequency of a at position i. This leads to Algorithm 3.
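The combined update can be exercised on a toy example. The sketch below (my own naming, with naive O(H) counting in place of stored ranks) scans a query haplotype back-to-front, applying the positional last-to-first mapping of (3) at each locus and maintaining the block permutation exactly as in Algorithm 1.

```python
def pbwt_search(X, w):
    """Match haplotype w (0/1 list) against haplotypes X; returns (s, e, a)
    such that a[s:e] are the indices of sequences equal to w. Sketch only."""
    H, n = len(X), len(w)
    a = list(range(H))                      # a^{n-1}: identity permutation
    s, e = 0, H
    for i in range(n - 1, -1, -1):
        y = [X[a[j]][i] for j in range(H)]  # PBWT column: BWT(X)[Hi + j]
        f0 = y.count(0)                     # haplotype frequency f^0_i

        def LF(k, c):                       # positional LF mapping, eq. (3)
            r = y[:k].count(c)              # r^c_i(k)
            return r if c == 0 else f0 + r

        s, e = LF(s, w[i]), LF(e, w[i])
        # update a^i -> a^{i-1} by stable partition on the allele at locus i
        a = [j for j in a if X[j][i] == 0] + [j for j in a if X[j][i] == 1]
        if s >= e:
            break                           # no full-length match
    return s, e, a
```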

Haploid Li and Stephens
The Li and Stephens (LS) model approximates the coalescent model describing the relationship between DNA sequences in a population, by generating a new sequence as a mosaic of imperfect copies of existing sequences [4]. The popularity of the model stems from the fact that it is both a good approximation to the full coalescent model with recombination, as well as fast to compute in its natural implementation as a hidden Markov model, running in O(Hn) time for H sequences of length n. However, for very large population samples this is still too slow in practice.
Here I describe an algorithm to compute the maximum likelihood path through the LS hidden Markov model (HMM) in empirical O(n) time. The approach is not to consider single sequences to copy from, but groups of sequences that share a common subsequence. Like the Viterbi algorithm for HMMs, the proposed algorithm traverses the sequence to be explained, but rather than using a dynamic programming approach, it uses a branch-and-bound approach, considering (groups of) potential path prefixes to a maximum likelihood path. Where at each iteration the Viterbi algorithm must consider all possible sequences that a potential path prefix could end with, the proposed algorithm in principle considers all extensions of the current potential path prefixes (the "branch" part), but ignores prefixes that cannot be part of an optimal path (the "bound" part). For instance, if a prefix can be extended with a matching nucleotide, a recombination does not have to be considered, since the recombination can be postponed at no cost. Below I will show this more formally. This formal approach is perhaps not necessary (or even helpful) for the haploid case, but becomes useful when I introduce the diploid Li and Stephens algorithm.

Figure 1 (caption, partially recovered): The column indices are determined by f^a_i, the allele frequency of symbol a at locus i, and F^a_i := Σ_{j=0}^{i} f^a_j, the cumulative frequency of symbol a across loci 0, . . . , i. Note that the ordering of rows (n−1)H to Hn−1 is determined by the special position symbols p^0_{n−1} < · · · < p^{H−1}_{n−1}, but to avoid cluttering the notation these are all written as p_{n−1}.

Algorithm 4 Haploid Burrows-Wheeler Li and Stephens (excerpt)
Output: Minimum path score under the Li and Stephens model
    [. . .]
    For (s, e, score) in st:
        [. . .]
10:     If score = gm: extended ← True
11:     If score + µ < gm + ρ:
12:         If s′ < e′: st′.append((s′, e′, score + µ))
        [. . .]
14:     If s < e and extended = False:   ▷ Never true on 1st iteration
        [. . .]
16:         st′.append((s′, e′, gm + ρ))
17:         traceback.append((i, gm_idx, gm + ρ))
18:     gm ← gm′; st ← st′
19:     gm_idx ← any of {s | (s, e, score) ∈ st and score = gm}
20: Return gm, gm_idx, traceback

First some definitions. A placed character is a character c at a sequence position i; it is equivalent to a pair cp_i, where p_i is the position symbol introduced before. Two placed characters are contiguous if they occupy neighbouring positions; subsequences of placed characters are contiguous if every pair of neighbouring characters is; and two or more subsequences are contiguous if their concatenation is. A path π of m parts through a set of sequences Ω = {x_0, . . . , x_{H−1}} is a contiguous sequence of m subsequences s_0, . . . , s_{m−1} such that each s_i is a subsequence of some x_j. I will write a path as

    π = (c_0 c_1 · · · c_{k_0−1} R c_{k_0} · · · c_{k_1−1} R c_{k_1} · · · · · · R c_{k_{m−2}} · · · c_{l−1}),

where c_i is a character placed at position i, k_0, k_1, . . . , k_{m−2} are the recombination breakpoints identified by the symbol R (which is not part of the alphabet), and l is the length of the path. The (sequence) group associated with π is the set G(π) of all sequences x ∈ Ω for which the subsequence x[k_{m−2}, l) agrees with the suffix c_{k_{m−2}} · · · c_{l−1} that follows the last recombination in π.
The extension πc_l (of length l + 1) is the path (· · · R c_{k_{m−2}} · · · c_{l−1} c_l), if it exists; since by definition all subsequences that make up a path are subsequences of some x_j, existence of an extension implies that its group is nonempty. The extension πR (of length l) is defined as (· · · R c_{k_{m−2}} · · · c_{l−1} R), and always exists; its group is Ω. Finally, the path prefix π[0, t) is the path (c_0 · · · c_{t−1}), including any R symbols for recombinations between positions 0 and t − 1; a path prefix never ends with an R symbol.

Algorithm 5 Haploid traceback (excerpt)
    [. . .]
    While lo < hi:   ▷ invariant: LF(j, a, i) ≤ k ∀ j < lo and LF(j, a, i) > k ∀ j ≥ hi
4:      mid ← (lo + hi)/2
    [. . .]
15:     gm_idx ← t_idx; gm ← gm − ρ; path.append((i, a^{i−1}_{gm_idx}))
16: Return path
For a given sequence x and a path π, the Li and Stephens model assigns a joint likelihood to the event that π occurred and gave rise to sequence x. If π has m parts and has k mismatches to x, this likelihood is

    P(x, π) = p_ρ^{m−1} (1 − np_ρ)^{n−m} p_µ^k (1 − 3p_µ)^{n−k},

where p_ρ is the probability of recombining into a particular other sequence, and p_µ is the probability of a mutation to one of the three other nucleotides. The negative log likelihood takes a particularly simple form,

    −log P(x, π) = C + mρ + kµ,

where C is a constant, ρ = −log(p_ρ/(1 − np_ρ)) and µ = −log(p_µ/(1 − 3p_µ)). This motivates defining the path score as s_x(π) = mρ + kµ, where m and k are defined as above. I drop the subscript x from s_x(π) when this is possible without creating confusion.
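For concreteness, ρ and µ can be computed numerically from the model parameters; the parameter values below are arbitrary illustrations, not values taken from the text.

```python
import math

def path_score(m, k, p_rho, p_mu, n):
    """Path score s = m*rho + k*mu for m parts and k mismatches."""
    rho = -math.log(p_rho / (1 - n * p_rho))  # cost per recombination
    mu = -math.log(p_mu / (1 - 3 * p_mu))     # cost per mismatch
    return m * rho + k * mu

# With p_rho = 1e-4, p_mu = 1e-6 and n = 1000 loci, an extra recombination
# is cheaper than an extra mismatch:
s1 = path_score(3, 1, 1e-4, 1e-6, 1000)  # 3 parts, 1 mismatch
s2 = path_score(2, 2, 1e-4, 1e-6, 1000)  # 2 parts, 2 mismatches
# s1 < s2
```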
Suppose we want to calculate a path π that minimizes s(π). This can be done by iteratively constructing path prefixes π′, so that at each step one of them is a prefix of a full path π that minimizes s(π). Note that the minimum score achievable by a path π that has π′ as its prefix depends on the prefix score s(π′) and the prefix group G(π′), but not on the rest of the prefix. This is because G(π′) is the set of sequences the Li and Stephens model could be copying from at the end of π′, and the Markov property of the model implies that the minimum score only depends on the sequence being copied from (and the prefix score). This justifies defining the state of a path (prefix) π′ to be the pair (G(π′), s(π′)).
The key observation for the algorithm is that some states (G, s) can be ignored, because any of their extensions give rise to paths and scores that are also achievable via other states. To make this precise I need one more definition. A set S of path prefixes, all of length l, is a full prefix set for x[0, l) if for any sequence x′ whose prefix x′[0, l) agrees with x[0, l), there exists a path π that achieves the minimum score (i.e. s_{x′}(π) = min_{π′} s_{x′}(π′)) and whose prefix π[0, l) is in S. If we can somehow find a way to iteratively construct full prefix sets of increasing length, the problem of finding a minimum-score path is solved, because the required path will be an element of the full prefix set for the full-length sequence x. The following theorem shows how to do this:

Theorem 1. Suppose S is a full prefix set for x[0, l), S′ a set of prefixes of length l + 1, and let s_min = min_{π∈S} s(π) and s′_min = min_{π∈S′} s(π). Then S′ is a full prefix set for x[0, l + 1) if the following two conditions are true:

a. For all π ∈ S and all a ∈ {0, 1} so that πa is an extension and s(πa) < s_min + ρ, we have πa ∈ S′.

b. If there is no π ∈ S so that s(π) = s_min and πx[l] is an extension, then S′ contains a path of the form πRx[l] with s(π) = s_min.
In other words, certain extensions are not required to be in S′: extensions πa whose score exceeds the minimum plus ρ can be left out (since a recombination from the minimum-scoring prefix would give a path that is at least as good), and recombinations can be ignored altogether as long as any current lowest-scoring path has a matching extension (since otherwise postponing the recombination would again be at least as good); if not, only a single recombination from a lowest-scoring path needs to be considered. Algorithm 4 implements these ideas. It does not actually construct prefix sets of paths, but sets of states of paths in prefix sets. This is sufficient since the state determines how paths can be extended. By using the PBWT, these states can be represented efficiently, using just the score and a pair of indices into the PBWT that correspond to a set of subsequence matches to sequences in Ω, similar to how the variables s and e in Algorithm 3 represent the interval [s, e) corresponding to a set of subsequence matches. Another difference from the description above is that the algorithm scans the sequence back-to-front, extending partial matches leftward, so that the invariant refers to the full suffix set, rather than the full prefix set.
The algorithm computes gm′ = s′_min, and keeps a running minimum score gm that bounds s_min, ignoring states whose new score is not less than gm + ρ. At the end of an iteration, states whose score is not lower than the now-updated gm plus ρ are not immediately removed, but are instead ignored in the next iteration. The algorithm implicitly considers both score bounds implied by gm and gm′, but in each situation uses only the tighter bound of the two to decide which states to ignore. It is possible for different paths to result in overlapping or identical states, leading to duplicate or otherwise redundant entries in the st array. Although redundant entries do not impact the correctness of the algorithm, they can dramatically reduce efficiency. A practical implementation therefore includes a step that occasionally removes redundant states.
The algorithm can be generalized a little by allowing the mutation score µ ≥ 0 to depend on the position. The path score is then defined as s(π) = mρ + Σ_{i : x[i] ≠ π[i]} µ_i. Theorem 1 continues to hold, and so does Algorithm 4, with the obvious changes. The current approach does not lend itself easily to generalization to a position-dependent recombination probability, as the proof of Theorem 1 relies on delaying a recombination without changing the score, which is only possible if ρ is constant along the sequence.
Note that the algorithm can be simplified when µ_i ≥ 2ρ, because a mismatch can then always be circumvented by two recombinations (before and after the offending locus), so that only exact matches need to be considered. In human genetics polymorphisms are sparse, and recombinations can only be localized to within hundreds or thousands of positions. Even when a maximum likelihood path is sought it is natural to marginalize over these positions, and this makes the probability of a recombination between two polymorphic sites at least an order of magnitude higher than the probability of a mutation, so that µ ≫ ρ. However, in the presence of phasing errors the probability of a mismatch can be much higher than that of a mutation, so that the regime µ < 2ρ is of practical importance.
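On small inputs the results of these algorithms can be checked against the textbook dynamic program. The following reference implementation is my own sketch of the standard O(Hn) Viterbi recursion for the haploid LS score s = mρ + kµ (constant ρ and µ, with a one-part path costing ρ); it returns only the optimal score.

```python
def ls_viterbi(X, w, rho, mu):
    """Minimum haploid LS path score for query w against haplotypes X."""
    H, n = len(X), len(w)
    # one part so far: each sequence costs rho plus a possible mismatch
    v = [rho + (mu if X[j][0] != w[0] else 0) for j in range(H)]
    for i in range(1, n):
        best = min(v)
        # either keep copying sequence j, or recombine from the best state
        v = [min(v[j], best + rho) + (mu if X[j][i] != w[i] else 0)
             for j in range(H)]
    return min(v)
```

On a query identical to one reference haplotype this returns ρ (one part, no mismatches), and a query requiring one switch between two references returns 2ρ, as expected.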
Algorithm 4 only computes the optimal score, and to obtain an optimal-scoring path π itself a backtracking step is needed (Algorithm 5). Here it is useful that Algorithm 4 works in the backward direction, so that the result of the backtracking is oriented in the natural direction. To track an optimal path along a sequence, the PBWT index corresponding to that sequence can be tracked using the "first-to-last" mapping, inverting the steps in lines 6 and 12 in Algorithm 4, and the minimum score of the remaining suffix is updated whenever a difference between this sequence and x is found. Recombinations are followed greedily, as it is always correct to follow a feasible recombination, and it is never clear whether a particular recombination is the last feasible one for a particular sequence. Algorithm 4 collects information about recombinations in the traceback list, and when a recombination and score is identified that forms a feasible suffix to the path so far, it is followed.
The naive implementation of Algorithm 5 is somewhat slower than the haploid Li and Stephens algorithm itself, due to the FL function, which takes O(log H) time in the implementation shown. In practice the PBWT will be stored in compressed form using run-length encoding, which allows a faster implementation of FL.

Diploid Li and Stephens
Where the haploid Li and Stephens algorithm computes a single haplotype path maximizing the probability of a given haploid sequence, the diploid Li and Stephens algorithm aims to find a pair of haplotype paths that maximizes the probability of a sequence of diploid genotypes under the same model. The approach used to derive the haploid algorithm also works in this case, but the details are more involved.
Let x be a sequence of genotypes, encoded as values 0, 1 or 2 at each position, representing homozygous ancestral, heterozygous, and homozygous derived genotypes. The aim is to compute a pair of paths α, β that minimizes a score. As before this score contains terms for recombinations and mismatches, but the mismatch term now considers genotypes rather than haplotypes. More precisely, the score associated with the pair {α, β} is defined as s(α, β) = ρm(α) + ρm(β) + µk(α, β), where m(α) represents the number of parts of path α, as before, and k(α, β) counts the number of mismatches of the paths α and β to the genotype sequence x.
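The diploid score can be illustrated directly. In the sketch below (names are mine) the part counts m(α) and m(β) are supplied by the caller, and k counts the loci at which the summed alleles of the two haplotype paths disagree with the genotype.

```python
def diploid_score(alpha, beta, x, m_alpha, m_beta, rho, mu):
    """s(alpha, beta) = rho*m(alpha) + rho*m(beta) + mu*k(alpha, beta)."""
    k = sum(1 for a, b, g in zip(alpha, beta, x) if a + b != g)
    return rho * m_alpha + rho * m_beta + mu * k

# two one-part paths explaining genotypes [0, 1, 2] with no mismatches:
# diploid_score([0, 1, 1], [0, 0, 1], [0, 1, 2], 1, 1, 1.0, 3.0) == 2.0
```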
The approach of the algorithm is similar to the haploid case, again sequentially building full prefix sets for ever longer sequence prefixes until a minimum path pair is found. To describe the approach, the definitions of sequence group, state and full prefix set need to be modified.

Algorithm 6 Diploid Burrows-Wheeler Li and Stephens (excerpt)
    [. . .]
    If x[i] = 1 and x[i] = a_r + a_x and not double_recomb and score = gm and (s_r, e_r, a_r) ∉ extended:
        [. . .]
34:     st′.append((s_r′, e_r′, s_r′, e_r′, score + 2ρ))
35:     traceback.append((i, s_x, −1, s_r, score + 2ρ))
36:     double_recomb ← True
37: gm ← gm′; lm ← lm′; st ← st′
38: gm_idx1, gm_idx2 ← any of {s_1, s_2 | (s_1, e_1, s_2, e_2, score) ∈ st and score = gm}
39: Return gm, gm_idx1, gm_idx2, traceback

[. . .] have extensions required in conditions b and c; whether the extension is appropriate is computed by the function consider_recomb. Finally, the variable double_recomb is used to ensure that at most one double recombination is considered at every iteration.

The traceback algorithm for diploid Li and Stephens is similar to the haploid algorithm. Again, the traceback list contains records describing the recombinations that have been considered. These records now additionally contain a pair s_x, e_x that represents the range of PBWT indices corresponding to the sequence that does not undergo a recombination. As with the haploid algorithm, the traceback algorithm follows a recombination only if the path scores agree, but now also ensures that the index of the non-recombining path is contained in the range [s_x, e_x). Double recombinations are encoded by setting e_x = −1, and for such recombinations only the scores need to agree. A pseudocode implementation is given as Algorithm 7.

Algorithm 7 Diploid traceback (excerpt)
    [. . .]
    While lo < hi:   ▷ invariant: LF(j, a, i) ≤ k ∀ j < lo and LF(j, a, i) > k ∀ j ≥ hi
4:      mid ← (lo + hi)/2
    [. . .]

Performance
For testing, the fastLS algorithms were implemented in C++, with all tables stored in uncompressed form in memory. To validate the implementations and to compare runtimes, standard Viterbi algorithms for the haploid and diploid LS model were also implemented. The fastLS algorithms included the traceback step, but for the Viterbi algorithms traceback was left out because of memory constraints. Two sets of simulations were performed. For the first, 30 Mb of sequence in populations of size 100 to 10,000 were simulated by scrm [12] using the 'standard simulation' model of [13]. For each population I simulated an additional 50 samples to serve as input sequences. This resulted in a number of segregating sites ranging from 129,945 for the 150-sample case to 436,361 for 10,050 samples. For the second set, I simulated a single population of 100,000 samples under the same model (resulting in 621,156 segregating sites) and sub-sampled reference populations of 100 to 10,000 samples from these. The runtimes of the Viterbi algorithms show the expected linear and quadratic dependence on H. The fastLS algorithms show a sub-linear dependence on H. In the case of the sub-sampled populations, which have a fixed number of loci (not all of which segregate in the sample), the dependence on H is weakest, and in fact the diploid algorithm becomes faster for larger populations, probably because longer haplotype matches can be found in larger populations, resulting in more efficient pruning of the prefix sets.

Acknowledgements
Thanks to Sorina Maciuca and Zam Iqbal who introduced me to the idea of position symbols which led directly to this work; and to Gil McVean and Jerome Kelleher for helpful comments on the manuscript.

Derivation of Algorithm 3
To derive the PBWT algorithm for sequence matching I first need some expressions that describe the structure of M. Let F^a_i = Σ_{j=0}^{i−1} f^a_j be the cumulative haplotype frequency across positions up to i − 1, and set F^a = F^a_n. From Figure 1 it can be seen that C(p_i) = Hi, C(0) = Hn and C(1) = Hn + F^0, and that

    R_{p_{i−1}}(C(0) + F^0_i + r) = r for 0 ≤ r ≤ f^0_i,   (5)
    R_{p_{i−1}}(C(1) + F^1_i + r) = f^0_i + r for 0 ≤ r ≤ f^1_i.   (6)

Then R_a satisfies

    R_a(Hi + k) = F^a_i + r^a_i(k) for k ∈ [0, H] and a ∈ {0, 1},   (7)

which defines r^a_i(k); equivalently, r^a_i(k) counts how often a appears in BWT(X)[Hi, Hi + k). To derive the PBWT sequence matching algorithm, it suffices to track one of the bounding variables, say s, for two steps through Algorithm 2. Assume that the subsequence matched so far starts at position i, so that s = Hi + k with k ∈ [0, H], and that the next character to be matched is a ∈ {0, 1}. The first step replaces s with s′ = C(a) + R_a(s) = C(a) + F^a_i + r^a_i(k), where the second equality follows from (7). The function r^a_i(k) returns the number of occurrences of a before the kth row within the block starting at row iH in M. This block includes all sequences that start with p_i, so that 0 ≤ r^a_i(k) ≤ f^a_i for k ∈ [0, H], and the conditions for (5) and (6) apply, allowing the result of the second step to be computed. The sequence now ends with the symbol p_{i−1}, so that if a = 0, s′ is replaced by s″ = C(p_{i−1}) + R_{p_{i−1}}(s′) = H(i − 1) + r^0_i(k), and if a = 1, by s″ = H(i − 1) + f^0_i + r^1_i(k). Since R_0(r) + R_1(r) = r for r ≤ Hn, it follows that r^0_i(k) + r^1_i(k) = k for 0 ≤ k ≤ H, so that the last-to-first function mapping k to the new value k′ satisfying s″ = H(i − 1) + k′ is LF(k, a, i) as defined in (3).

Proof of Theorem 1
The key observation is that if S′ contains a path π′ with state (G′, s′), then S′ does not need to contain any path π (of the same length) with state (G, s) if G′ ⊇ G and s′ ≤ s. In this case I say that π′ undercuts π, or symbolically π′ ≤ π. In addition, if π′R ≤ π I also say that π′ ≤ π, again because all scores that are achievable with π as prefix are also achievable with prefix π′.
Since S is a full prefix set for x[0, l), a trivial full prefix set for x[0, l + 1) is formed by the union of simple extensions S_x = {πa | π ∈ S, a ∈ {0, 1}} and recombination extensions S_r = {πRa | π ∈ S, a ∈ {0, 1}}. To prove that S′ ⊆ S_r ∪ S_x is also a full prefix set, we need to show that any path π ∈ (S_x ∪ S_r) \ S′ is undercut by some path π′ ∈ S′. In the proof below I will identify, for any such π, a π′ that strictly undercuts π (written as π′ < π), that is, one for which either the score is strictly lower or the group is strictly larger, but which is not necessarily an element of S′. If an element is found that is not in S′, the process can be repeated, finding a π″ < π′ < π, and so forth. This process has to stop eventually, with an element in S′, because s cannot decrease indefinitely and G cannot increase indefinitely.