A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases

Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this paper, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290x faster than BWA-MEM with a lower memory footprint and a recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥ 5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and > 60,000 genomes.


Introduction
Mapping reads generated by high-throughput DNA sequencers to reference genomes is a fundamental and widely studied problem [16,24]. The problem is particularly well studied for short read sequences, for which effective mapping algorithms and widely used software such as BWA [15] and Bowtie [12] have been developed. The increasing popularity of single-molecule sequencers from Pacific Biosciences and Oxford Nanopore, and their continually improving read lengths (10 kbp and up), is generating renewed interest in long read mapping algorithms. However, the benefit of long read lengths is currently accompanied by much higher error rates (up to 15-20%). Despite their high error rates, long reads have proved advantageous in many applications including de novo genome assembly [7,10] and real time pathogen identification [2,22].
Sequence data from nanopore devices is available just minutes after introducing the sample. This can enable real-time genomic analysis when coupled with fast computational methods that can map the data stream against large reference databases. However, mapping raw sequences continues to be a bottleneck for many applications. The problem is only going to worsen as Oxford Nanopore's PromethION is projected to generate multiple tera-bases of sequence per day. In parallel, reference databases are continually growing in size, with the non-redundant NCBI RefSeq database close to exceeding a tera-base in size. The high error rate of raw single-molecule sequences further adds to the computational complexity.
Read mapping problems can be solved exactly by designing appropriate variants of the Smith-Waterman alignment algorithm [27]; however, this is computationally prohibitive when mapping reads from a high-throughput sequencer to large reference genomes. Seed-and-extend mapping heuristics address this limitation for both long and short reads by limiting the search to locations where exact short word or maximal common substring matches occur before executing an alignment algorithm at these locations [1,8,5]. Accurate alignment-based long read mappers include BLASR [5] and BWA-MEM [13]. However, repetitive seeds that do not translate to correct mappings, combined with high sequencing error rates, limit their scalability. Additionally, alignment-based mapping algorithms preserve the complete reference sequence in the index, and hence cannot scale to tera-base scale reference databases. Many genomics applications do not require detailed base-to-base alignment information, and instead use only the alignment boundary and identity summaries. Such applications include depth-of-coverage analysis, metagenomic read assignment, structural variant detection, and selective sequencing [18]. Efficient algorithms for these problems, combined with nanopore sequencing, could enable the real-time genomic analysis of patients, pathogens, cancers, and microbiomes.
One class of algorithms for fast, approximate mapping relies on ideas originally developed for finding similarities between web documents. Broder [4] proved that an unbiased estimate of the Jaccard similarity coefficient between two sets can be computed efficiently using a subset of hashed elements called a sketch. Schleimer et al. [25] proposed the winnowing algorithm, which picks a minimum hashed item (also known as a minimizer [23]) from each consecutive window of text as a means to more quickly estimate local similarity between web documents. These ideas have been used to develop new mapping and assembly algorithms for long reads such as the MinHash Alignment Process [3], minimap [14], and BALAUR [21]. To date, the effectiveness of these approaches has only been demonstrated empirically.
In this paper, we propose a fast approximate algorithm for mapping long reads that scales to large reference databases with sufficient theoretical guarantees and practical validation on the quality of results reported. We propose a problem formulation that mathematically characterizes desired mapping targets by linking the Jaccard coefficient between the k-mer spectra of the read and its mapping region to a sequence error rate assuming a Poisson error model. We then provide an efficient algorithm to estimate the Jaccard coefficient through a combination of MinHash and winnowing techniques that characterizes and guarantees the types of mapping regions we find. On the quality side, we provide probabilistic bounds on sensitivity. We present techniques for choosing algorithmic parameters as a function of error rate and sequence lengths that guarantee the desired statistical significance. The theory is validated using PacBio and MinION datasets, and we demonstrate the scalability of our approach by mapping PacBio metagenomic reads to the entire RefSeq database. The speed and space efficiency of our algorithm enables real-time mapping, and compared to minimap, our method maintains high sensitivity with better precision for large, repetitive genomes. The implementation is available at github.com/MarBL/MashMap.

Preliminaries
Read Error Model: We assume errors occur independently at the read positions, and use a Poisson error model as in previous works [9,19]. A binomial model would also be appropriate, but is not discussed here for brevity. Let ε ∈ [0, 1] be the per-base error rate. The expected number of errors in a k-mer is k · ε, and the probability of no errors within each k-mer, assumed independent, is e^(−εk). We assume this statement is valid irrespective of error type.
Jaccard Similarity: Assuming X, Y are the sets of k-mers in sequences X and Y respectively, their Jaccard similarity is defined as J(X, Y) = |X ∩ Y|/|X ∪ Y|. The Poisson error model is used to compute the relationship between Jaccard similarity and alignment error rate [19]. We approximate the length of a read alignment to be the read length. Let A be a read derived from B_i, where B_i denotes the length |A| substring of reference B starting at position i. If c and n denote the number of error-free and total k-mers in A, respectively, then the expected value of c/n, termed the k-mer survival probability, is e^(−εk). This equation assumes k is large enough such that k-mers in A or B_i are unique. Because |A| = |B_i|, J(A, B_i), abbreviated as J, equals c/(2n − c). Combining the two equations, we derive the following functions F and G to correlate ε and J:

ε = F(J, k) = −(1/k) · ln(2J/(1 + J));   J = G(ε, k) = 1/(2e^(εk) − 1).   (1)

MinHash Approximation: The MinHash algorithm is a fast and space-efficient approximation technique to compute an unbiased estimate of Jaccard similarity [4], without explicitly computing the underlying set intersection and union. Let s be a fixed parameter. Assuming universe U is the totally ordered set of all possible items, and Ω : U → U is a permutation chosen uniformly at random, Broder [4] proved that Pr(min{Ω(A)} = min{Ω(B_i)}) = J(A, B_i). Accordingly, the sketch S(A) is defined as the set of the s smallest hashed k-mers of A, and the fraction of shared elements between S(A) and S(B_i) estimates J(A, B_i). Assuming s is fixed and the true Jaccard similarity j = J(A, B_i) is known, the count of shared sketch elements between S(A) and S(B_i) follows a hypergeometric distribution. Since s is much smaller than |A|, it can be approximated by the binomial distribution.
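For concreteness, the two conversions of Equation 1 can be written as a short sketch (plain Python, with `eps` standing for ε; a direct transcription of the formulas, not code from the implementation):

```python
import math

def G(eps, k):
    """J = G(eps, k): expected Jaccard similarity between a read and its
    true mapping region under the Poisson error model (Equation 1)."""
    return 1.0 / (2.0 * math.exp(eps * k) - 1.0)

def F(j, k):
    """eps = F(J, k): per-base error rate implied by an observed Jaccard
    similarity; the inverse of G."""
    return -math.log(2.0 * j / (1.0 + j)) / k
```

For example, G(0.15, 16) ≈ 0.0475: at 15% error and k = 16, a true mapping is expected to share only about 5% of its k-mers with the read, which is why the later analysis works with small Jaccard thresholds.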
As an example, Figure 1 illustrates this distribution for a read with known Jaccard similarity j = G(ε = 0.15, k = 16) (using Equation 1) and sketch size s varying from 200 to 500.

Winnowing: Winnowing is a local fingerprinting algorithm, proposed to measure similarity between documents by using a subset of hashed words [25]. Unlike MinHash sketching, it bounds the maximum positional gap between any two consecutive selected hashes. It works by sampling the smallest hashed item in every consecutive fixed-size sliding window (Figure 2). A formal description of this algorithm in the context of genomic sequences follows.
Let A_0 denote the set of all k-mer tuples ⟨k_i, i⟩ in sequence A, where i denotes the k-mer position. Let w be the window size used for winnowing, and K_j be the set of w consecutive k-mer tuples starting at position j in A, i.e., K_j = {⟨k_i, i⟩ : j ≤ i < j + w}. Assume Ω is a hash function defined as a random permutation. Then, the set of minimizers sampled by the winnowing algorithm in sequence A is W(A) = ⋃_j {⟨k, i⟩ ∈ K_j : Ω(k) = min{Ω(k′) : ⟨k′, i′⟩ ∈ K_j}}, for 0 ≤ j ≤ |A_0| − w. Schleimer et al. [25] prove that the expected count of minimizers selected from a random sequence A is 2|A_0|/w. Moreover, W(A) can be computed efficiently in O(|A|) time and O(w) space using a double-ended queue, as sequence A is read in a streaming fashion [26].
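The streaming deque computation can be sketched as follows (an illustrative rendering operating on precomputed k-mer hash values, with the rightmost element winning ties; the names are ours, not the implementation's):

```python
from collections import deque

def winnow(hashes, w):
    """Minimizers of every window of w consecutive k-mer hashes,
    returned as a set of (hash, position) tuples. A monotonic
    double-ended queue gives O(n) time and O(w) space, matching the
    streaming bound cited in the text."""
    q = deque()                        # (hash, pos) pairs, hashes increasing
    selected = set()
    for i, h in enumerate(hashes):
        while q and q[-1][0] >= h:     # h dominates larger-or-equal hashes
            q.pop()
        q.append((h, i))
        if q[0][1] <= i - w:           # front slid out of the current window
            q.popleft()
        if i >= w - 1:                 # a full window [i-w+1, i] exists
            selected.add(q[0])
    return selected
```

For example, winnow([5, 2, 8, 1, 9, 3], 3) selects (2, 1) and (1, 3): hash 2 is the minimum of the first window and hash 1 of every later one, so roughly 2/w of the positions are sampled, consistent with the 2|A_0|/w expectation above.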

Problem Formulation
Given a read A and the maximum per-base error rate ε_max, our goal is to identify target positions in reference B where A aligns with ≤ ε_max per-base error rate. This problem can be solved exactly in O(|A| · |B|) time by designing a suitable quadratic-time alignment algorithm. When mapping to a large database of reference sequences, solving this problem exactly is computationally prohibitive. Hence, we define an approximate version of this problem using the Jaccard coefficient as a proxy for the alignment, as follows. Let B_i denote the substring of size |A| in B starting at position i (0 ≤ i ≤ |B| − |A|). For a given k, we seek all mapping positions i in B such that J(A, B_i) ≥ G(ε_max, k) − δ. Note that if A aligns with B_i with per-base error rate ≤ ε_max, then E(J(A, B_i)) ≥ G(ε_max, k) (using Equation 1). As this bound applies only to the expected value of J(A, B_i), we lower the threshold by δ to account for variation in the estimate. The parameter δ is defined as the margin of error in Jaccard estimation using a 90% confidence interval.

The Proposed Algorithm
Directly computing J(A, B i ) for all positions i is as asymptotically expensive as the alignment algorithm. The rationale for reformulating the problem in terms of Jaccard coefficients is that it enables the design of fast heuristic algorithms. We present an algorithm to estimate J(A, B i ) efficiently using a combination of MinHash and winnowing techniques. In addition, we compute an estimate of the expected alignment error rate for each mapping reported. Our method relies on an indexing and search strategy we developed to prune the incorrect mapping positions efficiently.

Definitions
Let W(A) be the set of minimizers computed for read A using the winnowing method with window size w. We sketch W(A) instead of sketching A itself. Assuming s is a fixed parameter, we define S(W(A)) as the set of the s smallest hashed k-mers that were sampled by winnowing A. We postpone our discussion on how to compute an appropriate window size w to Section 5. Besides low memory requirements, a key advantage of this indexing strategy is that a new reference sequence can be incrementally added to the existing data structure in time linear in its length, which is not feasible for suffix-array or Burrows-Wheeler-transform based indices, typically used in most mapping software.
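One way to picture the index is a hash table keyed by minimizer value, whose buckets list (reference id, position) occurrences; appending a new reference only appends to buckets, which is what makes incremental updates linear-time. A minimal sketch under that assumption (minimizer sets taken as already computed by winnowing; the function names are illustrative):

```python
from collections import defaultdict

def build_index(minimizers_by_ref):
    """minimizers_by_ref: dict mapping reference id -> set of
    (hash, position) minimizers, as produced by winnowing.
    Returns a hash table: minimizer hash -> list of (ref id, pos)."""
    index = defaultdict(list)
    for rid, mins in minimizers_by_ref.items():
        for h, pos in sorted(mins):
            index[h].append((rid, pos))
    return index

def add_reference(index, rid, mins):
    """Incremental update: only the new reference's own minimizers are
    touched, so the cost is linear in its length -- unlike suffix-array
    or BWT indices, which must be rebuilt."""
    for h, pos in sorted(mins):
        index[h].append((rid, pos))
```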

Searching the Reference
The goal of the search phase is to identify, for each read A, positions i such that J(A, B_i) ≥ τ, where τ = G(ε_max, k) − δ. We instead compute the winnowed-minhash estimate J′(A, B_i) = |S(W(A) ∪ W(B_i)) ∩ S(W(A)) ∩ S(W(B_i))| / s. To avoid directly evaluating J′(A, B_i) for each B_i, we first prune candidate positions using a necessary condition: if J′(A, B_i) ≥ τ, then at least m = ⌈s · τ⌉ sketch elements of the read must have matching minimizers in B_i. It follows that m consecutive entries should exist in L, the sorted list of reference positions whose minimizers match the read's sketch, with the positional difference between the first and m-th entry being < |A|. This criterion is efficiently evaluated for all B_i using a linear scan on L (lines 6-9). If satisfied, we push the associated candidate range into T. To avoid reporting B_i more than once, we merge two consecutive overlapping tuple ranges into one.
Algorithm 2: Stage 2 of mapping read
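Since the full listing is not reproduced here, a simplified sketch of the stage-1 scan may help; `candidate_regions`, `m`, and `read_len` are illustrative names of ours, with L the sorted list of matching minimizer positions as in the text:

```python
def candidate_regions(L, m, read_len):
    """Scan the sorted match-position list L once; whenever m
    consecutive entries span < read_len, record the range of plausible
    mapping start positions i (i <= L[j] and i + read_len > L[j+m-1]),
    merging consecutive overlapping ranges so no B_i is reported twice."""
    ranges = []
    for j in range(len(L) - m + 1):
        if L[j + m - 1] - L[j] < read_len:
            lo = max(0, L[j + m - 1] - read_len + 1)   # earliest start i
            hi = L[j]                                  # latest start i
            if ranges and lo <= ranges[-1][1]:
                ranges[-1][1] = max(ranges[-1][1], hi)  # merge overlap
            else:
                ranges.append([lo, hi])
    return [tuple(t) for t in ranges]
```

Only the positions inside the returned ranges need the full J′ evaluation in stage 2.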

Selecting Window and Sketch Sizes
The sketch size for Jaccard similarity estimation is inversely proportional to the window size w (Section 4.3). A larger window size improves the runtime and space requirement during the search, but also negatively affects the statistical significance and accuracy of our estimate. To achieve the right balance, we analyze the p-value of a mapping location being reported under the null hypothesis that both query and reference sequences are random. For the subsequent analysis, we assume the sketch size is s, the count of shared sketch elements is a discrete random variable Z, the k-mer size is k, the alphabet set is Σ, and the read and reference sequence sizes are q and r respectively. Location i is reported if J′(A, B_i) ≥ τ, i.e., at least s · τ sketch elements are shared. Following [19], consider two random sequences of length q with k-mer sets X and Y respectively. The probability of a random k-mer α appearing in X or Y, assuming q ≫ k, is P(α ∈ X) = P(α ∈ Y) = 1 − (1 − |Σ|^(−k))^q. Therefore, the expected Jaccard similarity J_null = J(X, Y) is given by

J_null = P(α ∈ X) · P(α ∈ Y) / (P(α ∈ X) + P(α ∈ Y) − P(α ∈ X) · P(α ∈ Y)).

For sketch size s, the probability that x or more sketch elements are shared is

P(Z ≥ x | J_null, s) = Σ_{j=x..s} C(s, j) · (J_null)^j · (1 − J_null)^(s−j).

Using this equation, we compute the probability of a random sequence of length q mapping to at least one substring in a random reference sequence of size r ≫ q as 1 − (1 − P(Z ≥ x | J_null, s))^r. For a minimum read length l_0 and x = s · τ, we wish to ensure that this probability is kept below a user-specified threshold p_max. As reported mapping locations i must satisfy J′(A, B_i) ≥ τ and q ≥ l_0, a mapping with J′(A, B_i) = τ and q = l_0 will, in general, have the highest probability of generating a random match. Therefore, we compute the maximum value of w that satisfies the p_max constraint for this instance. Sketch size s is set to |W(A)|, which from Section 4.3 is expected to be q · 2/w.
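The computation above can be sketched directly (a simplified illustration in plain Python; `tau` is assumed precomputed as G(ε_max, k) − δ, and the linear downward search over w is one naive way to resolve the circular dependency described next):

```python
import math

def j_null(q, k, sigma=4):
    """Expected Jaccard similarity of two random length-q sequences
    over an alphabet of size sigma."""
    p = 1.0 - (1.0 - sigma ** (-k)) ** q      # P(random k-mer is present)
    return (p * p) / (2.0 * p - p * p)

def binom_tail(s, x, j):
    """P(Z >= x) for Z ~ Binomial(s, j) shared sketch elements."""
    return sum(math.comb(s, i) * j**i * (1.0 - j)**(s - i)
               for i in range(x, s + 1))

def pvalue(q, r, k, s, tau):
    """Probability that a random length-q read maps somewhere in a
    random length-r reference with >= ceil(s * tau) shared elements."""
    x = max(1, math.ceil(s * tau))
    p_hit = binom_tail(s, x, j_null(q, k))
    return 1.0 - (1.0 - p_hit) ** r

def choose_window(l0, r, k, tau, p_max):
    """Largest window size w whose p-value stays below p_max; the
    expected sketch size s = 2 * l0 / w shrinks as w grows."""
    for w in range(l0, 0, -1):
        s = max(1, round(2 * l0 / w))
        if pvalue(l0, r, k, s, tau) <= p_max:
            return w
    return 1
```

Even a tiny J_null (about 6 × 10^-7 for q = 5000, k = 16) forces a few shared elements to be required before a human-sized reference stops producing random hits, which is why the selected w shrinks well below l_0.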
Since x, s, and w have a circular dependency, we iteratively solve for w, starting from the maximum value l_0, until the probability of a random mapping is ≤ p_max. The influence of different parameters on window size is shown in Figure 3. The window size w increases with increasing p_max or l_0, but has an inverse relationship with ε_max. These plots also highlight that as read length and error rate improve, our algorithm automatically adapts to a larger window size, greatly improving efficiency. Steps appear in the first two curves because Z is a discrete variable.

Proof of Sensitivity
We analyze the sensitivity exhibited by our algorithm in identifying correct mapping locations as a function of the read alignment error rate. Let i be a correct mapping location for read A. If ε_true is the true error rate in aligning A with B_i, then J_true ≈ E(J(A, B_i)) = G(ε_true, k). Our algorithm reports this mapping location if the Jaccard estimate J′(A, B_i) ≥ τ, i.e., the count of shared sketch elements Z ≥ s · τ. The associated probability is given by

P(Z ≥ s · τ | J_true, s) = Σ_{j=⌈s·τ⌉..s} C(s, j) · (J_true)^j · (1 − J_true)^(s−j).

We report the corresponding values in Table 1 while varying ε_max and ε_true from 0.04 to 0.20, for sketch sizes s = 200 and 500, respectively. In an ideal scenario, a mapping should be reported only if ε_true ≤ ε_max, i.e., a perfect algorithm would have "1" in each of the entries at or above the diagonal, and "0" in all other positions. From the table, it is evident our algorithm achieves close to ideal sensitivity for alignment error rates up to 20%.
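This binomial tail is straightforward to evaluate; the sketch below reproduces the qualitative shape of Table 1 (for simplicity it sets τ = G(ε_max, k), omitting the δ slack, so the numbers are illustrative rather than the table's exact entries):

```python
import math

def G(eps, k):
    """Expected Jaccard similarity at error rate eps (Equation 1)."""
    return 1.0 / (2.0 * math.exp(eps * k) - 1.0)

def sensitivity(eps_true, eps_max, k, s):
    """P(Z >= ceil(s * tau)) for Z ~ Binomial(s, J_true): the chance a
    correct mapping with true error rate eps_true clears the reporting
    threshold tau = G(eps_max, k) (delta slack omitted here)."""
    j_true = G(eps_true, k)
    x = math.ceil(s * G(eps_max, k))
    return sum(math.comb(s, i) * j_true**i * (1.0 - j_true)**(s - i)
               for i in range(x, s + 1))
```

With s = 200 and k = 16, a read at 10% error is almost always reported against an ε_max = 0.15 threshold, while a read at 20% error rarely is, matching the near-diagonal structure described for Table 1.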

Other Implementation Details
Optimizing for Variable Read Lengths: In contrast to cyclic short-read sequencing, single-molecule technologies can generate highly variable read lengths (e.g., 10^2 to 10^5 bases). In Section 5, we discussed how the window size w is determined using the minimum read length l_0. From Figure 3(c), notice that we can further reduce the sampling rate (i.e., use a larger window size) for reads longer than l_0 while still satisfying the p-value constraint. However, to realize this, the sampling scheme for indexing the reference sequence B needs to be consistent with that of the query. We propose the idea of multilevel winnowing to further optimize the runtime of our algorithm by choosing a custom window size for each input read. If W_w(B) denotes the set of winnowed fingerprints in the reference computed using window size w, then W_{2w}(B) ⊆ W_w(B) [25]. We exploit this property to construct a multilevel reference index with multiple window sizes {w, 2w, 4w, ...} recursively. This optimization yields faster mapping time per base pair for reads longer than l_0, as we independently compute the window size for a given read length l ≥ l_0 and round it down to the closest reference window size in {w, 2w, 4w, ...}. The expected time and space complexity of indexing the reference with multiple levels is unaffected, because the expected size of W_{2^{x+1}·w}(B) is half that of W_{2^x·w}(B), and W_{2^{x+1}·w}(B) can be determined in linear time from W_{2^x·w}(B).
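The containment W_{2w}(B) ⊆ W_w(B) can be checked directly: the minimizer of a 2w-window is necessarily the minimizer of some w-window inside it. A small self-contained check using a deque-based winnowing routine (as described in the Preliminaries; distinct hash values are used so that ties do not arise):

```python
from collections import deque

def winnow(hashes, w):
    """(hash, position) minimizers of every w-window of the hash list."""
    q, out = deque(), set()
    for i, h in enumerate(hashes):
        while q and q[-1][0] >= h:
            q.pop()
        q.append((h, i))
        if q[0][1] <= i - w:
            q.popleft()
        if i >= w - 1:
            out.add(q[0])
    return out

# Distinct pseudo-hashes: multiplying by a unit modulo a prime permutes
# the residues, so no two positions share a hash value.
hashes = [h * 7919 % 10007 for h in range(200)]
```

Doubling the window can only thin the selection, so each level of the multilevel index is a subset of the level below it and adds at most half as many entries.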

Strand Prediction:
To account for reads sequenced from the reverse strand relative to the reference genome, we compute and store only canonical k-mers, i.e., the lexicographically smaller of the forward and reverse-complemented k-mer. For each k-mer tuple ⟨k, i⟩ in W(A) and W(B), we append a strand bit: 1 if the forward k-mer is lexicographically smaller and −1 otherwise. While evaluating the read mappings in Stage 2, we compute the mapping strand of the read through a consensus vote among the shared sketch elements, using the sum of pairwise products of the strand bits.
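The consensus vote amounts to a sign test (a minimal sketch; the pairing of strand bits per shared element is assumed to come from Stage 2):

```python
def consensus_strand(shared_bits):
    """shared_bits: (read strand bit, reference strand bit) pairs for
    the shared sketch elements; each bit is +1 if the forward k-mer is
    the canonical (lexicographically smaller) form, else -1. A product
    of +1 means the element agrees between read and reference strand;
    the sign of the summed products is the majority vote."""
    vote = sum(a * b for a, b in shared_bits)
    return 1 if vote >= 0 else -1   # +1: forward mapping, -1: reverse
```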

Quality of Jaccard Estimation
We first show that the accuracy of the winnowed-minhash estimator J′ in estimating Jaccard similarity is as good as the direct MinHash approximation, which is an unbiased statistical estimator. We construct a random sequence of length 5 kbp, with each character having equal probability of being A, C, G, or T. We generate reads from it while introducing substitution errors at each position with probability 0.15. Note that both substitutions and indels have a similar effect of altering the k-mers containing them, and a uniform distribution of errors alters more k-mers than a clustering of errors. Figure 4 shows the estimation difference against the true Jaccard similarity using MinHash and our estimator for two sketch sizes, s = 100 and s = 200. Based on these results, we conclude that the bias in our estimation is practically negligible, as the mean error of our method in estimating Jaccard similarity is < 0.003 for both sketch sizes. As with the MinHash approximation, the magnitude of the estimation error decreases with increasing sketch size.

Mapping MinION and PacBio Reads
We refer to the C++ implementation of our algorithm as mashmap, and compare its run-time performance and memory usage against the alignment-based long-read mappers BWA-MEM (v0.7.15-r114) [13], BLASR (vSMRTportal 2.3.0) [5], and minimap (v0.2) [14]. We also compare the approximate mapping targets generated by mashmap and minimap. Like mashmap, minimap uses winnowing to index the reference, but does not use the MinHash approximation to estimate Jaccard similarity or nucleotide identity. Instead, minimap seeks clusters of minimizer matches to identify regions of local similarity. Importantly, minimap approximates a local alignment process, which is useful for split-read mapping. However, because mashmap is currently designed to find complete read mappings, we only consider this case for the following comparisons.
Datasets and Methodology: We evaluated the algorithms by mapping long read datasets generated using single-molecule sequencers from Pacific Biosciences and Oxford Nanopore, and report single-threaded execution timings on an AMD Opteron 2376 CPU with 64 GB RAM. We use two datasets, labeled N1 and P1 respectively, both containing reads of length ≥ 5 kbp. Dataset N1 is a random sample of 30,000 reads from the MinION (R9/1D) sequencing dataset of the Escherichia coli K12 genome [17]. Dataset P1 contains 18,000 reads generated through a single SMRT cell from PacBio's (P6/C4) sequencing of the human genome (CHM1) [6]. We map N1 to E. coli K12 (4.6 Mbp) and P1 to the human reference (3.2 Gbp). For mashmap, we use the following parameters: l_0 = 5000, ε_max = 0.15, and p_max = 0.001. When a read maps to multiple locations, mashmap reports only those locations whose mapping error rate is no more than 1% above the minimum error rate over all such locations.
Run-Time Performance: Run-times for the index building and mapping stages, along with memory usage, for the four methods are compared in Table 2. As both BWA-MEM and BLASR are alignment-based methods, we expect their run-times to be significantly higher. Indeed, they take several hours, in comparison to seconds (N1) or a few minutes (P1) for mashmap and minimap. The principal question is whether the latter methods can retain the quality achievable with alignment-based methods. We note that mashmap has the lowest memory footprint for both datasets, and its run-time compares favorably with minimap. The ability to compute the sampling rate at runtime gives mashmap its edge in terms of memory usage.

Quality of Mapping:
As there is no standard benchmark using real datasets, we assess sensitivity/recall using BWA-MEM's starting read mapping positions, and precision by computing Smith-Waterman (SW) alignments of the reported mappings (Table 3). Since both minimap and BWA-MEM also report split-read alignments, we post-filter their results to keep only alignments with ≥ 80% read coverage. Recall is measured against BWA-MEM alignments which satisfy the ε_max = 0.15 cutoff (≥ 85% identity). Because both minimap and mashmap estimate mapping positions, a reported mapping is assumed equivalent to BWA-MEM's if the predicted starting position of a read is within ±50% of its length. Precision was directly validated using SW alignment (scoring matrix: match = 1, mismatch = −1, gap open = −2, gap extend = −1). For minimap's and our results, we require SW identity ≥ 75% and query coverage ≥ 80%. Results in Table 3 show that both mashmap and minimap have close to ideal sensitivity/recall, demonstrating their ability to uncover the right target locations.
Mashmap also achieves high precision, avoiding false positives on the repetitive human genome. Minimap's low precision on human is largely driven by false-positive mappings to repetitive sequence, which could potentially be resolved with alternative clustering parameters. Mashmap false positives are dominated by reported mappings with a SW query coverage less than 80% of the read length. It may be possible to avoid such mappings by considering the positional distribution of shared sketch elements during the second stage filter, or by adopting a local alignment reporting strategy like minimap.
We compare our identity estimates, (1 − ε) × 100, against the SW alignment identities in Figure 5. For the PacBio reads, we observe that most of the points lie close to y = x. However, for the nanopore reads, our approach overestimates the identity. This is because PacBio sequencing produces mostly random errors, whereas current nanopore errors are more clustered and systematic [11].

Mapping to RefSeq
We perform mapping of a publicly available PacBio read set consisting of 127,565 reads (each ≥ 5 kbp) sequenced from a mock microbial community containing 20 strains [20]. To demonstrate the scalability of our algorithm, we map these reads against the complete NCBI RefSeq database (838 Gbp) containing sequences from 60,892 organisms. This experiment was executed using default parameters (l_0 = 5000, ε_max = 0.15, p_max = 0.001) on an Intel Xeon CPU E7-8837 with 1 TB memory. BWA-MEM and minimap could not index the entire RefSeq database at once within this memory limitation. Mashmap took 29 CPU hours to index the reference and 16 CPU hours for mapping, with a peak memory usage of 660 GB. Note that the same index can be reused for mapping subsequent sequences, conferring on our method the ability to process data in real time. To check the accuracy of our results, we ran BWA-MEM against the 20 known genomes of the mock community. The recall of mashmap against BWA-MEM mappings ranged from 97.7% to 99.1% over the 20 genomes in the mock community.

Conclusions
We have presented a fast approximate algorithm for mapping long reads to large reference genomes. Instead of reporting base-level alignments, mashmap reports all reference intervals with sufficient Jaccard similarity compared to the k-mer spectrum of the read. In contrast to earlier techniques based on MinHash and winnowing, we provide a formal characterization of the mappings the algorithm is intended to uncover, and provide a provably good algorithm for computing them. In addition, we report an estimate of the alignment error rate tailored to each mapping under an assumed error model. Mashmap provides significant benefits in run-time, memory usage, and scalability, while achieving precision and recall similar to alignment-based methods. Future work aims to extend this method to split-read mapping, compressed reference databases, and additional error models. For example, the winnowed-minhash operation could be applied to paths within a de Bruijn graph to recover identity estimates and identify the database sequences most similar to a query sequence. Such approximate algorithms promise to help address the ever increasing scale of genomic data.