## Abstract

Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this paper, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290x faster than BWA-MEM with a lower memory footprint and a recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥ 5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and > 60,000 genomes.

## 1 Introduction

Mapping reads generated by high-throughput DNA sequencers to reference genomes is a fundamental and widely studied problem [16, 24]. The problem is particularly well studied for short read sequences, for which effective mapping algorithms and widely used software such as BWA [15] and Bowtie [12] have been developed. The increasing popularity of single-molecule sequencers from Pacific Biosciences and Oxford Nanopore, and their continually improving read lengths (10 kbp and up), is generating renewed interest in long read mapping algorithms. However, the benefit of long read lengths is currently accompanied by much higher error rates (up to 15–20%). Despite their high error rates, long reads have proved advantageous in many applications including *de novo* genome assembly [7, 10] and real time pathogen identification [2, 22].

Sequence data from nanopore devices is available just minutes after introducing the sample. This can enable real-time genomic analysis when coupled with fast computational methods that can map the data stream against large reference databases. However, mapping raw sequences continues to be a bottleneck for many applications. The problem is only going to worsen as Oxford Nanopore’s PromethION is projected to generate multiple tera-bases of sequence per day. In parallel, reference databases are continually growing in size, with the non-redundant NCBI RefSeq database close to exceeding a tera-base in size. The high error rate of raw single-molecule sequences further adds to the computational complexity.

Read mapping problems can be solved exactly by designing appropriate variants of the Smith-Waterman alignment algorithm [27]; however, it is computationally prohibitive when mapping reads from a high throughput sequencer to large reference genomes. Seed-and-extend mapping heuristics address this limitation for both long and short reads by limiting the search to locations where exact short word or maximal common substring matches occur before executing an alignment algorithm at these locations [1, 8, 5]. Accurate alignment-based long read mappers include BLASR [5] and BWA-MEM [13]. However, repetitive seeds that do not translate to correct mappings combined with high sequencing error rates limit their scalability. Additionally, alignment-based mapping algorithms preserve the complete reference sequence in the index, and hence, cannot scale to tera-base scale reference databases. Many genomics applications do not require detailed base-to-base alignment information, and instead use only the alignment boundary and identity summaries. Such applications include depth-of-coverage analysis, metagenomic read assignment, structural variant detection, and selective sequencing [18]. Efficient algorithms for these problems, combined with nanopore sequencing, could enable the real-time genomic analysis of patients, pathogens, cancers, and microbiomes.

One class of algorithms for fast, approximate mapping relies on ideas originally developed for finding similarities between web documents. Broder [4] proved that an unbiased estimate of the Jaccard similarity coefficient between two sets can be computed efficiently using a subset of hashed elements called a sketch. Schleimer *et al*. [25] proposed the winnowing algorithm, which picks a minimum hashed item (also known as a minimizer [23]) from each consecutive window of text as a means to more quickly estimate local similarity between web documents. These ideas have been used to develop new mapping and assembly algorithms for long reads such as the MinHash Alignment Process [3], minimap [14], and BALAUR [21]. To date, the effectiveness of these approaches has only been demonstrated empirically.

In this paper, we propose a fast approximate algorithm for mapping long reads that scales to large reference databases, with theoretical guarantees and practical validation of the quality of the reported results. We propose a problem formulation that mathematically characterizes desired mapping targets by linking the Jaccard coefficient between the *k*-mer spectra of the read and its mapping region to the sequence error rate under a Poisson error model. We then provide an efficient algorithm to estimate the Jaccard coefficient through a combination of MinHash and winnowing techniques, which characterizes and guarantees the types of mapping regions we find. On the quality side, we provide probabilistic bounds on sensitivity. We present techniques for choosing algorithmic parameters as a function of error rate and sequence lengths that guarantee the desired statistical significance. The theory is validated using PacBio and MinION datasets, and we demonstrate the scalability of our approach by mapping PacBio metagenomic reads to the entire RefSeq database. The speed and space efficiency of our algorithm enables real-time mapping, and compared to minimap, our method maintains high sensitivity with better precision for large, repetitive genomes. The implementation is available at github.com/MarBL/MashMap.

## 2 Preliminaries

### Read Error Model

We assume errors occur independently at each read position, and use a Poisson error model as in previous works [9, 19]. A binomial model would also be appropriate, but is not discussed here for brevity. Let ϵ ∈ [0, 1] be the per-base error rate. The expected number of errors in a *k*-mer is *k* · ϵ, and the probability of no errors within each *k*-mer, assumed independent, is *e*^{−ϵk}. We assume this statement is valid irrespective of the error type.

### Jaccard Similarity

Let *X_{k}* and *Y_{k}* denote the sets of *k*-mers in sequences *X* and *Y* respectively. Their Jaccard similarity is defined as *J*(*X*, *Y*) = |*X_{k}* ∩ *Y_{k}*| / |*X_{k}* ∪ *Y_{k}*|. The Poisson error model is used to compute the relationship between Jaccard similarity and alignment error rate [19]. We approximate the length of a read alignment to be the read length. Let *A* be a read derived from *B_{i}*, where *B_{i}* denotes the length-|*A*| substring of reference *B* starting at position *i*. If *c* and *n* denote the number of error-free and total *k*-mers in *A*, respectively, then the expected value of *c*/*n*, termed the *k-mer survival probability*, is *e*^{−ϵk}. This equation assumes *k* is large enough such that *k*-mers in *A* or *B_{i}* are unique. Because |*A*| = |*B_{i}*|, *J*(*A*, *B_{i}*), abbreviated as *J*, equals *c* / (2*n* − *c*). Using the two equations, we derive the following functions *G* and *F* to estimate *J* and ϵ, respectively:

*G*(ϵ, *k*) = 1 / (2*e*^{ϵk} − 1),  *F*(*J*, *k*) = −(1/*k*) · log(2*J* / (1 + *J*))  (1)

where *G*(ϵ, *k*) serves as an estimate of the Jaccard similarity given an error rate, and *F*(*J*, *k*) estimates the converse. Note that E[*J*] ≥ *G*(ϵ, *k*), since *J* is a convex function of *c* (using Jensen's inequality).
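The closed forms above can be checked numerically. A minimal sketch follows; the helper names `g_jaccard` and `f_error` are our own, not the paper's implementation:

```python
import math

def g_jaccard(eps, k):
    """G(eps, k): expected Jaccard similarity of a read and its mapping
    region at per-base error rate eps, derived from the k-mer survival
    probability e^{-eps*k} and J = c / (2n - c)."""
    return 1.0 / (2.0 * math.exp(eps * k) - 1.0)

def f_error(j, k):
    """F(j, k): the converse estimate, per-base error rate implied by an
    observed Jaccard similarity j."""
    return -math.log(2.0 * j / (1.0 + j)) / k

# G and F are inverses of each other: a 15% error rate with k = 16 maps
# to a small expected Jaccard value, and F recovers eps = 0.15 from it.
j = g_jaccard(0.15, 16)
eps = f_error(j, 16)
```

Note how quickly *G* decays with ϵ: at *k* = 16, a 15% error rate already pushes the expected Jaccard similarity below 0.05, which is why the method works with small thresholds.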

### MinHash Approximation

The MinHash algorithm is a fast and space-efficient approximation technique to compute an unbiased estimate of Jaccard similarity [4], without explicitly computing the underlying set intersection and union. Let *s* be a fixed parameter. Assuming the universe *U* is the totally ordered set of all possible items, and Ω: *U* → *U* is a permutation chosen uniformly at random, Broder [4] proved that *P*(min Ω(*A*) = min Ω(*B_{i}*)) = *J*(*A*, *B_{i}*), and that

|*S*(*A* ∪ *B_{i}*) ∩ *S*(*A*) ∩ *S*(*B_{i}*)| / |*S*(*A* ∪ *B_{i}*)|  (2)

is an unbiased estimate of *J*(*A*, *B_{i}*), where *S*(*A*) (called the *sketch* of *A*) is the set of the *s* smallest hashed items in *A*, i.e., *S*(*A*) = min_{s}{Ω(*x*): *x* ∈ *A*}. Typically, the denominator |*S*(*A* ∪ *B_{i}*)| is referred to as the MinHash sketch size and the numerator as the count of shared sketch elements. This estimate is unbiased provided *S*(*A*) is a simple random sample of *A*. Increasing the sketch size improves the accuracy of the estimate.

Assuming *s* is fixed and the true Jaccard similarity *j* = *J*(*A*, *B_{i}*) is known, the count of shared sketch elements between *S*(*A*) and *S*(*B_{i}*) follows a hypergeometric distribution. Since *s* is much smaller than |*A* ∪ *B_{i}*|, it can be approximated by the binomial distribution. As an example, Figure 1 illustrates this distribution for a read with known Jaccard similarity *j* = *G*(ϵ = 0.15, *k* = 16) (using Equation 1) and sketch size *s* varying from 200 to 500.
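Equation 2's estimator can be sketched as follows (our own illustration; a salted built-in hash stands in for the random permutation Ω):

```python
def kmer_set(seq, k):
    """Set of k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_jaccard(A, B, s, salt=0x9E3779B9):
    """Unbiased MinHash estimate of J(A, B): among the s smallest hashes
    of A ∪ B (the sketch of the union), count elements whose hash also
    lies in the sketches of both A and B."""
    h = lambda x: hash((salt, x))             # stand-in for permutation Ω
    sketch = lambda S: set(sorted(map(h, S))[:s])
    S_A, S_B, S_AB = sketch(A), sketch(B), sketch(A | B)
    return len(S_AB & S_A & S_B) / len(S_AB)
```

With *s* equal to |*A* ∪ *B*| the estimate is exact; smaller sketches trade accuracy for space.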

### Winnowing

Winnowing is a local fingerprinting algorithm, proposed to measure similarity between documents by using a subset of hashed words [25]. Unlike MinHash sketching, it bounds the maximum positional gap between any two consecutive selected hashes. It works by sampling the smallest hashed item in every consecutive fixed size sliding window (Figure 2). Formal description of this algorithm in the context of genomic sequences follows.

Let *A*_{0} denote the set of all *k*-mer tuples 〈*k_{i}*, *i*〉 in sequence *A*, with *i* denoting the *k*-mer position. Let *w* be the window size used for winnowing, and *K_{j}* be the set of *w* consecutive *k*-mer tuples starting at position *j* in *A*, i.e., *K_{j}* = {〈*k_{i}*, *i*〉: *j* ≤ *i* < *j* + *w*}. Assume Ω is a hash function defined as a random permutation. Then, the set of *minimizers* sampled by the winnowing algorithm in sequence *A* is

*W*(*A*) = {min_{Ω} *K_{j}*: 0 ≤ *j* ≤ |*A*_{0}| − *w*},

where min_{Ω} *K_{j}* denotes the tuple 〈*k_{i}*, *i*〉 ∈ *K_{j}* with the smallest hash Ω(*k_{i}*). Schleimer *et al*. [25] prove that the expected count of minimizers selected from a random sequence *A* is 2|*A*_{0}|/*w*. Moreover, *W*(*A*) can be computed efficiently in *O*(|*A*|) time and *O*(*w*) space using a double-ended queue, as sequence *A* is read in a streaming fashion [26].
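The streaming computation can be sketched with a monotonic double-ended queue (illustrative only; the described implementation is C++):

```python
from collections import deque

def winnow(seq, w, k, h=hash):
    """Return the minimizers of seq as a set of (hash, pos) pairs: the
    smallest hashed k-mer in every window of w consecutive k-mers,
    computed in O(|seq|) time and O(w) space."""
    hashes = [h(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    q = deque()               # k-mer indices; hash values increase front→back
    out = set()
    for i, hv in enumerate(hashes):
        while q and hashes[q[-1]] >= hv:   # drop candidates dominated by hv
            q.pop()
        q.append(i)
        if q[0] <= i - w:                  # evict index that left the window
            q.popleft()
        if i >= w - 1:                     # window complete: front is the min
            out.add((hashes[q[0]], q[0]))
    return out
```

Because consecutive windows usually share their minimum, the returned set is much smaller than the number of windows, matching the expected 2|*A*_{0}|/*w* density for random sequences.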

## 3 Problem Formulation

Given a read *A* and a maximum per-base error rate ϵ_{max}, our goal is to identify target positions in reference *B* where *A* aligns with per-base error rate ≤ ϵ_{max}. This problem can be solved exactly in *O*(|*A*| · |*B*|) time by designing a suitable quadratic time alignment algorithm. When mapping to a large database of reference sequences, solving this problem exactly is computationally prohibitive. Hence, we define an approximate version of this problem using the Jaccard coefficient as a proxy for the alignment as follows: let *B_{i}* denote the substring of size |*A*| in *B* starting at position *i* (0 ≤ *i* ≤ |*B*| − |*A*|). For a given *k*, we seek all mapping positions *i* in *B* such that

*J*(*A*, *B_{i}*) ≥ *G*(ϵ_{max}, *k*) − δ.

Note that if *A* aligns with *B_{i}* with per-base error rate ≤ ϵ_{max}, then E[*J*(*A*, *B_{i}*)] ≥ *G*(ϵ_{max}, *k*) (using Equation 1). As this applies only to the expected value of *J*(*A*, *B_{i}*), we lower the threshold by δ to account for variation in the estimate. The parameter δ is defined as the margin of error in Jaccard estimation using a 90% confidence interval.

## 4 The Proposed Algorithm

Directly computing *J*(*A*, *B_{i}*) for all positions *i* is asymptotically as expensive as the alignment algorithm. The rationale for reformulating the problem in terms of Jaccard coefficients is that it enables the design of fast heuristic algorithms. We present an algorithm to estimate *J*(*A*, *B_{i}*) efficiently using a combination of MinHash and winnowing techniques. In addition, we compute an estimate of the alignment error rate ϵ for each mapping reported. Our method relies on an indexing and search strategy we developed to prune incorrect mapping positions efficiently.

### 4.1 Definitions

Let *W*(*A*) be the set of minimizers computed for read *A* using the winnowing method with window size *w*. We sketch *W*(*A*) instead of sketching *A* itself. Assuming *s* is a fixed parameter, we define *S*(*W*(*A*)) as the set of the *s* smallest hashed *k*-mers that were sampled by winnowing *A*, i.e., *S*(*W*(*A*)) = min_{s}{*h*: 〈*h*, *pos*〉 ∈ *W*(*A*)}. To estimate *J*(*A*, *B_{i}*), we define the *winnowed-minhash* estimate 𝒥(*A*, *B_{i}*) for *J*(*A*, *B_{i}*) as

𝒥(*A*, *B_{i}*) = |*S*(*W*(*A*) ∪ *W*(*B_{i}*)) ∩ *S*(*W*(*A*)) ∩ *S*(*W*(*B_{i}*))| / |*S*(*W*(*A*) ∪ *W*(*B_{i}*))|.  (5)

In contrast to the MinHash approximation (Equation 2), our estimator 𝒥(*A*, *B_{i}*) uses winnowing to reduce the sampling frame before picking the minimum hash values. Even though *S*(*W*(*A*)) is no longer a simple random sample of the *k*-mers in *A*, we empirically show in Section 8.1 that the quality of the Jaccard estimation using 𝒥(*A*, *B_{i}*) is as good as the MinHash estimation. We use *W_{h}*(*A*) to denote the set of hashed *k*-mers in *W*(*A*), i.e., *W_{h}*(*A*) = {*h*: 〈*h*, *pos*〉 ∈ *W*(*A*)}.

### 4.2 Indexing the Reference

Retaining the minimizers *W*(*B_{i}*) is sufficient for the Jaccard similarity estimation 𝒥(*A*, *B_{i}*) (Equation 5). Since *W*(*B_{i}*) ⊆ *W*(*B*) (Section 2), we compute *W*(*B*) from the reference sequence *B* in order to be able to extract *W*(*B_{i}*) efficiently for any *i*. The set *W*(*B*) can be computed from *B* in a linear scan in *O*(|*B*|) time. We store *W*(*B*) as an array *M* of tuples 〈*h*, *pos*〉. When created, the array is naturally in ascending sorted order of the positions. Further, to enable *O*(1) look-up of all the occurrences of a particular minimizer's hashed value *h*, we also replicate *W*(*B*) as a hash table *H* with *h* as the key and the array of its positions {*pos*: 〈*h*, *pos*〉 ∈ *W*(*B*)} as the mapped value. The expected space requirement for both *M* and *H* is 2|*B*|/*w* (Section 2). We postpone our discussion on how to compute an appropriate window size *w* to Section 5. Besides low memory requirements, a key advantage of this indexing strategy is that a new reference sequence can be incrementally added to the existing data structure in time linear to its length, which is not feasible for suffix-array or Burrows-Wheeler-transform based indices, typically used in most mapping software.
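The two views of *W*(*B*) can be sketched as follows (the names `M` and `H` follow the text's array and hash-table description; a minimal sketch, not the production index):

```python
from collections import defaultdict

def build_index(W_B):
    """Store the reference minimizers W(B) twice: as an array M of
    (hash, pos) tuples in ascending position order, and as a hash table
    H mapping each hash to the list of its positions, giving O(1)
    expected-time lookup of all occurrences of a minimizer."""
    M = sorted(W_B, key=lambda t: t[1])
    H = defaultdict(list)
    for h, pos in M:
        H[h].append(pos)
    return M, dict(H)
```

Appending a new reference only extends `M` and `H`, which is what makes incremental index construction linear in the added sequence's length.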

### 4.3 Searching the Reference

The goal of the search phase is to identify, for each read *A*, all positions *i* such that *J*(*A*, *B_{i}*) ≥ *G*(ϵ_{max}, *k*) − δ = τ. We instead compute the winnowed-minhash estimate 𝒥(*A*, *B_{i}*). To avoid directly evaluating 𝒥(*A*, *B_{i}*) for each *B_{i}*, we state and prove the following theorem:

**Theorem 1.** *Assuming sketch size s* ≤ |*W_{h}*(*A*)|, 𝒥(*A*, *B_{i}*) ≥ τ *implies* |*W_{h}*(*A*) ∩ *W_{h}*(*B_{i}*)| ≥ ⌈*s* · τ⌉.

*Proof.* From Equation 5, 𝒥(*A*, *B_{i}*) ≥ τ implies that at least ⌈*s* · τ⌉ of the sketch elements of *W*(*A*) ∪ *W*(*B_{i}*) occur in both *S*(*W*(*A*)) and *S*(*W*(*B_{i}*)), and each such shared element belongs to *W_{h}*(*A*) ∩ *W_{h}*(*B_{i}*).

We use the above condition as a filter and only consider positions in *B* which satisfy |*W_{h}*(*A*) ∩ *W_{h}*(*B_{i}*)| ≥ ⌈*s* · τ⌉. To maximize the effectiveness of the filter, we set the sketch size *s* = |*W_{h}*(*A*)|. The search proceeds in two successive stages. The first stage identifies candidate positions *i* using Theorem 1, and the second stage computes 𝒥(*A*, *B_{i}*) at each candidate position. The position is retained in the output if 𝒥(*A*, *B_{i}*) ≥ τ, and discarded otherwise.

**Stage 1:** Algorithm 1 outlines the first stage of our mapping procedure. It calculates all offset positions *i* in *B* such that |*W_{h}*(*A*) ∩ *W_{h}*(*B_{i}*)| ≥ ⌈*s* · τ⌉ = *m*. The output list *T* is created in the form of one or more tuple ranges 〈*x*, *y*〉, implying that the criterion holds true for all *B_{i}*, *x* ≤ *i* ≤ *y*. We begin by computing the minimizer hashed values *W_{h}*(*A*) by winnowing the read *A*, and compute the positions of their occurrence in the reference (line 4). Accordingly, *L* = {*pos*: *h* ∈ *W_{h}*(*A*) ∧ 〈*h*, *pos*〉 ∈ *W*(*B*)}. Next, we sort the array *L* to process all the positions in ascending order. If *B_{i}* satisfies the filtering criterion, there should be at least *m* entries in *L* with values in [*i*, *i* + |*A*|). Equivalently, *m* consecutive entries should exist in *L* with the positional difference between the first and the *m*^{th} entry being < |*A*|. This criterion is efficiently evaluated for all *B_{i}* using a linear scan on *L* (lines 6–9). If it is satisfied, we push the associated candidate range into *T*. To avoid reporting *B_{i}* more than once, we merge two consecutive overlapping tuple ranges into one.
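The linear scan can be sketched as follows (a hypothetical helper under simplifying assumptions; `H` is the position hash table from the reference index):

```python
def stage1_candidates(Wh_A, H, read_len, m):
    """Find candidate ranges <x, y> of offsets i such that at least m of
    the read's minimizer hashes fall in the reference window
    [i, i + read_len). H maps a minimizer hash to its reference positions."""
    # All reference positions of the read's minimizers, in sorted order.
    L = sorted(pos for h in Wh_A for pos in H.get(h, []))
    T = []
    for j in range(len(L) - m + 1):
        # m consecutive hits spanning less than one read length
        if L[j + m - 1] - L[j] < read_len:
            lo = max(0, L[j + m - 1] - read_len + 1)
            hi = L[j]
            if T and lo <= T[-1][1]:          # merge overlapping ranges
                T[-1] = (T[-1][0], max(T[-1][1], hi))
            else:
                T.append((lo, hi))
    return T
```

The window [*i*, *i* + |*A*|) contains positions *L*[*j*] through *L*[*j* + *m* − 1] exactly when *L*[*j* + *m* − 1] − |*A*| < *i* ≤ *L*[*j*], which is where the reported range comes from.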

**Stage 2:** Evaluation of each tuple 〈*x*, *y*〉 in the Stage 1 output array *T* requires computing 𝒥(*A*, *B_{i}*) for all *i*, *x* ≤ *i* ≤ *y*. Accordingly, we compute the minimum *s* unique sketch elements within *W_{h}*(*A*) ∪ *W_{h}*(*B_{i}*), and count the ones shared between *A* and *B_{i}*. We show the step-by-step procedure in Algorithm 2. We use an ordered map ℒ to contain the minimizer hashed values {*h*: *h* ∈ *W_{h}*(*A*) ∪ *W_{h}*(*B_{i}*)}. To implement ℒ, we make use of the C++ ordered map data structure, which supports logarithmic time insertion and deletion and linear time iteration over unique ordered keys. We keep the hashed value as the map's key, and map it to 1 if it appears in both the reference and the read, and 0 otherwise. For each tuple 〈*x*, *y*〉, we begin by saving the hashed values *W_{h}*(*A*) of read *A* into the map (lines 1, 3). Two loops (lines 2, 7) evaluate each tuple 〈*x*, *y*〉 in *T*, and consider each *B_{i}*, *x* ≤ *i* ≤ *y*, for the Jaccard estimation 𝒥(*A*, *B_{i}*). The function `getMinimizers` gathers the reference minimizer hashes *W_{h}*(*B_{i}*) by sequentially iterating over *M* in the required position range and populating the minimizers associated with each *B_{i}* into the map (lines 4, 8–9). Note that a few incorrect corner minimizers {*h*: 〈*h*, *pos*〉 ∈ *W*(*B*), *i* ≤ *pos* ≤ *i* + |*A*|} \ *W_{h}*(*B_{i}*) can appear in ℒ, winnowed from windows overlapping with *B_{i}*. However, these can be discarded by recomputing the minimum of the first and last window of *B_{i}*. Finally, the function `solveJaccard` computes |*S*(*W*(*A*) ∪ *W*(*B_{i}*)) ∩ *S*(*W*(*A*)) ∩ *S*(*W*(*B_{i}*))| by iterating over the *s* minimum unique sketch elements and counting the ones shared between *A* and *B_{i}*. If 𝒥(*A*, *B_{i}*) ≥ τ, the position *i* and the Jaccard estimate 𝒥(*A*, *B_{i}*) are saved into the output as the pair 〈*i*, 𝒥(*A*, *B_{i}*)〉. The corresponding estimate of the alignment error rate ϵ, computed using Equation 1, is *F*(𝒥(*A*, *B_{i}*), *k*).
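The core of the Equation 5 evaluation can be sketched as follows (simplified: recomputed from scratch on sets of hashes instead of the incremental ordered-map bookkeeping described above):

```python
def winnowed_minhash_estimate(Wh_A, Wh_Bi, s):
    """Winnowed-minhash estimate of Equation 5: among the s smallest
    distinct hashes of W_h(A) ∪ W_h(B_i), return the fraction present in
    both the read's and the reference window's minimizer sets."""
    union_sketch = sorted(set(Wh_A) | set(Wh_Bi))[:s]
    shared = sum(1 for h in union_sketch if h in Wh_A and h in Wh_Bi)
    return shared / len(union_sketch)
```

The incremental version keeps this running tally as minimizers enter and leave the window, so sliding from *B_{i}* to *B_{i+1}* costs only a few map updates rather than a full recomputation.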

## 5 Selecting Window and Sketch Sizes

The sketch size for Jaccard similarity estimation is inversely proportional to the window size *w* (Section 4.3). A larger window size improves the runtime and space requirement during the search but also negatively affects the statistical significance and accuracy of our estimate. To achieve the right balance, we analyze the p-value of a mapping location being reported under the null hypothesis that both query and reference sequences are random. For the subsequent analysis, we will assume the sketch size is *s*, the count of shared sketch elements is a discrete random variable *Z*, the *k*-mer size is *k*, the alphabet set is *Σ*, and the read and reference sequence sizes are *q* and *r* respectively.

Location *i* is reported if 𝒥(*A*, *B_{i}*) ≥ τ, i.e., if at least ⌈*s* · τ⌉ sketch elements are shared. Following [19], consider two random sequences of length *q* with *k*-mer sets *X* and *Y* respectively. The probability of a random *k*-mer α appearing in *X* or *Y*, assuming *q* ≫ *k*, is *P*(α ∈ *X*) = *P*(α ∈ *Y*) = 1 − (1 − |Σ|^{−k})^{q}. Therefore, the expected Jaccard similarity *J*_{null} = *J*(*X*, *Y*) is given by

*J*_{null} = *P*(α ∈ *X* ∧ α ∈ *Y*) / *P*(α ∈ *X* ∨ α ∈ *Y*) = *P*(α ∈ *X*) · *P*(α ∈ *Y*) / (*P*(α ∈ *X*) + *P*(α ∈ *Y*) − *P*(α ∈ *X*) · *P*(α ∈ *Y*)).

For sketch size *s*, the probability that *x* or more sketch elements are shared is

*P*(*Z* ≥ *x* | *J*_{null}, *s*) = Σ_{z=x}^{s} C(*s*, *z*) · *J*_{null}^{z} · (1 − *J*_{null})^{s−z}.

Using this equation, we compute the probability of a random sequence of length *q* mapping to at least one substring in a random reference sequence of size *r* ≫ *q* as 1 − (1 − *P*(*Z* ≥ *x* | *J*_{null}, *s*))^{r}. For a minimum read length *l*_{0}, we wish to ensure that this probability stays below a user-specified threshold *p*_{max}. As reported mapping locations *i* must satisfy 𝒥(*A*, *B_{i}*) ≥ τ and *q* ≥ *l*_{0}, a mapping with 𝒥(*A*, *B_{i}*) = τ and *q* = *l*_{0} will, in general, have the highest probability of being a random match. Therefore, we compute the maximum value of *w* that satisfies the *p*_{max} constraint for this instance. The sketch size *s* is set to |*W_{h}*(*A*)|, which from Section 4.3 is expected to be 2*q*/*w*. Since *x*, *s*, and *w* have a circular dependency, we iteratively solve for *w*, starting from the maximum value *l*_{0}, until the probability of a random mapping is ≤ *p*_{max}. The influence of different parameters on the window size is shown in Figure 3. The window size *w* increases with increasing *p*_{max} or *l*_{0}, but has an inverse relationship with ϵ_{max}. These plots also highlight that as read length and error rate improve, our algorithm automatically adapts to a larger window size, greatly improving efficiency.
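The iterative solve can be sketched as follows (our own simplification: δ is ignored, the reference size `r` defaults to an illustrative 10^9, and the binomial tail is summed in log space for numerical stability):

```python
import math

def binom_tail(s, x, p):
    """P(Z >= x) for Z ~ Binomial(s, p), summed in log space."""
    if x <= 0:
        return 1.0
    total = 0.0
    for z in range(x, s + 1):
        lt = (math.lgamma(s + 1) - math.lgamma(z + 1) - math.lgamma(s - z + 1)
              + z * math.log(p) + (s - z) * math.log1p(-p))
        t = math.exp(lt)
        total += t
        if t < total * 1e-15:              # remaining terms are negligible
            break
    return min(total, 1.0)

def max_window_size(l0, eps_max, k, p_max, r=10**9, sigma=4):
    """Largest w such that a random length-l0 query that passes the tau
    threshold against a random reference of size r has probability
    <= p_max. x, s, and w are resolved iteratively."""
    tau = 1.0 / (2.0 * math.exp(eps_max * k) - 1.0)       # G(eps_max, k)
    p_kmer = -math.expm1(l0 * math.log1p(-sigma ** (-k))) # P(alpha in X)
    j_null = p_kmer / (2.0 - p_kmer)                      # random Jaccard
    for w in range(l0, 0, -1):
        s = max(1, 2 * l0 // w)                # expected sketch size |W_h(A)|
        x = max(1, math.ceil(s * tau))         # required shared sketch count
        p_one = min(binom_tail(s, x, j_null), 0.5)   # clamp for safety
        p_any = -math.expm1(r * math.log1p(-p_one))  # any of ~r locations
        if p_any <= p_max:
            return w
    return 1
```

Decreasing *w* grows the sketch *s* and with it the required shared count *x* = ⌈*s* · τ⌉, which is what drives the random-match probability below *p*_{max}.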

## 6 Proof of Sensitivity

We analyze the sensitivity exhibited by our algorithm in identifying correct mapping locations as a function of the read alignment error rate. Let *i* be a correct mapping location for read *A*. If ϵ_{true} is the true error rate of aligning *A* with *B_{i}*, then *J*_{true} ≈ *G*(ϵ_{true}, *k*). Our algorithm reports this mapping location if the Jaccard estimate 𝒥(*A*, *B_{i}*) ≥ τ, i.e., if the count of shared sketch elements *Z* ≥ ⌈*s* · τ⌉. The associated probability is given by

*P*(*Z* ≥ ⌈*s* · τ⌉ | *J*_{true}, *s*) ≈ Σ_{z=⌈s·τ⌉}^{s} C(*s*, *z*) · *J*_{true}^{z} · (1 − *J*_{true})^{s−z}.

We report the corresponding values in Table 1 while varying ϵ_{max} and ϵ_{true} from 0.04 to 0.20, for two sketch sizes *s* = 200 and 500, respectively. In an ideal scenario, a mapping should be reported only if ϵ_{true} ≤ ϵ_{max}, i.e., a perfect algorithm would have “1” in each of the entries at or above the diagonal, and “0” in all other positions. From the table, it is evident that our algorithm achieves close to ideal sensitivity for alignment error rates up to 20%.
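Each Table 1 entry can be reproduced as a binomial tail (a sketch under the stated approximation; δ is ignored here):

```python
import math

def g_jaccard(eps, k):
    """G(eps, k) from Equation 1."""
    return 1.0 / (2.0 * math.exp(eps * k) - 1.0)

def sensitivity(eps_true, eps_max, k, s):
    """P(Z >= ceil(s*tau) | J_true, s): probability that a correct
    mapping with true error rate eps_true meets the reporting threshold
    tau = G(eps_max, k) with sketch size s."""
    j_true = g_jaccard(eps_true, k)
    x = math.ceil(s * g_jaccard(eps_max, k))
    return sum(math.comb(s, z) * j_true**z * (1.0 - j_true)**(s - z)
               for z in range(x, s + 1))
```

Entries at or above the diagonal (ϵ_{true} ≤ ϵ_{max}) come out near 1, and entries well below it near 0, matching the ideal behavior described above.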

## 7 Other Implementation Details

### Optimizing for Variable Read Lengths

In contrast to cyclic short-read sequencing, single-molecule technologies can generate highly variable read lengths (e.g., 10^{2}–10^{5} bases). Previously, we discussed how the window size *w* is determined using the minimum read length *l*_{0} (Section 5). From Figure 3(c), notice that we can further reduce the sampling rate (i.e., use a larger window size) for reads longer than *l*_{0} while still satisfying the p-value constraint. However, to realize this, the sampling scheme used for indexing the reference sequence *B* needs to be consistent with that of the query. We propose the idea of *multilevel winnowing* to further optimize the runtime of our algorithm by choosing a custom window size for each input read. If *W_{w}*(*B*) denotes the set of winnowed fingerprints of the reference computed using window size *w*, then *W*_{2w}(*B*) ⊆ *W_{w}*(*B*) [25]. We exploit this property to construct a multilevel reference index with multiple window sizes {*w*, 2*w*, 4*w*, …} recursively. This optimization yields faster mapping time per base pair for reads longer than *l*_{0}: we independently compute the window size for a given read length *l* ≥ *l*_{0}, and round it down to the closest smaller reference window size in {*w*, 2*w*, 4*w*, …}. The expected time and space complexity of indexing the reference with multiple levels is unaffected, because the expected size of *W*_{2w}(*B*) is half that of *W_{w}*(*B*), and *W*_{2w}(*B*) can be determined in linear time from *W_{w}*(*B*).

### Strand Prediction

To account for reads sequenced from the reverse strand relative to the reference genome, we compute and store only canonical *k*-mers, i.e., the lexicographically smaller of the forward and reverse-complemented *k*-mer. For each *k*-mer tuple 〈*k*, *i*〉 in *W*(*A*) and *W*(*B*), we append a strand bit: 1 if the forward *k*-mer is lexicographically smaller, and −1 otherwise. While evaluating the read mappings in Stage 2, we compute the mapping strand of the read through a consensus vote among the shared sketch elements, using the sum of the pairwise products of their strand bits.
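The consensus vote can be sketched as follows (illustrative helper; each shared sketch element carries the strand bits recorded for the read and for the reference):

```python
def consensus_strand(shared_bits):
    """Predict the read's mapping strand from (read_bit, ref_bit) pairs
    of the shared sketch elements: each pair votes with the product of
    its bits (+1: same orientation, -1: opposite orientation)."""
    vote = sum(rd * rf for rd, rf in shared_bits)
    return '+' if vote >= 0 else '-'
```

A majority of same-orientation matches yields a forward-strand call; a majority of opposite-orientation matches yields a reverse-strand call.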

## 8 Experimental Results

### 8.1 Quality of Jaccard Estimation

We first show that the accuracy of the *winnowed-minhash* estimator in estimating the Jaccard similarity is as good as the direct MinHash approximation, which is an unbiased statistical estimator. We construct a random sequence of length 5 kbp with each character having equal probability of being A, C, G, or T. We generate reads while introducing substitution errors at each position with probability 0.15. Note that both substitutions and indels have a similar effect of altering the *k*-mers containing them, and a uniform distribution of errors alters more *k*-mers than a clustering of errors. Figure 4 shows the estimation difference against the true Jaccard similarity using MinHash and our estimator for two different sketch sizes *s* = 100 and *s* = 200. Based on these results, we conclude that the bias in our estimation is practically negligible, as the mean error of our method in estimating Jaccard similarity is < 0.003 for both sketch sizes. Similar to the MinHash approximation, we note that the magnitude of the estimation error reduces with increasing sketch size.

### 8.2 Mapping MinION and PacBio Reads

We refer to the C++ implementation of our algorithm as mashmap and compare its run-time performance and memory usage against the alignment-based long-read mappers BWA-MEM (v0.7.15-r114) [13], BLASR (SMRTportal 2.3.0) [5], and minimap (v0.2) [14]. We also perform a comparison of the approximate mapping targets generated by mashmap and minimap. Like mashmap, minimap uses winnowing to index the reference, but does not use the MinHash approximation to estimate Jaccard similarity or nucleotide identity. Instead, minimap seeks clusters of minimizer matches to identify regions of local similarity. Importantly, minimap approximates a local alignment process, which is useful for split-read mapping. However, because mashmap is currently designed to find complete read mappings, we only consider this case for the following comparisons.

### Datasets and Methodology

We evaluated the algorithms by mapping long read datasets generated using single-molecule sequencers from Pacific Biosciences and Oxford Nanopore, and report single-threaded execution timings on an AMD Opteron 2376 CPU with 64 GB RAM. We use two datasets, labeled N1 and P1 respectively, both containing reads of length ≥ 5 kbp. Dataset N1 is a random sample of 30,000 reads from the MinION (R9/1D) sequencing dataset of the *Escherichia coli* K12 genome [17]. Dataset P1 contains 18,000 reads generated through a single SMRT cell from PacBio's (P6/C4) sequencing of the human genome (CHM1) [6]. We map N1 to *E. coli* K12 (4.6 Mbp) and P1 to the human reference (3.2 Gbp). For mashmap, we use the following parameters: *l*_{0} = 5000, ϵ_{max} = 0.15, and *p*_{max} = 0.001. When a read maps to multiple locations, mashmap only reports locations whose mapping error rate is no more than 1% above the minimum error rate over all such locations.

### Run-Time Performance

Run-times for the index building and mapping stages, and memory used, for the four methods are compared in Table 2. As both BWA-MEM and BLASR are alignment based methods, we expect their run-times to be significantly higher. Indeed, they take several hours in comparison to seconds (N1) or a few minutes (P1) taken by mashmap and minimap. The principal challenge is whether the latter methods can retain the quality obtainable through alignment based methods. We note that mashmap has the lowest memory footprint for both datasets, and its run-time compares favorably with minimap. The ability to compute the sampling rate at runtime gives mashmap its edge in terms of memory usage.

### Quality of Mapping

As there is no standard benchmark using real datasets, we assess sensitivity/recall using BWA-MEM's starting read mapping positions, and precision by computing Smith-Waterman (SW) alignments of the reported mappings (Table 3). Since both minimap and BWA-MEM also report split-read alignments, we post-filter their results to only keep alignments with ≥ 80% read coverage. Recall is measured against BWA-MEM alignments which satisfy the ϵ_{max} = 0.15 cutoff (≥ 85% identity). Because both minimap and mashmap estimate mapping positions, a reported mapping is assumed equivalent to BWA-MEM's if the predicted starting position of the read is within ±50% of its length. Precision was directly validated using Smith-Waterman alignment (scoring: *match* = 1, *mismatch* = −1, *gap*_{open} = −2, *gap*_{extend} = −1). For minimap's and our results, we allow SW identity ≥ 75% and query coverage ≥ 80%. Results in Table 3 show that both mashmap and minimap have close to ideal sensitivity/recall, demonstrating their ability to uncover the right target locations.

Mashmap also achieves high precision, avoiding false positives on the repetitive human genome. Minimap's low precision on human is largely driven by false-positive mappings to repetitive sequence, which could potentially be resolved with alternative clustering parameters. Mashmap false positives are dominated by reported mappings with an SW query coverage below 80% of the read length. It may be possible to avoid such mappings by considering the positional distribution of shared sketch elements during the second-stage filter, or by adopting a local alignment reporting strategy like minimap's.

We compare our identity estimates (1 – *ϵ*) × 100 against the SW alignment identities in Figure 5. For the PacBio reads, we observe that most of the points are aligned close to *y* = *x*. However, for the nanopore reads, our approach overestimates the identity. This is because PacBio sequencing produces mostly random errors, whereas current nanopore errors are more clustered and systematic [11].

### 8.3 Mapping to RefSeq

We perform mapping of a publicly available PacBio read set consisting of 127,565 reads (each ≥ 5 kbp) sequenced from a mock microbial community containing 20 strains [20]. To demonstrate the scalability of our algorithm, we map these reads against the complete NCBI RefSeq database (838 Gbp) containing sequences from 60,892 organisms. This experiment was executed using the default parameters (*l*_{0} = 5000, ϵ_{max} = 0.15, *p*_{max} = 0.001) on an Intel Xeon CPU E7-8837 with 1 TB memory. BWA-MEM and minimap could not index the entire RefSeq database at once within this memory limitation. Mashmap took 29 CPU hours to index the reference and 16 CPU hours for mapping, with a peak memory usage of 660 GB. Note that the same index can be repeatedly reused for mapping new sequences, conferring our method the ability to process data in real time. To check the accuracy of our results, we ran BWA-MEM against the 20 known genomes of the mock community. The recall of mashmap against BWA-MEM mappings ranged from 97.7% to 99.1% across the 20 genomes of the mock community.

## 9 Conclusions

We have presented a fast approximate algorithm for mapping long reads to large reference genomes. Instead of reporting base-level alignments, mashmap reports all reference intervals with sufficient Jaccard similarity compared to the *k*-mer spectrum of the read. In contrast to earlier techniques based on MinHash and winnowing, we formally characterize the mappings the algorithm is intended to uncover and give a provably good algorithm for computing them. In addition, we report an estimate of the alignment error rate tailored to each mapping under an assumed error model. Mashmap provides significant benefits in run-time, memory usage, and scalability, while achieving precision and recall similar to alignment-based methods. Future work aims to extend this method to split-read mapping, compressed reference databases, and additional error models. For example, the *winnowed-minhash* operation could be applied to paths within a de Bruijn graph to recover identity estimates and identify the database sequences most similar to a query sequence. Such approximate algorithms promise to help address the ever increasing scale of genomic data.

## Acknowledgments

This research was supported in part by the Intramural Research Program of the National Human Genome Research Institute, National Institutes of Health, and the U.S. National Science Foundation under IIS-1416259.