Meta: Memory-efficient taxonomic classification and abundance estimation for metagenomics with deep learning
Metagenomic studies increasingly use sequencing technologies to analyze DNA fragments found in environmental samples. One important step in this analysis is the taxonomic classification of those fragments. Conventional read classification methods require large databases and vast amounts of memory to run, while recent deep learning methods suffer from very large model sizes. We therefore aim to develop a more memory-efficient technique for taxonomic classification. A task of particular interest is abundance estimation in metagenomic samples. Current approaches classify single DNA reads independently of each other and are therefore agnostic to co-occurrence patterns between taxa. In this work, we also take these patterns into account. We develop a novel memory-efficient read classification technique that combines deep learning with locality-sensitive hashing, and show that it outperforms conventional mapping-based and other deep learning methods for single-read taxonomic classification when all methods are restricted to a fixed memory footprint. Moreover, we formulate the task of abundance estimation as a Multiple Instance Learning (MIL) problem and extend current deep learning architectures with two types of permutation-invariant MIL pooling layers: a) DeepSets and b) attention-based pooling. We show that our architectures can exploit the co-occurrence of species in metagenomic read sets and outperform the single-read architectures in predicting the distribution over taxa at higher taxonomic ranks.
Over the last decades, advancements in sequencing technology have led to a rapid decrease in the cost of genome sequencing, while the amount of sequencing data being generated has vastly increased. This is attributable to the fact that genome sequencing is a tool of utmost importance for a variety of fields, such as biology and medicine, where it is used to identify changes in genes or to aid in the discovery of potential drugs [36, 42]. Metagenomics is a subfield of biology concerned with the study of genetic material extracted directly from an environmental sample [13, 24]. Significant efforts have been carried out by projects such as the Human Microbiome Project (HMP) and the Metagenomics of the Human Intestinal Tract (MetaHIT) project to understand how the human microbiome can affect human health. An important step in this process is the classification of DNA fragments into various groups at different taxonomic ranks, typically with reference to the NCBI Taxonomy, which encodes the relationships between the organisms against which fragments are matched. In this taxonomy, organisms are assigned taxonomic labels and are thus placed on the taxonomic tree. Each level of the tree represents a different taxonomic rank, with finer ranks such as species and genus close to the leaf nodes and coarser ranks such as phylum and class closer to the root.
We consider the problem of metagenomic classification, where each individual read is assigned a label or multiple labels corresponding to its taxon at each taxonomic rank. One could simply identify the taxon at the finest level of the taxonomy and then extract the taxa at all levels above by following the path to the root. The problem with this approach is that for certain reads, we might not be able to accurately identify the species of the host organism, but nevertheless be interested in coarser taxonomic ranks. This can apply in cases where little relevant reference data is available for a sequencing data set, such as deep sea metagenomics data or public transit metagenomics [1, 14], so an accurate prediction at higher taxonomic ranks may be more informative for downstream analysis. This happens especially when reads come from regions of the genome that are conserved across species, but may be divergent at higher taxonomic ranks. Furthermore, in many cases we are also interested in the distribution of organisms in an environmental sample alongside the classifications of individual fragments.
The closely related task of sample discovery, in which the closest match to a query sequence is found in an annotated database of reference sequences, has drawn considerable attention in recent years [37, 20, 40, 48, 4, 3, 39, 28]. While these methods may be used as a back end for classification, given metadata associating each sample with its respective taxonomic ranks, this usually comes at the expense of large database sizes. To overcome this, these tools provide various means to trade off accuracy for representation size. Exact methods, for example, may allow for the use of more highly compressed but slower data structures [39, 2], or for the subsampling of data at the expense of more false negatives. Approximate methods, on the other hand, achieve smaller representation sizes at the cost of increased false positives [48, 4, 3].
Alongside these approaches, deep learning has shown great promise and has become increasingly prevalent for biological classification tasks. In recent years, we have seen various attempts to use deep learning to solve tasks such as variant calling or the discovery of DNA-binding motifs. These methods can even outperform more classical approaches, despite the relative lack of biological prior knowledge incorporated into the models.
In this work, we develop new memory-efficient deep learning methods for taxonomic classification. Furthermore, we formulate the task of abundance estimation as an instance of Multiple Instance Learning (MIL), a specific framework of supervised learning. In contrast to the traditional supervised learning task, where the goal is to predict a value or class for each sample, in MIL the goal is to assign a value to a whole set of samples. Such a set of items is called a bag, and each individual item in the bag is called an instance. In other words, a bag of instances is considered to be one data point. More formally, a bag is a function B: X → ℕ, where X is the space of instances. Given an instance x ∈ X, B(x) counts the number of occurrences of x in the bag B. Let 𝔅 be the class of such bag functions. Then the goal of a MIL model is to learn a bag-level concept c: 𝔅 → Y, where Y is the space of our target variable.
In the context of metagenomic classification, we consider the instances to be DNA reads. Our goal is to directly predict, for a given taxonomic rank, the distribution over taxa in the read set (the bag). So for each taxon, our output is a real number in [0, 1], denoting the portion of the reads in the read set that originated from that particular taxon. The motivation for this is that in a realistic set of reads, certain organisms tend to appear together. It might thus be possible to exploit the co-occurrence of organisms to gain better accuracy.
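As a concrete illustration, the bag-level target for a read set is simply the normalized count vector over taxa. The taxon names below are hypothetical placeholders:

```python
from collections import Counter

def abundance_target(read_labels, taxa):
    """Bag-level target: fraction of reads per taxon (sums to 1)."""
    counts = Counter(read_labels)
    total = len(read_labels)
    return [counts[t] / total for t in taxa]

# hypothetical genera, purely illustrative
taxa = ["Escherichia", "Salmonella", "Bacillus"]
reads = ["Escherichia"] * 6 + ["Salmonella"] * 3 + ["Bacillus"] * 1
assert abundance_target(reads, taxa) == [0.6, 0.3, 0.1]
```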
Our main contributions are:
A new method for learning sequence embeddings based on locality-sensitive hashing (LSH) with reduced memory requirements.
A new memory-efficient deep learning method for taxonomic classification.
A novel machine learning model for predicting the distribution over taxa in a read set, combining state-of-the-art deep DNA classification models with read-set-level aggregation in a multiple instance learning (MIL) setting.
A thorough empirical assessment of our proposed single-read classification models, showing significant memory reduction and superior performance when compared to equivalent mapping-based methods.
A thorough empirical assessment of our proposed MIL abundance estimation models, showing comparable performance in predicting the distributions over higher-level taxa from read sets, with significantly lower memory requirements.
2 Related Work
Traditional methods for metagenomic classification rely on the analysis of read mappings to classify each DNA fragment. Given a DNA read, one first matches k-mers against a large database of reference genomes to detect candidate segments of target genomes, and then applies approximate string matching to align the read to these candidate segments. A well-known and widely used alignment tool is BLAST, a general heuristic tool for aligning genomic sequences. Results from these matches can then be used to compute a taxonomic classification using MEGAN. Other mapping-based tools specifically designed for metagenomics include Centrifuge, Kraken, and MetaPhlAn. These methods trade sensitivity for scalability: for example, BLAST is highly sensitive but does not scale to databases of unassembled sequencing data, while more approximate methods like Kraken are well suited for such large databases. In particular, the accuracy of Kraken can be tailored to the desired size of the final database by selectively discarding stored k-mers, at the expense of an increased number of false negatives. In addition, tools such as Bracken have been developed for estimating the abundances of taxonomic ranks by post-processing read classifications. Moreover, recent deep learning approaches have outperformed these methods in classifying very long reads with high error rates.
Most previous machine learning attempts focused on 16S rRNA sequences due to their high sequence conservation across a wide range of species. An example is the RDP (Ribosomal Database Project) classifier, which uses a Naive Bayes classifier to classify 16S rRNA sequences. A disadvantage of this method is the loss of positional information due to the encoding of the sequence as a bag of 8-letter words; moreover, the generalizability of this model to sequencing data drawn from other genomic regions is unclear. Probabilistic topic modeling has similarly been used to classify 16S rRNA sequences at the taxonomic ranks from phylum to family. Another interesting approach uses Markov models to classify DNA reads and can even be combined with alignment methods to increase performance. CNN architectures have also been applied to classify 16S sequences, while other approaches propose recurrent neural networks on sequences. Further machine learning approaches include learning low-dimensional representations of reads based on their k-mer content, and combining locality-sensitive hashing based on LDPC codes with support vector machines.
More recent attempts to solve the general metagenomic classification problem use deep learning to tackle it as a supervised classification task. Two examples are GeNet, which attempts to leverage the hierarchical nature of taxonomic classification, and DeepMicrobes, which first learns embeddings of k-mers and subsequently uses those to classify each read. We use GeNet and a simplified version of DeepMicrobes as baselines and explain them in more detail in Section 3. These methods, however, require significant amounts of GPU memory to achieve high classification accuracy; the latter, for example, can reach memory requirements of up to 50 GB when large k-mers need to be encoded. Numerous methods have been proposed to reduce such large embeddings, including feature hashing and hash embeddings.
3 Models and Methods
For our experiments, we compare against both mapping-based and alignment-free methods such as Centrifuge  and Kraken , and the more recent deep learning techniques GeNet  and EmbedPool . Furthermore, we tackle the problem of abundance estimation by adding a multiple instance learning (MIL) pooling layer to our models, in order to capture interactions between reads and directly predict the distribution over species in metagenomic samples.
3.1 Data set generation
For training, validation, and evaluation of the deep learning models, we use synthetic reads generated from bacterial genomes from the NCBI RefSeq database, from which we use a subset of genomes comprising species similar to the data set used in previous work. We use NCBI's Entrez tool to download the genomes and the taxonomic data. The number of taxa at each taxonomic rank is summarized in Table S1 in Appendix A. The full list of accession numbers for the genomes used is included in our GitHub repository.
For training, we generate a data set large enough for the models to converge. We follow an iterative procedure similar to the one described in previous work to create mini-batches of reads. For each read in the mini-batch, its source genome is first selected uniformly at random. Then, the genome is used as input to the software InSilicoSeq in order to generate the read. We simulate error-free reads of 151 bp (the default length of InSilicoSeq) for training, while we generate more realistic reads with Illumina NovaSeq-type noise for validation and evaluation of our models. This approach allows us to evaluate the performance and generalization ability of our models in more realistic settings where sequencing noise is present. The evaluation of all methods was done on NovaSeq-type reads.
Training of the MIL models for abundance estimation differs: a batch consists of a small number of bags, each containing reads sampled using a more realistic distribution over the genomes. The procedure is similar to the one used by the CAMISIM simulator and is described in more detail in Section 3.1.1. An example rank-abundance curve for each taxonomic rank generated by this procedure is shown in Figure 1.
Each bag simulates a different microbial community, and hence the generation procedure is repeated for each bag. These more realistic bags allow the MIL models to capture the interactions between reads coming from related species, as well as potential overlap among reads originating from the same taxa.
Hyperparameter search was performed for all models. Details on the exact parameters can be found in Appendix B.
3.1.1 Sampling a realistic set of reads
In order to sample bags with a more realistic community of bacteria, we propose a method similar to the one used by CAMISIM. Given the set of all taxa at a higher level (e.g., genus or family), we sample one number per taxon from a lognormal distribution with fixed parameters μ and σ (equation (1)).
Then, for a taxon with several genomes associated with it, we include in our microbial community only a random subset of those genomes, whose size is drawn from a geometric distribution with a fixed success parameter.
To calculate the abundance of each genome belonging to a taxon, random numbers are again sampled from a lognormal distribution as in equation (1), one per selected genome, and the abundance of the genome is obtained by weighting the taxon-level abundance with these within-taxon draws.
Finally, all abundances are normalized to produce a probability vector over all the genomes in the data set. When sampling a read set, a genome is selected by sampling from this distribution, and reads are simulated from the selected genome using the software package InSilicoSeq.
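A minimal sketch of this sampling procedure follows. The parameter values μ, σ, and p below are illustrative placeholders, not the values used in our experiments:

```python
import numpy as np

def sample_community(taxa_to_genomes, mu=1.0, sigma=2.0, p=0.7, seed=0):
    """Sample a genome-level abundance vector for one simulated community."""
    rng = np.random.default_rng(seed)
    abundances = {}
    for taxon, genomes in taxa_to_genomes.items():
        taxon_abund = rng.lognormal(mu, sigma)        # taxon-level abundance
        n = min(len(genomes), rng.geometric(p))       # how many genomes to include
        chosen = rng.choice(genomes, size=n, replace=False)
        within = rng.lognormal(mu, sigma, size=n)     # within-taxon weights
        within /= within.sum()
        for g, w in zip(chosen, within):
            abundances[g] = taxon_abund * w
    total = sum(abundances.values())
    # normalize to a probability vector over the included genomes
    return {g: a / total for g, a in abundances.items()}

community = sample_community({"genusA": ["g1", "g2", "g3"], "genusB": ["g4"]})
assert abs(sum(community.values()) - 1.0) < 1e-9
```

Reads would then be drawn genome by genome from this distribution and passed to the read simulator.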
3.2 Mapping-based baselines
Kraken 2 [57, 56] is a state-of-the-art taxonomic classification method that attempts to exactly match k-mers to reference genomes in its database. Kraken 2.0.8-beta was used for this study. Kraken 2 constructs a database containing pairs of the canonical minimizers of k-mers and their respective lowest common ancestors. Additionally, Kraken 2 allows the user to specify a maximum hash table size in order to create databases that require less memory; this is achieved by subsampling, reducing the number of minimizers included in the hash table. However, the accuracy of this method deteriorates significantly as the size of the database decreases. In addition, with larger databases, the overhead of loading the database into memory can be significant, as shown in Table 2.
The size of the complete database for our selected genomes is 12 GB. For comparison purposes, we also build Kraken databases of three sizes in addition to the complete one: 8 GB, 500 MB, and 200 MB. The two smallest sizes in particular are comparable to the sizes of our proposed models.
For abundance estimation, we combine Kraken with Bracken. Bracken takes the classification results of Kraken and re-estimates the frequencies of fragments assigned at lower taxonomic ranks by applying Bayes' theorem. Bracken is especially useful in cases where a classification at a lower taxonomic rank (e.g., species) was not accurately determined by Kraken, but an assignment at a higher level (e.g., genus) was nevertheless made; Bracken can heuristically assign reads in these cases to lower taxonomic ranks.
Centrifuge compresses the genomes of all species by iteratively replacing the pair of most similar genomes with a much smaller compressed genome that excludes the identical parts shared by the original pair. Subsequently, an FM-index is built on top of the final genomic sequence. Classification is done by scoring exact matches of variable-length segments from the query sequences. For our selected reference genomes, the final database was 4.9 GB. Due to the nature of the compression method used, Centrifuge does not provide any option of further restricting the database size. Centrifuge version 1.0.4 was used in this study.
For abundance estimation, Centrifuge directly produces a Kraken style report containing the percentages for each taxon at various taxonomic ranks.
3.3 Deep learning baselines
GeNet leverages the hierarchical nature of the taxonomy of species to simultaneously classify DNA reads at all taxonomic ranks. The input encoding is similar to the positional embeddings used in Transformer models. Given an input read, an embedding is computed for each symbol over a vocabulary of size 5, corresponding to the four possible nucleotides A, C, T, G, and N (for unknown base pairs in the read). Embeddings of the absolute position of each letter are also computed, and the one-hot representation of the sequence is added to these two embeddings. The resulting matrix is passed to a ResNet-like neural network, which produces a final low-dimensional representation of the read. The main novelty of the architecture is the final layer used for classification, which comprises multiple softmax layers, one for each taxonomic rank. These layers are chained so that information from higher ranks can be propagated towards the lower ranks: the logits of the softmax layer for each rank combine the ResNet representation of the read with the softmax output of the previous rank, through trainable weight matrices and a ReLU non-linearity. To train the model, the cross-entropy losses of all softmax layers are averaged.
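One plausible way to realize such chained softmax layers is sketched below. This is a toy numpy version; the exact parameterization in GeNet may differ (the ReLU is omitted for simplicity), and the rank sizes are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def chained_softmax(h, Ws, Us):
    """Rank-by-rank outputs: each rank's logits combine the shared read
    representation h with the previous rank's softmax output."""
    outs, prev = [], None
    for W, U in zip(Ws, Us):
        logits = W @ h if prev is None else W @ h + U @ prev
        prev = softmax(logits)
        outs.append(prev)
    return outs

rng = np.random.default_rng(0)
h = rng.normal(size=16)                      # toy ResNet output
sizes = [3, 7, 12]                           # taxa per rank (coarse to fine, toy)
Ws = [rng.normal(size=(s, 16)) for s in sizes]
Us = [None] + [rng.normal(size=(s2, s1)) for s1, s2 in zip(sizes, sizes[1:])]
outs = chained_softmax(h, Ws, Us)
assert all(abs(o.sum() - 1) < 1e-9 for o in outs)  # each rank yields a distribution
```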
Among multiple architectures introduced for single-read classification, the best-performing is DeepMicrobes. It embeds k-mers into a latent representation, followed by a bidirectional LSTM, a self-attention layer, and a multi-layer perceptron (MLP). Unlike GeNet, this model can only be trained to classify a single taxonomic rank. Due to the significant amount of GPU memory required by the model, we implemented EmbedPool, a simpler version of DeepMicrobes (also described in the original paper), to use as a baseline. Because it excludes the LSTM and self-attention layers, EmbedPool is faster and requires less GPU memory while still achieving similar accuracy.
EmbedPool features an embedding layer for k-mers. To further reduce the memory footprint, the embedding matrix only includes an entry for each canonical k-mer, approximately halving the memory requirements; non-canonical k-mers use the embedding of their reverse complements. After the embedding is applied to each k-mer in a sequence, both max- and mean-pooling are performed on the resulting matrix and concatenated to yield a low-dimensional representation of the read. With embedding dimension d, this concatenation results in a vector of size 2d. An MLP with one hidden layer subsequently classifies the read, with ReLU as the activation function. The model is trained end-to-end using cross-entropy loss.
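A minimal numpy sketch of this read representation follows. Dimensions are toy values, and for simplicity the embedding table below is indexed over all k-mers rather than storing only the canonical half, as a real implementation would:

```python
import numpy as np

COMP = str.maketrans("ACGT", "TGCA")

def canonical(kmer):
    """Canonical form: lexicographic minimum of a k-mer and its reverse complement."""
    rc = kmer.translate(COMP)[::-1]
    return min(kmer, rc)

def kmer_index(kmer):
    """Base-4 integer encoding of the canonical k-mer."""
    return sum({"A": 0, "C": 1, "G": 2, "T": 3}[c] * 4**i
               for i, c in enumerate(canonical(kmer)))

def embed_pool(read, E, k=4):
    """Embed each canonical k-mer, then concatenate max- and mean-pooling -> 2d vector."""
    M = np.stack([E[kmer_index(read[i:i + k])] for i in range(len(read) - k + 1)])
    return np.concatenate([M.max(axis=0), M.mean(axis=0)])

d = 8
E = np.random.default_rng(0).normal(size=(4**4, d))  # toy embedding table
v = embed_pool("ACGTACGTGGCA", E, k=4)
assert v.shape == (2 * d,)
```

The resulting 2d vector would then be fed to the classifying MLP.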
To classify at multiple taxonomic ranks, one could run multiple instances of the model, each on a different GPU. Even though EmbedPool has a lower GPU memory footprint, the requirements can still scale very quickly with k: for a moderate k, the model requires 3.26 GB of memory, and increasing k by just one or two further grows the model to 13 GB and 50 GB, respectively. The training requirements can be much higher still, due to the memory needed during backpropagation and for storing intermediate values, making these models very difficult to train or even to fit in GPU memory.
3.4 Proposed memory-efficient models
In this section, we describe two novel models based on EmbedPool that are very memory-efficient while achieving performance comparable to both deep learning and mapping-based baselines.
As explained above, the largest portion of the memory is used for storing the embedding matrix, which can make the models unusable for larger values of k, even though a large k can significantly improve the performance of the resulting model. A simple but less effective fix is to set the vocabulary size to a predefined number B such that B ≪ 4^k, and to use a hash function to map each k-mer to an embedding that is shared by all k-mers that collide in the same bucket. This is also known as feature hashing. More formally, a hash function here is a function h: Σ^k → Z_B, where Σ^k is the set of all k-mers and Z_B is the set of integers modulo B. Given a k-mer x, h(x) can then be used as an index into the embedding matrix. The problem with this approach is that collisions can be detrimental to the training of the model, since two very dissimilar k-mers can be mapped to the same embedding vector, each pushing the gradient during training in a completely different direction. Furthermore, to keep the model small, B cannot be made much larger: each embedding vector has dimension d, so the total embedding size grows as B · d, and increasing B would again significantly increase the GPU requirements. This results in a high probability that every k-mer collides with at least one other k-mer.
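Feature hashing can be sketched as follows. The bucket count B and the concrete hash function are illustrative stand-ins:

```python
import hashlib

def feature_hash(kmer, B):
    """Map a k-mer to one of B shared embedding rows via a plain (non-local) hash."""
    h = int.from_bytes(hashlib.blake2b(kmer.encode(), digest_size=8).digest(), "big")
    return h % B

B = 2**20
idx = feature_hash("ACGTACGTACGT", B)
assert 0 <= idx < B
# Two completely dissimilar k-mers may land in the same bucket and thus
# share one embedding vector, which is exactly the problem described above.
```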
3.4.1 LSH-EmbedPool

From our informal analysis above, it is clear that two approaches can reduce the burden of collisions: a) reduce the number of collisions, or b) avoid collisions of dissimilar k-mers. LSH-EmbedPool follows the second approach by replacing the hash function with a locality-sensitive hashing (LSH) scheme [22, 27]. An LSH is a function that, given two very similar inputs, outputs the same hash with high probability, in contrast to a regular hash function, where collisions happen completely at random. LSHs have been widely used in near-duplicate detection for documents in natural language processing, as well as by biologists for finding similar gene expressions in genome-wide association studies. For our case, we use an LSH based on MinHash. In particular, we use the software sourmash [8, 7]: as a first step, a list of hashes of all t-mers in a k-mer (where t < k) is created. The resulting hashes are sorted numerically in ascending order and the second half of the list is discarded. The remaining sorted hashes are combined by one more application of a fast non-cryptographic hash function; for this purpose, the library xxHash is used. Discarding some of the hashes allows the algorithm to tolerate minor differences between k-mers, such as single-nucleotide changes, and increases the probability of such k-mers being hashed to the same bucket. The parameter t controls the sensitivity of the scheme, with smaller values resulting in a more sensitive scheme; in our experiments, we found that setting t to approximately half of k yields the best accuracy. To keep memory requirements to a minimum, we use a small vocabulary size, which results in a model of only 457.96 MB, a large reduction compared to a standard EmbedPool with the same k. As with the standard EmbedPool, the model is trained end-to-end using cross-entropy loss.
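The scheme described above can be sketched roughly as follows. This is not the sourmash implementation; `blake2b` stands in for xxHash, and t, k, and B are illustrative:

```python
import hashlib

def _h(s):
    """Stand-in for a fast non-cryptographic hash such as xxHash."""
    return int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def lsh_bucket(kmer, t, B):
    """Locality-sensitive bucket for a k-mer: hash all t-mers, keep only the
    smaller half of the sorted hashes, then hash that tuple into one of B buckets."""
    hashes = sorted(_h(kmer[i:i + t]) for i in range(len(kmer) - t + 1))
    kept = hashes[: len(hashes) // 2]          # discard the larger half
    return _h(",".join(map(str, kept))) % B

B = 2**20
a = lsh_bucket("ACGTACGTACGTAC", t=7, B=B)
assert a == lsh_bucket("ACGTACGTACGTAC", t=7, B=B)  # deterministic
```

Because only the surviving half of the t-mer hashes determines the bucket, a single-nucleotide change often leaves the bucket unchanged, which is the locality property exploited by LSH-EmbedPool.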
Figure 2(a) shows an overview of our architecture.
3.4.2 Hash-EmbedPool

In contrast to LSH-EmbedPool, Hash-EmbedPool follows the approach of reducing collisions. This can be achieved using hash embeddings, a hybrid between a standard embedding and the naive feature hashing method. As with LSH-EmbedPool, there is a pool of B shared embedding vectors. For each k-mer, we use n different hash functions h_1, …, h_n to select n embedding vectors from the pool of shared embeddings. The final vector representation of the k-mer is a weighted sum of the selected vectors, where the n weights for each possible k-mer are stored in a second learnable matrix. More formally, let id(x) be a function that maps each k-mer x to its unique index, E the shared embedding matrix, and w(x) the weight vector of x. The final embedding of x can then be written as e(x) = Σ_{i=1}^{n} w_i(x) · E[h_i(id(x))]. Intuitively, even though collisions can occur in the shared embedding matrix, the weighted sum is unique for each different k-mer, so collisions in the final embedding are reduced significantly. Increasing n or B further reduces the number of collisions at the cost of higher memory requirements. In our model, we found a small n to be sufficient, while B was kept low to limit memory requirements, resulting in a final model size of 553.90 MB, a large reduction from a standard EmbedPool using the same k. The hash functions are sampled at random from a universal hash family. As with the standard EmbedPool, the model is trained end-to-end using cross-entropy loss. An overview of the architecture is shown in Figure 2(b).
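The weighted-sum construction can be sketched as follows. All sizes are toy values, and the two affine hash functions are simplified stand-ins for a properly sampled universal hash family:

```python
import numpy as np

def hash_embedding(kmer_id, E, W, hashes):
    """Hash embedding: weighted sum of n shared vectors selected by n hash functions."""
    idx = [h(kmer_id) for h in hashes]   # n row indices into the shared table E
    w = W[kmer_id]                       # n learnable weights, unique per k-mer
    return sum(wi * E[i] for wi, i in zip(w, idx))

B, d, n, V = 1024, 16, 2, 4**6           # shared rows, dim, #hashes, vocab (toy sizes)
rng = np.random.default_rng(0)
E = rng.normal(size=(B, d))              # shared embedding pool
W = rng.normal(size=(V, n))              # per-k-mer importance weights
hashes = [lambda x, a=a: (a * x + 7) % B for a in (131, 137)]  # toy affine hashes
v = hash_embedding(42, E, W, hashes)
assert v.shape == (d,)
```

Even if two k-mers share one of their n selected rows, their distinct weight vectors make the final embeddings differ with high probability.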
3.5 Proposed MIL methods
In this section, we present new models for abundance estimation in metagenomic read sets. Since the co-occurrence of species can be very informative in real-world samples (see Fig. 1), we combine the presented deep learning approaches with permutation-invariant aggregators for multiple instance learning.
3.5.1 GeNet + MIL pooling
A mini-batch of bags of reads is used as input. The first part of GeNet, consisting of the embedding and the ResNet, processes each read individually. A pooling layer then groups all reads in each bag to create bag-level embeddings; this is also referred to as MIL pooling [17, 10]. The output is passed to the final layers of GeNet to produce a probability distribution over the taxa at each taxonomic rank. As a loss function, we use the Jensen-Shannon (JS) divergence between the predicted distribution and the actual distribution of the bag.
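For reference, the JS divergence between a predicted and a target distribution can be computed as in the small numpy sketch below; the smoothing constant `eps` is an implementation detail we add here for numerical stability:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, dtype=float) + eps, np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)                              # mixture distribution
    kl = lambda a, b: np.sum(a * np.log(a / b))    # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

assert js_divergence([0.5, 0.5], [0.5, 0.5]) < 1e-9   # identical -> ~0
assert js_divergence([1, 0], [0, 1]) > 0.1            # disjoint -> ~ln 2
```

Unlike the KL divergence, the JS divergence is symmetric and bounded, which makes it a convenient loss between predicted and true abundance vectors.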
Given that a bag is a set, we require the MIL pooling layer to be permutation-invariant: permuting the reads of the bag should produce the same result. To this end, we utilize DeepSets, which can be formally described as

f(X) = ρ( Σ_{x ∈ X} φ(x) ).

In other words, each element x of a set X is first processed by a function φ; the outputs are all summed together, and the result is subsequently transformed by a function ρ. The authors of DeepSets proved that all valid functions operating on subsets of countable sets, or on fixed-size subsets of uncountable sets, can be written in this form. In our case, the inputs are read embeddings, and we only input bags of fixed size, hence the assumptions of Theorem 2 of DeepSets are satisfied. ρ is modelled with a small MLP with one hidden layer, while the ResNet part of the network models the function φ.
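The construction above can be sketched with toy stand-ins for φ and ρ (in our model these are the ResNet and the small MLP, respectively):

```python
import numpy as np

def deepsets(bag, phi, rho):
    """DeepSets: f(X) = rho(sum_x phi(x)); invariant to the order of instances."""
    return rho(np.sum([phi(x) for x in bag], axis=0))

phi = lambda x: np.tanh(x)               # per-instance transform (toy)
rho = lambda z: z / np.linalg.norm(z)    # set-level transform (toy)
bag = [np.array([1.0, 2.0]), np.array([-0.5, 0.3])]
out1 = deepsets(bag, phi, rho)
out2 = deepsets(bag[::-1], phi, rho)     # same bag, different order
assert np.allclose(out1, out2)           # permutation invariance
```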
Alternatively to DeepSets, we also consider an attention-based MIL pooling layer, motivated by the fact that it allows the model to attend to specific reads originating from each species. In attention-based pooling, the elements h_1, …, h_N of the input set are combined through weighted averages whose weights are themselves computed from the set, so that the result is invariant when we permute the elements of the input set:

z = Σ_{i=1}^{N} a_i h_i,  with  a_i = exp(w^T tanh(V h_i)) / Σ_{j=1}^{N} exp(w^T tanh(V h_j)),

where h_i is an element of the input set, and w and V are trainable parameters. The attention weights a_i are therefore calculated with an MLP with one hidden layer with tanh non-linearity and a softmax activation at the end; several such weighted combinations can be computed in parallel to create a set of outputs. The flexibility of the MIL pooling can be further increased by introducing a gating mechanism:

a_i = exp(w^T (tanh(V h_i) ⊙ σ(U h_i))) / Σ_{j=1}^{N} exp(w^T (tanh(V h_j) ⊙ σ(U h_j))),

where U is an additional learnable matrix, σ is the sigmoid activation function, and ⊙ is the element-wise product. As shown in Appendix B, whether to use the gating mechanism is treated as an additional hyperparameter for our models. Following the attention mechanism, the output is flattened to create a single vector for each bag, which is subsequently processed by GeNet's final layers to output the predicted distributions. The overall architecture can be seen in Figure 3(a).
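The gated variant can be sketched as follows, with toy dimensions; a trained model would learn V, U, and w rather than sampling them:

```python
import numpy as np

def gated_attention_pool(H, V, U, w):
    """Gated attention MIL pooling: z = sum_i a_i * h_i over the rows of H."""
    gate = 1.0 / (1.0 + np.exp(-(H @ U.T)))        # sigmoid gate
    scores = np.tanh(H @ V.T) * gate               # gated hidden representation
    logits = scores @ w
    a = np.exp(logits - logits.max())              # softmax over instances
    a /= a.sum()
    return a @ H

rng = np.random.default_rng(0)
N, d, h = 5, 8, 4                                  # instances, embed dim, hidden dim
H = rng.normal(size=(N, d))                        # N instance embeddings
V, U, w = rng.normal(size=(h, d)), rng.normal(size=(h, d)), rng.normal(size=h)
z = gated_attention_pool(H, V, U, w)
assert z.shape == (d,)
perm = rng.permutation(N)
assert np.allclose(z, gated_attention_pool(H[perm], V, U, w))  # permutation invariant
```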
3.5.2 EmbedPool + MIL pooling
Similarly to subsection 3.5.1, we use EmbedPool to process the reads individually. A MIL pooling layer is added after the mean- and max-pooling layers, and its output is fed to the rest of the model to predict the distribution. The JS divergence is used as a loss function. For MIL pooling, we use DeepSets and attention-based pooling as before. An overview of the model can be seen in Figure 3(b).
3.5.3 Hash-EmbedPool + MIL pooling
We follow the same approach as EmbedPool + MIL pooling, but each read is instead processed by a Hash-EmbedPool as introduced in section 3.4.2.
4 Results and Discussion
4.1 Single read classification
In this section, we analyze the memory requirements and performance of our proposed memory-efficient models for taxonomic classification and compare them with our baselines. All models were evaluated on reads with Illumina NovaSeq-type noise.
Our best performing model is LSH-EmbedPool, outperforming both GeNet and standard EmbedPool in accuracy. In the original publication, GeNet was trained on considerably longer PacBio and Illumina reads. Since in most cases genome sequencing technologies like Illumina produce shorter reads in the range of 100 bp to 300 bp, we chose to train all our models on reads of length 151 bp. This difference explains the discrepancy between our results and the originally reported ones: since GeNet uses one-hot encoding rather than k-mer encoding, it might be unable to extract useful features shared across the whole genome from shorter reads. Similarly, Hash-EmbedPool outperforms both deep learning baselines. Compared to Kraken, our models achieve significantly higher accuracy when Kraken's database is subsampled to a size similar to that of our models: at 500 MB, Kraken's accuracy drops substantially, and reducing the database even further brings its accuracy below even our deep learning baseline EmbedPool. Note that EmbedPool was trained with a small k, requiring only 70.51 MB of memory. Furthermore, the top-5 accuracy achieved by LSH-EmbedPool is comparable to the accuracy of Kraken restricted to 8 GB. Centrifuge achieves the highest classification performance, with a database size of 4.9 GB, but this comes at the cost of speed, with Centrifuge processing the fewest reads per minute when run on 4 threads. Hash-EmbedPool is comparable to Kraken in throughput when the time required to pre-process reads is excluded, and slower when it is included. However, Kraken's simple exact k-mer matching technique makes it the fastest overall, even when the time required to load the whole database into memory is taken into account; as Kraken's database size is reduced, the classification speed increases and the loading time becomes less significant. Note that for the deep learning models, a single GPU was used.
Classification speed can scale linearly when more GPUs are used to process multiple mini-batches of reads simultaneously. Our results demonstrate that the proposed models effectively learn memory-efficient representations while exhibiting a tradeoff between classification accuracy, speed, and memory. The classification accuracy of all models is summarized in Table 1, while speed and memory requirements are shown in Table 2.
Table 1. Single-read classification accuracy:

| Model | Accuracy | Top-5 accuracy |
|---|---|---|
| Kraken 200 MB | | - |
| Kraken 500 MB | | - |
| Kraken 8 GB | | - |

Table 2. Memory footprint and classification speed:

| Model | Memory | Speed (reads/min) |
|---|---|---|
| GeNet | 61.46 MB | 561 - 577 |
| EmbedPool | 70.51 MB | 342 - 607 |
| Hash-EmbedPool (ours) | 553.90 MB | 342 - 607 |
| LSH-EmbedPool (ours) | 457.96 MB | 147 - 607 |
| Kraken 200 MB | 200 MB | 31,418 - 43,048 |
| Kraken 500 MB | 500 MB | 23,077 - 39,491 |
| Kraken 8 GB | 8 GB | 4,769 - 23,441 |
| Kraken | 12 GB | 653 - 13,275 |
| Centrifuge | 4.9 GB | 275 - 278 |
4.2 Abundance estimation with MIL pooling
Following the comparison of our methods for taxonomic classification, we proceed to the related problem of abundance estimation. For the standard GeNet and EmbedPool, the microbiota distribution was calculated by classifying each read independently, while for our proposed models, the distribution was predicted directly by the models. An example of the output of the MIL models is shown in Figure 4.
All models were evaluated on a total of bags of NovaSeq-type reads each. Both GeNet + Deepset and GeNet + Attention perform better than all other deep learning models and comparably to the mapping-based methods at higher taxonomic ranks. Moreover, this accuracy is achieved with a model size 4 times smaller compared to the smallest Kraken baseline and 200 times smaller compared to the largest one. Notably, Kraken’s abundance estimation ability does not decrease significantly at higher taxonomic ranks when the database size is restricted. This is due to the fact that Kraken still has enough correctly classified reads to calculate an approximation of the true distribution after discarding all the unclassified reads. Similarly, we found that increasing the size of the deep learning models does not seem to always improve the abundance estimation performance, even though single-read accuracy increases significantly (as was also shown by ).
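For the single-read baselines, the abundance profile is obtained by normalizing per-read classification counts, and predicted profiles are scored against the ground truth with the Jensen-Shannon divergence (the metric reported for Table S2 in the appendix). A minimal numpy sketch, with illustrative read predictions:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (base 2)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p /= p.sum(); q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Abundance profile from independent read classification: normalized counts.
read_preds = [0, 0, 1, 2, 2, 2]          # predicted taxon id per read (toy data)
counts = np.bincount(read_preds, minlength=3)
predicted = counts / counts.sum()
true = np.array([0.3, 0.2, 0.5])
print(round(js_divergence(predicted, true), 4))
```

The divergence is symmetric, zero for identical profiles, and bounded by 1 in base 2, which makes it convenient for comparing models across taxonomic ranks.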
| Model | JS-divergence |
|---|---|
| GeNet + Deepset (ours) | |
| GeNet + Attention (ours) | |
| EmbedPool + Deepset (ours) | |
| EmbedPool + Attention (ours) | |
| Hash-EmbedPool + Deepset (ours) | |
| Hash-EmbedPool + Attention (ours) | |
| Kraken 200 MB | |
| Kraken 500 MB | |
| Kraken 8 GB | |
As explained in Section 1, we believe that the improvement in performance at higher taxonomic ranks stems from the fact that the models can exploit the co-occurrence of species in realistic settings or detect overlaps of reads in a bag. A drawback of our MIL models is that, since their performance relies on the special structure of the bags, they are unlikely to perform well when presented with bags with an unrealistic distribution of species (e.g., a bag with a uniformly random distribution over all species). The models therefore trade flexibility for performance. Moreover, our proposed MIL models perform poorly at the finer taxonomic ranks, possibly because in the MIL setting the models only observe a summary of the bag rather than a label for each instance, which may make it harder for them to learn adequate features. However, the better performance at higher ranks can prove beneficial for real-world metagenomic data sets where reference data is not sufficiently available to train deep learning models accurately [1, 51]. Table 3 compares our MIL models and the values they achieve. A comparison of GeNet + Deepset (our best performing model) and standard GeNet can be seen in Figure S1 in Appendix A.
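The two permutation-invariant pooling layers can be illustrated with a small numpy sketch. The weight matrices below are random stand-ins for parameters that are learned in the actual models, and the dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def deepset_pool(instance_embs, W_phi, W_rho):
    """Deepsets: transform each instance, sum, then transform the sum."""
    h = np.tanh(instance_embs @ W_phi)    # per-instance transform phi
    pooled = h.sum(axis=0)                # permutation-invariant sum
    return np.tanh(pooled @ W_rho)        # bag-level transform rho

def attention_pool(instance_embs, V, w):
    """Attention-based MIL pooling: learned weighted mean over the bag."""
    scores = np.tanh(instance_embs @ V) @ w      # one score per instance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over the bag
    return weights @ instance_embs               # convex combination

bag = rng.normal(size=(100, 32))                 # 100 read embeddings, dim 32
W_phi, W_rho = rng.normal(size=(32, 64)), rng.normal(size=(64, 16))
V, w = rng.normal(size=(32, 8)), rng.normal(size=(8,))

z1 = deepset_pool(bag, W_phi, W_rho)
z2 = attention_pool(bag, V, w)

# Permutation invariance: shuffling the reads leaves the bag embedding unchanged
perm = rng.permutation(len(bag))
assert np.allclose(z1, deepset_pool(bag[perm], W_phi, W_rho))
assert np.allclose(z2, attention_pool(bag[perm], V, w))
```

Both layers map a variable-size set of read embeddings to a single fixed-size bag embedding, which is what allows the downstream head to predict a distribution over taxa directly.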
In this work, we introduced memory-efficient models for taxonomic classification. In particular, our method for combining locality-sensitive hashing with learnable embeddings is general and could be used for other tasks where the large size of the embedding matrix would otherwise hinder the training and usability of the model. Our proposed architectures outperformed all other methods within the same memory restrictions and were comparable to mapping-based methods that require larger amounts of memory.
Subsequently, we tackled the problem of directly predicting the distribution of the microbiota in metagenomic samples. In contrast to previous methods that are based on classifying single reads, we formulated the problem as a multiple instance learning task and used permutation-invariant pooling layers in order to learn low-dimensional embeddings for whole sets of reads. We showed that our proposed method can perform better than the baseline deep learning models and comparable to mapping-based methods at the higher taxonomic ranks. Moreover, our best performing MIL models were significantly smaller compared to the mapping-based methods. The MIL models presented could be used as an initial step to filter or preselect the potential genomes that more traditional alignment methods would need to take as input in order to increase their performance.
Future work could include exploring alternative base architectures or more sophisticated pooling methods that can better capture the interactions between reads. For example, one could use Janossy pooling, another permutation-invariant method that can capture higher-order interactions between the elements of a set. The models could also be combined with a probabilistic component, such as a Gaussian process over DNA sequences, to allow for uncertainty estimates on the predictions. Finally, as previously mentioned, a possible issue is that observing only the summary of the read set can make it more difficult for the model to learn adequate features for the individual reads. A solution could be to first learn better instance-level embeddings to use as input, in order to aid the model in learning more suitable bag-level embeddings.
Appendix A Supplementary material
To train and test our models, we downloaded genomes from the NCBI RefSeq database. The full list of accession numbers for the genomes used in our data set can be found in our GitHub repository (https://github.com/ag14774/META2).
| Rank | # of taxa |
|---|---|
Figure S1 shows the performance comparison between our best performing MIL pooling method, GeNet + Deepset, and standard GeNet. It is clear that MIL pooling provides significant improvements at the higher taxonomic ranks.
In addition to training on error-free reads, we attempted to train on noisy NovaSeq reads to determine whether this provides any benefit. The JS-divergence achieved by all models when trained on NovaSeq-type reads is shown in Table S2, while a comparison of our best performing MIL model, GeNet + Deepset, and GeNet is depicted in Figure S1.
| Model | JS-divergence |
|---|---|
| GeNet + Deepset (ours) | |
| GeNet + Attention (ours) | |
| EmbedPool + Deepset (ours) | |
| EmbedPool + Attention (ours) | |
| Hash-EmbedPool + Deepset (ours) | |
| Hash-EmbedPool + Attention (ours) | |
| Kraken 200 MB | |
| Kraken 500 MB | |
| Kraken 8 GB | |
Appendix B Hyperparameter grid for the trained models
To train our models, we performed random search over the following hyperparameter grid:
General parameters for single-read models:

| Hyperparameter | Values |
|---|---|
| Batch size | 64, 128, 256, 512, 1024, 2048 |

General parameters for MIL models:

| Hyperparameter | Values |
|---|---|
| Bag size | 64, 128, 512, 1024, 2048 |
| Batch size | 1, 2, 4, 8 |

GeNet-specific parameters:

| Hyperparameter | Values |
|---|---|
| Output size of ResNet | 128, 256, 512, 1024 |
| Use GeNet initialization scheme | True, False |
| BatchNorm running statistics | True, False |
| Learning rate | 0.001, 0.0005, 1.0 (for SGD) |
| Nesterov momentum (SGD only) | 0.0, 0.9, 0.99 |

EmbedPool-specific parameters:

| Hyperparameter | Values |
|---|---|
| Size of MLP hidden layer | 1000, 3000 |
| Optimizer | Adam, RMSprop, SGD |
| Nesterov momentum (SGD only) | 0.0, 0.5, 0.9, 0.99 |
| Learning rate | 0.001, 0.0005 |

Deepset pooling layer:

| Hyperparameter | Values |
|---|---|
| Deepset hidden layer size | 128, 256, 1024 |
| Deepset output size | 128, 1024 |
| Dropout before network | 0.0, 0.2, 0.5, 0.8 |
| Deepset activation | ReLU, Tanh, ELU |

Attention pooling layer:

| Hyperparameter | Values |
|---|---|
| Hidden layer size | 128, 256, 512, 1024 |
| Gated attention | False, True |
| Attention rows | 1, 10, 30, 60 |

LSH parameters:

| Hyperparameter | Values |
|---|---|
| LSH parameter | 3, 5, 7 |
| Number of hashes | 2, 3 |
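Random search over such a grid amounts to independently sampling one value per hyperparameter for each trial. A minimal sketch, using an illustrative subset of the grid above:

```python
import random

# Hypothetical subset of the hyperparameter grid, for illustration only.
grid = {
    "batch_size": [64, 128, 256, 512, 1024, 2048],
    "optimizer": ["Adam", "RMSprop", "SGD"],
    "learning_rate": [0.001, 0.0005],
}

def sample_config(grid, rng):
    """Draw one value per hyperparameter, uniformly at random."""
    return {name: rng.choice(values) for name, values in grid.items()}

rng = random.Random(0)          # seeded for reproducibility
trials = [sample_config(grid, rng) for _ in range(5)]
for cfg in trials:
    print(cfg)
```

Each sampled configuration would then be trained and scored on a validation set, keeping the best-performing one.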
- (2015) Geospatial resolution of human and bacterial diversity with city-scale metagenomics. Cell systems 1 (1), pp. 72–87. Cited by: §1, §4.2.
- (2019) An efficient, scalable and exact representation of high-dimensional color information enabled via de Bruijn graph search. In International Conference on Research in Computational Molecular Biology, pp. 1–18. Cited by: §1.
- (2019) COBS: a compact bit-sliced signature index. In International Symposium on String Processing and Information Retrieval, pp. 285–303. Cited by: §1.
- (2019) Ultrafast search of all deposited bacterial and viral genomic data. Nature biotechnology 37 (2), pp. 152. Cited by: §1.
- (2009) Phymm and phymmbl: metagenomic phylogenetic classification with interpolated markov models. Nature methods 6 (9), pp. 673. Cited by: §2.
- (2010) RAPID detection of gene–gene interactions in genome-wide association studies. Bioinformatics 26 (22), pp. 2856–2862. Cited by: §3.4.1.
- (1997) On the resemblance and containment of documents. In Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No. 97TB100171), pp. 21–29. Cited by: §3.4.1.
- (2016) Sourmash: a library for MinHash sketching of DNA. Journal of Open Source Software 1 (5), pp. 27. Cited by: §3.4.1.
- (2019) A deep learning approach to pattern recognition for short DNA sequences. bioRxiv, pp. 353474. Cited by: §2.
- (2018) Multiple instance learning: a survey of problem characteristics and applications. Pattern Recognition 77, pp. 329–353. Cited by: §1, §3.5.1.
- (1979) Universal classes of hash functions. Journal of computer and system sciences 18 (2), pp. 143–154. Cited by: §3.4.2.
- (2012) xxHash - Extremely fast hash algorithm. https://github.com/Cyan4973/xxHash [Online; accessed 30-January-2020]. Cited by: §3.4.1.
- (2016) The metagenomics and metadesign of the subways and urban biomes (MetaSUB) international consortium inaugural meeting report. Springer. Cited by: §1.
- (2019) Global genetic cartography of urban metagenomes and anti-microbial resistance. BioRxiv, pp. 724526. Cited by: §1.
- (2000) Opportunistic data structures with applications. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 390–398. Cited by: §3.2.2.
- (2018) Scalable Gaussian processes on discrete domains. arXiv preprint arXiv:1810.10368. Cited by: §5.
- (2010) A review of multi-instance learning assumptions. The Knowledge Engineering Review 25 (1), pp. 1–25. Cited by: §1, §3.5.1.
- (2019) CAMISIM: simulating metagenomes and microbial communities. Microbiome 7 (1), pp. 17. Cited by: §3.1.1, §3.1.
- (2018) Supervised learning on synthetic data for reverse engineering gene regulatory networks from experimental time-series. bioRxiv, pp. 356477. Cited by: §2.
- (2018) Variation graph toolkit improves read mapping by representing genetic variation in the reference. Nature biotechnology 36 (9), pp. 875–879. Cited by: §1.
- (2017) Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1243–1252. Cited by: §3.3.1.
- (1999) Similarity search in high dimensions via hashing. In VLDB, Vol. 99, pp. 518–529. Cited by: §3.4.1.
- (2018) Simulating Illumina metagenomic data with InSilicoSeq. Bioinformatics 35 (3), pp. 521–522. Cited by: §3.1.
- (2014) Tackling soil diversity with the assembly of large, complex metagenomes. Proceedings of the National Academy of Sciences 111 (13), pp. 4904–4909. Cited by: §1.
- (2007) MEGAN analysis of metagenomic data. Genome research 17 (3), pp. 377–386. Cited by: §2.
- (2018) Attention-based deep multiple instance learning. arXiv preprint arXiv:1802.04712. Cited by: §3.5.1, §3.5.1.
- (1998) Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pp. 604–613. Cited by: §3.4.1.
- (2019) Sparse binary relation representations for genome graph annotation. In International Conference on Research in Computational Molecular Biology, pp. 120–135. Cited by: §1.
- (2016) Centrifuge: rapid and sensitive classification of metagenomic sequences. Genome research 26 (12), pp. 1721–1729. Cited by: §2, §3.2.2, §3.
- (2015) Probabilistic topic modeling for the analysis and classification of genomic sequences. BMC bioinformatics 16 (6), pp. S2. Cited by: §2.
- (2019) DeepMicrobes: taxonomic classification for metagenomics with deep learning. bioRxiv, pp. 694851. Cited by: Table S2, §2, §3.3.2, §3, §4.2, Table 3.
- (1991) Divergence measures based on the shannon entropy. IEEE Transactions on Information theory 37 (1), pp. 145–151. Cited by: §3.5.1.
- (2017) Bracken: estimating species abundance in metagenomics data. PeerJ Computer Science 3, pp. e104. Cited by: §2, §3.2.1.
- (2019) Metagenomic binning through low-density hashing. Bioinformatics 35 (2), pp. 219–226. Cited by: §2.
- (2019) Continuous embeddings of dna sequencing reads and application to metagenomics. Journal of Computational Biology 26 (6), pp. 509–518. Cited by: §2.
- (2012) A framework for human microbiome research. Nature 486 (7402), pp. 215. Cited by: §1.
- (2017) Succinct colored de Bruijn graphs. Bioinformatics 33 (20), pp. 3181–3187. Cited by: §1.
- (2018) Janossy pooling: learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900. Cited by: §5.
- (2019) Dynamic compression schemes for graph coloring. Bioinformatics 35 (3), pp. 407–414. Cited by: §1.
- (2018) Mantis: a fast, small, and exact large-scale sequence-search index. Cell systems 7 (2), pp. 201–207. Cited by: §1.
- (2018) A universal SNP and small-indel variant caller using deep neural networks. Nature biotechnology 36 (10), pp. 983. Cited by: §1.
- (2010) A human gut microbial gene catalogue established by metagenomic sequencing. Nature 464 (7285), pp. 59. Cited by: §1.
- (2012) A tale of three next generation sequencing platforms: comparison of Ion Torrent, Pacific Biosciences and Illumina MiSeq sequencers. BMC genomics 13 (1), pp. 341. Cited by: §4.1.
- (2019) GeNet: deep representations for metagenomics. arXiv preprint arXiv:1901.11015. Cited by: Table S1, Table S2, §1, §2, §2, §3.1, §3.1, §3.3.1, §3, §4.1, Table 3.
- (1996) Entrez: molecular biology database and retrieval system. In Methods in enzymology, Vol. 266, pp. 141–162. Cited by: §3.1.
- (2012) Metagenomic microbial community profiling using unique clade-specific marker genes. Nature methods 9 (8), pp. 811. Cited by: §2.
- (2019) Megatron-LM: training multi-billion parameter language models using GPU model parallelism. arXiv preprint arXiv:1909.08053. Cited by: §3.3.2.
- (2016) Fast search of thousands of short-read sequencing experiments. Nature biotechnology 34 (3), pp. 300. Cited by: §1.
- (2017) Hash embeddings for efficient word representations. In Advances in Neural Information Processing Systems, pp. 4928–4936. Cited by: §2, §3.4.2.
- (2013) Locality sensitive hashing for similarity search using mapreduce on large scale data. In Intelligent Information Systems Symposium, pp. 171–178. Cited by: §3.4.1.
- (2018) The reconstruction of 2,631 draft metagenome-assembled genomes from the global oceans. Scientific data 5, pp. 170203. Cited by: §1, §4.2.
- (2007) Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy. Appl. Environ. Microbiol. 73 (16), pp. 5261–5267. Cited by: §2.
- (2009) Feature hashing for large scale multitask learning. In Proceedings of the 26th annual international conference on machine learning, pp. 1113–1120. Cited by: §2, §3.4.1.
- (2013) DNA sequencing costs: data from the NHGRI genome sequencing program (GSP). Cited by: §1.
- (2006) Database resources of the national center for biotechnology information. Nucleic acids research 35 (suppl_1), pp. D5–D12. Cited by: Appendix A, §1, §3.1.
- (2019) Improved metagenomic analysis with Kraken 2. Genome biology 20 (1), pp. 257. Cited by: §3.2.1.
- (2014) Kraken: ultrafast metagenomic sequence classification using exact alignments. Genome biology 15 (3), pp. R46. Cited by: §2, §3.2.1, §3.
- (2017) Deep sets. In Advances in neural information processing systems, pp. 3391–3401. Cited by: §3.5.1.
- (2018) A primer on deep learning in genomics. Nature genetics, pp. 1. Cited by: §1.