Abstract

T-distributed stochastic neighbour embedding (t-SNE) is a widely used data visualisation technique. It differs from its predecessor SNE by the low-dimensional similarity kernel: the Gaussian kernel was replaced by the heavy-tailed Cauchy kernel, solving the "crowding problem" of SNE. Here, we develop an efficient implementation of t-SNE for a t-distribution kernel with an arbitrary degree of freedom $\nu$, with $\nu\to\infty$ corresponding to SNE and $\nu=1$ corresponding to the standard t-SNE. Using theoretical analysis and toy examples, we show that $\nu<1$ can further reduce the crowding problem and reveal finer cluster structure that is invisible in standard t-SNE. We further demonstrate the striking effect of heavier-tailed kernels on large real-life data sets such as MNIST, single-cell RNA-sequencing data, and the HathiTrust library. We use domain knowledge to confirm that the revealed clusters are meaningful. Overall, we argue that modifying the tail heaviness of the t-SNE kernel can yield additional insight into the cluster structure of the data.


 

Heavy-tailed kernels reveal a finer cluster structure in t-SNE visualisations

 

Dmitry Kobak1  George Linderman2  Stefan Steinerberger3  Yuval Kluger2 4  Philipp Berens1 


1 Institute for Ophthalmic Research, University of Tübingen, Germany. 2 Applied Mathematics Program, Yale University, New Haven, USA. 3 Department of Mathematics, Yale University, New Haven, USA. 4 Department of Pathology, Yale School of Medicine, New Haven, USA. Correspondence to: Dmitry Kobak <dmitry.kobak@uni-tuebingen.de>.
Introduction

T-distributed stochastic neighbour embedding (t-SNE) (van der Maaten & Hinton, 2008) and related methods (Tang et al., 2016; McInnes et al., 2018) are used for data visualisation in many scientific fields dealing with thousands or even millions of high-dimensional samples. They range from single-cell cytometry (Amir et al., 2013) and transcriptomics (e.g. Tasic et al., 2018; Zeisel et al., 2018; Saunders et al., 2018), where samples are cells and features are proteins or genes, to population genetics (Diaz-Papkovich et al., 2018), where samples are people and features are single-nucleotide polymorphisms, to the humanities (Schmidt, 2018), where samples are books and features are words.

T-SNE was developed from an earlier method called SNE (Hinton & Roweis, 2003). The central idea of SNE was to describe pairwise relationships between high-dimensional points in terms of normalised affinities: close neighbours have high affinity whereas distant samples have near-zero affinity. SNE then positions the points in two dimensions such that the Kullback-Leibler divergence between the high- and low-dimensional affinities is minimised. This worked to some degree but suffered from what was later called the "crowding problem". The idea of t-SNE was to adjust the kernel transforming pairwise low-dimensional distances into affinities: the Gaussian kernel was replaced by the heavy-tailed Cauchy kernel (t-distribution with one degree of freedom, $\nu=1$), ameliorating the crowding problem.

The choice of the specific heavy-tailed kernel was largely arbitrary and motivated by mathematical and computational simplicity: a t-distribution with $\nu=1$ has a density proportional to $1/(1+x^2)$, which is mathematically compact and fast to compute. However, a t-distribution with any finite $\nu$ has heavier tails than the Gaussian distribution (which corresponds to $\nu\to\infty$). It is therefore reasonable to explore the whole spectrum of values of $\nu$ from $\infty$ down to 0. Given that t-SNE ($\nu=1$) outperforms SNE ($\nu\to\infty$), it might be that for some data sets $\nu<1$ would perform even better, or at least offer additional insights.

While this seems like a straightforward extension, no efficient implementation of this idea has been available. T-SNE is optimised via adaptive gradient descent. While it is easy to write down the gradient for an arbitrary value of $\nu$, the exact t-SNE from the original paper requires $O(n^2)$ time and memory, and cannot be run for large sample sizes. Efficient approximations have recently been developed, allowing approximate t-SNE to be run for much larger sample sizes (van der Maaten, 2014; Linderman et al., 2019). But until now, these approximations have only been implemented for $\nu=1$, and so, despite some previous related work (Yang et al., 2009), the effect of $\nu$ on large real-life datasets has remained unknown.

Here we show that the recent FIt-SNE approximation (Linderman et al., 2019) can be modified to deal with an arbitrary value of $\nu$ and demonstrate that $\nu<1$ can reveal "hidden" structure, invisible with standard t-SNE.

Results

Mathematical formulation

SNE defines the directional affinity of point $j$ to point $i$ as

$$p_{j|i} = \frac{\exp\!\bigl(-\|\mathbf{x}_i-\mathbf{x}_j\|^2/2\sigma_i^2\bigr)}{\sum_{k\ne i}\exp\!\bigl(-\|\mathbf{x}_i-\mathbf{x}_k\|^2/2\sigma_i^2\bigr)}.$$
For each $i$, this forms a probability distribution over all points $j\ne i$ (all $p_{i|i}$ are set to zero). The variance $\sigma_i^2$ of the Gaussian kernel is chosen such that the perplexity of this probability distribution,

$$\mathcal{P}_i = 2^{H(P_i)}, \qquad H(P_i) = -\sum_j p_{j|i}\log_2 p_{j|i},$$
has some pre-specified value. In symmetric SNE (SSNE; in the following text we will not make a distinction between the symmetric SNE and the original, asymmetric, SNE) and in t-SNE the affinities are symmetrised and normalised,

$$p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n},$$

to form a probability distribution on the set of all pairs $(i,j)$.

The points are then arranged in a low-dimensional space to minimise the Kullback-Leibler divergence between $p_{ij}$ and the affinities $q_{ij}$ in the low-dimensional space:

$$\mathcal{L} = \sum_{i\ne j} p_{ij}\log\frac{p_{ij}}{q_{ij}}, \qquad q_{ij} = \frac{k(d_{ij})}{\sum_{k\ne l} k(d_{kl})}, \qquad d_{ij} = \|\mathbf{y}_i-\mathbf{y}_j\|.$$

Here $k(d)$ is a kernel that transforms Euclidean distances between any two points into affinities, and $\mathbf{y}_i$ are the low-dimensional coordinates (all $q_{ii}$ are set to 0).
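As an illustration of these definitions (a minimal NumPy sketch in our own notation, not code from the reference implementation), the low-dimensional affinities and the KL loss can be computed as follows for given embedding coordinates Y, a symmetric affinity matrix P, and a kernel function:

import numpy as np
from scipy.spatial.distance import pdist, squareform

def kl_loss(Y, P, kernel):
    # KL divergence between high-dimensional affinities P and low-dimensional affinities Q
    D = squareform(pdist(Y))        # pairwise Euclidean distances d_ij
    W = kernel(D)                   # similarities w_ij = k(d_ij)
    np.fill_diagonal(W, 0.0)        # q_ii is defined to be zero
    Q = W / W.sum()                 # normalise over all pairs to get q_ij
    mask = P > 0                    # only pairs with p_ij > 0 contribute to the sum
    return np.sum(P[mask] * np.log(P[mask] / Q[mask]))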

SNE uses the Gaussian kernel $k(d) = \exp(-d^2)$. T-SNE uses the t-distribution with one degree of freedom (also known as the Cauchy distribution): $k(d) = 1/(1+d^2)$. Here we consider a general t-distribution kernel with $\nu$ degrees of freedom,

$$k(d) = \frac{1}{\bigl(1 + d^2/\nu\bigr)^{(\nu+1)/2}}.$$
We use a simplified version defined as

$$k(d) = \frac{1}{\bigl(1 + d^2/\alpha\bigr)^{\alpha}}.$$
This kernel corresponds to the scaled t-distribution with $\nu = 2\alpha - 1$. This means that using the simplified kernel instead of the general one in t-SNE produces an identical output apart from a global scaling of the embedding. At the same time, the simplified kernel allows any $\alpha>0$, including $\alpha<1/2$ corresponding to negative $\nu$, i.e. it allows kernels with tails heavier than those of any possible t-distribution. (Equivalently, we could use an even simpler kernel $k(d)=1/(1+d^2)^\alpha$ that differs from the one above only by scaling; we prefer the form above because of its explicit Gaussian limit $k(d)\to\exp(-d^2)$ at $\alpha\to\infty$.)
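As a small sketch (our notation, not the original implementation), the simplified kernel and its limiting cases can be written and checked numerically:

import numpy as np

def kernel(D, alpha=1.0):
    # k(d) = 1 / (1 + d^2/alpha)^alpha: alpha = 1 is the Cauchy kernel (standard t-SNE),
    # alpha -> infinity approaches the Gaussian kernel exp(-d^2) used by SNE
    return (1.0 + D ** 2 / alpha) ** (-alpha)

d = 1.5
print(kernel(d, alpha=1.0))      # Cauchy value 1 / (1 + d^2)
print(kernel(d, alpha=1e6))      # large alpha: approaches the Gaussian limit ...
print(np.exp(-d ** 2))           # ... exp(-d^2)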

The gradient of the loss function (see Appendix) is

$$\frac{\partial \mathcal{L}}{\partial \mathbf{y}_i} = 4\sum_{j} (p_{ij} - q_{ij})\,\frac{\mathbf{y}_i - \mathbf{y}_j}{1 + d_{ij}^2/\alpha}.$$
Any implementation of exact t-SNE can be easily modified to use this expression instead of the standard ($\alpha=1$) gradient.
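For concreteness, here is a minimal sketch (not the reference implementation) of this exact gradient for an arbitrary $\alpha$, following the expression above:

import numpy as np
from scipy.spatial.distance import pdist, squareform

def exact_gradient(Y, P, alpha=1.0):
    # dL/dy_i = 4 * sum_j (p_ij - q_ij) * (y_i - y_j) / (1 + d_ij^2 / alpha)
    D2 = squareform(pdist(Y, 'sqeuclidean'))
    W = (1.0 + D2 / alpha) ** (-alpha)
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    A = (P - Q) / (1.0 + D2 / alpha)   # coefficient of (y_i - y_j) for every pair
    np.fill_diagonal(A, 0.0)
    return 4.0 * (A.sum(axis=1, keepdims=True) * Y - A @ Y)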

Modern t-SNE implementations make two approximations. First, they set most $p_{ij}$ to zero, apart from only a small number of close neighbours (van der Maaten, 2014; Linderman et al., 2019), accelerating the attractive force computations (which can be very efficiently parallelised). This carries over to the $\alpha\ne1$ case. The repulsive forces are approximated in FIt-SNE by interpolation on a grid, further accelerated with the Fourier transform (Linderman et al., 2019). This interpolation can be carried out for $\alpha\ne1$ in full analogy to the $\alpha=1$ case (see Appendix).

Importantly, the runtime of FIt-SNE with $\alpha\ne1$ is practically the same as with $\alpha=1$. For example, embedding MNIST ($n=70\,000$) with perplexity 50 as described below took 90 seconds with $\alpha=1$ and 97 seconds with $\alpha=0.5$ on a computer with 4 double-threaded cores, 3.4 GHz each. (The numbers correspond to 1000 gradient descent iterations; the slight speed decrease is due to a more efficient implementation of the interpolation code for the special case $\alpha=1$.)
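In practice one would call the updated FIt-SNE wrapper rather than the exact gradient above. A hypothetical call could look like the following sketch; we assume here that the Python wrapper fast_tsne exposes the degrees-of-freedom parameter as a df argument (check the FIt-SNE documentation for the exact argument name and version):

# assumes FIt-SNE >= 1.1.0 with its Python wrapper fast_tsne.py on the path
from fast_tsne import fast_tsne
import numpy as np

X = np.random.randn(1000, 50)                     # placeholder for PCA-reduced data
Z_standard = fast_tsne(X, perplexity=50)          # standard t-SNE (alpha = 1)
Z_heavy = fast_tsne(X, perplexity=50, df=0.5)     # heavier tails (alpha = 0.5), if 'df' is the exposed argument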

Toy examples

We first applied exact t-SNE with various values of $\alpha$ to a simple toy data set consisting of several well-separated clusters. Specifically, we generated a 10-dimensional data set with 100 data points in each of the 10 classes (1000 points overall). The points in class $i$ were sampled from a Gaussian distribution with isotropic covariance and mean proportional to $\mathbf{e}_i$, the $i$-th basis vector. We used perplexity 50 and default optimisation parameters (1000 iterations, learning rate 200, early exaggeration 12, length of early exaggeration 250, initial momentum 0.5, switching to 0.8 after 250 iterations).
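The following sketch generates a data set of this type; the covariance scale and the spacing between class means are illustrative values of our own choosing, not necessarily the ones used for Figure 1:

import numpy as np

rng = np.random.default_rng(42)
n_per_class, n_classes, dim = 100, 10, 10
X, labels = [], []
for i in range(n_classes):
    mean = 10.0 * np.eye(dim)[i]     # class mean along the i-th basis vector (spacing is an assumed value)
    X.append(mean + rng.normal(scale=1.0, size=(n_per_class, dim)))   # unit variance assumed
    labels.append(np.full(n_per_class, i))
X = np.vstack(X)                     # (1000, 10) data matrix
labels = np.concatenate(labels)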

Figure 1: Toy example with ten Gaussian clusters. (A) SNE visualisation of 10 spherical clusters that are all equally far away from each other ($\alpha=100$). (B) Standard t-SNE visualisation of the same data set ($\alpha=1$). (C) t-SNE visualisation with $\alpha=0.5$. The same random seed was used for initialisation in all panels. Scale bars are shown in the bottom-right of each panel.

Figure 1 shows the t-SNE results for $\alpha=100$, $\alpha=1$, and $\alpha=0.5$. A t-distribution with $\nu=2\alpha-1=199$ degrees of freedom is very close to the Gaussian distribution, so here and below we will refer to the $\alpha=100$ result as SNE. We see that class separation monotonically increases with decreasing $\alpha$: t-SNE (Figure 1B) separates the classes much better than SNE (Figure 1A), but t-SNE with $\alpha=0.5$ separates them much better still (Figure 1C).

Figure 2: Toy example with ten "dumbbell"-shaped clusters. (A) SNE visualisation of 10 dumbbell-shaped clusters ($\alpha=100$). (B) Standard t-SNE visualisation ($\alpha=1$). (C) t-SNE visualisation with $\alpha=0.5$.

In the above toy example, the choice between different values of $\alpha$ is mostly aesthetic. This is not the case in the next toy example. Here we change the dimensionality to 20 and shift 50 points in each class by a small offset and the remaining 50 points by the opposite offset (both depending on the class number $i$). The intuition is that now each of the 10 classes has a "dumbbell" shape. This shape is invisible in SNE (Figure 2A) and hardly visible in standard t-SNE (Figure 2B), but becomes apparent with $\alpha=0.5$ (Figure 2C). In this case, decreasing $\alpha$ below 1 is necessary to bring out the fine structure of the data.
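A sketch of this modification (again with illustrative offset values of our own choosing, since the exact shifts are not essential for the qualitative effect):

import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_classes, dim = 100, 10, 20
X, labels = [], []
for i in range(n_classes):
    mean = 10.0 * np.eye(dim)[i]                 # well-separated class centres, as before
    cluster = mean + rng.normal(scale=1.0, size=(n_per_class, dim))
    offset = 2.0 * np.eye(dim)[n_classes + i]    # shift along an extra dimension (assumed magnitude)
    cluster[:50] += offset                       # one half of the class ...
    cluster[50:] -= offset                       # ... the other half: a "dumbbell"
    X.append(cluster)
    labels.append(np.full(n_per_class, i))
X = np.vstack(X)
labels = np.concatenate(labels)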

Mathematical analysis

We showed that decreasing $\alpha$ increases cluster separation (Figures 1, 2). Why does this happen? An informal argument is that in order to match the between-cluster affinities $p_{ij}$, the distance between clusters in the t-SNE embedding needs to grow when the kernel becomes progressively more heavy-tailed (van der Maaten & Hinton, 2008).

To quantify this effect, we consider an example of two standard Gaussian clusters in 10 dimensions with the between-centroid distance set large enough that the clusters can be unambiguously separated. We use exact t-SNE (perplexity 50) with various values of $\alpha$ from 0.2 to 3.0 and measure the cluster separation in the embedding. As a scale-invariant measure of separation we used the between-centroid distance divided by the root-mean-square within-cluster distance. Indeed, we observed a monotonic decrease of this measure with growing $\alpha$ (Figure 3).
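One possible implementation of this separation measure is sketched below (Z is the 2D embedding and labels contains 0/1 cluster assignments; here the within-cluster distance is measured to the cluster centroid, one plausible reading of the root-mean-square convention):

import numpy as np

def separation(Z, labels):
    # between-centroid distance divided by the RMS distance of points to their cluster centroid
    c0, c1 = Z[labels == 0].mean(axis=0), Z[labels == 1].mean(axis=0)
    between = np.linalg.norm(c0 - c1)
    within = np.concatenate([np.sum((Z[labels == 0] - c0) ** 2, axis=1),
                             np.sum((Z[labels == 1] - c1) ** 2, axis=1)])
    return between / np.sqrt(within.mean())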

Figure 3: Separation in the t-SNE visualisation between the two well-separated clusters as a function of $\alpha$. Separation was measured as the between-centroid distance divided by the root-mean-square within-cluster distance.

The informal argument mentioned above can be replaced by the following formal one. Consider two high-dimensional clusters ($n$ points in each) with all pairwise within-cluster distances equal to $D_{\mathrm{w}}$ and all pairwise between-cluster distances equal to $D_{\mathrm{b}}$ (this can be achieved in a space of sufficiently high dimensionality). In this case, the matrix $P$ has only two unique non-zero values: all within-cluster affinities are given by $p_{\mathrm{w}}$ and all between-cluster affinities by $p_{\mathrm{b}}$,

$$p_{\mathrm{w}} = \frac{g(D_{\mathrm{w}})}{Z_P}, \qquad p_{\mathrm{b}} = \frac{g(D_{\mathrm{b}})}{Z_P},$$

where $g(\cdot)$ is the Gaussian kernel corresponding to the chosen perplexity value and $Z_P$ is the normalising constant. Consider an exact t-SNE mapping to a space of the same dimensionality. In this idealised case, t-SNE can achieve zero loss by setting within- and between-cluster distances $d_{\mathrm{w}}$ and $d_{\mathrm{b}}$ in the embedding such that $q_{\mathrm{w}}=p_{\mathrm{w}}$ and $q_{\mathrm{b}}=p_{\mathrm{b}}$. This will happen if

$$\frac{k(d_{\mathrm{b}})}{k(d_{\mathrm{w}})} = \frac{p_{\mathrm{b}}}{p_{\mathrm{w}}}.$$

Plugging in the expression for $k(d)$ and denoting the constant right-hand side by $c$, we obtain

$$\frac{\alpha + d_{\mathrm{b}}^2}{\alpha + d_{\mathrm{w}}^2} = c^{-1/\alpha}.$$

The left-hand side can be seen as a measure of class separation close to the one used in Figure 3, and the right-hand side (with $c<1$) monotonically decreases with increasing $\alpha$.
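To make the monotonicity explicit, one can differentiate the right-hand side with respect to $\alpha$ (a short check added here for clarity):

$$\frac{d}{d\alpha}\, c^{-1/\alpha} \;=\; \frac{d}{d\alpha}\, e^{-\ln c/\alpha} \;=\; \frac{\ln c}{\alpha^2}\, e^{-\ln c/\alpha} \;<\; 0 \qquad \text{for } 0 < c < 1,$$

so the achievable separation $(\alpha + d_{\mathrm{b}}^2)/(\alpha + d_{\mathrm{w}}^2)$ shrinks as $\alpha$ grows, i.e. as the kernel tails become lighter.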

In the simulation shown in Figure 3, the matrix $P$ does not have only two unique elements, the target dimensionality is two, and t-SNE cannot possibly achieve zero loss. Still, qualitatively we observe the same behaviour: an approximately power-law decrease of separation with increasing $\alpha$.

Real-life data sets

We now demonstrate that these theoretical insights are relevant to practical use cases on large-scale data sets. Here we use approximate t-SNE (FIt-SNE).

MNIST

We applied t-SNE with various values of $\alpha$ to the MNIST data set (Figure 4), comprising $n=70\,000$ grayscale images of handwritten digits. As a pre-processing step, we used principal component analysis (PCA) to reduce the dimensionality from 784 to 50. We used perplexity 50 and default optimisation parameters apart from the learning rate, which we increased. (To get a good t-SNE visualisation of MNIST, it is helpful to increase either the learning rate or the length of the early exaggeration phase; default optimisation parameters often lead to some of the digits being split into two clusters. In the cytometric context, this phenomenon was described in detail by Belkina et al. (2018).) For easier reproducibility, we initialised the t-SNE embedding with the first two PCs (scaled such that PC1 had standard deviation 0.0001).
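A minimal sketch of this pre-processing and initialisation (the data-loading call and variable names are ours, not taken from the original code):

import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X50 = PCA(n_components=50).fit_transform(X.astype(float))   # 784 -> 50 dimensions

# PCA initialisation: first two PCs, rescaled so that PC1 has standard deviation 0.0001
init = X50[:, :2].copy()
init = init / init[:, 0].std() * 0.0001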

Figure 4: MNIST data set ($n=70\,000$). (A) SNE visualisation ($\alpha=100$). (B) Standard t-SNE visualisation ($\alpha=1$). (C) t-SNE visualisation with $\alpha=0.5$. The colours are consistent across panels (A–C), labels are shown in (A). PCA initialisation was used in all three cases. Transparency 0.5 for all dots in all three panels. (D) Average images for some individual sub-clusters from (C). The sub-clusters were isolated via DBSCAN with default settings as implemented in scikit-learn. Up to five sub-clusters with at least 100 points are shown, ordered from top to bottom by abundance.

To the best of our knowledge, Figure 4A is the first existing SNE ($\alpha=100$) visualisation of the whole MNIST: we are not aware of any SNE implementation that can handle a dataset of this size. It produces a surprisingly good visualisation but is nevertheless clearly outperformed by standard t-SNE ($\alpha=1$, Figure 4B): many digits coalesce together in SNE but get separated into clearly distinct clusters in t-SNE. Remarkably, reducing $\alpha$ to 0.5 makes each digit further split into multiple separate sub-clusters (Figure 4C), revealing a fine structure within each of the digits.

To demonstrate that these sub-clusters are meaningful, we computed the average MNIST image for some of the sub-clusters (Figure 4D). In each case, the shapes appear to be meaningfully distinct: e.g. for the digit "4", the handwriting is more italic in one sub-cluster, wider in another, and features a non-trivial homotopy group (i.e. has a loop) in yet another one. Similarly, digit "2" is separated into three sub-clusters, with the most abundant one showing a loop in the bottom-left and the next abundant one having a sharp angle instead. Digit "1" is split according to the stroke angle. Re-running t-SNE using random initialisation with different seeds yielded consistent results. Points that appear as outliers in Figure 4C mostly correspond to confusingly written digits.
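A sketch of this analysis (Z denotes the $\alpha=0.5$ embedding obtained above, X and y are the raw MNIST images and labels; DBSCAN settings follow the figure caption):

import numpy as np
from sklearn.cluster import DBSCAN

digit = 4
mask = (y.astype(int) == digit)
sub = DBSCAN().fit_predict(Z[mask])             # default settings, as in Figure 4D
for c in np.unique(sub):
    if c == -1:                                 # -1 marks DBSCAN noise points
        continue
    members = np.where(mask)[0][sub == c]
    if len(members) >= 100:                     # keep sub-clusters with at least 100 points
        mean_image = X[members].mean(axis=0).reshape(28, 28)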

MNIST has been a standard example for t-SNE starting from the original t-SNE paper (van der Maaten & Hinton, 2008), and it has often been observed that t-SNE preserves meaningful within-digit structure. Indeed, the sub-clusters that we identified in Figure 4C are usually close together in Figure 4B. (This can be clearly seen in an animation that slowly decreases $\alpha$ from 100 to 0.5, see http://github.com/berenslab/finer-tsne.) However, standard t-SNE does not separate them into visually isolated sub-clusters, and so does not make this internal structure obvious. (For MNIST, the KL divergence as a function of $\alpha$ has a minimum at an intermediate value of $\alpha$, but we do not believe this implies that it is the "optimal" value. See Figure S1.)

Single-cell transcriptomic data

For the second example, we took the transcriptomic dataset from Tasic et al. (2018), comprising cells from the adult mouse cortex (sequenced with the Smart-seq2 protocol). The features are genes, and the data are the integer counts of RNA transcripts of each gene in each cell. Using a custom expert-validated clustering procedure, the authors divided these cells into 133 clusters. In Figure 5, we used the cluster ids and cluster colours from the original publication.

Figure 5A shows the standard t-SNE ($\alpha=1$) of this data set, following common transcriptomic pre-processing steps as described in Kobak & Berens (2018). Briefly, we row-normalised and log-transformed the data, selected the 3000 most variable genes, and used PCA to further reduce the dimensionality to 50. We used perplexity 50 and PCA initialisation. The resulting t-SNE visualisation is in reasonable agreement with the clustering results; however, it lumps many clusters together into contiguous "islands" or "continents" and overall suggests many fewer than 133 distinct clusters.
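A sketch of this pre-processing pipeline (the count matrix below is a random placeholder, and the variable-gene selection is a simple variance ranking that may differ in detail from the procedure of Kobak & Berens (2018)):

import numpy as np
from sklearn.decomposition import PCA

counts = np.random.poisson(1.0, size=(2000, 5000)).astype(float)   # placeholder for the cells x genes count matrix
libsize = counts.sum(axis=1, keepdims=True)
E = counts / libsize * np.median(libsize)        # row-normalise to the median library size
E = np.log1p(E)                                  # log-transform
top = np.argsort(E.var(axis=0))[::-1][:3000]     # 3000 most variable genes (simple variance ranking)
X50 = PCA(n_components=50).fit_transform(E[:, top])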

Figure 5: Tasic et al. data set. (A) Standard t-SNE visualisation ($\alpha=1$). Cluster ids and cluster colours are taken from the original publication (Tasic et al., 2018): cold colours for excitatory neurons, warm colours for inhibitory neurons, and grey/brown colours for non-neural cells such as astrocytes or microglia. (B) t-SNE visualisation with $\alpha=0.5$. (C) A zoom-in into the left side of panel (B) showing all Vip clusters from Tasic et al. Black circles mark cluster centroids (medians).

Reducing $\alpha$ to 0.5 splits many of the contiguous islands into "archipelagos" of smaller disjoint areas (Figure 5B). In many cases, this roughly agrees with the clustering results of Tasic et al. (2018). Figure 5C shows a zoom-in into the Vip clusters (west-southwest part of panel B) that provide one such example: isolated islands correspond well to the individual clusters (or sometimes pairs of clusters). Importantly, the cluster labels in this data set are not ground truth; nevertheless, the agreement between the cluster labels and t-SNE with $\alpha=0.5$ provides additional evidence that this data categorisation is meaningful.

HathiTrust library

For the final example, we used the HathiTrust library data set (Schmidt, 2018). The full data set comprises 13.6 million books and can be described with several million features that represent word counts of each word in each book. We used the pre-processed data from Schmidt (2018): briefly, the word counts were row-normalised, log-transformed, projected to 1280 dimensions using a random linear projection, and then reduced to 100 PCs. (The data set was downloaded from https://zenodo.org/record/1477018.) The available meta-data include author name, book title, publication year, language, and Library of Congress classification (LCC) code. For simplicity, we took a subset consisting of all books in the Russian language. We used perplexity 50 and an increased learning rate.
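A sketch of the dimensionality-reduction part of this pipeline (the word-count matrix is a random placeholder, and the Gaussian projection coefficients are purely illustrative; Schmidt (2018) describes the exact "stable random projection" scheme):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(2000, 10000)).astype(float)     # placeholder for the books x words count matrix
F = np.log1p(counts / counts.sum(axis=1, keepdims=True))        # row-normalise and log-transform
R = rng.normal(size=(F.shape[1], 1280)) / np.sqrt(1280)         # random linear projection (illustrative coefficients)
X100 = PCA(n_components=100).fit_transform(F @ R)               # reduce to 100 PCs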

Figure 6: Russian-language part of the HathiTrust library. (A) Standard t-SNE visualisation ($\alpha=1$). Colour denotes publication year. (B) t-SNE visualisation with $\alpha=0.5$. Blue and black contours in both panels are kernel density estimate contour lines for mathematical literature and poetry (plotted with seaborn.kdeplot() with Gaussian bandwidth set to 2.0; contour levels were manually tuned to enclose the majority of the books).

Figure 6A shows the standard t-SNE visualisation ($\alpha=1$) coloured by publication year. The most salient feature is that pre-1917 books cluster together (orange/red colours): this is due to the major reform of Russian orthography implemented in 1917, leading to most words changing their spelling. However, not much substructure can be seen among the books published after (or before) 1917. In contrast, the t-SNE visualisation with $\alpha=0.5$ fragments the corpus into a large number of islands (Figure 6B).

We can identify some of the islands by inspecting the available meta-data. For example, mathematical literature (LCC code QA) is not separated from the rest in standard t-SNE, but occupies the leftmost island in t-SNE with $\alpha=0.5$ (blue contour lines in both panels). Several neighbouring islands correspond to the physics literature (LCC code QC; not shown). In an attempt to capture something radically different from mathematics, we selected all books authored by several famous Russian poets (Anna Akhmatova, Alexander Blok, Joseph Brodsky, Afanasy Fet, Osip Mandelstam, Vladimir Mayakovsky, Alexander Pushkin, and Fyodor Tyutchev). This is not a curated list: there are non-poetry books authored by these authors, while many other poets were not included (the list of poets was not cherry-picked; we made it before looking at the data). Nevertheless, when using $\alpha=0.5$, the poetry books printed after 1917 seemed to occupy two neighbouring islands, and the ones printed before 1917 were reasonably isolated as well (Figure 6B, black contour lines). In the standard t-SNE visualisation poetry was not at all separated from the surrounding population of books.

Related work

Yang et al. (2009) considered a very similar setting: they introduced symmetric SNE (SSNE) with the kernel family

$$k(d) = \frac{1}{\bigl(1 + \gamma d^2\bigr)^{1/\gamma}},$$

calling it heavy-tailed symmetric SNE (HSSNE); here $\gamma$ denotes their tail-heaviness parameter. This is exactly the same kernel family as our simplified kernel above, but with $\alpha$ replaced by $1/\alpha$. However, Yang et al. did not show any examples of heavier-tailed kernels outperforming the Cauchy kernel and did not provide an implementation suitable for large sample sizes. Interestingly, Yang et al. argued that the t-SNE optimisation algorithm is not suitable for their HSSNE and suggested an alternative algorithm; here we demonstrated that the t-SNE optimisation works reasonably well in a wide range of $\alpha$ values (but see Discussion).

UMAP (McInnes et al., 2018) is a promising recent algorithm closely related to the earlier largeVis (Tang et al., 2016); both are similar to t-SNE but modify the repulsive forces to make them amenable to sampling-based optimisation. UMAP uses the following family of similarity kernels:

$$k(d) = \frac{1}{1 + a d^{2b}},$$

which reduces to the Cauchy kernel when $a=b=1$ and is more heavy-tailed when $b<1$. The default values of $a$ and $b$ are determined by the min_dist input parameter. In our experiments, we observed that modifying min_dist in some cases (e.g. on the Tasic et al. dataset) led to an effect similar to modifying $\alpha$ in t-SNE. However, for some other data sets, e.g. MNIST, min_dist did not seem to influence the overall shape of the embedding, and we were unable to obtain any sub-digit structure with UMAP (Figure S2).
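For reference, a sketch of the corresponding UMAP call with a non-default min_dist, using the umap-learn package (X50 stands for PCA-reduced data, as in the sketches above):

import umap

emb = umap.UMAP(min_dist=0.001).fit_transform(X50)   # smaller min_dist: tighter, more fragmented embedding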

Discussion

We showed that using $\alpha<1$ in t-SNE can yield insightful visualisations that are qualitatively different compared to the standard choice of $\alpha=1$. Crucially, the choice of $\alpha=1$ was made by van der Maaten & Hinton (2008) for reasons of mathematical convenience, and we are not aware of any a priori argument in favour of $\alpha=1$. As our approach still uses the t-distribution kernel (scaled t-distribution, to be precise), one can refer to t-SNE with $\alpha<1$ as "heavy-tailed t-SNE".

We found that lowering $\alpha$ below 1 makes progressively finer structure apparent in the visualisation and brings out smaller clusters, which, at least in the data sets studied here, are often meaningful. In a way, $\alpha$ can be thought of as a "magnifying glass" for the standard t-SNE representation. We do not think that there is one ideal value of $\alpha$ suitable for all data sets and all situations; instead we consider it a useful adjustable parameter of t-SNE, complementary to the perplexity. There is a non-trivial interaction between $\alpha$ and perplexity. Small vs. large perplexity makes the affinity matrix represent the local vs. global structure of the data (Kobak & Berens, 2018). Small vs. large $\alpha$ makes the embedding represent the finer vs. coarser structure of the affinity matrix. In practice, it can make sense to treat this as a two-dimensional parameter space to explore. However, for large data sets, it is computationally unfeasible to increase the perplexity beyond its standard range of 30–100 (it would linearly increase the runtime), and so $\alpha$ becomes the only available parameter to adjust.

One important caveat is to be kept in mind. It is well known that t-SNE, especially with low perplexity, can find "clusters" in pure noise, picking up random fluctuations in the density (Wattenberg et al., 2016). This can happen with $\alpha=1$ and gets exacerbated with lower values of $\alpha$. A related point concerns clustered real-life data where separate clusters (local density peaks) can sometimes be connected by an area of lower but non-zero density: for example, Tasic et al. (2018) argued that many pairs of their 133 clusters have intermediate cells. Our experiments demonstrate that lowering $\alpha$ can make such clusters more and more isolated in the embedding, creating a potentially misleading appearance of perfect separation. In other words, there is a trade-off between bringing out finer cluster structure and preserving continuities between clusters. Choosing a value of $\alpha$ that yields the most faithful representation of a given data set remains a goal for future research (in particular, KL divergence may not be the ideal metric for this, see Figure S1). However, in general, there may not be a single "best" embedding of high-dimensional data in a two-dimensional space. Rather, by varying $\alpha$, one can obtain different complementary "views" of the data.

Very low values of $\alpha$ correspond to kernels with very wide and very flat tails, leading to vanishing gradients and difficult convergence. We found that there is a limit to how small $\alpha$ can safely be made (Figure S3). In fact, it may take more iterations to reach convergence for small $\alpha$ compared to $\alpha=1$. As an example, running t-SNE on MNIST with $\alpha=0.5$ for ten times longer than we did for Figure 4C led to the embedding expanding much further (which leads to a slow-down of the FIt-SNE interpolation) and, as a result, resolving additional sub-clusters (Figure S4). On a related note, when using only one single MNIST digit as an input for t-SNE with $\alpha=0.5$, the embedding also fragments into many more clusters (Figure S5), which we hypothesise is due to the points rapidly expanding to occupy a much larger area compared to what happens in the full MNIST embedding (Figure S5). This can be counterbalanced by increasing the strength of the attractive forces (Figure S5). Overall, the effect of the embedding scale on the cluster resolution remains an open research question.

In conclusion, we have shown that adjusting the heaviness of the kernel tails in t-SNE can be a valuable tool for data exploration and visualisation. As a practical recommendation, we suggest embedding any given data set using various values of $\alpha$, each inducing a different level of clustering, and hence providing insight that cannot be obtained from the standard choice $\alpha=1$ alone. (Our code is available at http://github.com/berenslab/finer-tsne. The main FIt-SNE repository at http://github.com/klugerlab/FIt-SNE was updated to support any $\alpha$, starting from version 1.1.0.)

Appendix

The loss function, up to a constant term $\sum_{i\ne j} p_{ij}\log p_{ij}$, can be rewritten as follows:

$$\mathcal{L} = -\sum_{i\ne j} p_{ij}\log w_{ij} + \log\sum_{i\ne j} w_{ij}, \qquad w_{ij} = k\bigl(\|\mathbf{y}_i-\mathbf{y}_j\|\bigr), \tag{1}$$

where we took into account that $\sum_{i\ne j} p_{ij} = 1$. The first term in Eq. (1) contributes attractive forces to the gradient while the second term yields repulsive forces. The gradient is

$$\frac{\partial\mathcal{L}}{\partial\mathbf{y}_i} = -2\sum_{j} \Bigl(\frac{p_{ij}}{w_{ij}} - \frac{1}{Z}\Bigr)\frac{\partial w_{ij}}{\partial\mathbf{y}_i}, \qquad Z = \sum_{k\ne l} w_{kl}, \tag{2}$$

$$\frac{\partial\mathcal{L}}{\partial\mathbf{y}_i} = -2\sum_{j} (p_{ij} - q_{ij})\,\frac{1}{w_{ij}}\,\frac{\partial w_{ij}}{\partial\mathbf{y}_i}. \tag{3}$$

The first expression is more convenient for numeric optimisation while the second one can be more convenient for mathematical analysis.

For the kernel

$$k(d) = \frac{1}{\bigl(1 + d^2/\alpha\bigr)^{\alpha}},$$

the gradient of $w_{ij} = k(d_{ij})$ is

$$\frac{\partial w_{ij}}{\partial\mathbf{y}_i} = -\frac{2\,(\mathbf{y}_i-\mathbf{y}_j)}{\bigl(1 + d_{ij}^2/\alpha\bigr)^{\alpha+1}}. \tag{4}$$

Plugging Eq. 4 into Eq. 3, we obtain the expression for the gradient (note that the C++ Barnes-Hut t-SNE implementation (van der Maaten, 2014) absorbed the factor 4 into the learning rate, and the FIt-SNE implementation (Linderman et al., 2019) followed this convention):

$$\frac{\partial\mathcal{L}}{\partial\mathbf{y}_i} = 4\sum_{j} (p_{ij} - q_{ij})\,\frac{\mathbf{y}_i-\mathbf{y}_j}{1 + d_{ij}^2/\alpha}.$$

For numeric optimisation it is convenient to split this expression into the attractive and the repulsive terms. Plugging Eq. 4 into Eq. 2, we obtain

$$\frac{\partial\mathcal{L}}{\partial\mathbf{y}_i} = \mathbf{F}_{\mathrm{attr}} + \mathbf{F}_{\mathrm{rep}},$$

where

$$\mathbf{F}_{\mathrm{attr}} = 4\sum_{j} p_{ij}\, w_{ij}^{1/\alpha}\, (\mathbf{y}_i-\mathbf{y}_j), \qquad \mathbf{F}_{\mathrm{rep}} = -\frac{4}{Z}\sum_{j} w_{ij}^{1+1/\alpha}\, (\mathbf{y}_i-\mathbf{y}_j), \qquad Z = \sum_{k\ne l} w_{kl}.$$
It is noteworthy that the expression for $\mathbf{F}_{\mathrm{attr}}$ has $w_{ij}$ raised to the power $1/\alpha$, which cancels out the fractional power in $w_{ij}$ itself. This makes the runtime of the $\mathbf{F}_{\mathrm{attr}}$ computation unaffected by the value of $\alpha$. In FIt-SNE, the sum over $j$ in $\mathbf{F}_{\mathrm{attr}}$ is approximated by the sum over the $3\mathcal{P}$ approximate nearest neighbours of point $i$ obtained using Annoy (Bernhardsson, 2013), where $\mathcal{P}$ is the provided perplexity value. The $3\mathcal{P}$ heuristic comes from van der Maaten (2014). The remaining $p_{ij}$ values are set to zero.
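As a sketch of how the sparse attractive term can be organised (P_sparse is a symmetric scipy.sparse matrix holding only the nearest-neighbour affinities per point; this is a simplified illustration in our own notation, not the actual C++ code of FIt-SNE):

import numpy as np
import scipy.sparse as sp

def attractive_forces(Y, P_sparse, alpha=1.0):
    # F_attr_i = 4 * sum_j p_ij (y_i - y_j) / (1 + d_ij^2 / alpha), summed over stored neighbours only
    P = sp.coo_matrix(P_sparse)
    diff = Y[P.row] - Y[P.col]                                   # pairwise differences for stored pairs
    coef = 4.0 * P.data / (1.0 + np.sum(diff ** 2, axis=1) / alpha)
    F = np.zeros_like(Y)
    np.add.at(F, P.row, coef[:, None] * diff)                    # scatter-add contributions to each point i
    return F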

The $\mathbf{F}_{\mathrm{rep}}$ term can be approximated using the interpolation scheme from Linderman et al. (2019). It allows fast approximate computation of sums of the form

$$\sum_{j} K(\mathbf{y}_i, \mathbf{y}_j) \qquad \text{and} \qquad \sum_{j} K(\mathbf{y}_i, \mathbf{y}_j)\,\mathbf{y}_j,$$

where $K(\cdot,\cdot)$ is any smooth kernel, by using polynomial interpolation of $K$ on a fine grid. (The accuracy of the interpolation can somewhat decrease for small values of $\alpha$; one can increase the accuracy by decreasing the spacing of the interpolation grid, see the FIt-SNE documentation. We found that this did not noticeably affect the visualisations.) All kernels appearing in $\mathbf{F}_{\mathrm{rep}}$ are smooth.


Acknowledgements

This work was supported by the Deutsche Forschungsgemeinschaft (BE5601/4-1, EXC 2064 Project ID 390727645) (PB), the Federal Ministry of Education and Research (FKZ 01GQ1601, 01IS18052C), and the National Institute of Mental Health under award number U19MH114830 (DK and PB), NIH grants F30HG010102 and U.S. NIH MSTP Training Grant T32GM007205 (GCL), NSF grant DMS-1763179 and the Alfred P. Sloan Foundation (SS), and the NIH grant R01HG008383 (YK). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

  • Amir et al. (2013) Amir, E.-a. D., Davis, K. L., Tadmor, M. D., Simonds, E. F., Levine, J. H., Bendall, S. C., Shenfeld, D. K., Krishnaswamy, S., Nolan, G. P., and Pe’er, D. viSNE enables visualization of high dimensional single-cell data and reveals phenotypic heterogeneity of leukemia. Nature Biotechnology, 31(6):545, 2013.
  • Belkina et al. (2018) Belkina, A. C., Ciccolella, C. O., Anno, R., Spidlen, J., Halpert, R., and Snyder-Cappione, J. Automated optimal parameters for t-distributed stochastic neighbor embedding improve visualization and allow analysis of large datasets. bioRxiv, 2018.
  • Bernhardsson (2013) Bernhardsson, E. Annoy. https://github.com/spotify/annoy, 2013.
  • Diaz-Papkovich et al. (2018) Diaz-Papkovich, A., Anderson-Trocme, L., and Gravel, S. Revealing multi-scale population structure in large cohorts. bioRxiv, 2018.
  • Hinton & Roweis (2003) Hinton, G. and Roweis, S. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems, pp. 857–864, 2003.
  • Kobak & Berens (2018) Kobak, D. and Berens, P. The art of using t-SNE for single-cell transcriptomics. bioRxiv, 2018.
  • Linderman et al. (2019) Linderman, G. C., Rachh, M., Hoskins, J. G., Steinerberger, S., and Kluger, Y. Fast interpolation-based t-SNE for improved visualization of single-cell RNA-seq data. Nature Methods, 2019. doi: 10.1038/s41592-018-0308-4.
  • McInnes et al. (2018) McInnes, L., Healy, J., and Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv, 2018.
  • Saunders et al. (2018) Saunders, A., Macosko, E., Wysoker, A., Goldman, M., Krienen, F., Bien, E., Baum, M., Wang, S., Goeva, A., Nemesh, J., et al. A single-cell atlas of cell types, states, and other transcriptional patterns from nine regions of the adult mouse brain. bioRxiv, 2018.
  • Schmidt (2018) Schmidt, B. Stable random projection: Lightweight, general-purpose dimensionality reduction for digitized libraries. Journal of Cultural Analytics, 2018.
  • Tang et al. (2016) Tang, J., Liu, J., Zhang, M., and Mei, Q. Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web, pp. 287–297. International World Wide Web Conferences Steering Committee, 2016.
  • Tasic et al. (2018) Tasic, B., Yao, Z., Graybuck, L. T., Smith, K. A., Nguyen, T. N., Bertagnolli, D., Goldy, J., Garren, E., Economo, M. N., Viswanathan, S., et al. Shared and distinct transcriptomic cell types across neocortical areas. Nature, 563(7729):72, 2018.
  • van der Maaten (2014) van der Maaten, L. Accelerating t-SNE using tree-based algorithms. Journal of Machine Learning Research, 15(1):3221–3245, 2014.
  • van der Maaten & Hinton (2008) van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
  • Wattenberg et al. (2016) Wattenberg, M., Viégas, F., and Johnson, I. How to use t-SNE effectively. Distill, 1(10):e2, 2016.
  • Yang et al. (2009) Yang, Z., King, I., Xu, Z., and Oja, E. Heavy-tailed symmetric stochastic neighbor embedding. In Advances in Neural Information Processing Systems, pp. 2169–2177, 2009.
  • Zeisel et al. (2018) Zeisel, A., Hochgerner, H., Lonnerberg, P., Johnsson, A., Memic, F., van der Zwan, J., Haring, M., Braun, E., Borm, L., La Manno, G., et al. Molecular architecture of the mouse nervous system. Cell, 174(4):999–1014, 2018.

Supplementary Figures

Figure S1: The Kullback-Leibler divergence of the MNIST t-SNE embedding as a function of $\alpha$, after 1000 gradient descent iterations with increased learning rate (starting from PCA initialisation, see main text). The horizontal axis is on the log scale. The values of $\alpha$ were sampled on a non-uniform grid. Running gradient descent with $\alpha=0.5$ for $10\,000$ iterations (Figure S4) lowered the KL divergence down to 3.6, which was still above the minimum value on this curve. The minimum is attained at an intermediate value of $\alpha$. However, we do not think that this automatically implies that it is the "optimal" value of $\alpha$ for this data set: Kullback-Leibler divergence might not be the ideal metric to quantify embedding quality, and, as we showed in the main text, values of $\alpha$ below 1 give some additional information about the structure of the data set.
Figure S2: UMAP visualisations of the MNIST data set with default parameters and min_dist set to 0.001, 0.01, 0.1 (default), and 0.5. This is the recommended range according to the UMAP documentation. Each subplot shows the corresponding values of the UMAP kernel parameters. Note that with min_dist below 0.01 the kernel hardly changes. As a side note, one can obtain an MNIST embedding very similar to the one that UMAP gives with default settings using t-SNE with late exaggeration, i.e. multiplying all attractive forces by a constant factor after the early exaggeration period (first 250 iterations) is over.
Figure S3: Toy example with ten "dumbbell"-shaped clusters from Figure 2, here embedded with a very small value of $\alpha$. The top-left plot shows the result after 1000 gradient descent iterations (default). Note that the dumbbell shape is lost: whereas the number of visible clusters increased as $\alpha$ was lowered from 100 to 0.5 (Figure 2), it decreased when $\alpha$ was lowered further. We believe the reason for this is that the strong repulsion between dumbbells "squashes" them in the beginning of optimisation into very compact blobs. It is likely that longer optimisation would resolve the dumbbell shapes. This is difficult to test because the kernel with very small $\alpha$ is extremely wide and flat, leading to slow convergence. The top-right plot shows the result after 5000 iterations; here a few outlying points get pushed to the periphery. Zooming in on the main 10 clusters (bottom-left) still does not resolve the dumbbell shapes. Further zooming in on one of the dumbbells (bottom-right) shows that the points are squashed into 1D, which may be a sign of poor convergence. In a separate set of experiments, we observed a similar phenomenon with MNIST: after 1000 iterations, a very small $\alpha$ yielded fewer clusters than $\alpha=0.5$. Our conclusion is that very small values of $\alpha$ should be used with caution.
Figure S4: t-SNE visualisation of the MNIST dataset with $\alpha=0.5$. The top-left panel is identical to Figure 4C; it was obtained after 1000 gradient descent iterations (the default value). The top-right panel corresponds to $10\,000$ iterations and has many more isolated sub-clusters. This can also be seen in the bottom row showing the respective zoom-ins into the digit "4". At the same time, the embedding after 1000 iterations is not misleading and is simply a coarser-grained version of the embedding after $10\,000$ iterations. Using $10\,000$ iterations is impractical: whereas 1000 iterations finished in 1.5 minutes, $10\,000$ iterations took 4 hours 30 minutes. This is because the FIt-SNE interpolation scheme uses a regular interpolation grid with the number of nodes growing quadratically with the embedding size, and the embedding after $10\,000$ iterations spans a much larger area. In principle, an implementation based on the fast multipole method (FMM) could be developed to dramatically accelerate the gradient computation in this setting where most of the embedding space is "empty space", but the current FIt-SNE implementation does not support it. Note that the standard t-SNE embedding with $\alpha=1$ also expands much further after $10\,000$ iterations compared to 1000 iterations; however, with $\alpha=1$ it does not resolve additional sub-clusters, at least in MNIST.
Figure S5: t-SNE visualisation of a MNIST subset consisting of all images of the digit "4" (perplexity 50). Left: $\alpha=0.5$, the same as in Figure 4C. Note the large number of isolated clusters. We believe this happens because the embedding rapidly expands to a larger area compared to Figure S4 (bottom-left). One piece of evidence for this is that re-running t-SNE after adding several random Gaussian clusters roughly recovers the shape of the digit "4" archipelago from the full MNIST embedding (middle). Right: $\alpha=0.5$ and exaggeration factor 1.5 (Kobak & Berens, 2018), i.e. all attractive forces are multiplied by 1.5 after the end of the early exaggeration phase (during the early exaggeration they are multiplied by 12). This roughly recovers the sub-clusters from the full MNIST embedding (Figure S4). The relationship between $\alpha$ and exaggeration remains a topic for future work.