Big Data Approaches to Knot Theory: Understanding the Structure of the Jones Polynomial
Abstract.
We examine the structure and dimensionality of the Jones polynomial using manifold learning techniques. Our data set consists of more than 10 million knots up to 17 crossings and two other special families up to 2001 crossings. We introduce and describe a method for using filtrations to analyze infinite data sets where representative sampling is impossible or impractical, an essential requirement for working with knots and the data from knot invariants. In particular, this method provides a new approach for analyzing knot invariants using Principal Component Analysis. Using this approach on the Jones polynomial data we find that it can be viewed as an approximately 3-dimensional manifold, that this description is surprisingly stable with respect to the filtration by the crossing number, and that the results suggest further structures to be examined and understood.
1. Introduction
Throughout the development of low-dimensional topology, there has been an emphasis on the study of invariants from algebraic, combinatorial and geometric perspectives. The scarcity of results considering the statistical nature of these invariants is quite surprising given the abundance of available data. Examining the distributions that arise from invariants should reveal and illuminate structures that are difficult to see using traditional tools. We consider the Jones polynomial from a statistical perspective and provide an outline of how to use filtrations to investigate infinite data sets of this type.
The techniques of big data and deep learning provide useful tools for analyzing the statistical nature of knot invariants. Multiple advances have resulted from melding traditional methods in physics and mathematics with the emerging data-driven techniques of scientific computing. These have ranged from solving previously intractable problems in computer vision to providing significant improvements in earthquake prediction models. Despite the large number of techniques available, the study of how to use these powerful statistical tools in pure mathematics is in its infancy. Machine learning and data mining techniques have only recently started to attract attention in knot theory, see [50, 30, 27], and to our knowledge have largely been used for predicting valuations. This is the first in a series of papers examining how to apply these techniques to low-dimensional topology to gather structural insights. We start by focusing on dimensionality reduction, with further analysis using supervised machine learning techniques [24] and persistent homology [15] forthcoming.
Low-dimensional topology, and knot theory in particular, is among the most data-rich of mathematical subfields. Tabulating data concerning knots is a long-standing tradition dating back to the 1860s [48]. As computing power improves and people continue to search for answers to fundamental questions in knot theory such as the Jones unknot conjecture [31] and the hyperbolic volume conjecture [41], people have continued to enlarge our tabulations of known knots. Recently, Burton tabulated all the prime knots up to and including 19 crossings [9], finding over 350 million total prime knots. This tabulation was summarized as part of the software package Regina [8] with published DT-codes as defined in [14]. A separate effort at tabulating large numbers of unique knots was also recently undertaken by Tuzun and Sikora in their demonstration that no counterexamples to the Jones unknot conjecture exist up to 23 crossings [47]. In their tabulation, well over 10 trillion knot diagrams were considered using methods distinct from Burton's. While the exact number of distinct knots with a given number of crossings is still unknown, we do know that this number grows at an exponential rate as we increase the number of crossings [19]. This ensures that data-related questions arising from knot theory naturally fit into a big data framework.
The first major contribution of this paper is to demonstrate a reliable technique by which manifold learning can be applied to infinite data sets where representative sampling is impossible or impractical. We describe how to analyze and construct a usable filtration on the infinite set of knots, where the Jones polynomial is unbounded in degree. The experimental results demonstrate that our Jones invariant data is well approximated by a three-dimensional manifold, consistent across our filtration up to computational limits.
Meaningfully applying dimensionality reduction techniques to our data proved difficult. It is unknown how to create a representative subsample of Jones polynomial data. Any conclusions drawn must remain consistent when choosing comparable subsets of the data. Results were sensitive to the choice of how to encode the data for comparison. Using the same approach as was used to find patterns in the more general colored Jones polynomial [37, 7, 39, 6, 2, 18] provided data where any structures proved transient. Fortunately, exactly one model for encoding the data provided results that were both remarkable and persistent; it is discussed in Section 3.
The requirement that results be persistent across comparable subsets of the data demanded detailed analysis of how to filter sets of knots into related families. Knot theory has always driven researchers to calculate knot invariants and organize them into data tables in a process called knot tabulation [25, 26]. Originally envisioned as a way to distinguish different atomic properties [48], modern work has suggested that a classification system could assist in the understanding of glueball particles [20]. Since then a series of systems has been suggested for ordering or relating knots within these ever-expanding tabulations [13, 10].
Upon generating our Jones polynomial data for all knots up to 17 crossings we examined several methods for organizing the data. We considered the crossing number, Rasmussen s-invariant [43], signature, unknotting number, and a wide variety of properties intrinsic to the Jones data itself. As we discuss in Section 5 below, organizing the data by crossing number yields persistent results, despite the fact that the set varies dramatically in both the ratio of alternating to non-alternating knots present in the sample and the expanding size of the data considered.
The second major result from this study is to demonstrate a new tool for comparing knot invariants and understanding their structure. Applying dimensionality reduction to the Jones data using Principal Component Analysis (PCA) [51] as in Figure 1, we see a rich three-dimensional structure with large scale features differentiated by their signature, together with subfeatures of smaller 'tendrils' of as yet unknown significance. We propose the following definition for understanding the results of dimensionality reduction via PCA, given the discussion of Remark 2.1.
Definition 1.1 (Dimension of a polynomial knot invariant).
Let $n$ be the smallest value for which the normalized explained variances of the first $n$ PCA components sum to more than 95%. If this value remains stable across the crossing filtration, then the knot invariant has dimension $n$.
Under this definition the Jones polynomial is 3-dimensional, the polynomial of Bar-Natan and van der Veen [5] is 2-dimensional, and the Alexander polynomial [1] is 1-dimensional. In this paper we will focus solely on understanding how the Jones polynomial is 3-dimensional under this definition, while the remaining calculations will be published in upcoming work.
To perform this analysis we relied on the tools from the KnotTheory package [4] for calculating the Jones polynomial for all knots. The DT codes we used for knots up to 16 crossings were exactly those in the KnotTheory package, while to calculate data for the 17 crossing knots we added the DT codes from Burton's Regina program [9] to the KnotTheory data tables. The PCA calculations were done using the scikit-learn library [42] and in Mathematica [28]. Finally, knot figures were generated using Inkscape [29].
2. Background
In this section we briefly provide the reader with an overview of the definitions and notions used in the article. We begin with the definition of the Jones polynomial, followed by an overview of the basic properties of the PCA technique.
2.1. The Jones Polynomial
The Jones polynomial [31] and its generalizations [40, 44, 49] play a fundamental role in low-dimensional topology [40, 32, 41]. The discovery of the Jones polynomial has led to multiple major discoveries in various areas of low-dimensional topology [34, 33, 12, 3, 36, 21, 35]. Understanding the discriminative power of the Jones polynomial, its relations to other classical invariants of knots and links, as well as the information encoded in its coefficients, conjectured to be related to the hyperbolic volume of the knot, are important problems in low-dimensional topology. Furthermore, the coefficients of the Jones polynomial and its generalizations have been proven to be related to many interesting areas in number theory, and have been the subject of an extensive research effort [37, 17, 22, 7, 39, 23, 6, 2, 18].
Let $K$ be a knot in $S^3$. The Jones polynomial of $K$, denoted by $V_K(q)$, is a Laurent polynomial in $q$. The Jones polynomial can be characterized by the requirements that $V_U(q) = 1$, where $U$ is the unknot, and that it satisfies the following skein relation:
$$q^{-1} V_{L_+}(q) - q\, V_{L_-}(q) = \left(q^{1/2} - q^{-1/2}\right) V_{L_0}(q).$$
Here $L_+$, $L_-$ and $L_0$ are three oriented link diagrams that are identical everywhere except at a single crossing, as shown on the right in Figure 2. The skein relation can be used to compute the Jones polynomial of any given link $L$.
2.2. Principal Component Analysis
Principal Component Analysis (PCA) is one of the most popular multivariate statistical techniques in big data analysis. PCA is defined as an orthogonal linear transformation that maps a given data set to a new orthonormal basis that is aligned with the core properties of the data. To accomplish this, PCA finds and ranks the linear directions along which the data has maximal variance.
Given the data set $X = \{x_1, \ldots, x_N\}$, where $x_i \in \mathbb{R}^d$ and $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ is the sample mean, the PCA linear transformation is obtained by computing the eigendecomposition of the covariance matrix $C$ defined by
$$C = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})(x_i - \bar{x})^T.$$
The matrix $C$ is a symmetric matrix by definition and hence is diagonalizable by an orthogonal basis. Therefore we can find an orthogonal matrix $W$ and a diagonal matrix $\Lambda$ such that $C = W \Lambda W^T$. Thus, the matrix $C$ defines, via $W$ and $\Lambda$, an orthonormal eigensystem $\{(\lambda_i, w_i)\}_{i=1}^{d}$, where $C w_i = \lambda_i w_i$, with
(2.1) $\quad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d \ge 0.$
To compute the orthonormal basis in $\mathbb{R}^d$ whose directions maximize the variance, PCA finds the first principal component, which is the direction in $\mathbb{R}^d$ along which projections have the largest variance among all possible directions, and then iterates. In particular,
(2.2) $\quad w_1 = \operatorname*{argmax}_{\|w\| = 1} \sum_{i=1}^{N} \langle x_i - \bar{x}, w \rangle^2,$
where $\bar{x}$ denotes the sample mean in the standard manner. The second principal component vector, $w_2$, is the direction that maximizes variance among all directions that are orthogonal to the first principal component, with similar properties for $w_3, \ldots, w_d$. Nontrivially, the vectors defined in (2.2) are precisely the eigenvectors of the covariance matrix $C$; we refer the reader to [45] for further details.
Each eigenvalue, $\lambda_i$, from above explains the variance associated with its paired eigenvector. As noted in (2.1), the highest eigenvalue corresponds to the direction of highest variance in the data, which is the first principal component, $w_1$. Correspondingly, the eigenvalues obtained from the covariance matrix are often referred to as the explained variances. The normalized explained variance is defined by:
$$\hat{\lambda}_i = \frac{\lambda_i}{\sum_{j=1}^{d} \lambda_j}.$$
Remark 2.1.
The value $\hat{\lambda}_i$ describes the percentage of the total variance in the $i$-th direction. Intuitively, the normalized explained variance provides a measure of the degree of importance of each corresponding eigenvector. As these eigenvalues are ordered by (2.1), one can also refer to the $n$ most important PCA directions, where $n \le d$; these are the vectors corresponding to the $n$ largest eigenvalues. These properties of the PCA orthonormal eigensystem can be used to obtain a heuristic assessment of the dimensionality of distinguishing features within the original data set $X$. This heuristic is obtained by measuring the cumulative values of the normalized explained variance, $\sum_{i=1}^{n} \hat{\lambda}_i$, where $n \le d$. In practice, we choose the smallest $n$ such that $\sum_{i=1}^{n} \hat{\lambda}_i \ge \alpha$, where $\alpha$ is a reasonable percentage value for which the chosen PCA vectors still capture the original data. In this work we have chosen $\alpha$ to be $95\%$.
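The eigensystem computation and the cumulative-variance heuristic above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's pipeline; the function names and the toy rank-2 data set are ours.

```python
import numpy as np

def pca_eigensystem(X):
    """Eigendecomposition of the covariance matrix of a point cloud X (N x d).

    Returns the eigenvalues in descending order and the matching unit
    eigenvectors as rows, i.e. the eigensystem {(lambda_i, w_i)}.
    """
    Xc = X - X.mean(axis=0)            # center the data at the sample mean
    C = (Xc.T @ Xc) / len(X)           # covariance matrix C
    lam, W = np.linalg.eigh(C)         # eigh returns ascending order for symmetric C
    order = np.argsort(lam)[::-1]      # re-sort to lambda_1 >= lambda_2 >= ...
    return lam[order], W[:, order].T

def significant_components(lam, alpha=0.95):
    """Smallest n whose cumulative normalized explained variance is >= alpha."""
    nev = lam / lam.sum()              # normalized explained variances
    return int(np.searchsorted(np.cumsum(nev), alpha) + 1)

# Toy cloud that is exactly 2-dimensional inside R^5, with variances ~9 and ~4.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(1000, 2)) * np.array([3.0, 2.0]),
               np.zeros((1000, 3))])
lam, W = pca_eigensystem(X)
print(significant_components(lam))     # 2 for this rank-2 cloud
```

The same quantities are available from scikit-learn's `PCA` (its `explained_variance_ratio_` attribute is exactly the normalized explained variance), but the explicit eigendecomposition makes the heuristic transparent.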
3. Preparing the Data
Regarding the data sets used in this paper we note the following. Most knots exist in pairs, $(K, \overline{K})$, such as the left and right handed trefoil knots, where one knot becomes the other when all positive and negative crossings are switched everywhere, giving what is referred to as the mirror image of the knot. Knot tabulation efforts have generally accepted that it is not necessary to enumerate both of these paired knots when listing all knots, but many invariants are sensitive to this choice. For the Jones polynomial $V_{\overline{K}}(q) = V_K(q^{-1})$ [38, 31], while for the signature $\sigma(\overline{K}) = -\sigma(K)$ [38], and $s(\overline{K}) = -s(K)$ for the Rasmussen s-invariant as well [43].
In light of this symmetry, to reduce memory overhead and computation time, as well as to enhance the clarity of the associated data visualizations, we have chosen to include just one of either $K$ or $\overline{K}$ in our data set. We first chose the embedding where the signature was positive. If the signature was zero, we then chose $K$ or $\overline{K}$ to ensure the Rasmussen s-invariant was positive. When both the signature and s-invariant were zero, or one was unknown, we chose the embedding of $K$ or $\overline{K}$ for which the most extreme degree was positive (e.g. choosing the embedding whose Jones polynomial has extreme degrees $4$ and $-2$ over its mirror, whose extreme degrees are $2$ and $-4$) as in Table 1. Once a choice between $K$ and $\overline{K}$ was made we constructed each point cloud by the following general method.
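The tie-breaking rules above can be sketched as follows. This is our own illustrative code, not the paper's implementation: polynomials are hypothetical and represented as `{degree: coefficient}` dicts, and mirroring uses the fact that $V_{\overline{K}}(q) = V_K(q^{-1})$ simply negates every degree.

```python
def mirror(poly):
    """Jones polynomial of the mirror image: q -> 1/q negates every degree."""
    return {-d: c for d, c in poly.items()}

def choose_embedding(poly, signature=None, s_invariant=None):
    """Pick V(K) or V(mirror of K) by the tie-breaking rules of Section 3:
    positive signature first, then positive s-invariant, then the embedding
    whose most extreme degree is positive."""
    for inv in (signature, s_invariant):
        if inv is not None and inv != 0:
            return poly if inv > 0 else mirror(poly)
    # e.g. extreme degrees (4, -2) are preferred over the mirror's (2, -4)
    return poly if max(poly) >= -min(poly) else mirror(poly)

# Hypothetical knot: signature 0, s-invariant unknown, Jones spanning -4..2.
V = {-4: -1, -3: 1, 2: 1}
print(choose_embedding(V))   # degrees flipped so the span becomes -2..4
```

The signature and s-invariant branches mirror the order of preference stated in the text; only when both are zero or unavailable does the degree rule apply.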
Given a finite family of knots $\mathcal{K}$ and a single variable knot polynomial invariant $P$, we want to construct a point cloud $X_P(\mathcal{K})$. The procedure we describe here works for any finite set of knots and any single variable polynomial knot invariant. The steps for creating this point cloud are as follows:

(1) For each $K \in \mathcal{K}$ we compute the polynomial invariant $P_K$ or $P_{\overline{K}}$ as in the second column of Table 1.

(2) Convert each polynomial to a coefficient vector $v_K$ or $v_{\overline{K}}$.

(3) The set of coefficient vectors are aligned by padding each vector with zeroes to ensure that the coefficient of $q^0$ is in a consistent position and all vectors are of the same length, as in the right column of Table 1 and elaborated upon below. Each padded vector is denoted $\hat{v}_K$.
An example of this method is given for a small set of knots in Table 1. There we present the choice of embedding for each knot, the corresponding Jones polynomial, and the resulting vector in the point cloud of the family of knots up to 6 crossings.
Knot | Jones polynomial | Aligned coefficient vector
$0_1$ | $1$ | $(0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0)$
$3_1$ | $q + q^3 - q^4$ | $(0, 0, 0, 0, 1, 0, 1, -1, 0, 0, 0)$
$4_1$ | $q^{-2} - q^{-1} + 1 - q + q^2$ | $(0, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0)$
$5_1$ | $q^2 + q^4 - q^5 + q^6 - q^7$ | $(0, 0, 0, 0, 0, 1, 0, 1, -1, 1, -1)$
$5_2$ | $q - q^2 + 2q^3 - q^4 + q^5 - q^6$ | $(0, 0, 0, 0, 1, -1, 2, -1, 1, -1, 0)$
$6_1$ | $q^{-2} - q^{-1} + 2 - 2q + q^2 - q^3 + q^4$ | $(0, 1, -1, 2, -2, 1, -1, 1, 0, 0, 0)$
$6_2$ | $q^{-1} - 1 + 2q - 2q^2 + 2q^3 - 2q^4 + q^5$ | $(0, 0, 1, -1, 2, -2, 2, -2, 1, 0, 0)$
$6_3$ | $-q^{-3} + 2q^{-2} - 2q^{-1} + 3 - 2q + 2q^2 - q^3$ | $(-1, 2, -2, 3, -2, 2, -1, 0, 0, 0, 0)$
Observe that each vector $\hat{v}_K$ does not depend solely on the knot $K$, but rather also depends on the family $\mathcal{K}$. Indeed, the coefficient vectors $v_K$ for various knots frequently are of differing lengths and belong to different Euclidean spaces. Constructing $\hat{v}_K$ depends explicitly on the family $\mathcal{K}$, since we pad the shorter vectors in this set to match the longest ones. Even when two polynomials in a family have coefficient vectors of the same length, as with $6_1$ and $6_2$ in Table 1, alignment frequently pads the vectors differently. In order to obtain the point cloud data $X_V(\mathcal{K}_6)$, where $\mathcal{K}_6$ is the family of all knots with at most 6 crossings, we align the vectors as shown in the right column of Table 1, padding with zeros as necessary.
For generating the data here we used the KnotTheory package [4] to compute the signature and the Jones polynomial for all knots up to 17 crossings, and received a collection of the s-invariants for all knots up to 15 crossings from Alex Shumakovitch [46].
For each invariant, we then defined coefficient vectors for each polynomial, padding the vectors with zeroes as needed to align the position of $q^0$, as done to go from left to right in Table 1.
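The construction of Section 3 can be sketched with NumPy. This is an illustrative reimplementation under our own representation (Laurent polynomials as `{degree: coefficient}` dicts); the two example polynomials are the standard Jones polynomials of the figure-eight knot and the right-handed trefoil.

```python
import numpy as np

def aligned_point_cloud(polys):
    """Turn a family of Laurent polynomials ({degree: coeff} dicts) into
    equal-length coefficient vectors aligned on the q^0 position."""
    lo = min(min(p) for p in polys)     # most negative degree in the family
    hi = max(max(p) for p in polys)     # largest degree in the family
    cloud = np.zeros((len(polys), hi - lo + 1))
    for row, p in zip(cloud, polys):
        for d, c in p.items():
            row[d - lo] = c             # all other positions stay zero-padded
    return cloud

# V(4_1) = q^-2 - q^-1 + 1 - q + q^2 and V(3_1) = q + q^3 - q^4.
family = [{-2: 1, -1: -1, 0: 1, 1: -1, 2: 1},
          {1: 1, 3: 1, 4: -1}]
print(aligned_point_cloud(family))      # two aligned vectors spanning -2..4
```

Because `lo` and `hi` are taken over the whole family, the same knot gets a different padded vector when placed in a larger family, exactly the dependence on $\mathcal{K}$ discussed above.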
4. The Filtration Method
In data analysis one commonly has to draw conclusions based on incomplete data sets. Therefore, one hopes to find those properties that do not evolve, but rather persist unchanged throughout the data set, suggesting that the conclusions are fundamental to all of the data rather than a property of whichever special subset is being considered. To address this issue, we have applied manifold learning to filtrations of our data, that is, nested sequences of point clouds indexed with respect to some increasing parameter. This allows us to both detect essential, conjecturally constant, features, while also detecting those that meaningfully evolve with respect to different parameters.
Definition 4.1.
A filtration of a set $S$ is a finite sequence of nested sets $S_1, S_2, \ldots, S_m$ such that:
$$S_1 \subseteq S_2 \subseteq \cdots \subseteq S_m \subseteq S.$$
For our purposes, let each set $S_i$ be a family of knots $\mathcal{K}_i$, and let $P$ be a single variable polynomial knot invariant. The nested sequence in Definition 4.1 induces a corresponding filtration of point clouds, denoted:
(4.1) $\quad X_P(\mathcal{K}_1) \subseteq X_P(\mathcal{K}_2) \subseteq \cdots \subseteq X_P(\mathcal{K}_m).$
Now even though the corresponding vectors $\hat{v}_K \in X_P(\mathcal{K}_i)$ and $\hat{v}'_K \in X_P(\mathcal{K}_j)$ often belong to different Euclidean spaces, there is always a natural mapping that sends a point in $X_P(\mathcal{K}_i)$ to the corresponding point in $X_P(\mathcal{K}_j)$. Namely, the two vectors can be aligned on the position of $q^0$, padding $\hat{v}_K$ with the necessary zeros so it is embedded in the same Euclidean space as $\hat{v}'_K$. Using this mapping we can meaningfully embed the point cloud $X_P(\mathcal{K}_i)$ inside $X_P(\mathcal{K}_j)$ whenever $i \le j$.
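The natural mapping between the Euclidean spaces of two filtration steps is a shift-and-pad. A minimal sketch, under the assumption that each space is described by its lowest represented degree and its dimension (the names `lo_small`, `lo_big`, `dim_big` are ours):

```python
import numpy as np

def embed(vec, lo_small, lo_big, dim_big):
    """Embed a coefficient vector whose first entry is degree lo_small into
    the larger space whose first entry is degree lo_big (lo_big <= lo_small),
    keeping the q^0 positions aligned."""
    out = np.zeros(dim_big)
    shift = lo_small - lo_big          # offset that aligns the q^0 positions
    out[shift:shift + len(vec)] = vec
    return out

# Hypothetical vector spanning degrees -1..2, mapped into a space for -3..5.
v = np.array([1.0, -1.0, 2.0, -1.0])
print(embed(v, lo_small=-1, lo_big=-3, dim_big=9))
```

Since padding only appends zeros, the embedded point has the same $L^2$ norm as the original, so the norm filtration of the next section is compatible with these inclusions.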
Studying the eigensystems generated by PCA on a nested sequence of point clouds provides insight on how their principal component vectors and corresponding normalized explained variances evolve as the size of the point cloud grows. This can provide more information about the distribution from which this point cloud is drawn. For instance, by considering the relative sizes between consecutive normalized explained variances from the filtration we get information on the shape of the point cloud.
There are over 50 distinct well-known invariants used to distinguish knots in tabulations [11]. Most of these naturally define ordered families of knots. One such natural filtration of families of knots is obtained by considering all knots, up to ambient isotopy, whose minimal number of crossings is less than or equal to a particular value $n$. Let $\mathcal{K}_n$ denote the family of all distinct knots with crossing number less than or equal to $n$. We wish to study the point clouds obtained by the following filtration
(4.2) $\quad \mathcal{K}_1 \subseteq \mathcal{K}_2 \subseteq \cdots \subseteq \mathcal{K}_n \subseteq \cdots$
When paired with a single variable polynomial knot invariant $P$, the filtration (4.2) induces a filtration of point clouds as in (4.1). We will refer to the point cloud filtration induced by (4.2) as a crossing filtration. To specify the PCA eigensystem obtained at each point cloud in a crossing filtration we will associate to the point cloud $X_P(\mathcal{K}_n)$ the PCA eigensystem $\{(\lambda_i^n, w_i^n)\}_i$.
The second natural filtration on point clouds associated with polynomial knot invariants is induced by the $L^2$ norm. We default to the $L^2$ norm due to its ease of use and scalability within the scikit-learn package; comparisons against other norms did not produce significantly different results. Given $\mathcal{K}$, a finite family of knots, and $P$, a single variable polynomial knot invariant, then for any $r > 0$ we define
$$X_P^{r}(\mathcal{K}) = \left\{ x \in X_P(\mathcal{K}) : \| x \|_2 \le r \right\}.$$
For a finite sequence of positive real numbers $r_1, r_2, \ldots, r_m$, we define the filtration:
(4.3) $\quad X_P^{r_1}(\mathcal{K}) \subseteq X_P^{r_2}(\mathcal{K}) \subseteq \cdots \subseteq X_P^{r_m}(\mathcal{K}),$
where $r_1 < r_2 < \cdots < r_m$. We call the point cloud filtration given in (4.3) the norm filtration and will denote the PCA eigensystem obtained from $X_P^{r_j}(\mathcal{K})$ by $\{(\lambda_i^{r_j}, w_i^{r_j})\}_i$.
5. Application to the Jones polynomial
In this section we outline the results of running PCA on point clouds obtained from the Jones polynomial. In total we use all 9,755,329 knots of at most 17 crossings. In Subsection 5.1 we discuss how filtering by the norm illustrates the shape of the point cloud using the filtration discussed in Section 4, while in Subsection 5.2 we examine how the crossing filtration illuminates persistent features of the point cloud. To see if this behavior continues at higher crossing numbers, we consider the point clouds for two special subfamilies of knots up to 2001 crossings in Section 6.
Our analysis utilizes two primary visualizations of the PCA data. In the first, the relative importance of each PCA eigenvector can be visualized by plotting the normalized explained variance $\hat{\lambda}_i$ as a function of the ordered indices $i$ of the eigensystem, as discussed in Remark 2.1. Similarly, one can also plot the cumulative values of the normalized explained variance as a function of $n$. Our second visualization follows the trajectory of a sequence of PCA eigensystems across each step of the filtration by comparing the values of $\hat{\lambda}_i$ and the directions of the principal components $w_i$. Ideally, all the principal component vectors would overlap across the filtration; we measure their deviation by using the classical dot product between them:
(5.1) $\quad \cos\left(\theta_i^j\right) = \left\langle w_i^j, w_i^{j+1} \right\rangle,$
(5.2) $\quad \theta_i^j = \arccos \left| \left\langle w_i^j, w_i^{j+1} \right\rangle \right|.$
Here $\theta_i^j$ is the angle between the $i$-th principal component in the $j$-th and $(j+1)$-st eigensystems.
5.1. Structure from the Norm Filtration
Having prepped and filtered the data as in Sections 3 and 4, we set up the norm filtration of (4.3) using a sequence of radii, with the resulting point clouds denoted in Figure 3. Each radius is chosen to restrict to the central portion of the point cloud, doubling the number of points considered with each iteration.
Figure 3 illustrates the first type of visualization, where we consider the plotted values of the normalized explained variance. The left-hand set of curves shows the normalized explained variance of each principal component, ordered as in (2.1), while the right-hand graph shows the cumulative normalized explained variance. In both graphs the PCA calculation is done on the filtration with each family denoted by a distinct color. The exponential division of the point cloud is both for eventual contrast with the exponential growth of the crossing filtration and to ensure that at each step the amount of new data is equal to the amount of preexisting data.
Two trends quickly appear in Figure 3. The first principal component becomes more significant as the bounding radius of the point cloud increases, while subsequent components decrease in prominence. This increase in the first principal component exceeds the decrease in subsequent components, and as a result the cumulative normalized explained variance $\sum_{i=1}^{n} \hat{\lambda}_i$ increases along the filtration for each fixed $n$. Following the bound set out in Remark 2.1, we see that the number of significant components decreases along the filtration, settling at $n = 3$ at the largest radii, which suggests that the point cloud approximates a 3-dimensional manifold.
These trends are affirmed by the second type of visualization in Figure 4, where the trajectories of the first six components of the normalized PCA eigensystem are followed across each step of the filtration. The left graph illustrates the gradual growth of the first principal component at the expense of the remaining components as the radius of our point cloud increases. Quantitatively, we note only that the maximum relative spread for any of the 3 significant PCA components remains small.
The quality of a filtration is not only reflected in the stability of the $\hat{\lambda}_i$'s; we can also measure the alignment of sequential eigenvectors as in (5.1). On the right-hand side of Figure 4 we plot these angles for the leading principal components across the sequence of radii. The principal components stabilize as the filtration radius increases, but two details stand out. First, at the smaller radii the variation between important eigenvectors reduces to a stable point. Secondly, the angles between secondary sequential components begin to stabilize only at the larger radii. Furthermore, the significance of a principal component does not appear to correlate directly with its relative degree of stability across the filtration.
Figure 4 suggests that in the center of the point cloud the data spreads fairly evenly in 2 directions, before changing direction between radii of 50 and 200 and pronouncedly extending out in a single new direction. The cutoffs in the data based on the doubling radii of the point cloud suggest that the data is disproportionately densely packed towards the center of the distribution and is sparse towards the extremes. We consider the shape of the data further when discussing Figure 7.
5.2. Persistent Properties in the Crossing Filtration
Following the same steps we now consider the crossing number filtration from (4.2). This filtration presents distinct features from the norm filtration. The number of knots in each step of the filtration increases exponentially [19], so to ensure a sufficient number of data points in the smallest filtration step, we only consider families above a minimum crossing number. The visualization of Figure 5, when compared to Figure 3, presents a strong contrast.
In Figure 5 the normalized explained variances and cumulative normalized explained variances are essentially indistinguishable across the filtration. Following the bound set out in Remark 2.1, we find that $n = 3$ for every family in the filtration, with only the 12 and 13 crossing families as marginal exceptions. We also consider the crossing number filtration analogue of Figure 4 in Figure 6.
Figure 6 stands in marked contrast to its analogue for the norm filtration. The normalized explained variance is remarkably stable for the significant components, with a small maximal relative spread and a significant improvement in consistency. The principal components also behave differently when measuring the angle between $w_i^k$ and $w_i^{k+1}$ compared to the norm filtered case. There may be more total variation across the filtration, but that variation is more orderly, with small angles for all $k$ and all significant $i$. It is not surprising that these filtrations have some amount of variation, as the minimal dimension of the Euclidean space in which $X_V(\mathcal{K}_k)$ can be embedded strictly increases with $k$. Of further interest is a mild periodicity in the variation of the angles across $k$, suggesting that the change in the distribution of knots in $\mathcal{K}_k$ depends on the parity of $k$.
The disparity in stabilization behavior between the norm filtration and the crossing number filtration raises the question of whether there is something special about either one. Looking at the distribution of norms as in the lower right of Figure 7, it becomes immediately apparent that the norm filtration suffers from some structural deficiencies. Namely, the subfamily of non-alternating knots has a distribution skewed towards lower norms, while the norms of alternating knots favor a broader distribution. We observed in discussing Figure 4 that the angles between principal components experienced an inflection and rapid stabilization in the three most important PCA components between radii of 50 and 200. Additional experimentation has shown that these characteristics stabilize almost completely at larger radii. In Figure 7 we see that, for every family in the crossing filtration, the non-alternating knots contribute an insignificant fraction of new data points to the PCA calculation by the point where the alternating knot distribution peaks. Similarly, the tail of this distribution continues stretching as the crossing number increases, so in each case only a small number of data points are added to the point cloud beyond a given radius, and it is of little surprise that the principal components mostly stabilize after that point.
This dependence on the norm distributions of the alternating and non-alternating knots suggests that we should also consider these knot classes by themselves. Let $X_V^{non}(\mathcal{K}_n)$ denote the point cloud of non-alternating knots of at most $n$ crossings built using the Jones polynomial $V$, and let $X_V^{alt}(\mathcal{K}_n)$ denote the analogous point cloud of alternating knots.
In Figure 8 we first consider the persistence of the PCA eigensystem features under the crossing number filtration restricted to alternating knots. The normalized explained variances on the left of Figure 8 suggest that they follow the same general pattern as expressed in Figure 4, again with a stable number of significant components and a small relative spread.
Next we consider the persistence of the PCA eigensystem features under the crossing number filtration restricted to non-alternating knots, as in Figure 9. Like the crossing filtration on alternating knots, the PCA eigensystem values for the crossing filtration on non-alternating knots are stable, though they settle at slightly different values. It is worth noting that while the normalized explained variances of the alternating knots and non-alternating knots settle at different values, their combination, at steadily diverging weights as illustrated by the relative proportions in Figure 7, remains not just consistent, as noted in Figure 6, but exhibits even less relative spread.
Even considering $\hat{\lambda}_3$, the final normalized eigenvalue deemed significant by Remark 2.1, we find that the relative spread for all knots remains small, as do the relative spreads for the alternating and non-alternating knots, when restricting to crossing filtration steps large enough to ensure at least 1000 knots in every step. The possible implications of these observations bear further investigation.
6. Examining Jones Structure at Higher Crossing Number
To provide insight into what happens at higher crossing numbers we looked at two subfamilies of knots whose Jones polynomials are easily computed at high crossing number. We consider the torus knots up to 2000 crossings and the positive double twist links of up to 2001 crossings. Let $X_V(\mathcal{T})$ denote the point cloud of torus knots up to 2000 crossings, and $X_V(\mathcal{D})$ the point cloud of single strand positive double twist knots up to 2001 crossings, which were calculated using [4] and [16] respectively. The three-dimensional PCA projections of $X_V(\mathcal{T})$ and $X_V(\mathcal{D})$ are presented in Figure 10 and suggest that interesting structures exist. Yet our results are inconsistent with those of Section 5 and reveal more about the challenges of using manifold learning than they do specifically about the dimension of the Jones data.
Studying the PCA eigensystems of $X_V(\mathcal{D})$ using the top row of Figure 11, it is easy to see that $X_V(\mathcal{D})$ should not be considered a 5003-dimensional manifold. In fact, our heuristic suggests that $X_V(\mathcal{D})$ approximates a 4-dimensional manifold. This suggests that this subfamily of knots approximates a higher dimensional manifold than we measured for $X_V(\mathcal{K}_{17})$, and that the apparent stability in the $\hat{\lambda}_i$'s seen in Figure 6 might slowly evolve as crossing number increases.
We investigated this phenomenon further for torus knots. A very different picture emerges from the analysis of the bottom row of Figure 11. While the left visualization is broadly similar to the results for $X_V(\mathcal{D})$, the right chart displays a significant difference. Here the cumulative normalized explained variance approaches 1 much more slowly, requiring far more components to pass the 95% threshold and taking even longer to reach the stricter thresholds used by some practitioners.
Two details about $X_V(\mathcal{T})$ stand out in contrast to $X_V(\mathcal{D})$. First, $X_V(\mathcal{T})$ contains a mere 4501 data points, unlike the over 500,000 in $X_V(\mathcal{D})$. Second, while $X_V(\mathcal{D})$ lives in a 5003 dimensional space, $X_V(\mathcal{T})$ lives in a 2998 dimensional space. It is apparent that these two point clouds are not directly comparable, even though both consist of Jones coefficient vectors. This suggests that the approximate dimension of a point cloud depends on how it is sampled, especially for nonrandom samples. Furthermore, a direct examination of the sparsely populated $X_V(\mathcal{T})$ supports the idea that, for a sample whose size does not even double the dimension of the space it is embedded in, it is difficult for many dimensions to carry meaningful explained variance.
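The sample-size limitation above is a general linear-algebra fact: $N$ points yield a covariance matrix of rank at most $N - 1$ after centering, so at most $N - 1$ principal components can carry any variance, regardless of the ambient dimension. A small NumPy demonstration with our own toy numbers:

```python
import numpy as np

# N points in d >> N dimensions: the centered covariance matrix has rank
# at most N - 1, so at most N - 1 eigenvalues can be nonzero.
rng = np.random.default_rng(3)
N, d = 10, 100                         # far fewer samples than dimensions
X = rng.normal(size=(N, d))
Xc = X - X.mean(axis=0)                # centering removes one degree of freedom
lam = np.linalg.eigvalsh(Xc.T @ Xc / N)[::-1]
print(int(np.sum(lam > 1e-10)))        # 9 nonzero eigenvalues, not 100
```

For $X_V(\mathcal{T})$, with 4501 points in a 2998-dimensional space, the bound is not binding, but the sparsity of the sample still makes the tail of the spectrum unreliable.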
7. Conclusions and future work
Studying the features of datasets that arise in pure mathematics presents challenges distinct from those one faces when working with real world data. In this paper we have outlined how to utilize one of the most traditional dimensionality reduction techniques, Principal Component Analysis, to study point clouds of data in this context. In particular, we introduced the notion of filtrations to analyze a nested sequence of datasets. The method introduced here is general and applicable to other scenarios where a conclusion about an infinite dataset is required.
Having explicitly described how this technique can be used to analyze the structure of the Jones polynomial data, immediate extensions of this work are to study point clouds arising from other one variable polynomial invariants, such as the Alexander polynomial, and to investigate the substructures illustrated in Figure 1. In our upcoming works we will use other big data analysis techniques in the context of data in low-dimensional topology, further outlining how they can be used to compare numerical and polynomial knot invariants. Additional dimensionality reduction calculations using ISOMAP on the polynomial data affirm the results obtained using PCA. Preliminary research indicates that persistent homology confirms the existence of the substructures in the Jones polynomial data that also reflect potential relations between the Jones polynomial and the signature.
Acknowledgements
Computation for the work described in this paper was supported by the University of Southern California’s Center for High-Performance Computing (hpcc.usc.edu). RS was partially supported by the Simons Collaboration Grant 318086 and NSF DMS .
References
 (1928) Topological invariants of knots and links. Transactions of the American Mathematical Society 30 (2), pp. 275–306. Cited by: §1.
 (2011) Rogers–Ramanujan type identities and the head and tail of the colored Jones polynomial. arXiv preprint arXiv:1106.3948. Cited by: §1, §2.1.
 (1996) On the Melvin–Morton–Rozansky conjecture. Inventiones mathematicae 125 (1), pp. 103–133. Cited by: §2.1.
 (2011–2019) The Knot Atlas. Note: http://katlas.org Cited by: §1, §3, §6.
 (2017) A polynomial time knot polynomial. arXiv preprint arXiv:1708.04853. Cited by: §1.
 (2016) The colored Jones polynomial of singular knots. New York J. Math 22, pp. 1439–1456. Cited by: §1, §2.1.
 (2017) q-series and tails of colored Jones polynomials. Indagationes Mathematicae 28 (1), pp. 247–260. Cited by: §1, §2.1.
 (1999–2017) Regina: software for low-dimensional topology. Note: http://regina-normal.github.io/ Cited by: §1.
 (2018) The next 350 million knots. Note: http://regina-normal.github.io/data.html Cited by: §1.
 (2017) Knot fertility and lineage. Journal of Knot Theory and Its Ramifications 26. Cited by: §1.
 (December 29, 2017) KnotInfo: table of knot invariants. Note: http://www.indiana.edu/~knotinfo Cited by: §4.
 (2006) On the head and the tail of the colored Jones polynomial. Compositio Mathematica 142 (5), pp. 1332–1342. Cited by: §2.1.
 (2009) A partial ordering of knots through diagrammatic unknotting. Journal of Knot Theory and Its Ramifications 18. Cited by: §1.
 (1983) Classification of knot projections. Topology and its Applications 16 (1), pp. 19–31. Cited by: §1.
 (2000) Topological persistence and simplification. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 454–463. Cited by: §1.
 (2018) Twist regions and coefficients stability of the colored Jones polynomial. Transactions of the American Mathematical Society 370 (7), pp. 5155–5177. Cited by: §6.
 (2017) Pretzel knots and q-series. Osaka Journal of Mathematics 54 (2), pp. 363–381. Cited by: §2.1.
 (2018) Foundations of the colored Jones polynomial of singular knots. Bull. Korean Math. Soc. Cited by: §1, §2.1.
 (1987) The growth of the number of prime knots. In Mathematical Proceedings of the Cambridge Philosophical Society, Vol. 102, pp. 303–315. Cited by: §1, §5.2.
 (2007) Natural classification of knots. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 463. Cited by: §1.
 (2005) The colored Jones function is q-holonomic. Geometry & Topology 9 (3), pp. 1253–1293. Cited by: §2.1.
 (2016) The tail of a quantum spin network. The Ramanujan Journal 40 (1), pp. 135–176. Cited by: §2.1.
 (2017) The colored Kauffman skein relation and the head and tail of the colored Jones polynomial. Journal of Knot Theory and Its Ramifications 26 (03), pp. 1741002. Cited by: §2.1.
 (2005) The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27 (2), pp. 83–85. Cited by: §1.
 (1998) The first 1,701,936 knots. The Mathematical Intelligencer 20 (4), pp. 33–48. Cited by: §1.
 (2005) The enumeration and classification of knots and links. In Handbook of knot theory, pp. 209–232. Cited by: §1.
 (2016) A neural network approach to predicting and computing knot invariants. arXiv preprint arXiv:1610.05744. Cited by: §1.
 (2019) Mathematica, Version 12.0. Note: Champaign, IL Cited by: §1.
 (2011) Inkscape. Note: https://inkscape.org Cited by: §1.
 (2019) Deep learning the hyperbolic volume of a knot. arXiv preprint arXiv:1902.05547. Cited by: §1.
 (1997) A polynomial invariant for knots via von Neumann algebras. In Fields Medallists’ Lectures, pp. 448–458. Cited by: §1, §2.1, §3.
 (1997) The hyperbolic volume of knots from the quantum dilogarithm. Letters in Mathematical Physics 39 (3), pp. 269–275. Cited by: §2.1.
 (2005) Categorifications of the colored Jones polynomial. Journal of Knot Theory and its Ramifications 14 (01), pp. 111–130. Cited by: §2.1.
 (2015) On the AJ conjecture for knots. Indiana University Mathematics Journal 64 (4). Cited by: §2.1.
 (2000) Integrality and symmetry of quantum link invariants. Duke Mathematical Journal 102 (2), pp. 273–306. Cited by: §2.1.
 (2006) The colored Jones polynomial and the A-polynomial of knots. Advances in Mathematics 207 (2), pp. 782–804. Cited by: §2.1.
 (2018) A trivial tail homology for non-A-adequate links. Algebraic & Geometric Topology 18 (3), pp. 1481–1513. Cited by: §1, §2.1.
 (2012) An introduction to knot theory. Vol. 175, Springer Science & Business Media. Cited by: §3.
 (2013) The Bailey chain and mock theta functions. Advances in Mathematics 238, pp. 442–458. Cited by: §1, §2.1.
 (2001) The colored Jones polynomials and the simplicial volume of a knot. Acta Mathematica 186 (1), pp. 85–104. Cited by: §2.1.
 (2011) An introduction to the volume conjecture. Interactions between hyperbolic geometry, quantum topology and number theory 541, pp. 1–40. Cited by: §1, §2.1.
 (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, pp. 2825–2830. Cited by: §1.
 (2010) Khovanov homology and the slice genus. Inventiones mathematicae 182 (2), pp. 419–447. Cited by: §1, §3.
 (1990) Ribbon graphs and their invariants derived from quantum groups. Communications in Mathematical Physics 127 (1), pp. 1–26. Cited by: §2.1.
 (2014) A tutorial on principal component analysis. arXiv preprint arXiv:1404.1100. Cited by: §2.2.
 (2019) Private communication. George Washington University. Cited by: §3.
 (2018) Verification of the Jones unknot conjecture up to 23 crossings. Note: arXiv:1809.02285 Cited by: §1.
 (1869) On vortex motion. Trans. R. Soc. Edinburgh 25, pp. 217–260. Cited by: §1.
 (1988) The Yang–Baxter equation and invariants of links. Inventiones mathematicae 92 (3), pp. 527–553. Cited by: §2.1.
 (2018) Using neural networks to classify knots: data mining and deep learning in knot theory. Cited by: §1.
 (1987) Principal component analysis. Chemometrics and Intelligent Laboratory Systems 2 (1–3), pp. 37–52. Cited by: §1.