Simpler, faster and shorter labels for distances in graphs
Abstract
We consider how to assign labels to any undirected graph with nodes such that, given the labels of two nodes and no other information regarding the graph, it is possible to determine the distance between the two nodes. The challenge in such a distance labeling scheme is primarily to minimize the maximum label length and secondarily to minimize the time needed to answer distance queries (decoding). Previous schemes have offered different tradeoffs between label lengths and query time. This paper presents a simple algorithm with shorter labels and shorter query time than any previous solution, thereby improving the state of the art with respect to both label length and query time in a single algorithm. Our solution addresses several open problems concerning label length and decoding time and is the first improvement of label length in more than three decades.
More specifically, we present a distance labeling scheme with labels of length bits (throughout the paper, all logarithms are in base 2) and constant decoding time. This outperforms all existing results with respect to both size and decoding time, including Winkler’s (Combinatorica 1983) decades-old result, which uses labels of size and decoding time, and Gavoille et al. (SODA’01), which uses labels of size and decoding time. In addition, our algorithm is simpler than the previous ones. In the case of integral edge weights of size at most , we present almost matching upper and lower bounds for the label size : . Furthermore, for additive approximation labeling schemes, where distances can be off by up to an additive constant , we present both upper and lower bounds. In particular, we present an upper bound for additive approximation schemes which, in the unweighted case, has the same size (ignoring second order terms) as an adjacency labeling scheme, namely . We also give results for bipartite graphs as well as for exact and additive distance oracles.
1 Introduction
A distance labeling scheme for a given family of graphs assigns labels to the nodes of each graph from the family such that, given the labels of two nodes in the graph and no other information, it is possible to determine the shortest distance between the two nodes. The labels are assumed to be composed of bits. The main goal is to make the worst-case label size as small as possible while, as a subgoal, keeping query (decoding) time under control. The problem of finding implicit representations with small labels for specific families of graphs was first introduced by Breuer [13, 14], and efficient labeling schemes were introduced in [43, 51].
1.1 Distance labeling
For an undirected, unweighted graph, a naïve solution to the distance labeling problem is to let each label be a table with the distances to all the other nodes, giving labels of size around bits. For graphs with bounded degree it was shown [14] in the 1960s that labels of size can be constructed such that two nodes are adjacent whenever the Hamming distance [41] of their labels is at most . In the 1970s, Graham and Pollak [38] proposed to label each node with symbols from , essentially representing nodes as corners in a “squashed cube”, such that the distance between two nodes exactly equals the Hamming distance of their labels (the distance between and any other symbol is set to 0). They conjectured the smallest dimension of such a squashed cube (the so-called Squashed Cube Conjecture), and their conjecture was subsequently proven by Winkler [65] in the 1980s. This reduced the label size to , but the solution requires query time to decode distances. Combining [43] and [50] gives a lower bound of bits. A different distance labeling scheme of size and with decoding time was proposed in [36]. That article also raised finding the right label size as an open problem. Later, in [63], the algorithm from [36] was modified so that the decoding time was further reduced to with slightly larger labels, although still of size . That article raised as an open problem whether the query time can be reduced to constant. Distance labeling with short labels and simultaneously fast decoding time is a problem also addressed in textbooks such as [58]. Some of our solutions are simple enough to replace material in textbooks.
Addressing the aforementioned open problems, we present a distance labeling scheme with labels of size bits and with constant decoding time. See Table 1 and Figure 1 for an overview.
Distance labeling schemes for various families of graphs exist, e.g., for trees [5, 55], bounded treewidth [36], distance-hereditary [34], bounded clique-width [21], some non-positively curved plane [18], interval [35] and permutation graphs [10]. In [36] it is proved that distance labels require bits for trees, and bits for planar graphs, and bits for bounded degree graphs. In an unweighted graph, two nodes are adjacent iff their distance is . Hence, lower bounds for adjacency labeling apply to distance labeling as well, and adjacency lower bounds can be achieved by reduction [43] to induced-universal graphs, e.g. giving and for general and bipartite graphs, respectively. An overview of adjacency labeling can be found in [7].
1.2 Overview of results
For weighted graphs we assume integral edge weights from . Letting each node save the distance to all other nodes would require a scheme with labels of size bits. Let denote the shortest distance in between nodes and . An additive approximation scheme returns a value , where .
Throughout this paper we will assume that , since otherwise the naïve solution mentioned above is as good as ours. Ignoring second order terms, for general weighted graphs with constant decoding time we can achieve the upper and lower bounds for label length stated in Table 2. For bipartite graphs we also show a lower bound of and an upper bound of whenever .
We present, as stated in Table 3, several tradeoffs between decoding time, edge weight , and space needed for the second order term.
We also show that, for any with and , there exists a additive distance scheme using labels of size bits.
Finally, we present lower bounds for approximation schemes. In particular, for we prove that labels of bits are required for an additive distance labeling scheme.
1.3 Approximate distance labeling schemes and oracles
Approximate distance labeling schemes are well studied; see e.g., [36, 39, 40, 55, 62]. For instance, graphs of doubling dimension [59] and planar graphs [60] both enjoy schemes with polylogarithmic label length which return approximate distances below a factor of the exact distance. Approximate schemes that return a small additive error have also been investigated, e.g. in [17, 33, 48]. In [32], lower and upper bounds for additive schemes, , are given for chordal, AT-free, permutation and interval graphs. For general graphs the current best lower bound [32] for an additive scheme is . For , one needs bits since a additive scheme can answer adjacency queries in bipartite graphs. Using our approximation result, we achieve, by setting and , a additive distance labeling scheme which, ignoring second order terms, has the same size (namely bits) as an optimal adjacency labeling scheme. Somewhat related, [11] studies labeling schemes that preserve exact distances between nodes with minimum distance , giving an bit solution.
Approximate distance oracles introduced in [62] use a global table (not necessarily labels) from which approximate distance queries can be answered quickly. One can naïvely use the labels in a labeling scheme as a distance oracle (but not vice versa). For unweighted graphs, we achieve constant query time for additive distance oracles using bits in total, matching (ignoring second order terms) the space needed to represent a graph. Other techniques only reduce space for additive errors for . For exact distances in weighted graphs, our solution achieves bits for . This relaxes the requirement of in [28] (and slightly improves the space usage in that paper).
1.4 Second order terms are important
Chung’s solution in [19] gives labels of size for adjacency labeling in trees, which was improved to in [9] and in [12, 29, 30, 44] to for various special cases. A recent STOC’15 paper [7] improves the label size for adjacency in general graphs from to . Likewise, the second order term for ancestor relationship is improved in a sequence of STOC/SODA papers [2, 8, 4, 30, 31] (and [1]) to , giving labels of size .
1.5 Labeling schemes in various settings and applications
By using labeling schemes, it is possible to avoid costly access to large global tables, instead computing locally and in a distributed fashion. Such properties are used, e.g., in XML search engines [2], network routing and distributed algorithms [22, 25, 61, 62], dynamic and parallel settings [20, 47], graph representations [43], and other applications [45, 46, 54, 55, 56]. From SIGMOD, we see labeling schemes used in [3, 42] for shortest path queries and in [16] for reachability queries. Finally, we observe that compact hub labeling (a specific distance labeling scheme) is central for computing exact distances on real-world networks with millions of arcs in real time [23].
1.6 Outline of the paper
Section 3 illustrates some of our basic techniques. Sections 4 and 5 present our upper bounds for exact distance labeling schemes for general graphs. Section 6 presents upper bounds for approximate distances. Our lower bounds are rather simple counting arguments with reductions to adjacency and have been placed in Appendix A.
2 Preliminaries
Trees.
Given a rooted tree and a node of , denote by the subtree of consisting of all the descendants of (including itself). The depth of is the number of edges on the unique simple path from to the root of . For any rooted subtree of , denote by the root of , i.e., the node of with smallest depth. Denote by the forest obtained from by removing its root. Denote by the number of nodes of ; hence, represents its number of edges. Denote by the parent of the node in . Let denote the nodes on the simple path from to in . The variants and denote the same path without the first and last node, respectively.
Graphs.
Throughout we assume graphs to be connected. If a graph is not connected, we can add bits to each label, indicating the connected component of the node, and then handle components separately. We denote by the minimum distance (counted with edge weights) of a path in connecting the nodes and .
Representing numbers and accessing them.
We will need to encode numbers with base different from and sometimes compute prefix sums on a sequence of numbers. We apply some existing results:
Lemma 2.1 ([49]).
A table with integral entries in can be represented in a data structure of bits to support prefix sums in constant time.
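To make the interface of Lemma 2.1 concrete, here is a plain word-based sketch in Python. It is not the succinct encoding of [49] (which packs everything into lower-order extra bits); it only illustrates the queries that the structure must answer in constant time.

```python
class PrefixSums:
    """Constant-time prefix sums over a table of small integers.

    A naive O(n)-word sketch of the interface from Lemma 2.1; the
    succinct structure answers the same queries while using only a
    lower-order number of bits on top of the raw table.
    """

    def __init__(self, table):
        self.pre = [0]
        for x in table:
            self.pre.append(self.pre[-1] + x)

    def prefix_sum(self, i):
        """Sum of table[0..i-1], answered by a single array lookup."""
        return self.pre[i]

    def range_sum(self, i, j):
        """Sum of table[i..j-1] as a difference of two prefix sums."""
        return self.pre[j] - self.pre[i]
```

The difference trick in `range_sum` is exactly how the later sections turn "sum of delta values along a walk" into two constant-time lookups.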
Lemma 2.2 ([24]).
A table with elements from a finite alphabet can be represented in a data structure of bits, such that any element of the table can be read or written in constant time. The data structure requires precomputed word constants.
Lemma 2.3 (simple arithmetic coding).
A table with elements from an alphabet can be represented in a data structure of bits.
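A minimal sketch of the arithmetic coding in Lemma 2.3: the whole table is read as one number written in base equal to the alphabet size, so a length-n sequence over an alphabet of size sigma fits in roughly n log sigma bits. The `encode`/`decode` names are ours, not the paper's.

```python
def encode(seq, sigma):
    """Pack a sequence over {0, ..., sigma-1} into one integer.

    The sequence is read as the digits of a base-sigma number (least
    significant digit first), so n symbols always fit in about
    n * log2(sigma) bits, as in Lemma 2.3.
    """
    code = 0
    for symbol in reversed(seq):
        assert 0 <= symbol < sigma
        code = code * sigma + symbol
    return code


def decode(code, sigma, n):
    """Recover the n symbols by repeated division (inverse of encode)."""
    seq = []
    for _ in range(n):
        seq.append(code % sigma)
        code //= sigma
    return seq
```

For the ternary alphabets used later in the paper this is the reason a sequence of n values costs about n log 3 bits rather than 2n bits.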
3 Warmup
This section presents, as a warmup, a distance labeling scheme which does not achieve the strongest combination of label size and decoding time, but which uses some of the techniques that we will employ later to achieve our results. For nodes , define
Note that the triangle inequality entails that
In particular, whenever are adjacent.
Given a path of nodes in , the telescoping property of values means that
Since and are adjacent, we can encode the values above as a table with entries, in which each entry is an element from the alphabet with values. Using Lemma 2.3 we can encode this table with bits. Note that we can compute from by adding a prefix sum of the sequence of values:
The Hamiltonian number of is the number of edges of a Hamiltonian walk in , i.e. a closed walk of minimal length (counted without weights) that visits every node in . It is well-known that , the first inequality being an equality iff is Hamiltonian, and the latter being an equality iff is a tree (in which case the Hamiltonian walk is an Euler tour); see [15, 37].
Consider a Hamiltonian walk of length . Given nodes from , we can find such that and . Without loss of generality we can assume that . If , we can compute as the sum of at most values:
If, on the other hand, , then we can compute as the sum of at most values:
where we have counted indices modulo in the last expression. This leads to the following distance labeling scheme. For each node in , assign a label consisting of

a number such that ; and

the values for .
From the above discussion it follows that the labels and for any two nodes are sufficient to compute .
We can encode with bits using Lemma 2.3. If is Hamiltonian, this immediately gives a labeling scheme of size . In the general case, we get size , which for matches Winkler’s [65] result when disregarding second order terms. Theorem 4.1 in the next section shows that it is possible to obtain labels of size even in the general case. Theorem 5.3 in the section that follows shows that we can obtain constant time decoding with extra space.
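The warmup scheme can be sketched end to end for unweighted graphs. The closed walk below is an Euler tour of a DFS spanning tree (length at most 2(n-1) edges), each label stores a position on the walk plus all telescoping delta values with respect to that node, and a query sums deltas between the two positions. For brevity the sketch always walks forward instead of choosing the shorter of the two directions around the walk, so it may sum up to h rather than h/2 values; names such as `euler_walk` are ours, not the paper's.

```python
from collections import deque


def bfs_distances(adj, source):
    """Hop distances from source in an unweighted graph (dict: node -> nbrs)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


def euler_walk(adj, root):
    """Closed walk visiting every node: an Euler tour of a DFS spanning tree."""
    walk, seen = [root], {root}

    def dfs(u):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                walk.append(v)
                dfs(v)
                walk.append(u)

    dfs(root)
    return walk  # starts and ends at root; at most 2(n-1) edges


def make_labels(adj, root=0):
    """Label of u: (first position of u on the walk, delta values w.r.t. u)."""
    walk = euler_walk(adj, root)
    labels = {}
    for u in adj:
        d = bfs_distances(adj, u)
        deltas = [d[walk[r + 1]] - d[walk[r]] for r in range(len(walk) - 1)]
        labels[u] = (walk.index(u), deltas)
    return labels


def query(label_u, label_v):
    """Distance from the two labels alone: telescoping sum along the walk."""
    (i, du), (j, dv) = label_u, label_v
    if i > j:  # make j the later position and use that node's deltas
        (i, du), (j, dv) = (j, dv), (i, du)
    # dv telescopes distances to the node at position j, which are 0 there,
    # so minus the partial sum recovers the distance from position i.
    return -sum(dv[i:j])
```

Each delta lies in {-1, 0, +1}, which is what makes the Lemma 2.3 encoding with roughly log 3 bits per walk edge possible.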
4 A scheme of size
We now show how to construct a distance labeling scheme of size .
First, we recall the heavylight decomposition of trees [57]. Let be a rooted tree. The nodes of are classified as either heavy or light as follows. The root of is light. For each nonleaf node , pick one child where is maximal among the children of and classify it as heavy; classify the other children of as light. The apex of a node is the nearest light ancestor of . By removing the edges between light nodes and their parents, is divided into a collection of heavy paths. Any given node has at most light ancestors (see [57]), so the path from the root to goes through at most heavy paths.
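The classification step of the heavy-light decomposition can be sketched as follows; the helper names are ours, and the input convention (a `children` map for the rooted tree) is an assumption for illustration.

```python
def classify_heavy_light(children, root):
    """Heavy-light decomposition of a rooted tree.

    For every non-leaf node, the child with the largest subtree is heavy;
    every other child, and the root itself, is light.  `children` maps
    each node to the list of its children.  Returns the set of light nodes.
    """
    size = {}

    def subtree_size(u):
        size[u] = 1 + sum(subtree_size(c) for c in children[u])
        return size[u]

    subtree_size(root)
    light = {root}
    for u, kids in children.items():
        if kids:
            heavy_child = max(kids, key=lambda c: size[c])
            light.update(c for c in kids if c != heavy_child)
    return light
```

Since a light node's subtree has at most half the size of its parent's subtree, any root-to-node path passes through O(log n) light nodes, which bounds the number of heavy paths a label must describe.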
Now, enumerate the nodes in in a depthfirst manner where heavy children are visited first. Denote the number of a node by . Note that nodes on a heavy path will have numbers in consecutive order; in particular, the root node will have number , and the nodes on its heavy path will have numbers . Assign to each node a label consisting of the sequence of values of its first and last ancestor on each heavy path, ordered from the top of the tree and down to . Note that the first ancestor on a heavy path will be the apex of that heavy path and will be light, whereas the last ancestor on a heavy path will be the parent of the apex of the subsequent heavy path. This construction is similar to the one used in [6] for nearest common ancestor (NCA) labeling schemes, although with larger sublabels. Indeed, the label is a sequence of at most numbers from . We can encode this sequence with bits.
Suppose that the node has label , where and and where are the numbers of the first and last ancestor, respectively, on the ’th heavy path visited on the path from the root to . Since nodes on heavy paths are consecutively enumerated, it follows that the nodes on the path from the root to are enumerated
where duplicates may occur in the cases where , which happens when the first and last ancestor on a heavy path coincide.
In addition to the label , we also store the label consisting of the sequence of distances and . This label is a sequence of at most numbers smaller than , and hence we can encode with bits. Combined, and can be encoded with bits.
Now consider a connected graph with shortestpath tree rooted at some node . Using the above enumeration of nodes, we can construct a distance labeling scheme in the same manner as in Section 3, except that instead of using a Hamiltonian path, we use the enumeration of nodes in from above, and we save only value between nodes and their parents, using bits due to Lemma 2.3. More specifically, for each node , we assign a label consisting of

the labels and as described above; and

the values for all with .
We can encode the above with bits.
Given nodes , either will contain or will contain . Without loss of generality, we may assume that contains . Let denote the nearest common ancestor of and . Note that must be the last ancestor of either or on some heavy path, meaning that appears in either or . By construction of the depth-first search, a node on the path from (but not including) to (and including) will have a number that satisfies the requirements to be stored in . Thus, must, in fact, contain values for all nodes in .
Next, note that, since is a shortestpath tree, . Now, if appears in , we can obtain directly from ; else, must appear in , and we can then obtain from and compute . In either case, we can now compute the distance in between and as
The label of contains all the needed values, and and combined allow us to determine the numbers of the nodes on , so that we know exactly which values from ’s label to pick out. Thus we have proved:
Theorem 4.1.
There exists a distance labeling scheme for graphs with label size .
This gives us the first row of Table 3. To obtain the second row, we encode the values with Lemma 2.2. Doing this we can access each value in constant time and simply traverse in time the path from to , adding values along the way. Note, however, that Lemma 2.2 only applies for . Saving the values in a prefix sum structure as described in Lemma 2.1, we can compute the sum using lookups. The next section describes how we can avoid spending time (or more) on this, while still keeping the same label size.
For unweighted (), bipartite graphs, values between adjacent nodes can never be , which means that we only need to consider two rather than three possible values. Thus, we get label size instead in this case. We shall give no further mention to this in the following.
5 Constant query time
Let be any rooted spanning tree of the connected graph with nodes. We create an edgepartition of into rooted subtrees, called micro trees. Each micro tree has at most edges, and the number of micro trees is . We later choose the value of . For completeness we give a proof (Lemma B.1) in the appendix of the existence of such a construction. Observe that the collection forms a partition of the nodes of . As the parent relationship in coincides with the one of , we have for all .
For every node , we denote by the unique index such that . For a node of we let , and for let .
Define the macro tree to have node set and an edge between and for all .
By construction, has nodes.
Our labeling scheme will compute the distance from to as
The first addend, , is saved as part of ’s label using bits. The second addend can be computed as a sum of values for nodes in the macro tree and is hence referred to as the macro sum. The third addend can be computed as a sum of values for nodes inside ’s micro tree and is hence referred to as the micro sum. The next two sections explain how to create data structures that allow us to compute these values in constant time.
5.1 Macro sum
Consider the macro tree with nodes. As mentioned in Section 3 there exists a Hamiltonian walk of length , where we can assume that . Given nodes , consider a path in along such a Hamiltonian walk from to . This is a subpath of the Hamiltonian walk, where is chosen such that . Note that
Since each edge in connects two nodes that belong to the same micro tree, and the distance within each micro tree is at most , we have that for all . Using Lemma 2.1 we can store these values in a data structure, , of size such that prefix sums can be computed in constant time. This data structure is stored in ’s label. An index with is stored in ’s label using bits. These two pieces of information combined allow us to compute for all .
Label summary: For a (preselected) Hamiltonian walk in , we store in the label of each node a data structure of size such that prefix sums in the form can be computed in constant time. In addition, we store in the label of an index such that , which requires bits.
5.2 Micro sum
For any node , define
Note that, for a node , is the sum of the values for all nodes lying on the path from to . Each of these values is a number in .
For each , order the nodes in in any order. For each node and index , let , where is the ordered sequence of nodes from . We will construct our labels such that ’s label stores for half of the total set of delta values (we will see how in the next section), and such that ’s label stores information about for which ’s the node lies on the path between and . With these two pieces of information, we can compute as described above.
We define . The sequence consists of values from and can be encoded with bits. To store this more compactly, we will use an injective function, as described in Lemma 2.3, that maps every sequence of integers from into a bit string of length . Denote by such an encoding of the sequence to a bit string of length , as
In order to decode the encoded version of in constant time, we construct a tabulated inverse function . From the input and output sizes, we see that we need a table with entries, for each of the possible micro tree sizes, and each result entry having bits, giving a total space of bits.
Let . Let be the bitwise AND operator. In node ’s label we save the bit string such that gives an integer sequence identical to , except that the integer has been replaced by for all that are not an ancestor of . Given we can now compute the micro sum as the sum of integers in the sequence . We will create a tabulated function that sums these integers, . is given a sequence of up to values in , and the output is a number in . We can thus tabulate as a table with entries each of size , giving a total space of .
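A sketch of such a tabulated summing function for the unweighted case, where each encoded value lies in {-1, 0, +1} and is stored shifted to the symbol set {0, 1, 2}. The dictionary below plays the role of the global table, and all names are ours; on the word RAM the sequence code is a machine word, so one array lookup replaces the whole summation.

```python
def build_sum_table(k, sigma=3):
    """Tabulate the sum of every length-k sequence over {0, ..., sigma-1}.

    Stored symbol t represents the value t - (sigma // 2); for sigma = 3
    the symbols 0, 1, 2 represent -1, 0, +1.  One table lookup then
    replaces k additions, giving constant-time micro sums.
    """
    shift = sigma // 2
    table = {}
    for code in range(sigma ** k):
        c, total = code, 0
        for _ in range(k):
            total += (c % sigma) - shift
            c //= sigma
        table[code] = total
    return table
```

The table has sigma^k entries; choosing k around a small fraction of log n, as in the text, keeps its total size a lower-order term.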
Both functions, and , have been tabulated in the above. A lookup in a tabulated function can be done in constant time on the RAM as long as both input and output can be represented by bits. We can achieve this by setting
for a constant . To see this, note that the maximum of the four input and output values above is . Using the above inequality then gives .
The tables for the tabulated functions are the same for all nodes. Hence, in principle, assuming an upper bound for is known, we could encode the two tables in global memory, not using space in the labels. However, as we will see, the tables take no more space than the prefix table , so we can just as well encode them into the labels. Doing that we use an additional for the table and for the table. Using that and substituting for the above expression then gives, after a few reductions, that the extra space used is no more than bits. Since the prefix table uses at least bits, we see that the added space does not (asymptotically) change the total space usage, as long as we choose .
Label summary: We will construct the labels such that either ’s label contains or vice versa (we shall see how in the next section). Using the tabulated function , the bits in can be extracted in constant time from ’s label. Using from ’s label and the tabulated function , we can then compute in constant time. The total space used for all this is no more than .
5.3 Storing and extracting the deltas
Let the micro trees in be given in a specific order: . Let denote the binary string composed of the concatenation of each string in the order .
Let be the length in bits of . Let be the position in the string where the substring starts. E.g., is the first bit of , the first bit of , and so on. According to Lemma 2.3 we have . Observe that the position only depends on and and not on .
We denote by and the starting and ending positions of the substring in . More precisely, and , so that . For each node we use bits to store and in its label.
For a node we will only save approximately half of , in a table . will start with and the code for the following micro trees in the given circular order until in total has at least values, but as few as possible. In other words, , where the indexes may wrap to after reaching the largest index if . Let .
In a node ’s label we save and using bits. Having those values, we know which values from are saved in ’s label as well as their positions in . Furthermore, we know the position of the values of ’s own micro tree in . We will need to extract at most consecutive bits from in one query. On the word RAM this can be done in constant time.
Proposition 5.1.
Let be two nodes of . Then,

; and

either is part of or is part of .
Proof.
Let be the subset of encoded in . We have:
which proves (i). Part (ii) follows from the fact that saves at least half of the ’s in a cyclic order. If is not included here, then must be included in the values saved by . ∎
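The cyclic covering argument behind part (ii) can be checked mechanically. The sketch below (with invented names) picks, for each starting micro tree, the minimal cyclic run of blocks whose total length covers at least half of the concatenated string; the symmetry of part (ii) then holds for every pair of starting blocks.

```python
def stored_blocks(start, lengths):
    """Minimal cyclic run of micro-tree blocks, beginning with the node's
    own block `start`, whose total length is at least half of sum(lengths).

    Models which encoded micro trees a node stores in its label: walk
    forward cyclically from the node's own block until at least half of
    the concatenated string is covered, but no further.
    """
    total, covered, i, picked = sum(lengths), 0, start, []
    while covered * 2 < total:
        picked.append(i)
        covered += lengths[i]
        i = (i + 1) % len(lengths)
    return picked
```

Because the complement of one node's run is shorter than half the total, any block outside it starts a run that must reach back around to cover it, which is exactly the "either/or" of part (ii).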
5.4 Summary
The label of is composed of the following items.

The values , , , , , and : .

A prefix table, , for the values in the macro tree: .

The table : .

Global tables, and of size .
Note that and the global tables are common to all the nodes. In addition we may need to use bits to save the start position in the label for the above constant number of sublabels.
Lemma 5.2.
Every label has length at most bits.
Let denote the distance returned by the decoder given the labels of and of in . It is defined by:
: If then and Else return Return
Theorem 5.3.
There exists a distance labeling scheme for graphs with edge weights in using labels of length bits and constant decoding time.
6 Approximate distances
By considering only a subset of nodes from and using the previous techniques, it is possible to create an approximation scheme where the label size is determined by a smaller number of nodes but with larger weights between adjacent nodes. We leave the details for Section C.1 and present here only the result.
Theorem 6.1.
There exists a additive distance labeling scheme for graphs with nodes and edge weights in using labels of size .
Another way to achieve an approximation scheme is to use a smaller set of weights while keeping the accumulated error under control. This leads to the following result whose proof can be seen in Section C.2.
Theorem 6.2.
For any there exists a additive distance labeling scheme for graphs with nodes and edge weights in using labels of size .
One instance of Theorem 6.2 is , which gives a 1additive distance labeling scheme of size . For we get a additive distance labeling scheme of size . For constant the above technique also applies to our constant time decoding results. For unweighted graphs this implies that we can have labels of size with a 1additive error and constant decoding time.
By combining the above two theorems, we obtain the theorem below; see Section C.3.
Theorem 6.3.
For any and there exists a additive distance labeling scheme for graphs with nodes and edge weights in using labels of size bits.
References
 [1] S. Abiteboul, S. Alstrup, H. Kaplan, T. Milo, and T. Rauhe. Compact labeling scheme for ancestor queries. SIAM J. Comput., 35(6):1295–1309, 2006.
 [2] S. Abiteboul, H. Kaplan, and T. Milo. Compact labeling schemes for ancestor queries. In Proc. of the 12th annual ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 547–556, 2001.
 [3] T. Akiba, Y. Iwata, and Y. Yoshida. Fast exact shortest-path distance queries on large networks by pruned landmark labeling. In ACM International Conference on Management of Data (SIGMOD), pages 349–360, 2013.
 [4] S. Alstrup, P. Bille, and T. Rauhe. Labeling schemes for small distances in trees. In Proc. of the 14th annual ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 689–698, 2003.
 [5] S. Alstrup, P. Bille, and T. Rauhe. Labeling schemes for small distances in trees. SIAM J. Discrete Math., 19(2):448–462, 2005. See also SODA’03.
 [6] S. Alstrup, E. B. Halvorsen, and K. G. Larsen. Near-optimal labeling schemes for nearest common ancestors. In Proc. of the 25th annual ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 972–982, 2014.
 [7] S. Alstrup, H. Kaplan, M. Thorup, and U. Zwick. Adjacency labeling schemes and induced-universal graphs. In Proc. of the 47th Annual ACM Symp. on Theory of Computing (STOC), 2015. To appear.
 [8] S. Alstrup and T. Rauhe. Improved labeling schemes for ancestor queries. In Proc. of the 13th annual ACM-SIAM Symp. on Discrete Algorithms (SODA), 2002.
 [9] S. Alstrup and T. Rauhe. Small induced-universal graphs and compact implicit graph representations. In Proc. of the 43rd annual IEEE Symp. on Foundations of Computer Science (FOCS), pages 53–62, 2002.
 [10] F. Bazzaro and C. Gavoille. Localized and compact datastructure for comparability graphs. Discrete Mathematics, 309(11):3465–3484, June 2009.
 [11] B. Bollobás, D. Coppersmith, and M. Elkin. Sparse distance preservers and additive spanners. SIAM J. Discrete Math., 19(4):1029–1055, 2005. See also SODA’03.
 [12] N. Bonichon, C. Gavoille, and A. Labourel. Short labels by traversal and jumping. In Structural Information and Communication Complexity, pages 143–156. Springer, 2006.
 [13] M. A. Breuer. Coding the vertexes of a graph. IEEE Trans. on Information Theory, IT–12:148–153, 1966.
 [14] M. A. Breuer and J. Folkman. An unexpected result on coding vertices of a graph. J. of Mathematical Analysis and Applications, 20:583–600, 1967.
 [15] G. Chartrand, T. Thomas, P. Zhang, and Varaporn Saenpholphat. A new look at Hamiltonian walks. Bull. Inst. Combin. Appl., 42:37–52, 2004.
 [16] J. Cheng, S. Huang, H. Wu, and A. Wai-Chee Fu. TF-label: a topological-folding labeling scheme for reachability querying in a large graph. In ACM International Conference on Management of Data (SIGMOD), pages 193–204, 2013.
 [17] V. D. Chepoi, F. F. Dragan, B. Estellon, M. Habib, and Y. Vaxès. Diameters, centers, and approximating trees of delta-hyperbolic geodesic spaces and graphs. In 24th Annual ACM Symp. on Computational Geometry, pages 59–68, 2008.
 [18] V. D. Chepoi, F. F. Dragan, and Y. Vaxès. Distance and routing labeling schemes for non-positively curved plane graphs. J. of Algorithms, 61(2):60–88, 2006.
 [19] F. R. K. Chung. Universal graphs and induced-universal graphs. J. of Graph Theory, 14(4):443–454, 1990.
 [20] E. Cohen, H. Kaplan, and T. Milo. Labeling dynamic XML trees. SIAM J. Comput., 39(5):2048–2074, February 2010.
 [21] B. Courcelle and R. Vanicat. Query efficient implementation of graphs of bounded clique-width. Discrete Applied Mathematics, 131:129–150, 2003.
 [22] L. J. Cowen. Compact routing with minimum stretch. J. of Algorithms, 38:170–183, 2001.
 [23] D. Delling, A. V. Goldberg, R. Savchenko, and R. Fonseca Werneck. Hub labels: Theory and practice. In 13th International Symp. on Experimental Algorithms, pages 259–270, 2014.
 [24] Y. Dodis, M. Pǎtraşcu, and M. Thorup. Changing base without losing space. In Proc. of the 42nd Annual ACM Symp. on Theory of Computing (STOC), pages 593–602, 2010.
 [25] T. Eilam, C. Gavoille, and D. Peleg. Compact routing schemes with low stretch factor. J. of Algorithms, 46(2):97–114, 2003.
 [26] A. Farzan and J. I. Munro. Succinct encoding of arbitrary graphs. Theoretical Computer Science, 513:38–52, 2013.
 [27] A. Farzan and J. I. Munro. A uniform paradigm to succinctly encode various families of trees. Algorithmica, 68(1):16–40, 2014.
 [28] P. Ferragina, I. Nitto, and R. Venturini. On compact representations of all-pairs-shortest-path distance matrices. Theor. Comput. Sci., 411(34–36):3293–3300, July 2010.
 [29] P. Fraigniaud and A. Korman. On randomized representations of graphs using short labels. In Proc. of the 21st Annual Symp. on Parallelism in Algorithms and Architectures, pages 131–137, 2009.
 [30] P. Fraigniaud and A. Korman. Compact ancestry labeling schemes for XML trees. In Proc. of the 21st annual ACM-SIAM Symp. on Discrete Algorithms (SODA), pages 458–466, 2010.
 [31] P. Fraigniaud and A. Korman. An optimal ancestry scheme and small universal posets. In Proc. of the 42nd ACM Symp. on Theory of computing (STOC), pages 611–620, 2010.
 [32] C. Gavoille, M. Katz, N. Katz, C. Paul, and D. Peleg. Approximate distance labeling schemes. In Proc. of the 9th annual European Symp. on Algorithms, pages 476–488, 2001.
 [33] C. Gavoille and O. Ly. Distance labeling in hyperbolic graphs. In 16th Annual International Symp. on Algorithms and Computation, pages 1071–1079, 2005.
 [34] C. Gavoille and C. Paul. Distance labeling scheme and split decomposition. Discrete Mathematics, 273(1–3):115–130, 2003.
 [35] C. Gavoille and C. Paul. Optimal distance labeling for interval graphs and related graphs families. SIAM J. on Discrete Mathematics, 22(3):1239–1258, July 2008.
 [36] C. Gavoille, D. Peleg, S. Pérennes, and R. Raz. Distance labeling in graphs. J. of Algorithms, 53(1):85–112, 2004. See also SODA’01.
 [37] S. Goodman and S. Hedetniemi. On the Hamiltonian completion problem. In Proc. 1973 Capital Conf. on Graph Theory and Combinatorics, pages 263–272, 1974.
 [38] R. L. Graham and H. O. Pollak. On embedding graphs in squashed cubes. In Lecture Notes in Mathematics, volume 303 of Proc. of a conference held at Western Michigan University. SpringerVerlag, 1972.
 [39] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In 44th Symp. on Foundations of Computer Science (FOCS), pages 534–543, 2003.
 [40]