
# The Analysis of Kademlia for random IDs

Xing Shi Cai         Luc Devroye
School of Computer Science, McGill University, Montreal, Canada,
xingshi.cai@mail.mcgill.ca          lucdevroye@gmail.com
Research of the authors was supported by NSERC.
###### Abstract

Kademlia [7] is the de facto standard searching algorithm for P2P (peer-to-peer) networks on the Internet. In our earlier work [2], we introduced two slightly different models for Kademlia and studied how many steps it takes to search for a target node by using Kademlia's searching algorithm. The first model, in which nodes of the network are labeled with deterministic ids, was discussed in that paper. The second one, in which nodes are labeled with random ids, which we call the Random id Model, was only briefly mentioned. Refined results with detailed proofs for this model are given in this paper. Our analysis shows that with high probability it takes about $c\log n$ steps to locate any node, where $n$ is the total number of nodes in the network and $c$ is a constant that does not depend on $n$.

## 1 Introduction to Kademlia

A P2P (peer-to-peer) network [11] is a decentralized computer network which allows participating computers (nodes) to share resources. Some P2P networks have millions of live nodes. To allow searching for a particular node without introducing bottlenecks in the network, a group of algorithms called DHTs (Distributed Hash Tables) [1] was invented in the early 2000s, including Plaxton's algorithm [8], Pastry [10], CAN [9], Chord [13], Koorde [6], Tapestry [15], and Kademlia [7]. Among them, Kademlia is the most widely used in today's Internet.

In Kademlia, each node is assigned an id selected uniformly at random from $\{0,1\}^d$ (the id space), where $d$ is usually $128$ [12] or $160$ [3]. The distance between two nodes is calculated by performing the bitwise exclusive or (XOR) operation over their ids and taking the result as a binary number. (In this work distance and closeness always refer to the XOR distance between ids.)

Roughly speaking, a Kademlia node keeps a table of a few other nodes (neighbors) whose distances to itself are sufficiently diverse. So when a node searches for an id, it always has some neighbors close to its target. By querying these neighbors, and these neighbors' neighbors, and so on, the node that is closest to the target id in the network will be found eventually. Other DHTs work in similar ways. The differences mainly come from how distance is defined and how neighbors are chosen. For a more detailed survey of DHTs, see [1].

## 2 The Random ID Model

This section briefly reviews the Random id Model for Kademlia defined in [2]. Let $d$ be the length of binary ids, which are chosen uniformly at random from $\{0,1\}^d$ without replacement. Consider $n$ nodes indexed by $1,\ldots,n$. Let $X_i$ be the id of node $i$.

Given two ids $x$ and $y$, their XOR distance is defined by

$$\delta(x,y)=\sum_{j=1}^{d}(x_j\oplus y_j)\times 2^{d-j},$$

where $\oplus$ is the XOR operator

$$u\oplus v=\begin{cases}1&\text{if }u\ne v,\\0&\text{otherwise.}\end{cases}$$

Let $\ell(x,y)$ be the length of the common prefix of $x$ and $y$. The nodes can be partitioned into $d+1$ parts by their common prefix length with $x$ via

$$S(x,j)=\{i:1\le i\le n,\ \ell(x,X_i)=j\},\qquad 0\le j\le d.$$
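As an illustration, the two quantities above can be computed directly from bit tuples. The following sketch is ours, not part of the model's specification, and the function names are hypothetical:

```python
# Illustrative sketch of the two definitions above; ids are d-bit
# tuples such as (1, 0, 1, 1).

def xor_distance(x, y):
    """delta(x, y) = sum over j of (x_j XOR y_j) * 2^(d - j)."""
    d = len(x)
    return sum((xj ^ yj) << (d - 1 - j) for j, (xj, yj) in enumerate(zip(x, y)))

def common_prefix_length(x, y):
    """l(x, y): number of leading bits on which x and y agree."""
    n = 0
    for xj, yj in zip(x, y):
        if xj != yj:
            break
        n += 1
    return n

x, y = (1, 0, 1, 1), (1, 0, 0, 1)
assert xor_distance(x, y) == 2            # the ids differ only in the third bit
assert common_prefix_length(x, y) == 2
# the two notions are linked: delta(x, y) < 2^(d - l(x, y))
assert xor_distance(x, y) < 2 ** (4 - common_prefix_length(x, y))
```

The last assertion records why prefix lengths alone suffice for comparing distances later in the paper.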

For each node $i$, $d$ tables (buckets) of size at most $k$ are kept, where $k$ is a fixed positive integer. Buckets are indexed by $0,\ldots,d-1$. The bucket $j$ is filled with at most $k$ indices drawn uniformly at random from $S(X_i,j)$ without replacement. Note that the first $j$ bits of $X_s$, if $s\in S(X_i,j)$, agree with the first $j$ bits of $X_i$, but the $(j+1)$-th bit is different.

Searching for an id $y$ initiated at node $i$ proceeds as follows. Given that $\ell(y,X_i)=j$, any node closer to $y$ than $X_i$ can only be in $S(X_i,j)$. Thus, all indices from the bucket $j$ of node $i$ are retrieved, say $i_1,\ldots,i_k$. From them, the one having shortest distance to $y$ is selected as $i^*$. (In fact, any selection algorithm would be sufficient for the results of this paper.) Note that

$$\ell(y,X_{i^*})=\max_{1\le r\le k}\ell(y,X_{i_r}).$$

Thus the choice of $i^*$ does not depend on the exact distances from $X_{i_1},\ldots,X_{i_k}$ to $y$. Therefore, instead of the XOR distance, only the length of the common prefix is needed in the following analysis of searching.

The search halts if $X_{i^*}=y$ or if the bucket is empty. In the latter case, $X_i$ is closest to $y$ among all nodes. Otherwise we continue from $i^*$. Since $\ell(y,X_{i^*})>\ell(y,X_i)$, the maximal number of steps before halting is bounded by $d$. Let $T_1$ be the number of steps before halting in the search of $y$ when started from node $1$ (the searching time). Then $T_1\le d$.
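The searching procedure can be sketched end to end as a small simulation. This is an illustrative sketch of ours, not the authors' code; names and parameters are assumptions:

```python
# A minimal simulation of the model: n nodes with distinct random d-bit
# ids held as ints, each node keeps, for every prefix length j, a bucket
# of at most k uniform samples from S(X_i, j), and the search hops to
# the bucket entry sharing the longest common prefix with the target.
import random

def prefix_len(x, z, d):
    for j in range(d):
        if ((x >> (d - 1 - j)) & 1) != ((z >> (d - 1 - j)) & 1):
            return j
    return d

def build_buckets(ids, d, k, rng):
    buckets = []
    for x in ids:
        groups = {}
        for s, z in enumerate(ids):
            j = prefix_len(x, z, d)
            if j < d:                      # skip the node itself
                groups.setdefault(j, []).append(s)
        buckets.append({j: rng.sample(g, min(k, len(g)))
                        for j, g in groups.items()})
    return buckets

def search(y, start, ids, buckets, d):
    i, steps = start, 0
    while True:
        j = prefix_len(y, ids[i], d)
        if j == d or j not in buckets[i]:
            return steps                   # reached y, or i is closest
        i = max(buckets[i][j], key=lambda s: prefix_len(y, ids[s], d))
        steps += 1

rng = random.Random(1)
n, d, k = 256, 30, 3
ids = rng.sample(range(2 ** d), n)         # ids without replacement
buckets = build_buckets(ids, d, k, rng)
steps = search(ids[0], 1, ids, buckets, d)
assert 0 <= steps <= d
```

Each hop strictly increases the common prefix length with the target, which is the monotonicity underlying the bound $T_1\le d$.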

Treating the ids as strings consisting of zeros and ones, they can be represented by a tree data structure called a trie [14]. The $S(y,j)$'s can be viewed as subtrees. Filling buckets is equivalent to choosing at most $k$ leaves from each of these subtrees. Fig. 1 gives an example of an id trie.

## 3 Main Results

The structure of the model is such that nothing changes if $X_1,\ldots,X_n$ are replaced by their coordinate-wise XOR with a given vector $y$. This is a mere rotation of the hypercube. Thus, it can be assumed without loss of generality that $y=(1,\ldots,1)$, the rightmost branch in the id trie.

If $d=O(\log n)$, the searching time is quite stable and acceptable, which is undoubtedly a contributing factor in Kademlia's success. If $d\gg\log n$, then $d$ is not a useful upper bound of the searching time any more. However, in some probabilistic sense, $T_1$ can be much smaller than $d$: it can be controlled by the parameter $k$, which measures the amount of storage consumed by each node. The aim of this work is to investigate finer properties of these random variables. In particular, the following theorem is proved:

###### Theorem 1.

Assume that $2^d\ge n$, so that $n$ distinct ids exist. Let $k$ be a fixed positive integer. Let $\xrightarrow{p}$ denote convergence in probability. Then

$$\frac{T_1}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty,\qquad\frac{\mathbf{E}T_1}{\log_2 n}\to\frac{1}{\mu_k},\quad\text{as }n\to\infty,$$

where $\mu_k$ is a function of $k$ only:

$$\mu_k=\sum_{j=1}^{\infty}1-\left(1-\frac{1}{2^{j-1}}\right)^k.$$

In particular, $\mu_1=2$.
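The series defining $\mu_k$ has geometrically decaying terms, so it is easy to evaluate by truncation. The following numerical check is an illustration of ours, not part of the paper:

```python
# Numerical evaluation of mu_k: mu_k = sum_{j >= 1} 1 - (1 - 2^(1-j))^k,
# truncated once the geometrically decaying terms are negligible.
def mu(k, terms=200):
    return sum(1 - (1 - 2.0 ** (1 - j)) ** k for j in range(1, terms + 1))

assert abs(mu(1) - 2.0) < 1e-9        # k = 1: the series sums to exactly 2
assert abs(mu(2) - 8.0 / 3.0) < 1e-9  # k = 2: 2*2 - 4/3 = 8/3
assert mu(4) > mu(2) > mu(1)          # bigger buckets give bigger mu_k,
                                      # i.e. fewer search steps per log2(n)
```

For $k=1$ the search therefore takes about $\log_2 n/2$ steps, and increasing $k$ trades storage for speed.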

In the rest of the paper, we first show that once the search reaches a node that shares a common prefix of length about $\log_2 n$ with $y$, the search halts in $o(\log n)$ steps. Thus it suffices to prove Theorem 1 for the time that it takes for this event to happen. Then we show that the id trie is well balanced with high probability. Thus when $n$ is a power of $2$, we can couple the search in the original trie with a search in a trie that is a complete binary tree. This proves the theorem for this special case. After that, we give a sketch of how to deal with general $n$. At the end we briefly summarize some implications of the theorem.

## 4 The Tail of the Search Time

To keep the notation simple, let $m=\log_2 n$ and note that $m$ is not necessarily integer-valued. In Section 7, we sketch what to do when $n$ is not a power of two. Also, for analytic purposes, define

$$J=\min\left\{j:\frac{n}{2^{j+1}}\le m^4\right\}.$$

Since $n/2^{J+1}\le m^4$ and $n/2^{J}>m^4$,

$$J=m-4\log_2 m+O(1).$$

The importance of $J$ follows from the fact that once the search reaches a node $i$ with $\ell(y,X_i)\ge J$, it takes very few steps to finish. Let $T_1'$ be the number of search steps that depart from a node in the set $S(y,j)$ for some $j<J$, with the very first node in the search being node $1$.

###### Lemma 1.

Theorem 1 follows if

$$\frac{T_1'}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty.$$
###### Proof.

Let $T_1''=T_1-T_1'$. $T_1''$ counts steps of the search departing from a node in $S(y,j)$ with $j\ge J$. Thus

$$T_1''\le\sum_{j=J}^{d-1}\mathbf{1}\left[|S(y,j)|>0\right].$$

Noting that

$$\mathbf{E}|S(y,j)|=\frac{n}{2^{j+1}}, \tag{3}$$

by linearity of expectation,

$$\begin{aligned}\mathbf{E}T_1''&\le\sum_{j=J}^{d-1}\mathbf{P}\{|S(y,j)|\ge1\}\le\sum_{j=J}^{d-1}\min\{\mathbf{E}|S(y,j)|,1\}\\&\le\sum_{j=J}^{d-1}\min\left\{\frac{n}{2^{j+1}},1\right\}\quad\text{(by (3))}\\&\le\sum_{j=J}^{d-1}\mathbf{1}\left[2^{j+1}\le n\right]+\sum_{j:2^{j+1}>n}\frac{n}{2^{j+1}}\le(\log_2 n-J)+2=O(\log\log n).\end{aligned}$$

Thus, for all fixed $\epsilon>0$, by Markov's inequality,

$$\mathbf{P}\{T_1''\ge\epsilon\log_2 n\}\le\frac{\mathbf{E}T_1''}{\epsilon\log_2 n}=o(1).$$

Therefore $T_1''/\log_2 n\xrightarrow{p}0$, and $T_1/\log_2 n\xrightarrow{p}1/\mu_k$ follows. For the expectation, note that

$$\frac{\mathbf{E}T_1'}{\log_2 n}\to\frac{1}{\mu_k},\quad\text{as }n\to\infty,$$

by the lemma's assumption and the fact that $T_1'\le J\le\log_2 n$. ∎

## 5 Good Tries and Bad Tries

Since the tail of the search does not matter, define a new partition of all nodes by merging the subtrees $S(y,j)$ for $j\ge J$ as follows:

$$S_j=\begin{cases}S(y,j)&\text{if }0\le j<J,\\\bigcup_{r=J}^{d}S(y,r)&\text{if }j=J.\end{cases}$$

Let $N_j=|S_j|$. It follows from (3) that

$$\mathbf{E}N_j=\begin{cases}n/2^{j+1}&\text{if }0\le j<J,\\n/2^{J}&\text{if }j=J,\end{cases}$$

or simply $\mathbf{E}N_j=n/2^{(j+1)\wedge J}$, where $a\wedge b$ denotes $\min\{a,b\}$. Note that $N_j$ is hypergeometric with parameters

$$\left(n,\ \frac{2^d}{2^{(j+1)\wedge J}},\ 2^d-\frac{2^d}{2^{(j+1)\wedge J}}\right),$$

i.e., it corresponds to the selection of $n$ balls without replacement from an urn of $2^d$ balls of which $2^d/2^{(j+1)\wedge J}$ are white [5, chap. 6.3].

The analysis of $T_1'$ can be simplified if the $N_j$'s are all close to their expectations. To be precise, let $\alpha=m^{-3/2}$ be the accuracy parameter. An id trie is good if

$$\left|N_j-\mathbf{E}N_j\right|\le\alpha\times\mathbf{E}N_j,$$

for all $0\le j\le J$. Otherwise it is called bad.

###### Lemma 2.

The probability that an id trie is bad is $o(1)$.

###### Proof.

It follows from the union bound that

$$\begin{aligned}\mathbf{P}\left\{\bigcup_{j=0}^{J}\left[|N_j-\mathbf{E}N_j|>\alpha\times\mathbf{E}N_j\right]\right\}&\le\sum_{j=0}^{J}\mathbf{P}\left\{|N_j-\mathbf{E}N_j|>\alpha\times\mathbf{E}N_j\right\}\\&\le\sum_{j=0}^{J}\frac{\operatorname{Var}(N_j)}{(\alpha\times\mathbf{E}N_j)^2}\quad\text{(by Chebyshev's inequality)}\\&\le\sum_{j=0}^{J}\frac{\mathbf{E}N_j}{(\alpha\times\mathbf{E}N_j)^2}\quad\text{($N_j$ is hypergeometric)}\\&\le\frac{1}{\alpha^2}\times\sum_{j=0}^{J}\frac{2^{j+1}}{n}\quad\text{(by the formula for $\mathbf{E}N_j$)}\\&\le m^3\times\frac{2^{J+2}}{n}=o(1).\quad\text{(since $n/2^{J}>m^4$)}\end{aligned}$$

The fact used here is that $\operatorname{Var}(N_j)\le\operatorname{Var}(B_j)$, where $B_j$ is binomial $(n,2^{-(j+1)\wedge J})$. For the binomial, $\operatorname{Var}(B_j)\le\mathbf{E}B_j=\mathbf{E}N_j$. ∎
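The variance fact used in the proof can be checked against the standard closed-form hypergeometric variance. This sketch is ours, for illustration only:

```python
# Check of the fact above: for a hypergeometric count of n draws
# without replacement from N balls of which K are white,
# Var = n*p*(1-p)*(N-n)/(N-1) <= n*p*(1-p)   (the binomial variance)
#                             <= n*p         (the mean),
# since the finite-population factor (N-n)/(N-1) is at most 1.
def hyper_var(N, K, n):
    p = K / N
    return n * p * (1 - p) * (N - n) / (N - 1)

for N, K, n in [(2 ** 12, 2 ** 7, 100), (2 ** 16, 2 ** 9, 5000)]:
    p = K / N
    assert hyper_var(N, K, n) <= n * p * (1 - p) <= n * p
```

The chain variance ≤ binomial variance ≤ mean is exactly what Chebyshev's inequality consumes in Lemma 2.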

## 6 Proof when n Is a Power of 2

In this section, $n$ is assumed to be a power of $2$, i.e., $m=\log_2 n$ is an integer. The general case is treated in the next section.

### 6.1 A Perfect Trie

Construct a coupled id trie consisting of ids $Y_1,\ldots,Y_n$ as follows. If $N_j\ge\mathbf{E}N_j$, i.e., the size of the subtree $S_j$ is at least its expectation, let $Y_i=X_i$ for the $\mathbf{E}N_j$ smallest indices $i$ in $S_j$. After this preliminary coupling, some $Y_i$'s are undefined. The indices $i$ for which $Y_i$ are undefined go into a global pool of size

$$\sum_{j=0}^{J}\max\{N_j-\mathbf{E}N_j,0\}.$$

For a good trie, the size of the pool is at most

$$\sum_{j=0}^{J}\alpha\times\mathbf{E}N_j=\alpha\times\mathbf{E}\sum_{j=0}^{J}N_j=\alpha n.$$

For a subtree $S_j$ of size $N_j<\mathbf{E}N_j$, take $\mathbf{E}N_j-N_j$ indices $i$ from the pool and assign to each a value $Y_i$ that is different from all other $Y$'s and that has $\ell(Y_i,y)\wedge J=j$. Subtrees of this new trie have fixed sizes of

$$|\{i:\ell(Y_i,y)\wedge J=j\}|=\mathbf{E}N_j=\frac{n}{2^{(j+1)\wedge J}},\qquad0\le j\le J. \tag{5}$$

A trie like this is called perfect. Indices $i$ for which $Y_i\ne X_i$ are called ghosts. Other indices are called normal.

Next, refill the buckets according to the perfect trie, but keep buckets of normal indices containing no ghosts unchanged. Observe that a search step departing at a normal index proceeds precisely the same in both tries if the bucket used at that step does not contain ghosts. Assuming that the original trie is good, the probability that a bucket that corresponds to $S_j$ for some $j<J$ contains a ghost is not more than $k\alpha$. This is because in the newly constructed perfect trie, each subtree contains no more than an $\alpha$ proportion of ghost nodes.

Let $T_1^{*}$ denote the number of search steps starting from node $1$ via nodes $i$ with $\ell(Y_i,y)<J$ in the perfect trie. Then $\mathbf{P}\{T_1^{*}\ne T_1'\}\le\mathbf{P}\{B\}$, where $B$ is the event that at least one node in the buckets encountered during a search is a ghost. Let $A$ be the event that the trie is good. It follows from Lemma 2 that

$$\mathbf{P}\{T_1^{*}\ne T_1'\}\le\mathbf{P}\{B\}\le\mathbf{P}\{B,A\}+\mathbf{P}\{A^c\}\le J\times k\alpha+o(1)=o(1).$$

Therefore, Theorem 1 follows if

$$\frac{T_1^{*}}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty.$$

### 6.2 Filling the Buckets with Replacement

To deal with the problem that buckets are filled by sampling without replacement, another coupling argument is needed. Let $p_j$ be the probability that $k$ items sampled with replacement from a set of size $n/2^{j+1}$ are not all distinct. Observe that by the union bound,

$$p_j\le\binom{k}{2}\frac{2^{j+1}}{n}\le\frac{k^2 2^{j}}{n}.$$

If $\ell(Y_i,y)=j$, then the bucket $j$ of node $i$ has at most $k$ elements drawn without replacement from

$$S=\{s:\ell(Y_s,y)\ge j+1\},\qquad0\le j<J.$$

Observe that

$$|S|=\sum_{r=j+1}^{J}\frac{n}{2^{(r+1)\wedge J}}=\frac{n}{2^{j+1}}.$$

Hence, with probability $1-p_j$, the sampling can be seen as having been carried out with replacement.

The coupling is as follows: for all $i$ with $\ell(Y_i,y)<J$ and all $j<J$, mark the bucket $j$ of node $i$ with probability $p_j$. When a bucket is marked, replace its entries with new entries drawn with replacement conditioned on the existence of at least one duplicate entry. In this way, all bucket entries are as in a sampling with replacement. Let the search time, starting still from node $1$, be denoted by $T_1^{**}$. Let $D$ be the event that during the search a marked bucket is encountered. Observe that $[T_1^{*}\ne T_1^{**}]\subseteq D$. Therefore

$$\mathbf{P}\{T_1^{*}\ne T_1^{**}\}\le\mathbf{P}\{D\}\le\sum_{j=0}^{J-1}p_j\le\sum_{j=0}^{J-1}\frac{k^2 2^{j}}{n}\le\frac{k^2 2^{J}}{n}=o(1).$$

So Theorem 1 follows if

$$\frac{T_1^{**}}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty.$$

### 6.3 Analyzing $T_1^{**}$ Using a Sum of I.I.D. Random Variables

Let $\Delta_0=\ell(Y_1,y)$. Assume that step $t\ge1$ of the search departs from node $i_t$ and reaches node $i_{t+1}$. Let $\Delta_t=\ell(Y_{i_{t+1}},y)-\ell(Y_{i_t},y)$, i.e., $\Delta_t$ represents the progress in this step. Then

$$T_1^{**}=\inf\left\{t:\sum_{s=0}^{t}\Delta_s\ge J\right\}.$$

Due to the recursive structure of a perfect trie, $\Delta_0,\Delta_1,\ldots$, although not i.i.d., should have very similar distributions. This intuition leads to the following analysis of $T_1^{**}$ by studying a sum of i.i.d. random variables.

One observation that allows us to deal with truncated versions of the $\Delta_t$'s is as follows:

###### Lemma 3.

Let $a_0,a_1,\ldots$ be a sequence of real numbers with $a_t\ge0$ for all $t$. Define

$$\bar a_t=a_t\wedge\left(c-\sum_{s=0}^{t-1}\bar a_s\right),$$

where $c$ is also a real number. Then

$$\inf\left\{t:\sum_{s=0}^{t}a_s\ge c\right\}=\inf\left\{t:\sum_{s=0}^{t}\bar a_s\ge c\right\},$$

where we define the infimum of an empty set to be $\infty$.

###### Proof.

Let $b_t=\sum_{s=0}^{t}\bar a_s$. If $c\le0$ or $a_0\ge c$, the lemma is trivially true. So we assume $0<c$ and $a_0<c$. By induction on $t$, one can show that $b_t=\left(\sum_{s=0}^{t}a_s\right)\wedge c$. Since $\bar a_0=a_0\wedge c=a_0$, we have $b_0=a_0\wedge c$, which is the induction basis. If $b_{t-1}=\left(\sum_{s=0}^{t-1}a_s\right)\wedge c$, then, since $a_t\ge0$ and $b_{t-1}\le c$,

$$b_t=b_{t-1}+a_t\wedge(c-b_{t-1})=(b_{t-1}+a_t)\wedge c=\left(\sum_{s=0}^{t}a_s\right)\wedge c.$$

Therefore $\sum_{s=0}^{t}a_s\ge c$ if and only if $\sum_{s=0}^{t}\bar a_s\ge c$. ∎
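Lemma 3 is also easy to sanity-check numerically; the following sketch of ours exercises it on random nonnegative sequences:

```python
# Numerical sanity check of Lemma 3 (illustrative): replacing a_t by
# a_t truncated at c minus the running truncated sum does not change
# the first index at which the running sum reaches c.
import random

def hitting_time(seq, c):
    total = 0
    for t, v in enumerate(seq):
        total += v
        if total >= c:
            return t
    return None                      # the infimum of the empty set

rng = random.Random(3)
for _ in range(1000):
    a = [rng.randint(0, 5) for _ in range(30)]   # nonnegative terms
    c = rng.randint(1, 40)
    a_bar, total = [], 0
    for v in a:
        w = min(v, c - total)        # the truncation of Lemma 3
        a_bar.append(w)
        total += w
    assert hitting_time(a, c) == hitting_time(a_bar, c)
```

The truncated running sum equals the original running sum capped at $c$, which is exactly the induction step in the proof.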

Let $\bar\Delta_t=\Delta_t\wedge\left(J-\sum_{s=0}^{t-1}\bar\Delta_s\right)$. It follows from the previous lemma that

$$T_1^{**}=\inf\left\{t:\sum_{s=0}^{t}\bar\Delta_s\ge J\right\},$$

which is quite convenient as the distribution of $\bar\Delta_t$ is easy to compute.

Assume again that step $t$ of the search departs from node $i_t$ with $\ell(Y_{i_t},y)=j$. Consider one item, say $z$, in the bucket $j$ of node $i_t$. Recall that $z$ is selected uniformly at random from all indices $s$ with $\ell(Y_s,y)\ge j+1$. Thus it follows from the structure of a perfect trie, which is given by (5), that

$$\mathbf{P}\{\ell(Y_z,y)=s\}=\frac{n/2^{s+1}}{n/2^{j+2}+n/2^{j+3}+\cdots+n/2^{J}+n/2^{J}}=\frac{1}{2^{s-j}},\qquad j+1\le s<J.$$

Or shifted by $j$,

$$\mathbf{P}\{\ell(Y_z,y)-j=s\}=\frac{1}{2^{s}},\qquad1\le s<J-j.$$

If truncated by $J-j$, we obtain

$$\mathbf{P}\{(\ell(Y_z,y)-j)\wedge(J-j)=s\}=\frac{1}{2^{s\wedge(J-j-1)}},\qquad1\le s\le J-j.$$

Note that this is exactly the distribution of a geometric $(1/2)$ random variable truncated by $J-j$.

Recall that among all the values of $\ell(Y_z,y)$ given by the items $z$ in the bucket $j$ of node $i_t$, the one chosen as the next stop of the search gives the maximum. Thus

$$\Delta_t=\max_{z\in\text{bucket }j}\{\ell(Y_z,y)-j\}.$$

Let $Z_1,\ldots,Z_k$ be i.i.d. geometric $(1/2)$. Let $V=\max\{Z_1,\ldots,Z_k\}$. Then

$$\begin{aligned}\bar\Delta_t&=\Delta_t\wedge(J-j)\\&=\max_{z\in\text{bucket }j}\{\ell(Y_z,y)-j\}\wedge(J-j)\\&=\max_{z\in\text{bucket }j}\{(\ell(Y_z,y)-j)\wedge(J-j)\}\\&\overset{\mathcal{L}}{=}\max\{Z_1\wedge(J-j),\ldots,Z_k\wedge(J-j)\}\\&=\max\{Z_1,\ldots,Z_k\}\wedge(J-j)\\&=V\wedge(J-j).\end{aligned}$$

Let $V_0$ be a geometric $(1/2)$ minus one. Then $\mathbf{P}\{V_0=s\}=2^{-(s+1)}$ for $s=0,1,\ldots$. Let $V_1,V_2,\ldots$ be i.i.d. random variables distributed as $V$. Let $\bar V_t=V_t\wedge\left(J-\sum_{s=0}^{t-1}\bar V_s\right)$. Using induction and the previous argument about $\bar\Delta_t$, one can show that

$$\sum_{s=0}^{t}\bar\Delta_s\overset{\mathcal{L}}{=}\sum_{s=0}^{t}\bar V_s,\qquad t=0,1,\ldots \tag{6}$$

For the induction basis, note that

$$\bar\Delta_0=\Delta_0\wedge J\overset{\mathcal{L}}{=}(V_0\wedge d)\wedge J=V_0\wedge J=\bar V_0.$$

Assume that $\sum_{s=0}^{t-1}\bar\Delta_s\overset{\mathcal{L}}{=}\sum_{s=0}^{t-1}\bar V_s$ for some $t\ge1$. Then for all $i$,

$$\begin{aligned}\mathbf{P}\left\{\sum_{s=0}^{t}\bar\Delta_s=i\right\}&=\sum_{j=0}^{i}\mathbf{P}\left\{\bar\Delta_t=i-j\ \middle|\ \sum_{s=0}^{t-1}\bar\Delta_s=j\right\}\mathbf{P}\left\{\sum_{s=0}^{t-1}\bar\Delta_s=j\right\}\\&=\sum_{j=0}^{i}\mathbf{P}\{V_t\wedge(J-j)=i-j\}\mathbf{P}\left\{\sum_{s=0}^{t-1}\bar V_s=j\right\}\\&=\sum_{j=0}^{i}\mathbf{P}\left\{\left[\bar V_t=i-\sum_{s=0}^{t-1}\bar V_s\right]\cap\left[\sum_{s=0}^{t-1}\bar V_s=j\right]\right\}=\mathbf{P}\left\{\sum_{s=0}^{t}\bar V_s=i\right\}.\end{aligned}$$

Thus (6) is proved. It then follows from Lemma 3 and (6) that

$$T_1^{**}\overset{\mathcal{L}}{=}\inf\left\{t:\sum_{s=0}^{t}V_s\ge J\right\},$$

which makes $T_1^{**}$ much easier to analyze.

Since $V<s$ if and only if $Z_1,\ldots,Z_k$ are all smaller than $s$,

$$\mathbf{P}\{V\ge s\}=1-\left(1-\frac{1}{2^{s-1}}\right)^k.$$

Therefore, by the definition of $\mu_k$,

$$\mathbf{E}V=\sum_{s=1}^{\infty}\mathbf{P}\{V\ge s\}=\sum_{s=1}^{\infty}1-\left(1-\frac{1}{2^{s-1}}\right)^k=\mu_k.$$
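The distribution of $V$ is simple to simulate. The following Monte Carlo check is our own illustration of the tail formula above:

```python
# Monte Carlo check (illustrative) that V = max of k i.i.d.
# geometric(1/2) variables satisfies P{V < s} = (1 - 2^(1-s))^k.
import random

rng = random.Random(7)

def geometric_half():
    # number of fair-coin flips up to and including the first head
    g = 1
    while rng.random() < 0.5:
        g += 1
    return g

k, s, trials = 3, 4, 200_000
hits = sum(max(geometric_half() for _ in range(k)) < s for _ in range(trials))
exact = (1 - 2.0 ** (1 - s)) ** k          # = 0.875^3 for k = 3, s = 4
assert abs(hits / trials - exact) < 0.01
```

The same simulation with the truncation $V\wedge(J-j)$ reproduces the per-step progress distribution of the search.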

Readers familiar with renewal theory [4, chap. 4.4] can immediately see that

$$\frac{T_1^{**}}{\log_2 n}=\frac{T_1^{**}}{J}\times\frac{J}{\log_2 n}\xrightarrow{p}\frac{1}{\mathbf{E}V}=\frac{1}{\mu_k},$$

which completes the proof of Theorem 1 for $n$ that is a power of $2$. The following lemma gives some more details.

###### Lemma 4.

If $\tau_M=\inf\left\{t:\sum_{s=0}^{t}V_s\ge M\right\}$, then

$$\frac{\tau_M}{M/\mathbf{E}V}\xrightarrow{p}1,\quad\text{as }M\to\infty.$$
###### Proof.

Since $V_0+1$ is geometric $(1/2)$,

$$\mathbf{P}\{V_0+1\le s\}=1-\frac{1}{2^{s}}\ge\left(1-\frac{1}{2^{s}}\right)^k=\mathbf{P}\{V_1\le s\}.$$

In other words, $V_0+1\preceq V_1$, where $\preceq$ denotes stochastic ordering. Let

$$\tau'=\inf\left\{t:\sum_{s=1}^{t}V_s\ge M\right\},\qquad\tau''=\inf\left\{t:\sum_{s=0}^{t}V_{s+1}\ge M\right\}=\tau'-1.$$

Then $\tau_M\le\tau'$ and $\tau_M\succeq\tau''$. By the strong law of large numbers, both $\tau'/(M/\mathbf{E}V)$ and $\tau''/(M/\mathbf{E}V)$ converge to $1$ almost surely. Therefore $\tau_M/(M/\mathbf{E}V)\xrightarrow{p}1$. ∎
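Lemma 4 is also visible in simulation; the following sketch of ours uses $k=2$, where $\mathbf{E}V=\mu_2=8/3$:

```python
# Renewal-theory sketch (illustrative): tau_M, the first t with
# V_0 + ... + V_t >= M, grows like M / E V = M / mu_k.
import random

rng = random.Random(42)

def geometric_half():
    g = 1
    while rng.random() < 0.5:
        g += 1
    return g

def tau(M, k):
    t, total = 0, geometric_half() - 1   # V_0: geometric(1/2) minus one
    while total < M:
        t += 1
        total += max(geometric_half() for _ in range(k))
    return t

k, M = 2, 50_000
mu_2 = 8.0 / 3.0                         # E V for k = 2
est = sum(tau(M, k) for _ in range(20)) / 20
assert abs(est / (M / mu_2) - 1) < 0.05
```

With $M=J\approx\log_2 n$, this is precisely the mechanism behind $T_1^{**}/\log_2 n\xrightarrow{p}1/\mu_k$.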

## 7 Proof for the General Case

In this section, the proof of Theorem 1 for an arbitrary integer $n$ is only sketched, as most of the methods used here are very similar to those in the previous section.

### 7.1 An Almost Perfect Trie

When $n$ is not a power of $2$, $\mathbf{E}N_j$ is not guaranteed to be an integer. So a perfect trie is not well defined any more. However, let us define

$$b_j=\begin{cases}\lceil\mathbf{E}N_j\rceil=\left\lceil\dfrac{n}{2^{j+1}}\right\rceil&0\le j<J,\\[2mm]n-\sum_{r=0}^{J-1}b_r&j=J.\end{cases}$$

Then the coupling argument for perfect tries used in Section 6.1 can still be applied, now replacing $\mathbf{E}N_j$ by $b_j$.

In this way, a trie consisting of ids $Y_1,\ldots,Y_n$ can be constructed, with its subtrees having fixed sizes of

$$|\{i:\ell(Y_i,y)\wedge J=j\}|=b_j. \tag{7}$$

If the original trie is good, then the number of indices $i$ for which $Y_i\ne X_i$, called ghosts, is bounded by

$$\sum_{j=0}^{J-1}\alpha\times\mathbf{E}N_j+(\alpha\mathbf{E}N_J+J)=\alpha n+J.$$

A trie with these properties is called almost perfect.

Let $T_1^{*}$ denote the number of search steps starting from node $1$ via nodes $i$ with $\ell(Y_i,y)<J$ in the almost perfect trie. If the $X_i$'s and $Y_i$'s are coupled the same way as they were in Section 6.1, then $\mathbf{P}\{T_1^{*}\ne T_1'\}\le\mathbf{P}\{B\}$, where $B$ is the event that at least one node in the buckets encountered during a search is a ghost. Let $A$ be the event that the trie is good, which has probability $1-o(1)$ by Lemma 2. One can check that

$$\mathbf{P}\{T_1^{*}\ne T_1'\}\le\mathbf{P}\{B\}\le\mathbf{P}\{B,A\}+\mathbf{P}\{A^c\}\le mk\left(m^{-3/2}+\frac{m}{2^{m}}\right)+o(1)=o(1).$$

Again, Theorem 1 follows if

$$\frac{T_1^{*}}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty.$$

### 7.2 Filling the Buckets with Replacement

The coupling argument used in Section 6.2 to deal with the problem that buckets are filled by sampling without replacement can be adapted for an almost perfect trie. Let $p_j$ be the probability that $k$ items sampled with replacement from a set of size $b_{j+1}+\cdots+b_J$ are not all distinct. Observe that, for $n$ large enough,

$$b_{j+1}+\cdots+b_J\ge\frac{n}{2^{j+1}}-(j+1)\ge\frac{n}{2^{j+2}}.$$

Thus it follows from the union bound that

$$p_j\le\binom{k}{2}\frac{1}{b_{j+1}+\cdots+b_J}\le\frac{k^2}{2(b_{j+1}+\cdots+b_J)}\le\frac{k^2 2^{j+1}}{n}.$$

Let the search time of sampling without replacement be $T_1^{**}$. Let $T_1^{*}$ and $T_1^{**}$ be coupled in the same way as they were in Section 6.2. Let $D$ be the event that during the search a marked bucket is encountered. Since $[T_1^{*}\ne T_1^{**}]\subseteq D$, one can check that

$$\mathbf{P}\{T_1^{*}\ne T_1^{**}\}\le\mathbf{P}\{D\}\le\sum_{j=0}^{J-1}p_j<\frac{4k^2}{m^4}=o(1).$$

So once again, Theorem 1 follows if

$$\frac{T_1^{**}}{\log_2 n}\xrightarrow{p}\frac{1}{\mu_k},\quad\text{as }n\to\infty.$$

### 7.3 Analyzing $T_1^{**}$ Using a Sum of I.I.D. Random Variables

Consider two partitions of a line segment $L$ of length $n$. From left to right, cut $L$ into $J+1$ consecutive intervals $B_0,\ldots,B_J$, with $|B_j|=b_j$, where $|B_j|$ denotes the length of $B_j$. Again, from left to right, cut $L$ into infinitely many consecutive intervals $B_0',B_1',\ldots$, with $|B_j'|=n/2^{j+1}$.

Observe that for $j<J$, $B_j$ and $B_j'$ do not completely match since $B_j$ is wider than $B_j'$. However, since $b_j-n/2^{j+1}<1$ for $j<J$, the distance between the right endpoints of $B_j$ and $B_j'$ is at most $J$. Therefore, the total length of the unmatched regions, which are called death zones, is at most $J^2$.

Let $V_t$ and $\Delta_t$ be the same as in Section 6.3. A coupling between them can be constructed as follows: pick one point $z_0$ uniformly at random from the entire $L$. If $z_0$ falls in interval $B_j$, let $\Delta_0=j$. If $z_0$ falls in interval $B_j'$, let $V_0=j$. Note that $\Delta_0=V_0$ unless $z_0$ falls into a death zone. Also note that since

$$\mathbf{P}\{V_0=j\}=\mathbf{P}\{z_0\in B_j'\}=\frac{|B_j'|}{n}=\frac{1}{2^{j+1}},\qquad j=0,1,\ldots,$$

$V_0$ is geometric $(1/2)$ minus one, as desired.

Assume that $\sum_{s=0}^{t-1}\bar\Delta_s=j$. Pick $k$ points uniformly at random from the line segment starting from the right endpoint of $B_j'$ to the right endpoint of $L$. Let $V_t=s$ such that the rightmost one of the $k$ points falls into $B_{j+s}'$. Since

$$\mathbf{P}\{V_t<s\}=\left(1-\frac{1}{2^{s-1}}\right)^k,$$

$V_t$ is again the maximum of $k$ i.i.d. geometric $(1/2)$.

If not all the $k$ points are in the range of $B_{j+1},\ldots,B_J$, keep picking more points until $k$ of them are within this region. Let $\Delta_t=s$ such that the rightmost of these $k$ points falls into $B_{j+s}$. Chosen in this way, $\Delta_t$ has the same distribution as how much progress one makes at step $t$ of the search. Therefore

$$T_1^{**}\overset{\mathcal{L}}{=}\inf\left\{t:\sum_{s=0}^{t}\bar\Delta_s\ge J\right\}.$$

It follows from Lemma 4 that if

$$T_1^{***}=\inf\left\{t:\sum_{s=0}^{t}V_s\ge J\right\},$$

then $T_1^{***}/\log_2 n\xrightarrow{p}1/\mu_k$ as $n\to\infty$.

Let $E$ be the event that at some step of the previous coupling, at least one of the first $k$ chosen points falls into the death zones. Note that $[T_1^{**}\ne T_1^{***}]\subseteq E$. Therefore,

$$\mathbf{P}\{T_1^{**}\ne T_1^{***}\}\le\mathbf{P}\{E\}\le\sum_{j=0}^{J-1}\frac{kJ^2}{b_J}\le\frac{m^3k}{m^4-m}=o(1).$$

So the proof of Theorem 1 when $n$ is an arbitrary integer is complete.

## 8 Conclusions

In a Kademlia system, one often searches for a random id. Although $T_1$ is the searching time for a fixed id $y$, Theorem 1 still holds if the target $y$ is chosen uniformly at random from $\{0,1\}^d$.

If $d=c\log_2 n$ with $c>2$, there is no essential difference between sampling the ids with or without replacement from $\{0,1\}^d$, as the probability of a collision in sampling with replacement is $O(n^2/2^d)=o(1)$. This is the well-known birthday problem. Since in practice a Kademlia system hands out a new id without checking its uniqueness, it is wise to have $d\gg\log_2 n$, since then a randomly generated id clashes with any existing id with very small probability.
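The birthday bound can be observed with toy parameters; this sketch and its parameters ($n=2^6$ nodes, $d=18=3\log_2 n$ bits) are our own illustration:

```python
# Birthday-problem sketch (illustrative): with n random d-bit ids drawn
# with replacement, the union bound gives
# P{collision} <= binom(n, 2) / 2^d <= n^2 / 2^(d+1).
import random

rng = random.Random(0)

def collision_prob(n, d, trials):
    hits = 0
    for _ in range(trials):
        ids = [rng.getrandbits(d) for _ in range(n)]
        hits += len(set(ids)) < n
    return hits / trials

n, d = 64, 18                        # d = 3 * log2(n)
p_hat = collision_prob(n, d, 5000)
bound = n * n / 2.0 ** (d + 1)       # = 1/128
assert p_hat <= 2 * bound            # the estimate respects the bound
```

Doubling $d$ at fixed $n$ squares the factor $n^2/2^d$ down, which is why generous id lengths make uniqueness checks unnecessary in practice.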