Octopus: A Secure and Anonymous DHT Lookup


Qiyan Wang, Department of Computer Science, University of Illinois at Urbana-Champaign
Nikita Borisov, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
IL, U.S.A.

Distributed Hash Table (DHT) lookup is a core technique in structured peer-to-peer (P2P) networks. Its decentralized nature introduces security and privacy vulnerabilities for applications built on top of it; we thus set out to design a lookup mechanism achieving both security and anonymity, heretofore an open problem. We present Octopus, a novel DHT lookup that provides strong guarantees for both security and anonymity. Octopus uses attacker identification mechanisms to discover and remove malicious nodes, severely limiting an adversary’s ability to carry out active attacks, and splits lookup queries over separate anonymous paths and introduces dummy queries to achieve high levels of anonymity. We analyze the security of Octopus by developing an event-based simulator and show that the attacker discovery mechanisms can rapidly identify malicious nodes with a low error rate. We calculate the anonymity of Octopus using probabilistic modeling and show that Octopus can achieve near-optimal anonymity. We evaluate Octopus’s efficiency on PlanetLab with 207 nodes and show that Octopus has reasonable lookup latency and manageable communication overhead.

Keywords: Anonymity, security, DHT, lookup

I. Introduction

Structured peer-to-peer networks, such as Chord [1] or Kademlia [2], allow the creation of scalable distributed applications that can support millions of users. They have been used to build a number of successful applications, including P2P file sharing (Overnet, Kad, the Vuze DHT, http://www.vuze.com/) and content distribution (CoralCDN [3]), and many others have been proposed, such as distributed file systems [4], anonymous communication systems [5, 6, 7, 8, 9], and online social networks [10, 11]. At the heart of these networks lies a distributed hash table (DHT) lookup mechanism that implements a decentralized key–value store. The DHT allows efficient storage and coordination among very large collections of nodes; however, its decentralized nature creates a number of security and privacy vulnerabilities. Because peers have to rely on other peers to determine the state of the network, malicious nodes could provide misinformation to misdirect an honest user’s lookups [12]. Likewise, nodes can profile the lookup activities of other nodes and learn what files or websites they are interested in or who their friends may be. Recent research shows that anonymous communication systems based on anonymity-deficient DHT lookups have severe vulnerabilities to information leak attacks [13, 14].

To address these issues, our goal is to design a lookup mechanism that achieves both security and anonymity (where anonymity means no information is revealed about which nodes are looking up which values), heretofore an open problem. We note that security is a necessary condition for anonymity, because without security, malicious nodes can misdirect the lookup towards colluding nodes and learn about the lookup target. On the other hand, security is not a sufficient condition for anonymity. Some existing lookup schemes designed to resist active attacks are not suitable to build anonymous DHT systems, e.g., due to heavily relying on redundant transmission that leaks information about the lookup initiator and/or target [15, 16, 17]. Furthermore, even with a secure lookup that itself does not cause information leak, an inappropriate design of the DHT system can still lead to anonymity vulnerabilities [14].

Our contributions in this work include:

1) We propose a suite of novel security mechanisms for DHT systems based on attacker discovery. Our mechanisms proactively identify and remove malicious peers. We develop an event-based simulator to show that our identification mechanisms are capable of rapidly discovering malicious nodes with a low error rate: for a network with 20% malicious nodes, they can correctly identify all attacking nodes within 30 minutes. We compare our scheme with Halo [16], a state-of-the-art secure DHT system, and show that our scheme provides better robustness against active attacks. Furthermore, our design does not rely on redundant transmission, and is thus suitable for constructing anonymous DHT systems.

2) Building on the proposed security mechanisms, we put forward Octopus, a secure and anonymous DHT lookup. Octopus splits the individual queries used in a lookup over multiple anonymous paths, and introduces dummy queries, to make it difficult for an adversary to learn the eventual target of a lookup. Unlike most previous works, which only evaluate a system’s anonymity analytically, we use probabilistic modeling aided by simulation to calculate the information leak, so that users can know how much anonymity the system actually provides. We show that Octopus provides near-optimal anonymity for both the lookup initiator and the target. In a network of 100 000 nodes with 20% malicious nodes, Octopus leaks only 0.57 bits of information about the initiator and 0.82 bits about the target; these are 6 times and 4 times better, respectively, than what previous works [7, 8] were able to achieve.

3) For performance evaluation, we measure the lookup latency of Octopus on PlanetLab with 207 nodes, and compare it with the baseline scheme Chord [1] and with Halo [16]. The lookup latency of Octopus is comparable to that of Chord, and even better than that of Halo. While Octopus incurs relatively higher communication overhead than Chord and Halo to provide its extra security and anonymity guarantees, the bandwidth consumption of Octopus is still manageable: only a few kbps per node.

The remainder of the paper is organized as follows. Section II presents the system model. We describe our security and anonymity mechanisms in Sections III and IV, respectively. The efficiency evaluation is provided in Section V, and Section VI presents related work. We conclude in Section VII.

II. System Model

II-A. Threat Model

In the same vein as related works [5, 6, 7, 8, 9, 17, 18], we do not consider a global adversary that is capable of controlling the whole network and observing all communication traffic; such a global adversary seems impractical in large-scale P2P networks. Instead, we assume a partial adversary that controls a fraction f of all nodes in the network (f is typically assumed to be up to 20%). Malicious nodes can behave in arbitrarily malicious ways, such as intercepting, modifying, or dropping any messages going through them, or injecting fake messages to other nodes. We also assume that malicious nodes can log any messages they have seen and have access to a high-speed communication channel to share any information with very low transmission delay.

Also similar to related works, we do not attempt to solve the Sybil attack [19] in this work. Defending against Sybil attacks is an active research area that has drawn a lot of attention; a number of effective solutions have been proposed, such as [20, 21, 22]. All of these solutions are applicable to Octopus as extensions to resist Sybil attacks.

II-B. Design Goals

The major goal for security is to avoid lookups being biased by malicious nodes. In other words, given a lookup target, it should not be possible to misdirect the lookup path or bias the final lookup result.

Pfitzmann and Hansen defined several relevant anonymity properties for message-based communication, such as sender and receiver anonymity [23]. We consider equivalent properties in the context of DHT lookups.

  • Initiator anonymity: given a lookup target, it should not be possible to determine its initiator.

  • Target anonymity: given a lookup initiator, it should not be possible to determine its target.

  • Query unlinkability: given several queries with known targets, it should not be possible to find out if they came from the same initiator.

III. Security Mechanisms of Octopus

III-A. Problem Description

In a DHT system like Chord [1], each node is assigned a unique ID associated with its IP address, and owns the IDs between its direct predecessor and itself on the ring. Each node P maintains a list of O(log n) contact nodes (called fingers), where n is the network size; the i-th finger of P is the owner (or successor) of the ID P + 2^{i-1} (the first finger, i.e., the owner of P + 1, is P’s direct successor). Besides, each node maintains a list of successor nodes for stabilization. Some DHTs [24] also utilize the list of successors during lookups to speed up the lookup process in the last few hops.
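As a concrete illustration of this finger structure, the following sketch (function names and parameters are ours, for illustration only) computes a node's ideal finger IDs on an m-bit Chord-style ring and finds the owner of an ID among a set of live nodes:

```python
# Sketch of the Chord-style finger structure described above, assuming an
# m-bit identifier ring (names and parameters are illustrative).

def finger_ids(node_id, m):
    """Ideal IDs of the m fingers of `node_id`: node_id + 2**(i-1), 1 <= i <= m."""
    return [(node_id + 2 ** (i - 1)) % 2 ** m for i in range(1, m + 1)]

def owner(ident, nodes):
    """The owner (successor) of `ident`: the first live node clockwise from it."""
    ring = sorted(nodes)
    for n in ring:
        if n >= ident:
            return n
    return ring[0]  # wrap around the ring
```

For example, on a 4-bit ring, node 0's ideal finger IDs are 1, 2, 4, and 8, and each actual finger is the owner of the corresponding ideal ID.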

We study the more general case where the successor lists are used in lookups. We refer to the combination of the fingertable and the successor list as the routing table. More specifically, we consider the following lookup procedure. Assume a lookup initiator I wants to find the owner of value t. I first queries the node in its own routing table that is closest to t, and that node sends its routing table back to I. (In vanilla DHT lookups, I tells t to each queried node, which returns the finger closest to t; however, this reveals the lookup target to malicious intermediate nodes. Hence, for anonymous lookups, I asks each intermediate node for its full routing table, without revealing t.) Then, out of the received routing table, I finds the node closest to t and asks it for its routing table. The process proceeds iteratively until reaching a node for which one of its successors is the owner of t (i.e., the lookup target).
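The iterative procedure above can be sketched as follows; this is a simplified model (names are ours, routing tables are modeled as plain ID lists) in which the initiator repeatedly asks the learned node closest to the target for its full routing table, without ever revealing the target:

```python
# Sketch of the iterative anonymous lookup: the initiator asks the node whose
# ID most closely precedes the target t for its full routing table, never
# sending t itself. `routing_tables` maps a node ID to the IDs it knows.

def closest_preceding(ids, t, ring_size):
    # node with minimal clockwise distance to t (i.e., most closely preceding t)
    return min(ids, key=lambda n: (t - n) % ring_size)

def iterative_lookup(initiator_table, routing_tables, t, ring_size):
    queried = []
    current = closest_preceding(initiator_table, t, ring_size)
    while True:
        queried.append(current)
        table = routing_tables[current]           # full routing table; t not revealed
        nxt = closest_preceding(table + [current], t, ring_size)
        if nxt == current:                        # no node closer to t: done
            return queried
        current = nxt
```

On a toy 16-ID ring with nodes {1, 4, 9, 12}, a lookup for t = 11 walks through nodes 4 and 9; node 9's successor 12 is the owner of 11.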

Some of the queried nodes could be malicious and attempt to launch the following attacks.

III-A1. Lookup Bias Attack

If the last queried node is malicious, it can replace the honest nodes in its successor list with malicious nodes, so that one of its (malicious) “successors” will be regarded as the lookup target.

III-A2. Lookup Misdirection Attack

Instead of trying to bias the lookup result, malicious nodes could attempt to make the initiator query more malicious nodes during the lookup by providing manipulated fingertables (i.e., replacing honest fingers with malicious nodes). This attack is a serious threat to anonymity, since the adversary can learn more information about the lookup target from a larger number of queried malicious nodes [14].

III-A3. Finger Pollution Attack

In a DHT system, each node periodically performs lookups to update its fingers. A related attack is that malicious nodes could attempt to pollute honest nodes’ fingertables during these finger-update lookups, so that the polluted fingertables can contribute to the lookup bias and misdirection attacks.

III-B. Security Mechanisms

Many existing secure DHT designs [15, 16, 6] employ redundant queries or lookups to tolerate misinformation provided by malicious nodes. However, the redundant transmission creates more opportunities for an adversary to gain information about the lookup initiator and/or target [13]. Some schemes [17, 18] utilize quorum-based topologies and threshold cryptography to limit Byzantine adversaries, but they also require the initiator to contact multiple nodes at each step of the lookup (for cryptographic operations), which accelerates information leak. Other schemes, such as Myrmic [25], prevent routing-table manipulation by introducing a central trusted authority to sign each node’s routing table; however, this approach is impractical, since for each node join or churn event the authority has to regenerate the signatures for all affected nodes, creating a performance bottleneck.

To effectively limit active attacks while minimizing information leak, we propose a new defense strategy: each (honest) node secretly checks the correctness of other nodes’ routing tables. Such checks are performed offline (i.e., independently of lookups), and thus do not reveal any information about lookups. To punish discovered malicious nodes, we use a certificate authority (CA) to issue certificates and to revoke the certificates of identified malicious nodes, so that malicious nodes can be gradually removed from the system. We note that the CA in our case is fundamentally different from that of Myrmic [25]: the latter is required to be online all the time and needs to update signatures for multiple nodes on each node churn/join, whereas the certificates in our scheme are independent of nodes’ routing states and thus do not need to be updated frequently. Our simulation results show that the workload of our CA is sufficiently low to be handled by most Internet servers.

There have been several fairly efficient and scalable revocation mechanisms in the literature, such as Merkle hash tree based certificate revocation [26], efficient distribution of revocation information over P2P networks [27], and scalable PKIs based on P2P systems [28]. Since certificate management in our scheme is essentially the same as in these systems, we do not specifically study certificate revocation in this work.

III-B1. Secret Neighbor Surveillance

To limit the lookup bias attack, we propose secret neighbor surveillance, a mechanism that prevents malicious neighbors from manipulating their successor lists.

In particular, we let each node P maintain a predecessor list, in the same way as it maintains its successor list (i.e., by periodically running the Chord stabilization protocol anti-clockwise). The predecessor list is of the same size as the successor list, and thus each node should be contained in the successor list of each of its predecessors. In other words, if P is not contained in the successor list of one of its predecessors, that predecessor is trying to manipulate its successor list by replacing P with another node. Our goal is to let P detect this.

Fig. 1: Secret neighbor surveillance. P is checking whether it is included in the successor list of its predecessor Q. If not, Q has manipulated its successor list by replacing P with another node.

In the lookup bias attack, a malicious node provides a manipulated successor list in response to lookup queries. Therefore, we let P anonymously send a “lookup query” to one of its predecessors, say Q (see Figure 1), and check whether P itself is included in Q’s successor list. Anonymity is required here because if the malicious node could distinguish a testing query from real lookup queries based on the querier’s identity, it could always provide the correct successor list for testing queries to avoid being detected. The anonymous transmission can be achieved using the basic onion routing technique [29]: P chooses two random peers as relays to forward its query to Q, while using onion encryption to ensure that each hop on the forwarding path only knows its previous and next hops. The two relay nodes can be found by performing a short random walk on the overlay network. The details of the random walk are provided in Appendix -A.
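The two-relay forwarding can be sketched abstractly as follows; layered encryption is modeled simply as nesting, so that each relay “peels” one layer and learns only its next hop. A real implementation would use nested public-key encryption as in onion routing [29]; all names here are illustrative.

```python
# Abstract sketch of two-relay onion forwarding for testing queries.

def wrap(query, destination, relays):
    """Build an onion: (relay1, (relay2, ..., (destination, query)))."""
    onion = (destination, query)
    for relay in reversed(relays):
        onion = (relay, onion)
    return onion

def route(onion):
    """Peel layers hop by hop; return the path taken and the delivered query."""
    path = []
    while isinstance(onion[1], tuple):
        hop, onion = onion
        path.append(hop)
    destination, query = onion
    path.append(destination)
    return path, query
```

Note that each peeled layer exposes only the next hop, which is exactly the property the testing query needs: Q cannot tell that the query originated at P.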

P performs the above checks from time to time (i.e., with a random interval bounded by a maximum checking interval) on randomly selected predecessors. A detected malicious node is reported to the CA. To provide a non-repudiation proof of a manipulated successor list (i.e., one verifiable by the CA), we let each node sign its routing table and attach a time stamp to it.

Fig. 2: Successor list pollution. The malicious successor S pollutes P’s successor list during P’s stabilization by providing a manipulated successor list.

Another strategy for launching the lookup bias attack is to pollute honest nodes’ successor lists during stabilization. For example, as shown in Figure 2, assume S is malicious and P is honest. S can send a manipulated successor list that excludes an honest node R during P’s stabilization, so that P will regard R as dead and remove R from its successor list; consequently, P will be mistakenly identified as a malicious node, while S remains uncovered. To deal with this, we let each node sign the successor list it provides in stabilization; in addition, each node keeps a queue of the latest successor lists received during stabilization as proof that its own successor list has not been intentionally manipulated. For example, P can provide its proof to the CA, showing that its successor list was correctly computed from the information provided by S. If P’s proof is verified (according to the stabilization algorithm) by the CA, then the suspicion of P is cleared, and the CA will request S’s proof and check it against P’s proof. This process is repeated until a node is found that cannot provide valid proofs; this node is then judged to be malicious.
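The CA's chain-walking check can be sketched as follows (all names hypothetical; `derive_successors` stands in for the real stabilization rule, which here simply accepts the provided list):

```python
# Sketch of the CA walking the chain of signed stabilization proofs: suspicion
# moves from the accused node to its proof's provider until some node has a
# missing or inconsistent proof; that node is judged malicious.

def derive_successors(claimed_list):
    # Placeholder for the stabilization derivation rule; here: take list as-is.
    return list(claimed_list)

def find_culprit(accused, proofs):
    """`proofs[n]` = (provider, claimed_list, derived_list), or None if absent."""
    current = accused
    seen = set()
    while current not in seen:          # guard against cycles of colluders
        seen.add(current)
        proof = proofs.get(current)
        if proof is None:
            return current              # no proof: judged malicious
        provider, claimed, derived = proof
        if derive_successors(claimed) != derived:
            return current              # derivation inconsistent: judged malicious
        current = provider              # suspicion moves up the chain
    return current
```

In the Figure 2 scenario, P's valid proof shifts suspicion to S, which cannot produce a consistent proof.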

III-B2. Secret Finger Surveillance

Likewise, we propose a secret finger surveillance mechanism against the lookup misdirection attack, which prevents malicious nodes from manipulating the fingertables they provide in lookups.

Fig. 3: Secret finger surveillance. P is checking whether node A has replaced its finger with a malicious node F. P first asks F for its predecessor list, and then anonymously sends a “lookup query” to a random node Q in F’s predecessor list. If any node in Q’s successor list is closer to the ideal finger ID than F, P detects that A has manipulated its fingertable.

In particular, we let each node keep a small number of received fingertables (e.g., from lookups, secret neighbor surveillance, or random walks). From time to time, node P chooses a random finger from one of the kept fingertables, say the i-th finger F of node A, and asks F for its predecessor list (see Figure 3). Then, after waiting a short random period of time, P anonymously sends a “lookup query” to a random predecessor of F (say Q), and checks whether any node in Q’s successor list is closer to the ideal finger ID A + 2^{i-1} than F (i.e., whether F is not the true finger).

The intuition behind this is that if A replaces an honest finger with a malicious node F, at least one of F’s (true) predecessors must be closer to the ideal finger ID than F. Hence, if F provides P with its true predecessor list, the fingertable manipulation will be detected. Therefore, F has to manipulate its predecessor list to ensure that all the listed “predecessors” are malicious, so that the selected predecessor Q can collude with F by providing a manipulated successor list consistent with the predecessor list provided by F. On the other hand, Q cannot freely manipulate its successor list, since Q is under surveillance by its own neighbors (i.e., secret neighbor surveillance). Therefore, if the adversary tries to manipulate even a single finger, she has to sacrifice at least one malicious node, either F or Q.
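The core consistency test can be sketched as follows: manipulation is flagged when some node in the queried predecessor's successor list lies clockwise-closer to the ideal finger ID than the claimed finger (symbols and helper names are ours, not the paper's):

```python
# Sketch of the secret finger surveillance check: the claimed finger is only
# legitimate if no known node lies between the ideal finger ID and it.

def finger_manipulated(ideal_id, claimed_finger, predecessor_successors, ring_size):
    def dist(n):
        # clockwise distance from the ideal finger ID to node n
        return (n - ideal_id) % ring_size
    return any(dist(s) < dist(claimed_finger) for s in predecessor_successors)
```

For instance, on a 16-ID ring with ideal finger ID 5 and claimed finger 12, a returned successor list containing node 7 exposes the manipulation, since 7 is clockwise-closer to 5 than 12 is.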

III-B3. Secure Finger Update

We can invoke the secret finger surveillance to limit the finger pollution attack: when node P obtains the result (say F) of a finger-update lookup, it asks F for its predecessor list, and chooses a random predecessor of F to perform the same checks as in the secret finger surveillance to verify that F is the true finger; P uses F to update its fingertable only if F passes these checks.

III-C. Security Evaluation

We use the following metrics to evaluate our security mechanisms.

  • fraction of remaining malicious nodes,

  • false positive rate, i.e., the chance that an honest node is judged to be malicious,

  • false negative rate, i.e., the chance that a malicious node is not identified when tested by another node,

  • false alarm rate, i.e., the chance that there is no node identified in a report sent to the CA.

These metrics represent different aspects of security properties. Reduction of malicious nodes shows effectiveness, false positive/negative rates represent accuracy, and false alarm rate indicates efficiency.

(a) Secret neighbor surveillance
(b) Secret finger surveillance
(c) Secure finger update
Fig. 4: Simulation results of the security mechanisms

III-C1. Experiment Setup

We developed an event-based simulator in C++ with about 3.0 KLOC. We consider a WAN setting, where latencies between each pair of peers are estimated using the King dataset [30]. We model node churn/join as an exponential distribution process with a configurable mean node life time. We generate random network topologies with 20% malicious nodes. Each node maintains 12 fingers and 6 successors/predecessors (on the order of log n for the simulated network size). With configurations similar to those of related works [9, 1], we let each node run the successor and predecessor stabilization protocols every 2 seconds, and perform finger updates every 30 seconds. To discover malicious nodes, each peer performs the security checks of secret neighbor surveillance and secret finger surveillance every 60 seconds. (This interval is based on the frequency of stabilization and finger updates as well as the node churn rate; we found that performing security checks every 60 s is sufficient to rapidly discover malicious nodes.) To ensure high identification accuracy, each node keeps the 6 latest received successor lists as proofs. Besides, we let each node perform one lookup every minute (we choose 1 min only because we want to test a large number of lookups within a relatively short simulation time).
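The churn model can be sketched as follows; this is a sketch under the stated assumption of exponentially distributed node life times, with the mean as a free parameter (the helper name is ours):

```python
import random

# Sketch of the simulator's churn model: node life times drawn from an
# exponential distribution with a configurable mean.

def sample_lifetimes(num_nodes, mean_lifetime_min, seed=1):
    rng = random.Random(seed)
    # expovariate takes the rate lambda = 1 / mean
    return [rng.expovariate(1.0 / mean_lifetime_min) for _ in range(num_nodes)]
```

With a large sample, the empirical mean life time converges to the configured mean, which is how the churn intensity is tuned in such simulations.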

III-C2. Experimental Results

We see from Figure 4(a) that the secret neighbor surveillance mechanism can rapidly identify malicious nodes that try to bias lookups: after a short time (20 mins), almost all malicious nodes are discovered. In comparison, the speed of discovering malicious nodes by the secret finger surveillance mechanism is relatively slower (as shown in Figure 4(b)), but it can still identify over 80% of malicious nodes within 30 mins. The speed of identifying malicious nodes by the secure finger update mechanism is faster than that of the secret finger surveillance (as shown in Figure 4(c)), because the former is performed more frequently (at each finger update) and some malicious fingers contained in the successor list can also be detected by the secret neighbor surveillance.

Security mechanism             False Positive    False Negative      False Alarm
Secret neighbor surveillance   0 / 0             0 / 0.52%           0 / 0.52%
Secret finger surveillance     0 / 0             14.02% / 19.55%     0.18% / 1.55%
Secure finger update           0 / 0             14.08% / 18.48%     0.33% / 2.18%
TABLE I: False positive/negative/alarm rates of the security mechanisms; each cell gives the rates for two values of the mean node life time T (in minutes). Attack rate is 100%. In fingertable manipulation/pollution attacks, checked malicious predecessors provide manipulated successor lists with 50% chance.

The accuracy of our attacker discovery mechanisms is shown in Table I. The false positive rate is 0 for all three mechanisms even when the churn rate is very high (e.g., when the mean life time of each node is 10 mins). This ensures that honest nodes will not be mistakenly judged as malicious. In addition, the secret neighbor surveillance has a very low false negative rate (less than 0.6%), which implies that any malicious node that tries to bias lookups will be caught with high probability. We also see that the false negative rates for the secret finger surveillance and secure finger update mechanisms are relatively higher. This is because a malicious finger can pass the security checks if the randomly selected predecessor happens to be a colluding node and provides a successor list consistent with the malicious finger. Over time, however, a malicious node will be identified with very high probability, as shown in Figure 4.

We also compare our scheme with a state-of-the-art secure DHT scheme, Halo [16], in terms of the number of biased lookups over time. We calculate the ratio of biased lookups in Halo according to the analysis results in §4.1 of [16], using the parameter value they suggested. We can see from Figure 5 that after a short period of time there are no more biased lookups in Octopus, while the number of biased lookups in Halo keeps increasing linearly with the total number of lookups. This demonstrates that our security mechanisms can fundamentally thwart active attacks.

Fig. 5: Security comparison

Finally, we evaluate the workload of the CA in terms of the number of messages (including reports, proofs, etc.) processed over time. We can see from Figure 6 that even during the peak time (the first 10 min), the CA only needs to process about 2 messages per second on average, which can be handled by most Internet servers.

Fig. 6: The CA’s workload

III-D. Other Attacks, Countermeasures, and Discussion

We also consider other potential attacks, such as the selective denial-of-service attack and the relay exhaustion attack; we discuss these attacks and propose countermeasures in the Appendix.

IV. Anonymity Mechanisms of Octopus

IV-A. Problem Description

In vanilla DHT systems, since the lookup initiator I queries intermediate nodes directly, the queried nodes can easily infer I’s identity. A natural idea is to let I hide its identity by sending queries through an anonymous path, as in our attacker identification mechanisms. An illustration of this is shown in Figure 7.

Fig. 7: T is the lookup target, R1 → R2 is the anonymous path, and the V_i are queried nodes.

However, we note that a single anonymous path is insufficient to achieve high levels of anonymity. We use the example in Figure 7 to show this. Assume two of the queried nodes, say V_i and V_j, are malicious, and the first relay R1 is also malicious. With a single anonymous path, the adversary can learn that V_i and V_j belong to the same lookup, since they are contacted by the same exit node R2. Wang et al. [14] have shown that, based on the positions of a few queried malicious nodes in the lookup, the adversary can narrow the range of the lookup target down to a small set of nodes (called the range estimation attack). Suppose there are several concurrent lookups, each having an estimation range of m nodes; then, since the malicious R1 links the lookup to I, the adversary can know that I is doing a lookup and that its target is one of the m nodes in the estimated range.

IV-B. Anonymity Mechanisms

To address the limitation of a single anonymous path, we propose to split lookup queries over multiple anonymous paths, as shown in Figure 8.

Fig. 8: The structure of multiple anonymous paths in Octopus. R1 is the shared first-hop relay, and each query j is forwarded through its own exit relay R2_j to the queried node V_j.

Using separate anonymous paths for different queries effectively disassociates the adversary’s observations. The adversary only sees disjoint events from different concurrent lookups, but is unable to group queries belonging to the same lookup; in this case, it is much harder to apply the range estimation attack, thus substantially limiting the information leak.

Moreover, to further blur the adversary’s observations and make the range estimation attack even harder, we propose to add dummy queries in the lookup. Then, even though in rare cases the adversary can link two queried nodes in the same lookup (e.g., when all the relays used for the two queries are malicious), the adversary is unable to tell whether they are dummy queries or true queries, and the result of the range estimation attack would be incorrect with a dummy query.

We note that using multiple anonymous paths is important to ensure effectiveness of dummy queries, because in the single-anonymous-path scenario, observed queries are linkable due to the common exit relay and hence dummy queries can be distinguished based on the positions of observed queries. In comparison, with multiple anonymous paths, identifying dummy queries is much harder.

IV-C. Anonymity Evaluation

We analyze the best strategies for an adversary to infer the lookup target and the initiator based on its observations, and calculate the target anonymity and the initiator anonymity. We use entropy to quantify both. We let O denote the set of possible observations of the adversary (including the null observation). To measure the system as a whole, we have, for X ∈ {I, T}:

H(X) = Σ_{o ∈ O} P(o) · H(X|o)    (1)

where P(o) is the probability of observation o occurring.
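Equation (1) amounts to a weighted average of conditional entropies over the adversary's possible observations; a minimal numeric sketch (with toy distributions, not measured values):

```python
import math

# Sketch of the anonymity metric: a weighted average of conditional entropies
# over the adversary's observations, as in Equation (1).

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def expected_entropy(observations):
    """`observations`: list of (P(o), conditional distribution given o)."""
    return sum(p_o * entropy(cond) for p_o, cond in observations)
```

For example, if with probability 0.5 the adversary learns nothing beyond two equally likely candidates, and with probability 0.5 it identifies the node exactly, the metric evaluates to 0.5 bits.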

To calculate the maximum information leak, we make the following assumptions in the anonymity calculation. First, we assume the network is static, since network dynamics can obscure the adversary’s observations and make it more difficult to extract information about the initiator/target. Second, we only consider passive attacks, as active attackers can be quickly identified by our security mechanisms and consequently the adversary will lose observers to carry out passive attacks.

IV-C1. The Adversary’s Observations

An observation consists of a (large) number of observed events, which are message transmissions seen by malicious nodes. Each observed event can provide information, such as sender/receiver IDs, message content, and transmission time. The adversary can log all the observed events, and try to derive useful information from any combinations of them.

Observations of queries. There are two cases in which a query is observed (i.e., the queried node is identified): 1) the queried node itself is malicious, or 2) its exit relay is malicious. The adversary can link an observed query backwards to I in two ways. One is through a direct chain of compromised relays on the anonymous path. For example, if the relays R1 and R2_j and the queried node V_j are malicious (refer to Figure 8), V_j is observed and can be linked to I using R1 and R2_j as bridges. The other is through linkability of relays to I in the random walk. For instance, if R2_j is compromised and linkable to I in the random walk, then V_j is observed and linkable to I using R2_j as a shortcut. It is possible to use both approaches at the same time.

Furthermore, considering all queries in the same lookup together can help the adversary link more queries to I. Since all queries in the same lookup are forwarded through the same first-hop relay R1, if there exists one query linkable to both I and R1, then any other queries that are linkable to R1 can also be linked to I. In the rest of the paper, we use “a linkable query” to refer specifically to a query that is observed and linkable to I.

Observations of the initiator. The adversary’s goal is to link I and T; knowing only one of them is useless to the adversary. Therefore, the precondition for compromising the target anonymity is to observe I. Since I is directly connected to R1, the adversary can observe I as long as R1 is malicious. In addition, I is also observable in random walks; hence, another case in which I is observed is that there exists at least one malicious relay linkable to I in a random walk.

Observation of the target. For similar reasons, in order to compromise the initiator anonymity, the adversary has to know T. We note that T is not necessarily contacted during the lookup, since the aim of the lookup is to find the IP address of T, which can be learned from the query replies of intermediate nodes (after the lookup is done, T might be contacted due to application needs, but we do not consider this as part of the lookup). Yet we assume that each node can tell whether it is a target node based on its role in the application. For example, in DHT-based anonymous communication, a node can learn that it is a lookup target if it is selected as a relay of an anonymous circuit. Therefore, we regard T as observed if T itself is a malicious node.

IV-C2. Initiator Anonymity

To calculate the initiator anonymity, we divide the observations of the adversary into two categories.

  • o_0: the observation occurring when T is not observed

  • O_1: the set of observations occurring when T is observed

According to Equation (1), H(I) is calculated as:

H(I) = P(o_0) · H(I|o_0) + Σ_{o ∈ O_1} P(o) · H(I|o)
When T is not observed, the entropy of I is maximized. Presuming that the adversary can exclude the malicious nodes from the anonymity set of I, we have:

H(I|o_0) = log2((1 - f) · n)

where n is the network size and f is the fraction of malicious nodes.
Let Q_T denote the set of non-dummy queries linkable to I in the lookup whose target is T. Then, based on whether Q_T is an empty set, we calculate H(I|o), o ∈ O_1, as follows:


When there is no linkable non-dummy query, I is unlinkable with T. However, since some of the initiators of concurrent lookups can be observed by the adversary, we have:


Let Λ denote the set of concurrent lookups that have at least one linkable query, and let λ_T denote the lookup with target T. Each lookup in Λ is a possible candidate for λ_T. Therefore, we have:


Because the density of queries close to the target is higher than in other regions of the ring, for λ ∈ Λ it is highly likely that the last queried node in λ is located very close to λ’s target. Therefore, the adversary can assign a probability to each candidate initiator based on the minimum distance (i.e., number of hops) between its queried nodes and T. In particular, let Q_λ denote the set of linkable queries in λ, λ ∈ Λ, and let p_d denote the probability that, for λ = λ_T, the minimum distance from linkable queried nodes to T is d. The distribution p_d can be obtained via pre-simulations of the lookup. Then, we can calculate H(I|o) as follows:


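Each of the conditional entropies above reduces to a Shannon entropy over a weighted candidate set. A minimal sketch (the function name and example distributions are ours, not the paper's):

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a probability distribution over
    candidate initiators; zero-probability candidates contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely candidate initiators give the maximum 2 bits;
# a skewed distribution gives strictly less.
print(entropy_bits([0.25] * 4))        # 2.0
print(entropy_bits([0.7, 0.1, 0.1, 0.1]))
```

The "information leak" figures reported later (e.g., 0.57 bits) are the gap between such an entropy and its maximum value.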
IV-C3 Target Anonymity

We categorize the adversary’s observations into three classes:

  • : the observation occurring when is not observed

  • : the set of observations occurring when there is at least one linkable query in the lookup

  • : the set of observations occurring when there is no linkable query in the lookup

According to Equation (1), we have:


Since the adversary has to know the target in the first place, its entropy is maximal when the target is not observed.

Calculation of . When there exist queries that are linkable to the lookup, the adversary can adopt the range estimation attack to narrow down the range of the target. The output of this attack is a lower bound and an upper bound on the target's location on the ring.

We temporarily assume that all queries observed by the adversary are non-dummy queries (how to deal with dummy queries is shown later). Suppose there are two or more linkable queries in the lookup. Let and denote the first and the last linkable queried nodes, respectively. Since nodes succeeding the target will not be queried in the lookup, the last linkable queried node can serve as a lower bound on the target's location. An upper bound can be obtained from the fact that the lookup always greedily queries the finger that most closely precedes the target. In particular, the adversary first determines the queried nodes between the first and last linkable queried nodes by (locally) simulating the lookup between them. The initial upper bound is set accordingly; then for each pair of consecutive queries in between, the adversary finds the index of the latter node in the former's finger table (say ) and uses the former's next finger to update the upper bound.

Since the density of queries close to the target is higher than in other regions of the ring, nodes located closer to it within the estimation range are more likely to be the target. We let denote the probability that the -th node (clockwise) in an estimation range of size is the target, . This probability distribution can be obtained by pre-simulation of the lookup. Note that the range estimation attack is inapplicable when the lookup has only one linkable query; in that case, the adversary can use the successor of the queried node as the lower bound and its predecessor as the upper bound, and assign higher probabilities to the nodes closer to the lower bound in the estimation range.
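The range estimation attack can be sketched on a toy Chord ring. The code below is our illustration, not the paper's simulator: it assumes a small ID space, that the last linkable queried node's successor bounds the target from below, and that if a queried node v is u's j-th finger, the target must precede u's (j+1)-th finger (otherwise the greedy lookup would have queried that finger instead).

```python
M = 16                    # ID bits (toy size; real Chord uses 160)
RING = 1 << M

def successor(nodes, ident):
    """First node at or clockwise after `ident` (nodes: sorted ID list)."""
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]        # wrap around the ring

def finger(nodes, n, i):
    """i-th Chord finger of node n: successor(n + 2^i)."""
    return successor(nodes, (n + (1 << i)) % RING)

def estimation_range(nodes, queried):
    """Hypothetical range-estimation sketch: `queried` is the
    time-ordered list of linkable queried node IDs."""
    lower = successor(nodes, (queried[-1] + 1) % RING)
    upper, first = None, queried[0]
    for u, v in zip(queried, queried[1:]):
        fingers = [finger(nodes, u, i) for i in range(M)]
        matches = [i for i, f in enumerate(fingers) if f == v]
        if matches and matches[-1] + 1 < M:
            # the lookup queried the closest preceding finger, so the
            # target must precede u's next-higher finger
            cand = finger(nodes, u, matches[-1] + 1)
            if upper is None or (cand - first) % RING < (upper - first) % RING:
                upper = cand
    return lower, upper

nodes = [10, 50, 200, 900, 4000, 30000]
print(estimation_range(nodes, [10, 50]))   # pins the target near node 200
```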

Now we discuss how to deal with dummy queries. Let denote the set of linkable queries in the lookup performed by , and denote the set of linkable non-dummy queries, . Based on whether is an empty set, can be calculated as:


denotes the entropy when all linkable queries are dummies. In this case, the linkable queries cannot provide any information about the target. However, the adversary can observe all (concurrent) malicious target nodes, and the target has a chance of being one of them. Therefore, we have:


When the set of linkable non-dummy queries is non-empty, a range estimation attack based on it can produce a minimal range for the target. In contrast, an estimation range calculated using any dummy query will be incorrect. Due to the use of anonymous paths, an individual dummy query is indistinguishable from any non-dummy query. Nevertheless, the adversary can use the timing and location relationships between queries to filter out some subsets of that contain dummy queries. In particular, any subset of queries that violates the following rules must contain at least one dummy query:

  • if is queried before , then must precede

  • if and are the first and last queried nodes in the subset, then any other query must be on the path of the virtual lookup from to

Note that the above approach cannot remove all subsets that contain dummy queries. Let denote all subsets of that pass the above filtering test, . Since all queries in are non-dummy, that set will pass the filtering test, i.e., . From the adversary's perspective, each element of is a possible candidate. Her best strategy is to assign a different probability to each element according to the pre-calculated probability distribution. We use two variables to characterize each candidate subset: the number of queries it contains, and the largest hop in the virtual lookup from its first query to its last query. (The largest hop is the largest ID difference between two consecutively queried nodes; it also implies the number of hops in the lookup.) These two characteristics are a close approximation of the adversary's observation.
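The first filtering rule can be sketched directly; this is our partial illustration (data layout and names are ours), checking only that nodes queried later lie further clockwise from the first queried node. The second rule, that every intermediate query must lie on the virtual lookup path from the first to the last query, would additionally require replaying the lookup and is omitted here.

```python
RING = 1 << 16   # toy ID space; a sketch, not the paper's parameters

def passes_filter(subset):
    """Rule 1 of the dummy-filtering test.  `subset` is a list of
    (time, node_id) pairs; a consistent (all non-dummy) subset must
    advance strictly clockwise in query-time order."""
    ordered = [n for _, n in sorted(subset)]
    first = ordered[0]
    dists = [(n - first) % RING for n in ordered]
    return all(a < b for a, b in zip(dists, dists[1:]))

# A clockwise-advancing subset passes; a back-tracking one is rejected,
# so it must contain at least one dummy query.
print(passes_filter([(0, 10), (1, 50), (2, 200)]))   # True
print(passes_filter([(0, 10), (1, 200), (2, 50)]))   # False
```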

Let denote an arbitrary node that is contained in any estimation range. Then:


Let denote the estimation range computed based on , . Let denote the location of in the estimation range , and denote the number of elements of a set. Then, we have:


Let denote the largest hop in the virtual lookup based on , and denote the probability that a set of queries with the largest hop in the virtual lookup being is . Then, we have:


is obtained by pre-simulations of the lookup.

Calculation of . There are three possible cases when there is no linkable query:

  • : there is no query observed by the adversary

  • : there is at least one (observed) query that is linkable to

  • : there is no query linkable to but at least one query is observed by the adversary

Let , , and denote the entropy of in the three cases, respectively. Then, we have:


In the first case, since no information is learnt from queries, the entropy is calculated as in Equation (10).

For the second case, although is disassociated with any observed queries, the adversary can group queries belonging to the same lookup based on whether they are linkable to a common relay . Furthermore, she can calculate estimation ranges for each concurrent lookup that contains queries linkable to , and consider all estimation ranges as possible candidates for the true estimation range of .

In particular, we let denote the set of non-dummy queries linkable to in the lookup performed by , and can be calculated as follows.


where denotes the entropy when there is at least one non-dummy query linkable to . If is malicious, the adversary can reduce the candidates of down to the set of observed malicious targets; otherwise, she needs to rely on the queries linkable to to infer . Therefore, we have:

Let denote the set of concurrent lookups that have at least one query linkable to , and denote the lookup performed by . Let denote the set of non-dummy queries linkable to in , , and let denote all subsets of queries (linked to ) in that pass the filtering test. Then, we have:


Since is unlinkable to any observed queries, each concurrent lookup is equally likely to be . We have:


is calculated in the same way as Equation (13).

In the last case, since all observed queries are disassociated with each other, the range estimation attack cannot be applied. Let denote the set of observed non-dummy queries in the lookup performed by . Similar to the above cases, we have:


Let denote the query closest to in . Then, the adversary can use ’s successor as the lower bound of the estimation range of and ’s predecessor as the upper bound, and assign probabilities to the nodes in the range according to a pre-calculated probability distribution of . Let denote the set of observed queries of all concurrent lookups, and denote the estimation range based on . Then, we have:

Based on the observations, the adversary is unable to tell which query is more likely to be . Therefore, we have:


IV-D Results and Comparisons

(a) Initiator anonymity.
(b) Target anonymity.
Fig. 9: Anonymity evaluation of Octopus. is the concurrent lookup rate.
(a) Initiator anonymity.
(b) Target anonymity.
Fig. 10: Anonymity comparison.

We developed a simulator for anonymity measurements in C++ with about 1.3 KLOC. The results are shown in Figure 10. With network size , concurrent lookup rate , malicious nodes, and 6 dummies, Octopus leaks only 0.57 bits of information about the initiator and 0.82 bits about the target. We compare Octopus with the baseline scheme Chord [1] and the state-of-the-art anonymous DHT lookups NISAN [7] and Torsk [8] (we do not explicitly compare Octopus with secure DHTs such as Halo [16], since they leak more information than Chord). We can see that in the same setting, NISAN and Torsk leak about 3.3 bits of information about the initiator, roughly 6 times more than Octopus. As for target anonymity, the information leak for NISAN and Torsk is 11.3 bits and 3.4 bits, which is 13 times and 4 times more than that of Octopus, respectively.

V Performance Evaluation

Schemes      Lookup Latency (sec)    Bandwidth Consumption (kbps)
             Mean      Median
Octopus      2.15      1.61          5.91      4.30
Chord [1]    1.35      0.35          0.29      0.28
Halo [16]    6.89      1.79          0.71      0.37
TABLE II: Performance comparison. The two bandwidth columns correspond to two settings of the time interval between two consecutive lookups.

V-A Lookup Latency

Lookup latency is one of the most important performance factors for DHT systems. We measure the lookup latency of Octopus on PlanetLab with 207 randomly selected nodes. We use the Boost C++ library (www.boost.org), mainly the UDP asynchronous read/write of Boost.Asio, to build the communication substrate. We let each node independently perform 2000 lookups using randomly picked lookup keys. For each lookup, we record the latency from the time of sending out the first query until the time of receiving the lookup result.

Fig. 11: Comparison of lookup latency on Planetlab.

For comparison, we use the same methodology to implement Chord [1] and Halo [16], and measure their lookup latencies in the same network environment. We choose Halo because it is one of the state-of-the-art secure DHT lookup schemes and is also based on the Chord overlay. For Halo, we use degree-2 recursion with the redundancy parameter suggested in their paper [16] to provide a fairly strong security guarantee. The experimental results are presented in Figure 11 and Table II. While the lookup latency of Octopus is somewhat longer than that of Chord, due to the extra transmissions required for security and anonymity, it is smaller than that of Halo, which provides only security guarantees. Octopus outperforms Halo because it does not rely on redundant lookups, whereas in Halo a lookup is not completed until all redundant lookups' results are returned.

V-B Bandwidth Overhead

We also compare Octopus with Chord and Halo in terms of bandwidth cost. We adopt the same configuration as described in Section III-C1 for each of the DHT lookups, and consider an overlay network with 1,000,000 nodes. (We use the following parameters to estimate the bandwidth overhead: each routing state item, such as a finger or successor, is 10 bytes; we use an ECDSA signature (40 bytes) for authentication with a 4-byte timestamp, and AES-128 for onion encryption; each certificate is 50 bytes, including the node's IP address (6 bytes), the node's public key (20 bytes), expiry time (4 bytes), and the CA's signature (20 bytes).) We can see from Table II that Octopus does incur higher communication overhead than Chord and Halo in order to achieve high levels of anonymity; however, the bandwidth cost of Octopus is still reasonable (only a few kbps), which is affordable even for low-end clients with limited bandwidth.
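As a back-of-envelope check of the message sizes quoted above (a sketch; the field names are ours), the certificate fields sum to the stated 50 bytes, and each authenticated message carries an ECDSA signature plus a timestamp:

```python
# Certificate fields from the parenthetical above (sizes in bytes).
CERT_FIELDS = {"ip_address": 6, "public_key": 20, "expiry": 4, "ca_signature": 20}
CERT_BYTES = sum(CERT_FIELDS.values())

# Per-message authentication overhead: ECDSA signature + timestamp.
AUTH_BYTES = 40 + 4

assert CERT_BYTES == 50   # matches the 50-byte certificate size quoted above
print(CERT_BYTES, AUTH_BYTES)
```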

VI Related Work

VI-A Secure DHT Lookups

A major school of proposals for securing DHT lookups uses redundancy. Castro et al. [15] proposed a robust DHT system that relies on redundant lookups. Each key is replicated among several replica nodes (typically the neighbors of the key owner). Instead of performing a single lookup, the initiator performs multiple redundant lookups towards all the replicas. The lookup result is correct as long as one of the redundant lookups is not biased. The limitation of this approach is that the redundant lookups tend to converge to a small number of nodes close to the target, and a single malicious node in this set can infect many redundant lookups. Much subsequent work (such as [32, 6, 16]) focuses on disentangling the redundant lookup paths to provide better security. Cyclone [32] partitions nodes into Chord sub-rings based on similarity of node IDs, and routes redundant lookups through the sub-rings independently. Salsa [6] uses a new virtual-tree-based DHT structure, in which any two nodes share few global contacts, so that redundant messages proceed along different paths. Halo [16] does not change the underlying DHT structure; it uses the original Chord overlay and performs redundant searches towards knuckles – nodes that have fingers pointing to the target.

While effective in ensuring security, these redundant-lookup-based approaches are incapable of preserving anonymity, since redundant transmission creates opportunities for an adversary to gain information about the lookup initiator and/or target [13]. ShadowWalker [9] embeds redundancy into the DHT itself and uses shadows (nodes in redundant topologies) to verify each step of a lookup. Unfortunately, Schuchard et al. [33] found that ShadowWalker is vulnerable to an eclipse attack, in which the entire set of shadows of a certain node is compromised, leading to other nodes' routing states being infected. They also showed that increasing the dimension of the redundant topologies can mitigate the eclipse attack, but the resulting performance cost is prohibitively high.

Another major school of research on secure DHT lookups leverages cryptographic techniques. Myrmic [25] uses an online certificate authority to sign each node's routing state. The major limitation of Myrmic is that for each node join or churn event, the central authority has to update the certificates of all affected nodes. Young et al. [17] proposed two schemes, RCP-I and RCP-II, that use threshold signatures and distributed key generation to avoid reliance on a central authority. In their schemes, the verification information on each message is collaboratively generated by a threshold number of nodes, rather than by a central authority.

None of these secure DHT lookup schemes is designed to preserve anonymity: lookup keys are revealed during queries, and the identities of lookup initiators are easily exposed because initiators contact intermediate nodes directly.

VI-B Secure and Anonymous DHT Lookups

NISAN [7] is among the first to try to provide both security and anonymity guarantees in DHT systems. For security purposes, each queried node is required to provide its entire fingertable, so that the lookup initiator can apply bound checking to limit manipulation of fingertables. NISAN also uses redundancy to enhance security: the authors proposed a greedy search mechanism that queries multiple nodes at each step and combines the query results to tolerate misinformation. On the other hand, acquiring the entire fingertable also helps protect the anonymity of lookup targets, since lookup keys are not revealed to intermediate nodes. Nevertheless, NISAN can only provide very limited anonymity protection: Wang et al. [14] showed that a passive adversary can narrow the range of a lookup target down to a small number of nodes by analyzing the locations of observed queries (the range estimation attack).

Torsk [8] is a DHT-based anonymous communication system. A key component of Torsk is a proxy-based anonymous DHT lookup. The idea is that a lookup initiator performs a random walk on the overlay to find a random node (called buddy), and requests the buddy to perform the lookup on its behalf. Because Torsk uses Myrmic [25] to secure lookups, it has the same limitation as Myrmic – requiring an online central authority to sign each node’s routing state. In addition, as we analyzed in Section IV-A, a single proxy structure is insufficient to provide high levels of anonymity: the information learnt by the range estimation attack can be used to launch relay exhaustion attack [14].

Recently, Backes et al. [18] proposed to leverage oblivious transfer to add query privacy to RCP-I and RCP-II [17]. However, for similar reasons as NISAN, this scheme is vulnerable to the range estimation attack, since the initiator needs to contact multiple intermediate nodes at each step of the lookup.

Freenet [34] is a deployed P2P system, which allows people to upload sensitive files to the overlay and employs data duplication strategies to make them hard to block. Freenet aims to preserve the publishers’ privacy, but does not provide anonymity in lookups. Vasserman et al. [35] create a membership concealing overlay network (MCON) for unobservable communication. They aim to make it difficult for either an insider or outsider adversary to learn the set of participating members. This is similar to previous darknet designs [36]. However, MCON and darknets are not designed to provide anonymity.

VII Conclusion

In this paper, we presented Octopus, a new DHT lookup that provides strong guarantees for both anonymity and security. Octopus ensures security and anonymity via three fundamental techniques. First, Octopus constructs an anonymous path to send lookup query messages while hiding the initiator. Second, it splits the individual queries used in a lookup over multiple paths, and introduces dummy queries, to make it difficult for an adversary to learn the lookup target. Third, it uses secret security checks to identify and remove malicious nodes. We developed an event-based simulator, and showed that malicious nodes can be quickly identified with high accuracy. In addition, via probabilistic modeling and simulation, we showed that Octopus can achieve near-optimal anonymity for both the lookup initiator and target. We also evaluated the efficiency of Octopus on Planetlab, and showed that Octopus has reasonable lookup latency and communication overhead.

VIII Acknowledgements

We would like to thank Prateek Mittal for helpful discussion throughout the project. We also thank George Danezis for invaluable feedback on an earlier draft of the paper. In addition, we are grateful to Prateek Mittal for sharing his code for the event-based simulator and the DHT implementation on PlanetLab. Finally, we thank the anonymous reviewers for their helpful comments. This research was supported in part by NSF CNS 09-53655.


  • [1] I. Stoica, R. Morris, D. Liben-Nowell, D. R. Karger, M. F. Kaashoek, F. Dabek, and H. Balakrishnan, “Chord: A scalable peer-to-peer lookup protocol for internet applications,” IEEE/ACM Trans. Netw., vol. 11, no. 1, pp. 17–32, 2003.
  • [2] P. Maymounkov and D. Mazieres, “Kademlia: A peer-to-peer information system based on the xor metric,” IPTPS, 2001.
  • [3] M. J. Freedman, E. Freudenthal, and D. Mazières, “Democratizing content publication with coral,” in Proc. of NSDI’04, 2004.
  • [4] A. Rowstron and P. Druschel, “Storage management and caching in past, a large-scale, persistent peer-to-peer storage utility,” in Proc. SOSP’01, 2001.
  • [5] A. Mislove, G. Oberoi, A. Post, C. Reis, P. Druschel, and D. S. Wallach, “Ap3: Cooperative, decentralized anonymous communication,” ACM SIGOPS European Workshop, 2004.
  • [6] A. Nambiar and M. Wright, “Salsa: A structured approach to large-scale anonymity,” ACM CCS, 2006.
  • [7] A. Panchenko, S. Richter, and A. Rache, “Nisan: Network information service for anonymization networks,” ACM CCS, November 2009.
  • [8] J. McLachlan, A. Tran, N. Hopper, and Y. Kim, “Scalable onion routing with torsk,” ACM CCS, November 2009.
  • [9] P. Mittal and N. Borisov, “Shadowwalker: Peer-to-peer anonymous communication using redundant structured topologies,” ACM CCS, November 2009.
  • [10] A. Shakimov, A. Varshavsky, L. P. Cox, and R. Cáceres, “Privacy, cost, and availability tradeoffs in decentralized osns,” ser. WOSN ’09, 2009, pp. 13–18.
  • [11] L. A. Cutillo, R. Molva, and T. Strufe, “Privacy preserving social networking through decentralization,” Proceedings of the Sixth international conference on Wireless On-Demand Network Systems and Services, 2009.
  • [12] D. Wallach, “A survey of peer-to-peer security issues,” in Software Security — Theories and Systems, ser. LNCS, 2003, vol. 2609, pp. 253–258.
  • [13] P. Mittal and N. Borisov, “Information leaks in structured peer-to-peer anonymous communication systems,” ACM CCS, 2008.
  • [14] Q. Wang, P. Mittal, and N. Borisov, “In search of an anonymous and secure lookup: Attacks on structured peer-to-peer anonymous communication systems,” ACM CCS, 2010.
  • [15] M. Castro, P. Druschel, A. Ganesh, A. Rowstron, and D. S. Wallach, “Secure routing for structured peer-to-peer overlay networks,” in OSDI, December 2002.
  • [16] A. Kapadia and N. Triandopoulos, “Halo: High-assurance locate for distributed hash tables,” in NDSS, February 2008.
  • [17] M. Young, A. Kate, I. Goldberg, and M. Karsten, “Practical robust communication in dhts tolerating a byzantine adversary,” Proc. ICDCS’10, pp. 263–272, 2010.
  • [18] M. Backes, I. Goldberg, A. Kate, and T. Toft, “Adding query privacy to robust dhts,” Tech. rep., arXiv:1107.1072v1 [cs.CR], July 2011.
  • [19] J. Douceur, “The sybil attack,” in Peer-to-Peer Systems, 2002, vol. 2429, pp. 251–260.
  • [20] N. Borisov, “Computational puzzles as sybil defenses,” in Peer-to-Peer Computing, 2006.
  • [21] G. Danezis and P. Mittal, “Sybilinfer: Detecting sybil nodes using social networks,” in NDSS, 2009.
  • [22] H. Yu, P. B. Gibbons, M. Kaminsky, and F. Xiao, “Sybillimit: A near-optimal social network defense against sybil attacks,” in IEEE Symposium on Security and Privacy (Oakland), 2008.
  • [23] A. Pfitzmann and M. Hansen, “A terminology for talking about privacy by data minimization: Anonymity, unlinkability, undetectability, unobservability, pseudonymity, and identity management,” Aug. 2010, v0.34.
  • [24] A. Rowstron and P. Druschel, “Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems,” IFIP/ACM International Conference on Distributed Systems Platforms (Middleware), pp. 329–350.
  • [25] P. Wang, I. Osipkov, N. Hopper, and Y. Kim, “Myrmic: Secure and robust dht routing,” Tech. rep., Digital Technology Center, University of Minnesota at Twin Cities, 2007.
  • [26] J. L. Muñoz, J. Forne, O. Esparza, and M. Soriano, “Certificate revocation system implementation based on the merkle hash tree,” Inte. Journ. of Inf. Sec., vol. 2, no. 2, 2003.
  • [27] M. C. Morogan and S. Muftic, “Certificate revocation system based on peer-to-peer crl distribution,” in Proc. of the DMS’03 Conference, 2003.
  • [28] C. Y. Liau, S. Bressan, K.-L. Tan, C. Yee, L. Stéphane, and B. K. lee Tan, “Efficient certificate revocation: A p2p approach,” in HICSS’05, 2005.
  • [29] P. Syverson, G. Tsudik, M. Reed, and C. Landwehr, “Towards an analysis of onion routing security,” International Workshop on Design Issues in Anonymity and Unobservability, vol. 2009, LNCS, Springer, July 2000.
  • [30] F. Dabek, J. Li, E. Sit, J. Robertson, M. F. Kaashoek, and R. Morris, “Designing a dht for low latency and high throughput,” NSDI’04, 2004.
  • [31] Q. Wang and N. Borisov, “Octopus: A secure and anonymous dht lookup,” uiuc-cs technical report, 2011. http://hatswitch.org/~qwang26/papers/octopus-tr.pdf.
  • [32] M. S. Artigas, P. G. Lopez, J. P. Ahullo, and A. F. G. Skarmeta, “Cyclone: A novel design schema for hierarchical dhts,” In P2P ’05, pp. 49–56, 2005.
  • [33] M. Schuchard, A. Dean, V. Heorhiadi, N. Hopper, and Y. Kim, “Balancing the shadows,” ACM WPES, 2010.
  • [34] I. Clarke, T. W. Hong, S. G. Miller, O. Sandberg, and B. Wiley, “Protecting free expression online with Freenet,” IEEE Internet Computing, vol. 6, no. 1, pp. 40–49, 2002.
  • [35] E. Y. Vasserman, R. Jansen, J. Tyra, N. Hopper, and Y. Kim, “Membership-concealing overlay networks,” in ACM CCS, 2009.
  • [36] WASTE. http://waste.sourceforge.net/.
  • [37] A. Serjantov and P. Sewell, “Passive attack analysis for connection-based anonymity systems,” in ESORICS’03, October 2003.
  • [38] X. Wang, S. Chen, and S. Jajodia, “Network flow watermarking attack on low-latency anonymous communication systems,” in IEEE Security and Privacy, May 2007.
  • [39] G. Danezis, “The traffic analysis of continuous-time mixes,” in Privacy Enhancing Technologies workshop, May 2004.
  • [40] A. Acharya and J. Saltz, “A study of internet round-trip delay,” Technical Report (CS-TR 3738), University of Maryland, 1996.
  • [41] R. Dingledine, N. Mathewson, and P. Syverson, “Tor: The second-generation onion router,” in USENIX Security Symposium, August 2004.
  • [42] N. Borisov, G. Danezis, P. Mittal, and P. Tabriz, “Denial of service or denial of security?” ACM CCS, 2007.
  • [43] R. Dingledine, M. J. Freedman, D. Hopwood, and D. Molnar, “A reputation system to increase mix-net reliability,” in the 4th International Workshop on Information Hiding (IHW), 2001.

-A Random Walk for Relay Selection

As shown in Figure 12, the random walk originates from the initiator and is composed of two phases, with nodes visited in each phase (). The motivation for dividing the random walk into two phases is to mitigate the timing analysis attack, wherein malicious nodes on the same anonymous path can be associated by analyzing the timings of packets in the traffic going through them. In the first phase, the initiator picks a random finger from its fingertable and requests its fingertable, from which the second hop is selected. The initiator then sends an onion-encrypted query to the second hop, using the first as a forwarding node, and selects the third hop at random from the fingertable returned by the second hop. This process is repeated recursively for hops. To provide integrity checking and source authentication, each returned fingertable is signed by its owner, with the owner's certificate attached.

Fig. 12: Two-phase random walk for selecting relays.

The second phase of the random walk is conducted by the last node visited in the first phase. In particular, the initiator sends a random seed through the anonymous path established in the first phase, and the seed guides how nodes are picked “randomly”. For example, we can let the node apply a hash function to the seed times and map the hash value to ( is the size of a fingertable), using the result as an index to select the -th hop. The second phase is performed in the same way as the first, and the last two hops are chosen as a pair of relays to be used in lookups or attacker identification. To prevent a malicious node from biasing the random walk, it is required to keep all received fingertables, signatures, and certificates, and to send them back to the initiator through the anonymous path of the first phase at the end of the random walk. This information allows the initiator to verify whether the random walk was performed honestly. If the verification fails or the initiator does not receive the results by a pre-set deadline, it chooses another node from the fingertable to restart the second phase of the random walk.
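The seed-guided hop selection described above can be sketched as iterated hashing. This is our illustration: the hash function (SHA-256) and default fingertable size are assumptions, not specified in the text; the key property is that any party holding the seed and the returned fingertables can recompute the walk deterministically.

```python
import hashlib

FINGERTABLE_SIZE = 160   # assumed fingertable size m; illustrative only

def hop_index(seed: bytes, i: int, m: int = FINGERTABLE_SIZE) -> int:
    """Index of the i-th hop of the second phase: hash the seed i times
    and reduce the result modulo m, as described above."""
    digest = seed
    for _ in range(i):
        digest = hashlib.sha256(digest).digest()
    return int.from_bytes(digest, "big") % m

# Deterministic: the initiator can later verify the node's choices.
print(hop_index(b"octopus-seed", 3, 20))
```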

-B Other Attacks, Countermeasures, and Discussion

-B1 Selective Denial-of-Service Attack

A known threat to anonymous communication systems such as Tor [41], the selective denial-of-service (DoS) attack [42] can increase the chance of compromising anonymous circuits by selectively dropping packets to tear down circuits that are infeasible to compromise. The selective DoS attack is also applicable to Octopus. For example, to create more opportunities to observe lookup initiators, malicious relays can selectively drop lookup queries or replies whenever the relay directly connected to the initiator is not malicious. Nevertheless, under our framework of attacker identification, Octopus can effectively constrain the selective DoS attack by identifying malicious droppers.

We leverage the reputation-based reliability enhancement strategy for mix networks [43] to identify malicious dropper nodes. The idea is as follows. Each message is assigned a deadline by which it must be sent to the next hop (in either direction along the anonymous path). A relay first tries to send the message to the next hop directly. If the next hop is alive and honest, it sends a signed receipt back. If the relay has not received a receipt by a specified period before the deadline, it requests a pre-defined set of witnesses (e.g., its successors and predecessors) to independently try to send the message to the next hop and obtain a receipt. If a witness gets a valid receipt, it forwards it to the relay; otherwise, it sends a signed statement attesting to the delivery failure. Before sending any query, the initiator first checks whether the first pair of relays are alive. During the lookup, if the initiator does not receive the -th query reply by the pre-set deadline, it queries the successors and predecessors of the relays (through the partial anonymous paths) about their aliveness, which can be inferred from their recent stabilization activities. If both relays are alive, the initiator reports the failure to the CA with the identities of all the relays. The CA then requests the relays to provide either receipts or statements and, based on the provided information, identifies the malicious dropper node.

(Selective) DoS attacks are also possible during random walks. For example, a malicious hop of the random walk could simply drop packets to prevent the random walk from completing, or a malicious endpoint could refuse to return the random walk result to the initiator if the result contains only honest nodes. We can use the same strategy as above to identify such malicious nodes.

We use the event-based simulator (described in Section III-C1) to evaluate the defense mechanism for the selective DoS attack. The simulation results are shown in Figure 13. We can see that the malicious dropper nodes can be rapidly discovered by our identification mechanism.

Fig. 13: Selective DoS attack

-B2 Relay Exhaustion Attack

The relay exhaustion attack [14] is a selective-DoS-flavored attack used to compromise DHT-based anonymity systems that lack target anonymity protection. In such an attack, an adversary can utilize the information leaked about the lookup target to predict the next relay of a circuit and launch a flooding-based attack to prevent the circuit from being extended to that relay. While Octopus also uses relays in the lookup, it is resistant to the relay exhaustion attack, since little information about the target is leaked in Octopus (as shown in Section IV-C).

-B3 End-to-End Timing Analysis Attack

The end-to-end timing analysis attack associates two malicious relays (e.g., and in Figure 8) on the same anonymous path by analyzing the timings of packets in the traffic going through them. Since in Octopus only one message is transmitted through each anonymous path in either the forward or backward direction, timing analysis attacks that require a large number of observed packets (such as packet counting [37] or packet timing correlation [38, 39]) are inapplicable to Octopus.

For Octopus, the only strategy to associate the two relays is to use the similarity of upstream and downstream latencies: in a noise-free network environment, the transmission latency from the first relay to the middle relay should be the same as that from the middle relay to the second relay. However, this similarity is limited by communication latency jitter and can be easily destroyed by adding a short random delay at the middle relay. We simulate this attack using the King dataset [30]: the adversary tries to find the pair of relays with the smallest difference between upstream and downstream latencies, while a random delay drawn from , where is the maximum delay, is added at the middle relay. We choose a typical network setting as used in related work [14, 13, 9, 7]: there are nodes in the network, 20% of them are malicious, and the concurrent lookup rate is between 0.5% and 5%; the jitter window for a pair of communicating peers is set to 10 ms or 10% of the average transmission latency, whichever is smaller (according to [40]). Table III presents the simulation results of this attack. When the maximum delay is 100 ms and , the error rate is as high as 99.91%; in this case the information leak is only bit. This means the adversary can hardly learn extra information by launching the timing analysis attack.
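The effect of the added delay can be reproduced in a toy Monte-Carlo sketch. This is our illustration, not the paper's King-dataset simulation: pair 0 is the true relay pair and shares a base latency, the decoys do not, and all parameters (latency ranges, jitter, trial counts) are assumptions chosen only to show the trend.

```python
import random

def timing_attack_error_rate(n_pairs=20, jitter=0.001, max_delay=0.1,
                             trials=2000, seed=7):
    """Fraction of trials in which the adversary's min-|up - down|
    guess picks a decoy instead of the true pair.  The middle relay
    adds Uniform(0, max_delay), swamping the natural jitter."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        diffs = []
        for k in range(n_pairs):
            base_up = rng.uniform(0.02, 0.2)           # seconds
            base_down = base_up if k == 0 else rng.uniform(0.02, 0.2)
            up = base_up + rng.uniform(-jitter, jitter)
            down = base_down + rng.uniform(-jitter, jitter) \
                   + rng.uniform(0, max_delay)          # defensive delay
            diffs.append((abs(up - down), k))
        if min(diffs)[1] != 0:                          # guessed a decoy
            errors += 1
    return errors / trials

# Without the delay the attack often succeeds; with a 100 ms maximum
# delay the guess becomes nearly random.
print(timing_attack_error_rate(max_delay=0.0))
print(timing_attack_error_rate(max_delay=0.1))
```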

Max. delay    Error rate
100 ms        99.35%    99.50%    99.91%
200 ms        99.60%    99.82%    99.95%
TABLE III: Error rate of the end-to-end timing analysis attack; the columns correspond to different concurrent lookup rates.