
# A New Approach for Distributed Hypothesis Testing with Extensions to Byzantine-Resilience

## Abstract

We study a setting where a group of agents, each receiving partially informative private observations, seek to collaboratively learn the true state (among a set of hypotheses) that explains their joint observation profiles over time. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches, in the sense that it does not employ any form of "belief-averaging". Specifically, every agent maintains a local belief (on each hypothesis) that is updated in a Bayesian manner without any network influence, and an actual belief that is updated (up to normalization) as the minimum of its own local belief and the actual beliefs of its neighbors. Under minimal requirements on the signal structures of the agents and the underlying communication graph, we establish consistency of the proposed belief update rule, i.e., we show that the actual beliefs of the agents asymptotically concentrate on the true state almost surely. As one of the key benefits of our approach, we show that our learning rule can be extended to scenarios that capture misbehavior on the part of certain agents in the network, modeled via the Byzantine adversary model. In particular, we prove that each non-adversarial agent can asymptotically learn the true state of the world almost surely, under appropriate conditions on the observation model and the network topology.

## 1 Introduction

Various distributed learning problems arising in social networks (such as opinion formation and spreading), and in engineering systems (such as target recognition by a group of aerial robots) can be studied under the formal framework of distributed hypothesis testing. Within this framework, a group of agents repeatedly observe certain private signals, and aim to infer the “true state of the world” that explains their joint observations. While much of the earlier work on this topic assumed the existence of a centralized fusion center for performing computational tasks [1, 2], more recent endeavors focus on a distributed setting where interactions among agents are captured by a communication graph [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Our work here falls in the latter class. A typical belief update rule in the distributed setting combines a local Bayesian update with a consensus-based opinion pooling of neighboring beliefs. Specifically, linear opinion pooling is studied in [3, 4, 5], whereas the log-linear form of belief aggregation is studied in the context of distributed hypothesis testing in [6, 7, 8, 9, 10], and distributed parameter estimation in [11, 12]. Notably, exponential convergence rates are achieved in [4, 6, 7, 8, 9], while a finite-time analysis is presented in [10]. Extensions to time-varying graphs have also been studied in [5, 6, 7].

In [7, Section III], the authors explain that the commonly studied linear and log-linear forms of belief aggregation are specific instances of a more general class of opinion pooling known as g-Quasi-Linear Opinion pools (g-QLOP), introduced in [13]. The main contribution of our paper is the development of a novel belief update rule that deviates fundamentally from the broad family of g-QLOP learning rules discussed above. Specifically, the learning algorithm that we propose in Section 3.1 does not rely on any linear consensus-based belief aggregation protocol. Instead, each agent maintains two sets of beliefs: a local belief that is updated in a Bayesian manner based on the private observations (without neighbor interactions), and an actual belief that is updated (up to normalization) as the minimum of the agent’s own local belief and the actual beliefs of its neighbors. In Section 6, we establish that under minimal requirements on the agents’ signal structures and the communication graph, the actual beliefs of the agents asymptotically concentrate on the true state almost surely. In Section 5, we argue that our approach works under graph-theoretic conditions that are milder than the standard assumption of strong-connectivity.

In addition to the above contribution to the distributed hypothesis testing problem, we also show in this paper that our approach is capable of handling agents that do not follow the prescribed learning algorithm. Indeed, despite the wealth of literature on distributed inference, there is limited understanding of the impact of misbehaving agents for the problem under consideration. Such agents may represent stubborn individuals, ideological extremists in the context of a social network, or model faults (either benign or malicious) in a networked control system. In the presence of such misbehaving entities, how should the remaining agents process their private observations and the beliefs of their neighbors to eventually learn the truth? To answer this question, we model misbehaving agents via the classical Byzantine adversary model, and develop a provably correct, resilient version of our proposed learning rule in Section 3.2. The only related work (that we are aware of) in this regard is reported in [9]. As we discuss in Section 3.2, our proposed approach is significantly less computationally intensive relative to those in [9]. We identify conditions on the observation model and the network structure that guarantee applicability of our Byzantine-resilient learning rule, and argue (in Section 5) that such conditions can be checked in polynomial time.

## 2 Model and Problem Formulation

Network Model: We consider a group of $n$ agents interacting over a time-invariant, directed communication graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. An edge $(i,j)\in\mathcal{E}$ indicates that agent $i$ can directly transmit information to agent $j$. If $(i,j)\in\mathcal{E}$, then agent $i$ will be called a neighbor of agent $j$, and agent $j$ will be called an out-neighbor of agent $i$. The set of all neighbors of agent $i$ will be denoted $\mathcal{N}_i$. Given two disjoint sets $\mathcal{C}_1,\mathcal{C}_2\subseteq\mathcal{V}$, we say that $\mathcal{C}_2$ is reachable from $\mathcal{C}_1$ if for every $i\in\mathcal{C}_2$, there exists a directed path from some $j\in\mathcal{C}_1$ to agent $i$ (note that $j$ will in general be a function of $i$). We will use $|\mathcal{C}|$ to denote the cardinality of a set $\mathcal{C}$.

Observation Model: Let $\Theta=\{\theta_1,\theta_2,\ldots,\theta_m\}$ denote $m$ possible states of the world; each $\theta_p\in\Theta$ will be called a hypothesis. Let $\mathbb{N}$ and $\mathbb{N}_+$ denote the set of non-negative integers and positive integers, respectively. Then at each time-step $t\in\mathbb{N}_+$, every agent $i\in\mathcal{V}$ privately observes a signal $s_{i,t}\in\mathcal{S}_i$, where $\mathcal{S}_i$ denotes the signal space of agent $i$. The joint observation profile so generated across the network is denoted $s_t=(s_{1,t},\ldots,s_{n,t})$, where $s_t\in\mathcal{S}$, and $\mathcal{S}=\mathcal{S}_1\times\cdots\times\mathcal{S}_n$. The signal $s_t$ is generated based on a conditional likelihood function $l(\cdot|\theta^\star)$, governed by the true state of the world $\theta^\star\in\Theta$. Let $l_i(\cdot|\theta^\star)$ denote the $i$-th marginal of $l(\cdot|\theta^\star)$. The signal structure of each agent $i\in\mathcal{V}$ is then characterized by a family of parameterized marginals $\{l_i(\cdot|\theta):\theta\in\Theta\}$.1

We make the following standard assumptions [3, 4, 5, 6, 7, 8, 9, 10]: (i) The signal space of each agent $i$, namely $\mathcal{S}_i$, is finite. (ii) Each agent $i$ has knowledge of its local likelihood functions $\{l_i(\cdot|\theta_p)\}_{p=1}^{m}$, and it holds that $l_i(w_i|\theta)>0$, $\forall w_i\in\mathcal{S}_i$, and $\forall\theta\in\Theta$. (iii) The observation sequence of each agent is described by an i.i.d. random process over time; however, at any given time-step, the observations of different agents may potentially be correlated. (iv) There exists a fixed true state of the world $\theta^\star\in\Theta$ (unknown to the agents) that generates the observations of all the agents.2 Finally, we define a probability triple $(\Omega,\mathcal{F},\mathbb{P}^{\theta^\star})$, where $\Omega\triangleq\{\omega:\omega=(s_1,s_2,\ldots),\ s_t\in\mathcal{S},\ t\in\mathbb{N}_+\}$, $\mathcal{F}$ is the $\sigma$-algebra generated by the observation profiles, and $\mathbb{P}^{\theta^\star}$ is the probability measure induced by sample paths in $\Omega$. Specifically, $\mathbb{E}^{\theta^\star}[\cdot]$ will denote the expectation operator associated with the measure $\mathbb{P}^{\theta^\star}$. For the sake of brevity, we will say that an event occurs almost surely to mean that it occurs almost surely w.r.t. the probability measure $\mathbb{P}^{\theta^\star}$.

Given the above setup, the goal of each agent in the network is to discern the true state of the world $\theta^\star$. The challenge associated with such a task stems from the fact that the private signal structure of any given agent is in general only partially informative. To make this notion precise, define $\Theta^{\theta^\star}_i\triangleq\{\theta\in\Theta : D(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta))=0\}$. In words, $\Theta^{\theta^\star}_i$ represents the set of hypotheses that are observationally equivalent to the true state from the perspective of agent $i$. In general, for any agent $i\in\mathcal{V}$, we may have $\Theta^{\theta^\star}_i\supsetneq\{\theta^\star\}$, necessitating collaboration among agents. While inter-agent collaboration is implicitly assumed in the distributed hypothesis testing literature, in this paper we will also allow for potential misbehavior on the part of certain agents in the network, modeled as follows.

Adversary Model: We assume that a certain fraction of the agents are adversarial, and model their behavior based on the Byzantine fault model [14]. In particular, Byzantine agents possess complete knowledge of the observation model, the network model, the algorithms being used, the information being exchanged, and the true state of the world. Leveraging such information, adversarial agents can behave arbitrarily and in a coordinated manner, and can in particular, send incorrect, potentially inconsistent information to their out-neighbors. In terms of their distribution in the network, we will consider an $f$-local adversarial model, i.e., we assume that there are at most $f$ adversaries in the neighborhood of any non-adversarial agent.3 Finally, we emphasize that the non-adversarial agents are unaware of the identities of the adversaries in their neighborhood. As is fairly standard in the distributed fault-tolerant literature [15, 16, 17, 18, 19, 20], we only assume that non-adversarial agents know the upper bound $f$ on the number of adversaries in their neighborhood. The adversarial set will be denoted by $\mathcal{A}\subset\mathcal{V}$, and the remaining agents $\mathcal{R}\triangleq\mathcal{V}\setminus\mathcal{A}$ will be called the regular agents.

Our objective in this paper will be to design a distributed learning rule that allows each regular agent to identify the true state of the world almost surely, despite (i) the partially informative signal structures of the agents, and (ii) the actions of any $f$-local Byzantine adversarial set. To this end, we introduce the following notion of source agents.

###### Definition 1.

(Source agents) An agent $i\in\mathcal{V}$ is said to be a source agent for a pair of distinct hypotheses $\theta_p,\theta_q\in\Theta$, if $D(l_i(\cdot|\theta_p)\,\|\,l_i(\cdot|\theta_q))>0$, where $D(l_i(\cdot|\theta_p)\,\|\,l_i(\cdot|\theta_q))$ represents the KL-divergence between the distributions $l_i(\cdot|\theta_p)$ and $l_i(\cdot|\theta_q)$, and is given by:

$$D\big(l_i(\cdot|\theta_p)\,\|\,l_i(\cdot|\theta_q)\big)=\sum_{w_i\in\mathcal{S}_i} l_i(w_i|\theta_p)\log\frac{l_i(w_i|\theta_p)}{l_i(w_i|\theta_q)}. \tag{1}$$

The set of all source agents for the pair $(\theta_p,\theta_q)$ is denoted by $\mathcal{S}(\theta_p,\theta_q)$.4

In words, a source agent for a pair $(\theta_p,\theta_q)$ is an agent that can distinguish between the pair of hypotheses based on its private signal structure. In our developments, we will require the following two definitions.
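The source-agent test in Definition 1 reduces to a finite sum over the signal space. A minimal sketch (the likelihood tables below are hypothetical; hypotheses are indexed by integers):

```python
import math

def kl_divergence(p, q):
    # KL divergence D(p || q) between two finite distributions, as in Eq. (1);
    # terms with p_w = 0 contribute 0 by convention.
    return sum(pw * math.log(pw / qw) for pw, qw in zip(p, q) if pw > 0)

def is_source(marginals_i, theta_p, theta_q):
    # Agent i is a source for (theta_p, theta_q) iff its marginals under the
    # two hypotheses differ, i.e., the KL divergence is strictly positive.
    return kl_divergence(marginals_i[theta_p], marginals_i[theta_q]) > 0
```

For instance, an agent whose marginals coincide under two hypotheses cannot serve as a source for that pair, no matter how many signals it observes.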

###### Definition 2.

($r$-reachable set) [16] For a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, a set $\mathcal{C}\subseteq\mathcal{V}$, and an integer $r\in\mathbb{N}_+$, $\mathcal{C}$ is an $r$-reachable set if there exists an $i\in\mathcal{C}$ such that $|\mathcal{N}_i\setminus\mathcal{C}|\geq r$.

###### Definition 3.

(strongly $r$-robust graph w.r.t. $\mathcal{S}(\theta_p,\theta_q)$) For $r\in\mathbb{N}_+$ and $\theta_p,\theta_q\in\Theta$, a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is strongly $r$-robust w.r.t. the set of source agents $\mathcal{S}(\theta_p,\theta_q)$, if for every non-empty subset $\mathcal{C}\subseteq\mathcal{V}\setminus\mathcal{S}(\theta_p,\theta_q)$, $\mathcal{C}$ is $r$-reachable.

## 3 Proposed Learning Rules

### 3.1 A Novel Belief Update Rule

In this section, we propose a novel belief update rule and discuss the intuition behind it. To introduce the key ideas underlying our basic approach, we first consider a scenario where all agents are regular, i.e., $\mathcal{A}=\emptyset$. Every agent $i\in\mathcal{V}$ maintains and updates (at every time-step) two separate sets of belief vectors, namely, $\pi_{i,t}$ and $\mu_{i,t}$. Each of these vectors is a probability distribution over the hypothesis set $\Theta$. We will refer to $\pi_{i,t}$ and $\mu_{i,t}$ as the “local” belief vector (for reasons that will soon become obvious), and the “actual” belief vector, respectively, maintained by agent $i$. The goal of each agent $i\in\mathcal{V}$ will be to use its own private signals, and the information available from its neighbors, to update $\mu_{i,t}$ sequentially so that $\mu_{i,t}(\theta^\star)\to 1$ almost surely. To do so, for each $\theta\in\Theta$, and at each time-step $t+1$, agent $i$ first generates $\pi_{i,t+1}(\theta)$ via a local Bayesian update rule that incorporates the private observation $s_{i,t+1}$ using $\pi_{i,t}(\theta)$ as a prior. Having generated $\pi_{i,t+1}(\theta)$, agent $i$ updates $\mu_{i,t+1}(\theta)$ (up to normalization) by setting it to be the minimum of its locally generated belief $\pi_{i,t+1}(\theta)$, and the actual beliefs of its neighbors at the previous time-step. It then reports its actual belief vector $\mu_{i,t+1}$ to each of its out-neighbors.5 The belief vectors are initialized with strictly positive entries, i.e., $\pi_{i,0}(\theta)>0$ and $\mu_{i,0}(\theta)>0$ for all $\theta\in\Theta$. Subsequently, these vectors are updated at each time-step $t+1$ (where $t\in\mathbb{N}$) as follows:

• Step 1: Update of the local beliefs:

$$\pi_{i,t+1}(\theta)=\frac{l_i(s_{i,t+1}|\theta)\,\pi_{i,t}(\theta)}{\sum_{p=1}^{m} l_i(s_{i,t+1}|\theta_p)\,\pi_{i,t}(\theta_p)}. \tag{2}$$
• Step 2: Update of the actual beliefs:

$$\mu_{i,t+1}(\theta)=\frac{\min\big\{\{\mu_{j,t}(\theta)\}_{j\in\mathcal{N}_i},\,\pi_{i,t+1}(\theta)\big\}}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,t}(\theta_p)\}_{j\in\mathcal{N}_i},\,\pi_{i,t+1}(\theta_p)\big\}}. \tag{3}$$

Intuition behind the learning rule: Consider the set $\mathcal{S}(\theta^\star,\theta)$ of source agents who can differentiate between a certain false hypothesis $\theta\in\Theta\setminus\{\theta^\star\}$ and the true state $\theta^\star$. Suppose for now that this set is non-empty. We ask: how do the agents in the set $\mathcal{S}(\theta^\star,\theta)$ contribute to the process of collaborative learning? To answer this question, we note that the signal structures of such agents are rich enough for them to be able to eliminate $\theta$ on their own, i.e., without the support of their neighbors. Thus, the agents in $\mathcal{S}(\theta^\star,\theta)$ should contribute towards driving the actual beliefs of their out-neighbors (and eventually, of all the agents in the set $\mathcal{V}\setminus\mathcal{S}(\theta^\star,\theta)$) on the hypothesis $\theta$ to zero. To achieve the above objective, we are especially interested in devising a rule that ensures that the capability of the source agents to eliminate $\theta$ is not diminished due to neighbor interactions. As we shall see later, such a property will be particularly useful when certain agents in the network are adversarial. It is precisely these considerations that motivate us to employ (i) an auxiliary belief vector $\pi_{i,t}$ generated via local processing (i.e., without any network influence) of the private signals, and (ii) a min-rule of the form (3). Specifically, if $i\in\mathcal{S}(\theta^\star,\theta)$, then the sequence of local beliefs $\{\pi_{i,t}(\theta)\}$ will almost surely converge to $0$ based on the update rule (2). Hence, for a source agent $i$, $\pi_{i,t+1}(\theta)$ will play the key role of an external network-independent input in the min-rule (3). This in turn will trigger a process of belief reduction on the hypothesis $\theta$ originating at the source set $\mathcal{S}(\theta^\star,\theta)$, and eventually propagating via the proposed min-rule to each agent in the network reachable from such a source set. The above discussion will be made precise in Section 6.
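The two-step update (local Bayesian step, then entrywise minimum with the neighbors' beliefs) can be sketched for a single agent as follows; this is a minimal illustration under the assumption that beliefs are stored as arrays with one entry per hypothesis, not the paper's own code:

```python
import numpy as np

def bayesian_local_update(pi_i, likelihoods):
    # Eq. (2): Bayesian update of agent i's local belief, where
    # likelihoods[p] = l_i(s_{i,t+1} | theta_p) for the observed signal.
    unnorm = likelihoods * pi_i
    return unnorm / unnorm.sum()

def min_rule_update(pi_i_next, neighbor_mus):
    # Eq. (3): the actual belief is the normalized entrywise minimum of the
    # agent's own local belief and its neighbors' actual beliefs.
    stacked = np.vstack([pi_i_next] + neighbor_mus)  # shape (|N_i|+1, m)
    unnorm = stacked.min(axis=0)
    return unnorm / unnorm.sum()
```

Because the min-rule takes an entrywise minimum rather than an average, a source agent's vanishing local belief on a false hypothesis caps the aggregated belief regardless of how confident its neighbors are, which is exactly the propagation mechanism described above.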

###### Remark 1.

We emphasize that the proposed min-rule (3) does not employ any form of “belief-averaging”. This feature is in stark contrast with existing approaches to distributed hypothesis testing that rely either on linear opinion pooling [3, 4, 5], or log-linear opinion pooling [11, 12, 10, 6, 7, 8, 9]. As such, the lack of linearity in our belief update rule precludes (direct or indirect) adaptation of existing analysis techniques to suit our needs. Consequently, we develop a novel sample path based proof technique in Section 6 to establish consistency of the proposed learning rule. As one of the main outcomes of this analysis, we argue that our learning rule works under graph-theoretic conditions that are in general weaker than strong-connectivity (see also Section 5).

### 3.2 A Byzantine-Resilient Belief Update Rule

As pointed out in the Introduction, a key benefit of our approach is that it can be extended to account for the worst-case Byzantine adversarial model described in Section 2. A standard way to analyze the impact of such adversarial agents while designing resilient distributed consensus-based protocols (for applications in consensus [16, 15], optimization [17, 18], hypothesis testing [9], and multi-agent rendezvous [21]) is to construct an equivalent matrix representation of the linear update rule that involves only the regular agents [22]. In particular, this requires expressing the iterates of a regular agent as a convex combination of the iterates of its regular neighbors, based on appropriate filtering techniques, and under certain assumptions on the network structure. While this can indeed be achieved efficiently for scalar consensus problems, for problems requiring consensus on vectors (like the belief vectors in our setting), such an approach becomes computationally prohibitive [9]. To bypass such heavy computations, and yet accommodate Byzantine attacks, we now develop a resilient version of the learning rule introduced in Section 3.1, as follows. Each regular agent $i$ acts as follows at every time-step $t+1$ (where $t\in\mathbb{N}$).

• Step 1: Update of the local beliefs: The local belief $\pi_{i,t+1}$ is updated as before, based on (2).

• Step 2: Filtering extreme beliefs: If $|\mathcal{N}_i|\geq 2f+1$, then agent $i$ performs a filtering operation as follows. For each hypothesis $\theta$, it collects the actual beliefs $\{\mu_{j,t}(\theta)\}_{j\in\mathcal{N}_i}$ reported by its neighbors and sorts them from highest to lowest. It rejects the highest $f$ and the lowest $f$ of such beliefs (i.e., it throws away $2f$ beliefs in all). In other words, for each hypothesis, a regular agent retains only the moderate beliefs received from its neighbors.

• Step 3: Update of the actual beliefs: If $|\mathcal{N}_i|\geq 2f+1$, then agent $i$ updates its actual beliefs as follows. Let the set of neighbors whose beliefs on $\theta$ are not rejected by agent $i$ (based on the previous filtering step) be denoted by $\mathcal{M}^{\theta}_{i,t}$. The actual belief $\mu_{i,t+1}(\theta)$ is then updated as follows:

$$\mu_{i,t+1}(\theta)=\frac{\min\big\{\{\mu_{j,t}(\theta)\}_{j\in\mathcal{M}^{\theta}_{i,t}},\,\pi_{i,t+1}(\theta)\big\}}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,t}(\theta_p)\}_{j\in\mathcal{M}^{\theta_p}_{i,t}},\,\pi_{i,t+1}(\theta_p)\big\}}. \tag{4}$$

If $|\mathcal{N}_i|<2f+1$, then agent $i$ updates its actual beliefs as follows:

$$\mu_{i,t+1}(\theta)=\pi_{i,t+1}(\theta). \tag{5}$$

As with the learning rule presented in Section 3.1, agent $i$ transmits $\mu_{i,t+1}$ to each of its out-neighbors on completion of the above steps. We will refer to the above sequence of actions as the Local-Filtering based Resilient Hypothesis Elimination (LFRHE) algorithm.
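Steps 2 and 3 of LFRHE can be sketched for a single agent as follows. This is a simplified illustration; the per-hypothesis trimming and the fallback case for small neighborhoods follow our reading of the case split above, and the array layout is an assumption:

```python
import numpy as np

def lfrhe_actual_update(pi_i_next, neighbor_mus, f):
    # neighbor_mus: list of belief vectors (length m) received from neighbors.
    # For each hypothesis, discard the f highest and f lowest neighbor beliefs
    # (Step 2), then take the normalized minimum of the retained beliefs and
    # the agent's own local belief (Eq. (4)). With too few neighbors, fall
    # back to the purely local update of Eq. (5).
    if len(neighbor_mus) < 2 * f + 1:
        return pi_i_next.copy()             # Eq. (5)
    m = len(pi_i_next)
    beliefs = np.array(neighbor_mus)        # shape (|N_i|, m)
    unnorm = np.empty(m)
    for p in range(m):
        col = np.sort(beliefs[:, p])
        retained = col[f:len(col) - f]      # drop f lowest and f highest
        unnorm[p] = min(retained.min(), pi_i_next[p])
    return unnorm / unnorm.sum()
```

Note that the trimming is done independently per hypothesis, so different neighbors may be rejected for different hypotheses, matching the per-hypothesis sets $\mathcal{M}^{\theta}_{i,t}$ in (4).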

## 4 Main Results

In this section, we state our main results, and then comment on them in Section 5; detailed proofs of the results are presented in Section 6. Our first result establishes the correctness of the learning rule proposed in Section 3.1.

###### Theorem 1.

Suppose $\mathcal{A}=\emptyset$, and that the following are true:

1. For every pair of hypotheses $(\theta_p,\theta_q)$, the corresponding source set $\mathcal{S}(\theta_p,\theta_q)$ is non-empty.

2. For every pair of hypotheses $(\theta_p,\theta_q)$, $\mathcal{V}\setminus\mathcal{S}(\theta_p,\theta_q)$ is reachable from the source set $\mathcal{S}(\theta_p,\theta_q)$.

3. Every agent has a non-zero prior belief on each hypothesis, i.e., $\pi_{i,0}(\theta)>0$ and $\mu_{i,0}(\theta)>0$ for all $\theta\in\Theta$, and for all $i\in\mathcal{V}$.

Then, the learning rule described by equations (2) and (3) leads to collaborative learning of the true state, i.e., $\mu_{i,t}(\theta^\star)\to 1$ almost surely, for every $i\in\mathcal{V}$.

Our second result establishes the correctness of the LFRHE algorithm proposed in Section 3.2.

###### Theorem 2.

Suppose the following are true:

1. For every pair of hypotheses $(\theta_p,\theta_q)$, the graph $\mathcal{G}$ is strongly $(2f+1)$-robust w.r.t. the corresponding source set $\mathcal{S}(\theta_p,\theta_q)$.

2. Each regular agent has a non-zero prior belief on each hypothesis, i.e., $\pi_{i,0}(\theta)>0$ and $\mu_{i,0}(\theta)>0$ for all $\theta\in\Theta$, and for all $i\in\mathcal{R}$.

Then, the LFRHE algorithm described by equations (2), (4) and (5) leads to collaborative learning of the true state despite the actions of any $f$-local set of Byzantine adversaries, i.e., $\mu_{i,t}(\theta^\star)\to 1$ almost surely, for every $i\in\mathcal{R}$.

###### Remark 2.

For any pair $(\theta_p,\theta_q)$, notice that condition (i) of Theorem 2 (together with the definition of strong-robustness in Def. 3) requires $|\mathcal{S}(\theta_p,\theta_q)|\geq 2f+1$, if $\mathcal{V}\setminus\mathcal{S}(\theta_p,\theta_q)$ is non-empty.

## 5 Discussion

(Assumptions in Theorem 1): While the first condition in Theorem 1 is a basic global identifiability condition, the second condition on the network structure is in general weaker than the standard assumption of strong-connectivity made in [3, 4, 11, 12, 10, 8]. To see why the latter statement is true, consider a scenario where certain agents are source agents for every pair of hypotheses. Clearly, any such agent can discern the true state without neighbor interactions, precluding the need for incoming edges to such agents.6 Finally, the assumption of non-zero initial beliefs is fairly standard, and can be easily met by initially maintaining uniform beliefs over the hypothesis set.

(Assumptions in Theorem 2): The first condition in Theorem 2 blends requirements on the signal structures of the agents with those on the communication graph. To gain intuition about this condition, fix a false hypothesis $\theta\in\Theta\setminus\{\theta^\star\}$, and let there exist at least one regular agent $i\notin\mathcal{S}(\theta^\star,\theta)$. To enable agent $i$ to discern the truth despite potential adversaries in its neighborhood, one requires (i) redundancy in the signal structures of the agents (see Remark 2), and (ii) redundancy in the network structure to facilitate reliable information flow from $\mathcal{S}(\theta^\star,\theta)$ to agent $i$. These requirements are captured by condition (i), a point made apparent in Section 6.2.

(Complexity of Checking Condition (i) in Theorem 2): Given a network of agents with associated signal structures, condition (i) in Theorem 2 can be checked in polynomial time. Specifically, for every pair $(\theta_p,\theta_q)$, finding the source set $\mathcal{S}(\theta_p,\theta_q)$ can easily be done in polynomial time via inspection of the agents’ signal structures. For a fixed source set $\mathcal{S}(\theta_p,\theta_q)$, checking whether $\mathcal{G}$ is strongly $(2f+1)$-robust w.r.t. $\mathcal{S}(\theta_p,\theta_q)$ amounts to simulating a bootstrap percolation process on $\mathcal{G}$, with $\mathcal{S}(\theta_p,\theta_q)$ as the initial active set, and $(2f+1)$ as the activation threshold. This too can be achieved in polynomial time, as discussed in [19].
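The percolation check described above admits a direct sketch (the adjacency representation is a hypothetical choice; a simple fixed-point loop already gives a polynomial-time bound):

```python
def strongly_r_robust(neighbors, source_set, r, n):
    # Bootstrap-percolation check of strong r-robustness w.r.t. a source set:
    # seed the source set as active, then repeatedly activate any node with
    # at least r active neighbors. The graph is strongly r-robust w.r.t. the
    # source set iff every node is eventually activated.
    # neighbors[i] is the set of neighbors of node i; nodes are 0..n-1.
    active = set(source_set)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in active and len(neighbors[i] & active) >= r:
                active.add(i)
                changed = True
    return len(active) == n
```

Running this once per pair of hypotheses, with $r=2f+1$ and the corresponding source set as the seed, verifies condition (i) of Theorem 2.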

(Analogy with Distributed State Estimation): Consider the problem of collaboratively estimating the state of an LTI process based on information exchanges among agents that receive partial measurements of the state. There are natural connections between this setting, and the problem studied in this paper. For the state estimation scenario, one can fix an unstable mode of the process, and define source agents for that mode to be agents that can detect the eigenspaces associated with that mode. Interestingly, with source agents defined for each unstable mode in the manner described above, [23, Theorem 3] and [19, Theorem 7] (in the context of distributed state estimation) can be viewed as analogues of Theorem 1 and Theorem 2, respectively.

(Convergence Rate): Consider any false hypothesis . We conjecture that based on our learning rules, the actual beliefs of all the regular agents on will almost surely decay exponentially fast after a transient period, with the rate of decay lower bounded by .

## 6 Proofs of the Main Results

We start with the following simple lemma that characterizes the asymptotic behavior of the local belief sequences generated based on (2); we provide a proof (adapted to our notation) to keep the paper self-contained.

###### Lemma 1.

Consider an agent $i\in\mathcal{V}$. Suppose $\pi_{i,0}(\theta)>0$ for all $\theta\in\Theta$. Then, the update rule (2) ensures that (i) $\pi_{i,t}(\theta)\to 0$ almost surely, for each $\theta\notin\Theta^{\theta^\star}_i$, and (ii) $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$ exists almost surely, and satisfies $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)>0$ almost surely.

###### Proof.

Pick an agent $i\in\mathcal{V}$ and a hypothesis $\theta\in\Theta$, and define:

$$\rho_{i,t}(\theta)\triangleq\log\frac{\pi_{i,t}(\theta)}{\pi_{i,t}(\theta^\star)},\qquad \lambda_{i,t}(\theta)\triangleq\log\frac{l_i(s_{i,t}|\theta)}{l_i(s_{i,t}|\theta^\star)}. \tag{6}$$

Then, based on (2), we obtain the following recursion:

$$\rho_{i,t+1}(\theta)=\rho_{i,t}(\theta)+\lambda_{i,t+1}(\theta),\quad \forall t\in\mathbb{N}. \tag{7}$$

Rolling out the above equation over time yields:

$$\rho_{i,t}(\theta)=\rho_{i,0}(\theta)+\sum_{k=1}^{t}\lambda_{i,k}(\theta),\quad \forall t\in\mathbb{N}_+. \tag{8}$$

Notice that $\{\lambda_{i,t}(\theta)\}_{t\in\mathbb{N}_+}$ is a sequence of i.i.d. random variables with finite means and variances. In particular, it is easy to verify that each random variable $\lambda_{i,t}(\theta)$ has mean7 given by $-D(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta))$. Thus, based on the strong law of large numbers, we have $\frac{1}{t}\sum_{k=1}^{t}\lambda_{i,k}(\theta)\to -D(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta))$ almost surely. Dividing both sides of (8) by $t$, and taking the limit as $t$ goes to infinity, we then obtain:

$$\lim_{t\to\infty}\frac{1}{t}\rho_{i,t}(\theta)=-D\big(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta)\big)\quad\text{almost surely}. \tag{9}$$

Finally, note that based on the definition of the set $\Theta^{\theta^\star}_i$, we have $D(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta))>0$ for each $\theta\notin\Theta^{\theta^\star}_i$. It then follows from (9) that $\rho_{i,t}(\theta)\to-\infty$ almost surely, and hence $\pi_{i,t}(\theta)\to 0$ almost surely. For any $\theta\in\Theta^{\theta^\star}_i$, observe that $\lambda_{i,t}(\theta)=0$ almost surely (since $l_i(\cdot|\theta)=l_i(\cdot|\theta^\star)$ on a finite signal space with full support). It then follows from (7) that for each $\theta\in\Theta^{\theta^\star}_i$, $\rho_{i,t}(\theta)=\rho_{i,0}(\theta)$ for all $t\in\mathbb{N}$. From the above discussion, we conclude that a limiting belief vector exists almost surely, with non-zero entries corresponding to only those $\theta\in\Theta$ for which $D(l_i(\cdot|\theta^\star)\,\|\,l_i(\cdot|\theta))=0$. Part (ii) of the lemma then follows readily by noting that $\theta^\star\in\Theta^{\theta^\star}_i$. ∎

We are now in position to prove Theorems 1 and 2.

### 6.1 Proof of Theorem 1

###### Proof.

Let $\bar{\Omega}$ denote the set of sample paths along which for each agent $i\in\mathcal{V}$, the following hold: (i) for each $\theta\notin\Theta^{\theta^\star}_i$, $\lim_{t\to\infty}\pi_{i,t}(\theta)=0$, and (ii) $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$ exists, and satisfies $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)>0$. Recall that $\Theta^{\theta^\star}_i$ represents the set of hypotheses that are observationally equivalent to the true state from the point of view of agent $i$. Hence, for each $i\in\mathcal{V}$, we have $\theta^\star\in\Theta^{\theta^\star}_i$. Based on the third condition in the statement of Theorem 1, and Lemma 1, we infer that $\bar{\Omega}$ has measure $1$. Thus, to prove the desired result, it suffices to confine our attention to the set $\bar{\Omega}$. Specifically, fix any sample path $\omega\in\bar{\Omega}$, and pick any $\epsilon\in(0,1)$. Our goal will be to establish that along the sample path $\omega$, there exists $t(\omega,\epsilon)$ such that $\mu_{i,t}(\theta)<\epsilon$ for all $t\geq t(\omega,\epsilon)$, for all $\theta\in\Theta\setminus\{\theta^\star\}$, and for all $i\in\mathcal{V}$ in the dynamics given by (3). This would be equivalent to establishing that the actual beliefs of all the agents on the true state can be made arbitrarily close to $1$ (since the proposed min-rule (3) generates a valid probability distribution over the hypothesis set at each time-step). We complete the proof in the following two steps.

Step 1: Lower bounding the actual beliefs on the true state: Consider the following scenario. During a transient phase, certain agents see private signals that cause them to temporarily lower their local beliefs on the true state. This in turn gets propagated via the min-rule (3) to the actual beliefs of the agents in the network. For sample paths in the set $\bar{\Omega}$, we rule out the possibility of such a transient phenomenon triggering a cascade of progressively lower beliefs on the true state. To this end, given the choice of the sample path $\omega$, notice that $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$ exists for each $i\in\mathcal{V}$, and that $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)>0$ (the latter follows from condition (iii) of the theorem via Lemma 1). Pick a small number $\delta>0$ such that $\delta<\min_{i\in\mathcal{V}}\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$. The following statement is then immediate. For each agent $i\in\mathcal{V}$, there exists $t_i(\omega,\delta)$, such that for all $t\geq t_i(\omega,\delta)$, $\pi_{i,t}(\theta^\star)\geq\delta$. Define $\bar{t}_1(\omega,\delta)\triangleq\max_{i\in\mathcal{V}}t_i(\omega,\delta)$. In words, $\bar{t}_1(\omega,\delta)$ represents the time-step beyond which the local beliefs of all the agents on the true state are lower-bounded by $\delta$. We ask: At such a time-step, what is the lowest actual belief held by an agent on the true state? More precisely, we define $\mu_{\min}(\omega)\triangleq\min_{i\in\mathcal{V}}\mu_{i,\bar{t}_1(\omega,\delta)}(\theta^\star)$. We claim $\mu_{\min}(\omega)>0$. To see this, observe that given the assumption of non-zero prior beliefs on the true state, and the structure of the proposed min-rule (3), $\mu_{\min}(\omega)$ can be $0$ if and only if there exists some time-step $\tau\leq\bar{t}_1(\omega,\delta)$ such that $\pi_{j,\tau}(\theta^\star)=0$, for some $j\in\mathcal{V}$. However, given the structure of the local Bayesian update rule (2), we would then have $\pi_{j,t}(\theta^\star)=0$, for all $t\geq\tau$, contradicting the fact that $\lim_{t\to\infty}\pi_{j,t}(\theta^\star)>0$ (the latter fact has already been established above). Having thus established that $\mu_{\min}(\omega)>0$, define $\eta(\omega)\triangleq\min\{\delta,\mu_{\min}(\omega)\}$. In other words, $\eta(\omega)$ lower-bounds the lowest belief (considering both local and actual beliefs) on the true state held by an agent at time-step $\bar{t}_1(\omega,\delta)$. We claim the following:

$$\mu_{i,t}(\theta^\star)\geq\eta(\omega),\quad \forall t\geq\bar{t}_1(\omega,\delta),\ \forall i\in\mathcal{V}. \tag{10}$$

To see why (10) is true, fix an agent $i\in\mathcal{V}$, and consider the following chain of inequalities:

$$\begin{aligned}
\mu_{i,\bar{t}_1(\omega,\delta)+1}(\theta^\star) &\overset{(a)}{=} \frac{\min\big\{\{\mu_{j,\bar{t}_1(\omega,\delta)}(\theta^\star)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta^\star)\big\}}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,\bar{t}_1(\omega,\delta)}(\theta_p)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta_p)\big\}} \\
&\overset{(b)}{\geq} \frac{\eta(\omega)}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,\bar{t}_1(\omega,\delta)}(\theta_p)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta_p)\big\}} \\
&\geq \frac{\eta(\omega)}{\sum_{p=1}^{m}\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta_p)} \overset{(c)}{=} \eta(\omega),
\end{aligned} \tag{11}$$

where $(a)$ is given by (3), $(b)$ follows from the way $\eta(\omega)$ is defined and by noting that $\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta^\star)\geq\delta\geq\eta(\omega)$, and $(c)$ follows by noting that the local belief vectors generated via (2) (at each time-step) are valid probability distributions over the hypothesis set $\Theta$, and hence $\sum_{p=1}^{m}\pi_{i,\bar{t}_1(\omega,\delta)+1}(\theta_p)=1$. Since the above reasoning applies to every agent in the network, we can keep repeating it to establish (10) via induction.

Step 2: Upper bounding the actual beliefs on each false hypothesis: The key observation that guides the rest of the proof is as follows. While Step 1 of the proof ensures that the beliefs (both local and actual) of each agent on the true state are lower-bounded by $\eta(\omega)$ after a finite period of time (given by $\bar{t}_1(\omega,\delta)$), Lemma 1 guarantees that the local beliefs on any false hypothesis $\theta$ will eventually become arbitrarily small (and in particular, smaller than $\eta(\omega)$) for each agent $i\in\mathcal{S}(\theta^\star,\theta)$, on the sample path $\omega$ under consideration. In what follows, we investigate how this impacts the actual beliefs of the agents in the network. To this end, given an $\epsilon\in(0,1)$, pick a small $\bar{\epsilon}(\omega)>0$ such that $\bar{\epsilon}(\omega)<\min\{\epsilon,\eta(\omega)\}$. Fix a hypothesis $\theta\in\Theta\setminus\{\theta^\star\}$. By virtue of condition (i) of the theorem, we know that $\mathcal{S}(\theta^\star,\theta)$ is non-empty. Let $q\triangleq d+2$, where $d$ represents the diameter of the graph $\mathcal{G}$. Then, based on Lemma 1, for each $i\in\mathcal{S}(\theta^\star,\theta)$, there exists $t^{\theta}_i(\omega,\bar{\epsilon}(\omega))$ such that for all $t\geq t^{\theta}_i(\omega,\bar{\epsilon}(\omega))$, $\pi_{i,t}(\theta)<\bar{\epsilon}^{\,q}(\omega)$. Define

$$\bar{t}^{\,\theta}_2(\omega,\delta,\bar{\epsilon}(\omega))\triangleq\max\Big\{\bar{t}_1(\omega,\delta),\ \max_{i\in\mathcal{S}(\theta^\star,\theta)}\big\{t^{\theta}_i(\omega,\bar{\epsilon}(\omega))\big\}\Big\}. \tag{12}$$

Throughout the rest of the proof, we suppress the dependence of $\bar{t}^{\,\theta}_2$ on $\omega$, $\delta$, and $\bar{\epsilon}(\omega)$ to avoid cluttering the exposition. For any agent $i\in\mathcal{S}(\theta^\star,\theta)$, we obtain the following chain of inequalities:

$$\begin{aligned}
\mu_{i,\bar{t}_2+1}(\theta) &\overset{(a)}{=} \frac{\min\big\{\{\mu_{j,\bar{t}_2}(\theta)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_2+1}(\theta)\big\}}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,\bar{t}_2}(\theta_p)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_2+1}(\theta_p)\big\}} \\
&\overset{(b)}{\leq} \frac{\bar{\epsilon}^{\,q}(\omega)}{\sum_{p=1}^{m}\min\big\{\{\mu_{j,\bar{t}_2}(\theta_p)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_2+1}(\theta_p)\big\}} \\
&\leq \frac{\bar{\epsilon}^{\,q}(\omega)}{\min\big\{\{\mu_{j,\bar{t}_2}(\theta^\star)\}_{j\in\mathcal{N}_i},\,\pi_{i,\bar{t}_2+1}(\theta^\star)\big\}} \overset{(c)}{\leq} \frac{\bar{\epsilon}^{\,q}(\omega)}{\eta(\omega)} \overset{(d)}{<} \bar{\epsilon}^{\,(q-1)}(\omega)\leq\bar{\epsilon}(\omega)<\epsilon,
\end{aligned} \tag{13}$$

where $(a)$ is given by (3), $(b)$ follows from the fact that for each $t\geq\bar{t}_2$, we have $\pi_{i,t+1}(\theta)<\bar{\epsilon}^{\,q}(\omega)$, $(c)$ follows from (10) and (12), and $(d)$ follows from the way $\bar{\epsilon}(\omega)$ has been chosen. In particular, note that the above chain of reasoning used to arrive at (13) applies to subsequent time-steps as well. We thus conclude:

$$\mu_{i,t}(\theta)<\bar{\epsilon}^{\,(q-1)}(\omega),\quad \forall t\geq\bar{t}_2+1,\ \forall i\in\mathcal{S}(\theta^\star,\theta). \tag{14}$$

We now wish to investigate how the effect of (14) propagates through the rest of the network. If $\mathcal{V}\setminus\mathcal{S}(\theta^\star,\theta)$ is empty, then we have reached the desired conclusion w.r.t. the false hypothesis $\theta$. If not, define

$$\mathcal{L}^{(\theta^\star,\theta)}_1\triangleq\big\{i\in\mathcal{V}\setminus\mathcal{S}(\theta^\star,\theta) : \mathcal{N}_i\cap\mathcal{S}(\theta^\star,\theta)\neq\emptyset\big\} \tag{15}$$

as the set of immediate out-neighbors of the source set $\mathcal{S}(\theta^\star,\theta)$. By virtue of condition (ii) of the theorem, if $\mathcal{V}\setminus\mathcal{S}(\theta^\star,\theta)$ is non-empty, then $\mathcal{L}^{(\theta^\star,\theta)}_1$ as defined above is also non-empty. Consider any agent $i\in\mathcal{L}^{(\theta^\star,\theta)}_1$. By definition, agent $i$ has a neighbor in $\mathcal{S}(\theta^\star,\theta)$ satisfying (14). This observation coupled with equations (10), (12) can be used to obtain a similar chain of inequalities as the ones featuring in (13). Specifically, we obtain:

$$\mu_{i,t}(\theta)<\bar{\epsilon}^{\,(q-2)}(\omega),\quad \forall t\geq\bar{t}_2+2,\ \forall i\in\mathcal{L}^{(\theta^\star,\theta)}_1. \tag{16}$$

With $\mathcal{L}^{(\theta^\star,\theta)}_0\triangleq\mathcal{S}(\theta^\star,\theta)$, the above arguments can be repeated by successively defining the sets $\mathcal{L}^{(\theta^\star,\theta)}_r$, $r\in\mathbb{N}_+$, as follows:

$$\mathcal{L}^{(\theta^\star,\theta)}_r\triangleq\Big\{i\in\mathcal{V}\setminus\bigcup_{k=0}^{r-1}\mathcal{L}^{(\theta^\star,\theta)}_k : \mathcal{N}_i\cap\Big(\bigcup_{k=0}^{r-1}\mathcal{L}^{(\theta^\star,\theta)}_k\Big)\neq\emptyset\Big\} \tag{17}$$

Whenever $\mathcal{V}\setminus\bigcup_{k=0}^{r-1}\mathcal{L}^{(\theta^\star,\theta)}_k$ is non-empty, condition (ii) of the theorem implies that $\mathcal{L}^{(\theta^\star,\theta)}_r$ will also be non-empty. One can then easily verify via induction on $r$ that:

$$\mu_{i,t}(\theta)<\bar{\epsilon}^{\,(q-(r+1))}(\omega),\quad \forall t\geq\bar{t}_2+(r+1),\ \forall i\in\mathcal{L}^{(\theta^\star,\theta)}_r, \tag{18}$$

where $r$ ranges over the non-empty layers, of which there are at most $d$. Noting that $q=d+2$, so that $\bar{\epsilon}^{\,(q-(r+1))}(\omega)\leq\bar{\epsilon}(\omega)$ for every such $r$, we obtain the desired result that $\mu_{i,t}(\theta)<\epsilon$, $\forall t\geq\bar{t}_2+(d+1)$, $\forall i\in\mathcal{V}$. An identical argument as the one presented above can be made for each false hypothesis $\theta\in\Theta\setminus\{\theta^\star\}$. This completes the proof. ∎

### 6.2 Proof of Theorem 2

###### Proof.

Consider an $f$-local adversarial set $\mathcal{A}$, and let $\mathcal{R}=\mathcal{V}\setminus\mathcal{A}$ denote the set of regular agents. We study two separate cases.

Case 1: Consider a regular agent $i\in\mathcal{R}$ such that $|\mathcal{N}_i|<2f+1$. Based on condition (i) of the theorem, we claim that $i\in\mathcal{S}(\theta_p,\theta_q)$, for every pair of distinct hypotheses $\theta_p,\theta_q\in\Theta$. We prove this claim via contradiction. To do so, suppose there exists a pair $\theta_p,\theta_q\in\Theta$, such that $i\notin\mathcal{S}(\theta_p,\theta_q)$. As $|\mathcal{N}_i|<2f+1$, the set $\{i\}\subseteq\mathcal{V}\setminus\mathcal{S}(\theta_p,\theta_q)$ is clearly not $(2f+1)$-reachable (see Def. 2). Thus, $\mathcal{G}$ is not strongly $(2f+1)$-robust w.r.t. the source set $\mathcal{S}(\theta_p,\theta_q)$, a fact that contradicts condition (i) of the theorem. Thus, we have established that for networks satisfying condition (i) of the theorem, regular agents with fewer than $2f+1$ neighbors can distinguish between every pair of hypotheses. Lemma 1 then implies that such agents can discern the true state by simply running the local Bayesian estimator (2), and updating actual beliefs via (5).

Case 2: We now focus only on regular agents $i$ satisfying $|\mathcal{N}_i|\geq 2f+1$. For this case, the structure of the proof mirrors that of Theorem 1; we thus only elaborate on details that are specific to tackling the aspect of adversarial agents. A key property of the proposed LFRHE algorithm that will be used throughout the proof is as follows. For any $\theta\in\Theta$, and any $i\in\mathcal{R}$ with $|\mathcal{N}_i|\geq 2f+1$, the filtering operation of the LFRHE algorithm ensures that at each time-step $t\in\mathbb{N}$, we have:

$$\mu_{j,t}(\theta)\in\mathrm{Conv}\big(\Psi^{\theta}_{i,t}\big),\quad \forall j\in\mathcal{M}^{\theta}_{i,t}, \tag{19}$$

where

$$\Psi^{\theta}_{i,t}\triangleq\big\{\mu_{j,t}(\theta) : j\in\mathcal{N}_i\cap\mathcal{R}\big\}, \tag{20}$$

and $\mathrm{Conv}(\Psi^{\theta}_{i,t})$ is used to denote the convex hull formed by the points in the set $\Psi^{\theta}_{i,t}$. In other words, any neighboring belief (on a particular hypothesis) that agent $i$ uses in the update rule (4) lies in the convex hull of the actual beliefs of its regular neighbors (on that particular hypothesis). To see why (19) is true, partition the neighbor set $\mathcal{N}_i$ of a regular agent $i$ into three sets $\mathcal{H}^{\theta}_{i,t}$, $\mathcal{M}^{\theta}_{i,t}$, and $\mathcal{L}^{\theta}_{i,t}$ as follows. Sets $\mathcal{H}^{\theta}_{i,t}$ and $\mathcal{L}^{\theta}_{i,t}$ are each of cardinality $f$, and contain neighbors of agent $i$ that transmit the highest and the lowest actual beliefs respectively, on the hypothesis $\theta$, to agent $i$ at time-step $t$. The set $\mathcal{M}^{\theta}_{i,t}$ contains the remaining neighbors of agent $i$, and is non-empty at every time-step since $|\mathcal{N}_i|\geq 2f+1$. If $\mathcal{M}^{\theta}_{i,t}\cap\mathcal{A}=\emptyset$, then (19) holds trivially. Thus, consider the case when there are adversaries in the set $\mathcal{M}^{\theta}_{i,t}$, i.e., $\mathcal{M}^{\theta}_{i,t}\cap\mathcal{A}\neq\emptyset$. Given the $f$-locality of the adversarial model, and the nature of the filtering operation in the LFRHE algorithm, we infer that for each $j\in\mathcal{M}^{\theta}_{i,t}\cap\mathcal{A}$, there exist regular agents $u\in\mathcal{H}^{\theta}_{i,t}$ and $v\in\mathcal{L}^{\theta}_{i,t}$, such that $u,v\in\mathcal{N}_i\cap\mathcal{R}$, and $\mu_{v,t}(\theta)\leq\mu_{j,t}(\theta)\leq\mu_{u,t}(\theta)$. This establishes our claim regarding equation (19).

With the above property in hand, our goal will be to now establish each of the two steps in the proof of Theorem 1. To this end, let $\bar{\Omega}$ denote the set of sample paths along which for each agent $i\in\mathcal{R}$, the following hold: (i) for each $\theta\notin\Theta^{\theta^\star}_i$, $\lim_{t\to\infty}\pi_{i,t}(\theta)=0$, and (ii) $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$ exists, and satisfies $\lim_{t\to\infty}\pi_{i,t}(\theta^\star)>0$. Based on condition (ii) of the theorem, and Lemma 1, we infer that $\bar{\Omega}$ has measure $1$. Thus, as in Theorem 1, fix a sample path $\omega\in\bar{\Omega}$, and pick $\epsilon\in(0,1)$. Pick a small number $\delta>0$ satisfying $\delta<\min_{i\in\mathcal{R}}\lim_{t\to\infty}\pi_{i,t}(\theta^\star)$, and observe that for each agent $i\in\mathcal{R}$, there exists $t_i(\omega,\delta)$, such that for all $t\geq t_i(\omega,\delta)$, $\pi_{i,t}(\theta^\star)\geq\delta$. Define $\bar{t}_1(\omega,\delta)\triangleq\max_{i\in\mathcal{R}}t_i(\omega,\delta)$ and $\eta(\omega)\triangleq\min\{\delta,\min_{i\in\mathcal{R}}\mu_{i,\bar{t}_1(\omega,\delta)}(\theta^\star)\}$. As before, we claim $\eta(\omega)>0$. To establish this claim, we need to answer the following question: Can an adversarial agent cause its out-neighbors to set their actual beliefs on $\theta^\star$ to be $0$ by setting its own actual belief on $\theta^\star$ to be $0$? We argue that this is impossible under the LFRHE algorithm. By way of contradiction, suppose there exists a time-step $t'(\omega)$ satisfying:

$$t'(\omega)=\min\big\{t\in\mathbb{N} : \exists\, i\in\mathcal{R}\ \text{with}\ \mu_{i,t}(\theta^\star)=0\big\}. \tag{21}$$

In words, $t'(\omega)$ represents the first time-step when some regular agent sets its actual belief on the true hypothesis to be zero. Clearly, $t'(\omega)\geq 1$ based on condition (ii) of the theorem. Suppose $t'(\omega)$ is some positive integer, and focus on how agent $i$ updates $\mu_{i,t'(\omega)}(\theta^\star)$ based on (4). Following similar arguments as in the proof of Theorem 1, we know that $\pi_{i,t'(\omega)}(\theta^\star)>0$. At the same time, every belief featuring in the set $\Psi^{\theta^\star}_{i,t'(\omega)-1}$ (as defined in equation (20)) is strictly positive based on the way $t'(\omega)$ is defined. In light of the above arguments, and based on (19), (20), we infer:

$$\min\big\{\{\mu_{j,t'(\omega)-1}(\theta^\star)\}_{j\in\mathcal{M}^{\theta^\star}_{i,t'(\omega)-1}},\,\pi_{i,t'(\omega)}(\theta^\star)\big\}>0. \tag{22}$$

Thus, based on (4), we must have $\mu_{i,t'(\omega)}(\theta^\star)>0$, yielding the desired contradiction. With $\eta(\omega)>0$, one can easily verify the following:

$$\mu_{i,t}(\theta^\star)\geq\eta(\omega),\quad \forall t\geq\bar{t}_1(\omega,\delta),\ \forall i\in\mathcal{R}. \tag{23}$$

In particular, (23) follows by (i) noting that for each , , and each belief featuring in the set