Maximum Error Modeling for Fault-Tolerant Computation using Maximum a posteriori (MAP) Hypothesis

Karthikeyan Lingasubramanian, Syed M. Alam, Sanjukta Bhanja
Nano Computing Research Group (NCRG), Department of Electrical Engineering, University of South Florida
EverSpin Technologies

Abstract

The application of current generation computing machines in safety-centric applications like implantable biomedical chips and automobile safety has immensely increased the need for reviewing the worst-case error behavior of computing devices for fault-tolerant computation. In this work, we propose an exact probabilistic error model that can compute the maximum error over all possible input space in a circuit-specific manner and can handle various types of structural dependencies in the circuit. We also provide the worst-case input vector, which has the highest probability to generate an erroneous output, for any given logic circuit. We also present a study of circuit-specific error bounds for fault-tolerant computation in heterogeneous circuits using the maximum error computed for each circuit. We model the error estimation problem as a maximum a posteriori (MAP) estimate over the joint error probability function of the entire circuit, calculated efficiently through an intelligent search of the entire input space using probabilistic traversal of a binary join tree with the Shenoy-Shafer algorithm. We demonstrate this model on MCNC and ISCAS benchmark circuits and validate it using an equivalent HSpice model. Both approaches yield the same worst-case input vectors, and the highest percentage difference of our error model with respect to HSpice is small. We observe that the maximum error probabilities are significantly larger than the average error probabilities and provide much tighter error bounds for fault-tolerant computation. We also find that the error estimates depend on the specific circuit structure and that the maximum error probabilities are sensitive to the individual gate failure probabilities.

I Introduction

Why maximum error? Industries like automotive and health care, which employ safety-centric electronic devices, have traditionally addressed high reliability requirements through redundancy, error correction, and the choice of proper assembly and packaging technology. In addition, rigorous product testing at extended stress conditions can filter out an entire lot in the presence of a small number of failures [39]. Another rapidly growing class of electronic chips where reliability is very critical is implantable biomedical chips [41, 42]. More interestingly, some of the safety approaches, such as redundancy and complex packaging, are not readily applicable to implantable biomedical applications because of low-voltage, low-power operation and small form factor requirements. Also, in future technologies like NW-FETs, CNT-FETs [44], RTDs [46], hybrid nano devices [16], single electron tunneling devices [17], field-coupled computing devices like QCAs [45] (molecular and magnetic), and spin-coupled computing devices, computing components are likely to have higher error rates (both in terms of defects and transient faults) since they operate near the thermal limit and information processing occurs at extremely small volume. Nano-CMOS, beyond 22nm, is not an exception in this regard as frequency scales up and voltage and geometry scale down. We also have to note that, while two design implementation choices can have different average probabilities of failure, the lower-average choice may in fact have a higher maximum probability of failure, leading to lower yield in manufacturing and more rejects during chip burn-in and extended screening.

I-A Proposed Work

In this work, we present a probabilistic model to study the maximum output error over all possible input space for a given logic circuit. We present a method to find the worst-case input vector, i.e., the input vector that has the highest probability of producing an error at the output. In the first step of our model, we convert the circuit into a corresponding edge-minimal probabilistic network that represents the basic logic function of the circuit, handling the interdependencies between the signals by treating them as random variables in a composite joint probability distribution function. Each node in this network corresponds to a random variable representing a signal in the digital circuit, and each edge corresponds to the logic governing the connected signals. The individual probability distribution for each node is given using conditional probability tables.

From this probabilistic network we obtain our probabilistic error model, which consists of three blocks: (i) ideal error-free logic, (ii) error-prone logic where every gate has a gate error probability ε, i.e., each gate can go wrong individually by a probabilistic factor, and (iii) a detection unit that uses comparators to compare the error-free and erroneous outputs. The error-prone logic represents the real-time circuit under test, whereas the ideal logic and the detection unit are fictitious elements used to study the circuit. Both the ideal logic and the error-prone logic are fed by the primary inputs. We denote all the internal nodes, both in the error-free and erroneous portions, by X and the comparator outputs by O. The comparators are based on XOR logic, and hence a state "1" signifies an error at the output. An evidence set is created by evidencing one or more of the variables in the comparator set to state "1". Then performing MAP hypothesis on the probabilistic error model provides the worst-case input vector. The maximum output error probability can be obtained after instantiating the input nodes of the probabilistic error model with the worst-case input vector and performing inference. The process is repeated for increasing values of the gate error probability, and finally the value that makes at least one of the output signals completely random is taken as the error bound for the given circuit.

It is obvious that we can arrive at the MAP estimate by enumerating all possible input instantiations and computing the maximum with any probabilistic computing tool. The attractive feature of this MAP algorithm lies in eliminating a significant part of the input search-subtree based on an easily available upper bound of the MAP probability, obtained by probabilistic traversal of a binary join tree with the Shenoy-Shafer algorithm [21, 22]. The actual computation is divided into two theoretical components. First, we convert the circuit structure into a binary join tree and employ the Shenoy-Shafer algorithm, a two-pass probabilistic message-passing algorithm, to obtain a multitude of upper bounds of the MAP probability under partial input instantiations. Next, we construct a binary tree of the input vector space where each path from the root node to a leaf node represents an input vector. At every node, we traverse the search tree if the upper bound, obtained by Shenoy-Shafer inference on the binary join tree, is greater than the maximum probability already achieved; otherwise we prune the entire sub-tree. Experimental results on a few standard benchmarks show that the worst-case errors significantly deviate from the average ones and also provide tighter bounds for the circuits that use a homogeneous gate type (c17, with NAND only). Salient features and deliverables are itemized below:

  • We have proposed a method to calculate maximum output error using a probabilistic model. Through experimental results, we show the importance of modeling maximum output error. (Fig. 9)

  • Given a circuit with a fixed gate error probability, our model can provide the maximum output error probability and the worst-case input vector, which can be very useful testing parameters.

  • We present the circuit-specific error bounds for fault-tolerant computation and we show that maximum output errors provide a tighter bound.

  • We have used an efficient design framework that employs inference in binary join trees using Shenoy-Shafer algorithm to perform MAP hypothesis accurately.

  • We give a probabilistic error model, where efficient error incorporation is possible, for useful reliability studies. Using our model the error injection and probability of error for each gate can be modified easily. Moreover, we can accommodate both fixed and variable gate errors in a single circuit without affecting computational complexity.
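The branch-and-bound search outlined above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `upper_bound` and `joint_prob` are hypothetical stand-ins for the Shenoy-Shafer upper bound under a partial input instantiation and for the exact joint probability at a leaf.

```python
def map_search(n_inputs, upper_bound, joint_prob):
    # Depth-first branch-and-bound over the binary input instantiation tree.
    best = {"p": 0.0, "vec": None}

    def dfs(assignment):
        if len(assignment) == n_inputs:
            p = joint_prob(assignment)            # exact value at a leaf
            if p > best["p"]:
                best["p"], best["vec"] = p, tuple(assignment)
            return
        for bit in (0, 1):
            assignment.append(bit)
            # Prune the whole subtree when its bound cannot beat the incumbent.
            if upper_bound(assignment) > best["p"]:
                dfs(assignment)
            assignment.pop()

    dfs([])
    return best["p"], best["vec"]
```

The tighter the bound, the larger the fraction of the input space that is pruned away without being visited, which is exactly the efficiency argument made above.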

The rest of the paper is structured as follows. Section II gives a summary of some of the previous works on error bounds for fault-tolerant computation, along with some of the reliability models established from these works. Section III explains the structure of our probabilistic error model. Section IV explains the MAP hypothesis and its complexity. Section V provides the experimental results, followed by the conclusion in Section VI.

Fig. 1: (a) Digital logic circuit (b) Error model (c) Probabilistic error model

II Prior Work

II-A State-of-the-art

The study of reliable computation using unreliable components was initiated by von Neumann [2], who showed that erroneous components with some small error probability can provide reliable outputs, and that this is possible only when the error probability of each component is below a certain threshold. This work was later enhanced by Pippenger [3], who realized von Neumann's model using formulas for Boolean functions and showed that, for a function of a given number of arguments, the error probability of each component must stay below a corresponding bound to achieve reliable computation. This work was later extended by using networks instead of formulas to realize the reliability model [4]. In [5], Hajek and Weller used the concept of formulas to derive the admissible error probability for 3-input gates; later this work was extended to gates with more inputs [6], where the number of inputs was chosen to be odd. For a specific even case, Evans and Pippenger [7] derived the maximum tolerable noise level for the 2-input NAND gate. Later this result was reiterated by Gao et al. for the 2-input NAND gate, along with other results for multi-input NAND gates and the majority gate, using bifurcation analysis [8], which involves repeated iterations on a function relating to the specific computational component. While there exist studies of circuit-specific bounds for circuit characteristics like switching activity [9], the study of circuit-specific error bounds would be highly informative and useful for designing high-end computing machines.

The study of fault-tolerant computation has broadened its scope and is being widely employed in fields like nano-computing architectures. Reliability models like Triple Modular Redundancy (TMR) and N-Modular Redundancy (NMR) [10] were designed using the von Neumann model. Expansion of these techniques led to models like Cascaded Triple Modular Redundancy (CTMR) [11], used for nanochip devices. In [12], the reliability of reconfigurable architectures was obtained using the NAND multiplexing technique, and in [13], majority multiplexing was used to achieve fault-tolerant designs for nanoarchitectures. A recent comparative study of these methods [14] indicates that a 1000-fold redundancy would be required for a device error (or failure) rate of 0.01 (note that this does not mean 1 out of 100 devices will fail; it indicates that a device will generate an erroneous output 1 out of 100 times). Many researchers are currently focusing on computing the average error [19, 20] of a circuit, and also on the expected error, to conduct reliability-redundancy trade-off studies. An approximate method based on Probabilistic Gate Models (PGMs) is discussed by Han et al. in [15], where the PGMs are formed using equations governing the functionality between an input and an output. Probabilistic analysis of digital logic circuits using decision diagrams is proposed in [18]. In [27], the average output error in digital circuits is calculated using a probabilistic reliability model that employs Bayesian networks.

In testing, the identification of possible input patterns to perform efficient circuit testing is achieved through Automatic Test Pattern Generation (ATPG) algorithms. Some commonly used ATPG algorithms, like the D-algorithm [32], the PODEM (path-oriented decision making) algorithm [33] and the FAN (fanout-oriented test generation) algorithm [34], are deterministic in nature. There are some partially probabilistic ATPG algorithms [35, 36, 37], which are used mainly to reduce the input pattern search space. In order to handle transient errors occurring in intermediate gates of a circuit, we need a completely probabilistic model [38].

II-B Relation to State-of-the-art

Our work concentrates on estimation of maximum error as opposed to average error, since for higher design levels it is important to account for maximum error behavior, especially if this behavior is far worse than the average case behavior.

Also our work proposes a completely probabilistic model as opposed to a deterministic model, where every gate of the circuit is modeled probabilistically and the worst case input pattern is obtained.

The bounds presented in all the above mentioned works do not consider (i) combinations of different logic units, such as NAND and majority, in deriving the bounds, and (ii) the circuit structure, signal dependencies, and error masking that occur in a realistic logic network, making the bounds pessimistic. Our model encapsulates the entire circuit structure along with the signal interdependencies, and so is capable of estimating the error bound of the entire circuit as opposed to a single logic unit.

III Probabilistic error model

The underlying model compares error-free and error-prone outputs. Our model contains three sections: (i) error-free logic, where the gates are assumed to be perfect; (ii) error-prone logic, where each gate goes wrong independently with an error probability ε; and (iii) XOR-logic based comparators that compare the error-free and error-prone primary outputs. When an error occurs, the error-prone primary output signal will not be in the same state as the ideal error-free primary output signal. So, an output of logic "1" at the XOR comparator gate indicates the occurrence of an error. For a given digital logic circuit as in Fig. 1(a), the error model and the corresponding probabilistic error model are illustrated in Fig. 1(b) and Fig. 1(c) respectively. In Fig. 1(b) and Fig. 1(c), block 1 is the error-free logic, block 2 is the error-prone logic with gate error probability ε, and block 3 is the comparator logic. In the entire model, the error-prone portion given in block 2 is the one that represents the real-time circuit. The ideal error-free portion in block 1 and the comparator portion in block 3 are fictitious and are used for studying the given circuit.

We would like the readers to note that we will represent a set of variables by bold capital letters, a set of instantiations by bold small letters, and any single variable by capital letters. Also, the probability of the event X = x, P(X = x), will be denoted simply by P(x).

The probabilistic network is a conditional factoring of a joint probability distribution. The nodes in the network are random variables representing each signal in the underlying circuit. To perfectly represent digital signals, each random variable has two states, state 0 and state 1. The edges represent the logic that governs the connected nodes using conditional probability tables (CPTs). For example, in Fig. 1(c), the nodes X1 and X2 are random variables representing the error-free signal and the error-prone signal, respectively, of the digital circuit given in Fig. 1(a). The edges connecting these nodes to their parents I1 and I2 represent the error-free AND logic and error-prone AND logic, as given by the CPTs in Table I.

Error-free AND: P(X1 | I1, I2)
I1 I2 → P(X1=0)  P(X1=1)
0  0  →   1        0
0  1  →   1        0
1  0  →   1        0
1  1  →   0        1
Error-prone AND: P(X2 | I1, I2), with gate error probability ε
I1 I2 → P(X2=0)  P(X2=1)
0  0  →  1-ε       ε
0  1  →  1-ε       ε
1  0  →  1-ε       ε
1  1  →   ε       1-ε
TABLE I: Conditional Probability Tables (CPTs) for error-free and error-prone AND logic
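As an illustration, the two CPTs of Table I can be generated programmatically. The helper name `and_cpts` and the `(i1, i2, x)` key layout are our own conventions for this sketch, with `eps` playing the role of the gate error probability ε:

```python
from itertools import product

def and_cpts(eps):
    # Build P(x | i1, i2) tables keyed by (i1, i2, x):
    # the ideal gate is deterministic, the noisy gate keeps the
    # correct output with probability 1-eps and flips it with eps.
    ideal, noisy = {}, {}
    for i1, i2 in product((0, 1), repeat=2):
        correct = i1 & i2
        for x in (0, 1):
            ideal[(i1, i2, x)] = 1.0 if x == correct else 0.0
            noisy[(i1, i2, x)] = (1 - eps) if x == correct else eps
    return ideal, noisy
```

Each row of either table sums to 1 over the output states, as any CPT must.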

Let us define the random variables in our probabilistic error model as Y = I ∪ X ∪ O, composed of the three disjoint subsets I, X and O, where

  1. I is the set of primary inputs.

  2. X is the set of internal logic signals, for both the erroneous (every gate has a failure probability ε) and error-free ideal logic elements.

  3. O is the set of comparator outputs, each one signifying the error in one of the primary outputs of the logic block.

  4. N = |Y| is the total number of network random variables.

Any probability function P(y1, y2, …, yN), where Y1, Y2, …, YN are random variables, can be written as,

P(y1, y2, …, yN) = P(yN | yN-1, …, y1) P(yN-1 | yN-2, …, y1) ⋯ P(y1)   (1)

This expression holds for any ordering of the random variables. In most applications, a variable is usually not dependent on all other variables; there are many conditional independencies embedded among the random variables, which can be used to reorder the random variables and simplify the joint probability as,

P(y1, y2, …, yN) = ∏i P(yi | Pa(Yi))   (2)

where Pa(Yi) indicates the parents of the variable Yi, representing its direct causes. This factoring of the joint probability function can be denoted as a graph with links directed from the random variables representing the inputs of a gate to the random variable representing its output. To understand it better, let us look at the error model given in Fig. 1(c). The joint probability distribution representing the network can be written as,

P(i1, i2, x1, x2, o1) = P(o1 | x1, x2, i1, i2) P(x2 | x1, i1, i2) P(x1 | i1, i2) P(i2 | i1) P(i1)   (3)

Here the random variable X2 is independent of the random variable X1 given its parents I1 and I2. This notion explains the conditional independence between the random variables in the network. So for X2, the probability distribution can be rephrased as,

P(x2 | x1, i1, i2) = P(x2 | i1, i2)   (4)

By implementing all the underlying conditional independencies, the basic joint probability distribution can be rephrased as,

P(i1, i2, x1, x2, o1) = P(i1) P(i2) P(x1 | i1, i2) P(x2 | i1, i2) P(o1 | x1, x2)   (5)

The implementation of this probability distribution can be clearly seen in Fig. 1(c). Each node is connected only to its parents and not to any other nodes. The conditional probability potentials for all the nodes are provided by the CPTs. The attractive feature of this graphical representation of the joint probability distribution is that not only does it make conditional dependency relationships among the nodes explicit, but it also serves as a computational mechanism for efficient probabilistic updating.
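As a sanity check on the factorization in Eq. 5, the joint probability can be evaluated by multiplying CPT entries. The dictionary-based CPT layout here is an assumption of this sketch, matching the example network with two AND-type internal nodes and an XOR comparator:

```python
def joint(i1, i2, x1, x2, o1, p_i1, p_i2, cpt_x1, cpt_x2, cpt_o1):
    # P(i1,i2,x1,x2,o1) = P(i1) P(i2) P(x1|i1,i2) P(x2|i1,i2) P(o1|x1,x2)
    return (p_i1[i1] * p_i2[i2]
            * cpt_x1[(i1, i2, x1)]
            * cpt_x2[(i1, i2, x2)]
            * cpt_o1[(x1, x2, o1)])
```

Summing this product over every joint assignment must give exactly 1, which is a quick way to validate a hand-built set of CPTs.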

IV Maximum a Posteriori (MAP) Estimate

Fig. 2: Search tree on which the depth-first branch-and-bound search is performed.

As we mentioned earlier, in our probabilistic error model, the network variables can be divided into three subsets I, X and O, where I represents the primary input signals; X represents the internal signals, including the primary output signals; and O represents the comparator output signals. Any primary output node can be forced to be erroneous by fixing the corresponding comparator output to logic "1", that is, by providing evidence to a comparator output. Given some evidence e, the objective of the maximum a posteriori estimate is to find a complete instantiation i of the variables in I that maximizes the following joint probability,

Pmap = max over i of P(i, e)   (6)

The probability Pmap is termed the MAP probability, the variables in I are termed the MAP variables, and the instantiation imap that gives the maximum is termed the MAP instantiation.

For example, consider Fig. 1. In the probabilistic model shown in Fig. 1(c), I contains the primary input nodes, X the internal signal nodes, and O the comparator output nodes. Each ideal error-free primary output node has a corresponding error-prone primary output node. Giving evidence "1" to a comparator output indicates that the corresponding error-prone primary output has produced an erroneous value. The MAP hypothesis uses this information and finds the input instantiation imap that maximizes P(i, e). This imap is the most probable input instantiation to give an error at the error-prone primary output signal, i.e., the input instantiation that will most probably provide a wrong output.
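The following toy example makes this concrete by brute-force enumeration over a hypothetical two-gate circuit y = (i1 AND i2) AND i3 (not one of the benchmark circuits); the comparator "fires" when the noisy output differs from the ideal one. Note how error masking at the second gate makes some inputs less likely to expose an error:

```python
from itertools import product

def p_comparator_fires(i1, i2, i3, eps):
    # Probability that the noisy output differs from the ideal one,
    # for y = (i1 AND i2) AND i3 with two independently noisy gates.
    ideal_g1 = i1 & i2
    ideal_y = ideal_g1 & i3
    total = 0.0
    for e1, e2 in product((0, 1), repeat=2):   # gate-flip indicators
        g1 = ideal_g1 ^ e1
        y = (g1 & i3) ^ e2
        pr = (eps if e1 else 1 - eps) * (eps if e2 else 1 - eps)
        if y != ideal_y:
            total += pr
    return total

def worst_input(eps, p_in=0.5 ** 3):
    # Brute-force MAP: score P(i) * P(comparator fires | i) for every input.
    scored = {v: p_in * p_comparator_fires(*v, eps)
              for v in product((0, 1), repeat=3)}
    best = max(scored, key=scored.get)
    return best, scored[best]
```

With i3 = 0 the second gate masks any error of the first one, so those inputs score lower; the branch-and-bound search of Section IV finds the same maximizer without visiting the whole input space.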

We arrive at the exact maximum a posteriori (MAP) estimate using the algorithms by Park and Darwiche [29] [30]. It is obvious that we could arrive at the MAP estimate by enumerating all possible input instantiations and computing the maximum output error. To make this more efficient, our MAP estimates rely on eliminating part of the input search-subtree based on an easily available upper bound of the MAP probability, obtained by a probabilistic traversal of a binary join tree using the Shenoy-Shafer algorithm [21, 22]. The actual computation is divided into two theoretical components.

  • First, we convert the circuit structure into a binary join tree and employ the Shenoy-Shafer algorithm, a two-pass probabilistic message-passing algorithm, to obtain a multitude of upper bounds of the MAP probability with partial input instantiations (discussed in Section IV-A). The reader familiar with the Shenoy-Shafer algorithm can skip that section. To our knowledge, the Shenoy-Shafer algorithm is not commonly used in the VLSI context, so we elaborate most steps of join tree creation, two-pass join tree traversal and computation of upper bounds with partial input instantiations.

  • Next, we construct a binary tree of the input vector space where each path from the root node to a leaf node represents an input vector. At every node, we traverse the search tree if the upper bound, obtained by Shenoy-Shafer inference on the binary join tree, is greater than the maximum probability already achieved; otherwise we prune the entire sub-tree. The depth-first traversal of the binary input instantiation tree is discussed in Section IV-B, where we detail the search process, pruning and heuristics used for better pruning. Note that the pruning is key to the significantly improved efficiency of the MAP estimates.

IV-A Calculation of MAP upper bounds using Shenoy-Shafer algorithm

To clearly understand the various MAP probabilities that are calculated during MAP hypothesis, let us examine the binary search tree formed using the MAP variables. A complete search through the MAP variables can be illustrated as shown in Fig. 2, which gives the corresponding search tree for the probabilistic error model given in Fig. 1(c). In this search tree, the root node has an empty instantiation; every intermediate node is associated with a subset of the MAP variables and a corresponding partial instantiation; and every leaf node is associated with the entire set I and a corresponding complete instantiation i. Each node has one child for every state that can be assigned to a variable; since we are dealing with digital signals, every node in the search tree has two children. Since the MAP variables represent the primary input signals of the given digital circuit, one path from the root to a leaf node of this search tree gives one input vector choice. The basic idea of the search process is to find the MAP probability by computing upper bounds of the intermediate MAP probabilities at the internal nodes.

MAP hypothesis can be categorized into two portions. The first portion involves finding intermediate upper bounds of the MAP probability, and the second portion involves improving these bounds to arrive at the exact MAP solution. These two portions are intertwined and performed alternately to effectively improve the intermediate MAP upper bounds. These upper bounds and the final solution are calculated by performing inference on the probabilistic error model using the Shenoy-Shafer algorithm [21, 22].

The Shenoy-Shafer algorithm is based on a local computation mechanism. The probability distributions of the locally connected variables are propagated to get the joint probability distribution of the entire network, from which any individual or joint probability distribution can be calculated. The Shenoy-Shafer algorithm involves the following crucial information and calculations.

CPT
Error-free AND: P(X1 | I1, I2); X1 equals I1 AND I2 with probability 1
Error-prone AND: P(X2 | I1, I2); X2 equals I1 AND I2 with probability 1-ε, and is flipped with probability ε
Input: P(I = 0) = 0.5, P(I = 1) = 0.5
Valuation
Error-free AND (X1 I1 I2 → value):
0 0 0 → 1
0 0 1 → 1
0 1 0 → 1
0 1 1 → 0
1 0 0 → 0
1 0 1 → 0
1 1 0 → 0
1 1 1 → 1
Error-prone AND (X2 I1 I2 → value):
0 0 0 → 1-ε
0 0 1 → 1-ε
0 1 0 → 1-ε
0 1 1 → ε
1 0 0 → ε
1 0 1 → ε
1 1 0 → ε
1 1 1 → 1-ε
Input (I → value):
0 → 0.5
1 → 0.5
TABLE II: Valuations of the variables derived from corresponding CPTs

Valuations: The valuations are functions based on the prior probabilities of the variables in the network. A valuation for a variable X can be given as P(X | Pa(X)), where Pa(X) are the parents of X. For variables without parents, the valuation is the prior P(X). These valuations can be derived from the CPTs (discussed in Section III) as shown in Table II.

f (A B → value):
0 0 → 1
0 1 → 1
1 0 → 1
1 1 → 0
g (B C → value):
0 0 → 1
0 1 → 0
1 0 → 0
1 1 → 0
f ⊗ g (A B C → f(A,B) × g(B,C)):
0 0 0 → 1×1
0 0 1 → 1×0
0 1 0 → 1×0
0 1 1 → 1×0
1 0 0 → 1×1
1 0 1 → 1×0
1 1 0 → 0×0
1 1 1 → 0×0
TABLE III: Combination
Fig. 3: Illustration of the Fusion algorithm.

Combination: Combination is a pointwise multiplication mechanism conducted to combine the information provided by the operand functions. A combination of two given functions f, over a variable set A, and g, over a variable set B, can be written as f ⊗ g, which is a function over A ∪ B obtained by pointwise multiplication. Table III provides an example.

Marginalization: Given a function f over disjoint variable sets A and B, marginalizing over B provides a function of A that can be given as f↓A(a) = Σb f(a, b). This process provides the marginals of a single variable or a set of variables. Generally the process can be done by summing, maximizing or minimizing over the marginalizing variables in B. Normally the summation operator is used to calculate probability distributions. In MAP hypothesis, both summation and maximization operators are involved.
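The combination and marginalization operations can be sketched with valuations stored as Python dictionaries keyed by 0/1 assignment tuples over an explicit variable list; this representation is our own choice for the sketch:

```python
from itertools import product

def combine(vars_f, f, vars_g, g):
    # Pointwise multiplication over the union of the two variable sets.
    vars_h = list(dict.fromkeys(vars_f + vars_g))   # order-preserving union
    h = {}
    for assign in product((0, 1), repeat=len(vars_h)):
        env = dict(zip(vars_h, assign))
        h[assign] = (f[tuple(env[v] for v in vars_f)]
                     * g[tuple(env[v] for v in vars_g)])
    return vars_h, h

def marginalize(vars_f, f, keep):
    # Sum out every variable not listed in `keep`.
    idx = [vars_f.index(v) for v in keep]
    out = {}
    for assign, val in f.items():
        key = tuple(assign[i] for i in idx)
        out[key] = out.get(key, 0.0) + val
    return list(keep), out
```

Running `combine` on the two functions of Table III reproduces the products shown there; swapping the summation in `marginalize` for a `max` gives the maximization variant used later for MAP.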

The computational scheme of the Shenoy-Shafer algorithm is based on the fusion algorithm proposed by Shenoy in [23]. Given a probabilistic network, like our probabilistic error model in Fig. 3(a), the fusion method can be explained as follows,

  1. The valuations provided are associated with the corresponding variables, forming a valuation network as shown in Fig. 3(b). In our example, the valuations are P(I1) for I1, P(I2) for I2, P(X1 | I1, I2) for X1, P(X2 | I1, I2) for X2, and P(O1 | X1, X2) for O1.

  2. A variable whose probability distribution has to be found is selected. In our example, let us say we select I1.

  3. Choose an arbitrary variable elimination order. For the example network, let us choose the order O1,X1,X2,I2. When a variable Y is eliminated, the functions associated with that variable are combined and the resulting function is marginalized over Y; this function is then associated with the neighbors of Y. This process is repeated until all the variables in the elimination order are removed. Fig. 3 illustrates the fusion process.

    Eliminating O1 yields the function (P(O1 | X1, X2))↓{X1, X2}, associated to neighbors X1, X2.
    Eliminating X1 yields the function (P(X1 | I1, I2) ⊗ (…))↓{I1, I2, X2}, associated to neighbors I1, I2, X2.
    Eliminating X2 yields the function (P(X2 | I1, I2) ⊗ (…))↓{I1, I2}, associated to neighbors I1, I2.
    Eliminating I2 yields the function (P(I2) ⊗ (…))↓{I1}, associated to neighbor I1.

    According to a theorem presented in [22], combining the functions associated with I1 yields the probability distribution of I1 [22]. Note that the combination of all the valuations represents the joint probability of the entire probabilistic error model.

  4. The above process is repeated for all the other variables individually.
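The fusion steps above can be reproduced in a self-contained sketch for the example network of Fig. 3(a); the factor representation and the ε value are illustrative assumptions of this sketch, not the paper's implementation:

```python
from itertools import product

def make_factor(vars_, fn):
    # Tabulate fn over all 0/1 assignments of vars_.
    return (vars_, {a: fn(dict(zip(vars_, a)))
                    for a in product((0, 1), repeat=len(vars_))})

def eliminate(factors, var):
    # Combine every factor mentioning var, then sum var out.
    touching = [f for f in factors if var in f[0]]
    rest = [f for f in factors if var not in f[0]]
    vars_h = list(dict.fromkeys(v for vs, _ in touching for v in vs))
    keep = [v for v in vars_h if v != var]
    i = vars_h.index(var)
    summed = {}
    for a in product((0, 1), repeat=len(vars_h)):
        env = dict(zip(vars_h, a))
        val = 1.0
        for vs, tab in touching:
            val *= tab[tuple(env[v] for v in vs)]
        key = tuple(x for j, x in enumerate(a) if j != i)
        summed[key] = summed.get(key, 0.0) + val
    return rest + [(keep, summed)]

eps = 0.05
and_p = lambda e, i1, i2, x: (1 - e) if x == (i1 & i2) else e
factors = [
    make_factor(["I1"], lambda v: 0.5),            # P(I1)
    make_factor(["I2"], lambda v: 0.5),            # P(I2)
    make_factor(["I1", "I2", "X1"],                # error-free AND
                lambda v: and_p(0.0, v["I1"], v["I2"], v["X1"])),
    make_factor(["I1", "I2", "X2"],                # error-prone AND
                lambda v: and_p(eps, v["I1"], v["I2"], v["X2"])),
    make_factor(["X1", "X2", "O1"],                # XOR comparator
                lambda v: 1.0 if v["O1"] == (v["X1"] ^ v["X2"]) else 0.0),
]
for var in ["O1", "X1", "X2", "I2"]:               # elimination order
    factors = eliminate(factors, var)
```

After the four eliminations, every remaining factor is over I1 alone, and combining them yields the probability distribution of I1, mirroring the closing step of the fusion algorithm.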

Fig. 4: (a) Partial illustration of Binary Join tree construction method for the first chosen variable. (b) Complete illustration of Binary Join tree construction method.

To perform efficient computation, an additional undirected network called a join tree is formed from the original probabilistic network. The nodes of the join tree contain clusters of nodes from the original probabilistic network. The information of locally connected variables, provided through valuations, is propagated in the join tree by a message passing mechanism. To increase the computational efficiency of the Shenoy-Shafer algorithm, a special kind of join tree, called a binary join tree, is used. In a binary join tree, every node is connected to no more than three neighbors, so only two functions are combined at a time, thereby reducing the computational complexity. We will first explain the method to construct a binary join tree, as proposed by Shenoy in [22], and then we will explain the inference scheme using the message passing mechanism.

Construction of Binary Join Tree: The binary join tree is constructed using the fusion algorithm. The construction of binary join tree can be explained as follows,

  1. To begin with we have,
    A set V that contains all the variables from the original probabilistic network. In our example, V = {I1, I2, X1, X2, O1}.
    A set S that contains the subsets of variables that should be present in the binary join tree, i.e., the subsets that denote the valuations and the subsets whose probability distributions need to be calculated. In our example, let us say that we need to calculate the individual probability distributions of all the variables. Then we have S = {{I1}, {I2}, {X1,I1,I2}, {X2,I1,I2}, {O1,X1,X2}, {X1}, {X2}, {O1}}.
    A set N that contains the nodes of the binary join tree; it is initially empty.
    A set E that contains the edges of the binary join tree; it is initially empty.
    We also need an order in which to choose the variables to form the binary join tree. In our example, since the goal is to find the probability distribution of I1, this order should reflect the variable elimination order (O1,X1,X2,I2,I1) used in the fusion algorithm.

  2. 1:  while V ≠ ∅ do
    2:     Choose a variable v ∈ V according to the chosen order
    3:     Γ ← {s ∈ S : v ∈ s}
    4:     while |Γ| > 1 do
    5:        Choose s1, s2 ∈ Γ such that |s1 ∪ s2| ≤ |sj ∪ sk| for all sj, sk ∈ Γ
    6:        s ← s1 ∪ s2
    7:        N ← N ∪ {s1, s2, s}
    8:        E ← E ∪ {{s1, s}, {s2, s}}
    9:        S ← (S \ {s1, s2}) ∪ {s}
    10:        Γ ← (Γ \ {s1, s2}) ∪ {s}
    11:     end while
    12:     if |V| > 1 then
    13:        Take s ∈ Γ and let r = s \ {v}
    14:        N ← N ∪ {s, r}
    15:        E ← E ∪ {{s, r}}
    16:        S ← (S \ {s}) ∪ {r}
    17:        Γ ← {r}
    18:     end if
    19:     V ← V \ {v}
    20:     
    21:  end while
  3. The final structure will have some duplicate clusters. Two neighboring duplicate clusters can be merged into one if the merged node does not end up having more than three neighbors. After merging the duplicate nodes, we get the binary join tree.
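The inner pairing step of this construction (among the clusters that contain the chosen variable, repeatedly merge the two whose union is smallest, so that nodes keep at most three neighbors) can be sketched as follows; the function name and return format are our own:

```python
def pair_clusters(clusters):
    # Merge, two at a time, the pair of clusters with the smallest union,
    # recording an edge from each operand to the merged node.
    clusters = [frozenset(c) for c in clusters]
    edges = []
    while len(clusters) > 1:
        a, b = min(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]),
                   key=lambda p: len(p[0] | p[1]))
        merged = a | b
        edges += [(a, merged), (b, merged)]
        clusters.remove(a)
        clusters.remove(b)
        clusters.append(merged)
    return clusters[0], edges
```

Preferring the smallest union keeps the intermediate clusters, and hence the tables combined during message passing, as small as possible.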

Fig. 4 illustrates the binary join tree construction method for the probabilistic error model in Fig. 3(a). Fig. 4(a) explains a portion of the construction method for the first chosen variable, here O1. Fig. 4(b) illustrates the entire method. Note that, even though the binary join tree is constructed with a specific variable elimination order for finding the probability distribution of I1, it can be used to find the probability distributions of other variables too.

Inference in binary join tree: Inference in a binary join tree is performed using a message passing mechanism. Initially all the valuations are associated with the appropriate clusters. In our example, in Fig. 5, the valuations are associated with the following clusters,
- P(I1), associated to cluster C11
- P(I2), associated to cluster C10
- P(X1 | I1, I2), associated to cluster C6
- P(X2 | I1, I2), associated to cluster C7
- P(O1 | X1, X2), associated to cluster C2
A message passed from a cluster NB, containing a variable set B, to a cluster NC, containing a variable set C, can be given as,

μB→C = ( vB ⊗ ( ⊗ over D ∈ ne(NB)\{NC} of μD→B ) )↓(B ∩ C)   (7)

where is the valuation associated with cluster . If cluster is not associated with any valuation, then this function is omitted from the equation. The message from cluster can be sent to cluster only after cluster receives messages from all its neighbors other than . The resulting function is marginalized over the variables in cluster that are not in cluster . To calculate the probability distribution of a variable , the cluster having that variable alone is taken as root and the messages are passed towards this root. Probability of , , is calculated at the root. In our example, at Fig. 5(a), to find the probability distribution of I1, the cluster C11 is chosen as the root. The messages from all the leaf clusters are sent towards C11 and finally the probability distribution of I1 can be calculated as, . Also note that the order of the marginalizing variables is O1,X1,X2,I2 which exactly reflects the elimination order used to construct the binary join tree. As we mentioned before, this binary join tree can be used to calculate probability distributions of other variables also. In our example, at Fig. 5(b), to find out the probability distribution of O1, cluster C1 is chosen as root and the messages from the leaf clusters are passed towards C1 and finally the probability distribution of O1 can be calculated as, . Note that the order of the marginalizing variables changes to I1,I2,X1,X2. We can also calculate joint probability distributions of the set of variables that forms a cluster in the binary join tree. In our example, the joint probability can be calculated by assigning cluster C9 as root. In this fashion, the probability distributions of any individual variable or a set of variables can be calculated by choosing appropriate root cluster and sending the messages towards this root. During these operations some of the calculations are not modified and so performing them again will prove inefficient. 
Using the binary join tree structure, these calculations can be stored, thereby eliminating redundant recalculation. In the binary join tree, between any two neighboring clusters, the messages in both directions are stored. Fig. 5(c) illustrates this mechanism using our example.
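The combination and marginalization operations behind this message passing can be sketched in a few lines of Python. This is a minimal stand-in, not the paper's implementation: factors are dicts over binary-variable assignments, and the network (a prior and one noisy-inverter CPT) is purely illustrative.

```python
from itertools import product

def combine(f, g):
    """Pointwise product of two factors; scope is the union of scopes."""
    scope = list(dict.fromkeys(f["scope"] + g["scope"]))
    table = {}
    for assign in product([0, 1], repeat=len(scope)):
        env = dict(zip(scope, assign))
        fa = tuple(env[v] for v in f["scope"])
        ga = tuple(env[v] for v in g["scope"])
        table[assign] = f["table"][fa] * g["table"][ga]
    return {"scope": scope, "table": table}

def sum_out(f, var):
    """Marginalize one variable out of a factor (summation)."""
    scope = [v for v in f["scope"] if v != var]
    idx = f["scope"].index(var)
    table = {}
    for assign, p in f["table"].items():
        rest = assign[:idx] + assign[idx + 1:]
        table[rest] = table.get(rest, 0.0) + p
    return {"scope": scope, "table": table}

# Prior P(X) and CPT P(Y|X) for an inverter-like gate with error 0.1.
pX = {"scope": ["X"], "table": {(0,): 0.5, (1,): 0.5}}
pYX = {"scope": ["Y", "X"],
       "table": {(0, 0): 0.1, (1, 0): 0.9,   # Y = NOT X with prob 0.9
                 (0, 1): 0.9, (1, 1): 0.1}}

# A "message" from the {X} cluster to the {Y,X} cluster is just pX here;
# the root combines incoming messages and sums out the non-root variable.
pY = sum_out(combine(pYX, pX), "X")
print(pY["table"])
```

With a uniform prior the inverter output is uniform as well, so `pY` assigns probability 0.5 to each value.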

If an evidence set $\mathbf{e}$ is provided, the additional valuations contributed by the evidence have to be associated with the appropriate clusters. A valuation for an evidenced variable can be associated with a cluster containing that variable alone. In our example, if the variable O1 is evidenced, the corresponding valuation can be associated with cluster C1. When finding the probability distribution of a variable $x$, the inference mechanism (as explained before) with an evidence set yields the joint probability $P(x, \mathbf{e})$ instead of the conditional $P(x \mid \mathbf{e})$; the latter is obtained as $P(x \mid \mathbf{e}) = P(x, \mathbf{e})/P(\mathbf{e})$. Calculation of the probability of evidence $P(\mathbf{e})$ is crucial for MAP computation.

Fig. 5: (a) Message passing with cluster C11 as root. (b) Message passing with cluster C1 as root. (c) Message storage mechanism.

The MAP probabilities are calculated by performing inference on the binary join tree with the evidence variables instantiated. For a given partial instantiation of the MAP variables, the corresponding MAP probability is calculated by maximizing over the MAP variables that are not evidenced. This calculation can be done by modifying the message passing scheme to accommodate maximization over the unevidenced MAP variables. So for MAP calculation, the marginalization operation involves both maximization and summation: the maximization is performed over the unevidenced MAP variables in I, and the summation is performed over all the other variables in X and O. For MAP, a message passed from cluster $B$ to cluster $C$ is calculated as,

$\mu_{B \rightarrow C} = \max_{\mathbf{I}'} \sum_{\mathbf{S}} \Big( \varphi_B \otimes \bigotimes_{N \in \mathcal{N}(B) \setminus \{C\}} \mu_{N \rightarrow B} \Big)$

(8)

where $\mathbf{I}' = (B \setminus C) \cap \mathbf{I}$ is the set of unevidenced MAP variables being eliminated and $\mathbf{S} = (B \setminus C) \cap (\mathbf{X} \cup \mathbf{O})$ is the set of summation variables being eliminated.

Fig. 6: Binary join tree for the probabilistic error model in Fig. 1(c).
Fig. 7: Search process for MAP computation.

Here the most important aspect is that the maximization and summation operators in Eq. 8 do not commute:

$\max_{\mathbf{I}} \sum_{\mathbf{X}, \mathbf{O}} f \;\le\; \sum_{\mathbf{X}, \mathbf{O}} \max_{\mathbf{I}} f$

(9)

so an order that maximizes before summing can only over-estimate the true value.

So during message passing in the binary join tree, a valid order of the marginalizing variables (a valid variable elimination order) should place the summation variables in X and O before the maximization variables in I. A message pass through an invalid variable elimination order can result in a loose upper bound that is stuck at a local maximum, which eventually results in the elimination of some probable instantiations of the MAP variables I during the search process. However, an invalid elimination order can still provide an initial upper bound on the MAP probability to start with. The closer the invalid variable elimination order is to a valid one, the tighter the upper bound will be. In the binary join tree, any cluster can be chosen as root to get this initial upper bound. For example, in Fig. 5(b), choosing cluster C1 as root results in the invalid variable elimination order I1,I2,X1,X2, and a message pass towards this root can give the initial upper bound. It is also essential to use a valid variable elimination order during the construction of the binary join tree so that there is at least one path that can provide a good upper bound.
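The non-commutativity of Eq. 9, and the fact that the invalid order yields an over-estimate, can be checked on a tiny factor. The numbers below are arbitrary illustrations, not taken from the paper's model.

```python
# f(i, x): a toy factor over one MAP variable i and one summation variable x.
# Values chosen to be exactly representable in binary floating point.
f = {(0, 0): 0.25, (0, 1): 0.5,
     (1, 0): 0.5,  (1, 1): 0.125}

# Valid order: sum out x first, then maximize over i.
sum_then_max = max(f[(i, 0)] + f[(i, 1)] for i in (0, 1))

# Invalid order: maximize over i first, then sum over x.
max_then_sum = sum(max(f[(0, x)], f[(1, x)]) for x in (0, 1))

print(sum_then_max, max_then_sum)  # 0.75 vs 1.0: the invalid order over-estimates
```

The invalid order returns 1.0 against the true value 0.75, which is exactly why it can only serve as an upper bound during the search.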

Fig. 6 gives the corresponding binary join tree for the probabilistic error model given in Fig. 1(c), constructed with a valid variable elimination order (O1,X3,X6,X1,X2,X4,X5,I3,I2,I1). In this model, there are three MAP variables, I1, I2 and I3, over which the MAP hypothesis is computed.

The initial upper bound is calculated by choosing cluster C2 as root and passing messages towards C2. As specified earlier, this upper bound can be calculated with any cluster as root. With C2 as root, an upper bound will certainly be obtained, since the variable elimination order (I3,I2,I1,X4,X5,X1,X2,X3,X6) is an invalid one; but because the maximization variables are at the very beginning of the order, having C2 as root yields a loose upper bound. Instead, if C16 is chosen as root, the elimination order (O1,X3,X6,X1,I3,X4,X5,I2,I1) is closer to a valid order, so a much tighter upper bound can be achieved. To calculate an intermediate upper bound, the MAP variable newly added to the partial instantiation is identified, and the cluster containing that variable alone is selected as root. By doing this, a valid elimination order and a proper upper bound can be achieved. For example, to calculate the intermediate upper bound where an instantiation of I1 is newly added to the initially empty set, a valid elimination order should have the maximization variables I2, I3 at the end. To achieve this, the cluster containing I1 alone is chosen as root, thereby yielding the valid elimination order (O1,X3,X6,X1,X2,X4,X5,I3,I2).

IV-B Calculation of the exact MAP solution

The calculation of the exact MAP solution can be explained as follows,

  1. To start with we have the following:
    a subset of MAP variables, initially empty;
    a partial instantiation set of the MAP variables, initially empty;
    partial instantiation sets used to store intermediate upper bounds, initially empty;
    the MAP instantiation, initially obtained by sequentially initializing the MAP variables to a particular instantiation and performing local taboo search around the neighbors of that instantiation [30] (since this method is out of the scope of this paper, we do not explain it in detail);
    the MAP probability, initially calculated by performing inference on the probabilistic error model;
    the number of values or states that can be assigned to a variable, which is 2 for every variable since we are dealing with digital signals.

  2. 1:  Calculate . /*This is the initial upper bound of MAP probability.*/
    2:  if  then
    3:     
    4:  else
    5:     
    6:     
    7:  end if
    8:  while  do
    9:     Choose a variable .
    10:     .
    11:     while  do
    12:        Choose a value of
    13:        .
    14:        Calculate from binary join tree.
    15:        if  then
    16:           
    17:           
    18:        else
    19:           
    20:        end if
    21:        
    22:     end while
    23:     
    24:     if  then
    25:        goto line 29
    26:     end if
    27:     
    28:  end while
    29:  if  then
    30:     
    31:  else
    32:     
    33:  end if

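The depth-first branch-and-bound search in the listing above can be sketched as follows. This is a hedged stand-in, not the paper's implementation: `upper_bound` here brute-forces the bound that the join-tree inference would supply, and the toy joint distribution over I1, I2, I3 is purely illustrative.

```python
from itertools import product

MAP_VARS = ["I1", "I2", "I3"]

def joint(assign):
    """Toy p(i1, i2, i3); any distribution would do (not the paper's model)."""
    w = {"0": 0.75, "1": 0.25}
    p = 1.0
    for v in MAP_VARS:
        p *= w[assign[v]]
    return p

def completions(partial):
    """All full instantiations consistent with a partial one."""
    free = [v for v in MAP_VARS if v not in partial]
    for vals in product("01", repeat=len(free)):
        yield {**partial, **dict(zip(free, vals))}

def upper_bound(partial):
    # Stand-in for join-tree inference: the exact max over completions.
    # (An invalid elimination order would return an over-estimate of this.)
    return max(joint(c) for c in completions(partial))

def map_search(partial, best):
    """Depth-first search; prune any branch whose bound cannot beat best."""
    free = [v for v in MAP_VARS if v not in partial]
    if not free:
        p = joint(partial)
        return (p, dict(partial)) if p > best[0] else best
    var = free[0]
    for val in "01":
        cand = {**partial, var: val}
        if upper_bound(cand) > best[0]:   # prune dominated instantiations
            best = map_search(cand, best)
    return best

prob, inst = map_search({}, (0.0, None))
print(prob, inst)  # 0.421875 with every variable set to "0"
```

Because the bound for I1 = "1" (0.140625) falls below the incumbent 0.421875, that half of the search tree is pruned without being expanded, which is exactly the mechanism of lines 11-23.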
The pruning of the search process is handled in lines 11-23. After choosing a MAP variable, the partial instantiation set is updated by adding the best instantiation of that variable, thereby ignoring its other instantiations. This can be seen in Fig. 7, which illustrates the search process for MAP computation using the probabilistic error model of Fig. 1(c) as an example.

IV-C Calculating the maximum output error probability

According to our error model, the MAP variables represent the primary input signals of the underlying digital logic circuit. So after the MAP hypothesis, we will have the input vector that has the highest probability of producing an error at the output. The random variables I that represent the primary input signals are then instantiated with this vector, which forms the evidence set, and inference is performed. The output error probability is obtained by observing the probability distributions of the comparator logic variables O: after inference, the probability distribution of each comparator variable is available, from which the corresponding output error probability follows. Finally, the maximum of these values gives the maximum output error probability.
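For intuition, the quantity being maximized can be computed by brute force on a tiny circuit. The sketch below (not the MAP procedure, and not the paper's model) enumerates gate-flip patterns of a two-NAND circuit in which each gate output flips independently with probability `EPS`, giving the exact output error probability per input vector; the maximum over the input space is what the MAP hypothesis finds without enumeration.

```python
from itertools import product

EPS = 0.1
nand = lambda x, y: 1 - (x & y)

def faulty_out(a, b, f1, f2):
    g1 = nand(a, b) ^ f1       # first NAND output, flipped if f1 = 1
    return nand(g1, b) ^ f2    # second NAND output, flipped if f2 = 1

def error_prob(a, b):
    """Exact P(output wrong) for input (a, b), summing over flip patterns."""
    ideal = faulty_out(a, b, 0, 0)
    p = 0.0
    for f1, f2 in product((0, 1), repeat=2):
        w = (EPS if f1 else 1 - EPS) * (EPS if f2 else 1 - EPS)
        if faulty_out(a, b, f1, f2) != ideal:
            p += w
    return p

errs = {(a, b): error_prob(a, b) for a, b in product((0, 1), repeat=2)}
worst = max(errs, key=errs.get)
print(worst, errs[worst])
```

Inputs with b = 1 let a flip of the first gate propagate, so their error probability (0.18) exceeds that of inputs with b = 0 (0.1): the worst-case vector is input-dependent, as the paper argues.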

IV-D Computational complexity of MAP estimate

The time complexity of MAP depends on that of the depth-first branch-and-bound search on the input instantiation search tree and on that of inference in the binary join tree. The former depends on the number of MAP variables and the number of states assigned to each variable. In our case each variable is assigned two states, so the time complexity can be given as $O(2^n)$, where $n$ is the number of MAP variables. This is the worst-case time complexity, assuming that the search tree is not pruned; with pruning, the complexity is correspondingly lower.

The time complexity of inference in the binary join tree depends on the number of cliques and the size of the largest clique; it can be represented as $O(n_c 2^{w})$, where $n_c$ is the number of cliques and $w$ is the number of variables in the largest clique. In any given probabilistic model with $N$ variables representing a joint probability, the corresponding jointree always has $n_c \le N$ [25]. Also, depending on the underlying circuit structure, the largest clique of the corresponding probabilistic error model can contain a number of variables equal or close to $N$, which in turn determines the time complexity.

Since for every pass in the search tree inference has to be performed in the join tree to get the upper bound of the MAP probability, the worst-case time complexity for MAP can be given as $O(2^{n} \cdot n_c 2^{w})$, where $n$ is the number of MAP variables, $n_c$ the number of cliques and $w$ the size of the largest clique. The space complexity of MAP depends on the number of MAP variables for the search tree, and on the number of variables in the probabilistic error model and the size of the largest clique for the join tree.
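A back-of-the-envelope helper makes the growth of this worst-case bound concrete. The clique counts and clique sizes below are illustrative assumptions, not measurements from the benchmark circuits.

```python
def map_worst_case(n_inputs, n_cliques, w):
    """Worst-case operation count O(2^n * n_c * 2^w) implied above."""
    search_leaves = 2 ** n_inputs        # unpruned search-tree leaves
    inference_cost = n_cliques * 2 ** w  # one join-tree inference pass
    return search_leaves * inference_cost

# Illustrative numbers for a 5-input circuit with 11 cliques, clique size 4:
print(map_worst_case(5, 11, 4))  # 32 * 11 * 16 = 5632
```

Doubling the number of inputs squares the first factor while the inference factor is untouched, which matches the observation that run time tracks circuit complexity rather than gate error probability.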

V Experimental Results

Fig. 8: Flow chart describing the experimental setup and process

The experiments are performed on ISCAS85 and MCNC benchmark circuits. The computing device used is a Sun server with 8 CPUs, each a 1.5GHz UltraSPARC IV processor, with at least 32GB of RAM.

V-A Experimental procedure for calculating maximum output error probability

Our main goal is to provide the maximum output error probabilities for different gate error probabilities. To get them, every output signal of a circuit has to be examined through MAP estimation, which is performed using the algorithms provided in [31]. The experimental procedure is illustrated as a flow chart in Fig. 8. The steps are as follows.

  1. First, an evidence has to be provided to one of the comparator output signal variables in the set O, instantiating it to logic 1. Recall that these variables have a probability distribution based on XOR logic, so providing this evidence is equivalent to forcing the output to be wrong.

  2. The comparator outputs are evidenced individually and the corresponding input instantiations i are obtained by performing MAP.

  3. Then the primary input variables in the probabilistic error model are instantiated with each instantiation i, and inference is performed to get the output probabilities.

  4. The output error probability is noted at all the comparator outputs for each i, and the maximum value gives the maximum output error probability.

  5. The entire operation is repeated for different gate error probability values.

Fig. 9: Circuit-specific error bounds for seven benchmark circuits, panels (a)-(g). The figures also show the comparison between maximum and average output error probabilities, which indicates the importance of using the maximum output error probability to achieve a tighter error bound.

V-B Worst-case Input Vectors

Circuit    Inputs   Input vector    Gate error probability
c17        5        01111           0.005 - 0.2
max_flat   8        00010011        0.005 - 0.025
                    11101000        0.03 - 0.05
                    11110001        0.055 - 0.2
voter      12       000100110110    0.01 - 0.19
                    111011100010    0.2
TABLE IV: Worst-case input vectors from MAP

Table IV gives the worst-case input vectors obtained from MAP, i.e., the input vectors that give the maximum output error probability. The notable results are as follows.

  • In max_flat and voter the worst-case input vector from MAP changes with the gate error probability, while in c17 it does not change.

  • In the gate error probability range {0.005-0.2}, max_flat has three different worst-case input vectors while voter has two.

  • This implies that the worst-case input vectors not only depend on the circuit structure but can also change dynamically with the gate error probability. This could concern designers, as the worst-case inputs might change after gate error probabilities are reduced by error mitigation schemes. Hence, explicit MAP computation is necessary to determine the maximum error probabilities and worst-case vectors after each redundancy scheme is applied.

V-C Circuit-specific error bounds for fault-tolerant computation

The error bound for a circuit can be obtained by calculating the gate error probability that drives the output error probability of at least one output to a hard bound of 0.5, beyond which the output does not depend on the input signals or the circuit structure. When the output error probability reaches 0.5, the output signal behaves as a non-functional random number generator for at least one input vector, so 0.5 can be treated as a hard bound.
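The hard-bound reasoning reduces to one line of arithmetic: a signal flipped with probability 0.5 is uniform regardless of its correct value, so it carries no information about the inputs. A minimal check:

```python
# P(out = 1) = P(no flip)*correct + P(flip)*(1 - correct), at flip prob 0.5.
for correct in (0, 1):
    p_out_is_1 = (1 - 0.5) * correct + 0.5 * (1 - correct)
    print(correct, p_out_is_1)   # 0.5 in both cases
```

Since the output distribution is identical whether the fault-free value is 0 or 1, the signal is indistinguishable from a fair coin, which is why 0.5 serves as the hard bound.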

Fig. 9 gives the error bounds for various benchmark circuits. It also shows the comparison between maximum and average output error probabilities as the gate error probability changes. These graphs are obtained by performing the experiment over a range of gate error probability values. The average error probabilities are obtained from our previous work by Rejimon et al. [27]. The notable results are as follows.

  • The circuit c17 consists of 6 NAND gates, and the error bound obtained for each NAND gate in c17 is greater than the conventional error bound for a NAND gate [7, 8]. The error bound of the same NAND gate in the voter circuit (which contains 10 NAND gates, 16 NOT gates, 8 NOR gates, 15 OR gates and 10 AND gates) is less than the conventional error bound. This indicates that the error bound for an individual NAND gate placed in a circuit can depend on the circuit structure. The same can be true for all other logic gates.

  • The maximum output error probabilities are much larger than the average output error probabilities, and therefore reach the hard bound at comparatively lower gate error probabilities, making them a crucial design parameter for achieving tighter error bounds. Only for two of the circuits does the average output error probability reach the hard bound within the examined range, while the maximum output error probabilities of these circuits reach the hard bound at far smaller gate error probabilities.

  • While the error bounds for all but one of the circuits are already small, the error bounds of a few circuits are smaller still, making them highly vulnerable to errors.

Circuit    Inputs   Gates   Time
c17        5        6       0.047s
max_flat   8        29      0.110s
voter      12       59      0.641s
pc         27       103     225.297s
count      35       144     36.610s
alu4       14       63      58.626s
malu4      14       92      588.702s
TABLE V: Run times for MAP computation

Table V tabulates the run time for MAP computation. The run time does not change significantly for different gate error probability values, so we provide a single run time that applies to all of them. This is expected, as MAP complexity (discussed in Sec. IV-D) is determined by the number of inputs and the number of variables in the largest clique, which in turn depends on the circuit complexity. Note that, even though malu4 has fewer inputs than pc and count, it takes much more time to perform the MAP estimate due to its complex circuit structure.

V-D Validation using HSpice simulator

Fig. 10: (a) Output error probabilities over the entire input vector space at a fixed gate error probability. (b) Output error probabilities calculated from the probabilistic error model at the same gate error probability. (c) Output error probabilities calculated from HSpice at the same gate error probability.
Circuit    Model   HSpice   % diff over HSpice
c17        0.312   0.315    0.95
max_flat   0.457   0.460    0.65
voter      0.573   0.570    0.53
pc         0.533   0.536    0.56
count      0.492   0.486    1.23
alu4       0.517   0.523    1.15
malu4      0.587   0.594    1.18
TABLE VI: Comparison between maximum error probabilities obtained from the proposed model and from the HSpice simulator at a fixed gate error probability

HSpice model: Error can be induced in any signal using external voltage sources, and this can be modeled in HSpice [43]. In our HSpice model we induce an error at every gate's output using such a source. Consider a gate whose error-free output signal is combined with a piecewise linear (PWL) voltage source that induces the error: the error-prone output depends on both the error-free signal and the PWL voltage, so any change in the PWL voltage is reflected in the output. When the PWL source is low, the error-prone output follows the error-free signal; when it is high, the output is inverted, thereby inducing an error. The data points for the PWL voltage source are provided by computations on a finite automaton that models the underlying error-prone circuit in which each gate has the given gate error probability.
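Behaviorally, the PWL injection amounts to XORing the fault-free waveform with a Bernoulli error stream. The sketch below models that idea at the bit level; the sample count, seed, and `EPS` value are illustrative assumptions, not the paper's simulation parameters.

```python
import random

random.seed(1)
EPS = 0.05  # gate error probability (illustrative)

def inject(s_orig, eps=EPS):
    """Flip each sample of the fault-free waveform with probability eps."""
    return [bit ^ (1 if random.random() < eps else 0) for bit in s_orig]

s_orig = [0, 1, 1, 0] * 2500          # 10,000 fault-free samples
s_err = inject(s_orig)
observed = sum(a != b for a, b in zip(s_orig, s_err)) / len(s_orig)
print(observed)  # close to EPS = 0.05
```

Electrically, the PWL source realizes exactly this XOR: the injected waveform is high precisely on the samples where the error stream is 1.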

Simulation setup: Note that, for a given input vector, a single HSpice simulation run is not enough to validate the results from our probabilistic model; moreover, the circuit has to be simulated for every possible input vector to find the worst-case one. For a given circuit, HSpice simulations are therefore conducted for all possible input vectors, where for each vector the circuit is simulated over a large number of runs and the comparator nodes are sampled. From this data the maximum output error probability and the corresponding worst-case input vector are obtained.

Table VI gives the comparison between the maximum error probabilities obtained from the proposed model and from the HSpice simulator. The notable results are as follows.

  • The simulation results from HSpice almost exactly coincide with those of our error model for all circuits.

  • The highest % difference of our error model over HSpice is just 1.23%.

Fig. 10(a) gives the output error probabilities for the entire input vector space of the circuit at the given gate error probability. The notable results are as follows.

  • It can be clearly seen that the results from both the probabilistic error model and the HSpice simulations identify the same input vector as giving the maximum output error probability.

Fig. 10(b) and (c) give the output error probabilities obtained from the probabilistic error model and from HSpice, respectively. In order to show that the circuit has a large number of input vectors capable of generating maximum output error, we plot only the output error probabilities lying above a threshold defined by the mean and standard deviation of the output error probabilities. The notable results are as follows.

  • It is clearly evident from Fig. 10(b) that the circuit has a considerably large number of input vectors capable of generating output error, thereby making it error sensitive. The equivalent HSpice results in Fig. 10(c) confirm this aspect.

  • It is clearly evident that the probabilistic error model and HSpice yield the same worst-case input vector as the one obtained through the MAP hypothesis.

V-E Results with multiple gate error probabilities

Fig. 11: Comparison of the average and maximum output error probabilities and run times for a fixed gate error probability of 0.005, a fixed gate error probability of 0.05, and variable gate error probabilities ranging over 0.005 - 0.05.

Apart from incorporating a single gate error probability in all gates of a given circuit, our model also supports incorporating different error probabilities for different gates. Ideally these values would come from device variability and manufacturing defects. Each gate in a circuit is given an error probability selected at random from a fixed range, say 0.005 - 0.05.
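Drawing per-gate error probabilities from such a range is a one-liner. The sketch below is illustrative (gate names and seed are placeholders, and a uniform draw stands in for device-derived values):

```python
import random

random.seed(7)
EPS_LO, EPS_HI = 0.005, 0.05   # fixed range, as in the experiment above

gates = ["G1", "G2", "G3", "G4", "G5"]   # hypothetical gate names
eps = {g: random.uniform(EPS_LO, EPS_HI) for g in gates}
print(eps)
```

In the full model each gate's conditional probability table would then be built from its own `eps[g]` instead of a shared value.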

We present the results in Fig. 11. Here we compare the average and maximum output error probabilities and run times for a fixed gate error probability of 0.005, a fixed gate error probability of 0.05, and variable gate error probabilities ranging over 0.005 - 0.05. The notable results are as follows.

  • It can be seen that the output error probabilities for variable gate error probabilities are closer to those for the fixed value 0.05 than to those for 0.005, indicating that the outputs are affected more by the erroneous gates with the higher error probability.

  • The run times for all three cases are almost equal, indicating the efficiency of our model.

VI Conclusion

We have proposed a probabilistic model that computes the exact maximum output error probability for a logic circuit, mapping the problem as a maximum a posteriori hypothesis over the underlying joint probability distribution function of the network. We have demonstrated the model on standard ISCAS and MCNC benchmarks and provided the maximum output error probabilities and the corresponding worst-case input vectors. We have also studied circuit-specific error bounds for fault-tolerant computing. The results clearly show that the error bounds are highly dependent on circuit structure and that computation of the maximum output error is essential to attain a tighter bound.

Extending our proposed algorithm, one can also obtain a set of, say, N input patterns that are highly likely to produce an error at the output. Circuit designers will have to pay extra attention, in terms of input redundancy, to this set of vulnerable inputs responsible for the high end of the error spectrum. We are already working on stochastic heuristic algorithms for both average and maximum error for mid-size benchmarks where exact algorithms are not tractable. This work should serve as a baseline exact estimate to judge the efficacy of the various stochastic heuristic algorithms that will be essential for circuits of higher dimensions. Our future efforts are to model gate error probabilities derived from device physics and fabrication methods, to model delay faults due to timing violations, and to model variability in the error probabilities.

References

  • [2] J. von Neumann, “Probabilistic logics and the synthesis of reliable organisms from unreliable components,” in Automata Studies (C. E. Shannon and J. McCarthy, eds.), pp. 43–98, Princeton Univ. Press, Princeton, N.J., 1954.
  • [3] N. Pippenger, “Reliable Computation by Formulas in the Presence of Noise”, IEEE Trans on Information Theory, vol. 34(2), pp. 194-197, 1988.
  • [4] T. Feder, “Reliable Computation by Networks in the Presence of Noise”, IEEE Trans on Information Theory, vol. 35(3), pp. 569-571, 1989.
  • [5] B. Hajek and T. Weller, “On the Maximum Tolerable Noise for Reliable Computation by Formulas”, IEEE Trans on Information Theory, vol. 37(2), pp. 388-391, 1991.
  • [6] W. Evans and L. J. Schulman, “On the Maximum Tolerable Noise of k-input Gates for Reliable Computation by Formulas”, IEEE Trans on Information Theory, vol. 49(11), pp. 3094-3098, 2003.
  • [7] W. Evans and N. Pippenger, “On the Maximum Tolerable Noise for Reliable Computation by Formulas” IEEE Transactions on Information Theory, vol. 44(3) pp. 1299–1305, 1998.
  • [8] J. B. Gao, Y. Qi and J. A. B. Fortes, “Bifurcations and Fundamental Error Bounds for Fault-Tolerant Computations” IEEE Transactions on Nanotechnology, vol. 4(4) pp. 395–402, 2005.
  • [9] D. Marculescu, R. Marculescu and M. Pedram, “Theoretical Bounds for Switching Activity Analysis in Finite-State Machines” IEEE Transactions on VLSI Systems, vol. 8(3), pp. 335–339, 2000.
  • [10] P. G. Depledge, “Fault-tolerant Computer Systems”, IEE Proc. A, vol. 128(4), pp. 257–272, 1981.
  • [11] S. Spagocci and T. Fountain, “Fault rates in nanochip devices,” in Electrochemical Society, pp. 354–368, 1999.
  • [12] J. Han and P. Jonker, “A defect- and fault-tolerant architecture for nanocomputers,” Nanotechnology, vol. 14, pp. 224–230, 2003.
  • [13] S. Roy and V. Beiu, “Majority Multiplexing-Economical Redundant Fault-tolerant Designs for Nano Architectures” IEEE Transactions on Nanotechnology, vol. 4(4) pp. 441–451, 2005.
  • [14] K. Nikolic, A. Sadek, and M. Forshaw, “Fault-tolerant techniques for nanocomputers,” Nanotechnology, vol. 13, pp. 357–362, 2002.
  • [15] J. Han, E. Taylor, J. Gao and J. A. B. Fortes, "Reliability Modeling of Nanoelectronic Circuits", IEEE Conference on Nanotechnology, 2005.
  • [16] M. O. Simsir, S. Cadambi, F. Ivancic, M. Roetteler and N. K. Jha, “Fault-Tolerant Computing Using a Hybrid Nano-CMOS Architecture”, International Conference on VLSI Design, pp. 435-440, 2008.
  • [17] C. Chen and Y. Mao, “A Statistical Reliability Model for Single-Electron Threshold Logic”, IEEE Transactions on Electron Devices, vol. 55, pp. 1547-1553, 2008.
  • [18] A. Abdollahi, “Probabilistic Decision Diagrams for Exact Probabilistic Analysis”, Proceedings of the 2007 IEEE/ACM International Conference on Computer-Aided Design, pp. 266–272, 2007.
  • [19] M. R. Choudhury and K. Mohanram, “Accurate and scalable reliability analysis of logic circuits”, DATE, pp. 1454–1459, 2007.
  • [20] S. Lazarova-Molnar, V. Beiu and W. Ibrahim, “A strategy for reliability assessment of future nano-circuits”, WSEAS International Conference on Circuits, pp. 60–65, 2007.
  • [21] P. P. Shenoy and G. Shafer, ”Propagating Belief Functions with Local Computations”, IEEE Expert, vol. 1(3), pp. 43-52, 1986.
  • [22] P. P. Shenoy, ”Binary Join Trees for Computing Marginals in the Shenoy-Shafer Architecture”, International Journal of Approximate Reasoning, pp. 239-263, 1997.
  • [23] P. P. Shenoy, ”Valuation-Based Systems: A Framework for Managing Uncertainty in Expert Systems”, Fuzzy Logic for the Management of Uncertainty, pp. 83-104, 1992.
  • [24] J. Pearl, “Probabilistic Reasoning in Intelligent Systems: Network of Plausible Inference”, Morgan Kaufmann Publishers, Inc., 1988.
  • [25] F. V. Jensen, S. Lauritzen and K. Olesen, ”Bayesian Updating in Recursive Graphical Models by Local Computation”, Computational Statistics Quarterly, pp. 269-282, 1990.
  • [26] R. G. Cowell, A. P. David, S. L. Lauritzen, D. J. Spiegelhalter, “Probabilistic Networks and Expert Systems,” Springer-Verlag New York, Inc., 1999.
  • [27] T. Rejimon and S. Bhanja, “Probabilistic Error Model for Unreliable Nano-logic Gates”, IEEE Conference on Nanotechnology, pp. 717-722, 2006
  • [28] K. Lingasubramanian and S.Bhanja, ”Probabilistic Maximum Error Modeling for Unreliable Logic Circuits”, ACM Great Lake Symposium on VLSI, pp. 223-226, 2007.
  • [29] J. D. Park and A. Darwiche, “Solving MAP Exactly using Systematic Search”, Proceedings of the 19th Annual Conference on Uncertainty in Artificial Intelligence, 2003.
  • [30] J. D. Park and A. Darwiche, “Approximating MAP using Local Search”, Proceedings of 17th Annual Conference on Uncertainty in Artificial Intelligence, pp. 403-410, 2001.
  • [31] “Sensitivity Analysis, Modeling, Inference and More”
    URL http://reasoning.cs.ucla.edu/samiam/
  • [32] J. P. Roth, ”Diagnosis of Automata Failures: A Calculus and a Method,” IBM Journal of Research and Development, vol. 10(4), pp. 278-291, 1966.
  • [33] P. Goel, ”An Implicit Enumeration Algorithm to Generate Tests for Combinational Logic Circuits,” IEEE Transactions on Computers, vol. C-30(3), pp. 215-222, 1981.
  • [34] H. Fujiwara, and T. Shimono, ”On The Acceleration of Test Generation Algorithms,” IEEE Transactions on Computers, vol. C-32(12), pp. 1137-1144, 1983.
  • [35] V. D. Agrawal, S. C. Seth, and C. C. Chuang, ”Probabilistically Guided Test Generation,” Proceedings of IEEE International Symposium on Circuits and Systems, pp. 687-690, 1985.
  • [36] J. Savir, G. S. Ditlow, and P. H. Bardell, ”Random Pattern Testability,” IEEE Transactions on Computers, vol. C-33(1), pp. 79-90, 1984.
  • [37] C. Seth, L. Pan, and V. D. Agrawal, ”PREDICT - Probabilistic Estimation of Digital Circuit Testability,” Proceedings of IEEE International Symposium on Fault-Tolerant Computing, pp. 220-225, 1985.
  • [38] S. T. Chakradhar, M. L. Bushnell, and V. D. Agrawal, ”Automatic Test Generation using Neural Networks”, Proceedings of IEEE International Conference on Computer-Aided Design, vol. 7(10), pp. 416-419, 1988.
  • [39] M. Mason, ”FPGA Reliability in Space-Flight and Automotive Applications”, FPGA and Programmable Logic Journal, 2005.
  • [40] E. Zanoni and P. Pavan, ”Improving the reliability and safety of automotive electronics”, IEEE Micro, vol. 13(1), pp. 30-48, 1993.
  • [41] P. Gerrish, E. Herrmann, L. Tyler and K. Walsh, ”Challenges and constraints in designing implantable medical ICs”, IEEE Transactions on Device and Materials Reliability, vol. 5(3), pp. 435- 444, 2005
  • [42] L. Stotts, ”Introduction to Implantable Biomedical IC Design”, IEEE Circuits and Devices Magazine, pp. 12-18, 1999.
  • [43] S. Cheemalavagu, P. Korkmaz, K. V. Palem, B. E. S. Akgul and L. N. Chakrapani, ”A Probabilistic CMOS Switch and its Realization by Exploiting Noise”, Proceedings of the IFIP International Conference on Very Large Scale Integration, 2005.
  • [44] R. Martel, V. Derycke, J. Appenzeller, S. Wind and Ph. Avouris, “Carbon Nanotube Field-Effect Transistors and Logic Circuits”, Proceedings of the 39th Conference on Design Automation, 2002.
  • [45] R. K. Kummamuru, A. O. Orlov, R. Ramasubramanyam, C. S. Lent, G. H. Bernstein, and G. L. Snider, "Operation of Quantum-Dot Cellular Automata (QCA), Shift Registers and Analysis of Errors", IEEE Transactions on Electron Devices, vol. 50(9), pp. 1906–1913, 2003.
  • [46] P. Mazumder, S. Kulkarni, M. Bhattacharya, Jian Ping Sun and G. I. Haddad, “Digital Circuit Applications of Resonant Tunneling Devices”, Proceedings of the IEEE, vol. 86(4), pp. 664–686, 1998.
  • [47] Military Standard (MIL-STD-883), ”Test Methods and procedures for Microelectronics”, 1996.