Deception through Half-Truths

Andrew Estornell, Sanmay Das, Yevgeniy Vorobeychik
Computer Science & Engineering, Washington University in St. Louis
{aestornell,sanmay,yvorobeychik}@wustl.edu
Abstract

Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics that can feature politically motivated “leaks” and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form where information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal’s decision by “half-truths”, that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary. The principal’s problem can be modeled as one of predicting future states of variables in a dynamic Bayes network, and we show that, while theoretically the principal’s decisions can be made arbitrarily bad, the optimal attack is NP-hard to approximate, even under strong assumptions favoring the attacker. However, we also describe an important special case where the dependency of future states on past states is additive, in which we can efficiently compute an approximately optimal attack. Moreover, in networks with a linear transition function we can solve the problem optimally in polynomial time.

1 Introduction

For better or for worse, deception is ubiquitous. It can be benign, but just as often deception is used to deliberately mislead. Commonly, the means of deception can be viewed as outright lies or misinformation. This is certainly the case with fake news and false advertising, as well as phishing emails, and it is also the case for honeypots, even though here deception is used to help network security, rather than for a nefarious purpose. However, a more subtle means of deception involves strategically hiding information. For example, misleading advertising about a drug may omit important information about its side-effects, and we may effectively protect a system against classes of attacks by strategically deciding what is public about it, such as a Windows computer publicizing a Safari browser, but not the OS, to make it appear it’s running Mac OS X.

Theoretical studies of deception typically leverage games of incomplete information, where deception takes the form of signaling misinformation about private state [Carrol11, Pawlick15], for example, advertising an incorrect configuration of computing devices (e.g., a Windows machine advertising as Linux) [Schlenker18], or warning that there may be inspections when no inspectors are present [Xu16]. We take a different perspective. Specifically, we start with a decision-maker (the principal) who makes decisions under uncertainty based on limited evidence. To formalize this setting, we consider a two-stage dynamic Bayes network in which the principal observes a partial realization of the first stage, and makes a prediction (i.e., derives a posterior) about the second stage. We study the extent to which such a decision-maker is susceptible to deception through half-truths—that is, through an adversarial masking of a subset of first-stage variables, with the assumption that the principal is oblivious to the adversarial nature of this masking (for example, the individual is unaware, or fails to take into account, that it is performed adversarially).

While it may at first blush be puzzling how a rational Bayesian observer would be oblivious to the presence of an adversary, situations of this kind in fact abound. Consider algorithmic trading as one example. When order book information became available, it gave rise to numerous sophisticated machine learning methods aiming at taking advantage of this additional information [Nevmyvaka06, Nevmyvaka13]. However, many such approaches proved to be vulnerable to order book spoofing attacks [Wang18]. Another example is autonomous driving. Despite a number of illustrations of attacks on state-of-the-art sophisticated AI-based perception algorithms [Boloor19, Eykholt18, Sharif16, Vorobeychik18book], standard autonomous driving stacks, such as Autoware [Autoware] and Apollo [Apollo] are largely devoid of any techniques for robust perception.

Our first observation is that in our setting half-truths (that is, adversarial masking of observations) can lead to arbitrarily wrong beliefs. This is self-evident with lies, but surprising when we can only mask observations. However, we show that the problem of optimally choosing such a mask is extremely hard: in general, it is inapproximable to any polynomial factor. Next, we study an important restricted family of Bayes networks in which transition probabilities of nodes depend on the sum of the parents. This is a natural model if we consider, for example, opinion diffusion through social influence. For example, suppose that each variable represents whether an individual likes a particular candidate in an election. The opinions in the second stage would correspond to the impact of social influence, where parents of a node are their social network neighbors. Our model means that a node's view depends on the number of its neighbors who like the candidate. In this additive model, we show that the problem does not admit a PTAS even when nodes have at most two parents. However, we exhibit two algorithmic approaches for solving this variant: the first an approximation algorithm with a provable guarantee, the second a heuristic (which admits no performance guarantees). Our experiments show that the combination of the two yields good performance in practice, even while each is limited by itself. Finally, we show that when temporal dependency is linear, we can find an optimal mask in polynomial time.

Related Work A number of prior efforts study deception, many in the context of cybersecurity. Among the earliest is work by Cohen [Cohen03], who formalizes deception as guiding attackers through (a benign part of) the attack graph. Recent qualitative studies of deception [Almeshekah16, Stech16] offer additional insights, but do not provide mathematical modeling approaches. A series of mathematical formalizations of deception in cyber security have also been proposed [Carrol11, Greenberg82, Ettinger10, Pawlick15, Xu16], but these tend to model static scenarios and misinformation, rather than information hiding. Several other mathematical models address allocation of honeypots, which is a common means for deceiving cyber attackers [Kiekintveld15]. Recently, deception has also been considered as a security game in which a defender chooses a deceptive presentation of system configuration to an attacker [Schlenker18], but without considering half-truths or structured information representation such as a DBN.

Another relevant stream of research is that on information design [e.g., Rayo2010Sender]. In the commonly studied Bayesian persuasion model [kamenica2011bayesian], one considers a signaling game between a sender and a receiver, where the sender can acquire information superior to the receiver's, and the receiver makes a decision that yields (state-dependent) utilities for both. The key question concerns the design of the optimal signal structure. This area has recently received attention from both the algorithmic perspective (how hard is the sender's problem under different assumptions [dughmi2016algorithmic]) and in various applications, for example pricing [Shen2018Closed], auction design [li2019signal], and security games [Rabinovich2015information]. Our work is distinct in that it assumes an oblivious principal, but effectively considers signals which have combinatorial structure.

2 Preliminaries

Consider a collection of binary random variables $X_1, \dots, X_n$. We define a 2-stage dynamic Bayes network over these, using superscripts to indicate time steps (0 and 1). Specifically, we assume that each $X_i^0$ is unconditionally independent of the others and, for each $i$, let $p_i = \Pr[X_i^0 = 1]$. Moreover, each $X_i^1$ has a set of parent nodes $\mathrm{Pa}(X_i^1) \subseteq \{X_1^0, \dots, X_n^0\}$ (we only allow inter-stage dependencies to simplify discussion), and for each $X_i^1$, define $T_i(\mathrm{Pa}(X_i^1)) = \Pr[X_i^1 = 1 \mid \mathrm{Pa}(X_i^1)]$ as the probabilistic relationship of the associated variable with its parents (variables it depends on) from stage 0. We will denote the realized values of these random variables in lower case: that is, the realization of a random variable $X_i^t$ is $x_i^t$.

We use this structure to define an interaction between an attacker and a myopic observer (whom we also call the principal). In particular, consider an observer who observes a partial realization of stage-0 variables, and aims to predict (in a probabilistic sense) the values of variables in stage 1. This high-level problem is a stylized version of a broad range of decision problems, such as voting behavior. Examples include observing candidate promises, personalities, and past voting record to predict what they would do once elected; observing infection status for a collection of individuals on a social network, and aiming to predict who will be infected in the future; and so on. We assume that the observer is myopic in the sense that they use standard Bayesian reasoning about posterior probabilities conditional on their observations of stage-0 realizations. However, we specifically study a situation in which a malicious party adversarially masks a subset of stage-0 realizations (having first observed them). We denote the masked posterior by $\tilde{D}^1(m)$, where $m \in \{0,1\}^n$ is a binary vector with $m_i = 1$ whenever the realization of $X_i^0$ is not observed (because it is masked). We assume that all the stage-0 realizations that are not masked are observed by the principal. Let $X^1$ denote the random vector distributed according to the true posterior $D^1$ (conditioned on the full set of its parents from stage 0), while $\tilde{X}^1(m)$ is a random vector distributed according to $\tilde{D}^1(m)$. More precisely, the sequence of the interaction is as follows (a short simulation sketch in Python follows the list):

  1. Nature generates a vector $x^0 \in \{0,1\}^n$ defining the outcomes of $X^0$ according to its prior distribution $p$.

  2. The attacker observes $x^0$ and may choose up to $k$ of its outcomes to hide from the observer. This decision is captured by the mask $m$, with $m_i = 1$ indicating that $x_i^0$ is hidden.

  3. The observer sees the partial realization of $X^0$ that remains after applying the mask $m$, and makes a prediction about $X^1$ (which we capture by the distribution of $\tilde{X}^1(m)$).

  4. Nature then yields the realization of $X^1$ according to the true posterior distribution of $X^1$ given $x^0$.
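
To make the interaction concrete, here is a minimal Python sketch of steps 1-3 under the notation above; the priors, parent sets, transition functions, fixed mask, and helper names are illustrative assumptions rather than the paper's code.

import itertools
import random

def sample_stage0(priors, rng):
    # Step 1: nature draws the stage-0 realization from independent priors.
    return [1 if rng.random() < p else 0 for p in priors]

def true_posterior(x0, parents, T):
    # Pr[X_i^1 = 1] given the full stage-0 realization.
    return [T[i]([x0[j] for j in parents[i]]) for i in range(len(parents))]

def masked_posterior(x0, mask, priors, parents, T):
    # The oblivious observer marginalizes over hidden stage-0 nodes using the priors.
    hidden = [j for j in range(len(x0)) if mask[j] == 1]
    post = [0.0] * len(parents)
    for assignment in itertools.product([0, 1], repeat=len(hidden)):
        w = 1.0
        x = list(x0)
        for j, v in zip(hidden, assignment):
            x[j] = v
            w *= priors[j] if v == 1 else 1 - priors[j]
        for i in range(len(parents)):
            post[i] += w * T[i]([x[j] for j in parents[i]])
    return post

if __name__ == "__main__":
    rng = random.Random(0)
    priors = [0.2, 0.5, 0.7]
    parents = [[0, 1], [1, 2]]                     # stage-1 node i depends on these stage-0 nodes
    T = [lambda pa: 0.9 if sum(pa) == 2 else 0.1,  # illustrative transition functions
         lambda pa: 0.5 + 0.4 * (sum(pa) - 1)]
    x0 = sample_stage0(priors, rng)                # step 1
    mask = [1, 0, 0]                               # step 2: adversary hides node 0 (budget k = 1)
    print("true posterior  :", true_posterior(x0, parents, T))
    print("masked posterior:", masked_posterior(x0, mask, priors, parents, T))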

To understand the consequences of adversarial “half-truths” of this kind, we consider two problems faced by the adversary: targeted and untargeted attacks. Specifically, let the two random vectors $X^1$ and $\tilde{X}^1(m)$ also stand for their respective distributions, and let $d(\tilde{X}^1(m), X^1)$ be a statistical distance between the two distributions according to some metric. In the untargeted case, the adversary's problem is to maximize the distance between the masked and true posterior distributions over the random vector in stage 1:

$\max_{m \,:\, \|m\|_1 \le k} \; d\big(\tilde{X}^1(m),\, X^1\big) \qquad (1)$

In the targeted case, the adversary has some desired distribution $Q$ over the stage-1 variables, and would like to push the observer's perception as close to this distribution as possible. We formalize this as

$\min_{m \,:\, \|m\|_1 \le k} \; d\big(\tilde{X}^1(m),\, Q\big) \qquad (2)$

Note that in this notation we are suppressing the dependence on the prior, which is implicitly part of any problem instance faced by the adversary.

3 Half-Truth is as Good as a Lie

Our first result demonstrates that in a fundamental sense, in our model, there are cases where partially hiding the true current state can lead to arbitrary distortion of belief by a myopic observer.

Recall that the adversary's aim is to maximize statistical distance between the true posterior distribution over $X^1$, and the posterior induced by masking a subset of variables in stage 0, $\tilde{D}^1(m)$. We now show that for most reasonable measures of statistical distance, we can construct cases in which the adversary can make it arbitrarily large (within limits of the measure itself)—that is, the adversary can induce essentially arbitrary distortion in belief solely by masking some of the observations.

Definition 1.

We say a statistical distance $d$ is positive if $d(X, Y) > 0$ for any two random variables $X$ and $Y$ that are not identically distributed, and symmetric if $d(X, Y) = d(Y, X)$.

Note that any distance metric, or probabilistic extension of a distance metric, fits the definition of positive symmetric.

Theorem 2.

Suppose the attacker's objective is to maximize some positive statistical distance $d$. Let $X$ and $Y$ be any vectors of binary random variables. Then there exists a sequence of dynamic Bayes networks, indexed by the number of layer-0 nodes $n$, such that the adversary's expected utility under the optimal mask converges to $d(X, Y)$ as $n \to \infty$.

Proof.

Let $X$ and $Y$ be the vectors of binary random variables for which $d$ attains its maximum value. Construct, for each $n$, a dynamic Bayes network whose layer 0 contains $n$ variables, each with a small prior probability of realizing 1, and whose layer 1 contains one variable per coordinate of $X$ and $Y$. For all $i$, let $\mathrm{Pa}(X_i^1) = \{X_1^0, \dots, X_n^0\}$; that is, all nodes in layer 0 are parents of every node in layer 1. The transition probabilities are defined so that layer 1 is distributed as $X$ when at least one layer-0 node realizes 1, and as $Y$ when none does.

For each realization of layer 0 we consider the value of the induced distance under three types of events, defined with respect to the number of layer-0 nodes realizing 1, the adversary's budget $k$, and the adversary's choice of which nodes to hide conditional on the realization. Each of these settings admits a unique type of optimal play from the adversary. Specifically:

  • (1) No layer-0 node realizes 1. In this case the adversary hides $k$ nodes arbitrarily, since all outcomes are 0.

  • (2) At least one but at most $k$ layer-0 nodes realize 1. In this case the adversary hides only the nodes whose outcomes are 1.

  • (3) More than $k$ layer-0 nodes realize 1. In this case the adversary hides nothing.

In events of type (1), when there is no mask the observer's posterior coincides with the true posterior, which is $Y$. When a mask is employed, the observer believes, with a small probability governed by the priors of the hidden nodes, that some hidden node realized 1, in which case their posterior shifts toward $X$; otherwise their posterior remains $Y$. Thus, in this setting the adversary's expected utility is at most proportional to the priors of the hidden nodes.

Events of this type occur with probability equal to that of all layer-0 nodes realizing 0.

In events of type (2), without a mask the observer's posterior coincides with the true posterior, which is $X$ since at least one layer-0 node realizes 1. With all realized 1s hidden, the observer believes, with probability close to 1 (again because the priors are small), that every layer-0 node realized 0, in which case their posterior is $Y$. Therefore, the expected value of the distance in this setting approaches $d(X, Y)$.

Events of this type occur with probability equal to that of at least one, but at most $k$, layer-0 nodes realizing 1.

In events of type (3) there are more nodes yielding 1 in layer 0 than the adversary is capable of hiding, so at least one realized 1 remains visible, the masked posterior coincides with the true posterior, and the adversary's utility is 0. Events of this type occur with probability equal to that of more than $k$ layer-0 nodes realizing 1.

For notational convenience, and without loss of generality, we reorder the nodes in layer 0 after the adversary's observations so that the nodes realizing 1 come first. Suppose the budget $k$ is a constant fraction of $n$; a similar analysis holds for any constant fraction. Since $d$ is positive symmetric, the adversary's expected utility is the sum, over the three event types, of the probability of the event times the expected induced distance.

Using standard binomial identities, this expectation reduces to a sum of two nonnegative terms, the dominant one being the contribution of type (2) events.

Thus, since both terms in the sum are nonnegative, it remains only to show that the dominant term converges to $d(X, Y)$ as $n$ grows.

This limit can be evaluated directly: with priors chosen to vanish at an appropriate rate, the probability of a type (3) event vanishes, the probability of a type (2) event approaches 1, and within type (2) events the induced distance approaches $d(X, Y)$.

Using a slight variation of a standard limit identity, we obtain that the limit does in fact converge, giving the desired result that the adversary's expected utility converges to $d(X, Y)$. ∎

4 Computational Complexity of Deception by Half-Truth

Let $G$ define a dynamic Bayes network over a set of $n$ binary random variables, and let $x^0 \in \{0,1\}^n$ be a binary vector describing the realized outcomes of the layer-0 variables $X_1^0, \dots, X_n^0$.

In the remainder of the paper, we restrict attention to particular distance metrics of the form:

untargeted: $\; d\big(\tilde{X}^1(m), X^1\big) = \mathbb{E}\,\big\|\tilde{X}^1(m) - X^1\big\|_p$
targeted: $\; d\big(\tilde{X}^1(m), Q\big) = \mathbb{E}\,\big\|\tilde{X}^1(m) - X_Q\big\|_p$, with $X_Q \sim Q$,

where the expectation is with respect to the product distribution of the two random variables $\tilde{X}^1(m)$ and $X^1$ (respectively $X_Q$). These are natural distances in the context of random variables, and correspond to the Lukaszyk-Karmowski metric (LKM) of statistical distance between the distributions. We call the resulting problems (of computing the optimal mask given a prior and a realization of variables at layer 0) Deception by Bayes Network Masking (DBNM) for the untargeted case, and Targeted Deception by Bayes Network Masking (TDBNM) for the targeted case. We now show that this problem does not even admit a polynomial-factor approximation.
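
As a concrete illustration, the sketch below evaluates the untargeted LKM objective for one candidate mask by brute force. It assumes, for simplicity of the sketch, that the coordinates of each stage-1 vector are independent given the observations; the marginal probabilities q_masked and q_true would come from the masked and true posteriors, and all names are illustrative.

import itertools

def lkm_distance(q_masked, q_true, p=1):
    # E[ ||X_tilde - X||_p ] for two vectors of independent Bernoulli variables,
    # drawn independently of each other (the product distribution in the definition).
    n = len(q_true)
    total = 0.0
    for a in itertools.product([0, 1], repeat=n):
        for b in itertools.product([0, 1], repeat=n):
            w = 1.0
            for i in range(n):
                w *= q_masked[i] if a[i] else 1 - q_masked[i]
                w *= q_true[i] if b[i] else 1 - q_true[i]
            diff = sum(abs(a[i] - b[i]) ** p for i in range(n)) ** (1.0 / p)
            total += w * diff
    return total

# Example: untargeted objective for one candidate mask, given the two posteriors.
print(lkm_distance([0.9, 0.4], [0.2, 0.4], p=1))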

Theorem 3.

If DBNM has a deterministic, polynomial-time, polynomial-factor approximation (for any fixed polynomial factor), then P=NP.

Proof.

Suppose that there exists a deterministic, polynomial-time, polynomial-factor approximation of DBNM. We will show that under this assumption SAT can be solved in polynomial time. Consider an instance of SAT defined by a set of Boolean variables $z_1, \dots, z_n$ and a Boolean formula $\phi$ whose terms are the elements of $\{z_1, \dots, z_n\}$. The objective is to determine whether there exists an assignment of the variables such that $\phi$ evaluates to 1. An arbitrary instance of SAT can be encoded into DBNM in the following manner. Create one layer-0 node for each variable $z_i$, and let a single designated layer-1 node have all layer-0 nodes as parents, with its transition function evaluating $\phi$ on the layer-0 assignment (that is, the node realizes 1 if and only if the formula evaluates to true); every other layer-1 node is independent of layer 0. Lastly, set each prior close to 1 and set the adversary's budget to $n$.

Assume for now that the realized layer-0 outcome does not itself satisfy $\phi$ (the remaining case is handled at the end of the proof). The objective of the attacker is then to select a mask $m$ that maximizes the value of the induced distance. For a given mask $m$, let $\hat{x}^0$ be any outcome that agrees with $x^0$ on all unmasked nodes, i.e., $\hat{x}_i^0 = x_i^0$ for all $i$ with $m_i = 0$; the observer's masked posterior is a mixture over all such outcomes.

A certificate for the SAT instance can be generated from a mask by assigning each variable according to whether the corresponding layer-0 node is masked. To see when this certificate is valid, consider two cases on the mask: first, the induced assignment yields $\phi = 0$, and second, the induced assignment yields $\phi = 1$.

In the first case, the outcome the observer considers most likely agrees with $x^0$ on every unmasked node and sets every masked node to 1; by assumption this outcome does not satisfy $\phi$, so the designated layer-1 node is believed to be 0 except on outcomes in which some masked node deviates from this most likely completion, each of which carries probability vanishing with the choice of priors. Note that for every other layer-1 node the masked and true posteriors agree. Therefore, if the adversary selects a mask that does not correspond to a satisfying assignment for $\phi$, its utility is at most a quantity that can be made arbitrarily small by the choice of priors.

The next case to consider is when the adversary selects a mask whose induced assignment satisfies $\phi$. In this case, the observer believes each masked node realized 1 with probability close to 1, so the masked posterior places probability close to 1 on the designated layer-1 node, while the true posterior places probability 0 on it. Thus, if the mask induces an assignment of the variables that yields $\phi = 1$, the adversary's utility is bounded below by a quantity that converges to 1, from below, faster than any polynomial as the priors approach 1.

By these two cases, when $\phi$ is satisfiable there exists a mask with value arbitrarily close to 1, and no mask corresponding to a non-satisfying assignment can have value exceeding an arbitrarily small bound. In addition, an optimal mask can achieve a value of at most 1, since only one node in layer 1 has outcomes that depend on layer 0, and any norm applied to a vector with a single nonzero dimension evaluates to exactly the value of that dimension. Therefore, given a polynomial-factor approximation of the optimal solution, one could deduce the satisfiability of $\phi$ from the value of the returned mask: if the value falls below the small-value bound, then $\phi$ is not satisfiable, and if it exceeds that bound, then $\phi$ is satisfiable and the mask gives the satisfying assignment.

This covers all but the case in which the realized layer-0 outcome itself satisfies $\phi$. In that case, the adversary could return a mask of value arbitrarily close to 0 even though $\phi$ has a satisfying assignment. This case is easily remedied by checking the assignment induced by the realized outcome before running the approximation.

Under this scheme we could use the polynomial-factor approximation algorithm to determine whether a given instance of SAT is satisfiable. Since SAT is NP-complete, the existence of such an approximation algorithm would imply that P = NP. ∎

Next, we show that this inapproximability obtains even if we consider randomized algorithms.

Theorem 4.

If DBNM has a randomized, polynomial-time, polynomial-factor approximation that succeeds with constant probability (for any fixed polynomial factor), then RP = NP.

Proof.

We use the previous construction from SAT to DBNM. If there existed an algorithm that could produce a polynomial-factor approximation of the constructed instance of DBNM with some constant probability, then the same line of reasoning as in the above proof yields a polynomial-time algorithm that correctly identifies a satisfiable instance of SAT with at least that constant probability. The algorithm could then be run repeatedly, with independent randomness, to make the success probability arbitrarily close to 1. Moreover, the algorithm would never falsely identify a non-satisfiable instance as satisfiable. The existence of such an algorithm would imply that SAT is in RP, and since SAT is NP-complete and RP is closed under L-reductions, this would also imply that RP = NP. ∎

Finally, we extend the hardness results above to the targeted version of our problem.

Corollary 5.

If TDBNM has a deterministic polynomial-time, polynomial-factor approximation, or a randomized polynomial-time, polynomial-factor approximation with constant probability (for any fixed polynomial factor), then P=NP or RP=NP, respectively.

Proof.

In both cases we can choose the target distribution so that the objective coincides with the untargeted objective used above; the only difference is that we need not consider the degenerate case treated at the end of the proof of Theorem 3, since there the corresponding mask is already optimal for the chosen target. With this choice of target, the proof follows identically the proofs of Theorems 3 and 4. ∎

5 Approximation Algorithm for the Additive Case

Our result above shows that polynomial approximations of the optimal solution are intractable in the general case, when the adversary must be able to compute the optimal mask for any prior and any realization of the variables in layer 0. Therefore, we now turn our focus to cases where the DBN exhibits special structure on the transition probabilities. We start with DBNs with additive transition structure, which we define next.

Definition 6.

We say a transition probability $T_i$ for $X_i^1$ is additive if

$T_i\big(\mathrm{Pa}(X_i^1)\big) = f_i\Big(\sum_{X_j^0 \in \mathrm{Pa}(X_i^1)} x_j^0\Big),$

where $f_i : \{0, 1, \dots, |\mathrm{Pa}(X_i^1)|\} \to [0, 1]$.

We term the problem of finding an optimal adversarial mask when all transitions are additive ADBNM, for Additive DBNM in the untargeted case, and TADBNM refers to the corresponding targeted problem.
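
As a small illustration of Definition 6, the sketch below wraps a function of the parent sum into a transition probability; the particular function f is an arbitrary assumed example, not taken from the paper.

def make_additive_transition(f):
    # Build an additive transition: the probability depends only on the sum of parent values.
    def T(parent_values):
        return f(sum(parent_values))
    return T

# Example: an increasing additive transition for a node with 3 parents.
f = lambda s: min(1.0, 0.1 + 0.25 * s)
T = make_additive_transition(f)
print([T([1] * s + [0] * (3 - s)) for s in range(4)])  # 0.1, 0.35, 0.6, 0.85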

5.1 Inapproximability in the Additive Case

First, we show that even this case is inapproximable, but now in the sense that no PTAS exists for this problem.

Theorem 7.

No PTAS exists for either ADBNM (untargeted) or TADBNM (targeted) for the distance defined above, unless P=NP, even for monotone transition functions and nodes with at most 2 parents.

Proof.

To show that no PTAS exists for either problem, we reduce from Densest $k$-Subgraph (DKSG). An instance of DKSG is defined by a budget $k$ and a graph $G = (V, E)$. The objective is to find a vertex set $S \subseteq V$ with $|S| \le k$ that maximizes the number of edges with both endpoints in $S$.

To reduce an instance of DKSG to an instance of ADBNM, proceed as follows. Create one layer-0 node for each vertex of $G$, each with an arbitrarily small prior, and one layer-1 node for each edge, whose two parents are the layer-0 nodes corresponding to the edge's endpoints; the (monotone, additive) transition probability of an edge node changes appreciably only when both of its parents are affected. The adversary's budget equals the DKSG budget $k$, and the realized layer-0 outcomes are chosen so that masking both endpoints of an edge shifts the observer's belief about the corresponding edge node by a fixed positive amount, while masking at most one endpoint shifts it by an arbitrarily small amount. (For TADBNM we additionally choose the target distribution to reward exactly these shifts.) Now let $m$ be any mask; then, for each pair of masked nodes corresponding to an edge of $G$, the attacker gains this fixed amount.

Therefore, for a given mask $m$, the attacker's total utility is proportional to the number of unique masked pairs that correspond to edges of $G$. Hence, the maximum utility an attacker can obtain is proportional to the maximum number of distinct such pairs contained in any vertex set of size at most $k$. Since each such pair represents an edge in $E$ and the masked nodes represent a collection of vertices of $V$, an optimal mask identifies a densest $k$-subgraph: if a given mask has utility corresponding to $c$ pairs, then the masked vertices induce a subgraph with $c$ edges, and conversely, if a vertex set of size at most $k$ induces $c$ edges, then masking those vertices achieves the corresponding utility.

Since the two objectives coincide up to a constant factor, if a PTAS were to exist for ADBNM, the same algorithm would yield a PTAS for DKSG. However, unless P=NP, no such algorithm exists for DKSG. Thus, no PTAS exists for ADBNM unless P=NP. ∎

Theorem 8.

ADBNM (untargeted) and TADBNM (targeted) remain NP-hard for general $p$, even for monotone transition functions and nodes with at most 2 parents.

Proof.

We use the same reduction from DKSG as in the proof of Theorem 7. Under this construction, and for a general $p$, the attacker's utility for any mask $m$ is a monotone function of the number of unique masked pairs that correspond to edges of $G$. Further, since all layer-1 nodes are identical, each such pair contributes the same increase to the objective function. Therefore, the objective function increases with the number of unique pairs corresponding to edges in the original graph, independent of which pair is added, and is maximized by finding the largest set of unique pairs that correspond to edges in the graph. This is exactly the objective of the original DKSG problem, so a valid solution to one problem is exactly a valid solution to the other, and both ADBNM and TADBNM are NP-hard for general $p$. ∎

5.2 Approximation Algorithm

While even the ADBNM special case is inapproximable in a sense, we now present our first positive result: an approximation algorithm with a provable guarantee (recall that the best known approximation factor for DKSG is polynomial in the number of vertices, and the reduction above shows that our problem is no easier).

First, we impose an additional restriction on the problem: we assume that every transition function $f_i$ is monotone with respect to the sum of its parents' values. We propose Algorithm 1 for this problem. Next, we show that this algorithm yields a provable approximation guarantee.

1: bestMask := ∅
2: for each node $X_i^1$ do
3:     $m$ := ∅
4:     if $f_i$ increasing and the target extreme for $X_i^1$ is 1 then
5:         hideOutcome := 0
6:     else if $f_i$ increasing and the target extreme is 0 then
7:         hideOutcome := 1
8:     else if $f_i$ decreasing and the target extreme is 1 then
9:         hideOutcome := 1
10:     else if $f_i$ decreasing and the target extreme is 0 then
11:         hideOutcome := 0
12:     while $|m| < k$ and some parent of $X_i^1$ with outcome hideOutcome is unhidden do
13:         pick such a parent $v$
14:         add $v$ to $m$
15:     if value($m$) > value(bestMask) then
16:         bestMask := $m$
17: return bestMask
Algorithm 1 Approximation algorithm
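
The following Python sketch gives one possible reading of Algorithm 1, consistent with the description in Proposition 9; the helper names (objective, increasing) and the choice to try both extremes for each node are assumptions, not the authors' code. In the targeted case, the target extreme would instead be dictated by the target distribution.

def approx_mask(x0, parents, increasing, k, objective):
    # For each stage-1 node, build a candidate mask that pushes the observer's
    # belief about that node toward an extreme: for an increasing additive
    # transition, hiding realized-0 parents pushes the belief up and hiding
    # realized-1 parents pushes it down (reversed for decreasing transitions).
    # objective(mask) is an assumed helper returning the attacker's utility.
    best_mask, best_val = set(), float("-inf")
    for i in range(len(parents)):
        for push_to_one in (True, False):            # try both extremes
            if increasing[i]:
                hide_outcome = 0 if push_to_one else 1
            else:
                hide_outcome = 1 if push_to_one else 0
            mask = set()
            for j in parents[i]:
                if len(mask) >= k:
                    break
                if x0[j] == hide_outcome:
                    mask.add(j)
            val = objective(mask)
            if val > best_val:
                best_mask, best_val = mask, val
    return best_mask

if __name__ == "__main__":
    x0 = [1, 0, 1, 0]
    parents = [[0, 1], [2, 3]]
    increasing = [True, True]
    toy_objective = lambda mask: len(mask)  # placeholder utility, for illustration only
    print(approx_mask(x0, parents, increasing, k=2, objective=toy_objective))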
Proposition 9.

For any $p$, Algorithm 1 achieves a provable approximation ratio on both targeted and untargeted attacks.

Proof.

The algorithm generates one candidate mask for each node $X_i^1$; the associated mask is meant to push the observer's perception of $X_i^1$ as close to some extreme (0 or 1) as possible. We examine the contribution that the node pushed furthest toward its desired extreme makes to the attacker's total utility; suppose it is being pushed to 1 (a symmetric argument holds for 0). We will show that the value of the best candidate mask is within the claimed factor of the optimal solution no matter what norm is used. The attacker's utility is the expected $p$-norm of the difference between the masked and true stage-1 vectors, so for finite $p$ the contribution of this single coordinate already lower-bounds the candidate's value, and an analogous bound holds when $p = \infty$.

Under any norm, the attacker's utility on the best candidate mask is therefore at least this single-node contribution. To obtain the bound on the approximation ratio we split into three cases according to the value of $p$. In each case, every node attains its desired outcome (0 or 1) with probability at most that of the furthest-pushed node. In the first case, the attacker's optimal utility is upper-bounded by aggregating this per-node bound over all nodes under the $p$-norm, and hence the ratio of the candidate's value to the optimal solution is bounded below. In the second case, the attacker's optimal utility is upper-bounded by the same aggregation argument, and again the ratio to the optimal solution is bounded below.

Lastly, when $p = \infty$, the attacker's utility is exactly the probability that there exists at least one node with the desired outcome. Since each node yields the desired outcome with probability at most that of the furthest-pushed node, the attacker's optimal utility is at most the probability of at least one success among these nodes, while the candidate's utility is at least the single-node success probability. By monotonicity and evaluation of the limit as the number of nodes grows, this ratio is also bounded below. Therefore, for any $p$ we obtain the claimed approximation ratio. ∎

5.3 Heuristic

In addition to our approximation algorithm above, we propose a simple heuristic approach for approximating the optimal mask. The heuristic is a hill-climbing strategy in which, at each iteration, we add to the mask the node that results in the maximum increase in the attacker's objective; see Algorithm 2. As we demonstrate in the experiments below, the combination of the algorithm and the heuristic performs much better than either in isolation (and, of course, the combination inherits the approximation guarantee of Proposition 9).

1: bestMask := ∅
2: $m$ := ∅
3: while $|m| < k$ do
4:     $v$ := node whose addition gives the largest increase to the objective
5:     $m$ := $m \cup \{v\}$
6:     if value($m$) > value(bestMask) then
7:         bestMask := $m$
8: return bestMask
Algorithm 2 Heuristic algorithm
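
A possible Python rendering of the hill-climbing heuristic; the objective callable (the attacker's utility for a given set of hidden stage-0 nodes) is an assumed helper supplied by the caller.

def greedy_mask(n, k, objective):
    # Repeatedly add the node whose inclusion most increases the objective,
    # up to budget k, remembering the best mask seen along the way.
    mask, best_mask = set(), set()
    best_val = objective(best_mask)
    while len(mask) < k:
        candidates = [j for j in range(n) if j not in mask]
        if not candidates:
            break
        j_star = max(candidates, key=lambda j: objective(mask | {j}))
        mask = mask | {j_star}
        val = objective(mask)
        if val > best_val:
            best_mask, best_val = set(mask), val
    return best_mask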

We now show that, by itself, the heuristic can be arbitrarily bad. Consider a construction with a small group of nodes, each of which yields a modest increase in the objective when hidden on its own, and a larger group of nodes that yields a large increase only when all of its members are hidden together. The optimal mask (in both the hiding case and the variant in which outcomes are flipped rather than hidden) hides all nodes of the larger group, which results in a large value. However, since the only way to greedily increase the value of the objective is to keep hiding nodes from the small group, the mask produced by the heuristic has bounded value. The resulting ratio between the heuristic and optimal values is independent of the size of the construction, so as the network grows the relative value of the heuristic solution converges to 0.

Next we define and discuss linear Bayesian networks; on such networks this heuristic is guaranteed to find the optimal solution, although the same can be achieved by a much simpler algorithm, which we also discuss.

6 Polynomial-time Algorithm for Linear Bayesian Networks

Our final contribution is a further restriction on the DBN that yields a polynomial-time algorithm for computing an optimal mask for the adversary. Specifically, we consider networks in which each transition function is of the form

$T_i\big(\mathrm{Pa}(X_i^1)\big) = b_i + \sum_{X_j^0 \in \mathrm{Pa}(X_i^1)} w_{ij}\, x_j^0,$

with the weights $w_{ij}$ and offsets $b_i$ chosen so that the expression always lies in $[0, 1]$. We call these linear Bayesian networks.

Theorem 10.

In linear Bayesian networks the optimal solution to DBNM and TDBNM can be computed in polynomial time for the 1-norm.

Proof.

Consider the untargeted case first. Let $x^0$ be the outcome given by nature, and let $\hat{x}^0$ be any outcome that agrees with $x^0$ on all elements except those in the mask $m$: if $m_j = 0$ then $\hat{x}_j^0 = x_j^0$, and if $m_j = 1$ then $\hat{x}_j^0$ is free to be either 0 or 1.

For notational convenience, for any mask $m$ write the attacker's utility as a sum of per-node contributions, one for each $X_i^1$. Consider the change in value when adding some node $X_j^0$ to the mask, and denote the new mask by $m'$. Assume that the realized outcome is $x_j^0 = 1$; a symmetric argument yields a similar result when $x_j^0 = 0$. Because each transition function is linear, the difference between the values of $m'$ and $m$ is determined entirely by the weights of $X_j^0$ in the transition functions of its children and by the gap between $x_j^0$ and its prior $p_j$. Thus, hiding $X_j^0$ changes the attacker's utility by an amount that depends only on $j$ and not on the current mask. In the targeted case the only change in the analysis is the direction of the desired shift for each child: if a child's target is 0, its contribution is measured toward 0, and similarly toward 1 when the target is 1. In both the targeted and untargeted cases, therefore, the change in utility from hiding a node is independent of the current mask, and the total utility is simply the sum of the per-node contributions. Therefore the attacker's utility can be written as a weighted sum over the hidden nodes,

where each hidden node contributes a coefficient determined by its prior, its realized outcome, and the weights of the children it influences. Choosing a mask of size at most $k$ that maximizes this sum can be done in polynomial time by simply selecting the $k$ nodes with the highest associated coefficients. ∎
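
Under the decomposition established in the proof, the resulting polynomial-time procedure can be sketched as follows: score each node by the gain from hiding it alone and keep the top $k$. The objective callable is an assumed helper that evaluates the attacker's utility for a set of hidden nodes.

def optimal_mask_linear(n, k, objective):
    # With linear transitions the gain from hiding node j does not depend on
    # which other nodes are hidden, so single-node gains determine the optimum.
    base = objective(set())
    gains = [(objective({j}) - base, j) for j in range(n)]
    gains.sort(reverse=True)
    return {j for g, j in gains[:k] if g > 0}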

7 Experiments

As discussed in Section 5, our combined scheme is to compute both the mask produced by Algorithm 1 and the heuristic mask, then take the one yielding the higher utility. Note that this combination clearly inherits the approximation guarantee of Proposition 9. As we now demonstrate, it is also significantly better in combination than either of the approaches by itself.
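
A trivial sketch of the combination, reusing the assumed objective helper from the earlier sketches: evaluate both candidate masks and keep the better one.

def combined_mask(objective, mask_a, mask_b):
    # mask_a and mask_b would be the outputs of Algorithm 1 and Algorithm 2.
    return max([mask_a, mask_b], key=objective)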

Figure 1: Comparison between our combined algorithm, heuristic and approximation algorithms in isolation, and random masking on randomly generated networks (left) and networks generated adversarially (right).

Figure 1 (left) shows the results on random general and additive networks, and demonstrates that our combined algorithm significantly outperforms the approximation algorithm, largely on the strength of the heuristic, which is highly effective in these settings. Figure 1 (right) studies settings constructed to be adversarial to the heuristic. As we can see, here the combined algorithm performs similarly to the approximation algorithm, while the heuristic in isolation ultimately performs poorly. Thus, the combination of the two is far stronger than each component in isolation.

8 Conclusion

We introduce a model of deception in which a principal needs to make a decision based on the state of the world, and an adversary can mask information about the state. We study this in a model where the principal is oblivious to the presence of the adversary and reasons about state change using a dynamic Bayes network. Even in a simple two-time-period model, we show that there exist cases in which an adversary with the ability to mask information about the state at time 0 can cause the oblivious principal to form an arbitrarily incorrect posterior. However, computing, or even approximating, these masks to within a polynomial factor is NP-hard in the general case. We also consider this problem with special structure on the transition probabilities, showing that when transitions depend only on the sum of parent values, the problem remains inapproximable in general, although we exhibit an algorithm with a provable approximation guarantee. On the other hand, when transitions are linear, we show that the problem can be solved in polynomial time.

References
