A neural network oracle for quantum nonlocality problems in networks


Tamás Kriváchy and Yu Cai, Department of Applied Physics, University of Geneva, CH-1211 Geneva, Switzerland; Daniel Cavalcanti, ICFO, The Institute of Photonic Sciences, 08860 Castelldefels (Barcelona), Spain; Arash Tavakoli, Dyson School of Design Engineering, Imperial College London, London SW7 2AZ, UK; Nicolas Gisin and Nicolas Brunner, Department of Applied Physics, University of Geneva, CH-1211 Geneva, Switzerland
Corresponding author: tamas.krivachy@unige.ch
November 15, 2019
Abstract

Characterizing quantum nonlocality in networks is a challenging, but important problem. Using quantum sources one can achieve distributions which are unattainable classically. A key point in investigations is to decide whether an observed probability distribution can be reproduced using only classical resources. This causal inference task is challenging even for simple networks, both analytically and using standard numerical techniques. We propose to use neural networks as numerical tools to overcome these challenges, by learning the classical strategies required to reproduce a distribution. As such, the neural network acts as an oracle, demonstrating that a behavior is classical if it can be learned. We apply our method to several examples in the triangle configuration. After demonstrating that the method is consistent with previously known results, we give solid evidence that the distribution presented in [N. Gisin, Entropy 21(3), 325 (2019)] is indeed nonlocal as conjectured. Finally we examine the genuinely nonlocal distribution presented in [M.-O. Renou et al., PRL 123, 140401 (2019)], and, guided by the findings of the neural network, conjecture nonlocality in a new range of parameters in these distributions. The method allows us to get an estimate on the noise robustness of all examined distributions.

quantum information, machine learning, neural network, quantum network, causal inference

I Introduction

The possibility of creating stronger-than-classical correlations between distant parties has deep implications for both the foundations and applications of quantum theory. These ideas were initiated by Bell Bell (1964), with subsequent research leading to the theory of Bell nonlocality Brunner et al. (2014). In the Bell scenario multiple parties jointly share a single classical or quantum source, often referred to as local and nonlocal sources, respectively. Recently, interest in more decentralized causal structures, in which several independent sources are shared among the parties over a network, has been on the rise Branciard et al. (2010, 2012); Fritz (2012); Pusey (2019). Contrary to the Bell scenario, in even slightly more complex networks the boundary between local and nonlocal correlations becomes nonlinear and the local set non-convex, greatly complicating rigorous analysis. Though some progress has been made Henson et al. (2014); Tavakoli et al. (2014); Chaves et al. (2015); Wolfe et al. (2019); Rosset et al. (2016); Navascues and Wolfe (2017); Rosset et al. (2017); Chaves (2016); Fraser and Wolfe (2018); Weilenmann and Colbeck (2018); Luo (2018); Renou et al. (2019a); Gisin et al. (2019); Renou et al. (2019b); Pozas-Kerstjens et al. (2019), we still lack a robust set of tools to investigate generic networks from an analytic and numerical perspective.

Here we explore the use of machine learning in these problems. In particular we tackle the membership problem for causal structures, i.e. given a network and a distribution over the observed outputs, we must decide whether it could have been produced by using exclusively local resources. We encode the causal structure into a neural network and ask the network to reproduce the target distribution. By doing so, we approximate the question “does a local causal model exist?” with “is a local causal model learnable?”. Neural networks have proven to be useful ansätze for generic nonlinear functions in terms of expressivity, ease of learning and robustness, both in- and outside the domain of physical sciences Melko et al. (2019); Iten et al. (2018); Melnikov et al. (2018); van Nieuwenburg et al. (2017); Carrasquilla and Melko (2017). Machine learning has also been used in the study of nonlocality Deng (2018); Canabarro et al. (2019). However, while the techniques of Ref. Canabarro et al. (2019) can only suggest if a distribution is local or nonlocal, the method employed here is generative and provides a certificate that a distribution is local once it is learned.

Figure 1: (Top) Triangle network configuration. (Bottom) Neural network which reproduces distributions compatible with the triangle configuration.

In our approach we exploit that both causal structures and feedforward neural networks have their information flow determined by a directed acyclic graph. For any given distribution over observed variables and an ansatz causal structure, we train a neural network which respects that causal structure to reproduce the target distribution. This is equivalent to having a neural network learn the local responses of the parties to their inputs. If the target distribution is inside the local set, then a sufficiently expressive neural network should be able to learn the appropriate response functions and reproduce it. For distributions outside the local set, we should see that the machine can not approximate the given target. This gives us a criterion for deciding whether a target distribution is inside the local set or not. In particular, if a given distribution is truly outside the local set, then by adding noise in a physically relevant way we should see a clear transition in the machine’s behavior when entering the set of local correlations.

We explore the strength of this method by examining a notorious causal structure, the so-called ‘triangle’ network, depicted in Fig. 1. The triangle configuration is among the simplest tripartite networks, yet it poses immense challenges theoretically and numerically. We use the triangle with quaternary outcomes as a test-bed for our neural network oracle. After checking for the consistency of our method with known results, we examine the so-called Elegant distribution, proposed in Gisin (2019), and the distribution proposed by Renou et al. in Renou et al. (2019a). Our method gives solid evidence that the Elegant distribution is outside the local set, as originally conjectured. The family of distributions proposed by Renou et al. was shown to be nonlocal in a certain regime of parameters. When examining the full range of parameters we not only recover the nonlocality in the already known regime, but also obtain a conjecture of nonlocality from the machine in another range of parameters. Finally, we use our method to get estimates of the noise robustness of these nonlocal distributions, and to gain insight into the learned strategies.

II Encoding causal structures into neural networks

The methods developed in this work are in principle applicable to any causal structure. Here we demonstrate how to encode a network nonlocality configuration into a neural network on the highly non-trivial example of the triangle network with quaternary outputs and no inputs. In this scenario three sources, α, β and γ, send information through either a classical or a quantum channel to three parties, Alice, Bob and Charlie. The flow of information is constrained such that the sources are independent from each other, and each one only sends information to two parties of the three, as depicted in Fig. 1. Alice, Bob and Charlie process the information they receive with arbitrary local response functions, and they each output a number a, b, c ∈ {0,1,2,3}, respectively. Under the assumption that each source is independent and identically distributed from round to round, and that the local response functions are fixed (though possibly stochastic), such a scenario is well characterized by the probability distribution p(a,b,c) over the outputs.

If quantum channels are permitted from the sources to the parties then the set of distributions is larger than that achievable classically. Due to the nonlocal nature of quantum theory, these correlations are often referred to as nonlocal ones, as opposed to local behaviors arising from only using classical channels. In the classical case, the scenario is equivalent to a causal structure, otherwise known as a Bayesian network Pearl (2000); Koller and Friedman (2009).

For the classical setup we can assume without loss of generality that the sources each send a random variable, α, β and γ, drawn from a uniform distribution on the continuous interval between 0 and 1. Given the network constraint, the probability distribution over the parties’ outputs can be written as

$p(a,b,c) = \int_0^1\!\int_0^1\!\int_0^1 d\alpha\, d\beta\, d\gamma\; p_A(a|\beta,\gamma)\, p_B(b|\alpha,\gamma)\, p_C(c|\alpha,\beta).$    (1)
We now construct a neural network which is able to approximate a distribution of the form (1). We use a feedforward neural network, since it is described by a directed acyclic graph, similarly to a causal structure Goodfellow et al. (2016); Pearl (2000); Koller and Friedman (2009). This allows for a seamless transfer from the causal structure to the neural network model. The inputs are the hidden variables, i.e. uniformly drawn random numbers α, β, γ. The outputs are the conditional probabilities p_A(a|β,γ), p_B(b|α,γ) and p_C(c|α,β), i.e. three normalized vectors, each of length 4. So as to respect the communication constraints of the triangle, the neural network is not fully connected, as shown in Fig. 1. We evaluate the neural network for N values of the triple (α,β,γ) in order to approximate the joint probability distribution (1) with a Monte Carlo approximation,

$p_M(a,b,c) = \frac{1}{N} \sum_{i=1}^{N} p_A(a|\beta_i,\gamma_i)\, p_B(b|\alpha_i,\gamma_i)\, p_C(c|\alpha_i,\beta_i).$    (2)
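For illustration, Eq. (2) amounts to nothing more than averaging the product of the three conditional output vectors over the sampled hidden variables. A minimal NumPy sketch follows, with random placeholder arrays standing in for the neural-network outputs and an assumed sample size:

```python
import numpy as np

N = 8000                                   # number of Monte Carlo samples (assumed value)
alpha, beta, gamma = np.random.rand(3, N)  # hidden variables, uniform on [0, 1]

# Placeholders for the conditional output distributions p_A(a|beta,gamma),
# p_B(b|alpha,gamma), p_C(c|alpha,beta); in practice these come from the neural network.
pA = np.random.dirichlet(np.ones(4), size=N)
pB = np.random.dirichlet(np.ones(4), size=N)
pC = np.random.dirichlet(np.ones(4), size=N)

# Eq. (2): Monte Carlo estimate of the joint distribution, shape (4, 4, 4).
p_M = np.einsum('na,nb,nc->abc', pA, pB, pC) / N
assert abs(p_M.sum() - 1) < 1e-12
```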

In our implementation each of the three conditional probability functions is modeled by a multilayer perceptron, with rectified linear or hyperbolic tangent activations, except at the last layer, where we have a softmax layer to impose normalization. Note, however, that any feedforward network can be used to model these conditional probabilities. The cost function can be any measure of discrepancy between the target distribution p_t and the neural network’s output p_M, such as the Kullback–Leibler divergence of one relative to the other, namely $D_{KL}(p_t \| p_M) = \sum_{a,b,c} p_t(a,b,c) \log\!\left( p_t(a,b,c)/p_M(a,b,c) \right)$. (We observed that for many target distributions our implementation also worked well when using the mean squared error or mean absolute error; however, the Kullback–Leibler divergence worked well with all examined distributions.) In order to train the neural network we synthetically generate uniform random numbers for the hidden variables, the inputs. We then adjust the weights of the network after evaluating the cost function on a minibatch of size N, using conventional neural network optimization methods Goodfellow et al. (2016). The minibatch size is chosen arbitrarily and can be increased in order to increase the neural network’s precision. For the triangle with quaternary outputs an N of several thousands is typically satisfactory.
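To make the construction concrete, the following is a minimal, self-contained PyTorch sketch of the architecture and training loop described above. It is an illustrative reconstruction rather than the authors' implementation (which is linked in Sec. VI); the target distribution, layer sizes, learning rate and step count are placeholder choices.

```python
import torch
import torch.nn as nn

class Party(nn.Module):
    """MLP mapping a party's two hidden variables to a distribution over 4 outputs."""
    def __init__(self, width=30, depth=5):
        super().__init__()
        layers, d = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 4))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)   # softmax enforces normalization

alice, bob, charlie = Party(), Party(), Party()
opt = torch.optim.Adam([*alice.parameters(), *bob.parameters(), *charlie.parameters()], lr=1e-3)

p_target = torch.full((4, 4, 4), 1 / 64)   # placeholder target distribution p_t (full support assumed)
N = 8000                                   # minibatch / Monte Carlo size

for step in range(5000):
    a, b, c = torch.rand(3, N, 1)                        # hidden variables alpha, beta, gamma
    pA = alice(torch.cat([b, c], dim=1))                 # Alice sees beta and gamma
    pB = bob(torch.cat([a, c], dim=1))                   # Bob sees alpha and gamma
    pC = charlie(torch.cat([a, b], dim=1))               # Charlie sees alpha and beta
    p_M = torch.einsum('na,nb,nc->abc', pA, pB, pC) / N  # Eq. (2)
    loss = torch.sum(p_target * torch.log(p_target / (p_M + 1e-12)))  # KL(p_t || p_M)
    opt.zero_grad(); loss.backward(); opt.step()
```

Training one such model per target distribution, and recording the final discrepancy, yields the distance curves discussed in Sec. III.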

By encoding the causal structure in a neural network like this, we can train the neural network to try to reproduce a given target distribution. The procedure generalizes in a straightforward manner to any causal structure, and is thus in principle applicable to any quantum nonlocality network problem. We provide specific code online for the triangle configuration, as well as for the standard Bell scenario, which additionally has inputs (see Section VI). After finishing this work we realized that related ideas have been investigated in causal inference, though in a different context, in which network architectures and weights are simultaneously optimized to reproduce a given target distribution over continuous outputs, as opposed to the discrete ones examined here Goudet et al. (2018).

III Results

Figure 2: Visualization of target distributions leaving the local set at an angle, for a generic noisy distribution (left) and for the specific case of the Fritz distribution with a 2-qubit Werner state shared between Alice and Bob (right). The grey dots depict the target distributions, while the red dots depict the distributions which the neural network would find. In the generic case we depict the distance introduced in Eq. (3), the exit point p_{t_c}, as well as the exit angle φ. Given an estimate for φ, the distance d_A can be evaluated analytically, which (for an appropriate t_c) allows us to compare with the distance d_M that the machine perceives.
Figure 3: Fritz distribution Fritz (2012) results. (Left) Plot of the distance d_M perceived by the machine, and the analytic distance d_A for φ̂ = 90° and v̂_c = 1/√2. (Right) Visualization of the response functions of Bob as a function of (α, γ), for several values of the visibility v increasing from top left to bottom right. Note how the responses for v ≥ 1/√2 are the same.

Given a target distribution p_t, the neural network provides an explicit model for a distribution p^M_t, which is, according to the machine, the closest local distribution to p_t. The distribution p^M_t is guaranteed to be in the local set by construction. The neural network will almost never exactly reproduce the target distribution, since p^M_t is learned by sampling a distribution a finite number of times, and additionally the learning techniques do not guarantee convergence to the global optimum. As such, to use the neural network as an oracle we could define some confidence level for the similarity between p_t and p^M_t. It is, however, more robust and informative if instead we search for transitions in the machine’s behavior when giving it different target distributions from both outside the local set and inside it. We will typically define a family of target distributions p_t by taking a distribution which is believed to be nonlocal and adding some noise controlled by the parameter t, with p_0 being the completely noisy (local) distribution and p_1 being the noiseless, “most nonlocal” one. By adding noise in a physically meaningful way we guarantee that at some parameter value, t_c, we will enter the local set and stay in it for t < t_c. For each noisy target distribution we retrain the neural network and obtain a family of learned distributions p^M_t. Observing a qualitative change in the machine’s performance at some point is an indication of traversing the local set’s boundary. In this work we extract information from the learned model through

  • the distance d_M(t) := d(p_t, p^M_t) between the target and the learned distribution (a minimal sketch of such a noise scan is given after this list),

  • the learned distributions p^M_t, in particular by examining the local response functions of Alice, Bob and Charlie.
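As an illustration of the scanning procedure just described, a noise scan could be organized as in the following sketch. Here `train_triangle_model` is a hypothetical stand-in for the training routine of Sec. II (returning the learned distribution p^M_t), the linear interpolation is only one possible way of generating a noisy family, and the Euclidean norm is used as an example distance measure.

```python
import numpy as np

def noise_scan(p_noiseless, p_noisy, train_triangle_model, ts=np.linspace(0, 1, 21)):
    """Train one model per noise level t and record the machine-perceived distance d_M(t).

    p_noiseless, p_noisy : arrays of shape (4, 4, 4); t = 1 is the noiseless target,
    t = 0 the completely noisy (local) one.
    train_triangle_model : hypothetical callable returning the learned distribution.
    """
    distances = []
    for t in ts:
        p_t = t * p_noiseless + (1 - t) * p_noisy       # one member of the target family
        p_M = train_triangle_model(p_t)                 # retrain the network for this target
        distances.append(np.linalg.norm(p_t - p_M))     # d_M(t), Euclidean for illustration
    return ts, np.array(distances)
```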

Observing a clear liftoff of the distance d_M(t) at some point is a signal that we are leaving the local set. Somewhat surprisingly, we can deduce even more from the distance d_M(t). Though the shape of the local set and the threshold value t_c are unknown, in some cases, under mild assumptions, we can estimate not only t_c, but also the angle at which the curve of target distributions exits the local set, and in addition gain some insight into the shape of the local set near p_{t_c}. To do this, let us first assume that the local set is flat near p_{t_c} and that the curve traced out by p_t is straight. Then the true distance from the local set is

$\sin(\varphi)\; d(p_t, p_{t_c}) \quad \text{for } t \ge t_c \;(\text{and } 0 \text{ otherwise}),$    (3)

where φ is the angle between the curve and the local set’s hyperplane (see Fig. 2 for an illustration). In the more general setting Eq. (3) is still approximately correct even for t well above t_c, if the curve p_t is almost straight and the local set is almost flat near p_{t_c}. We denote this analytic approximation of the true distance from the local set as d_A. We use Eq. (3) to calculate it, but keep in mind that it is only an approximation. Given an estimate for the two parameters t_c and φ, this function can be compared to what the machine perceives as a distance, d_M. Finding a match between the two distance functions gives us strong evidence that indeed the curve exits the local set at t̂_c at an angle φ̂, where the hat is used to signify the obtained estimates.
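In practice this comparison can be automated: the sketch below evaluates Eq. (3) for candidate values of t_c and φ and keeps the pair that best matches the machine-perceived distances. The Euclidean norm and the simple grid search are our own illustrative choices, and `p_of_t` is a hypothetical callable returning the target distribution at noise level t.

```python
import numpy as np

def analytic_distance(ts, p_of_t, t_c, phi):
    """Eq. (3): distance of p_t from a (locally flat) local set exited at t_c under angle phi."""
    d = np.array([np.linalg.norm(p_of_t(t) - p_of_t(t_c)) for t in ts]) * np.sin(phi)
    return np.where(ts >= t_c, d, 0.0)

def fit_exit_point(ts, d_machine, p_of_t):
    """Grid search for the estimates (t_c_hat, phi_hat) that best reproduce d_M."""
    best = (None, None, np.inf)
    for t_c in np.linspace(ts.min(), ts.max(), 101):
        for phi in np.linspace(0.05, np.pi / 2, 90):
            err = np.mean((analytic_distance(ts, p_of_t, t_c, phi) - d_machine) ** 2)
            if err < best[2]:
                best = (t_c, phi, err)
    return best[:2]
```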

We also get information out of the learned model by looking at the local responses of Alice, Bob and Charlie. Recall that the shared random variables, the sources, are uniformly distributed, hence the response functions encode the whole problem. We can visualize, for example, Bob’s response function p_B(b|α,γ) by sampling several thousand values of the pair (α, γ). In order to capture the stochastic nature of the responses, for each pair we sample from p_B(b|α,γ) thirty times and color-code the results red, blue, green, yellow. By scatter plotting these points with a finite opacity we gain an impression of the response function, such as in Fig. 3.
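Such scatter plots can be produced as in the sketch below, where `p_bob` is a hypothetical callable returning Bob's learned 4-outcome distribution for a given pair (α, γ) (in practice, the corresponding neural network evaluated at that point).

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_bob_response(p_bob, n_points=3000, n_samples=30):
    """Color-code Bob's stochastic responses over the (alpha, gamma) unit square."""
    colors = np.array(['red', 'blue', 'green', 'yellow'])
    alphas, gammas = np.random.rand(2, n_points)
    xs, ys, cs = [], [], []
    for a, g in zip(alphas, gammas):
        for b in np.random.choice(4, size=n_samples, p=p_bob(a, g)):
            xs.append(a); ys.append(g); cs.append(colors[b])
    plt.scatter(xs, ys, c=cs, s=4, alpha=0.05)    # finite opacity reveals stochastic regions
    plt.xlabel('alpha'); plt.ylabel('gamma')
    plt.show()
```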

These figures are already interesting in themselves and can guide us towards analytic guesses of the ideal response functions. However, they can also be used to verify our results in some special cases. For example, if φ = 90° and the local set is sufficiently flat, then the response functions should be the same for all t ≥ t_c, as they are in Fig. 3. (For any target distribution the closest local response functions are not unique, so the responses could in principle vary above t_c. However, after running the algorithm for the full range of t, for each t we check whether the models obtained at other values of t perform better for p_t. This smooths the results and gives more consistent response functions.) On the other hand, if φ < 90°, then we are in a scenario similar to that of the left panel in Fig. 2 and the response functions should differ for different values of t.

Figure 4: Elegant distribution Gisin (2019) results. (Left) Comparison of the distance d_M perceived by the machine and the analytic distance d_A. Both visibility and detector noise model results are shown. Inset: The target (gray) and learned (red) distributions visualized by plotting the probability of each of the 64 possible outcomes, for the detector noise model. Note that most gray dots are almost fully covered by the corresponding red dots. (Right) Responses of Charlie illustrated as a function of (α, β). Detector noise values (top left to bottom right): 0.5, 0.72, 0.76, 1.

III.1 Fritz distribution

First let us consider the quantum distribution proposed by Fritz Fritz (2012), which can be viewed as a Bell scenario wrapped into the triangle topology. Alice and Bob share a singlet, $|\psi^-\rangle = (|01\rangle - |10\rangle)/\sqrt{2}$, while the sources shared with Charlie distribute either maximally entangled or classically correlated states, such as $\rho_{BC} = (|00\rangle\langle 00| + |11\rangle\langle 11|)/2$, and similarly for $\rho_{AC}$. Alice measures the state shared with Charlie in the computational basis and, depending on this random bit, measures either the Pauli σ_z or σ_x observable on her half of the singlet. Bob does the same with his state shared with Charlie and measures either (σ_z + σ_x)/√2 or (σ_z − σ_x)/√2. They then both output the measurement result and the bit which they used to decide the measurement. Charlie measures both of his qubits in the computational basis and announces the two bits. As a noise model we introduce a finite visibility v for the singlet shared by Alice and Bob, thus we examine a Werner state,

$\rho_{AB}(v) = v\, |\psi^-\rangle\langle\psi^-| + (1-v)\, \frac{\mathbb{1}}{4},$    (4)

where 𝟙/4 denotes the maximally mixed state of two qubits. For such a state we expect to find a local model below the threshold of v = 1/√2 ≈ 0.707.
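For reference, the Fritz target distribution can be written down in closed form directly from the Werner-state correlators of the CHSH-type settings described above. The sketch below is our own reconstruction; the encoding of the four-valued outcomes (measurement bit and result) is a convention chosen here.

```python
import numpy as np

def fritz_distribution(v):
    """Fritz target p(a,b,c) with visibility v on the Alice-Bob singlet.

    Encoding (a convention chosen here): a = 2*x + i, b = 2*y + j, c = 2*x + y,
    where x, y are the bits distributed by the sources shared with Charlie and
    i, j are Alice's and Bob's measurement results (0 for +1, 1 for -1).
    """
    p = np.zeros((4, 4, 4))
    for x in range(2):
        for y in range(2):
            # Werner-state correlator for settings sigma_z / sigma_x vs (sigma_z +/- sigma_x)/sqrt(2)
            E = -v / np.sqrt(2) * (1 if (x, y) != (1, 1) else -1)
            for i in range(2):
                for j in range(2):
                    p_ij = (1 + (-1) ** i * (-1) ** j * E) / 4     # p(i, j | x, y)
                    p[2 * x + i, 2 * y + j, 2 * x + y] = p_ij / 4  # x, y uniform: weight 1/4
    return p

assert abs(fritz_distribution(0.8).sum() - 1) < 1e-12
```

Feeding `fritz_distribution(v)` for a range of v into the oracle reproduces the transition at v = 1/√2 discussed below.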

In Fig. 3 we plot the learned (d_M) and analytic (d_A) distances discussed previously, with φ̂ = 90° and v̂_c = 1/√2. The coincidence of the two curves is already good evidence that the machine finds the closest local distributions to the target distributions. Upon examining the response functions of Alice, Bob and Charlie, also in Fig. 3, we see that they do not change above v = 1/√2, which means that the machine finds the same distributions for target distributions outside the local set. This is in line with our expectations. Due to the connection with the standard Bell scenario, we believe the curve exits the local set perpendicularly and that the local set is a polytope (as long as ρ_AC and ρ_BC are classically correlated states), as depicted in the right panel of Fig. 2. These results affirm that our algorithm functions well.

III.2 Elegant distribution

Next we turn our attention to a distribution which is more native to the triangle structure, as it combines entangled states and entangled measurements. We examine the Elegant distribution, which is conjectured in Gisin (2019) to be outside the local set. The three parties share singlets and each perform a joint measurement on their two qubits, the eigenstates of which are

$|\Phi_j\rangle = \frac{\sqrt{3}+1}{2\sqrt{2}}\,|m_j, -m_j\rangle + \frac{\sqrt{3}-1}{2\sqrt{2}}\,|{-m_j}, m_j\rangle,$    (5)

where the |m_j⟩ are the pure qubit states with unit-length Bloch vectors pointing at the four vertices of the tetrahedron, for j = 1, …, 4, and the |−m_j⟩ are the same for the inverted tetrahedron.

We examine two noise models: one at the sources and one at the detectors. First we introduce a visibility v to the singlets, such that all three shared quantum states have the form (4). Second, we examine detector noise, in which each detector defaults independently with probability 1 − ε and gives a uniformly random output as a result. This is equivalent to adding white noise to the quantum measurements performed by the parties, i.e. the positive operator-valued measure elements become $E_j^{\varepsilon} = \varepsilon\, |\Phi_j\rangle\langle\Phi_j| + (1-\varepsilon)\, \frac{\mathbb{1}}{4}$.
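On the level of the observed distribution, this detector noise simply acts as an independent classical channel on each party's outcome. A small sketch of how one could apply it to a noiseless target distribution (with ε the probability that a detector does not default, as above) is:

```python
import numpy as np

def add_detector_noise(p_ideal, eps):
    """With probability 1 - eps each party's detector defaults and outputs uniformly at random."""
    K = eps * np.eye(4) + (1 - eps) * np.ones((4, 4)) / 4   # single-party noise channel
    return np.einsum('xa,yb,zc,abc->xyz', K, K, K, p_ideal)
```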

For both noise models we see a transition in the distance d_M, depicted in Fig. 4, giving us strong evidence that the conjectured distribution is indeed nonlocal. Through this examination we gain insight into the noise robustness of the Elegant distribution as well: the distribution appears to remain nonlocal above finite visibility and detector noise thresholds, with the curves exiting the local set at the threshold values shown in Fig. 4. Note that for both distribution families, by looking at the unit tangent vector, one can analytically verify that the curves are almost straight for noise parameters above the observed thresholds. This gives us even more confidence that it is legitimate to use the analytic distance d_A as a reference (see Eq. (3)). In Fig. 4 we illustrate how the response function of Charlie changes when adding detector noise. It is peculiar how the machine often prefers horizontal and vertical separations of the latent variable space, with very clean, deterministic responses, similarly to how we would do it intuitively, especially for noiseless target distributions.

Figure 5: Renou et al. distribution Renou et al. (2019a) results. (Left) The distance d_M perceived by the machine as a function of the measurement parameter u², with no added noise. Inset: The target (gray) and learned (red) distributions visualized by plotting the probability of each of the 64 possible outcomes, for two values of u² approximately corresponding to the two peaks in the scan. Note that most gray dots are almost fully covered by the corresponding red dots. (Right) Noise scans, i.e. the analytic d_A (see Eq. (3)) and the learned d_M, for the target distribution singled out in the text, under the detector noise and visibility noise models.

III.3 Renou et al. distribution

The authors of Renou et al. (2019a) recently introduced the first distribution in the triangle scenario which is not directly inspired by the Bell scenario and is proven to be nonlocal. To generate the distribution, take all three shared states to be the maximally entangled states $|\phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$. Each party performs the same measurement on its two qubits, characterized by a single real parameter u, with eigenstates $|01\rangle$, $|10\rangle$, $u|00\rangle + v|11\rangle$ and $v|00\rangle - u|11\rangle$, where $v = \sqrt{1-u^2}$. The authors prove that this distribution is nonlocal for an intermediate range of u², and also show that there exist local models at the extremal values u² = 1/2 and u² = 1. Though they argue that there must be some noise tolerance of the distribution, they lack a proper estimation of it.
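The corresponding target distributions can be obtained by a direct tensor contraction of the three shared states with the three measurement bases. The sketch below is our own reconstruction; the ordering of each party's two qubits is a convention chosen here (such relabellings do not affect membership in the local set).

```python
import numpy as np

def renou_distribution(u):
    """Target p(a,b,c): three maximally entangled pairs on the triangle, each party
    measuring in the basis {|01>, |10>, u|00>+v|11>, v|00>-u|11>} with v = sqrt(1-u^2)."""
    v = np.sqrt(1 - u ** 2)
    phi = np.eye(2) / np.sqrt(2)                  # amplitudes of |phi+> = (|00>+|11>)/sqrt(2)
    E = np.array([[[0., 1.], [0., 0.]],           # |01>
                  [[0., 0.], [1., 0.]],           # |10>
                  [[u, 0.], [0., v]],             # u|00> + v|11>
                  [[v, 0.], [0., -u]]])           # v|00> - u|11>
    # Sources: alpha -> (Bob, Charlie), beta -> (Charlie, Alice), gamma -> (Alice, Bob);
    # the index order of each party's two qubits is a convention chosen here.
    amp = np.einsum('ij,kl,mn,alm,bni,cjk->abc', phi, phi, phi, E, E, E)
    return amp ** 2                               # all amplitudes are real here

p = renou_distribution(0.9)
assert abs(p.sum() - 1) < 1e-12
```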

First we examine these distributions as a function of u², without any added noise. The results are depicted in Fig. 5 (left panel). To start with, note how the distances are numerically much smaller than in the previous examples, i.e. the machine finds distributions which are extremely close to the targets. See the inset in Fig. 5 for examples which exhibit how close the learned distributions are to the targets, even for the points with the largest distances. We observe, consistently with the analytic findings, that in the provenly nonlocal regime the machine finds a non-zero distance from the local set. It also recovers the local models at the extremal values of u², with minor difficulties in a narrow region of the scan. Astonishingly, the machine finds that for some values of u² outside the provenly nonlocal regime, the distance from the local set is even larger than inside it. This is a somewhat surprising finding, as one might naively assume that between the extremal local points and the provenly nonlocal regime the distributions are local, especially when one looks at the nonlocality proof used in the other regime. However, this is not what the machine finds. Instead it gives us a nontrivial conjecture about nonlocality in a new range of the parameter u. Though extracting precise boundaries in terms of u for the new nonlocal regime would be difficult from the results in Fig. 5 alone, they strongly suggest that there is some nonlocality in this regime.

Finally, we have a look at the noise robustness of the distribution at the value of u which, within the provenly nonlocal regime, is approximately the most distant from the local set. For the detector noise and visibility noise models we recover the corresponding threshold and exit-angle estimates shown in Fig. 5 (right panel). Note that these estimates are much cruder than those obtained for the Elegant distribution, primarily due to the target distributions being so much closer to the local set and the neural network getting stuck in local optima. This increases the variation between independent runs of the learning algorithm: for example, the distance found for the same noiseless target distribution differs slightly between the independent runs shown in the left and right panels of Fig. 5. The absolute difference is small, but the relative changes can have an impact when extracting noise thresholds. Given that the local set is so close to the target distributions (exemplified in the inset of Fig. 5), it is easily possible that the noise tolerance is smaller than the one obtained here.

IV Discussion

Let us contrast the presented method with standard numerical techniques. The standard method for tackling the membership problem in network nonlocality is numerical optimization. For a fixed number of possible outputs per party, k, one can without loss of generality take the hidden variables to be discrete with a finite alphabet size, and the response functions to be deterministic. In fact the cardinality of the hidden variables can be upper bounded as a function of k Rosset et al. (2017); specifically for the triangle this upper bound is k³ − k. This results in a straightforward optimization over the probabilities of each hidden variable symbol and the deterministic responses of the observers, giving 3(k³ − k − 1) continuous parameters and a large discrete configuration space to optimize over jointly. Note that this is a non-convex optimization space, making it a terribly difficult task. For binary outputs, i.e. k = 2, this means only 15 continuous variables and a discrete configuration space of 432 possibilities, and is feasible. However, already for the case of quaternary outputs, k = 4, this optimization is a computational nightmare on standard CPUs, with a looming 177 continuous parameters and a discrete configuration space of size 43200. Even when constraining the response functions to be the same for the three parties and the latent variables to have the same distributions, the problem becomes intractable already at hidden variable cardinalities far below the upper bound of 60 that would need to be examined. Standard numerical optimization tools quickly become infeasible even for the triangle configuration, not to mention larger networks!

The causal modeling and Bayesian network communities examine scenarios similar to those relevant for quantum information Pearl (2000); Koller and Friedman (2009). At the core of both lines of research are directed acyclic graphs and the probability distributions generated by them. In these communities there exist methods for the so-called ‘structure recovery’ or ‘structure learning’ task. However, these methods are either not applicable to our particular scenarios or are also approximate learning methods which make many assumptions on the hidden variables, including that the hidden variables are discrete. Hence, even if these learning methods are quicker than standard optimization for current scenarios of interest, they will run into the same scaling problem of the latent variable cardinality.

The method demonstrated in this paper attacks the problem from a different angle. It relaxes both the discrete hidden variable and deterministic response function assumptions which are made by the methods mentioned previously. The complexity of the problem now boils down to the response functions of the observers, each of which is represented by a feedforward neural network. Though our method is an approximate one, one can increase its precision by increasing the size of the neural network, the number of samples we sum over (N) and the amount of time provided for learning. Due to universal approximation theorems we are guaranteed to be able to represent essentially any function with arbitrary precision Cybenko (1989); Hornik (1991); Lu et al. (2017). For the first two distributions examined here we find that there is no significant change in the learned distributions after increasing the neural network’s width and depth above some moderate level, i.e. we have reached a plateau in performance. Regarding the Elegant distribution, for example, we used a depth of 5 and a width of 30 per party. However, we did not do a rigorous analysis of the minimum required size; perhaps an even smaller network would have worked. We were satisfied with the current complexity, since getting a local model for a single target distribution takes a few minutes on a standard computer, using a minibatch size of several thousand samples. For the Renou et al. distribution there is still room for improvement in terms of the neural network architecture and the training procedure. The question of the minimal required complexity of the response functions for a given target distribution is in itself interesting enough for a separate study, and can become a tedious task, since the amount of time that the machine needs to learn typically increases with network size.

We have demonstrated how, by adding noise to a distribution and examining a family of distributions with the neural network, we can deduce information about the membership problem. For a single target distribution the machine finds only an upper bound to the distance from the local set. By examining families of target distributions, however, we get a robust signature of nonlocality due to the clear transitions in the distance function, which match very well with the approximately expected distances.

V Conclusion

In conclusion, we provide a method for testing whether a distribution is classically reproducible over a directed acyclic graph, relying on a fundamental connection between causal structures and neural networks. This simple yet effective method can be used for arbitrary causal structures, even in cases where current analytic tools are unavailable and numerical methods are futile. It allows quantum information scientists to test whether their conjectured quantum, or post-quantum, distributions are locally reproducible, hopefully paving the way to a deeper understanding of quantum nonlocality in networks.

To illustrate the relevance of the method, we have applied it to two open problems, giving firm numerical evidence that the Elegant distribution is nonlocal on the triangle network, and getting estimates for the noise robustness of both the Elegant and the Renou et al. distribution, under physically relevant noise models. Additionally, we conjecture nonlocality in a surprising range of the Renou et al. distribution. Our work motivates finding proofs of the nonlocality for both these distributions.

The obtained results on nonlocality are insightful and convincing, but are nonetheless only numerical evidence. Examining whether a certificate of nonlocality can be obtained from machine learning techniques would be an interesting further research direction. In particular, it would be fascinating if a machine could derive, or at least give a good guess for a (nonlinear) Bell-type inequality which is violated by the Elegant or Renou et al. distribution. In general, seeing what insight can be gained about the boundary of the local set from machine learning would be interesting. Perhaps a step in this direction would be to understand better what the machine learned, for example by somehow extracting an interpretable model from the neural network analytically, instead of by sampling from it. A different direction for further research would be to apply similar ideas to networks with quantum sources, allowing a machine to learn quantum strategies for some target distributions.

VI Code Availability

Our implementation of the method for the triangle network and for the two-party Bell scenario can be found at www.github.com/tkrivachy/neural-network-for-nonlocality-in-networks.

Acknowledgements.
The authors thank Raban Iten, Tony Metger, Elisa Bäumer and Marc-Olivier Renou for fruitful discussions. TK, YC, NG and NB acknowledge financial support from the Swiss National Science Foundation (Starting grant DIAQ, and QSIT), and the European Research Council (ERC MEC). DC acknowledges support from the Ramon y Cajal fellowship, Spanish MINECO (QIBEQI, Project No. FIS2016-80773-P, and Severo Ochoa SEV-2015-0522) and Fundació Cellex, Generalitat de Catalunya (SGR875 and CERCA Program).

References
