Choice Set Misspecification in Reward Inference

Abstract

Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior. A promising alternative to manually specifying reward functions is to enable robots to infer them from human feedback, like demonstrations or corrections. To interpret this feedback, robots treat as approximately optimal a choice the person makes from a choice set, like the set of possible trajectories they could have demonstrated or possible corrections they could have made. In this work, we introduce the idea that the choice set itself might be difficult to specify, and analyze choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback. We propose a classification of different kinds of choice set misspecification, and show that these different classes lead to meaningful differences in the inferred reward and resulting performance. While we would normally expect misspecification to hurt, we find that certain kinds of misspecification are neither helpful nor harmful (in expectation). However, in other situations, misspecification can be extremely harmful, leading the robot to believe the opposite of what it should believe. We hope our results will allow for better prediction and response to the effects of misspecification in real-world reward inference.

1 Introduction

Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior [14, 13]. A promising alternative to manually specifying reward functions is to design techniques that allow robots to infer them from observing and interacting with humans.

Figure 1: Example choice set misspecification: The human chooses a pack of peanuts at the supermarket. They only notice the expensive one because it has flashy packaging, so that’s the one they buy. However, the robot incorrectly assumes that the human can see both the expensive flashy one and the cheap one with dull packaging but extra peanuts. As a result, the robot incorrectly infers that the human likes flashy packaging, paying more, and getting fewer peanuts.

These techniques typically model humans as optimal or noisily optimal. Unfortunately, humans tend to deviate from optimality in systematically biased ways [12, 5]. Recent work improves upon these models by modeling pedagogy [10], strategic behavior [23], risk aversion [15], hyperbolic discounting [7], or indifference between similar options [4]. However, given the complexity of human behavior, our human models will likely always be at least somewhat misspecified [22].

One way to formally characterize misspecification is as a misalignment between the real human and the robot’s assumptions about the human. Recent work in this vein has examined incorrect assumptions about the human’s hypothesis space of rewards [3], their dynamics model of the world [19], and their level of pedagogic behavior [16]. In this work, we identify another potential source of misalignment: what if the robot is wrong about what feedback the human could have given? Consider the situation illustrated in Figure 1, in which the robot observes the human going grocery shopping. While the grocery store contains two packages of peanuts, the human only notices the more expensive version with flashy packaging, and so buys that one. If the robot doesn’t realize that the human was effectively unable to evaluate the cheaper package on its merits, it will learn that the human values flashy packaging.

We formalize this in the recent framework of reward-rational implicit choice (RRiC) [11] as misspecification in the human choice set, which specifies what feedback the human could have given. Our core contribution is to categorize choice set misspecification into several formally and empirically distinguishable “classes”, and find that different types have significantly different effects on performance. As we might expect, misspecification is usually harmful; in the most extreme case the choice set is so misspecified that the robot believes the human feedback was the worst possible feedback for the true reward, and so updates strongly towards the opposite of the true reward. Surprisingly, we find that under other circumstances misspecification is provably neutral: it neither helps nor hurts performance in expectation. Crucially, these results suggest that not all misspecification is equivalently harmful to reward inference: we may be able to minimize negative impact by systematically erring toward particular misspecification classes defined in this work. Future work will explore this possibility.

2 Reward Inference

There are many ways that a human can provide feedback to a robot: demonstrations [18, 1, 24], comparisons [20, 6], natural language [8], corrections [2], the state of the world [21], proxy rewards [9, 17], etc. Jeon et al. [11] propose a unifying formalism for reward inference that captures all of these possible feedback modalities, called reward-rational (implicit) choice (RRiC). Rather than study each feedback modality separately, we study misspecification in this general framework.

RRiC consists of two main components: the human’s choice set, which corresponds to what the human could have done, and the grounding function, which converts choices into (distributions over) trajectories so that rewards can be computed.

For example, in the case of learning from comparisons, the human chooses which of two trajectories is better. Thus, the human’s choice set is simply the set of trajectories they are comparing, and the grounding function is the identity. A more complex example is learning from the state of the world, in which the robot is deployed in an environment in which a human has already acted for $T$ timesteps, and must infer the human’s preferences from the current world state. In this case, the robot can interpret the human as choosing between different possible states. Thus, the choice set is the set of possible states that the human could reach in $T$ timesteps, and the grounding function maps each such state to the set of trajectories that could have produced it.
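To illustrate, the two grounding functions described above might look as follows in code, where a distribution over trajectories is represented simply as a list (interpreted uniformly) and a trajectory as a sequence of states; these representations are simplifying assumptions, not part of the RRiC formalism itself.

```python
# Identity grounding for comparison feedback: the chosen element already is a
# trajectory, so it grounds to the point-mass "distribution" on itself.
def ground_comparison(choice_trajectory):
    return [choice_trajectory]

# State-of-the-world grounding: a chosen state grounds to every trajectory that
# could have produced it within T timesteps. `all_trajectories` is assumed to be
# an enumerable collection of state sequences in a small environment.
def ground_state(chosen_state, all_trajectories, T):
    return [xi for xi in all_trajectories
            if len(xi) <= T + 1 and xi[-1] == chosen_state]

# Example with trajectories written as tuples of states.
print(ground_state("s2", [("s0", "s1", "s2"), ("s0", "s2"), ("s0", "s1")], T=2))
```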

Let $\xi$ denote a trajectory and $\Xi$ denote the set of all possible trajectories. Given a choice set $C$ for the human and grounding function $\psi : C \to \Delta(\Xi)$, Jeon et al. define a procedure for reward learning. They assume that the human is Boltzmann-rational with rationality parameter $\beta$, so that the probability of choosing any particular feedback $c \in C$ is given by:

$$P(c \mid \theta, C) \;=\; \frac{\exp\left(\beta\, \mathbb{E}_{\xi \sim \psi(c)}\left[r_\theta(\xi)\right]\right)}{\sum_{c' \in C} \exp\left(\beta\, \mathbb{E}_{\xi \sim \psi(c')}\left[r_\theta(\xi)\right]\right)} \qquad (1)$$

From the robot’s perspective, every piece of feedback $c$ is an observation about the true reward parameterization $\theta^*$, so the robot can use Bayesian inference to infer a posterior over $\theta$. Given a prior over reward parameters $P(\theta)$, the RRiC inference procedure is defined as:

$$P(\theta \mid c) \;\propto\; P(c \mid \theta, C)\, P(\theta) \qquad (2)$$
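To make the inference procedure concrete, the following minimal Python sketch implements Eqs. 1 and 2 for a discrete set of candidate reward parameterizations and demonstration feedback with identity grounding (trajectories summarized by feature counts). The names and toy numbers are illustrative only, not the implementation used in our experiments.

```python
import numpy as np

def boltzmann_likelihood(choice, choice_set, theta, beta=1.0):
    """P(choice | theta, choice_set) for a Boltzmann-rational human (Eq. 1).

    Each element of `choice_set` is a trajectory summarized by its feature
    counts, so reward is the linear function theta . phi (identity grounding).
    """
    rewards = np.array([theta @ phi for phi in choice_set])
    logits = beta * rewards
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    idx = next(i for i, phi in enumerate(choice_set) if np.allclose(phi, choice))
    return probs[idx]

def rric_posterior(choice, assumed_choice_set, candidate_thetas, prior, beta=1.0):
    """Posterior over reward parameterizations (Eq. 2), normalized."""
    likelihoods = np.array([
        boltzmann_likelihood(choice, assumed_choice_set, theta, beta)
        for theta in candidate_thetas
    ])
    posterior = likelihoods * np.asarray(prior, dtype=float)
    return posterior / posterior.sum()

# Toy example: two candidate rewards, two demonstrations (feature counts).
candidate_thetas = [np.array([+1.0, -1.0]), np.array([-1.0, +1.0])]
prior = [0.5, 0.5]
demos = [np.array([3.0, 0.0]), np.array([0.0, 3.0])]   # the human's choice set
human_choice = demos[0]                                # best under theta = (+1, -1)
print(rric_posterior(human_choice, demos, candidate_thetas, prior))  # strongly favors the first theta
```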

Since we care about misspecification of the choice set $C$, we focus on learning from demonstrations, where we restrict the set of trajectories that the expert can demonstrate. This enables us to have a rich choice set while allowing for a simple grounding function (the identity). In future work, we aim to test choice set misspecification with other feedback modalities as well.

3 Choice Set Misspecification

For many common forms of feedback, including demonstrations and proxy rewards, the RRiC choice set is implicit. The robot knows which element of feedback the human provided (e.g., which demonstration they performed), but must assume which elements of feedback the human could have provided, based on its model of the human. However, this assumption could easily be incorrect – the robot may assume that the human has capabilities that they do not, or may fail to account for cognitive biases that blind the human to particular feedback options, such as the human bias towards the most visually attention-grabbing choice in Fig 1.

To model such effects, we assume that the human selects feedback according to Eq. 1 using their true choice set $C_H$, while the robot updates its belief assuming a different choice set $C_R$, obtaining a misspecified posterior over rewards. Note that $C_R$ is the robot’s assumption about what the human’s choice set is – this is distinct from the robot’s action space. When $C_R \neq C_H$, we get choice set misspecification.

It is easy to detect such misspecification when the human chooses feedback $c_H \notin C_R$. In this case, the robot observes a choice that it believes to be impossible, which should certainly be grounds for reverting to some safe baseline policy. So, we only consider the case where the human’s choice $c_H$ is also present in $C_R$ (which also requires $C_H$ and $C_R$ to have at least one element in common).

Within these constraints, we propose a classification of types of choice set misspecification in Table 1. On the vertical axis, misspecification is classified according to the location of the optimal element of feedback $c^* \in C_H \cup C_R$. If $c^*$ is available to the human ($c^* \in C_H$), then the class code begins with A. We only consider the case where $c^*$ is also in $C_R$: the case where it is in $C_H$ but not $C_R$ is uninteresting, as the robot would observe the “impossible” event of the human choosing $c^*$, which immediately reveals misspecification, at which point the robot should revert to some safe baseline policy. If $c^* \notin C_H$, then we must have $c^* \in C_R$ (since $c^*$ is chosen from $C_H \cup C_R$), and the class code begins with B. On the horizontal axis, misspecification is classified according to the relationship between $C_R$ and $C_H$: $C_R$ may be a subset (code 1), a superset (code 2), or an intersecting set (code 3) of $C_H$. For example, class A1 describes the case in which the robot’s choice set is a subset of the human’s (perhaps because the human is more versatile), but both choice sets contain the optimal choice (perhaps because it is obvious).

                          $C_R \subset C_H$ (1)    $C_R \supset C_H$ (2)    $C_R$, $C_H$ overlap (3)
$c^* \in C_H$ (A)         A1                       A2                       A3
$c^* \notin C_H$ (B)      –                        B2                       B3
Table 1: Choice set misspecification classification, where $C_R$ is the robot’s assumed choice set, $C_H$ is the human’s actual choice set, and $c^*$ is the optimal element of $C_H \cup C_R$. B1 is omitted because if $C_R \subset C_H$, then $C_R \setminus C_H$ is empty and cannot contain $c^*$.
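The classification can be expressed directly in code. The following hypothetical helper assigns a $(C_R, C_H)$ pair to one of the classes in Table 1, treating choice sets as Python sets and $c^*$ as the optimal element of $C_H \cup C_R$; it mirrors the definitions above rather than any implementation detail of our experiments.

```python
def misspecification_class(c_r: set, c_h: set, c_star) -> str:
    """Classify a (C_R, C_H) pair per Table 1. `c_star` is the optimal element of C_R | C_H."""
    if c_r == c_h:
        raise ValueError("Choice set is correctly specified; no misspecification class applies.")
    if c_star in c_h and c_star not in c_r:
        raise ValueError("Excluded case: the robot would observe an 'impossible' choice.")
    row = "A" if c_star in c_h else "B"   # vertical axis: where the optimal element lives
    if c_r < c_h:                         # C_R is a proper subset of C_H
        col = "1"
    elif c_r > c_h:                       # C_R is a proper superset of C_H
        col = "2"
    else:                                 # overlapping sets, neither contains the other
        col = "3"
    cls = row + col
    assert cls != "B1", "B1 cannot occur: C_R a subset of C_H leaves no room for c* outside C_H"
    return cls

# Example: c* is in both sets and C_R is a subset of C_H, so this is class A1.
print(misspecification_class(c_r={"d1", "d2"}, c_h={"d1", "d2", "d3"}, c_star="d1"))
```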

4 Experiments

To determine the effects of misspecification class, we artificially generated $C_H$ and $C_R$ with the properties of each particular class, simulated human feedback, ran RRiC reward inference, and then evaluated the robot’s resulting belief distribution and optimal policy.

Figure 2: The set of four gridworlds used in randomized experiments, with the lava feature marked in red.

4.1 Experimental Setup

Environment

To isolate the effects of misspecification and allow for computationally tractable Bayesian inference, we ran experiments in toy environments. We ran the randomized experiments in the four gridworlds shown in Fig 2. Each square in environment $E_i$ is a state $s$. Lava is a continuous feature, while goal is a binary feature set to 1 in the lower-right square of each grid and 0 everywhere else. The true reward function is a linear combination of these features and a constant stay-alive cost incurred at each timestep, parameterized by $\theta^*$. Each episode begins with the robot in the upper-left corner and ends once the robot reaches the goal state or the episode length reaches the horizon of 35 timesteps. Robot actions move the robot one square in a cardinal or diagonal direction, with actions that would move the robot off of the grid causing it to remain in place. The transition function is deterministic. Each environment $E_i$ defines an MDP $M_i$.
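For concreteness, the following sketch shows one way the per-state features and the linear reward could be represented, under the assumption that the continuous feature is lava intensity and the binary feature marks the lower-right goal square; the grid size, lava layout, and stay-alive cost below are placeholders, not our actual values.

```python
import numpy as np

GRID_SIZE = 5            # placeholder; the actual grid dimensions may differ
HORIZON = 35             # episode horizon from the text

# Per-state features phi(s) = (lava(s), goal(s)); the lava layout here is a
# random placeholder standing in for the fixed layouts in Fig 2.
rng = np.random.default_rng(0)
lava = rng.random((GRID_SIZE, GRID_SIZE))
goal = np.zeros((GRID_SIZE, GRID_SIZE))
goal[-1, -1] = 1.0       # binary goal feature: 1 in the lower-right square

def reward(state, theta_lava, theta_goal, stay_alive_cost=-0.1):
    """Linear reward: weighted features plus a constant per-timestep cost.
    The stay-alive cost value here is illustrative."""
    row, col = state
    return theta_lava * lava[row, col] + theta_goal * goal[row, col] + stay_alive_cost

def trajectory_return(states, theta_lava, theta_goal):
    """Return of a trajectory given as a sequence of (row, col) states."""
    return sum(reward(s, theta_lava, theta_goal) for s in states)

# Example: a short diagonal trajectory toward the goal.
print(trajectory_return([(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)],
                        theta_lava=-1.0, theta_goal=1.0))
```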

Inference

While the RRiC framework enables inference from many different types of feedback, we use demonstration feedback here because demonstrations have an implicit choice set and a straightforward deterministic grounding. Only the human knows the true reward function parameterization $\theta^*$. The robot begins with a uniform prior distribution over a set of reward parameterizations $\Theta$ in which the lava and goal weights vary but the stay-alive cost is fixed; $\Theta$ contains $\theta^*$. RRiC inference proceeds as follows for each choice set tuple $(C_H, C_R)$ and environment $E_i$. First, the simulated human selects the best demonstration $c_H$ from their choice set $C_H$ with respect to the true reward $r_{\theta^*}$. Then, the simulated robot uses Eq. 2 to infer a “correct” distribution $b_{C_H}$ over reward parameterizations using the true human choice set, and a “misspecified” distribution $b_{C_R}$ using the misspecified human choice set. In order to evaluate the effects of each distribution on robot behavior, we define new MDPs $M_{C_H}$ and $M_{C_R}$ for each environment, with reward functions induced by the corresponding belief distributions, solve them using value iteration, and then evaluate the rollouts of the resulting deterministic policies according to the true reward function $r_{\theta^*}$.
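The per-tuple procedure can be summarized by the following sketch, in which `infer_posterior`, `plan_under_belief`, and `true_return` are placeholders for Eq. 2, value iteration on the belief-induced MDP, and evaluation under $r_{\theta^*}$, respectively; the interfaces are illustrative rather than the exact ones used in our experiments.

```python
def run_trial(c_h, c_r, theta_true, infer_posterior, plan_under_belief, true_return):
    """One (C_H, C_R) trial: simulate the human, infer both beliefs, compare rollouts.

    infer_posterior(choice, choice_set) -> belief over reward parameterizations (Eq. 2)
    plan_under_belief(belief)           -> trajectory of the policy optimal for that belief
    true_return(trajectory, theta)      -> return of a trajectory under reward r_theta
    """
    # Simulated human: pick the demonstration in C_H that is best under theta*.
    human_choice = max(c_h, key=lambda demo: true_return(demo, theta_true))

    # Robot inference with the correct and the misspecified choice set.
    b_correct = infer_posterior(human_choice, c_h)
    b_misspec = infer_posterior(human_choice, c_r)

    # Plan under each belief, then compare rollouts under the true reward.
    xi_correct = plan_under_belief(b_correct)
    xi_misspec = plan_under_belief(b_misspec)
    regret = true_return(xi_correct, theta_true) - true_return(xi_misspec, theta_true)
    return b_correct, b_misspec, regret
```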

4.2 Randomized Choice Sets

We ran experiments with randomized choice set selection for each misspecification class to evaluate the effects of class on entropy change and regret.

Conditions

The experimental conditions are the classes of choice set misspecification in Table 1: A1, A2, A3, B2 and B3. We tested each misspecification class on each environment, then averaged across environments to evaluate each class. For each environment $E_i$, we first generated a master set of all demonstrations that are optimal w.r.t. at least one reward parameterization $\theta \in \Theta$. For each experimental class, we randomly generated 6 valid $(C_H, C_R)$ tuples drawn from this master set. Duplicate tuples, and tuples in which $C_H = C_R$, were not considered.

Measures

There are two key experimental measures: entropy change and regret. Entropy change is the difference in entropy between the correct distribution $b_{C_H}$ and the misspecified distribution $b_{C_R}$. That is, $\Delta H = H(b_{C_H}) - H(b_{C_R})$. If entropy change is positive, then misspecification induces overconfidence, and if it is negative, then misspecification induces underconfidence.

Regret is the difference in return between the optimal solution to $M_{C_H}$, with the correctly-inferred reward parameterization, and the optimal solution to $M_{C_R}$, with the incorrectly-inferred parameterization, averaged across all 4 environments. If $\xi_{C_H}$ is an optimal trajectory in $M_{C_H}$ and $\xi_{C_R}$ is an optimal trajectory in $M_{C_R}$, then regret $= r_{\theta^*}(\xi_{C_H}) - r_{\theta^*}(\xi_{C_R})$. Note that we are measuring regret relative to the optimal action under the correctly specified belief, rather than the optimal action under the true reward. As a result, it is possible for regret to be negative, e.g. if the misspecification makes the robot more confident in the true reward than it would be under correct specification, and so leads it to execute a better policy.
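Both measures are simple functions of the inferred belief vectors and the rollout returns; a direct transcription of the definitions above is given below.

```python
import numpy as np

def entropy(belief, eps=1e-12):
    """Shannon entropy of a discrete belief over reward parameterizations."""
    b = np.asarray(belief, dtype=float)
    return float(-np.sum(b * np.log(b + eps)))

def entropy_change(belief_correct, belief_misspecified):
    """Positive => misspecification induced overconfidence; negative => underconfidence."""
    return entropy(belief_correct) - entropy(belief_misspecified)

def regret(return_correct, return_misspecified):
    """Both arguments are rollout returns evaluated under the true reward r_theta*."""
    return return_correct - return_misspecified

# Example: the misspecified belief is sharper (lower entropy), so entropy change is positive.
print(entropy_change([0.25, 0.25, 0.25, 0.25], [0.7, 0.1, 0.1, 0.1]))
```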

4.3 Biased Choice Sets

We also ran an experiment in a fifth gridworld where we selected the human choice set according to a realistic human bias, to illustrate how choice set misspecification may arise in practice. In this experiment the human only considers demonstrations that end at the goal state because, to humans, the word “goal” can be synonymous with “end” (Fig 3(a)). However, to the robot, the goal is merely one of multiple features in the environment. The robot has no reason to privilege it over the other features, so the robot considers every demonstration that is optimal w.r.t. some possible reward parameterization (Fig 3(b)). The trajectory that only the robot considers is marked in blue. We ran RRiC inference using this $(C_H, C_R)$ pair and evaluated the results using the same measures described above.

(a) Human choice set $C_H$
(b) Robot choice set $C_R$
Figure 3: Human and robot choice sets with a human goal bias. Because the human only considers trajectories that terminate at the goal, they don’t consider the blue trajectory in $C_R$.

5 Results

We summarize the aggregated measures, discuss the realistic human bias result, then examine two interesting results: symmetry between classes A1 and A2 and high regret in class B3.

5.1 Aggregate Measures in Randomized Experiments

Figure 4: Entropy Change (N=24). The box is the IQR, the whiskers are the range, and the blue line is the median. There are no outliers.
Figure 5: Regret (N=24). The box is the IQR, the whiskers extend to the most distant points within 1.5× the IQR, and the green line is the mean. Multiple outliers are omitted.

Entropy Change

Entropy change varied significantly across misspecification class. As shown in Fig 4, the interquartile ranges (IQRs) of classes A1 and A3 did not overlap with the IQRs of A2 and B2. Moreover, A1 and A3 had positive medians, suggesting a tendency toward overconfidence, while A2 and B2 had negative medians, suggesting a tendency toward underconfidence. B3 was less distinctive, with an IQR that overlapped with that of all other classes. Notably, the distributions over entropy change of classes A1 and A2 are precisely symmetric about 0.

Regret

Regret also varied as a function of misspecification class. Each class had a median regret of 0, suggesting that misspecification commonly did not induce a large enough shift in belief for the robot to learn a different optimal policy. However, the mean regret, plotted as green lines in Fig 5, did vary markedly across classes. Regret was sometimes so high in class B3 that outliers skewed the mean regret beyond the whiskers of the boxplot. Again, classes A1 and A2 are precisely symmetric. We discuss this symmetry in Section 5.3, then discuss the poor performance of B3 in Section 5.4.

5.2 Effects of Biased Choice Sets

The human bias of only considering demonstrations that terminate at the goal leads to very poor inference in this environment. Because the human does not consider the blue demonstration from Fig 3(b), which avoids the lava altogether, they are forced to provide the demonstration in Fig 6(a), which terminates at the goal but is long and encounters lava. As a result, the robot infers the very incorrect belief distribution in Fig 6(b). Not only is this distribution underconfident (negative entropy change), but it also induces poor performance (positive regret). This result shows that a small incorrect assumption – that the human considered and rejected demonstrations that don’t terminate at the goal – can have an outsized negative impact on robot reward inference.

(a) Human feedback $c_H$
(b) Misspecified belief $b_{C_R}$
Figure 6: Human feedback and the resulting misspecified robot belief with a human goal bias. Because the feedback that the biased human provides is poor, the robot learns a very incorrect distribution over rewards.

5.3 Symmetry

Intuitively, misspecification should lead to worse performance in expectation. Surprisingly, when we combine misspecification classes A1 and A2, their impact on entropy change and regret is actually neutral. The key to this is their symmetry – if we switch the contents of $C_H$ and $C_R$ in an instance of class A1 misspecification, we get an instance of class A2 with exactly the opposite performance characteristics. Thus, if a pair in A1 is harmful, then the analogous pair in A2 must be helpful, meaning that it is better for performance than having the correct belief about the human’s choice set. We show below that this is always the case under certain symmetry conditions that apply to A1 and A2.

Assume that there is a master choice set $\mathcal{C}$ containing all possible elements of feedback for MDP $M$, and that choice sets are sampled from a distribution $\mathcal{D}$ over pairs of subsets $(C_1, C_2) \in \mathcal{P}(\mathcal{C}) \times \mathcal{P}(\mathcal{C})$ with $C_1 \cap C_2 \neq \emptyset$ (where $\mathcal{P}(\mathcal{C})$ is the set of subsets of $\mathcal{C}$), and that $\mathcal{D}$ is symmetric, i.e. $\mathcal{D}(C_1, C_2) = \mathcal{D}(C_2, C_1)$. Let $V^*(\theta)$ be the expected return from maximizing the reward function $r_\theta$ in $M$. A reward parameterization $\theta^*$ is chosen from a shared prior $P(\theta)$, and $(C_H, C_R)$ are sampled from $\mathcal{D}$. The human chooses the optimal element of feedback $c_H$ in their choice set $C_H$.

Theorem 1.

Let $\mathcal{C}$ and $\mathcal{D}$ be defined as above. Assume that for all pairs $(C_1, C_2)$ in the support of $\mathcal{D}$, we have $\arg\max_{c \in C_1} \mathbb{E}_{\xi \sim \psi(c)}[r_{\theta^*}(\xi)] = \arg\max_{c \in C_2} \mathbb{E}_{\xi \sim \psi(c)}[r_{\theta^*}(\xi)]$; that is, the human would pick the same feedback regardless of which choice set she sees. If the robot follows RRiC inference according to Eq. 2 and acts to maximize expected reward under the inferred belief, then:

$$\mathbb{E}_{\theta^* \sim P(\theta),\ (C_H, C_R) \sim \mathcal{D}}\!\left[\mathrm{Regret}(C_H, C_R)\right] = 0.$$

Proof.

Define $G(C, c)$ to be the return, evaluated under the true reward $r_{\theta^*}$, achieved when the robot follows RRiC inference with choice set $C$ and feedback $c$, then acts to maximize expected reward under the inferred belief, keeping $\theta^*$ fixed. Since the human’s choice $c$ is the same for both elements of any pair $(C_1, C_2)$, regret is anti-symmetric:

$$\mathrm{Regret}(C_1, C_2) = G(C_1, c) - G(C_2, c) = -\mathrm{Regret}(C_2, C_1).$$

Since $\mathcal{D}$ is symmetric, $(C_1, C_2)$ is as likely as $(C_2, C_1)$. Combined with the anti-symmetry of regret, this implies that the expected regret must be zero:

$$\mathbb{E}_{(C_H, C_R) \sim \mathcal{D}}\!\left[\mathrm{Regret}(C_H, C_R)\right] = \frac{1}{2} \sum_{(C_1, C_2)} \mathcal{D}(C_1, C_2)\left[\mathrm{Regret}(C_1, C_2) + \mathrm{Regret}(C_2, C_1)\right] = 0.$$

An analogous proof would work for any anti-symmetric measure (including entropy change).
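The argument can be checked numerically with a toy setup: fix an arbitrary return function $G(C, c)$, enumerate a symmetric set of ordered pairs of choice sets whose best elements coincide (so the human's feedback is the same for either set), and average the regrets. The preference order and return values below are arbitrary placeholders, not quantities from our experiments.

```python
import itertools
import random

random.seed(0)
master = ["a", "b", "c", "d"]        # toy master choice set

G = {}
def ret(choice_set, choice):
    """Arbitrary but fixed 'return' G(C, c): what the robot earns after inferring
    from (C, c) and then acting. Any fixed function suffices for the argument."""
    key = (frozenset(choice_set), choice)
    if key not in G:
        G[key] = random.random()
    return G[key]

def best(elems):
    """Stand-in for the human's preference order over feedback elements."""
    return min(elems)

# Symmetric 'distribution': every ordered pair of subsets whose best elements
# coincide, so the human would give the same feedback from either set.
subsets = [set(s) for r in range(1, len(master) + 1)
           for s in itertools.combinations(master, r)]
pairs = [(c1, c2) for c1, c2 in itertools.product(subsets, repeat=2)
         if best(c1) == best(c2)]

regrets = [ret(c1, best(c1)) - ret(c2, best(c2)) for c1, c2 in pairs]
print(sum(regrets) / len(regrets))   # ~0: anti-symmetric terms cancel in pairs
```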

Class Mean Std Q1 Q3
A1 0.256 0.2265 0.1153 0.4153
A2 -0.256 0.2265 -0.4153 -0.1153
Table 2: Entropy change is symmetric across classes A1 and A2.
Class Mean Std Q1 Q3
A1 0.04 0.4906 0.1664 0.0
A2 -0.04 0.4906 0.0 -0.1664
Table 3: Regret is symmetric across classes A1 and A2.

5.4 Worst Case

Class Mean Std Max Min
A3 -0.001 0.5964 1.1689 -1.1058
B2 0.228 0.6395 1.6358 -0.9973
B3 2.059 6.3767 24.7252 -0.9973
Table 4: Regret comparison showing that class B3 has much higher regret than neighboring classes.

As shown in Table 4, class B3 misspecification can induce regret an order of magnitude worse than the maximum regret induced by classes A3 and B2, which each differ from B3 along a single axis. This is because the worst-case inference in RRiC occurs when the human feedback $c_H$ is the worst element of $C_R$, and this is only possible in class B3. In class B2, $C_R$ contains all of $C_H$, so as long as $|C_H| > 1$, $C_R$ must contain at least one element worse than $c_H$. In class A3, $c^* \in C_R$ and the human chooses $c_H = c^*$, so $C_R$ cannot contain any elements better than $c_H$. However, in class B3, $C_R$ need not contain any elements worse than $c_H$, in which case the robot updates its belief in the opposite direction from the ground truth.

For example, consider the sample human choice set $C_H$ in Fig 7(a). Both trajectories are particularly poor, but the human chooses the demonstration $c_H$ in Fig 7(b) because it encounters slightly less lava and so has a marginally higher reward. Fig 8(a) shows a potential corresponding robot choice set $C_R$ from class B2, containing both trajectories from the human choice set as well as a few others. Fig 8(b) shows the resulting misspecified belief $b_{C_R}$. The axes represent the weights on the lava and goal features, and the space of possible parameterizations lies on the circle of fixed-norm weight vectors. The opacity of the gold line is proportional to the weight that $b_{C_R}$ places on each parameter combination. The peak of this distribution differs from the true reward parameterization: because $C_R$ contains shorter trajectories that encounter the same amount of lava, the robot infers that $c_H$ must be preferred in large part due to its length.

Fig 9(a) shows an example robot choice set from class B3, and Fig 9(b) shows the inferred belief $b_{C_R}$. Note that the peak of this distribution places positive weight on the lava feature. Since $c_H$ is the longest and highest-lava trajectory in $C_R$, and alternative shorter and lower-lava trajectories exist in $C_R$, the robot infers that the human is attempting to maximize both trajectory length and lava encountered: the opposite of the truth. Unsurprisingly, maximizing expected reward for this belief leads to high regret. The key difference between B2 and B3 is that in B3, $c_H$ is the lowest-reward element in $C_R$, resulting in the robot updating directly away from the true reward.

(a) Human choice set $C_H$
(b) Chosen feedback $c_H$
Figure 7: Example human choice set and corresponding feedback.
(a) Robot choice set $C_R$ (class B2)
(b) Misspecified belief $b_{C_R}$
Figure 8: Robot choice set and resulting misspecified belief in B2.
(a) Robot choice set $C_R$ (class B3)
(b) Misspecified belief $b_{C_R}$
Figure 9: Robot choice set and resulting misspecified belief in B3.

6 Discussion

Summary

In this work, we highlighted the problem of choice set misspecification in generalized reward inference, where a human gives feedback selected from choice set $C_H$ but the robot assumes that the human was choosing from a different choice set $C_R$. As expected, such misspecification on average induces suboptimal behavior, resulting in regret. However, a different story emerged once we distinguished between misspecification classes. We defined five distinct classes varying along two axes: the relationship between $C_H$ and $C_R$, and the location of the optimal element of feedback $c^*$. We empirically showed that different classes lead to different types of error, with some classes leading to overconfidence, some to underconfidence, and one to particularly high regret. Surprisingly, under certain conditions the expected regret under choice set misspecification is actually 0, meaning that in expectation, misspecification does not hurt in these situations.

Implications

There is wide variance across the different types of choice set misspecification: some may have particularly detrimental effects, and others may not be harmful at all. This suggests strategies for designing robot choice sets to minimize the impact of misspecification. For example, we find that regret tends to be negative (that is, misspecification is helpful) when the optimal element of feedback is in both $C_H$ and $C_R$ and $C_R \supset C_H$ (class A2). Similarly, worst-case inference occurs when the optimal element of feedback is in $C_R$ only and $C_H$ contains elements that are not in $C_R$ (class B3). This suggests that erring on the side of specifying a large $C_R$, which makes A2 more likely and B3 less likely, may lead to more benign misspecification. Moreover, it may be possible to design protocols for the robot to identify unrealistic choice set–feedback combinations and verify its choice set with the human, reducing the likelihood of misspecification in the first place. We plan to investigate this in future work.

Limitations and future work

In this paper, we primarily sampled choice sets randomly from the master choice set of all possibly optimal demonstrations. However, this is not a realistic model. In future work, we plan to select human choice sets based on actual human biases to improve ecological validity. We also plan to test this classification and our resulting conclusions in more complex and realistic environments. Eventually, we plan to work on active learning protocols that allow the robot to identify when its choice set is misspecified and alter its beliefs accordingly.

Acknowledgements

We thank colleagues at the Center for Human-Compatible AI for discussion and feedback. This work was partially supported by an ONR YIP.

Footnotes

  1. Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

References

  1. P. Abbeel and A. Y. Ng (2004) Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 1.
  2. A. Bajcsy, D. P. Losey, M. K. O’Malley and A. D. Dragan (2017) Learning robot objectives from physical human interaction. Proceedings of Machine Learning Research 78, pp. 217–226.
  3. A. Bobu, A. Bajcsy, J. F. Fisac, S. Deglurkar and A. D. Dragan (2020) Quantifying hypothesis space misspecification in learning from human–robot demonstrations and physical corrections. IEEE Transactions on Robotics.
  4. A. Bobu, D. R. Scobee, J. F. Fisac, S. S. Sastry and A. D. Dragan (2020) LESS is more: rethinking probabilistic models of human behavior. arXiv preprint arXiv:2001.04465.
  5. S. Choi, S. Kariv, W. Müller and D. Silverman (2014) Who is (more) rational?. American Economic Review 104 (6), pp. 1518–50.
  6. P. F. Christiano, J. Leike, T. B. Brown, M. Martic, S. Legg and D. Amodei (2017) Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pp. 4300–4308.
  7. O. Evans, A. Stuhlmueller and N. D. Goodman (2015) Learning the preferences of ignorant, inconsistent agents. arXiv preprint arXiv:1512.05832.
  8. P. Goyal, S. Niekum and R. J. Mooney (2019) Using natural language for reward shaping in reinforcement learning. arXiv preprint arXiv:1903.02020.
  9. D. Hadfield-Menell, S. Milli, P. Abbeel, S. J. Russell and A. Dragan (2017) Inverse reward design. In Advances in Neural Information Processing Systems, pp. 6765–6774.
  10. D. Hadfield-Menell, S. J. Russell, P. Abbeel and A. Dragan (2016) Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3909–3917.
  11. H. J. Jeon, S. Milli and A. D. Dragan (2020) Reward-rational (implicit) choice: a unifying formalism for reward learning. arXiv preprint arXiv:2002.04833.
  12. D. Kahneman and A. Tversky (1979) Prospect theory: an analysis of decision under risk. Econometrica 47 (2), pp. 263–292.
  13. V. Krakovna (2018) Specification gaming examples in AI. Blog post.
  14. J. Leike, D. Krueger, T. Everitt, M. Martic, V. Maini and S. Legg (2018) Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871.
  15. A. Majumdar, S. Singh, A. Mandlekar and M. Pavone (2017) Risk-sensitive inverse reinforcement learning via coherent risk models. In Robotics: Science and Systems.
  16. S. Milli and A. D. Dragan (2019) Literal or pedagogic human? Analyzing human model misspecification in objective learning. arXiv preprint arXiv:1903.03877.
  17. S. Mindermann, R. Shah, A. Gleave and D. Hadfield-Menell (2018) Active inverse reward design. arXiv preprint arXiv:1809.03060.
  18. A. Y. Ng and S. J. Russell (2000) Algorithms for inverse reinforcement learning. In International Conference on Machine Learning (ICML).
  19. S. Reddy, A. Dragan and S. Levine (2018) Where do you think you’re going?: Inferring beliefs about dynamics from behavior. In Advances in Neural Information Processing Systems, pp. 1454–1465.
  20. D. Sadigh, A. D. Dragan, S. Sastry and S. A. Seshia (2017) Active preference-based learning of reward functions. In Robotics: Science and Systems.
  21. R. Shah, D. Krasheninnikov, J. Alexander, P. Abbeel and A. Dragan (2019) Preferences implicit in the state of the world. arXiv preprint arXiv:1902.04198.
  22. J. Steinhardt and O. Evans (2017) Model mis-specification and inverse reinforcement learning. Blog post.
  23. K. Waugh, B. D. Ziebart and J. A. Bagnell (2013) Computational rationalization: the inverse equilibrium problem. arXiv preprint arXiv:1308.3506.
  24. B. D. Ziebart (2010) Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Ph.D. thesis, Carnegie Mellon University.