Model-based Policy Search for Partially Measurable Systems

Abstract

In this paper, we propose a Model-Based Reinforcement Learning (MBRL) algorithm for Partially Measurable Systems (PMS), i.e., systems where the state cannot be directly measured, but must be estimated through proper state observers. The proposed algorithm, named Monte Carlo Probabilistic Inference for Learning COntrol for Partially Measurable Systems (MC-PILCO4PMS), relies on Gaussian Processes (GPs) to model the system dynamics, and on a Monte Carlo approach to update the policy parameters. W.r.t. previous GP-based MBRL algorithms, MC-PILCO4PMS explicitly models the presence of state observers during policy optimization, allowing it to deal with PMS. The effectiveness of the proposed algorithm has been tested both in simulation and on two real systems.

1 Introduction

Reinforcement Learning (RL) [16] has achieved outstanding results in many different environments. MBRL algorithms seem a promising solution to apply RL to real systems, due to their data-efficiency w.r.t. model-free RL algorithms. In particular, remarkable results have been obtained relying on Gaussian Processes (GPs) [17] to model the system dynamics, see for instance [5, 13, 3, 14, 15]. In this paper, we consider the application of MBRL algorithms to PMS, i.e., systems where only a subset of the state components can be directly measured, and the remaining components must be estimated through proper state observers. PMS are particularly relevant in real-world applications: consider, for example, mechanical systems, where, typically, only positions are measured, while velocities are estimated through numerical differentiation or more complex filters. The proposed algorithm, named MC-PILCO4PMS, relies on Gaussian Processes (GPs) to model the system dynamics, and on a Monte Carlo approach [1] to optimize the policy parameters. W.r.t. previous GP-based MBRL algorithms, such as [5, 13, 3, 4], MC-PILCO4PMS explicitly models the presence of two different state observers during the two phases of model learning and of policy optimization. This improves the characterization of the PMS in the two phases, and thus the control performance. In the following, we provide a description of the proposed algorithm, assuming that it is applied to mechanical systems where only position measurements are available. However, the algorithm generalizes to any PMS.

2 Problem Setting

Consider a mechanical system with $d$ degrees of freedom, and denote with $x_t = [q_t^T, \dot{q}_t^T]^T$ its state at time $t$, where $q_t$ and $\dot{q}_t$ are, respectively, the vector of the generalized coordinates and its derivative w.r.t. time. Assume that the joint positions $q_t$ can be directly measured, while $\dot{q}_t$ must be estimated from the history of measurements. Moreover, let the system be Markovian, and describe its discrete-time dynamics as $x_{t+1} = f(x_t, u_t) + w_t$, where $f$ is an unknown transition function, $u_t$ represents the system input, and $w_t$ models uncertainty. The objective of RL algorithms is learning to accomplish a given task based on interaction data. The task is encoded in a cost function $c(x_t)$, defined to characterize the immediate penalty for being in state $x_t$. The system inputs are chosen in accordance with a policy $\pi_{\theta}$ that depends on the parameter vector $\theta$. Then, the objective is to find the policy that minimizes the expected cumulative cost over a finite number of time steps $T$, with initial state distribution $p(x_0)$, i.e., $J(\theta) = \mathbb{E}\left[\sum_{t=0}^{T} c(x_t)\right]$.
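As a concrete sketch, the expected cumulative cost objective can be approximated by averaging simulated rollouts. The dynamics, cost, policy, and initial-state sampler below are toy stand-ins, not the ones used in the paper:

```python
import numpy as np

def estimate_J(policy, step, cost, sample_x0, T, M, rng):
    """Monte Carlo estimate of J(theta) = E[ sum_{t=0}^{T} c(x_t) ]
    computed by averaging over M simulated rollouts of length T."""
    total = 0.0
    for _ in range(M):
        x = sample_x0(rng)
        total += cost(x)
        for _ in range(T):
            x = step(x, policy(x), rng)
            total += cost(x)
    return total / M

# Toy 1D integrator x_{t+1} = x_t + u_t (noise-free for reproducibility):
rng = np.random.default_rng(0)
J = estimate_J(policy=lambda x: -x,
               step=lambda x, u, rng: x + u,
               cost=lambda x: x**2,
               sample_x0=lambda rng: 1.0,
               T=5, M=3, rng=rng)
print(J)  # the policy drives x to 0 after one step, so J = c(x0) = 1.0
```

In the actual algorithm the rollouts are simulated through the learned GP dynamics, as described in Section 3.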

3 Method

MC-PILCO4PMS consists of the iteration of three phases: (i) model learning, (ii) policy optimization, and (iii) policy execution. In the first phase, MC-PILCO4PMS relies on Gaussian Process Regression (GPR) to estimate the one-step-ahead system dynamics, while, for the optimization of the policy parameters, MC-PILCO4PMS implements a gradient-based strategy. In the following, we briefly discuss the first two phases.

3.1 Model Learning

Dynamics model. The proposed one-step-ahead GP model exploits the intrinsic correlation between positions and velocities. In our algorithm, a distinct GP model is learned to predict each velocity change, while positions are obtained by integration. This approach differs from previous GP-based MBRL algorithms, such as [5, 13, 3], which learn one independent model for each state component.
Let us indicate the components of $q_t$ and $\dot{q}_t$ with $q_t^{(i)}$ and $\dot{q}_t^{(i)}$, respectively, where $i = 1, \dots, d$. Then, let $\Delta_t^{(i)} = \dot{q}_{t+1}^{(i)} - \dot{q}_t^{(i)}$ be the difference between the value of the $i$-th velocity at time $t+1$ and $t$, and $y_t^{(i)}$ the noisy measurement of $\Delta_t^{(i)}$. For each velocity component $i$, we model $\Delta_t^{(i)}$ with a distinct GP with zero mean and kernel function $k(\cdot, \cdot)$, which takes as input $\tilde{x}_t = [x_t^T, u_t^T]^T$. Details on the kernel choice can be found in Appendix 6.1. In GPR, the posterior distribution of $\Delta_t^{(i)}$ given the data is Gaussian, with mean and covariance available in closed form, see [17]. Then, given the GP input $\tilde{x}_t$, a prediction $\hat{\Delta}_t^{(i)}$ of the velocity change can be sampled from the aforementioned posterior distribution. When considering a sufficiently small sampling time $T_s$, it is reasonable to assume constant accelerations between two consecutive time-steps, and the predicted positions and velocities are obtained with the following equations, $\dot{q}_{t+1}^{(i)} = \dot{q}_t^{(i)} + \hat{\Delta}_t^{(i)}$ and $q_{t+1}^{(i)} = q_t^{(i)} + T_s \dot{q}_t^{(i)} + \frac{T_s}{2} \hat{\Delta}_t^{(i)}$, for $i = 1, \dots, d$.
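The integration step above can be sketched in a few lines; here `delta` stands for a velocity change sampled from the GP posterior, and the constant-acceleration assumption gives the position update:

```python
def integrate_step(q, dq, delta, Ts):
    """One-step state update given a sampled velocity change delta = a*Ts.
    Under the constant-acceleration assumption:
      dq_next = dq + delta
      q_next  = q + Ts*dq + (Ts/2)*delta
    """
    dq_next = dq + delta
    q_next = q + Ts * dq + 0.5 * Ts * delta
    return q_next, dq_next

q1, dq1 = integrate_step(q=0.0, dq=1.0, delta=0.2, Ts=0.1)
print(q1, dq1)  # approx 0.11 and 1.2
```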

Training data computation. As described before, velocities are not accessible and have to be estimated from measurements of positions. Notice that the velocity estimates used to train the GP models can be computed offline, exploiting the (past and future) history of measurements to improve accuracy. Well-filtered data that resemble the real states of the system significantly improve the adherence between the learned model and the real system. In our experiments, we computed offline the velocities used to train the GPs, using, for example, the central difference formula, i.e., $\dot{q}_t = (q_{t+1} - q_{t-1}) / (2 T_s)$, which is an acausal filter. We would like to underline that these state estimates are different from the ones computed in real time and provided to the control policy during system interaction. Typically, due to real-time constraints, online estimates are less accurate, and it is fundamental to take this into account during policy optimization, as discussed in the following.
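The difference between the offline (acausal) and online (causal) estimates can be illustrated with simple finite differences on a hypothetical position signal:

```python
def central_difference(q, Ts):
    """Offline (acausal) velocity estimate: uses the future sample q[k+1]."""
    return [(q[k + 1] - q[k - 1]) / (2 * Ts) for k in range(1, len(q) - 1)]

def backward_difference(q, Ts):
    """Online (causal) velocity estimate: only past samples are available."""
    return [(q[k] - q[k - 1]) / Ts for k in range(1, len(q))]

q = [0.0, 1.0, 4.0, 9.0]           # positions q_k = k^2 sampled at Ts = 1
print(central_difference(q, 1.0))   # [2.0, 4.0]: exact derivative 2k at k=1,2
print(backward_difference(q, 1.0))  # [1.0, 3.0, 5.0]: lagged by half a step
```

On this quadratic signal the acausal estimate is exact, while the causal one is systematically delayed, which is precisely the kind of distortion MC-PILCO4PMS accounts for during policy optimization.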

3.2 Policy optimization

MC-PILCO4PMS optimizes the policy parameters $\theta$ with a gradient-based strategy. At each optimization step, the algorithm performs the following operations: (i) approximation of the cumulative cost relying on a Monte Carlo approximation; (ii) computation of the gradient and update of $\theta$. More precisely, the algorithm samples $M$ particles from the initial state distribution $p(x_0)$, and simulates their evolution for $T$ steps. At each simulation step, the inputs are selected in accordance with the current policy, and the next state is predicted with the GP models previously described. This procedure models the propagation of the model uncertainty for long-term predictions. Then, the Monte Carlo estimate of the cumulative cost is $\hat{J}(\theta) = \frac{1}{M} \sum_{m=1}^{M} \sum_{t=0}^{T} c(x_t^{(m)})$, where $x_t^{(m)}$ denotes the state of the $m$-th particle at time $t$. The gradient is computed by backpropagation on the computational graph of $\hat{J}(\theta)$, exploiting the reparametrization trick [9] to propagate the gradient through the stochastic operations, i.e., the sampling from the GP posterior distributions. Advantages of MC-based long-term predictions w.r.t., e.g., moment matching [5] are that no assumptions have to be made on the state distribution or on the kernel function in the GP models. The policy parameters are updated using the Adam solver [8]. In the remainder of this section, we describe the particles simulation, which is the main novelty introduced to deal with PMS.
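A minimal sketch of the reparametrization trick used when sampling from the GP posteriors (the posterior mean and variance functions here are hypothetical stand-ins):

```python
import numpy as np

def sample_next_state(mean_fn, var_fn, x, u, eps):
    """Reparametrized draw from a (scalar) GP posterior: with
    eps ~ N(0, 1) drawn independently of theta, the sample
    mu + sigma*eps is a deterministic, differentiable function of the
    policy parameters, so gradients can flow through it."""
    mu = mean_fn(x, u)
    sigma = np.sqrt(var_fn(x, u))
    return mu + sigma * eps

# Hypothetical posterior with mean x + u and variance 4:
x_next = sample_next_state(lambda x, u: x + u, lambda x, u: 4.0,
                           x=1.0, u=2.0, eps=0.5)
print(x_next)  # 3 + 2*0.5 = 4.0
```

In practice this is done with an automatic differentiation framework, so that the gradient of the Monte Carlo cost estimate w.r.t. the policy parameters is obtained by backpropagation.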

Particles simulation with PMS. In order to deal with PMS, we simulate not only the evolution of the system state, but also the evolution of the observed states, modeling the measurement system and the online state observers implemented in the real system. A block scheme of the particles generation is reported in Fig. 1. Let $x_t^{(m)} = [q_t^{(m)T}, \dot{q}_t^{(m)T}]^T$ be the state of the $m$-th particle at the simulation step $t$, predicted by the GP models. In order to transform the prediction of the system state into the observed state, firstly, we simulate the measurement system by corrupting the positions with a zero-mean Gaussian i.i.d. noise $e_t^{(m)}$: $\bar{q}_t^{(m)} = q_t^{(m)} + e_t^{(m)}$. Secondly, the measured states are used to compute an estimate of the observed states: $z_t^{(m)} = f_{obs}(\bar{q}_t^{(m)}, \dots, \bar{q}_{t-m_q}^{(m)}, z_{t-1}^{(m)}, \dots, z_{t-m_z}^{(m)})$, where $f_{obs}$ is the online state observer implemented in the real system, with memory $m_q$ and $m_z$. Finally, the control inputs for each particle are computed as $u_t^{(m)} = \pi_{\theta}(z_t^{(m)})$, the next particles states are sampled from the GP dynamics, and the procedure is iterated. The procedure aims at obtaining robustness w.r.t. delays and distortions introduced by measurement noise and online observers. Notice that selecting the inputs as $u_t^{(m)} = \pi_{\theta}(x_t^{(m)})$, as done in several previous MBRL algorithms, is equivalent to assuming full access to the system state,

Figure 1: Block schemes illustrating particles generation in MC-PILCO4PMS.

which is often an unrealistic assumption when dealing with real systems, since the difference between the system state and the observed state might be significant. This is a key differentiation of our method. Let us denote with MC-PILCO the version of the proposed algorithm which assumes full access to the system state during policy optimization. A numerical comparison between MC-PILCO and two state-of-the-art GP-based MBRL algorithms is reported in Appendix 6.3. The results obtained show that MC-PILCO outperforms both algorithms in terms of data-efficiency and accuracy.
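The per-particle measurement corruption and online observation described above can be sketched for a scalar position. The observer here (causal differentiation followed by a first-order low-pass filter) is an illustrative assumption, standing in for whatever observer runs on the real system:

```python
import numpy as np

def particle_observed_step(q_meas_prev, q_particle, z_prev, Ts, sigma, rng,
                           alpha=0.9):
    """One step of observed-state simulation for a single particle:
    corrupt the GP-predicted position with measurement noise, then run a
    causal observer (finite difference + first-order low-pass filter)."""
    q_meas = q_particle + rng.normal(0.0, sigma)       # simulated measurement
    dq_raw = (q_meas - q_meas_prev) / Ts               # causal differentiation
    dq_hat = alpha * z_prev[1] + (1 - alpha) * dq_raw  # low-pass filtering
    z = np.array([q_meas, dq_hat])                     # observed state for pi
    return z, q_meas

rng = np.random.default_rng(0)
z, _ = particle_observed_step(q_meas_prev=0.0, q_particle=0.1,
                              z_prev=np.array([0.0, 1.0]),
                              Ts=0.1, sigma=0.0, rng=rng)
print(z)  # with zero noise the observer recovers the true state [0.1, 1.0]
```

The policy input is the observed state `z`, not the particle state itself, which is what makes the optimized policy robust to the observer's delays and distortions.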

4 Experiments

MC-PILCO4PMS has been tested both in simulation and in real systems. First, we validate in simulation the impact of taking into consideration the measurement system and the online filter during particle simulation. Second, MC-PILCO4PMS has been successfully applied to two real systems: a Furuta pendulum and a Ball-and-Plate system. Further details about the implementation of the algorithm on the presented systems can be found in Appendices 6.2, 6.4, 6.5.

Simulation as proof of concept. Here, we test the relevance of modeling the presence of online observers on a simulated cart-pole system. The objective is to learn a policy able to swing up the pole and stabilize it in the upwards equilibrium, while keeping the cart stationary. We assumed to be able to measure only the cart position and the pole angle. The online estimates of the velocities were computed by means of causal numerical differentiation followed by a first-order low-pass filter. The velocities used to train the GPs were derived with the central difference formula. Two policy functions have been trained: the first has been derived with MC-PILCO, assuming direct access to the full state predicted by the model; the second policy has been derived using MC-PILCO4PMS. In Figure 2, we report the results of a Monte Carlo study with 400 runs. Even though the two policies perform similarly when applied to the learned models, the results obtained with the cart-pole system are significantly different. MC-PILCO4PMS solves the task in all 400 attempts. In contrast, in several attempts, the MC-PILCO policy does not solve the task, due to delays and discrepancies introduced by the online filter and not considered during policy optimization.

Figure 2: Comparison of 400 simulated particle rollouts (left) and the trajectories obtained by applying the policy 400 times on the simulated cart-pole system (right). MC-PILCO results are in the top plots, while MC-PILCO4PMS results are in the bottom plots.
Figure 3: Trajectories for the pendulum angle (left) and arm angle (right) obtained at each trial. For all the kernels, the angles are plotted up to the trial that solved the task.
Figure 4: Ten different ball trajectories obtained on the Ball-and-Plate under the final policy learned by MC-PILCO4PMS. Steady-state positions are marked with black crosses. The dashed circle has the same diameter as the ball used.

Furuta Pendulum. The Furuta pendulum [2] is a popular under-actuated benchmark system consisting of a driven arm, revolving in the horizontal plane, with a pendulum attached to its end, which rotates in the vertical plane. Let $\theta^h_t$ be the horizontal angle of the arm, and $\theta^v_t$ the vertical angle of the pendulum. The objective is to learn a controller able to swing up the pendulum and stabilize it in the upwards equilibrium ($\theta^v_t = \pm\pi$ [rad]) with $\theta^h_t = 0$ [rad]. Offline estimates of the velocities for the GP model have been computed by means of central differences. Causal numerical differentiation was used for the online estimation. MC-PILCO4PMS managed to solve the task using the three different choices of kernel functions presented in Appendix 6.1. In Figure 3, we show the resulting trajectories for each trial. These experiments show the effectiveness of MC-PILCO4PMS and confirm the higher data efficiency of more structured kernels, which is one of the advantages that MC-PILCO4PMS offers by allowing any kernel function, whereas in methods like PILCO the kernel choice is limited. To the best of our knowledge, with 9 [s] of training data, this algorithm is the most data-efficient to solve a Furuta pendulum.

Ball-and-Plate. The ball-and-plate system is composed of a square plate that can tilt in two orthogonal directions by means of two motors. On top of it, there is a camera to track the ball and measure its position on the plate. The objective of the experiment is to learn how to control the motor angles in order to stabilize the ball around the center of the plate. Measurements provided by the camera are very noisy and cannot be used directly to estimate velocities from positions. We used a Kalman smoother [6] for the offline filtering of the ball positions and associated velocities, while, in real time, we used a Kalman filter [7] to estimate online the ball state from the noisy measures of the positions. MC-PILCO4PMS learned a policy able to stabilize the ball around the center starting from any initial position after the third trial, i.e., after 11.33 [s] of interaction with the system. We tested the learned policy starting from ten different points, see Figure 4. The mean steady-state error, i.e., the average distance of the final ball position from the center observed in the ten trials, was 0.0099 [m], while the maximum measured error was 0.0149 [m], which is lower than the ball radius of 0.016 [m].

5 Conclusions

We have presented an MBRL algorithm, called MC-PILCO4PMS, which does not assume that all the components of the state can be measured, and we successfully applied it to robotic systems. The algorithm employs GPs to derive a probabilistic model of the system dynamics. Policy parameters are updated through a Monte Carlo gradient-based strategy: the expected cumulative cost is estimated by averaging over hundreds of simulated rollouts, and the policy gradient is computed by backpropagation on the resulting computational graph. We showed the importance of manipulating the measurements both to provide accurate state estimates to the model learning algorithm and to reproduce the measurement system together with the online state observer during policy optimization.

6 Appendix

6.1 Kernel functions

One of the advantages of the particle-based policy optimization method is the possibility of choosing any kernel function without restrictions. Hence, we considered different kernel functions as examples to model the evolution of physical systems, but the reader can adopt any custom kernel function appropriate for their application.

Squared exponential (SE). The SE kernel represents the standard choice adopted in many different works. It is defined as

$$k_{SE}(\tilde{x}_i, \tilde{x}_j) = \lambda^2 \exp\left( -(\tilde{x}_i - \tilde{x}_j)^T \Lambda^{-1} (\tilde{x}_i - \tilde{x}_j) \right), \qquad (1)$$

where the scaling factor $\lambda^2$ and the matrix $\Lambda$ are kernel hyperparameters, which can be estimated by marginal likelihood maximization. Typically, $\Lambda$ is assumed to be diagonal, with the diagonal elements named lengthscales.
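A direct implementation of the SE kernel, with a diagonal matrix expressed through per-dimension lengthscales, might look as follows (all hyperparameter values are illustrative):

```python
import numpy as np

def k_se(x1, x2, lam, lengthscales):
    """SE kernel: lam^2 * exp(-(x1-x2)^T Lambda^{-1} (x1-x2)),
    with diagonal Lambda parameterized by per-dimension lengthscales."""
    d = (np.asarray(x1) - np.asarray(x2)) / lengthscales
    return lam**2 * np.exp(-np.dot(d, d))

print(k_se([1.0, 2.0], [1.0, 2.0], lam=2.0, lengthscales=np.array([1.0, 0.5])))
# 4.0 at zero distance; the value decays as the inputs move apart
```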

SE + Polynomial (SE+). Recalling that the sum of kernels is still a kernel [17], we also considered a kernel given by the sum of an SE and a polynomial kernel. In particular, we used the Multiplicative Polynomial (MP) kernel, a refinement of the standard polynomial kernel, introduced in [11]. The MP kernel of degree $p$ is defined as the product of $p$ linear kernels, namely,

$$k_{MP}^{(p)}(\tilde{x}_i, \tilde{x}_j) = \prod_{r=1}^{p} \left( \sigma_r^2 + \tilde{x}_i^T \Sigma_r \tilde{x}_j \right),$$

where the matrices $\Sigma_r$ are distinct diagonal matrices. The diagonal elements of the $\Sigma_r$, together with the elements $\sigma_r^2$, are the kernel hyperparameters. The resulting kernel is

$$k_{SE+}(\tilde{x}_i, \tilde{x}_j) = k_{SE}(\tilde{x}_i, \tilde{x}_j) + k_{MP}^{(p)}(\tilde{x}_i, \tilde{x}_j). \qquad (2)$$

The idea motivating this choice is the following: the MP kernel allows capturing possible modes of the system that are polynomial functions in $\tilde{x}$, which are typical in mechanical systems [10], while the SE kernel models more complex behaviors not captured by the polynomial kernel.
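The MP kernel and its combination with the SE kernel can be sketched as follows; all hyperparameter values are illustrative:

```python
import numpy as np

def k_mp(x1, x2, sigma2s, Sigma_diags):
    """MP kernel of degree p: product of p linear kernels
    (sigma_r^2 + x1^T Sigma_r x2), with diagonal Sigma_r."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    val = 1.0
    for s2, S in zip(sigma2s, Sigma_diags):
        val *= s2 + np.dot(x1 * S, x2)
    return val

def k_se_plus(x1, x2, lam, lengthscales, sigma2s, Sigma_diags):
    """SE+ kernel: sum of an SE and an MP kernel."""
    d = (np.asarray(x1) - np.asarray(x2)) / lengthscales
    return lam**2 * np.exp(-np.dot(d, d)) + k_mp(x1, x2, sigma2s, Sigma_diags)

# Degree-2 MP kernel on 2D inputs:
v = k_mp([1.0, 2.0], [1.0, 2.0], sigma2s=[1.0, 1.0],
         Sigma_diags=[np.ones(2), np.ones(2)])
print(v)  # (1 + 5) * (1 + 5) = 36.0
```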

Semi-Parametrical (SP). When prior knowledge about the system dynamics is available, for example given by physics first principles, the so-called physically inspired (PI) kernel can be derived. The PI kernel is a linear kernel defined on suitable basis functions $\phi(\tilde{x})$, see for instance [14]. More precisely, $\phi(\tilde{x})$ is a, possibly nonlinear, transformation of the GP input $\tilde{x}$ determined by the physical model. Then we have

$$k_{PI}(\tilde{x}_i, \tilde{x}_j) = \phi(\tilde{x}_i)^T \Sigma_{PI} \, \phi(\tilde{x}_j),$$

where $\Sigma_{PI}$ is a positive-definite matrix, whose elements are the hyperparameters; to limit the number of hyperparameters, a standard choice consists in considering $\Sigma_{PI}$ to be diagonal. To compensate for possible inaccuracies of the physical model, it is common to combine $k_{PI}$ with an SE kernel, obtaining the so-called semi-parametric kernel [12, 14], expressed as

$$k_{SP}(\tilde{x}_i, \tilde{x}_j) = k_{PI}(\tilde{x}_i, \tilde{x}_j) + k_{SE}(\tilde{x}_i, \tilde{x}_j).$$

The rationale behind this kernel is the following: $k_{PI}$ encodes the prior information given by the physics, and $k_{SE}$ compensates for the dynamical components unmodeled in $\phi(\tilde{x})$.
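A sketch of the PI and SP kernels, using hypothetical physics-based basis functions (the feature map below is invented for illustration, not derived from a real model):

```python
import numpy as np

def k_pi(x1, x2, phi, Sigma_diag):
    """Physically inspired kernel: phi(x1)^T Sigma_PI phi(x2),
    with diagonal Sigma_PI."""
    return float(np.dot(phi(x1) * Sigma_diag, phi(x2)))

def k_sp(x1, x2, phi, Sigma_diag, lam, lengthscales):
    """Semi-parametric kernel: k_PI + k_SE."""
    d = (np.asarray(x1) - np.asarray(x2)) / lengthscales
    return k_pi(x1, x2, phi, Sigma_diag) + lam**2 * np.exp(-np.dot(d, d))

# Hypothetical basis functions phi(x) = [x, x^2] for a scalar input:
phi = lambda x: np.array([x[0], x[0]**2])
v = k_pi([2.0], [2.0], phi, Sigma_diag=np.ones(2))
print(v)  # 2*2 + 4*4 = 20.0
```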

6.2 Simulated Cart-pole

The physical properties of the cart-pole system considered are the following: the masses of both cart and pole are 0.5 [kg], the length of the pole is [m], and the coefficient of friction between cart and ground is 0.1. The state at each time step $t$ is defined as $x_t = [p_t, \dot{p}_t, \theta_t, \dot{\theta}_t]^T$, where $p_t$ represents the position of the cart and $\theta_t$ the angle of the pole. The target state corresponding to the swing-up of the pendulum is given by $p = 0$ [m] and $\theta = \pm\pi$ [rad]. The downward stable equilibrium point is defined at $\theta = 0$ [rad]. As done in [5], in order to avoid singularities due to the angles, $\theta_t$ is replaced with the representation $(\sin(\theta_t), \cos(\theta_t))$ inside the GP inputs. For the GP models, SE kernels (1) have been chosen. The control action is the force that pushes the cart horizontally. We considered white measurement noise on the positions, with the initial state distributed around the downward equilibrium. In order to obtain reliable estimates of the velocities, samples were collected at 30 [Hz]. The number of particles has been set to $M = 400$ in all the tests.

The cost function optimized in MC-PILCO is the following,

$$c(x_t) = 1 - \exp\left( -\left( \frac{|\theta_t| - \pi}{l_{\theta}} \right)^2 - \left( \frac{p_t}{l_p} \right)^2 \right), \qquad (3)$$

where $l_{\theta}$ and $l_p$ are named lengthscales. Notice that the lengthscales define the shape of $c(x_t)$: the cost function goes to its maximum value more rapidly with small lengthscales. Therefore, a higher cost is associated with the same distance from the target state when $l_{\theta}$ and $l_p$ are lower; the lower the lengthscale, the more selective the cost function. The absolute value on $\theta_t$ is needed to allow different swing-up solutions to both the equivalent target angles of the pole, $\pi$ and $-\pi$. The lengthscales $l_{\theta}$ and $l_p$ were tuned accordingly.
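The saturated cost can be implemented directly; the lengthscale values in the example call are illustrative:

```python
import numpy as np

def cartpole_cost(theta, p, l_theta, l_p):
    """Saturated cost of the form 1 - exp(-((|theta|-pi)/l_theta)^2 - (p/l_p)^2):
    zero at the targets (|theta| = pi, p = 0), saturating to 1 far from them;
    smaller lengthscales make the cost more selective."""
    return 1.0 - np.exp(-((abs(theta) - np.pi) / l_theta) ** 2
                        - (p / l_p) ** 2)

print(cartpole_cost(np.pi, 0.0, l_theta=3.0, l_p=1.0))   # 0.0 at the target
print(cartpole_cost(-np.pi, 0.0, l_theta=3.0, l_p=1.0))  # 0.0: both swing-up
                                                         # directions are valid
```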

The policy adopted is an RBF network with outputs limited by a hyperbolic tangent function, properly scaled. We call this function squashed-RBF-network, and it is defined as

$$\pi_{\theta}(x) = u_{max} \tanh\left( \frac{1}{u_{max}} \sum_{i=1}^{n_b} w_i \, e^{-\|a_i - x\|^2_{\Sigma_{\pi}}} \right), \qquad (4)$$

whose parameters are $\theta = \{w, A, \Sigma_{\pi}\}$, where $w = [w_1, \dots, w_{n_b}]$ and $A = \{a_1, \dots, a_{n_b}\}$ are, respectively, the weights and the centers of the $n_b$ Gaussian basis functions, while $\Sigma_{\pi}$ determines their shapes; in all experiments, we assumed $\Sigma_{\pi}$ to be diagonal. $u_{max}$ is the maximum control action applicable. For this experiment, the number of basis functions $n_b$ and the maximum force $u_{max}$ [N] were fixed a priori. The exploration trajectory is obtained by applying a random control action at each time step.
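A sketch of the squashed-RBF-network; the parameter values below are illustrative, not the ones learned in the experiment:

```python
import numpy as np

def squashed_rbf_policy(x, weights, centers, inv_ls_sq, u_max):
    """Squashed-RBF-network: an RBF network whose output is squashed
    by tanh, so that |u| <= u_max for any parameter values."""
    diff = x - centers                                   # (n_b, dim)
    rbf = np.exp(-np.sum(diff**2 * inv_ls_sq, axis=1))   # Gaussian basis
    return u_max * np.tanh(weights @ rbf / u_max)

# One basis function centered exactly at the current state, huge weight:
u = squashed_rbf_policy(x=np.array([0.0]),
                        weights=np.array([1000.0]),
                        centers=np.array([[0.0]]),
                        inv_ls_sq=np.array([1.0]),
                        u_max=10.0)
print(u)  # saturates near u_max = 10 despite the large weight
```

The tanh squashing guarantees that the optimized policy never commands an input beyond the actuator limits, regardless of the learned weights.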

6.3 Comparison with state of the art algorithms

            Trial 1  Trial 2  Trial 3  Trial 4  Trial 5
PILCO          2%       4%      20%      36%      42%
Black-DROPS    0%       4%      30%      68%      86%
MC-PILCO       0%      14%      78%      94%     100%

Table 1: Success rate per trial obtained with PILCO, Black-DROPS and MC-PILCO.

We tested PILCO [5], Black-DROPS [3] and MC-PILCO on the simulated cart-pole system.

Figure 5: Median and confidence interval of the cumulative cost per trial obtained with PILCO, Black-DROPS and MC-PILCO.

The setup is equal to the one described in Appendix 6.2, with the only two differences being that here the samples are collected at 20 [Hz] and the measurement noise has a different standard deviation. In PILCO and Black-DROPS, we considered their original cost function,

$$c(x_t) = 1 - \exp\left( -\frac{d_t^2}{2\sigma_c^2} \right), \qquad (5)$$

where $d_t^2$ is the squared distance between the tip of the pole and its position at the unstable equilibrium point, and $\sigma_c$ is a fixed lengthscale. This last cost is also adopted as a common metric to compare the results obtained by the three algorithms. Results of the cumulative cost are reported in Figure 5, and the observed success rates are shown in Table 1. MC-PILCO achieved the best performance both in the transient and at convergence: by trial 5, it learned how to swing up the cart-pole with a success rate of 100%. In each and every trial, MC-PILCO obtained cumulative costs with lower median and less variability. On the other hand, the policy learned by PILCO showed poor convergence properties, with only a 42% success rate after all 5 trials. Black-DROPS outperforms PILCO, but it obtained worse results than MC-PILCO in each and every trial, with a success rate of only 86% at trial 5.

6.4 Furuta Pendulum

The Furuta pendulum (FP) [2] is a popular benchmark system used in nonlinear control and reinforcement learning. The system is composed of two revolute joints and three links.

Figure 6: Furuta pendulum used in the experiment while being controlled in the upward equilibrium point by the learned policy.

The first link, called the base link, is fixed and perpendicular to the ground. The second link, called the arm, rotates parallel to the ground, while the rotation axis of the last link, the pendulum, is parallel to the principal axis of the second link, see Figure 6. The FP is an under-actuated system, as only the first joint is actuated. In particular, in the FP considered, the horizontal joint is actuated by a DC servomotor, and the two angles are measured by optical encoders with 4096 [ppr]. The control variable is the motor voltage. Let the state at time step $t$ be $x_t = [\theta^h_t, \dot{\theta}^h_t, \theta^v_t, \dot{\theta}^v_t]^T$, where $\theta^h_t$ is the angle of the horizontal joint and $\theta^v_t$ the angle of the vertical joint attached to the pendulum. The objective is to learn a controller able to swing up the pendulum and stabilize it in the upwards equilibrium ($\theta^v_t = \pm\pi$ [rad]) with $\theta^h_t = 0$ [rad]. The trial length is 3 [s] with a sampling frequency of 30 [Hz]. The cost function is defined as

$$c(x_t) = 1 - \exp\left( -\left( \frac{|\theta^v_t| - \pi}{l_v} \right)^2 - \left( \frac{\theta^h_t}{l_h} \right)^2 \right) + c_b(x_t), \qquad (6)$$

with $c_b(x_t)$ an additional penalty term on the arm angle. The first part of the function in (6) aims at driving the two angles towards $\theta^v_t = \pm\pi$ and $\theta^h_t = 0$, while $c_b(x_t)$ penalizes solutions where $\theta^h_t$ exceeds given boundaries. We set those boundaries to avoid the risk of damaging the system if the horizontal joint rotates too much. Offline estimates of the velocities for the GP model have been computed by means of central differences. For the online estimation, we used causal numerical differentiation: $\dot{q}_t = (q_t - q_{t-1}) / T_s$, where $T_s$ is the sampling time. Instead of $x_t$, we considered the extended state, with the angles replaced by their sines and cosines, in the GP input. The policy is a squashed-RBF-network that receives this extended state as input. We used 400 particles to estimate the policy gradient from model predictions. The exploration trajectory has been obtained using as input a sum of ten sine waves with random frequencies and equal amplitudes. The initial state distribution is assumed to be Gaussian, centered on the downward equilibrium.

6.5 Ball-and-Plate

The ball-and-plate system is composed of a square plate that can be tilted in two orthogonal directions by means of two motors. On top of it, there is a camera to track the ball and measure its position on the plate. Let $(b^x_t, b^y_t)$ be the position of the center of the ball along the X-axis and the Y-axis, while $\alpha^{(1)}_t$ and $\alpha^{(2)}_t$ are the angles of the two motors tilting the plate, at time $t$. So, the state of the system is defined as $x_t = [b^x_t, \dot{b}^x_t, b^y_t, \dot{b}^y_t, \alpha^{(1)}_t, \alpha^{(2)}_t]^T$. The drivers of the motors allow only position control, and do not provide feedback about the motor angles. To keep track of the motor angles, we defined the control actions as the difference between two consecutive reference values sent to the motor controllers, i.e., $u_t = [\Delta\alpha^{(1)}_t, \Delta\alpha^{(2)}_t]$, and we limited the maximum input to a sufficiently small value, such that the motor controllers are able to reach the target angle within the sampling time. Then, to a first approximation, the reference angles and the motor angles coincide, and we have $\alpha^{(1)}_{t+1} = \alpha^{(1)}_t + \Delta\alpha^{(1)}_t$ and $\alpha^{(2)}_{t+1} = \alpha^{(2)}_t + \Delta\alpha^{(2)}_t$. The objective of the experiment is to learn how to control the motor angles in order to stabilize the ball around the center of the plate. Notice that the control task, with the given definition of inputs, is particularly difficult because the policy must learn to act in advance, and not only react to changes in the ball position.

The cost function is defined, analogously to (3), as a saturated function of the ball's distance from the center of the plate. The trial length is 3 [s], with a sampling frequency of 30 [Hz]. Measurements provided by the camera are very noisy and cannot be used directly to estimate velocities from positions. We used a Kalman smoother for the offline filtering of the ball positions $(b^x_t, b^y_t)$ and the associated velocities. In the control loop, instead, we used a Kalman filter [7] to estimate online the ball state from the noisy measures of the positions. Concerning the model, we need to learn only two GPs, predicting the evolution of the ball velocities: since we directly control the motor angles, their evolution is assumed deterministic. The GP inputs include an extended version of the state, where angles have been replaced by their sines and cosines, and the motor angular velocities have been estimated with causal numerical differentiation, $\dot{\alpha}_t = (\alpha_t - \alpha_{t-1}) / T_s$, where $T_s$ is the sampling time.
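For completeness, a generic one-dimensional constant-velocity Kalman filter (a textbook sketch, not the exact filter tuned for the plate) shows how position and velocity are jointly estimated from noisy position measurements:

```python
import numpy as np

def kalman_cv_step(z, x, P, Ts, q_var, r_var):
    """One predict/update step of a constant-velocity Kalman filter that
    estimates [position, velocity] from a noisy position measurement z."""
    F = np.array([[1.0, Ts], [0.0, 1.0]])            # constant-velocity model
    Q = q_var * np.array([[Ts**3 / 3, Ts**2 / 2],
                          [Ts**2 / 2, Ts]])          # process noise
    H = np.array([[1.0, 0.0]])                       # only position is measured
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r_var                          # innovation covariance
    K = P @ H.T / S                                  # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a noiseless constant-velocity trajectory: the velocity estimate
# converges without any explicit differentiation of the positions.
Ts, v_true = 0.1, 1.0
x, P = np.zeros(2), np.eye(2)
for k in range(100):
    x, P = kalman_cv_step(z=v_true * Ts * (k + 1), x=x, P=P,
                          Ts=Ts, q_var=1e-4, r_var=1e-4)
print(x)
```

The Kalman smoother used offline refines these estimates further by also exploiting future measurements, which is why the smoothed trajectories are preferred for training the GP models.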

Figure 7: Ball-and-plate system used in the experiment.

The SE+ kernel (2) is used, where the linear kernels act only on a subset of the model inputs. The policy is a multi-output squashed-RBF-network that receives as inputs the estimates of the state computed with the Kalman filter; the maximum angle displacement is limited to a small value for both motors. Initial exploration is given by two different trials, in which the control signals are two triangular waves perturbed by white noise. Especially during exploration and the initial trials, the ball might touch the borders of the plate. In those cases, we kept the data up to the collision instant. A peculiarity of this experiment, in comparison with the previous ones, is the wide range of initial conditions. In fact, the ball could be positioned anywhere on the plate's surface, and the policy must drive it to the center. The initial distribution of $b^x_0$ and $b^y_0$ is a uniform distribution covering almost the entire surface (the plate is a square with sides of about 0.20 [m]). For the other state components, we assumed tighter initial distributions.

References

  1. R. E. Caflisch (1998) Monte Carlo and quasi-Monte Carlo methods. Acta Numerica 1998, pp. 1–49.
  2. B. S. Cazzolato and Z. Prime (2011) On the dynamics of the Furuta pendulum. Journal of Control Science and Engineering 2011.
  3. K. Chatzilygeroudis, R. Rama, R. Kaushik, D. Goepp, V. Vassiliades and J. Mouret (2017) Black-box data-efficient policy search for robotics. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 51–58.
  4. A. Dalla Libera, D. Romeres, D. K. Jha, B. Yerazunis and D. Nikovski (2020) Model-based reinforcement learning for physical systems without velocity and acceleration measurements. IEEE Robotics and Automation Letters 5 (2), pp. 3548–3555.
  5. M. Deisenroth and C. E. Rasmussen (2011) PILCO: a model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472.
  6. G. A. Einicke (2006) Optimal and robust noncausal filter formulations. IEEE Transactions on Signal Processing 54 (3), pp. 1069–1077.
  7. R. E. Kalman (1960) A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82 (1), pp. 35–45.
  8. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  9. D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  10. A. D. Libera and R. Carli (2020) A data-efficient geometrically inspired polynomial kernel for robot inverse dynamic. IEEE Robotics and Automation Letters 5 (1), pp. 24–31.
  11. A. D. Libera, R. Carli and G. Pillonetto (2019) A novel multiplicative polynomial kernel for Volterra series identification. arXiv preprint arXiv:1905.07960.
  12. D. Nguyen-Tuong and J. Peters (2010) Using model knowledge for learning inverse dynamics. In 2010 IEEE International Conference on Robotics and Automation, pp. 2677–2682.
  13. P. Parmas, C. E. Rasmussen, J. Peters and K. Doya (2018) PIPPS: flexible model-based policy search robust to the curse of chaos. In International Conference on Machine Learning, pp. 4065–4074.
  14. D. Romeres, D. K. Jha, A. Dalla Libera, B. Yerazunis and D. Nikovski (2019) Semiparametrical Gaussian processes learning of forward dynamical models for navigating in a circular maze. In 2019 International Conference on Robotics and Automation (ICRA), pp. 3195–3202.
  15. D. Romeres, M. Zorzi, R. Camoriano, S. Traversaro and A. Chiuso (2019) Derivative-free online learning of inverse dynamics models. IEEE Transactions on Control Systems Technology 28 (3), pp. 816–830.
  16. R. S. Sutton and A. G. Barto (2018) Reinforcement learning: an introduction. MIT Press.
  17. C. K. Williams and C. E. Rasmussen (2006) Gaussian processes for machine learning. MIT Press, Cambridge, MA.