Convex block-sparse linear regression with expanders – provably

Abstract

Sparse matrices are favorable objects in machine learning and optimization. When such matrices are used in place of dense ones, the overall complexity requirements in optimization can be significantly reduced in practice, both in terms of space and run-time. Prompted by this observation, we study a convex optimization scheme for block-sparse recovery from linear measurements. To obtain linear sketches, we use expander matrices, i.e., sparse matrices containing only a few non-zeros per column. Hitherto, to the best of our knowledge, such algorithmic solutions have only been studied from a non-convex perspective. Our aim here is to theoretically characterize the performance of convex approaches in such a setting.

Our key novelty is the expression of the recovery error in terms of the model-based norm, while ensuring that the solution lives in the model. To achieve this, we show that sparse model-based matrices satisfy a group version of the null-space property. Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting – as opposed to using dense sensing matrices – while showing competitive recovery performance.

1 Introduction

Consider the sparse recovery problem in the linear setting: given a measurement matrix A ∈ R^{m×n} (m < n), we seek a sparse vector x ∈ R^n that satisfies a set of measurements y = Ax or, when noise is present, find a solution in the feasible set {x : ||Ax − y|| ≤ σ}. This problem is closely related to compressive sensing [11, 9] and the subset selection problem [27], as well as to graph sketching [1] and data streaming [25].

Typically, the analysis of this task focuses on investigating conditions on A and the recovery algorithm to obtain an estimate x̂ that yields a guarantee of the form:

||x̂ − x||_p ≤ C1 · ||x − x_k||_q + C2 · σ.   (1)

Here, x_k denotes the best k-sparse approximation to x, σ bounds the noise level, and C1, C2 > 0 are constants [7, 5]. This type of guarantee is known as the ℓp/ℓq error guarantee.

Without further assumptions, there is a range of recovery algorithms that achieve the above, both from a non-convex [24, 21, 23] and a convex perspective [10, 5]. In this work, we focus on the latter case. Within this context, we highlight the ℓ1-minimization method, also known as Basis Pursuit (BP) [10], which is one of the prevalent schemes in compressive sensing:

minimize_x ||x||_1 subject to Ax = y.   (2)

Next, we discuss common configurations for such problem settings, as in (2), and how a priori knowledge can be incorporated in optimization. We conclude this section with our contributions.

Measurement matrices in sparse recovery: Conventional wisdom states that the best compression performance, i.e., using the least number of measurements to guarantee (1), is achieved by random, independent and identically distributed, dense sub-gaussian matrices [8]. Such matrices are known to satisfy the ℓ2-norm restricted isometry property (RIP-2) with high probability. That is, for some constant δ_k ∈ (0, 1) and all k-sparse x [6, 5]:

(1 − δ_k) ||x||_2^2 ≤ ||Ax||_2^2 ≤ (1 + δ_k) ||x||_2^2.   (3)

In this case, we can tractably and provably approximate generic k-sparse signals from their sketches. Unfortunately, dense matrices incur high time and storage complexity when applied in algorithmic solutions.1

On the other hand, in several areas, such as data stream computing and combinatorial group testing [25], it is vital to use a sparse sensing matrix A, containing only a few non-zero elements per column [18, 5]. In this work, we consider random sparse matrices that are adjacency matrices of expander graphs; a precise definition is given below.

Definition 1.1.

Let a graph G = ([n], [m], E) be a left-regular bipartite graph with n left (variable) nodes, m right (check) nodes, a set of edges E and left degree d. If, for any S ⊆ [n] with |S| ≤ k and for some ε ∈ (0, 1/2), the set of neighbours of S, N(S), satisfies |N(S)| ≥ (1 − ε) d |S|, then the graph is called a (k, d, ε)-lossless expander graph. The adjacency matrix of an expander graph is denoted by A ∈ {0, 1}^{m×n} and is called an expander matrix.
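To make the definition concrete, the following plain-Python sketch (our own illustration; helper names such as `random_left_regular` are not from the paper) samples a d-left-regular bipartite graph and brute-force checks the expansion property on tiny instances:

```python
import random
from itertools import combinations

def random_left_regular(n, m, d, seed=0):
    """Each of the n left nodes connects to d distinct right nodes,
    chosen uniformly at random; stored as a list of neighbour sets."""
    rng = random.Random(seed)
    return [set(rng.sample(range(m), d)) for _ in range(n)]

def expansion_ok(cols, d, k, eps):
    """Check |N(S)| >= (1 - eps) * d * |S| for every S with |S| <= k
    (exhaustive, so only sensible for tiny n and k)."""
    n = len(cols)
    for size in range(1, k + 1):
        for S in combinations(range(n), size):
            neighbours = set().union(*(cols[i] for i in S))
            if len(neighbours) < (1 - eps) * d * size:
                return False
    return True

cols = random_left_regular(n=12, m=40, d=4, seed=1)
ok = expansion_ok(cols, d=4, k=2, eps=0.25)
```

Exhaustive checking is exponential in k, so this is only a sanity check; in practice one relies on the probabilistic argument that random d-left-regular graphs are lossless expanders with high probability.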

In the above, the inequality |N(S)| ≥ (1 − ε) d |S| is known as the expansion property, where ε is the expansion constant [20]. As [5] states, expander matrices satisfy the RIP-1 condition: for all k-sparse x, we have:

(1 − 2ε) d ||x||_1 ≤ ||Ax||_1 ≤ d ||x||_1.   (4)

Since the recovery algorithms typically apply A and its adjoint to vectors as subroutines, sparse matrices have low computational complexity, even in the convex case (2), without any loss in sample complexity m = O(k log(n/k)); see Section 7 for some illustrative results on this matter. Along with computational benefits, [5] provides recovery guarantees of the form (1) for BP recovery with p = q = 1.
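As a rough illustration of this computational advantage (a sketch under our own conventions, not the paper's code), storing A by its column supports makes the product Ax cost O(d·n) additions instead of the O(m·n) operations of a dense multiply:

```python
def expander_matvec(cols, x, m):
    """y = A x, where A is a 0/1 adjacency matrix stored column-wise:
    cols[i] is the set of rows with a 1 in column i."""
    y = [0.0] * m
    for i, xi in enumerate(x):
        if xi != 0.0:          # sparse signals skip most columns entirely
            for r in cols[i]:
                y[r] += xi
    return y

# A 4-column, 5-row example with d = 2 ones per column.
cols = [{0, 1}, {1, 2}, {2, 3}, {3, 4}]
y = expander_matvec(cols, [1.0, 0.0, 2.0, 0.0], m=5)
```

The adjoint A^T z is computed analogously, by summing z over each column's support.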

Model-based sparse recovery: Nevertheless, sparsity is merely a first-order description of x, and in many applications we have considerably more information a priori. In this work, we consider the k-sparse block model [29, 3, 22]:

Definition 1.2.

We denote a block-sparse structure by G = {G_1, …, G_M}, where G_j ⊆ [n] for j ∈ [M], ∪_j G_j = [n], and M is the total number of groups. Given a group structure G, the k-sparse block model M_k is defined as the collection of all sets of k groups from G.

We consider block-sparse models such that the groups are non-overlapping and of equal size g, where M = n/g is the number of groups. The idea behind group models is the identification of groups of variates that should be either selected or discarded (i.e., set to zero) together. Such settings naturally appear in applications such as gene expression data [30] and neuroimaging [13].
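A minimal illustration of such a group structure (our own sketch; the helper names are hypothetical), with non-overlapping groups of equal size:

```python
def make_groups(n, g):
    """Partition {0, ..., n-1} into M = n // g consecutive,
    non-overlapping groups of equal size g (assumes g divides n)."""
    assert n % g == 0
    return [list(range(j * g, (j + 1) * g)) for j in range(n // g)]

def group_support(x, groups):
    """Indices of the groups containing at least one non-zero of x."""
    return [j for j, G in enumerate(groups) if any(x[i] != 0 for i in G)]

groups = make_groups(n=8, g=2)      # [[0, 1], [2, 3], [4, 5], [6, 7]]
x = [0, 0, 3.0, -1.0, 0, 0, 0, 0.5]
active = group_support(x, groups)
```

A vector belongs to the k-sparse block model exactly when its group support has at most k elements.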

As an indicator of what can be achieved, Model-based Compressive Sensing [4] leverages such models with dense sensing matrices to provably reduce the number of measurements for stable recovery from O(kg · log(n/(kg))) to O(kg + k log(M/k)) using non-convex schemes2; see [14] for a more recent discussion and extensions on this matter. Along this research direction, [17] and [2] propose non-convex algorithms for equally-sized and variable-sized block-sparse signal recovery, respectively, using sparse matrices. To show the reduction in sampling complexity, A is assumed to satisfy the model RIP-1 [17]:3

(1 − 2ε_M) d ||x||_1 ≤ ||Ax||_1 ≤ d ||x||_1, for all x ∈ M_k,   (5)

where ε_M represents the model-based expansion constant.

1.1 Contributions

Restricting error guarantees to only variations of standard ℓp-norms might be inconvenient in some applications. Here, we broaden the results in [16] towards non-standard distances in approximation guarantees. To the best of our knowledge, this work is the first attempt at provable convex recovery with approximation guarantees in the appropriate norm, using sparse matrices. Similar results – but for a different model – for the case of dense (Gaussian) sensing matrices are presented in [12].

In particular, in Section 4, we provide provable error guarantees for the convex criterion:

minimize_x ||x||_{1,2} subject to Ax = y,   (6)

where ||x||_{1,2} is the group ℓ1/ℓ2-norm; see the next section for details. Our key novelty is to provide approximation guarantees using sparse matrices for sensing; see Section 5. In practice, we show the merits of using sparse matrices in convex approaches for (6), both on synthetic and real data; see Section 7.

2 Preliminaries

Scalars are mostly denoted by plain letters (e.g., k, d), vectors by lowercase boldface letters (e.g., x), matrices by uppercase boldface letters (e.g., A) and sets by uppercase calligraphic letters (e.g., S), with the exception of [n], which denotes the index set {1, …, n}. Given a set S in an item space [n], we denote its complement by [n] \ S; we also use the notation S^c when the item space is clearly described by context. For a vector x, x_S denotes the restriction of x onto S, i.e., (x_S)_i = x_i if i ∈ S and (x_S)_i = 0 otherwise. We use |S| to denote the cardinality of a set S. The ℓp-norm of a vector x ∈ R^n is defined as ||x||_p = (Σ_{i=1}^{n} |x_i|^p)^{1/p}.

The ℓ1,2-norm: The first convex relaxation for block-sparse approximation is due to [29], who propose the group sparsity-inducing convex norm:

||x||_{1,2} = Σ_{j=1}^{M} ||x_{G_j}||_2.

The ℓ1,2-norm construction follows group supports according to a predefined model G and promotes sparsity at the group level. We assume all the groups are non-overlapping, but they may overlap in a more general setting [19].

Given a vector x, we define S_k(x) as the best k-block sparse index set according to:

S_k(x) ∈ argmax_{S ⊆ G : |S| = k} Σ_{G_j ∈ S} ||x_{G_j}||_2.
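For non-overlapping groups, this selection reduces to sorting the per-group norms; a small plain-Python sketch (our own, with hypothetical helper names):

```python
import math

def group_l2(x, G):
    """l2 norm of the block of x indexed by G."""
    return math.sqrt(sum(x[i] ** 2 for i in G))

def l12_norm(x, groups):
    """Group l1/l2 norm: sum over groups of each block's l2 norm."""
    return sum(group_l2(x, G) for G in groups)

def best_k_groups(x, groups, k):
    """Indices of the k groups with the largest l2 energy, i.e., the
    group support of the best k-block sparse approximation of x."""
    order = sorted(range(len(groups)), key=lambda j: -group_l2(x, groups[j]))
    return sorted(order[:k])

groups = [[0, 1], [2, 3], [4, 5]]
x = [3.0, 4.0, 0.1, 0.0, 1.0, 1.0]
```

Zeroing out all blocks outside `best_k_groups(x, groups, k)` then yields the best k-block sparse approximation itself.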

3 Problem statement

To properly set up the problem and our results, consider the following question:

Question: Let G be a predefined, a priori known block-sparse model of M groups of size g, and let k be a user-defined group-sparsity level. Consider A ∈ {0, 1}^{m×n} a known sparse expander matrix, satisfying the model-RIP-1 in (5) for some degree d and model-based expansion constant ε_M. Assume an unknown vector x is observed through y = Ax. Using criterion (6) to obtain a solution x̂ such that Ax̂ = y, can we achieve a constant approximation of the form:

||x̂ − x||_{1,2} ≤ C · ||x − x_{S_k}||_{1,2},

where S_k contains the groups of the best k-block sparse approximation of x?

4 Main result

In this section, we answer affirmatively to the above question with the following result:

Theorem 4.1.

Let A be an expander satisfying the model-RIP-1 (5) for the block-sparse model M_k, as in Definition 1.2. Consider two vectors x, x̂ ∈ R^n such that Ax = Ax̂ = y, where x is unknown and x̂ is the solution to (6). Then,

||x − x̂||_{1,2} ≤ C(ε_M) · ||x − x_{S_k(x)}||_{1,2},

where the constant C(ε_M) > 0 depends only on the model-based expansion constant.

In other words, Theorem 4.1 shows that, given a proper sparse matrix A satisfying the model-RIP-1 (5) for structure G, the ℓ1,2-convex program in (6) provides a good block-sparse solution x̂. Next, we also state a corollary of the theorem above for the more realistic noisy case.

Corollary 4.2.

Assume the setting of Theorem 4.1 and let x, x̂ be two vectors such that ||Ax − y||_1 ≤ σ and ||Ax̂ − y||_1 ≤ σ, where x is unknown and x̂ is the solution to the noisy variant of (6). Then,

||x − x̂||_{1,2} ≤ C(ε_M) · ||x − x_{S_k(x)}||_{1,2} + C'(ε_M) · σ,

for constants C(ε_M), C'(ε_M) > 0 that depend only on the model-based expansion constant.

5 Proof of Theorem 4.1

Let us consider two solutions x, x̂ to the set of linear equations y = Ax, where Ax = Ax̂ = y, and let A be a given expander satisfying (5). We define u := x − x̂; note that Au = 0, since Ax = Ax̂.

Using the group model formulation, each vector can be decomposed as x = Σ_{j=1}^{M} x_{G_j}, where the groups G_j are non-overlapping. Thus, given the above decompositions, we also have:

u = Σ_{j=1}^{M} u_{G_j}, where u_{G_j} = x_{G_j} − x̂_{G_j}.   (7)

By assumption, we know that x̂ minimizes (6) while x is feasible. Using the definition of the ℓ1,2-norm, we have:

||x − u||_{1,2} = ||x̂||_{1,2} ≤ ||x||_{1,2}.   (8)

Denote the set of groups in the best k-block sparse approximation of x as S_k, and write S_k^c = G \ S_k. Then,

||x||_{1,2} ≥ ||x − u||_{1,2}
           = Σ_{G_j ∈ S_k} ||x_{G_j} − u_{G_j}||_2 + Σ_{G_j ∈ S_k^c} ||x_{G_j} − u_{G_j}||_2
           ≥ Σ_{G_j ∈ S_k} ( ||x_{G_j}||_2 − ||u_{G_j}||_2 ) + Σ_{G_j ∈ S_k^c} ( ||u_{G_j}||_2 − ||x_{G_j}||_2 ),   (9)

where the first equality is due to the block-sparse model and the first inequality is due to the triangle inequality. To proceed, we require the following Lemma; the proof is based on [5].

Lemma 5.1.

Let u ∈ R^n be such that Au = 0, where A satisfies the model-RIP-1 (5). Then,

Σ_{G_j ∈ S_k} ||u_{G_j}||_2 ≤ α(ε_M) · ||u||_{1,2},   (10)

where α(ε_M) < 1 is a function of the model-based expansion parameter ε_M of the expander matrix A.

Proof.

Following [5], we assume the following decomposition of indices, according to G: due to the non-overlapping nature of G, we split the coordinates of u into k-block sparse index sets S_1, S_2, …, such that each S_i has k groups (except possibly the last one), each group has g indices, and there is an ordering of the groups such that:

Since Au = 0, we have ||Au||_1 = 0. Moreover, we denote by N(S_1) the set of indices of the rows of A that correspond to the neighbours of the left nodes in S_1; see Figure 1. Thus, without loss of generality, we reorder the rows of A such that the top rows are indexed by N(S_1), the union of the sets of neighbours of all nodes in S_1. Given the above partition, we denote the submatrix of A composed of these rows as A_{N(S_1)}, such that

Figure 1: Representation of the neighbours of the left nodes, indexed by the k-group sparse set S_1.

Thus, we have:

By using the model-RIP-1 property (5) of A on the input vector, we have:

where the first equality is due to Au = 0, the second equality is due to the non-overlapping groups, and the last inequality follows from the construction above.

Therefore, we have:

The quantity on the right-hand side further satisfies:

where the first step is due to the inclusion-exclusion principle, the second is due to the expansion property, and the third is due to the ordering of the groups.

Thus, the above inequality becomes:

Let G_{j*} denote the group that contains the maximizing index, so that:

Then:

However, due to the ordering of the ℓ2-norms of the groups:

we further have:

which leads to:

Using (10) in (9), we further have:

which is the desired result.

6 Comparison to state-of-the-art

To justify our theoretical results, we compare here with the only known results on convex sparse recovery using expander matrices, due to [5]. We note that, in [5], no a priori knowledge is assumed beyond plain sparsity.

We start our discussion with the following remark.

Remark 6.1.

In the extreme case where each group is a singleton (g = 1), Lemma 5.1 is analogous to Lemma 16 of [5]. To see this, observe that the ℓ1,2-norm then coincides with the ℓ1-norm.

We highlight that, as the group sparsity budget grows, the range of feasible ε values shrinks, i.e., we require more rows to construct an expander matrix with such an expansion property. For simplicity, in the discussion below we use ε and ε_M interchangeably, where the type of expansion constant used is apparent from context.

In the case where we are oblivious to any a priori known structured sparsity model, [5] proves the following error guarantee for the vanilla BP formulation (2), using expander matrices:

||x − x̂||_1 ≤ C(ε) · ||x − x_K||_1,   (11)

where K = k · g; i.e., we are looking for a solution of the same sparsity as the union of groups in the block-sparse case. In order to compare (11) with our result, a naive transformation of (11) into ℓ1,2 terms, via the norm equivalence ||v||_{1,2} ≤ ||v||_1 ≤ √g · ||v||_{1,2} for non-overlapping groups of size g, leads to:

||x − x̂||_{1,2} ≤ √g · C(ε) · ||x − x_{S_k}||_{1,2}.   (12)
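The naive ℓ1-to-ℓ1,2 conversion rests on the standard sandwich ||v||_{1,2} ≤ ||v||_1 ≤ √g · ||v||_{1,2} for non-overlapping groups of size g. A quick numerical check of this sandwich (our own sketch, plain Python):

```python
import math
import random

def l1(x):
    return sum(abs(v) for v in x)

def l12(x, groups):
    # group l1/l2 norm: sum of per-block l2 norms
    return sum(math.sqrt(sum(x[i] ** 2 for i in G)) for G in groups)

random.seed(0)
g, M = 4, 5                     # group size and number of groups
groups = [list(range(j * g, (j + 1) * g)) for j in range(M)]
x = [random.gauss(0, 1) for _ in range(g * M)]

lo, up = l12(x, groups), math.sqrt(g) * l12(x, groups)
mid = l1(x)                     # should land between lo and up
```

The per-block inequalities ||v||_2 ≤ ||v||_1 ≤ √g ||v||_2 sum directly over disjoint groups, which is why the sandwich holds.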

To define the conditions under which (12) is valid, the requirement on ε is independent of the group size g, while in our case the admissible range of ε_M shrinks with the model parameters; i.e., our analysis provides weaker bounds with respect to the range of ε values for which the error guarantee is meaningful; see also Figure 2.

Figure 2: Feasibility regions for our approach and [5], as functions of the expansion constant and the model parameters.

However, as already mentioned, (12) is oblivious to any model G: the solution does not necessarily belong to M_k. This fact provides degrees of freedom to obtain better approximation constants. Nevertheless, considering only simple sparsity does not guarantee a block-sparse solution. Section 7 includes examples that highlight the superiority of the ℓ1,2-norm in optimization.

7 Experiments

We start our discussion with a comparison between ℓ1- and ℓ1,2-norm convex approaches, when the ground truth is known to be block sparse, and with a comparison between dense sub-Gaussian and sparse sensing matrices in (6), with respect to both sampling and computational complexity requirements. We conclude with the task of recovering 2D images from compressed measurements, using block sparsity.

Solver.

To solve both the ℓ1- and ℓ1,2-norm instances (2) and (6), we use the primal-dual convex optimization framework in [28], which solves (2) and (6) – among other objectives – using ideas from the alternating direction method of multipliers. Using the same framework for both, we obtain a more accurate and credible comparison between the different problem settings.
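While [28] is a full primal-dual framework, the core group-norm operation inside any such ADMM-style scheme is the proximal operator of the ℓ1,2-norm, i.e., block soft-thresholding. A minimal sketch (our own, not code from [28]):

```python
import math

def block_soft_threshold(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2: each block is shrunk
    toward zero by lam in l2 norm, and zeroed out entirely if its norm
    is below lam -- the group analogue of scalar soft thresholding."""
    out = list(x)
    for G in groups:
        nrm = math.sqrt(sum(x[i] ** 2 for i in G))
        scale = max(0.0, 1.0 - lam / nrm) if nrm > 0 else 0.0
        for i in G:
            out[i] = scale * x[i]
    return out

groups = [[0, 1], [2, 3]]
y = block_soft_threshold([3.0, 4.0, 0.3, 0.4], groups, lam=1.0)
```

Note how the second block, whose ℓ2 norm (0.5) falls below lam, is discarded as a whole; this is precisely the group-level sparsity that (6) promotes.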

Figure 3: Average probability of successful recovery, as a function of the total number of observations, over the Monte Carlo iterations. Top panel: x is generated from a normal distribution. Bottom panel: x is generated from a Bernoulli distribution. The ℓ1,2-norm solver requires far fewer measurements for successful recovery than the ℓ1-norm solver.

ℓ1- vs. ℓ1,2-norm.

In this experiment, we verify that a priori structural knowledge helps in recovery. To show this in the convex domain, consider the following artificial example. Let y = Ax be the set of observations, where x is a k-block sparse vector. Here, we assume a block-sparse model with non-overlapping groups.

In both the ℓ1- and ℓ1,2-norm cases, A is designed as an expander with degree d, complying with our theory.

We consider two cases: x generated from a Gaussian distribution, and x generated as a binary signal. In both cases, x is normalized such that ||x||_2 = 1. For all cases, we used the same algorithmic framework [28] and performed the Monte Carlo realizations. Figure 3 shows the average probability of successful recovery as a function of the total number of measurements observed; we declare success when the relative recovery error falls below a small threshold. It is apparent that knowing the model a priori allows recovery from fewer observations.

Sub-Gaussian vs. sparse sensing matrices in ℓ1,2-norm recovery.

Let us now focus on the ℓ1,2-norm case. For this setting, we perform two experiments. First, we consider a setting similar to the previous one; the only difference lies in the way we generate the sensing matrix A. We consider two cases: A with i.i.d. sub-Gaussian entries, and A a sparse expander matrix, again with degree d. Here, we only use the ℓ1,2-norm solver. Figure 4 depicts the results. We observe that expander matrices perform worse – as a function of the total number of measurements required for successful recovery. Nevertheless, sparse matrices remain considerably competitive with dense Gaussian matrices.

Figure 4: Average probability of successful recovery, as a function of the total number of observations, over the Monte Carlo iterations. Expander matrices perform worse but remain competitive with dense Gaussian matrices.
Model   Error: Gaus.   Error: Exp.   Time (sec): Gaus.   Time (sec): Exp.
8.6e-07 3.3e-06 24.3 5.8
8.2e-06 3.4e-06 27.5 6.2
8.6e-06 3.2e-06 27.8 6.0
8.6e-06 3.4e-06 31.2 7.6
8.1e-07 3.4e-06 95.5 18.5
8.1e-06 3.3e-06 79.4 16.4
8.5e-06 3.4e-06 83.9 15.8
8.5e-06 3.5e-06 91.3 18.9
8.2e-06 3.3e-06 419.3 49.6
8.1e-06 3.4e-06 432.8 45.8
8.5e-06 3.6e-06 436.0 52.9
8.4e-06 3.5e-06 435.4 51.1
8.1e-06 9.4e-06 1585.5 55.1
8.1e-06 9.5e-06 1598.2 54.5
8.4e-06 9.4e-06 1600.6 56.2
8.1e-06 9.3e-06 1648.0 55.6
Table 1: Summary of comparison results for reconstruction quality and efficiency. Median values are reported. As a stopping criterion, we used the relative change ||x̂_t − x̂_{t−1}|| / ||x̂_t|| ≤ tol, where x̂_t is the estimate at the t-th iteration.

Now, to grasp the full picture in this setting, we scale our experiments to higher dimensions. Table 1 summarizes the results. All experiments were repeated for 10 independent realizations, and the table contains the median scores. For each problem size, the groups in G are non-overlapping and of equal cardinality, and the block sparsity ranges over several values.

One can observe that using sparse matrices in this setting results in faster convergence – in terms of the total time required for the stopping criterion to be met. Meanwhile, the solution quality is at least at the same level as when dense Gaussian matrices are used.

Finally, we highlight that, for the largest problem size, since m does not increase, the number of non-zeros per column decreases. This results in a small deterioration of the recovery quality, though still comparable to the convex counterpart.4 Meanwhile, the time required for convergence in the expander case remains at the same level as in the smaller settings; the same does not apply to the dense counterpart. This makes the use of expander matrices appealing in real applications.

Block sparsity in image processing.

For this experiment, we use the real background-subtracted image dataset presented in [15]. Out of 205 frames, we randomly selected 100 frames to process. Each frame is rescaled to a fixed pixel size and its pixel values are normalized. We observe y = Ax, where x is the ground-truth vectorized frame and A is either a sparse or a dense sensing matrix, designed as in the previous experiments.

Figure 5: Left panel: representative examples of subtracted frame recovery from compressed measurements (columns: original frame, then ℓ1- and ℓ1,2-norm recoveries with expander and Gaussian sensing matrices). The block-sparse model contains groups of consecutive indices. Right panel: cumulative computational time required to process 100 frames. Overall, using Gaussian matrices in the ℓ1,2-norm case, DecOpt required almost 2.8 hours (upper bound), compared to 0.55 hours when A is a sparse expander matrix. Thus, while Gaussian matrices are known to lead to better recovery results if no time restrictions apply, sparse sensing matrices constitute an appealing choice in practice.

For the purpose of this experiment, we set an upper wall-clock time budget to process the 100 frames with each solver; this translates into a fixed time budget per frame.

Due to the nature of the dataset, we can safely assume that the non-zeros are clustered together. Thus, we assume group models where the groups consist of consecutive column pixels, and the indices are divided into consecutive groups of equal size. No other parameters are required – this is an advantage over non-convex approaches, where a sparsity level is usually required to be known a priori. All experiments are repeated 10 independent times for different realizations of A.

Figure 5 shows some representative results. The left panel illustrates the recovery performance for different settings – ℓ1- vs. ℓ1,2-norm, and Gaussian vs. sparse matrices A; results for other configurations are presented in the appendix. The first row considers a "simple" image with a small number of non-zeros; the other two rows show two less sparse cases. While for the "simple" case solving ℓ1-norm minimization with Gaussian matrices leads to better recovery results within the time constraints, the same does not apply to the more "complex" cases. Overall, we observe that, given such time restrictions per frame, using expander matrices one can achieve a better solution in terms of PSNR, faster. This is shown in more detail in Figure 5 (right panel); see also the appendix for more results.

8 Conclusions

Sparse matrices are favorable objects in machine learning and optimization. When such matrices can be applied in place of dense ones, the computational requirements can be significantly reduced in practice, both in terms of space and runtime complexity. In this work, we show both theoretically and experimentally that such a selection is advantageous for the case of group-based Basis Pursuit recovery from linear measurements. As future work, one can consider other sparsity models, as well as different objective criteria.

Appendix A Appendix

Here, we report further results on the 2D image recovery problem. Recall that, for the purpose of this experiment, we set an upper wall-clock time budget to process the 100 frames with each solver, which translates into a fixed time budget per frame.

A.1 Varying group size

For this case, we focus on a single frame. Due to its higher number of non-zeros, we selected the frame shown in Figure 6. For this case, we consider that a roughly sufficient number of measurements is acquired. By varying the group size g, we obtain the results in Figure 6.

A.2 Varying number of measurements

Here, we fix the group size that performed best in the previous subsection, and let the number of measurements m vary. The results are shown in Figure 7.

Figure 6: Results on real data. Representative examples of subtracted frame recovery from compressed measurements. From top to bottom, each row corresponds to a block-sparse model with groups of consecutive indices of increasing size. One can observe that recovery worsens as the group size increases; thus a model with small groups is a good choice for this case.
Figure 7: Results on real data. Representative examples of subtracted frame recovery from compressed measurements. Here, we keep the block-sparse model fixed, with a fixed block size per group. From top to bottom, the number of measurements increases. One can observe that recovery improves as the number of measurements increases.

Footnotes

  1. There exist structured dense matrices that achieve better time and storage complexity than random Gaussian matrices; see [26] for a discussion of subsampled Fourier matrices for sparse recovery. However, sparse design matrices have been shown to be advantageous even over such structured dense matrices [5].
  2. Most non-convex approaches known heretofore consider a (block-)sparse constrained optimization criterion, where one minimizes a data-fidelity term (e.g., a least-squares function) over cardinality constraints.
  3. While probabilistic constructions of expander matrices satisfying the model RIP-1 coincide with those of regular RIP-1 expander matrices, the model-based assumption on signals in (5) results in expander constructions with fewer rows and guaranteed signal recovery.
  4. Observe that in most configurations, expander matrices find a solution closer to the ground truth than the dense setting, except for the largest case, where we decrease the number of non-zeros per column.

References

  1. K. Ahn, S. Guha, and A. McGregor. Graph sketches: Sparsification, spanners, and subgraphs. In Proceedings of the 31st symposium on Principles of Database Systems, pages 5–14. ACM, 2012.
  2. B. Bah, L. Baldassarre, and V. Cevher. Model-based sketching and recovery with expanders. In Proceedings of ACM-SIAM Symposium on Discrete Algorithms, pages 1529–1543. ACM–SIAM, 2014.
  3. Luca Baldassarre, Nirav Bhan, Volkan Cevher, Anastasios Kyrillidis, and Siddhartha Satpathi. Group-sparse model selection: Hardness and relaxations. arXiv preprint arXiv:1303.3207, 2013.
  4. R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. Information Theory, IEEE Transactions on, 56(4):1982–2001, 2010.
  5. R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In Communication, Control, and Computing, 2008 Allerton Conference on, pages 798–805. IEEE, 2008.
  6. E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. Information Theory, IEEE Transactions on, 52(2):489–509, 2006.
  7. E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on pure and applied mathematics, 59(8):1207–1223, 2006.
  8. E. Candes and T. Tao. Decoding by linear programming. Information Theory, IEEE Transactions on, 51(12):4203–4215, 2005.
  9. E. Candès and M. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21 – 30, March 2008.
  10. S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM journal on scientific computing, 20(1):33–61, 1998.
  11. D. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, 2006.
  12. Y. Eldar and M. Mishali. Robust recovery of signals from a structured union of subspaces. Information Theory, IEEE Transactions on, 55(11):5302–5316, 2009.
  13. A. Gramfort and M. Kowalski. Improving M/EEG source localization with an inter-condition sparse prior. In ISBI’09. IEEE International Symposium on, pages 141–144. IEEE, 2009.
  14. Chinmay Hegde, Piotr Indyk, and Ludwig Schmidt. A nearly-linear time framework for graph-structured sparsity. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 928–937, 2015.
  15. Junzhou Huang, Tong Zhang, and Dimitris Metaxas. Learning with structured sparsity. The Journal of Machine Learning Research, 12:3371–3412, 2011.
  16. P. Indyk and E. Price. K-median clustering, model-based compressive sensing, and sparse recovery for earth mover distance. In Proceedings of ACM symposium on Theory of Computing, pages 627–636. ACM, 2011.
  17. P. Indyk and I. Razenshteyn. On model-based RIP-1 matrices. In Automata, Languages, and Programming, pages 564–575. Springer, 2013.
  18. P. Indyk and M. Ruzic. Near-optimal sparse recovery in the ℓ1-norm. In Foundations of Computer Science, 2008. IEEE 49th Annual IEEE Symposium on, pages 199–207. IEEE, 2008.
  19. L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
  20. S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank. Efficient and robust compressed sensing using optimized expander graphs. Information Theory, IEEE Transactions on, 55(9):4299–4308, 2009.
  21. A. Kyrillidis and V. Cevher. Recipes on hard thresholding methods. In 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2011.
  22. Anastasios Kyrillidis, Luca Baldassarre, Marwa El Halabi, Quoc Tran-Dinh, and Volkan Cevher. Structured sparsity: Discrete and convex approaches. In Compressed Sensing and its Applications, pages 341–387. Springer, 2015.
  23. Anastasios Kyrillidis and Volkan Cevher. Combinatorial selection and least absolute shrinkage via the CLASH algorithm. In Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on, pages 2216–2220. IEEE, 2012.
  24. D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, 2009.
  25. J. Nelson, H. Nguyen, and D. Woodruff. On deterministic sketching and streaming for sparse recovery and norm estimation. pages 627–638, 2012.
  26. Eric Price. Sparse recovery and Fourier sampling. PhD thesis, Massachusetts Institute of Technology, 2013.
  27. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  28. Quoc Tran-Dinh and Volkan Cevher. Constrained convex minimization via model-based excessive gap. In Advances in Neural Information Processing Systems, pages 721–729, 2014.
  29. M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B, 68(1):49–67, 2006.
  30. H. Zhou, M. Sehl, J. Sinsheimer, and K. Lange. Association screening of common and rare genetic variants by penalized regression. Bioinformatics, 26(19):2375, 2010.