Structured Sparse Principal Component Analysis

Rodolphe Jenatton rodolphe.jenatton@inria.fr Guillaume Obozinski guillaume.obozinski@inria.fr Francis Bach francis.bach@inria.fr INRIA - WILLOW Project-team, Laboratoire d’Informatique de l’Ecole Normale Supérieure (INRIA/ENS/CNRS UMR 8548), 23, avenue d’Italie, 75214 Paris, France
Abstract

We present an extension of sparse PCA, or sparse dictionary learning, where the sparsity patterns of all dictionary elements are structured and constrained to belong to a prespecified set of shapes. This structured sparse PCA is based on a structured regularization recently introduced by [1]. While classical sparse priors only deal with cardinality, the regularization we use encodes higher-order information about the data. We propose an efficient and simple optimization procedure to solve this problem. Experiments with two practical tasks, face recognition and the study of the dynamics of a protein complex, demonstrate the benefits of the proposed structured approach over unstructured approaches.

1 Introduction

Principal component analysis (PCA) is an essential tool for data analysis and unsupervised dimensionality reduction, whose goal is to find, among linear combinations of the data variables, a sequence of orthogonal factors that most efficiently explain the variance of the observations.

One of its main shortcomings is that, even if PCA finds a small number of important factors, the factors themselves typically involve all original variables. In the last decade, several alternatives to PCA which find sparse and potentially interpretable factors have been proposed, notably non-negative matrix factorization (NMF) [2] and sparse PCA (SPCA) [3, 4, 5].

However, in many applications, only constraining the size of the factors does not seem appropriate, because the considered factors are expected not only to be sparse but also to have a certain structure. In fact, the popularity of NMF for face image analysis owes essentially to the fact that the method happens to retrieve sets of variables that are localized on the face and capture some features or parts of the face which seem intuitively meaningful given our a priori knowledge. We might therefore gain in the quality of the induced factors by directly enforcing this a priori in the matrix factorization constraints. More generally, it is desirable to encode higher-order information about the supports that reflects the structure of the data. For example, in computer vision, features associated with the pixels of an image are naturally organized on a grid, and the supports of factors explaining the variability of images could be expected to be localized, connected or have some other regularity with respect to the grid. Similarly, in genomics, factors explaining the gene expression patterns observed on a microarray could be expected to involve groups of genes corresponding to biological pathways, or sets of genes that are neighbors in a protein-protein interaction network.

Recent research on structured sparsity [6, 7, 1] has highlighted the benefit of exploiting such structure for variable selection and prediction in the context of regression and classification. In particular, [1] shows that, given any intersection-closed family $\mathcal{P}$ of patterns of variables, such as all the rectangles on a 2-dimensional grid of variables, it is possible to build an ad hoc regularization norm $\Omega$ that enforces that the support of the solution of the least-squares regression regularized by $\Omega$ belongs to the family $\mathcal{P}$.

Capitalizing on these results, we aim in this paper to go beyond sparse PCA and propose structured sparse PCA (SSPCA), which explains the variance of the data by factors that are not only sparse but also respect some a priori structural constraints deemed relevant to model the data at hand. We show how slight variants of the regularization term of [1] can be used successfully to yield a structured and sparse formulation of principal component analysis for which we propose a simple and efficient optimization scheme.

The rest of the paper is organized as follows: Section 2 introduces the SSPCA problem in the dictionary learning framework, summarizes the regularization considered in [1] and its essential properties, and presents some simple variants which are more effective in the context of PCA. Section 3 is dedicated to our optimization scheme for solving SSPCA. Our experiments in Section 4 illustrate the benefits of our approach through applications to face recognition and the study of the dynamics of protein complexes.

Notation:

For any vector $y$ in $\mathbb{R}^p$ and any $\alpha>0$, we denote by $\|y\|_\alpha = \big(\sum_{j=1}^{p}|y_j|^{\alpha}\big)^{1/\alpha}$ the $\ell_\alpha$ (quasi-)norm of $y$. Similarly, for any rectangular matrix $Y\in\mathbb{R}^{n\times p}$, we denote by $\|Y\|_F = \big(\sum_{i=1}^{n}\sum_{j=1}^{p}Y_{ij}^2\big)^{1/2}$ its Frobenius norm, where $Y_{ij}$ is the $(i,j)$-th element of $Y$. We write $y^k$ for the $k$-th column of $Y$. Given $y$ in $\mathbb{R}^p$ and a subset $J$ of $\{1,\dots,p\}$, $y_J$ denotes the vector in $\mathbb{R}^p$ that has the same entries as $y$ for $j\in J$, and null entries outside of $J$. In addition, $\{j\in\{1,\dots,p\} : y_j\neq 0\}$ is referred to as the support, or nonzero pattern, of the vector $y$. For any finite set $A$ with cardinality $|A|$, we also define the $|A|$-tuple $(y^a)_{a\in A}$ as the collection of $p$-dimensional vectors $y^a\in\mathbb{R}^p$ indexed by the elements of $A$. Furthermore, for two vectors $x$ and $y$ in $\mathbb{R}^p$, we denote by $x\circ y=(x_1y_1,\dots,x_py_p)^\top\in\mathbb{R}^p$ the elementwise product of $x$ and $y$. Finally, we extend $\tfrac{a}{b}$ by continuity in zero, with $\tfrac{a}{0}=\infty$ if $a\neq 0$ and $0$ otherwise.

2 Problem statement

It is useful to distinguish two conceptually different interpretations of PCA. In terms of analysis, PCA sequentially projects the data on subspaces that explain the largest fraction of the variance of the data. In terms of synthesis, PCA finds a basis, or orthogonal dictionary, such that all signals observed admit decompositions with low reconstruction error. These two interpretations recover the same basis of principal components for PCA but lead to different formulations for sparse PCA. The analysis interpretation leads to sequential formulations [8, 9, 3] that consider components one at a time and perform a deflation of the covariance matrix at each step (see [10]). The synthesis interpretation leads to non-convex global formulations [4, 11, 9, 12] which estimate simultaneously all principal components, often drop the orthogonality constraints, and are referred to as matrix factorization problems [13] in machine learning, and dictionary learning in signal processing.

The approach we propose fits more naturally in the framework of dictionary learning, whose terminology we now introduce.

2.1 Matrix factorization and dictionary learning

Given a matrix $X\in\mathbb{R}^{n\times p}$ of $n$ rows corresponding to $n$ observations in $\mathbb{R}^p$, the dictionary learning problem is to find a matrix $V\in\mathbb{R}^{p\times r}$, called the dictionary, such that each observation can be well approximated by a linear combination of the $r$ columns $(v^k)_{k\in\{1,\dots,r\}}$ of $V$, called the dictionary elements. If $U\in\mathbb{R}^{n\times r}$ is the matrix of the linear combination coefficients, or decomposition coefficients, the matrix product $UV^\top$ is called a decomposition of $X$.

Learning simultaneously the dictionary $V$ and the decomposition $U$ corresponds to a matrix factorization problem (see [5] and references therein). As formulated in [14] or [5], it is natural, when learning a decomposition, to penalize or constrain some norms or quasi-norms of $U$ and $V$, say $\Omega_u$ and $\Omega_v$ respectively, to encode prior information—typically sparsity—about the decomposition of $X$. This can be written generally as

$$\min_{U\in\mathbb{R}^{n\times r},\,V\in\mathbb{R}^{p\times r}}\ \frac{1}{2np}\,\|X-UV^\top\|_F^2 \;+\; \lambda\sum_{k=1}^{r}\Omega_v(v^k)\quad\text{s.t.}\quad \Omega_u(u^k)\le 1,\ \forall k\in\{1,\dots,r\},\qquad(1)$$

where the regularization parameter $\lambda\ge 0$ controls to which extent the dictionary is regularized¹. If we assume that both regularizations $\Omega_u$ and $\Omega_v$ are convex, problem (1) is convex w.r.t. $U$ for fixed $V$ and vice versa. It is however not jointly convex in $(U,V)$.

¹From [14], we know that our formulation is also equivalent to two unconstrained problems, with the penalizations $\lambda\sum_{k=1}^{r}\Omega_u(u^k)\,\Omega_v(v^k)$ or $\tfrac{\lambda}{2}\sum_{k=1}^{r}\big[\Omega_u(u^k)^2+\Omega_v(v^k)^2\big]$.

The formulation of sparse PCA considered in [12] corresponds to a particular instance of this problem, where the dictionary elements are required to be sparse (without the orthogonality constraint $V^\top V=I$). This can be achieved by penalizing the columns of $V$ by a sparsity-inducing norm, e.g., the $\ell_1$ norm, $\Omega_v(v^k)=\|v^k\|_1$. In the next section we consider a regularization which controls not only the sparsity but also the structure of the supports of dictionary elements.
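As an illustration, the following minimal NumPy sketch evaluates the objective of Eq. (1) for the sparse PCA instance just described ($\ell_1$ penalty on $V$, $\ell_2$ constraint on $U$). The random data, the dimensions and the helper name `objective` are placeholders of this sketch, which assumes the $1/(2np)$ loss scaling of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 100, 50, 10              # observations, variables, dictionary size
X = rng.standard_normal((n, p))    # placeholder data matrix

U = rng.standard_normal((n, r))    # decomposition coefficients
V = rng.standard_normal((p, r))    # dictionary (columns = dictionary elements)
lam = 0.1                          # regularization parameter lambda

def objective(X, U, V, lam, omega_v):
    """Value of Eq. (1): scaled squared reconstruction error plus penalty on V."""
    n, p = X.shape
    loss = 0.5 / (n * p) * np.linalg.norm(X - U @ V.T, "fro") ** 2
    penalty = lam * sum(omega_v(V[:, k]) for k in range(V.shape[1]))
    return loss + penalty

# Sparse PCA instance: l1 penalty on the dictionary, l2 constraint on U.
U /= np.maximum(np.linalg.norm(U, axis=0), 1.0)   # enforce ||u^k||_2 <= 1
print(objective(X, U, V, lam, omega_v=lambda v: np.abs(v).sum()))
```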

2.2 Structured sparsity-inducing norms

The work of [1] considered a norm which induces structured sparsity in the following sense: the solutions to a learning problem regularized by this norm have a sparse support which moreover belongs to a certain set of groups of variables. Interesting sets of possible supports include sets of variables forming rectangles when arranged on a grid and, more generally, convex subsets².

²We use the term convex informally here. It can however be made precise with the notion of convex subgraphs [15].

The framework of [1] can be summarized as follows: if we denote by $\mathcal{G}$ a subset of the power set of $\{1,\dots,p\}$ such that $\bigcup_{G\in\mathcal{G}}G=\{1,\dots,p\}$, we define a norm $\Omega$ on a vector $y\in\mathbb{R}^p$ as

$$\Omega(y)\;=\;\sum_{G\in\mathcal{G}}\Big(\sum_{j\in G}(d_j^G)^2\,|y_j|^2\Big)^{1/2}\;=\;\sum_{G\in\mathcal{G}}\|d^G\circ y\|_2,$$

where $(d^G)_{G\in\mathcal{G}}$ is a $|\mathcal{G}|$-tuple of $p$-dimensional vectors such that $d_j^G>0$ if $j\in G$ and $d_j^G=0$ otherwise. This norm linearly combines the $\ell_2$ norms of possibly overlapping groups of variables, with variables in each group being weighted by $(d^G)_{G\in\mathcal{G}}$. Note that a same variable $y_j$ belonging to two different groups $G_1,G_2\in\mathcal{G}$ is allowed to be weighted differently in $G_1$ and $G_2$ (by respectively $d_j^{G_1}$ and $d_j^{G_2}$).

For specific choices of $\mathcal{G}$, $\Omega$ leads to standard sparsity-inducing norms. For example, when $\mathcal{G}$ is the set of all singletons, $\Omega$ is the usual $\ell_1$ norm (assuming that all the weights are equal to 1).

We focus on the case of a 2-dimensional grid where the set of groups $\mathcal{G}$ is the set of all horizontal and vertical half-spaces (see Fig. 1, taken from [1]). As proved in [1, Theorem 3.1], the norm $\Omega$ sets to zero some groups of variables, i.e., some entire horizontal and vertical half-spaces of the grid, and therefore induces rectangular nonzero patterns. Note that a broader set of convex patterns can be obtained by adding in $\mathcal{G}$ half-planes with other orientations. In practice, we use planes with angles that are multiples of $\pi/4$.

Figure 1: (Left) The set of blue and green groups, with their (not displayed) complements, whose penalization selects rectangles. (Right) In red, an example of a pattern recovered in this setting.
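To make this group design concrete, the sketch below enumerates the horizontal and vertical half-space groups of Fig. 1 on a toy grid. The row-major indexing and the function name `halfspace_groups` are conventions of this illustration, and the weights $d^G$ as well as the additional $\pi/4$ orientations are omitted for brevity.

```python
import numpy as np

def halfspace_groups(h, w):
    """Horizontal and vertical half-space groups (and their complements) for an
    h x w grid of variables indexed row-major; zeroing out complement groups
    leaves a rectangular nonzero pattern."""
    idx = np.arange(h * w).reshape(h, w)
    groups = []
    for i in range(1, h):                  # horizontal cuts
        groups.append(idx[:i, :].ravel())  # upper half-space
        groups.append(idx[i:, :].ravel())  # lower half-space (complement)
    for j in range(1, w):                  # vertical cuts
        groups.append(idx[:, :j].ravel())  # left half-space
        groups.append(idx[:, j:].ravel())  # right half-space (complement)
    return groups

groups = halfspace_groups(4, 5)
print(len(groups), "groups on a 4x5 grid")   # 2*(4-1) + 2*(5-1) = 14
```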

Among sparsity-inducing regularizations, the $\ell_1$ norm is often privileged since it is convex. However, so-called concave penalizations, such as penalization by an $\ell_\alpha$ quasi-norm with $\alpha<1$, which are closer to $\ell_0$ and penalize small coefficients more aggressively, can be preferred, especially in a context where the unregularized problem, here dictionary learning, is itself non-convex. In light of recent work showing the advantages of addressing sparse regression problems through concave penalization (e.g., see [16]), we therefore generalize $\Omega$ to a family of non-convex regularizers as follows: for $\alpha\in(0,1)$, we define the quasi-norm $\Omega^\alpha$ for all vectors $y\in\mathbb{R}^p$ as

$$\Omega^\alpha(y)\;=\;\Big(\sum_{G\in\mathcal{G}}\|d^G\circ y\|_2^{\alpha}\Big)^{1/\alpha}\;=\;\big\|\,(\|d^G\circ y\|_2)_{G\in\mathcal{G}}\,\big\|_\alpha,$$

where we denote by $(\|d^G\circ y\|_2)_{G\in\mathcal{G}}$ the $|\mathcal{G}|$-tuple composed of the different blocks $\|d^G\circ y\|_2$. We thus replace the (convex) norm $\Omega$ by the (neither convex, nor concave) quasi-norm $\Omega^\alpha$. Note that this modification impacts the sparsity induced at the level of groups, since we have replaced the convex $\ell_1$ norm by the concave $\ell_\alpha$ quasi-norm.
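The following sketch evaluates $\Omega$ and its non-convex variant $\Omega^\alpha$ on a toy example; groups are given as index arrays, and the weights are the restriction of $d^G$ to $G$ (uniform weights here are an assumption for illustration only).

```python
import numpy as np

def omega(v, groups, weights, alpha=1.0):
    """l_alpha (quasi-)norm of the vector of weighted group norms ||d^G o v||_2;
    alpha = 1 recovers the convex norm Omega, alpha < 1 gives Omega^alpha."""
    group_norms = np.array(
        [np.linalg.norm(d * v[G]) for G, d in zip(groups, weights)]
    )
    return (group_norms ** alpha).sum() ** (1.0 / alpha)

# Toy example with two overlapping groups on 3 variables.
v = np.array([1.0, -2.0, 0.5])
groups = [np.array([0, 1]), np.array([1, 2])]
weights = [np.ones(2), np.ones(2)]            # d^G restricted to its group
print(omega(v, groups, weights, alpha=1.0))   # convex norm Omega
print(omega(v, groups, weights, alpha=0.5))   # non-convex quasi-norm Omega^alpha
```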

3 Optimization

We consider the optimization of Eq. (1) where we use $\Omega^\alpha$ to regularize the dictionary $V$. We discuss in Section 3.3 which norms $\Omega_u$ we can handle in this optimization framework.

3.1 Formulation as a sequence of convex problems

We now consider Eq. (1), where we take $\Omega_v$ to be $\Omega^\alpha$, that is,

$$\min_{U,V}\ \frac{1}{2np}\,\|X-UV^\top\|_F^2 \;+\; \lambda\sum_{k=1}^{r}\Omega^\alpha(v^k)\quad\text{s.t.}\quad \Omega_u(u^k)\le 1,\ \forall k\in\{1,\dots,r\}.\qquad(2)$$

Although the minimization problem in Eq. (2) is still convex in $U$ for $V$ fixed, the converse is not true anymore because of $\Omega^\alpha$. Indeed, the formulation in $V$ is non-differentiable and non-convex. To address this problem, we use a variational equality based on the following lemma, which is related³ to ideas from [17, 18]:

³Note that we depart from [17, 18], who consider a quadratic upper bound on the squared norm. We prefer to remain in the standard dictionary learning framework where the penalization is not squared.

Lemma 3.1.

Let $\alpha\in(0,2)$ and $\beta=\frac{\alpha}{2-\alpha}$. For any vector $y\in\mathbb{R}^M$, we have the following equality

$$\|y\|_\alpha \;=\; \min_{z\in\mathbb{R}_+^M}\ \frac{1}{2}\Big[\sum_{j=1}^{M}\frac{y_j^2}{z_j}+\|z\|_\beta\Big],$$

and the minimum is uniquely attained for $z_j=|y_j|^{2-\alpha}\,\|y\|_\alpha^{\alpha-1}$, $j\in\{1,\dots,M\}$.

Proof.

Let $\psi: z\mapsto \frac{1}{2}\big[\sum_{j=1}^{M}y_j^2 z_j^{-1}+\|z\|_\beta\big]$ be the continuously differentiable function defined on $(0,+\infty)^M$. We have $\psi(z)\to+\infty$ when $\|z\|_\beta\to+\infty$ and when $z_j\to 0$ for some $j$ with $y_j\neq 0$ (for $y_j=0$, note that the corresponding optimal $z_j$ is $0$). Thus, the infimum exists and it is attained. Taking the derivative w.r.t. $z_j$ (for $y_j\neq 0$) leads to the expression of the unique minimum, an expression that still holds for $y_j=0$. ∎
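As a quick numerical illustration of this equality (with placeholder values, under the reconstruction of the lemma given above), the sketch below compares $\|y\|_\alpha$ with the value of the variational objective at the closed-form minimizer and at random perturbations of it.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
beta = alpha / (2.0 - alpha)
y = rng.standard_normal(5)

def lalpha(x, a):
    """l_a (quasi-)norm."""
    return (np.abs(x) ** a).sum() ** (1.0 / a)

def phi(z):
    """Variational objective of Lemma 3.1."""
    return 0.5 * ((y ** 2 / z).sum() + lalpha(z, beta))

# Closed-form minimizer given in the lemma.
z_star = np.abs(y) ** (2.0 - alpha) * lalpha(y, alpha) ** (alpha - 1.0)
print(lalpha(y, alpha), phi(z_star))     # the two values coincide

# Random positive perturbations of z_star never do better.
for _ in range(1000):
    z = z_star * np.exp(0.5 * rng.standard_normal(5))
    assert phi(z) >= phi(z_star) - 1e-12
```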

To reformulate problem (2), let us consider the $r$-tuple $(\eta^k)_{k\in\{1,\dots,r\}}$ of $|\mathcal{G}|$-dimensional vectors with nonnegative entries, i.e., $\eta_G^k\ge 0$ for all $k\in\{1,\dots,r\}$ and $G\in\mathcal{G}$. It follows from Lemma 3.1 that

$$\Omega^\alpha(v^k)\;=\;\min_{\eta^k\in\mathbb{R}_+^{|\mathcal{G}|}}\ \frac{1}{2}\Big[\sum_{G\in\mathcal{G}}\frac{\|d^G\circ v^k\|_2^2}{\eta_G^k}+\|\eta^k\|_\beta\Big].$$

If we introduce the matrix $\zeta\in\mathbb{R}^{p\times r}$ defined by⁴ $\zeta_j^k=\big[\sum_{G\in\mathcal{G},\,G\ni j}(d_j^G)^2(\eta_G^k)^{-1}\big]^{-1}$, we then obtain

$$\sum_{G\in\mathcal{G}}\frac{\|d^G\circ v^k\|_2^2}{\eta_G^k}\;=\;\sum_{j=1}^{p}(v_j^k)^2(\zeta_j^k)^{-1}\;=\;(v^k)^\top\mathrm{Diag}(\zeta^k)^{-1}v^k.$$

⁴For the sake of clarity, we do not specify the dependence of $\zeta$ on $\eta$.

This leads to the following formulation

$$\min_{U,V,\eta}\ \frac{1}{2np}\,\|X-UV^\top\|_F^2 \;+\; \frac{\lambda}{2}\sum_{k=1}^{r}\Big[(v^k)^\top\mathrm{Diag}(\zeta^k)^{-1}v^k+\|\eta^k\|_\beta\Big]\quad\text{s.t.}\quad \Omega_u(u^k)\le 1,\ \forall k,\qquad(3)$$

which is equivalent to Eq. (2) and convex with respect to $V$.

3.2 Sharing structure among dictionary elements

So far, the regularization quasi-norm $\Omega^\alpha$ has been used to induce a structure inside each dictionary element taken separately. Nonetheless, some applications may also benefit from a control of the structure across dictionary elements. For instance, it can be desirable to impose the constraint that dictionary elements share only a few different nonzero patterns. In the context of face recognition, this could be relevant to model the variability of faces as the combined variability of several parts, with each part having a small support (such as the eyes), and having its variance itself explained by several dictionary elements (corresponding for example to the color of the eyes).

To this end, we consider $P$, a partition of $\{1,\dots,r\}$. Imposing that two dictionary elements $v^k$ and $v^l$ share the same sparsity pattern is equivalent to imposing that $v_j^k$ and $v_j^l$ are simultaneously zero or non-zero, for all $j\in\{1,\dots,p\}$. Following the approach used for joint feature selection [19], where the $\ell_1$ norm is composed with an $\ell_2$ norm, we compose the norm $\Omega^\alpha$ with the $\ell_2$ norm $\|(v_j^k)_{k\in e}\|_2$ of the $j$-th entries of all dictionary elements of a class $e$ of the partition $P$, leading to the regularization:

$$\sum_{e\in P}\Omega^\alpha\Big(\big(\|(v_j^k)_{k\in e}\|_2\big)_{j\in\{1,\dots,p\}}\Big).\qquad(4)$$

In fact, not surprisingly given that similar results hold for the group Lasso [17], it can be shown that the above extension is equivalent to the variational formulation

$$\min_{\eta}\ \frac{1}{2}\sum_{e\in P}\Big[\sum_{k\in e}(v^k)^\top\mathrm{Diag}(\zeta^e)^{-1}v^k+\|\eta^e\|_\beta\Big],$$

with class-specific variables $\eta^e\in\mathbb{R}_+^{|\mathcal{G}|}$ and $\zeta^e\in\mathbb{R}^p$, $e\in P$, defined by analogy with $\eta^k$ and $\zeta^k$, $k\in\{1,\dots,r\}$.
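A small sketch evaluating the regularization of Eq. (4) for a given partition of the dictionary elements; the toy groups, uniform weights and random dictionary are placeholders of this illustration.

```python
import numpy as np

def omega_alpha(v, groups, weights, alpha):
    gn = np.array([np.linalg.norm(d * v[G]) for G, d in zip(groups, weights)])
    return (gn ** alpha).sum() ** (1.0 / alpha)

def shared_penalty(V, partition, groups, weights, alpha):
    """Eq. (4): apply Omega^alpha, for each class e of the partition, to the
    vector of row-wise l2 norms of the dictionary elements belonging to e."""
    total = 0.0
    for e in partition:                               # e = list of column indices
        row_norms = np.linalg.norm(V[:, e], axis=1)   # one value per variable j
        total += omega_alpha(row_norms, groups, weights, alpha)
    return total

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 4))         # p = 6 variables, r = 4 dictionary elements
partition = [[0, 1], [2, 3]]            # two classes of elements sharing supports
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
weights = [np.ones(3), np.ones(3)]
print(shared_penalty(V, partition, groups, weights, alpha=0.5))
```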

3.3 Algorithm

The main optimization procedure described in Algorithm 1 is based on a cyclic optimization over the three variables involved, namely $\eta$, $U$ and $V$. We use Lemma 3.1 to solve Eq. (2) through a sequence of problems that are convex in $U$ for fixed $V$ (and conversely, convex in $V$ for fixed $U$). For this sequence of problems, we then present efficient optimization procedures based on block coordinate descent (BCD) [20, Section 2.7]. We describe these in detail in Algorithm 1. Note that we depart from the approach of [1], who use an active-set algorithm. Their approach indeed does not allow warm restarts, which are crucial in our alternating optimization scheme.

Update of $\eta$

The update of $\eta$ is straightforward (even if the underlying minimization problem is non-convex), since the minimizer in Lemma 3.1 is given in closed form. In practice, as in [18], we avoid numerical instabilities near zero by smoothing the update, lower-bounding the entries of $\eta^k$ by a small constant $\varepsilon\ll 1$.
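A sketch of this update and of the resulting $\zeta$, assuming the closed form of Lemma 3.1 applied to the weighted group norms; the $\varepsilon$ floor is one possible smoothing, used here as an assumption rather than as the paper's exact choice.

```python
import numpy as np

def update_eta(v, groups, weights, alpha, eps=1e-10):
    """Closed-form eta update from Lemma 3.1 applied to the weighted group
    norms of a dictionary element v; the eps floor guards against
    instabilities near zero (an assumption of this sketch)."""
    gn = np.array([np.linalg.norm(d * v[G]) for G, d in zip(groups, weights)])
    omega = (gn ** alpha).sum() ** (1.0 / alpha)
    if omega == 0.0:
        return np.full(len(groups), eps)
    return np.maximum(gn ** (2.0 - alpha) * omega ** (alpha - 1.0), eps)

def zeta_from_eta(eta, groups, weights, p):
    """zeta_j = ( sum_{G containing j} (d^G_j)^2 / eta_G )^(-1); assumes the
    groups cover all p variables."""
    inv_zeta = np.zeros(p)
    for G, d, e in zip(groups, weights, eta):
        inv_zeta[G] += d ** 2 / e
    return 1.0 / inv_zeta

v = np.array([1.0, 0.0, -2.0, 0.5])
groups = [np.array([0, 1]), np.array([1, 2, 3])]
weights = [np.ones(2), np.ones(3)]
eta = update_eta(v, groups, weights, alpha=0.5)
print(eta, zeta_from_eta(eta, groups, weights, p=4))
```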

Update of $U$

The update of $U$ follows the technique suggested by [11]. Each column $u^k$ of $U$ is constrained separately through $\Omega_u(u^k)\le 1$. Furthermore, if we assume that $V$, $\eta$ and all the other columns of $U$ are fixed, some basic algebra leads to

$$u^k \;=\; \operatorname*{arg\,min}_{\Omega_u(u)\le 1}\ \Big\|X-\sum_{l\neq k}u^l(v^l)^\top-u\,(v^k)^\top\Big\|_F^2\qquad(5)$$
$$\phantom{u^k}\;=\; \Pi_{\Omega_u(\cdot)\le 1}\Big(\big(X-\textstyle\sum_{l\neq k}u^l(v^l)^\top\big)\,v^k\,/\,\|v^k\|_2^2\Big),\qquad(6)$$

which is simply the Euclidean projection $\Pi_{\Omega_u(\cdot)\le 1}$ of the unconstrained minimizer onto the unit ball of $\Omega_u$. Consequently, the cost of the BCD update of $U$ depends on how fast we can perform this projection; the $\ell_1$ and $\ell_2$ norms are typical cases where the projection can be computed efficiently. In the experiments, we take $\Omega_u$ to be the $\ell_2$ norm.
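A hedged sketch of this column update in the $\ell_2$-ball case, assuming the $1/(2np)$ loss scaling of Eq. (1); the helper names and the random data are illustrative only.

```python
import numpy as np

def project_l2_ball(x):
    """Euclidean projection onto the unit l2 ball."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def update_U_bcd(X, U, V, n_passes=1):
    """BCD passes over the columns of U: each column is the projection of its
    unconstrained minimizer onto the unit l2 ball, cf. Eqs. (5)-(6).
    Columns of V are assumed nonzero."""
    U = U.copy()
    for _ in range(n_passes):
        for k in range(U.shape[1]):
            # Residual with the contribution of column k removed.
            E = X - U @ V.T + np.outer(U[:, k], V[:, k])
            U[:, k] = project_l2_ball(E @ V[:, k] / (V[:, k] @ V[:, k]))
    return U

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 20))
U, V = rng.standard_normal((30, 5)), rng.standard_normal((20, 5))
U = update_U_bcd(X, U, V, n_passes=2)
print(np.linalg.norm(U, axis=0).max())   # every column satisfies ||u^k||_2 <= 1
```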

In addition, since the function $u\mapsto\|X-\sum_{l\neq k}u^l(v^l)^\top-u\,(v^k)^\top\|_F^2$ is continuously differentiable on the (closed convex) unit ball of $\Omega_u$, the convergence of the BCD procedure is guaranteed since the minimum in Eq. (5) is unique [20, Proposition 2.7.1]. The complete update of $U$ is given in Algorithm 1.

Update of $V$

A fairly natural way to update $V$ would be to compute the closed-form solutions available for each row of $V$. Indeed, both the loss and the penalization on $V$ are separable in the rows of $V$, leading to $p$ independent ridge-regression problems, each implying in turn the inversion of an $r\times r$ matrix.

However, in light of the update of $U$, we consider again a BCD scheme, on the columns of $V$, that turns out to be much more efficient, without requiring any non-diagonal matrix inversion. The detailed procedure is given in Algorithm 1. The convergence follows along the same arguments as those used for $U$.
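Under the same assumptions as before ($1/(2np)$ loss scaling, $\zeta$ computed from the current $\eta$), a sketch of the resulting column update of $V$, which reduces to elementwise divisions because the penalty is diagonal; the placeholder $\zeta$ and data are not those of the paper.

```python
import numpy as np

def update_V_bcd(X, U, V, zeta, lam, n_passes=1):
    """BCD over the columns of V for the reformulated problem (3): each column
    solves a quadratic problem with a diagonal penalty, so only elementwise
    divisions are needed (no non-diagonal matrix inversion)."""
    n, p = X.shape
    V = V.copy()
    for _ in range(n_passes):
        for k in range(V.shape[1]):
            u = U[:, k]
            # Right-hand side with the contribution of column k kept explicit.
            b = X.T @ u - V @ (U.T @ u) + (u @ u) * V[:, k]
            V[:, k] = zeta[:, k] * b / ((u @ u) * zeta[:, k] + n * p * lam)
    return V

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 20))
U, V = rng.standard_normal((30, 5)), rng.standard_normal((20, 5))
zeta = np.ones((20, 5))     # placeholder; in the algorithm it comes from eta
V = update_V_bcd(X, U, V, zeta, lam=0.1, n_passes=2)
print(V.shape)
```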

  Input: Dictionary size $r$, data matrix $X$.
  Initialization: Random initialization of $U$, $V$.
  while ( stopping criterion not reached )
    Update $\eta$: closed-form solution given by Lemma 3.1.
    Update $U$ by BCD:
      for $t=1$ to $T_u$, for $k=1$ to $r$:
        $u^k \leftarrow \Pi_{\Omega_u(\cdot)\le 1}\big(u^k + (Xv^k - UV^\top v^k)/\|v^k\|_2^2\big)$.
    Update $V$ by BCD:
      for $t=1$ to $T_v$, for $k=1$ to $r$:
        $v^k \leftarrow \big(\|u^k\|_2^2\, I + np\lambda\,\mathrm{Diag}(\zeta^k)^{-1}\big)^{-1}\big(X^\top u^k - VU^\top u^k + \|u^k\|_2^2\, v^k\big)$.
  Output: Decomposition $U$, $V$.
Algorithm 1 Main optimization procedure for solving Eq. (3).

Our problem is not jointly convex in $U$, $V$ and $\eta$, which raises the question of the sensitivity of the optimization to its initialization. This point will be discussed in the experiments, Section 4. In practice, the stopping criterion relies on the relative decrease of the cost function in Eq. (2) falling below a small threshold.

Algorithmic complexity

The complexity of Algorithm 1 can be decomposed into three terms, corresponding to the update procedures of $\eta$, $U$ and $V$. We denote by $T_u$ (respectively $T_v$) the number of BCD passes over the columns of $U$ (respectively $V$) in Algorithm 1. The update of $\eta$ is obtained in closed form from the group norms of the columns of $V$. The cost of the update of $U$ is dominated by the matrix products involving $X$, $U$ and $V$ and by the $T_u r$ projections onto the unit ball of $\Omega_u$, so it depends on how fast this projection can be performed; the update of $V$ has a similar cost, with the projections replaced by diagonal rescalings. In practice, we notice that the BCD updates for both $U$ and $V$ require only a few passes, so that small values of $T_u$ and $T_v$ suffice. The overall complexity of Algorithm 1 is this per-iteration cost times the number of outer iterations.

Extension to NMF

Our formalism does not cover the positivity constraints of non-negative matrix factorization, but it is straightforward to extend it, at the cost of an additional thresholding operation (to project onto the positive orthant), in the BCD updates of $U$ and $V$.
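One simple way to realize this extension, assuming $\Omega_u$ is the $\ell_2$ norm: the projection onto the intersection of the non-negative orthant and the unit ball can be obtained by clamping the negative entries before rescaling, as in the sketch below (an illustrative composition, not the paper's exact implementation).

```python
import numpy as np

def project_nonneg_l2_ball(x):
    """Projection onto { x >= 0, ||x||_2 <= 1 }: clamp the negative entries,
    then rescale if the norm exceeds one (valid because the non-negative
    orthant is a convex cone)."""
    y = np.maximum(x, 0.0)
    nrm = np.linalg.norm(y)
    return y if nrm <= 1.0 else y / nrm

# Drop-in replacement for the projection step in the BCD update of U; for V,
# the corresponding change is a clamp of the closed-form column update.
print(project_nonneg_l2_ball(np.array([2.0, -1.0, 0.5])))
```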

4 Experiments

We first focus on the application of SSPCA to a face recognition problem and we show that, by adding a sparse structured prior instead of a simple sparse prior, we gain in robustness to occlusions. We then apply SSPCA to biological data to study the dynamics of a protein/protein complex.

Figure 2: Three learned dictionaries of faces: NMF (top), SSPCA (middle) and shared-SSPCA (bottom), i.e., SSPCA where the dictionary elements are constrained to share a small number of nonzero patterns. The dictionary elements are sorted in decreasing order of variance explained. While NMF gives sparse, spatially unconstrained patterns, SSPCA finds convex areas that correspond to more natural face segments. SSPCA captures the left/right illuminations in the dataset by recovering symmetric patterns.

The results we obtain are validated by known properties of the complex. In preliminary experiments, we considered the exact regularization of [1], i.e., $\Omega^\alpha$ with $\alpha=1$, but found that the obtained patterns were not sufficiently sparse and salient. We therefore turned to the setting where the parameter $\alpha$ lies in $(0,1)$; a single fixed value of $\alpha$ is used in all the experiments described in this section.

4.1 Face recognition

We first apply SSPCA on the cropped AR Face Database [21], which consists of 2600 face images corresponding to 100 individuals (50 women and 50 men). For each subject, there are 14 non-occluded poses and 12 occluded ones (the occlusions are due to sunglasses and scarves). We reduce the resolution of the images from 165×120 pixels to 38×27 pixels for computational reasons.

Fig. 2 shows examples of learned dictionaries for NMF, SSPCA and SSPCA with shared structure. While NMF finds sparse but spatially unconstrained patterns, SSPCA selects sparse convex areas that correspond to more natural face segments. For instance, meaningful parts such as the mouth and the eyes are recovered by the dictionary.

Figure 3: Correct classification rate vs. dictionary size: each dimensionality reduction technique is used with k-NN to classify occluded faces. SSPCA shows better robustness to occlusions.

We now compare SSPCA, SPCA (as in [12]), PCA and NMF on a face recognition task. We first split the data into two parts, the occluded faces and the non-occluded ones. For different sizes of the dictionary, we apply each of the aforementioned dimensionality reduction techniques to the non-occluded faces. Keeping the learned dictionary $V$, we decompose both non-occluded and occluded faces on $V$. We then classify the occluded faces with a k-nearest-neighbors classifier (k-NN), based on the obtained low-dimensional representations. Given the size of the dictionary, we choose the number of nearest neighbors and the amount of regularization $\lambda$ by 5-fold cross-validation⁵.

⁵In the 5-fold cross-validation, the number of nearest neighbors and the regularization parameter $\lambda$ are searched over finite grids, and several dictionary sizes $r$ are considered.
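For illustration, a minimal scikit-learn pipeline mirroring this protocol, with PCA standing in for the learned dictionary and synthetic placeholder data; the grids, split sizes and labels below are assumptions of this sketch, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Placeholder arrays standing in for 38x27 face images and subject labels.
X_train = rng.standard_normal((400, 38 * 27))
y_train = rng.integers(0, 20, 400)
X_occluded = rng.standard_normal((100, 38 * 27))
y_occluded = rng.integers(0, 20, 100)

# Dimensionality reduction (PCA here; SSPCA, SPCA or NMF would replace this step)
# followed by k-NN, with k chosen by 5-fold cross-validation on the training set.
pipe = make_pipeline(PCA(n_components=30), KNeighborsClassifier())
grid = {"kneighborsclassifier__n_neighbors": [1, 3, 5]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X_train, y_train)
print("accuracy on the occluded set:", search.score(X_occluded, y_occluded))
```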

The formulations of NMF, SPCA and SSPCA are non-convex and, as a consequence, the local minima reached by these methods are sensitive to the initialization. Thus, after having selected the parameters by cross-validation, we run each algorithm 20 times with different initializations on the non-occluded faces, divided into a training set (900 instances) and a validation set (500 instances), and keep the model with the best classification score. We summarize the results in Fig. 3. We denote by shared-SSPCA (resp. shared-SPCA) the models where we impose, on top of the structure of $\Omega^\alpha$, that the learned dictionary elements share only 10 different nonzero patterns (see Section 3.2).

As a baseline, we also plot the classification score obtained when we directly apply k-NN on the raw data, without preprocessing. Because its dictionary elements are localized, SSPCA proves to be more robust to occlusions and therefore outperforms the other methods on this classification task. On the other hand, SPCA, which yields sparsity without a structural prior, performs poorly. Sharing structure across the dictionary elements (see Section 3.2) seems to help SPCA, for which no structural information is otherwise available.

The goal of our paper is not to compete with state-of-the-art face recognition techniques, but to demonstrate the improvement obtained when moving from the plain $\ell_1$ norm to more structured norms. We could still improve upon our results using non-linear classification (e.g., with an SVM) or by refining our features (e.g., with a Laplacian filter).

4.2 Protein complex dynamics

Understanding the dynamics of protein complexes is important since conformational changes of the complex are responsible for modifications of the biological function of the proteins in the complex. For the EF-CAM complex we consider, it is of particular interest to study the interaction between EF (adenyl cyclase of Bacillus anthracis) and CAM (calmodulin) to find new ways to block the action of anthrax [22].

Figure 4: (Top left) Entire protein complex with biological domains highlighted in different colors. The two blue parts represent the CAM protein, while the rest of the complex corresponds to EF. (Top right, bottom left/right) Dictionary found by SSPCA, with the same color code.

In our experiment, we consider 12000 successive positions of the 619 residues of the EF-CAM complex, each residue being represented by its position in $\mathbb{R}^3$ (i.e., a total of 1857 variables). We look for dictionary elements that explain the dynamics of the complex, with the constraint that these dictionary elements have to be small convex regions in space. Indeed, the complex is comprised of several functional domains (see Fig. 4) whose spatial structure has to be preserved by our decomposition.

We use the quasi-norm $\Omega^\alpha$ to take the spatial configuration into account, naturally extending the groups designed for a 2-dimensional grid (see Fig. 1) to a 3-dimensional setting. Since one residue corresponds to 3 variables, we could either (1) aggregate these variables into a single one and consider a 619-dimensional problem, or (2) use the approach of Section 3.2 to force, for each residue, the decompositions of all three coordinates to share the same support, i.e., work on a 1857-dimensional problem. This second method has given us more satisfactory results.

We only present results for a small dictionary (see Fig. 4). As a heuristic to pick $\lambda$, we choose the value for which the dictionary elements cover the complex well, without too many overlapping areas.

We retrieve groups of residues that match known energetically stable substructures of the complex [22]. In particular, we recover the two tails of the complex and the interface between EF and CAM where the two proteins bind. Finally, we also run our method on the same EF-CAM complex perturbed by 2 and 4 calcium ions. Interestingly, we observe stable decompositions, which is in agreement with the analysis of [22].

5 Conclusions

We proposed to apply a non-convex variant of the regularization introduced by [1] to the problem of structured sparse dictionary learning, and presented an efficient block-coordinate descent algorithm with closed-form updates. For face recognition, the learned dictionaries show increased robustness to occlusions compared to NMF. An application to the analysis of protein complexes reveals biologically meaningful structures of the complex. As future directions, we plan to refine our optimization scheme to better exploit sparsity. We also intend to apply this structured sparsity-inducing norm to multi-task learning, in order to take advantage of the structure between tasks.

Acknowledgments

We would like to thank Elodie Laine, Arnaud Blondel and Thérèse Malliavin from the Unité de Bioinformatique Structurale, URA CNRS 2185 at Institut Pasteur, Paris for proposing the analysis of the EF-CAM complex and fruitful discussions on protein complex dynamics. We also thank Julien Mairal for sharing his insights on dictionary learning.

References

  • [1] R. Jenatton, J-Y. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. Technical report, 2009. preprint arXiv:0904.3523v1.
  • [2] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
  • [3] I. T. Jolliffe, N. T. Trendafilov, and M. Uddin. A modified principal component technique based on the LASSO. Journal of Computational and Graphical Statistics, 12(3):531–547, 2003.
  • [4] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.
  • [5] D. M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics (Oxford, England), April 2009.
  • [6] L. Jacob, G. Obozinski, and J. P. Vert. Group Lasso with overlap and graph Lasso. In Proceedings of the 26th International Conference on Machine Learning, 2009.
  • [7] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In Proceedings of the 26th International Conference on Machine Learning, 2009.
  • [8] A. d’Aspremont, F. Bach, and L. El Ghaoui. Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9:1269–1294, 2008.
  • [9] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms. Advances in Neural Information Processing Systems, 18:915, 2006.
  • [10] L. Mackey. Deflation methods for sparse PCA. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1017–1024, 2009.
  • [11] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In Proceedings of the 26th International Conference on Machine Learning, 2009.
  • [12] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, pages 801–808, 2007.
  • [13] A.P. Singh and G.J. Gordon. A Unified View of Matrix Factorization Models. In Proceedings of the European conference on Machine Learning and Knowledge Discovery in Databases-Part II, pages 358–373. Springer, 2008.
  • [14] F. Bach, J. Mairal, and J. Ponce. Convex sparse matrix factorizations. Technical report, 2008. preprint arXiv:0812.1869.
  • [15] F.R.K. Chung. Spectral graph theory. American Mathematical Society, 1997.
  • [16] T. Zhang. Multi-stage convex relaxation for learning with sparse regularization. In Advances in Neural Information Processing Systems, pages 1929–1936, 2008.
  • [17] F. Bach. Consistency of the group Lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
  • [18] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6(2):1099, 2006.
  • [19] G. Obozinski, B. Taskar, and M. Jordan. Joint Covariate Selection and Joint Subspace Selection for Multiple Classification Problems. Statistics and Computing, 2009.
  • [20] D. P. Bertsekas. Nonlinear programming. Athena scientific, 1995.
  • [21] A. M. Martinez and A. C. Kak. PCA versus LDA. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2):228–233, 2001.
  • [22] E. Laine, A. Blondel, and T. E. Malliavin. Dynamics and energetics: A consensus analysis of the impact of calcium on EF-CaM protein complex. Biophysical Journal, 96(4):1249–1263, 2009.