Deep Discriminative Latent Space for Clustering


Elad Tzoreff*
Huawei Tel Aviv Research Center, Hod Hasharon
elad.tzoreff@huawei.com

Olga Kogan*
Huawei Tel Aviv Research Center, Hod Hasharon
olga.kogan@huawei.com

Yoni Choukroun
Huawei Tel Aviv Research Center, Hod Hasharon
yoni.choukroun@huawei.com

*Both authors contributed equally
Abstract

Clustering is one of the most fundamental tasks in data analysis and machine learning. It is central to many data-driven applications that aim to separate the data into groups with similar patterns. Moreover, clustering is a complex procedure that is significantly affected by the choice of data representation. Recent research has demonstrated encouraging clustering results obtained by learning these representations effectively. In most of these works, a deep auto-encoder is initially pre-trained to minimize a reconstruction loss, and then jointly optimized with clustering centroids in order to improve the clustering objective. Those works focus mainly on the clustering phase of the procedure, without exploiting the potential benefit of the initial phase. In this paper we propose to optimize an auto-encoder with respect to a discriminative pairwise loss function during the auto-encoder pre-training phase. We demonstrate the high accuracy obtained by the proposed method as well as its rapid convergence (e.g., reaching high accuracy on MNIST during the pre-training phase, in less than 50 epochs), even with small networks.

 


31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Traditionally, most learning approaches have treated representation-learning/feature-selection and clustering separately. However, recent studies have outperformed traditional methods by learning optimal representations for clustering. In most of these works a deep auto-encoder is first trained to reduce a reconstruction loss. Next, the encoder parameters and clustering parameters (e.g., the K-means centroids) are jointly optimized in order to improve the overall clustering accuracy. However, we observed that in most cases the improvement of the clustering phase over the pre-training phase amounts to no more than a few percentage points of accuracy. Therefore, reaching a high level of accuracy in the pre-training phase is of crucial importance. Moreover, a reconstruction loss is not an optimal choice for clustering, due to the natural trade-off between reconstruction and clustering: reconstruction aims to reproduce every detail of the original data, while clustering aims to reduce all possible variations to several templates.

In this paper we propose a novel unified framework for learning a clustering-oriented representation. We suggest optimizing an auto-encoder in the pre-training phase with respect to a discriminative loss function which encourages a clustering-oriented representation. The discriminative loss is the weighted sum of all pairwise similarities among the data-points in the batch. Minimizing this loss implies making the data-points' representations as dissimilar as possible. Under the assumption of a balanced dataset, the majority of the pairs are indeed dissimilar and only a small fraction of them are similar (i.e., only the within-cluster pairs). Accordingly, the utilization of this loss is justified. The proposed optimization scheme enables the use of relatively small networks which can be trained very fast. For the clustering phase we propose a joint optimization scheme that maintains the pairwise discrimination loss while it optimizes the clustering objective. We apply the proposed algorithm to several datasets (MNIST, COIL-20, COIL-100), and demonstrate its superiority both in terms of accuracy and speed of convergence. To summarize, the contributions of this paper are twofold: (1) the major contribution is a novel optimization scheme for the auto-encoder pre-training phase that encourages a discriminative latent space which fits the clustering objective and reaches higher accuracy prior to the clustering phase, and (2) a novel clustering scheme in which the discrimination of the latent space is strengthened while searching for the optimal centroids.

2 Related work

We provide here a brief review of currently known methods that optimize the latent space for clustering. (Xie et al., 2016) proposed DEC, a fully connected stacked auto-encoder that learns a latent representation by minimizing the reconstruction loss in the pre-training phase. The objective function applied in the clustering phase is the Kullback-Leibler (KL) divergence between the clustering soft assignments, modeled as a t-distribution, and a reference heuristic probability constructed from the soft assignments. In (Yang et al., 2016a) a fully connected auto-encoder is trained with a k-means loss along with a reconstruction loss in the clustering phase. (Li et al., 2017) proposed DBC, a fully convolutional network with layer-wise batch normalization that aims to overcome the relatively slow training of a stacked auto-encoder. DBC utilizes the same objective function as DEC, with a boosting factor as a hyper-parameter for the reference probability. (Yang et al., 2016b) introduced JULE, a method that jointly optimizes a convolutional neural network with the clustering parameters in a recurrent manner using an agglomerative clustering approach. In (Jiang et al., 2016) VaDE, a variational auto-encoder for deep embedding, is proposed: a generative model in which a cluster is picked and a latent representation is sampled, followed by a deep network which decodes the latent embedding into the observables. In (Ji et al., 2017) a deep auto-encoder is trained to minimize a reconstruction loss together with a self-expressive layer; this objective encourages a sparse representation of the original data. (Dizaji et al., 2017) proposed DEPICT, a method that trains a convolutional auto-encoder with a softmax layer stacked on top of the encoder; the softmax entries represent the assignment of each data-point to each one of the clusters. Finally, following (Shah and Koltun, 2017), a deep continuous clustering approach is suggested in (Shah and Koltun, 2018). In this method the auto-encoder parameters are optimized simultaneously with a set of representatives defined for each data-point. The representatives are optimized by minimizing the distance between each representative and its associated data-point, while minimizing the pairwise distances between representatives (similar to the convex clustering approach proposed by (Chi and Lange, 2015)); non-convex objective functions are applied to penalize the pairwise distances between the representatives. The clusters are then determined as the connected components of the graph created between the representatives.

3 Discriminative clustering

In this paper we follow the paradigm in which a latent-space representation of a dataset is trained jointly with the clustering parameters. This end-to-end approach, backed by recent papers, achieves a clustering-oriented representation and therefore better clustering accuracy. In most papers the auto-encoder is first trained to minimize the reconstruction loss and then used as an initial condition for the joint optimization problem, in which both clustering and encoder parameters are jointly optimized to minimize the clustering error. However, although major attention is dedicated to the clustering phase, we have observed that in most cases the improvement of the clustering phase over the initial phase amounts to at most a few percentage points of accuracy. This leads to the conclusion that the initial phase has a significant effect on the overall accuracy, and therefore a focus shall be put on this step.

3.1 The pre-training phase: obtaining a discriminative latent space

Let $X=\{x_i\}_{i=1}^{N}$ denote a dataset grouped into $K$ clusters, and let $x_i \in \mathbb{R}^{d}$ be a data-point in the set with dimension $d$. Let $z_i = f_{\theta_e}(x_i) \in \mathbb{R}^{p}$ stand for the latent representation of $x_i$, where the parameters of the encoder are denoted by $\theta_e$ and $p$ is the latent dimension. Let $\hat{x}_i = g_{\theta_d}(z_i)$ stand for the reconstructed data-point, that is, the output of the decoder, whose parameters are denoted by $\theta_d$.

We propose a family of pairwise discriminative loss functions, where $w_{ij}\geq 0$ are pairwise weights and $d(\cdot,\cdot)$ stands for any similarity measure between a pair of data-points:

$$\min_{\theta_e}\;\sum_{i\neq j} w_{ij}\, d\big(z_i,z_j\big) \qquad (1)$$

where the weights $w_{ij}$ are related to the similarities between the raw data-points $x_i$ and $x_j$. When prior knowledge is not available we set uniform weights, normalized by the cardinality $N$ of the dataset. Note that the objective function in (1) with uniform weights is sub-optimal, since it penalizes all similarities regardless of whether the pair belongs to the same cluster or not. Obviously, if the assignment of each data-point were available, the weights would be split into two sets: one for the minimization of the cross-cluster similarities, and one for the maximization of the within-cluster similarities, yielding the following objective function instead:

$$\min_{\theta_e}\;\frac{1}{N_B}\sum_{\substack{i\neq j\\ c_i\neq c_j}} d\big(z_i,z_j\big)\;-\;\frac{1}{N_W}\sum_{\substack{i\neq j\\ c_i=c_j}} d\big(z_i,z_j\big) \qquad (2)$$

where the notation $c_i=c_j$ defines a pair of data-points related to the same cluster, and $N_B$, $N_W$ stand for the number of between-cluster and within-cluster pairs, respectively. However, the justification of (1) with uniform weights arises from the following observation: consider a balanced dataset that is not dominated by a single or a few clusters. The total number of pairs is then roughly $N^{2}/2$, while the within-cluster and between-cluster pair cardinalities are approximately $N^{2}/(2K)$ and $N^{2}(K-1)/(2K)$, respectively. Accordingly, there are far more dissimilar pairs than similar ones. Note that the number of between-cluster pairs increases with both $N$ and $K$, and the number of within-cluster pairs is approximately a $1/K$ fraction of all data pairs.
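For concreteness, these pair counts follow from a short calculation, assuming $K$ equally sized clusters of $N/K$ points each:

$$\#\{\text{all pairs}\}=\binom{N}{2}\approx\frac{N^{2}}{2},\qquad \#\{\text{within}\}=K\binom{N/K}{2}\approx\frac{N^{2}}{2K},\qquad \#\{\text{between}\}\approx\frac{N^{2}}{2}\cdot\frac{K-1}{K}.$$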

In light of eq. (2), we create a k-nearest-neighbors graph between the data-points based on their original representation. A small fraction of the pairs with the largest similarities in the k-nearest-neighbors graph is then applied to eq. (1) as anchor pairs, whose similarity is to be maximized. The maximization of the anchors' similarities corresponds to the within-cluster component in eq. (2). Using only a small fraction of the pairs from the k-nearest-neighbors graph is motivated by the need to avoid maximizing similarities between pairs that are actually dissimilar: since similarities based on the original representation are not reliable, we only use the pairs with the highest confidence (a small sketch of this anchor-extraction step is given after eq. (4)). Defining $\mathcal{A}$ as the set of anchor pairs, we set the weights as

$$w_{ij}=\begin{cases}-\lambda, & (i,j)\in\mathcal{A}\\ 1, & \text{otherwise}\end{cases} \qquad (3)$$

which yields

$$\min_{\theta_e}\;\sum_{\substack{i\neq j\\ (i,j)\notin\mathcal{A}}} d\big(z_i,z_j\big)\;-\;\lambda\sum_{(i,j)\in\mathcal{A}} d\big(z_i,z_j\big) \qquad (4)$$

where $\lambda$ is a hyper-parameter applied in order to compensate for the uncertainty of the anchor pairs being actually dissimilar. Let $\tilde{z}_i = z_i/\|z_i\|_2$ denote the normalized latent-space representation. We apply the similarity measure $d(z_i,z_j)=|\tilde{z}_i^{\top}\tilde{z}_j|$, where $|\cdot|$ stands for the absolute value. Note that eq. (4) is then the weighted $\ell_1$ norm of all pairwise cosine similarities. We choose the $\ell_1$ norm instead of, e.g., the $\ell_2$ norm due to the desired sparsity of the similarities (only the within-cluster similarities should remain non-zero), which is encouraged by the $\ell_1$ norm.
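As an illustration of the anchor-extraction step, the following sketch (a NumPy toy version; the function name, the use of cosine similarity on the raw vectors, and the default values of `k` and `top_fraction` are our assumptions, not the authors' settings) builds a k-nearest-neighbors graph on the raw data-points and keeps only the small fraction of pairs with the largest similarities as anchors.

```python
import numpy as np

def extract_anchor_pairs(X, k=10, top_fraction=0.01):
    """Return the highest-confidence similar pairs (anchors) from a k-NN graph.

    X: (n, d) array of raw data-points (e.g., flattened images).
    k: number of nearest neighbors per point.
    top_fraction: fraction of k-NN edges with the largest similarity to keep.
    """
    # Cosine similarities between raw data-points.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # exclude self-pairs

    # k-NN graph: for every point keep its k most similar neighbors.
    nn_idx = np.argsort(-S, axis=1)[:, :k]
    edges = [(i, j) for i in range(X.shape[0]) for j in nn_idx[i] if i < j]

    # Keep only the small fraction of edges with the highest similarity,
    # i.e. the pairs we are most confident are truly within-cluster.
    edges.sort(key=lambda e: S[e[0], e[1]], reverse=True)
    n_keep = max(1, int(top_fraction * len(edges)))
    return edges[:n_keep]
```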

Since the dataset generally cannot be maintained in main memory, and training is performed on batches of the dataset by stochastic gradient descent (SGD), eq. (4) is approximated using the batch matrix of latent representations $Z_b\in\mathbb{R}^{n_b\times p}$, where $n_b$ denotes the cardinality of the batch. Define $\tilde{Z}_b$ as the row-wise normalized batch matrix (its $i$-th row is the row vector $\tilde{z}_i^{\top}$), and $S=\tilde{Z}_b\tilde{Z}_b^{\top}$ as the pairwise cosine similarity matrix, such that $S_{ij}=\tilde{z}_i^{\top}\tilde{z}_j$. Furthermore, for each batch a k-nearest-neighbors graph is constructed and a set of anchor pairs $\mathcal{A}_b$ is determined, yielding the following approximation to eq. (4):

$$\min_{\theta_e}\;\sum_{\substack{i\neq j\\ (i,j)\notin\mathcal{A}_b}} \big|S_{ij}\big|\;-\;\lambda\sum_{(i,j)\in\mathcal{A}_b} S_{ij} \qquad (5)$$

Note that the diagonal terms of $S$ are constant and equal to $1$, and therefore do not affect the optimization. Furthermore, observe that in the right-hand (anchor) component we take the sum without the absolute value, so as to encourage similarities with value $1$ rather than dissimilar, opposite features with value $-1$.
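A minimal NumPy sketch of the batch loss in eq. (5), under the reconstructed notation above (the function and argument names are ours; in practice the loss is computed with differentiable framework operations so that it can be backpropagated):

```python
import numpy as np

def discriminative_loss(Z, anchor_pairs, lam=1.0):
    """Weighted sum of pairwise cosine similarities within a batch (cf. eq. (5)).

    Z: (n_b, p) array of latent representations for the batch.
    anchor_pairs: list of (i, j) index pairs whose similarity is to be maximized.
    lam: weight compensating for the uncertainty of the anchor pairs.
    """
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    S = Zn @ Zn.T                               # pairwise cosine similarity matrix

    mask = np.ones_like(S, dtype=bool)
    np.fill_diagonal(mask, False)               # diagonal terms are constant (= 1)
    for i, j in anchor_pairs:
        mask[i, j] = mask[j, i] = False         # anchors are excluded from the dissimilarity term

    dissim_term = np.abs(S[mask]).sum()                   # push non-anchor pairs towards 0
    anchor_term = sum(S[i, j] for i, j in anchor_pairs)   # no |.|: push anchors towards +1
    return dissim_term - lam * anchor_term
```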

In order to avoid arbitrary discrimination of the data-points, we propose to regularize eq. (5) with the reconstruction loss, yielding the following optimization problem:

$$\min_{\theta_e,\theta_d}\;\sum_{\substack{i\neq j\\ (i,j)\notin\mathcal{A}_b}} \big|S_{ij}\big|\;-\;\lambda\sum_{(i,j)\in\mathcal{A}_b} S_{ij}\;+\;\gamma\,\big\|X_b-\hat{X}_b\big\|_F^{2} \qquad (6)$$

where $\gamma$ stands for the regularization strength, $\|X_b-\hat{X}_b\|_F^{2}$ denotes the reconstruction loss, $X_b$ stands for the raw input batch matrix, $\hat{X}_b$ for its reconstruction, and $\|\cdot\|_F$ stands for the Frobenius norm.
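Combining the two terms gives the pre-training objective of eq. (6); a short sketch, reusing the `discriminative_loss` helper above (the default values of `lam` and `gamma` are placeholders, not the values used in the experiments):

```python
import numpy as np

def pretraining_loss(Z, X_batch, X_recon, anchor_pairs, lam=1.0, gamma=0.1):
    """Discriminative pairwise loss regularized by the reconstruction loss (cf. eq. (6))."""
    reconstruction = np.sum((X_batch - X_recon) ** 2)   # squared Frobenius norm
    return discriminative_loss(Z, anchor_pairs, lam) + gamma * reconstruction
```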

3.2 The clustering phase: maintaining dissimilarity

After optimizing eq. (6) in the pre-training phase, the learned auto-encoder parameters $\theta_e,\theta_d$ are applied as an initial condition for the clustering phase. In this step we jointly optimize both the encoder-decoder parameters and the centroids $C=\{c_k\}_{k=1}^{K}$, the new optimization variables of the clustering objective. A natural candidate for the clustering-phase objective function is the cosine similarity between the learned centroids and each data-point, since the cosine similarity was used to discriminate between pairs of data-points in the initial phase. Accordingly, the primary goal of the clustering phase is to maximize the following objective function

$$\mathcal{L}_C(\theta_e,C,A)=\sum_{i=1}^{N}\sum_{k=1}^{K} A_{ik}\,\tilde{c}_k^{\top}\tilde{z}_i \qquad (7)$$

where $A\in\{0,1\}^{N\times K}$ stands for the assignment matrix, whose rows are the hard decisions of the clustering procedure, and $\tilde{c}_k=c_k/\|c_k\|_2$. The clustering phase is divided into two steps: in the first step the clustering assignments are not yet trusted, so the clustering objective is kept regularized by eq. (6), in which the weights are determined by the anchor pairs. In the second step the clustering assignments are considered reliable, and the weights are determined according to the clustering assignments. Accordingly, the optimization problem solved in the first step is given by

$$\max_{\theta_e,\theta_d,C,A}\;\mathcal{L}_C(\theta_e,C,A)\;-\;\alpha\Big(\sum_{\substack{i\neq j\\ (i,j)\notin\mathcal{A}_b}}\big|S_{ij}\big|-\lambda\sum_{(i,j)\in\mathcal{A}_b}S_{ij}\Big)\;-\;\gamma\,\big\|X_b-\hat{X}_b\big\|_F^{2} \qquad (8)$$

where $\alpha$ and $\gamma$ stand for the regularization strengths of the discriminative and reconstruction losses, respectively. For the second step, let $\tilde{c}_k^{\top}\tilde{c}_l$ define a similarity measure between each pair of clusters $k\neq l$, yielding the between-cluster penalty

$$\mathcal{L}_B(C)=\max_{k\neq l}\;\big|\tilde{c}_k^{\top}\tilde{c}_l\big| \qquad (9)$$

Note that eq. (9) penalizes the worst case, that is, the pair of clusters with the greatest similarity. In the same manner, for the within-cluster objective we have

$$\mathcal{L}_W(\theta_e,A)=\min_{k}\;\min_{\substack{i\neq j\\ A_{ik}=A_{jk}=1}} \tilde{z}_i^{\top}\tilde{z}_j \qquad (10)$$

Note that in eq. (10) the absolute value has been omitted, since a value of $1$ for the cosine similarity between pairs of within-cluster data-points is preferable to $-1$. The optimization problem of the second step then becomes

$$\max_{\theta_e,\theta_d,C,A}\;\mathcal{L}_C(\theta_e,C,A)\;-\;\beta_B\,\mathcal{L}_B(C)\;+\;\beta_W\,\mathcal{L}_W(\theta_e,A)\;-\;\gamma\,\big\|X_b-\hat{X}_b\big\|_F^{2} \qquad (11)$$

where $\beta_B$, $\beta_W$ and $\gamma$ stand for the regularization strengths of the between-cluster, within-cluster and reconstruction losses, respectively.
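To make the clustering-phase objectives concrete, here is a small NumPy sketch of the assignment step of eq. (7) and the worst-case penalties of eqs. (9)-(10). Note that the paper updates the centroids by backpropagation; the closed-form spherical-k-means style update in `update_centroids`, which maximizes the same summed cosine objective, is our illustrative substitute, and all names are ours.

```python
import numpy as np

def _normalize(V):
    return V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)

def update_assignments(Z, C):
    """Eq. (7) A-step: assign each latent vector to the most cosine-similar centroid."""
    return np.argmax(_normalize(Z) @ _normalize(C).T, axis=1)

def update_centroids(Z, labels, K):
    """Spherical-k-means style C-step: the normalized mean direction of the
    assigned (normalized) latent vectors maximizes the summed cosine similarity."""
    Zn = _normalize(Z)
    C = np.stack([Zn[labels == k].mean(axis=0) if np.any(labels == k)
                  else np.random.randn(Z.shape[1]) for k in range(K)])
    return _normalize(C)

def between_cluster_penalty(C):
    """Eq. (9): worst-case (largest) absolute cosine similarity between two centroids."""
    Cn = _normalize(C)
    S = np.abs(Cn @ Cn.T)
    np.fill_diagonal(S, -np.inf)              # ignore self-similarities
    return S.max()

def within_cluster_objective(Z, labels):
    """Eq. (10): worst-case (smallest) cosine similarity between two points of the same cluster."""
    Zn = _normalize(Z)
    S = Zn @ Zn.T
    worst = 1.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        if len(idx) > 1:
            block = S[np.ix_(idx, idx)]       # copy of the within-cluster block
            np.fill_diagonal(block, np.inf)   # ignore self-similarities
            worst = min(worst, block.min())
    return worst
```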

3.3 System architecture

Here we adopt the architecture proposed by (Li et al., 2017). Our system consists of two parts: a deep discriminative auto-encoder and the clustering centroids. The auto-encoder network is a fully convolutional neural network with convolution layers with ReLU activations, each followed by batch normalization (Ioffe and Szegedy, 2015) and max pooling. The decoder up-samples the latent space to a higher resolution using nearest-neighbor interpolation, followed by batch-normalized convolution layers, as described by (Shi et al., 2016). The auto-encoder architecture is depicted in Figure 1.

Figure 1: Auto-encoder architecture for MNIST.

3.4 Training strategy

Training the auto-encoder in the initial phase begins with the minimization of eq. (6). The relative strength of the discriminative loss with respect to the reconstruction loss is a hyper-parameter: its value differs among datasets, since more complex datasets require more aggressive discrimination, while the strength of the reconstruction loss is kept constant. Training is done on large batches, to ensure that the within-cluster to between-cluster pair ratio of the full dataset is approximately preserved within each batch. As described in Section 3.1, for each batch a k-nearest-neighbors graph is constructed, a small set of anchor pairs is extracted from the graph, and eq. (6) is backpropagated to optimize the auto-encoder parameters. The training scheme for the auto-encoder is summarized in Algorithm 1.

Input : dataset X, hyper-parameters λ, γ
Output : auto-encoder parameters θ_e, θ_d

1: while epoch ≤ max_epochs and not converged do
2:     Build: k-nearest-neighbors graph for each batch
3:     Extract: anchor pairs A_b from the graph
4:     Solve: back-propagate eq. (6) for each batch
5:     epoch ← epoch + 1
6: Extract latent representation z_i for each data-point
Algorithm 1 Auto-encoder pre-training
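The authors' implementation uses TensorFlow 1.5 (see Section 4); the loop below is only an illustrative PyTorch-style sketch of Algorithm 1 under our reconstructed notation, reusing the `extract_anchor_pairs` helper sketched earlier. The `encoder`, `decoder` and `loader` objects and all default hyper-parameter values are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def pretrain(encoder, decoder, loader, epochs=50, lam=1.0, gamma=0.1,
             k=10, top_fraction=0.01, lr=1e-3, device="cpu"):
    """Sketch of Algorithm 1: pre-train the auto-encoder with the discriminative
    pairwise loss of eq. (5), regularized by the reconstruction loss of eq. (6)."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x in loader:                               # large batches of images (tensors)
            x = x.to(device)
            flat = x.view(x.size(0), -1)
            # Anchor pairs from a k-NN graph on the raw batch (see extract_anchor_pairs above).
            anchors = extract_anchor_pairs(flat.cpu().numpy(), k=k, top_fraction=top_fraction)

            z = encoder(x)                             # latent representation
            x_hat = decoder(z)                         # reconstruction
            zn = F.normalize(z.view(z.size(0), -1), dim=1)
            S = zn @ zn.t()                            # pairwise cosine similarities

            mask = ~torch.eye(x.size(0), dtype=torch.bool, device=device)
            for i, j in anchors:
                mask[i, j] = mask[j, i] = False        # exclude anchors from the dissimilarity term
            loss_disc = S[mask].abs().sum() - lam * sum(S[i, j] for i, j in anchors)
            loss_rec = F.mse_loss(x_hat, x)

            loss = loss_disc / (x.size(0) ** 2) + gamma * loss_rec
            opt.zero_grad()
            loss.backward()
            opt.step()
```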

In the clustering phase, the clustering variables $C$ and $A$ are jointly optimized with the auto-encoder parameters $\theta_e,\theta_d$. We apply an alternating maximization scheme in which each set of variables is optimized while the other sets remain fixed. The optimization process begins with the initialization of the centroids $C$ by optimizing eq. (7) over the entire dataset. Next we alternate over the maximization of the assignment matrix $A$, followed by the maximization of the centroids $C$, and finally the maximization with respect to the auto-encoder parameters. The optimization procedure iterates until convergence.

The clustering phase is divided into two stages that differ in the objective functions they aim to maximize. The pseudo-code for the first stage is summarized in Algorithm 2. In the first stage we optimize eq. (8), using a relatively large regularization strength for the discriminative loss and a lower regularization strength for the reconstruction loss. The while loop refers to the alternating maximization scheme in which each set of parameters is maximized over several epochs. The optimization is carried out using backpropagation for each batch. Termination occurs either when the maximal number of iterations is exceeded or when the clustering objective does not improve over consecutive iterations by more than a predefined tolerance tol. Note that the regularization strengths are hyper-parameters and are dataset dependent.

Input : dataset X, pre-trained parameters θ_e, θ_d
Output : auto-encoder parameters θ_e, θ_d
         centroid parameters C
         assignments A
Initialize : C by optimizing eq. (7) over the entire dataset

1: while iter ≤ max_iter and improvement > tol do    ▷ Alternating loop
2:     Update A: maximize eq. (8) with respect to the assignments
3:     Update C: maximize eq. (8) with respect to the centroids; eq. (8) is separable for each centroid
4:     Update θ_e, θ_d: maximize eq. (8) with respect to the auto-encoder parameters by back-propagation
5:     iter ← iter + 1
Algorithm 2 Clustering phase - stage I

The second stage is initialized with the parameters obtained from the first stage. We then optimize eq. (11), wherein, similarly to the previous stage, the discriminative regularization strengths are set to relatively high values, while the regularization strength of the reconstruction loss remains unchanged. The process iterates until convergence. The maximization of each set of variables is carried out by backpropagation over large batches and several epochs, as in the auto-encoder pre-training phase. The entire procedure of this stage is similar to the pseudo-code of Algorithm 2, but now with eq. (11) and its associated hyper-parameters.

4 Experiments and results

The proposed method has been implemented in Python using TensorFlow 1.5, and has been evaluated on three datasets: the MNIST handwritten digits dataset (LeCun and Cortes, 2010) and the COIL-20/COIL-100 multi-view object recognition image datasets (Nene et al., 1996a, b). Dataset details are presented in Table 1.

Dataset    Samples    Categories    Image Size    Channels
MNIST      70K        10            28 x 28       1
COIL-20    1,440      20            128 x 128     1
COIL-100   7,200      100           128 x 128     3
Table 1: Dataset statistics

The performance of our method has been examined against the following baselines: Deep Embedding for Clustering (DEC) by (Xie et al., 2016), Deep Clustering Network (DCN) by (Yang et al., 2016a), Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization (DEPICT) by (Dizaji et al., 2017), Discriminatively Boosted Clustering (DBC) by (Li et al., 2017), Joint Unsupervised Learning (JULE) by (Yang et al., 2016b), Variational Deep Embedding (VaDE) by (Jiang et al., 2016), Neural Clustering by (Saito and Tan, 2017), Deep Continuous Clustering (DCC) by (Shah and Koltun, 2018), and Deep Subspace Clustering Networks (DSC-Net) by (Ji et al., 2017).

4.1 Evaluation Metrics

Following (Xie et al., 2016), given the ground-truth labels and the clustering labels, the clustering performance is evaluated as the accuracy resulting from the optimal one-to-one mapping between the ground-truth clusters $\{\mathcal{G}_k\}_{k=1}^{K}$ and the resulting clusters $\{\mathcal{C}_l\}_{l=1}^{K}$. This is carried out by recasting the problem as a linear assignment problem, which is efficiently solved by the Hungarian algorithm (Kuhn, 1955). The assignment problem for the clustering accuracy is defined as follows:

$$\mathrm{ACC}=\max_{m}\;\frac{1}{N}\sum_{k=1}^{K}\sum_{l=1}^{K} m_{kl}\,\big|\mathcal{G}_k\cap\mathcal{C}_l\big|,$$

where $|\mathcal{G}_k\cap\mathcal{C}_l|$ stands for the cardinality of the intersection of $\mathcal{G}_k$ and $\mathcal{C}_l$, i.e., the number of mutual members of both sets, and the mapping variables $m_{kl}\in\{0,1\}$ are constrained such that both $\sum_{k} m_{kl}=1$ and $\sum_{l} m_{kl}=1$ are satisfied, which establishes the one-to-one mapping.
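This mapping can be computed with SciPy's Hungarian-algorithm solver; a small sketch (the function name `clustering_accuracy` is ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-mapping clustering accuracy via the Hungarian algorithm."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    K = int(max(y_true.max(), y_pred.max())) + 1
    # Contingency table: counts of points in ground-truth cluster k and predicted cluster l.
    counts = np.zeros((K, K), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    # Maximizing the matched counts == minimizing the negated contingency table.
    row, col = linear_sum_assignment(-counts)
    return counts[row, col].sum() / y_true.size
```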

4.2 MNIST Dataset

The MNIST dataset consists of 70K samples of gray-scale hand-written digit images, distributed over 10 categories (the digits 0-9). For MNIST we used a shallow convolutional encoder followed by a decoder of matching depth, in which the bottleneck of the encoder, i.e., the latent-space dimension, is kept small; the overall number of parameters of the entire network is small. At the initial phase, the auto-encoder pre-training, the regularization strength of the reconstruction loss in eq. (6) is a dataset-dependent hyper-parameter. The auto-encoder reaches approximately 87% accuracy prior to the clustering step (cf. Figure 3), in less than 50 epochs. We use relatively large batches to maintain the distribution of the data as in the original dataset, such that the ratio between the within-cluster and between-cluster pairs,

$$\frac{N_W}{N_B}\approx\frac{1}{K-1},$$

is preserved, and therefore the assumption that the similarity matrix $S$ in eq. (5) is sparse is justified. The clustering phase is initialized with the auto-encoder parameters obtained from the initial phase. At the first stage of the clustering process, the regularization strength of the discriminative objective is set to a relatively high value. Similarly to the pre-training phase, we use large batches and jointly optimize the parameters of the centroids, the auto-encoder and the assignment matrix according to the alternating scheme described in Algorithm 2. At the second stage of the clustering process, the regularization strengths of the between-cluster and within-cluster similarities in eq. (11) are dataset-dependent hyper-parameters. The separability of the latent space of MNIST, visualized using t-SNE, is depicted in Figure 5 at the different phases of the algorithm. The algorithm reaches an accuracy of 97.4% on both the train and the test sets at the final stage.

Figure 2: Raw data - 54% accuracy.
Figure 3: Encoder phase - 30 epochs - 87% accuracy.
Figure 4: Clustering phase, stage II - 50 epochs - 97.4% accuracy.
Figure 5: Latent-space representation of the auto-encoder at different phases of the algorithm, visualized by t-SNE on data-points from the MNIST dataset, together with the clustering accuracy achieved by the proposed deep clustering method.

4.3 COIL-20 and COIL-100 Datasets

We examined our method on the COIL multi-view object image datasets, COIL-20 and COIL-100, by (Nene et al., 1996a, b). COIL-20 consists of 1,440 gray-scale image samples distributed over 20 objects. Similarly, COIL-100 consists of 7,200 colored images distributed over 100 categories. For both datasets there are 72 images for each category, taken at pose intervals of 5 degrees. The images were down-sampled before training. We apply the same architecture as for MNIST, with an increased bottleneck dimension of the encoder for COIL-20 and COIL-100. At the initial phase, the regularization strength of the reconstruction loss in eq. (6) is a dataset-dependent hyper-parameter. At both phases of the algorithm we use dataset-specific batch sizes for COIL-20 and COIL-100: since COIL-20 is a small dataset (only 1,440 samples in the training set), we utilize a relatively small batch size for it in order to maintain a diversity of anchor pairs across batches. The auto-encoder reaches high accuracy prior to the clustering step on both COIL-20 and COIL-100. At the first stage of the clustering process, the regularization strength of the discriminative function is initially set to a high value and decreased during the training process. The final accuracies reached on COIL-20 and COIL-100 are reported in Table 2. Note that DSC-Net reaches a high accuracy on COIL-20; however, this is achieved with a network whose number of parameters is quadratic in the size of the dataset, which is obviously not scalable, whereas our network consists of far fewer parameters for COIL-20 and COIL-100 (the difference between the two arises from the different bottleneck sizes). Observe that on COIL-100 JULE reaches a higher accuracy than our method.

Method              MNIST    COIL-20    COIL-100
DEC [a]
DCN [b]
DEPICT [b]
DBC
JULE [b]
VaDE
Neural Clustering
DSC-Net
DCC
Proposed Method
[a] The authors have not reported performance on COIL-20; the numbers here are based on the reports of the authors of DCC (Shah and Koltun, 2018).
[b] The authors have not reported performance in terms of accuracy but only in terms of NMI; the numbers here are based on the reports of the authors of DCC (Shah and Koltun, 2018).
Table 2: Clustering accuracy (%) performance comparison on all datasets

5 Conclusions

In this paper we proposed an efficient method for learning a latent-space representation for clustering. We propose to minimize a pairwise discriminative function, the weighted sum of all pairwise similarities within the batch, with respect to the auto-encoder parameters prior to the clustering phase. We demonstrated the higher accuracy and rapid convergence it achieves, as well as the small models it can handle. However, there seems to be an inherent limit in clustering arbitrary datasets: there are many ways of separating or grouping a dataset, and there is no guarantee that the separation obtained by a clustering process will coincide with the ground-truth labels. Prior knowledge about a small fraction of the dataset is a reasonable assumption in many applications, and utilizing this knowledge within the deep clustering approach may lead to much better performance, similar to the achievements of one-shot learning.

References

  • Chi and Lange (2015) Eric C Chi and Kenneth Lange. Splitting methods for convex clustering. Journal of Computational and Graphical Statistics, 24(4):994–1013, 2015.
  • Dizaji et al. (2017) Kamran Ghasedi Dizaji, Amirhossein Herandi, Cheng Deng, Weidong Cai, and Heng Huang. Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 5747–5756. IEEE, 2017.
  • Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • Ji et al. (2017) Pan Ji, Tong Zhang, Hongdong Li, Mathieu Salzmann, and Ian Reid. Deep subspace clustering networks. In Advances in Neural Information Processing Systems, pages 23–32, 2017.
  • Jiang et al. (2016) Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. arXiv preprint arXiv:1611.05148, 2016.
  • Kuhn (1955) Harold W Kuhn. The hungarian method for the assignment problem. Naval Research Logistics (NRL), 2(1-2):83–97, 1955.
  • LeCun and Cortes (2010) Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.
  • Li et al. (2017) Fengfu Li, Hong Qiao, Bo Zhang, and Xuanyang Xi. Discriminatively boosted image clustering with fully convolutional auto-encoders. arXiv preprint arXiv:1703.07980, 2017.
  • Nene et al. (1996a) Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-20). Technical report, 1996a.
  • Nene et al. (1996b) Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-100). Technical report, 1996b.
  • Saito and Tan (2017) Sean Saito and Robby T Tan. Neural clustering: Concatenating layers for better projections. 2017.
  • Shah and Koltun (2017) Sohil Atul Shah and Vladlen Koltun. Robust continuous clustering. Proceedings of the National Academy of Sciences, 114(37):9814–9819, 2017.
  • Shah and Koltun (2018) Sohil Atul Shah and Vladlen Koltun. Deep continuous clustering. arXiv preprint arXiv:1803.01449, 2018.
  • Shi et al. (2016) Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874–1883, 2016.
  • Xie et al. (2016) Junyuan Xie, Ross Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, pages 478–487, 2016.
  • Yang et al. (2016a) Bo Yang, Xiao Fu, Nicholas D Sidiropoulos, and Mingyi Hong. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. arXiv preprint arXiv:1610.04794, 2016a.
  • Yang et al. (2016b) Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5147–5156, 2016b.