Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection


Samuel W. Remedios
Zihao Wu, Department of Electrical Engineering, Vanderbilt University
Camilo Bermudez, Department of Biomedical Engineering, Vanderbilt University
Cailey I. Kerley, Department of Electrical Engineering, Vanderbilt University
Snehashis Roy, Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
Mayur B. Patel, Departments of Surgery, Neurosurgery, Hearing & Speech Sciences; Center for Health Services Research, Vanderbilt Brain Institute; Critical Illness, Brain Dysfunction, and Survivorship Center, Vanderbilt University Medical Center; VA Tennessee Valley Healthcare System, Department of Veterans Affairs Medical Center
John A. Butman, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
Bennett A. Landman
Dzung L. Pham

Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging. MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole-volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (cm), resulting in a learned 2D model without the need for 2D slice annotations. Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained from clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements for MIL to generalize by varying the amount of training data. Our results show that a training size of at least patient image volumes was needed to achieve accurate per-slice hemorrhage detection. Over a five-fold cross-validation, the leading model, which made use of the maximum number of training volumes, had an average true positive rate of , an average true negative rate of , and an average precision of . The models have been made available along with source code[11] to enable continued exploration and adaptation of MIL in CT neuroimaging.

multiple instance learning, deep learning, neural network, computed tomography (CT), hematoma, lesion, classification

1 Introduction

Radiological interpretations are commonly available for clinically acquired medical images. There is growing interest in incorporating such information into medical image analysis algorithms, particularly for the segmentation and quantification of lesions or other pathology within the image. Segmentation algorithms can be facilitated by knowing a priori whether a lesion is present, using “weakly” supervised machine learning approaches[6]. However, many segmentation algorithms are slice- or patch-based, posing the additional challenge that a clinical determination of lesion presence within the volume may or may not apply to a given local slice or patch. The goal of this work is to learn 2D features accurately from weak 3D volumetric patient labels without the need for 2D manual annotation or interpolation into isotropic 3D space (Figure 1).

Lesions may occur in a number of neurological conditions, including traumatic brain injury (TBI), stroke, and cancer. Here, we focus on the detection of cerebral hemorrhages due to TBI in computed tomography (CT) scans. However, CT acquisition does not always provide isotropic images, especially in clinical environments where shorter scan durations are desired for patient safety and comfort. To minimize acquisition time and maximize signal-to-noise ratio, clinical CT scans are frequently acquired with low through-plane resolution while preserving high in-plane resolution (typically on the order of mm). While clinicians can visually interpret such thick-slice images, contemporary deep learning approaches must handle this anisotropy in other ways.

Historically, supervised deep learning applied to 3D medical images has made use of either isotropic research scans or anisotropic clinical scans that must be spatially interpolated. Traditional interpolation, however, only increases voxel density, not frequency resolution, and partial volume effects interfere with image processing pipelines such as registration. As such, Zhao et al. addressed anisotropy by applying self-trained super-resolution with deep learning, interpolating both spatial and frequency information[27]. Others addressed anisotropy with network design rather than data preprocessing. Li et al.[16] first processed 2D slices of a volume before concatenating these slices into a 3D network. Chen et al.[3] used a recurrent neural network, effective for sequential data, to handle consecutive 2D slices. Liu et al.[17] proposed a hybrid network to transfer 2D features to a 3D network. Lee et al.[13] selectively skipped feature downsampling in the low-resolution dimension, used 2D features at the finest bottleneck point, and applied anisotropic convolutional filters. Many others[5][28][7] restrict deep learning models to 2D, operating on the high-resolution slices as they were acquired. This approach, however, requires manual annotation on each slice and loses 3D contextual information.

Recently, multiple instance learning (MIL)[8] has been proposed for use in tandem with deep learning [25][26]. In MIL, a “bag” is defined to be a collection of “instances”, and a bag is classified as positive if any instance is positive and is only considered negative if all instances are negative. In the pathology domain, MIL has been used to classify very high resolution slides as cancerous. In this case, a bag was the entire slide and instances were non-overlapping patches extracted from the slide[2]. MIL has also been applied to classify mammograms[29], detect gastric cancer in abdominal CT[14], detect colon cancer in abdominal CT[9][24], classify nodules in chest CT[20], and classify chronic obstructive pulmonary disease in chest CT[4]. Particularly of note, MIL has been used alongside deep learning to detect malignant cancerous nodules in lung CT[21] and to segment cancer in histopathology[12].
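The bag/instance labeling rule underpinning MIL can be stated in a few lines of illustrative Python (the function name is hypothetical, not from the original implementation):

```python
def bag_label(instance_labels):
    # A bag is positive if ANY instance is positive;
    # it is negative only when ALL of its instances are negative.
    return int(any(instance_labels))
```

For example, a slide containing a single cancerous patch yields a positive bag, while a bag is negative only if every patch is benign.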

For segmentation of CT hemorrhages, 2D convolutional neural networks (CNNs) have demonstrated good success. To augment such a model with weak labels indicating the presence of a hemorrhage within the volume, we consider a bag to be the collection of 2D slices within the volume, and an instance to be a single 2D slice. We investigate the potential of using deep learning to predict whether a hemorrhage is present in a slice using MIL. We first train a 2D CNN to classify the presence of hemorrhages. Then, to characterize the minimum number of image volumes required to train such a model, we vary the number of training samples from to , the maximum available number of training samples in our dataset. To the best of our knowledge, this is the first application of MIL to extract 2D features from 3D labels. Furthermore, this is the first characterization of the number of bags necessary for a MIL task using deep neural networks.

Figure 1: Radiological diagnoses are weak 3D patient labels, and do not necessarily contain information on the location of that diagnosis. Weak 2D per slice labels are useful for image analysis and 2D convolutional neural networks. Here, green boxes correspond to negative samples, images with the absence of a hemorrhage. Red boxes correspond to positive samples, images with the presence of a hemorrhage. We aim to increase the amount of label information in a 3D volume by extracting 2D weak labels from 3D weak labels.

2 Method

Herein we describe the format and preprocessing of the data as well as the implementation of MIL. MIL allows models to learn a binary classification task from one or a few instances in each bag. We selected “max pooling” as our MIL pooling algorithm. “Max pooling” is an overloaded term in machine learning; in the MIL context, it refers to the selection of the “most-positive” instance[23]. On the surface, it may not be immediately apparent how a neural network trained with MIL is able to converge under a max pooling operation: given a randomly initialized network, the “most-positive” instance on the forward pass is not guaranteed to be a truly positive instance. The main mechanism that allows MIL neural networks to converge is the setup of the training data: a negative bag is guaranteed to contain only negative instances. In this way, the model learns which instances are negative; positive instances (which in our case present differently in the images) appear as anomalies, and thus the model learns to differentiate the two.
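In this MIL sense, max pooling reduces to choosing the instance with the highest predicted positive-class probability; a minimal sketch (hypothetical helper name):

```python
def mil_max_pool(instance_probs):
    # MIL "max pooling": return the index of the most-positive instance,
    # i.e. the one with the highest predicted probability of the positive class.
    return max(range(len(instance_probs)), key=lambda i: instance_probs[i])
```

The gradient is then computed only for the instance at the returned index, as described in Section 2.3.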

2.1 Data

At Vanderbilt University Medical Center, CT image volumes from patients from a consecutive retrospective study of trauma were retrieved in de-identified form under IRB supervision. All volumes were segmented by a hemorrhage segmentation convolutional neural network (CNN) trained as described in previous work[19]. Then, these automatic segmentations were manually reviewed and scored by a trained rater: () no hemorrhage and correct empty mask, () hemorrhage and slight errors in mask, () hemorrhage but large failure of algorithm, () hemorrhage and near-perfect mask, and () invalid data for the CNN. Image volumes with scores of and were omitted from selection due to uncertainty in hemorrhage size estimation and invalidity, respectively. All volumes with scores () were selected as the negative class with no hemorrhage present. From all axially acquired volumes with scores () and (), image volumes with cubic centimeters of blood were selected as the positive class with the presence of severe hemorrhages. After organizing the data in this way, the dataset consisted of a total of volumes, of which contained hemorrhages and did not (Table 1). Since our focus is a hemorrhage detection task, the ground truth labels are a binary classification of the entire image; the aforementioned automatic segmentations were used solely to obtain this binary label and were not used in training any CNN.

Total Positive Samples Negative Samples
Table 1: Distribution of data for training, validation, and testing.

From the dataset, % of positive samples were randomly withheld for testing, and of the remaining %, another % was randomly selected to be the validation set. Patients were not mixed in the data split, and the negative samples were randomly undersampled such that classes were balanced for training and validation. All remaining negative samples were included in the test set. The sizes of training, validation, and test datasets are shown in Table 1.
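The split described above can be sketched as follows; the function, IDs, and fractions are illustrative, not the original implementation:

```python
import random

def split_dataset(pos_ids, neg_ids, test_frac, val_frac, seed=0):
    # Withhold a test fraction of positive volumes, carve a validation set
    # from the remainder, undersample negatives so train/val classes are
    # balanced, and send all leftover negatives to the test set.
    rng = random.Random(seed)
    pos, neg = pos_ids[:], neg_ids[:]
    rng.shuffle(pos)
    rng.shuffle(neg)
    n_test = int(round(len(pos) * test_frac))
    pos_test, pos_rest = pos[:n_test], pos[n_test:]
    n_val = int(round(len(pos_rest) * val_frac))
    pos_val, pos_train = pos_rest[:n_val], pos_rest[n_val:]
    # balance classes for training and validation by undersampling negatives
    neg_train = neg[:len(pos_train)]
    neg_val = neg[len(pos_train):len(pos_train) + len(pos_val)]
    neg_test = neg[len(pos_train) + len(pos_val):]  # all remaining negatives
    return pos_train + neg_train, pos_val + neg_val, pos_test + neg_test
```

Note that in the actual study, splits were made at the patient level so that no patient appeared in more than one partition.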

All CT image volumes were converted from DICOM to NIFTI using dcm2niix[15] with voxel intensities preserved in Hounsfield units, and as such no intensity normalization was applied. Subsequently, each volume was skull-stripped with CT_BET[18] and rigidly transformed to a common orientation. All axial slices were extracted from the volume and null-padded or cropped to a size of pixels. These slices were considered “instances” and were converted to -bit floats before being shuffled and aggregated as “bags”. All bags were aggregated and written in the TFRecord file format[22] to avoid memory constraints.
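The pad-or-crop step can be sketched as below; the helper name and target size are hypothetical parameters standing in for the unspecified fixed slice size:

```python
def pad_or_crop(slice_2d, target_h, target_w, fill=0.0):
    # Center-crop an oversized 2D slice, then null-pad (with `fill`)
    # any undersized dimension so the output is (target_h, target_w).
    h, w = len(slice_2d), len(slice_2d[0])
    top = max((h - target_h) // 2, 0)
    left = max((w - target_w) // 2, 0)
    cropped = [row[left:left + target_w] for row in slice_2d[top:top + target_h]]
    pad_top = max((target_h - len(cropped)) // 2, 0)
    pad_left = max((target_w - len(cropped[0])) // 2, 0)
    out = []
    for r in range(target_h):
        row = []
        for c in range(target_w):
            rr, cc = r - pad_top, c - pad_left
            if 0 <= rr < len(cropped) and 0 <= cc < len(cropped[0]):
                row.append(cropped[rr][cc])
            else:
                row.append(fill)  # null padding
        out.append(row)
    return out
```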

2.2 Model Hyperparameters

The model architecture used for this task was a ResNet-[10] with two output neurons activated by softmax. We trained with a batch size of , facilitated by accumulating gradients. The learning rate was set to with the Adam optimizer. The loss function was categorical cross-entropy. Convergence was defined as no improvement of validation loss by in epochs. The selected deep learning framework was TensorFlow[1] v. , and an NVIDIA TI was used for hardware acceleration.
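The convergence criterion (no improvement of validation loss by some minimum delta within a patience window) can be sketched as follows; `patience` and `min_delta` are hypothetical names for the unspecified values:

```python
def should_stop(val_losses, patience, min_delta):
    # Stop when the best validation loss within the last `patience` epochs
    # fails to improve on the best earlier loss by at least `min_delta`.
    if len(val_losses) <= patience:
        return False
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best > best_before - min_delta
```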

2.3 Multiple Instance Learning Implementation

Our learning implementation consisted of several steps. First, for each bag, we performed model inference on each instance to obtain the probability that an instance belongs to the hemorrhage positive class. Then, we identified the instance with the highest probability of being the positive class and calculated the gradients for only this instance. Each subsequent bag’s gradients were aggregated with a running average. After bags, the accumulated gradient was applied as a batch to the model. This process is illustrated in Figure 2. In summary, the input to the CNN is a bag of 2D axial slices and the output is the probability to which class this bag belongs. The resultant model is a 2D CNN which classifies whether an axial CT slice contains a hemorrhage. Our implementation is publicly available[11].
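The steps above can be sketched as follows, with `model_predict` and `model_gradient` standing in (hypothetically) for the network's forward pass and single-instance back-propagation:

```python
def mil_accumulated_gradient(model_predict, model_gradient, bags, bag_labels):
    # For each bag: run inference on every instance, pick the instance with the
    # highest positive-class probability, and compute the gradient for that
    # instance alone. Per-bag gradients are summed and averaged at the end,
    # mimicking a batch update via gradient accumulation.
    accumulated = None
    for bag, label in zip(bags, bag_labels):
        probs = [model_predict(x) for x in bag]             # P(positive) per slice
        key = max(range(len(bag)), key=lambda i: probs[i])  # most-positive slice
        grad = model_gradient(bag[key], label)
        if accumulated is None:
            accumulated = list(grad)
        else:
            accumulated = [a + g for a, g in zip(accumulated, grad)]
    return [g / len(bags) for g in accumulated]
```

In the real implementation, the prediction and gradient calls are made on the CNN and the averaged gradient is applied by the optimizer; this sketch only shows the control flow of bag-wise max pooling with accumulation.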

Figure 2: Illustration of multiple instance learning with gradient accumulation. The first CT volume is organized into a bag of its 2D slices. Then, the model performs inference on all slices in the bag, and class probabilities are calculated. The gradient is calculated only for the instance corresponding to the most probable positive class. This gradient is saved and the gradient calculation is repeated for the next bag until the batch is done, and the accumulated gradient is averaged for the batch size and applied to the model.

2.4 Dataset Size Restriction

To investigate the number of training samples required for MIL to converge, we trained multiple models from the same initialization point with varying dataset sizes. We trained a total of models with datasets comprised of , , , , , and image volumes randomly selected from the training data outlined in Section 2.1. Hereafter, models are referred to as “Model N”, where N is the number of training samples. All models were trained with the same architecture, learning rate, optimizer, and convergence criteria as described in Section 2.2.

3 Results

Because validation data on the presence or absence of hemorrhage was not available for each individual slice, evaluation was performed based on 3D classification accuracy. Image volumes were classified as positive if any slice was positive and as negative only if all slices were negative. In other words, a patient was considered to have a hemorrhage if a hemorrhage was present in any 2D slice of the 3D scan. Thus, Figures 3, 4, 6, and 7 report on head CT volumes.
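This volume-level decision rule can be sketched as below; the function name and default decision threshold are illustrative assumptions:

```python
def classify_volume(slice_probs, threshold=0.5):
    # A volume is labeled positive if ANY slice's predicted probability
    # of hemorrhage meets or exceeds the decision threshold; it is
    # negative only if every slice falls below it.
    return int(max(slice_probs) >= threshold)
```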

Our convergence criteria terminated model training once the validation loss showed no improvement of in epochs. Thus, models of all training set sizes “converged,” but performance on the test set varied as expected.

Figure 3: Confusion matrix for Model obtained after bag-wise classification. Reported results are an average of five-fold cross-validation. For the negative class, Model achieved % accuracy with erroneous classifications, and for the positive class the model achieved % accuracy with erroneous classifications.
Figure 4: Cross-validated precision-recall curve for Model , trained on the entire training dataset. The average precision over all folds is .

3.1 Classification

First, we consider averaged cross-validated results from the model trained over the entire available training set, Model . Figure 3 shows the class-wise cross-validated testing accuracy of Model . Overall, the model correctly classified of image volumes using the MIL paradigm, corresponding to % accuracy. Of the image volumes not presenting with hemorrhage, the MIL model attained % accuracy with false positives. From the remaining true positive volumes, the model made errors. The corresponding precision-recall curve is shown in Figure 4; Model achieved an average precision of over all folds.

Representative false positives and false negatives are displayed in Figure 5. False positives generally occurred due to bright regions at the brain-skull boundary or in sinuses that were not removed in the skull-stripping step. Note the third column in Figure 5, corresponding to errors in the manual labels; these false positives were actually true positives where the human rater erroneously classified hemorrhage presence.

Figure 5: Representative erroneous classifications made by the leading model, Model . Each displayed axial slice corresponds to the most probable slice containing a hemorrhage. The first column shows false negatives, with apparent hemorrhages indicated by red arrows. The second column shows false positives, and it is possible that the non-hemorrhage regions indicated by yellow arrows caused the model to consider these volumes positive. The third column shows manual labeling errors which the model correctly classified as containing hemorrhages, indicated by red arrows.

3.2 Reduction of training samples

As expected, more data leads to better deep learning model performance, as illustrated in Figures 6 and 7. Model stability over folds also increases, likely because each fold contains proportionally more data, increasing the likelihood of accounting for anatomical heterogeneities during training.

A comparison of the average precision obtained by each model trained with varying dataset sizes over five folds is shown in Figure 6. There were three notable results. First, there was a dramatic decrease in model precision when reducing the number of training samples from to . While some models trained on folds with samples performed well, none from folds with samples achieved the mean performance of models trained with samples. Second, there was increased variance in performance for models trained on compared to those trained on examples. Several factors could have influenced this, such as the specific random sampling when constructing the fold or fluctuations in shuffling during the training procedure. Third, although model performance stabilized across folds with larger numbers of training samples, the gain in performance from adding more data beyond samples was significantly reduced.

Figure 6: Five-fold cross-validated average precision calculated from precision-recall curves per training sample size. As expected, average precision increases with more training samples. Note the model's inability to learn with MIL at training samples, as well as the model instability at lower numbers of samples.
Figure 7: Comparison of precision-recall curves for differing training dataset sizes over five-fold cross-validation. Faded filled color regions indicate variation in results among folds, and darker lines indicate the mean of results among folds. Model was unable to generalize to the testing dataset, whereas Models and achieved moderate performance. At least training samples were needed to train a generalizable model with MIL.

4 Discussion

The use of 2D supervised CNNs for segmentation of anisotropic medical images is common. We have applied MIL to allow a 2D CNN to learn from volumetric patient labels on a hemorrhage detection task. To the best of our knowledge, this is the first application of MIL and deep learning to clinical imaging that circumvents the need for 2D image slice labels, as well as the first characterization of required dataset sizes for MIL. We found that for this hemorrhage detection task, at least annotated image volumes were necessary for decent classification, and to train a strong classifier. We conclude that MIL is a step toward building models that learn from patient-level labels. Moving forward, we anticipate further utility of MIL for pre-training and transfer learning in other weakly supervised tasks. Our source code is publicly available[11].

5 Acknowledgements

Support for this work included funding from the Intramural Research Program of the NIH Clinical Center and the Department of Defense in the Center for Neuroscience and Regenerative Medicine, the National Multiple Sclerosis Society RG-1507-05243 (Pham), and NIH grants 1R01EB017230-01A1 (Landman) and R01GM120484 and R01AG058639 (Patel), as well as NSF 1452485 (Landman). The VUMC dataset was obtained from ImageVU, a research resource supported by the VICTR CTSA award (ULTR000445 from NCATS/NIH), Vanderbilt University Medical Center institutional funding, and the Patient-Centered Outcomes Research Institute (PCORI; contract CDRN-1306-04869). This work received support from the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University, Nashville, TN, as well as in part from ViSE/VICTR VR3029. We also extend gratitude to NVIDIA for their support by means of the NVIDIA hardware grant.


  • [1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §2.2.
  • [2] G. Campanella, V. W. K. Silva, and T. J. Fuchs (2018) Terabyte-scale deep multiple instance learning for classification and localization in pathology. arXiv preprint arXiv:1805.06983. Cited by: §1.
  • [3] J. Chen, L. Yang, Y. Zhang, M. Alber, and D. Z. Chen (2019) Combining fully convolutional and recurrent neural networks for 3d biomedical image segmentation. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 3036–3044. External Links: Link Cited by: §1.
  • [4] V. Cheplygina, L. Sørensen, D. M. Tax, J. H. Pedersen, M. Loog, and M. de Bruijne (2014) Classification of COPD with multiple instance learning. In 2014 22nd International Conference on Pattern Recognition, pp. 1508–1513. Cited by: §1.
  • [5] F. Ciompi, B. de Hoop, S. J. van Riel, K. Chung, E. T. Scholten, M. Oudkerk, P. A. de Jong, M. Prokop, and B. van Ginneken (2015) Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2d views and a convolutional neural network out-of-the-box. Medical image analysis 26 (1), pp. 195–202. Cited by: §1.
  • [6] M. de Bruijne (2016) Machine learning approaches in medical image analysis: from detection to diagnosis. Elsevier. Cited by: §1.
  • [7] B. D. de Vos, J. M. Wolterink, P. A. de Jong, M. A. Viergever, and I. Išgum (2016) 2D image classification for 3d anatomy localization: employing deep convolutional neural networks. In Medical Imaging 2016: Image Processing, Vol. 9784, pp. 97841Y. Cited by: §1.
  • [8] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez (1997) Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence 89 (1-2), pp. 31–71. Cited by: §1.
  • [9] M. M. Dundar, G. Fung, B. Krishnapuram, and R. B. Rao (2008) Multiple-instance learning algorithms for computer-aided detection. IEEE Transactions on Biomedical Engineering 55 (3), pp. 1015–1021. Cited by: §1.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2015) Deep residual learning for image recognition. CoRR abs/1512.03385. External Links: Link, 1512.03385 Cited by: §2.2.
  • [11] (2019) Implementation: 2D from 3D multiple instance learning classification. Note: https://github.com/sremedios/multiple_instance_learning. Accessed: 2019-10-18. Cited by: Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection, §2.3, §4.
  • [12] Z. Jia, X. Huang, I. Eric, C. Chang, and Y. Xu (2017) Constrained deep weak supervision for histopathology image segmentation. IEEE transactions on medical imaging 36 (11), pp. 2376–2388. Cited by: §1.
  • [13] K. Lee, J. Zung, P. Li, V. Jain, and H. S. Seung (2017) Superhuman accuracy on the snemi3d connectomics challenge. arXiv preprint arXiv:1706.00120. Cited by: §1.
  • [14] C. Li, C. Shi, H. Zhang, Y. Chen, and S. Zhang (2015) Multiple instance learning for computer aided detection and diagnosis of gastric cancer with dual-energy CT imaging. Journal of Biomedical Informatics 57, pp. 358–368. Cited by: §1.
  • [15] X. Li, P. S. Morgan, J. Ashburner, J. Smith, and C. Rorden (2016) The first step for neuroimaging data analysis: dicom to nifti conversion. Journal of neuroscience methods 264, pp. 47–56. Cited by: §2.1.
  • [16] X. Li, H. Chen, X. Qi, Q. Dou, C. Fu, and P. Heng (2018) H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Transactions on Medical Imaging 37 (12), pp. 2663–2674. Cited by: §1.
  • [17] S. Liu, D. Xu, S. K. Zhou, O. Pauly, S. Grbic, T. Mertelmeier, J. Wicklein, A. Jerebko, W. Cai, and D. Comaniciu (2018) 3D anisotropic hybrid network: transferring convolutional features from 2d images to 3d anisotropic volumes. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger (Eds.), Cham, pp. 851–858. External Links: ISBN 978-3-030-00934-2 Cited by: §1.
  • [18] J. Muschelli, N. L. Ullman, W. A. Mould, P. Vespa, D. F. Hanley, and C. M. Crainiceanu (2015) Validated automatic brain extraction of head CT images. Neuroimage 114, pp. 379–385. Cited by: §2.1.
  • [19] S. Remedios, S. Roy, J. Blaber, C. Bermudez, V. Nath, M. B. Patel, J. A. Butman, B. A. Landman, and D. L. Pham (2019) Distributed deep learning for robust multi-site segmentation of ct imaging after traumatic brain injury. In Medical Imaging 2019: Image Processing, Vol. 10949, pp. 109490A. Cited by: §2.1.
  • [20] W. Safta and H. Frigui (2018) Multiple instance learning for benign vs. malignant classification of lung nodules in CT scans. In 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 490–494. Cited by: §1.
  • [21] W. Shen, M. Zhou, F. Yang, D. Dong, C. Yang, Y. Zang, and J. Tian (2016) Learning from experts: developing transferable deep features for patient-level lung cancer prediction. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 124–131. Cited by: §1.
  • [22] TensorFlow API: TFRecordDataset. Note: https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset. Accessed: 2019-07-31. Cited by: §2.1.
  • [23] Y. Wang, J. Li, and F. Metze (2019) A comparison of five multiple instance learning pooling functions for sound event detection with weak labeling. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 31–35. Cited by: §2.
  • [24] D. Wu, J. Bi, and K. Boyer (2009) A min-max framework of cascaded classifier with multiple instance learning for computer aided diagnosis. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1359–1366. Cited by: §1.
  • [25] J. Wu, Y. Yu, C. Huang, and K. Yu (2015) Deep multiple instance learning for image classification and auto-annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3460–3469. Cited by: §1.
  • [26] Z. Yan, Y. Zhan, Z. Peng, S. Liao, Y. Shinagawa, S. Zhang, D. N. Metaxas, and X. S. Zhou (2016) Multi-instance deep learning: discover discriminative local anatomies for bodypart recognition. IEEE transactions on medical imaging 35 (5), pp. 1332–1343. Cited by: §1.
  • [27] C. Zhao, A. Carass, B. E. Dewey, J. Woo, J. Oh, P. A. Calabresi, D. S. Reich, P. Sati, D. L. Pham, and J. L. Prince (2018) A deep learning based anti-aliasing self super-resolution algorithm for MRI. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 100–108. Cited by: §1.
  • [28] X. Zhou, T. Ito, R. Takayama, S. Wang, T. Hara, and H. Fujita (2016) Three-dimensional CT image segmentation by combining 2D fully convolutional network with 3D majority voting. In Deep Learning and Data Labeling for Medical Applications, pp. 111–120. Cited by: §1.
  • [29] W. Zhu, Q. Lou, Y. S. Vang, and X. Xie (2017) Deep multi-instance networks with sparse label assignment for whole mammogram classification. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 603–611. Cited by: §1.