Beef Cattle Instance Segmentation Using Fully Convolutional Neural Network


Abstract

We present an instance segmentation algorithm trained on and applied to CCTV recordings of beef cattle during a winter finishing period. A fully convolutional network was transformed into an instance segmentation network that learns to label each instance of an animal separately. We introduce a conceptually simple framework that the network uses to output a single prediction for every animal. These results are a contribution towards behaviour analysis in winter finishing beef cattle for early detection of animal welfare-related problems.

Aram Ter-Sarkisov (aram.ter-sarkisov@dit.ie)1, Robert Ross (robert.ross@dit.ie)1, John Kelleher (john.kelleher@dit.ie)1, Bernadette Earley (bernadette.earley@teagasc.ie)2, Michael Keane (michael.keane@teagasc.ie)2
1 School of Computer Science, Dublin Institute of Technology, Dublin, Ireland
2 TEAGASC, Grange, Dunsany, Co. Meath, Ireland

1 Introduction

Recently, deep convolutional neural networks have made strong contributions in a number of areas of computer vision. In particular, three areas have seen progress: object detection, class (semantic) segmentation and instance segmentation (mask representation). Object detection algorithms, such as [Girshick et al.(2014)Girshick, Donahue, Darrell, and Malik, Ren et al.(2015)Ren, He, Girshick, and Sun, Girshick(2015)], generate bounding box proposals for every object in the image and classify these proposals. Class (semantic) segmentation algorithms delineate classes of objects at a pixelwise level without making a distinction between two objects belonging to the same class [Long et al.(2015)Long, Shelhamer, and Darrell, Zheng et al.(2015)Zheng, Jayasumana, Romera-Paredes, Vineet, Su, Du, Huang, and Torr]. Instance segmentation algorithms find a mask representation for every object in the image. Most algorithms, such as [He et al.(2017)He, Gkioxari, Dollár, and Girshick, Dai et al.(2016)Dai, He, and Sun, Li et al.(2016)Li, Qi, Dai, Ji, and Wei], use a three-stage approach: identify promising regions (bounding boxes around objects) using a region proposal network (RPN), predict the object class, and extract its mask, often using a fully convolutional network. Recent results on benchmark datasets show the viability of this approach, and it is currently considered the state of the art for extracting mask representations. Mask R-CNN uses the RPN to predict bounding box coordinates for the objects in the image and their label (object/not object), refines these predictions using Region of Interest Align (RoI Align), an improvement over RoI pooling, extracts the mask and predicts the class of the object. MNC uses a three-stage cascade: it first produces bounding boxes around objects using the RPN, then extracts masks using RoIs, and finally takes both of these inputs and predicts a class for each object. FCIS uses position-sensitive score maps (object vs background) in each RoI and simultaneously outputs the mask and the class.
Another approach to instance segmentation that has seen growing interest in the past year is instance embedding, which does not make use of bounding boxes, e.g. the discriminative loss function of [De Brabandere et al.(2017)De Brabandere, Neven, and Van Gool, Kong and Fowlkes(2017)], which uses feature clustering to locate the centers of instances.
We introduce a new framework that uses the output of the FCN and the ground truth mask to refine the mask representation of animals in images without bounding box prediction. Our approach is conceptually simple and does not make use of bounding boxes or RPNs, yet achieves results that compare favourably to the state-of-the-art networks. We show the strength of our approach both on two benchmark datasets and on our own data. The framework, MaskSplitter, learns to output three different types of mask representations: two bad and one good. A good mask representation overlaps with exactly one animal; a bad mask representation of the first type overlaps with two or more animals; and multiple bad mask representations of the second type overlap with one animal. The framework consists of an algorithm that determines the type of the mask representations and the number of true objects (cows) that they predict, pixelwise sigmoid and Euclidean loss functions, and a set of convolutional and fully connected layers, one for every type of prediction. The overhead of the framework on top of FCN8s is very small, about 187000 parameters.
This work is a step towards real-time animal monitoring in farming environments, with applications such as early lameness detection, animal welfare improvement and reduction of cattle maintenance costs. Other recent applications of deep learning in animal husbandry and agriculture include [Stern et al.(2015)Stern, He, and Yang, Gomez et al.(2016)Gomez, Diez, Salazar, and Diaz].

2 Datasets

2.1 Benchmark Datasets

Pascal VOC 2012 ([Everingham et al.(2010)Everingham, Van Gool, Williams, Winn, and Zisserman]) and MS COCO 2017 ([Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick]) are two benchmark datasets widely used in the computer vision community. There are a total of 1986 images with cows in the MS COCO training set and 87 in the validation set. The Pascal VOC training set contains 64 images with cows and the validation set contains 71. In the training stage we only used MS COCO images; testing was done on both the Pascal VOC and MS COCO datasets.

2.2 Our Dataset

The raw CCTV videos were recorded at a winter finishing feedlot for beef heifers over a two-week period, with video cameras pointing at a fixed angle towards enclosures holding 10 heifers each, a total of 24 pens. One of the cattle pens was selected for dataset construction. This video sequence is challenging for a number of reasons, each of which poses a particular challenge to the segmenter:

  1. Shape variation: the animals change position frequently and assume different postures when standing up and lying down. Unlike static objects, e.g. TV sets or buildings, their shape changes significantly as an animal moves around, lies down, eats, walks and grooms itself or other pen mates. Hence the network needs a much higher generalization capacity.

  2. Similarity between objects: it is often difficult (or in fact impossible) to distinguish between two particular animals due to the same coat color and the absence of other distinct physical markings. In cases of partial occlusion, it is nearly impossible even for a human eye to tell one from another.

  3. Occlusion: animals were allotted to 24 individual pens, each containing 10 heifers, located in the same housing facility. CCTV cameras were positioned over each pen. The pens were evenly distributed in three separate rows throughout the house, and each row had two replicas of each treatment. To achieve the respective space allowances, the dimensions of the pens were 4.50 m x 6.66 m (30 m2) for treatment 1, 9.0 m x 5.0 m (45 m2) for treatment 2 and 9.0 m x 6.66 m (60 m2) for treatments 3 and 4. The feed face length was the same (4.5 m) for all pens. As a result of this setup, the view of the animals from the cameras suffers from a great degree of partial occlusion.

  4. Lighting: the house had both artificial lighting (12 hours light: 12 hours dark) and natural lighting through vents positioned in the roof. The natural lighting is quite poor: overall the house is rather dark, but gaps in the ceiling that allow light through are a standard feature of animal housing design. The natural light leaves rectangular shadows or patches of light all over the enclosure, including on the animals. This makes the work of the segmenters more difficult, as they can consider a patch of light to be a feature of the object.

  5. Background: the background (enclosure boundaries) in the video is dark and/or noisy and often cannot be distinguished from the segmented object, because the heifers have a color pattern (dark/light brown or grey/white) similar to the background. In many cases the segmented animal instance nearly blends with the background, making it impossible to detect even for the most advanced algorithms.

Considering these substantial challenges, in [Ter-Sarkisov et al.(2017)Ter-Sarkisov, Ross, and Kelleher] we developed a separate naive segmentation algorithm that extracts images and ground truth from frames in the video. The pipeline is presented as a flowchart in Figure 1.

Figure 1: Flowchart of the dataset construction and naive segmentation pipeline. RCF is the Richer Convolutional Features CNN from [Liu et al.(2017)Liu, Cheng, Hu, Wang, and Bai]; ISODATA is the threshold function from [Ridler and Calvard(1978)].
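
The ISODATA thresholding step in Figure 1 can be illustrated with a minimal Python sketch; the file name and the use of scikit-image are our assumptions, and the RCF edge-detection stage and the rest of the pipeline are omitted.

  # Minimal sketch of the ISODATA thresholding step (Ridler and Calvard, 1978).
  import numpy as np
  from skimage import io
  from skimage.filters import threshold_isodata

  frame = io.imread("frame_0001.png", as_gray=True)  # hypothetical frame file
  t = threshold_isodata(frame)                       # Ridler-Calvard iterative threshold
  binary = (frame > t).astype(np.uint8)              # rough object/background split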

The dataset constructed from this raw video data consists of about 500 frames extracted from one of the feedlots. Each frame was cropped to 250x250 patches around every animal in the frame, and the resulting images were split randomly into training and validation sets of size 4872 and 984 respectively. To show how different this data is from the benchmarks, we tested the state-of-the-art networks on the validation set (Table 3). Results for all IoU thresholds are far lower than those for the benchmark sets.

3 Networks

3.1 FCN

Although deeper networks like ResNet101, introduced in [He et al.(2016)He, Zhang, Ren, and Sun], have the capacity to learn more features, FCN, based on VGG16 ([Simonyan and Zisserman(2014)]), has a very convenient architecture that upsamples semantic features to the size of the input image, in effect producing image-sized score maps, adjusted for the number of classes. Therefore, in all of our experiments we use FCN8s ([Long et al.(2015)Long, Shelhamer, and Darrell]), the most refined of the FCN architectures (the others being FCN-16s and FCN-32s), as the backbone network (feature and score map extractor). The output of FCN8s consists of two score maps (object and background) that have the same dimensionality as the input image and accumulate pixelwise information about the corresponding class (i.e. either animal or background). Taking the pixelwise argmax over the score maps results in a binary output mask with pixels valued at either 0 (background) or 1 (object), depending on which score is greater. Each isolated group of pixels equal to 1 in the mask constitutes a prediction (mask representation) of an animal instance. This binary mask is the main output of the FCN. Our framework refines this predicted mask and attempts to recover the representation of every animal.
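
A minimal sketch of this step is given below, assuming the two score maps are available as a NumPy array of shape (2, H, W); the connected-component labelling is one straightforward way to isolate the groups of object pixels.

  # Sketch of reading instance predictions off the FCN8s output score maps.
  import numpy as np
  from scipy import ndimage

  def fcn_instances(scores):
      # Pixelwise argmax over the background/object score maps gives the
      # binary mask (0 = background, 1 = object).
      binary = np.argmax(scores, axis=0).astype(np.uint8)
      # Each isolated group of object pixels is one mask representation.
      labelled, n = ndimage.label(binary)
      return [(labelled == i).astype(np.uint8) for i in range(1, n + 1)]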

3.2 MaskSplitter

  Input: FCN8s output binary mask B
  Input: Ground truth mask G
  Compute the IoU matrix between all predictions in B and all cows in G
  Number of good predictions = 0
  Number of bad predictions (Type 1) = 0
  Number of bad predictions (Type 2) = 0
  Empty list of seen animals (Type 1)
  Empty list of seen animals (Type 2)
  for each mask representation in B do
     Rank the IoUs of this prediction in decreasing order
     if this mask representation overlaps with only one animal then
        if it is the only prediction overlapping that animal then
           Number of good predictions += 1
           Copy the mask representation to the mask with good predictions
        else
           if the animal is not in the list of seen animals (Type 2) then
              Number of bad predictions (Type 2) += 1
              Add the animal to the list of seen animals (Type 2)
           end if
           Copy the mask representation to the mask with bad predictions (Type 2)
        end if
     else if this prediction overlaps with multiple animals then
        for each overlapping cow do
           if the cow is not in the list of seen animals (Type 1) then
              Number of bad predictions (Type 1) += 1
              Add the cow to the list of seen animals (Type 1)
           end if
        end for
        Copy the mask representation to the mask with bad predictions (Type 1)
     end if
  end for
  Output: Number of good predictions
  Output: Number of bad predictions (Type 1)
  Output: Number of bad predictions (Type 2)
  Output: Mask with good predictions
  Output: Mask with bad predictions (Type 1)
  Output: Mask with bad predictions (Type 2)
Algorithm 1: Pseudocode for the MaskSplitter framework

The main concept of MaskSplitter is to separate the learning of three types of predictions: good mask representations and two types of bad ones. All predictions are taken from the final FCN8s binary mask. A good mask representation overlaps with exactly one animal in the ground truth map. The two types of bad predictions are the following cases:

  1. Single mask representation overlapping with two or more animals,

  2. Two or more mask representations overlapping with a single animal.

Predictions that have no overlap with any animal are ignored. The IoU matrix (number of predictions x number of animals) is used to determine which of the three classes a prediction belongs to. The discrepancy between the true and predicted number of animals gives rise to three loss functions, one for each type. For the good predictions we use the total number of animals as the target value; for each of the bad prediction types it is the total number of animals they overlap with. In addition, MaskSplitter splits the binary input mask into three masks, one for each type. These splits are used in the next step, sparsification. In total, the MaskSplitter algorithm has six outputs. A sketch of this bookkeeping is given below.
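
The following Python sketch illustrates the bookkeeping of Algorithm 1, assuming a precomputed IoU matrix of shape (number of predictions, number of animals) in which a positive entry (i, j) means prediction i overlaps animal j; the overlap threshold of zero is an assumption based on our reading of the algorithm.

  # Sketch of splitting FCN8s predictions into good / Type 1 / Type 2.
  import numpy as np

  def split_predictions(iou, overlap_thr=0.0):
      overlaps = iou > overlap_thr
      good, bad1, bad2 = [], [], []          # indices of predictions per type
      seen1, seen2 = set(), set()            # animals already counted per type
      for i, row in enumerate(overlaps):
          animals = np.flatnonzero(row)
          if len(animals) == 0:              # no overlap with any animal: ignored
              continue
          if len(animals) == 1:
              c = animals[0]
              if overlaps[:, c].sum() == 1:  # only prediction for this animal: good
                  good.append(i)
              else:                          # several predictions for one animal: Type 2
                  bad2.append(i)
                  seen2.add(int(c))
              continue
          bad1.append(i)                     # one prediction, several animals: Type 1
          seen1.update(int(c) for c in animals)
      # Loss targets are counts of animals, not counts of predictions.
      return good, bad1, bad2, len(seen1), len(seen2)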

Overall structure of the framework

A unified score map is connected to the two final score maps of FCN8s with a 3x3 convolutional filter. In order to learn the three types of predictions independently, we connect three different score maps to this unified score map, one for each type, also with 3x3 filters. We use this approach instead of a single convolutional layer with three maps because the loss functions for each type of prediction are computed independently. To maintain spatial features and add sparsity, we take a pixelwise product of these three score maps with the split masks; the output of each product is a sparse layer. This approach resembles a rectified linear unit, but empirically we found it more efficient. Each of the sparse layers is fully connected to a single unit. This architecture is designed to get the framework to learn to output the number of predictions of each type and compare it to the true number of predicted animals.
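
This head can be sketched as follows. The sketch is in PyTorch purely for illustration (the paper's implementation is in Caffe); the 250x250 resolution and the single-channel score maps are assumptions, chosen to be consistent with the roughly 187000-parameter overhead mentioned in the introduction.

  # Illustrative sketch of the MaskSplitter head on top of the FCN8s score maps.
  import torch
  import torch.nn as nn

  class MaskSplitterHead(nn.Module):
      def __init__(self, size=250):
          super().__init__()
          # Unified score map on top of the two FCN8s score maps (3x3 conv).
          self.unified = nn.Conv2d(2, 1, kernel_size=3, padding=1)
          # One score map per prediction type, learned independently.
          self.good = nn.Conv2d(1, 1, kernel_size=3, padding=1)
          self.bad1 = nn.Conv2d(1, 1, kernel_size=3, padding=1)
          self.bad2 = nn.Conv2d(1, 1, kernel_size=3, padding=1)
          # Each sparse map is fully connected to a single counting unit.
          self.fc_good = nn.Linear(size * size, 1)
          self.fc_bad1 = nn.Linear(size * size, 1)
          self.fc_bad2 = nn.Linear(size * size, 1)

      def forward(self, fcn_scores, m_good, m_bad1, m_bad2):
          u = self.unified(fcn_scores)                # (N, 1, H, W)
          s_good = self.good(u)                       # score map for good predictions
          # Pixelwise product with the split masks sparsifies each score map.
          sp_good = s_good * m_good
          sp_bad1 = self.bad1(u) * m_bad1
          sp_bad2 = self.bad2(u) * m_bad2
          n_good = self.fc_good(sp_good.flatten(1))   # predicted count, good
          n_bad1 = self.fc_bad1(sp_bad1.flatten(1))   # predicted count, Type 1
          n_bad2 = self.fc_bad2(sp_bad2.flatten(1))   # predicted count, Type 2
          return s_good, n_good, n_bad1, n_bad2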

Loss layers

These three units are compared to the true numbers of predicted animals using Euclidean (L2) loss functions. The fourth loss function is a pixelwise sigmoid cross-entropy that takes the full ground truth mask and the score map for good predictions. The loss function of the full network is given in Equation 1:

L = L_mask + L_good + L_bad1 + L_bad2    (1)

where L_mask is the pixelwise cross-entropy loss taken over a sigmoid of the score map of the good predictions, and the remaining terms are Euclidean loss functions: L_good compares the predicted and true number of good mask representations (i.e. all animals), L_bad1 the number for a single mask representation overlapping two or more animals, and L_bad2 the number for two or more mask representations overlapping a single animal.
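
Continuing the illustration above, the four terms of Equation 1 can be sketched as follows; equal weighting of the terms is an assumption, as no weights are stated in the text.

  # Sketch of the combined MaskSplitter loss (Equation 1).
  import torch.nn.functional as F

  def masksplitter_loss(s_good, n_good, n_bad1, n_bad2,
                        gt_mask, t_good, t_bad1, t_bad2):
      # Pixelwise sigmoid cross-entropy between the good-prediction score map
      # and the full ground truth mask (both float tensors of shape (N, 1, H, W)).
      l_mask = F.binary_cross_entropy_with_logits(s_good, gt_mask)
      # Euclidean (L2) losses between predicted and true animal counts.
      l_good = F.mse_loss(n_good, t_good)
      l_bad1 = F.mse_loss(n_bad1, t_bad1)
      l_bad2 = F.mse_loss(n_bad2, t_bad2)
      return l_mask + l_good + l_bad1 + l_bad2
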
Pseudocode of the main MaskSplitter logic is presented in Algorithm 1 and the flowchart in Figure 2. In the pseudocode and flowchart, for convenience we refer to a single prediction overlapping two or more animals as Type 1 bad prediction and two or more predictions per single animal as Type 2 bad predictions.

Figure 2: MaskSplitter framework architecture. FCN denotes the Fully Convolutional Network; L2 is the Euclidean loss layer. The 'bad' prediction for two animals in the binary mask is marked green, as are the corresponding two animals in the ground truth mask. MaskSplitter has six outputs in total: for each type of prediction it outputs one full-image mask with predictions of this type, extracted from the binary mask, and one number, extracted from the ground truth map based on the overlaps between predictions and true objects. Best viewed in color.

4 Main Results

[Figure 3 image grid: columns show, left to right, the input Image and the outputs of Mask R-CNN, FCIS and FCN8s+MS]
Figure 3: Illustration of Mask R-CNN, FCIS and FCN8s+MaskSplitter performance on the MS COCO and Pascal VOC validation datasets
[Figure 4 image grid: columns show, left to right, the input Image and the outputs of Mask R-CNN, Mask R-CNN (heads only) and FCN8s+MS]
Figure 4: Illustration of Mask R-CNN, Mask R-CNN (heads only) and FCN8s+MaskSplitter performance on the cattle facility CCTV test dataset (after finetuning)

For the experiments on benchmark datasets, we extracted 1986 images with cows from the MS COCO 2017 training dataset and augmented them with 800 images of persons, sheep, horses and dogs to provide the networks with a larger variety of negatively labelled features. We trained the networks on this combined dataset and tested them on the MS COCO 2017 and Pascal VOC 2012 validation datasets.
All models were trained in the Caffe framework with the ADAM optimizer with standard parameters: 20000 iterations, a base learning rate of , on a Tesla K40m GPU with 12 GB of VRAM. Since the network is very large (137M parameters), we used a small batch size of 4; each training iteration (feedforward + backpropagation) took 1.26 sec. As the testing procedure is identical to FCN (with fewer score maps, 2 instead of 21), the network requires only 90 ms/image (i.e. 11 images/sec) at test time. The output of our network is a binarized mask for the good predictions, the size of the input image.
We did not do any additional finetuning of the state-of-the-art networks on the benchmark datasets, as they have already been trained extensively (up to 160000 iterations). Images in the MS COCO training set were cropped to 250x250 for consistency; all images in the validation sets retained their dimensions. When testing on the MS COCO and VOC validation datasets, we kept all network parameters (number of RPN proposals, proposal thresholds, etc.) for MNC, Mask R-CNN and FCIS the same as in the results reported by their developers. For all networks, mask representations smaller than 10x10 pixels were ignored as noise; a simplified sketch of the evaluation matching is shown below.
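
The per-image matching behind the AP numbers in Tables 1-4 can be sketched roughly as follows. This is a simplified illustration, not the full COCO evaluation protocol (there is no confidence ranking across images); the greedy matching and the 10x10 size filter follow the description above.

  # Simplified sketch of greedy prediction-to-ground-truth matching at one IoU threshold.
  import numpy as np

  def mask_iou(a, b):
      inter = np.logical_and(a, b).sum()
      union = np.logical_or(a, b).sum()
      return inter / union if union else 0.0

  def match_at_threshold(preds, gts, thr, min_size=10 * 10):
      preds = [p for p in preds if p.sum() >= min_size]   # drop nuisance masks
      matched, tp = set(), 0
      for p in preds:
          ious = [mask_iou(p, g) if j not in matched else 0.0
                  for j, g in enumerate(gts)]
          j = int(np.argmax(ious)) if ious else -1
          if j >= 0 and ious[j] >= thr:                    # true positive
              matched.add(j)
              tp += 1
      fp = len(preds) - tp                                 # unmatched predictions
      fn = len(gts) - tp                                   # missed animals
      return tp, fp, fn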

4.1 Pascal VOC 2012 dataset

A comparison on the Pascal VOC 2012 validation dataset, consisting of 71 images of cows (Table 1), shows that our framework, while being conceptually far simpler than Mask R-CNN, FCIS and MNC and using FCN8s with VGG16 ([Simonyan and Zisserman(2014)]) as the backbone network (far smaller than the 101-layer ResNet ([He et al.(2016)He, Zhang, Ren, and Sun]) used by FCIS and Mask R-CNN), outperforms the best of these networks (Mask R-CNN) by over 3 percentage points in mean Average Precision.

Network Backbone AP@0.5 AP@0.7 AP@0.5:0.95
Mask R-CNN ResNet101 0.642 0.538 0.368
FCIS ResNet101 0.633 0.527 0.327
MNC VGG16 0.503 0.334 0.229
FCN8s+MS VGG16 0.656 0.559 0.399
Table 1: Results on Pascal VOC 2012 cow validation dataset

4.2 MS COCO 2017 dataset

MS COCO is a far more diverse dataset in terms of the number, scale, distance and angle of the objects in the images. Since both Mask R-CNN and FCIS were trained with ResNet101 as the backbone network, much of the gap between our results and theirs can be explained by ResNet's depth rather than by specific strengths of the Mask R-CNN and FCIS architectures. Currently, fully convolutional versions of ResNet are not available, but we plan to adapt ResNet to our architecture.

Network Backbone AP@0.5 AP@0.7 AP@0.5:0.95
Mask R-CNN ResNet101 0.549 0.502 0.341
FCIS ResNet101 0.555 0.314 0.229
MNC VGG16 0.312 0.269 0.173
FCN8s+MaskSplitter VGG16 0.443 0.321 0.232
Table 2: Results on MS COCO 2017 cow validation dataset. Best results are in bold, second best italicized.

4.3 Our Dataset

Network Backbone AP@0.5 AP@0.7 AP@0.5:0.95
Mask R-CNN ResNet101 0.218 0.01 0.04
FCIS ResNet101 0.232 0.04 0.06
MNC VGG16 0.111 0.009 0.02
Table 3: Results on our test dataset (before finetuning)
AP@0.5 AP@0.7 AP@0.5:0.95
FCN8s-ft 0.487 0.302 0.232
Mask R-CNN-heads-ft 0.637 0.253 0.265
Mask R-CNN-ft 0.687 0.259 0.298
FCN8s+MaskSplitter 0.713 0.505 0.380
Table 4: Results on our test dataset (after finetuning)

Before finetuning Mask R-CNN on our data, we tested all three networks on it. The results in Table 3 demonstrate that even the most advanced networks do not produce good predictions, due to the vast difference between the benchmark datasets and our data (see Section 2): none of the networks generalized successfully to the CCTV data. Although Mask R-CNN performed slightly worse than FCIS, it showed greater capacity on the other datasets, so we chose it to train on our data. Mask R-CNN was finetuned from ResNet101-FPN (feature pyramid network) weights pre-trained on MS COCO (https://github.com/matterport/Mask_RCNN/releases, v1.0 from 23/10/2017) with a starting learning rate of 0.0002 and max/min image size 256x256 (the closest to the 250x250 training images) for 20000 iterations. As the max/min image size was reduced, the maximal RPN anchor scale was also reduced to 256 and the number of training regions of interest (RoIs) to 32; other configuration parameters were kept the same (see the configuration sketch below). For comparison we trained both the full network and only the heads (classification, bounding box and mask). We also finetuned FCN8s, which only produces a mask representation without instance segmentation, to serve as a baseline. The results of finetuning are presented in Table 4: FCN8s with the MaskSplitter framework confidently outperforms both Mask R-CNN models: mAP is more than 8 percentage points higher and AP@0.5 IoU is more than 2.5 percentage points higher than for the best Mask R-CNN model. This demonstrates the framework's ability to perform well on object-vs-background problems without the RPN and RoI machinery.
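
A rough sketch of the corresponding finetuning configuration is given below, using the attribute names of the Config class from the matterport repository; the class name CattleConfig, the import path and the exact anchor-scale list are our illustration rather than the authors' configuration file.

  # Illustrative finetuning configuration for the matterport Mask R-CNN release.
  from config import Config   # config.py sits at the repository root in the 2017 release

  class CattleConfig(Config):
      NAME = "cattle"
      NUM_CLASSES = 1 + 1                         # background + cow
      IMAGE_MIN_DIM = 256                         # closest to the 250x250 crops
      IMAGE_MAX_DIM = 256
      RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256)  # maximal anchor scale reduced to 256
      TRAIN_ROIS_PER_IMAGE = 32                   # reduced number of training RoIs
      LEARNING_RATE = 0.0002                      # starting learning rate from the text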

5 Conclusions

In this article we presented a conceptually simple but efficient framework, called MaskSplitter, for extracting and refining mask representations of cows, both on benchmark datasets and on a challenging dataset obtained from raw video taken at a cattle finishing facility. The framework uses both the binary mask output of the last FCN8s layer and the ground truth mask to extract information such as the number of overlaps between predicted mask representations and true objects. Without tricks such as network ensembles, image flipping or rotation, our network performs well compared to the state-of-the-art instance segmentation solutions. The approach generalizes easily to other backbone networks that produce image-sized score maps and to other object-vs-background problems (e.g. tumor detection).

References

  • [Dai et al.(2016)Dai, He, and Sun] Jifeng Dai, Kaiming He, and Jian Sun. Instance-aware semantic segmentation via multi-task network cascades. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3150–3158, 2016.
  • [De Brabandere et al.(2017)De Brabandere, Neven, and Van Gool] Bert De Brabandere, Davy Neven, and Luc Van Gool. Semantic instance segmentation with a discriminative loss function. arXiv preprint arXiv:1708.02551, 2017.
  • [Everingham et al.(2010)Everingham, Van Gool, Williams, Winn, and Zisserman] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
  • [Girshick(2015)] Ross Girshick. Fast r-cnn. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  • [Girshick et al.(2014)Girshick, Donahue, Darrell, and Malik] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
  • [Gomez et al.(2016)Gomez, Diez, Salazar, and Diaz] Alexander Gomez, German Diez, Augusto Salazar, and Angelica Diaz. Animal identification in low quality camera-trap images using very deep convolutional neural networks and confidence thresholds. In International Symposium on Visual Computing, pages 747–756. Springer, 2016.
  • [He et al.(2016)He, Zhang, Ren, and Sun] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [He et al.(2017)He, Gkioxari, Dollár, and Girshick] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. arXiv preprint arXiv:1703.06870, 2017.
  • [Kong and Fowlkes(2017)] Shu Kong and Charless C. Fowlkes. Recurrent pixel embedding for instance grouping. CoRR, abs/1712.08273, 2017. URL http://arxiv.org/abs/1712.08273.
  • [Li et al.(2016)Li, Qi, Dai, Ji, and Wei] Yi Li, Haozhi Qi, Jifeng Dai, Xiangyang Ji, and Yichen Wei. Fully convolutional instance-aware semantic segmentation. arXiv preprint arXiv:1611.07709, 2016.
  • [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
  • [Liu et al.(2017)Liu, Cheng, Hu, Wang, and Bai] Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Kai Wang, and Xiang Bai. Richer convolutional features for edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [Long et al.(2015)Long, Shelhamer, and Darrell] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [Ren et al.(2015)Ren, He, Girshick, and Sun] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.
  • [Ridler and Calvard(1978)] TW Ridler and S Calvard. Picture thresholding using an iterative selection method. IEEE Trans Syst Man Cybern, 8(8):630–632, 1978.
  • [Simonyan and Zisserman(2014)] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [Stern et al.(2015)Stern, He, and Yang] Ulrich Stern, Ruo He, and Chung-Hui Yang. Analyzing animal behavior via classifying each video frame using convolutional neural networks. Scientific reports, 5, 2015.
  • [Ter-Sarkisov et al.(2017)Ter-Sarkisov, Ross, and Kelleher] Aram Ter-Sarkisov, Robert Ross, and John Kelleher. Bootstrapping labelled dataset construction for cow tracking and behavior analysis. arXiv preprint arXiv:1703.10571, 2017.
  • [Zheng et al.(2015)Zheng, Jayasumana, Romera-Paredes, Vineet, Su, Du, Huang, and Torr] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015.