Art of singular vectors and universal adversarial perturbations


Abstract

Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has been attracting a lot of attention in recent studies. It has been shown that for many state-of-the-art DNNs performing image classification there exist universal adversarial perturbations — image-agnostic perturbations whose mere addition to a natural image leads, with high probability, to its misclassification. In this work we propose a new algorithm for constructing such universal perturbations. Our approach is based on computing the so-called $(p, q)$-singular vectors of the Jacobian matrices of hidden layers of a network. The resulting perturbations present interesting visual patterns, and by using only 64 images we were able to construct universal perturbations with more than 60% fooling rate on a dataset consisting of 50000 images. We also investigate the correlation between the maximal singular value of the Jacobian matrix and the fooling rate of the corresponding singular vector, and show that the constructed perturbations generalize across networks.

1 Introduction

Deep Neural Networks (DNNs) have been applied with great success to many practical problems in computer vision [11] and in audio and text processing [7]. However, it was discovered that many state-of-the-art DNNs are vulnerable to adversarial attacks [6], based on adding a perturbation of small magnitude to the image. Such perturbations are carefully constructed so as to lead to misclassification of the perturbed image and, moreover, may attempt to force a specific predicted class (targeted attacks), as opposed to just any class different from the ground truth (untargeted attacks). Potential undesirable usage of adversarial perturbations in practical applications such as autonomous driving systems and malware detection has been studied in [10]. This has also motivated research on defenses against various kinds of attack strategies [16].

In recent work, Moosavi-Dezfooli et al. [14] have shown that there exist universal adversarial perturbations — image-agnostic perturbations that cause most natural images to be misclassified. They were constructed by iterating over a dataset and recomputing the “worst” direction in the space of images by solving an optimization problem related to the geometry of the decision boundary. Universal adversarial perturbations exhibit many interesting properties, such as their universality across networks, which means that a perturbation constructed using one DNN will perform relatively well for other DNNs.

We present a new algorithm for constructing universal perturbations based on solving simple optimization problems which correspond to finding the so-called $(p, q)$-singular vectors of the Jacobian matrices of feature maps of a DNN. Our idea is based on the observation that since the norm of adversarial perturbations is typically very small, perturbations of the non-linear maps computed by the DNN can be reasonably well approximated by the Jacobian matrix. The $(p, q)$-singular vector of a matrix $A$ is defined as the solution of the following optimization problem:

$$\|A v\|_{q} \rightarrow \max, \quad \|v\|_{p} = 1, \qquad (1)$$

and if we desire $\|v\|_p = L$ instead, it is sufficient to multiply the solution of (Equation 1) by $L$. Universal adversarial perturbations are typically generated with a bound in the $\infty$-norm, which motivates the usage of such a general construction. To obtain the $(p, q)$-singular vectors we use a modification of the standard power method, adapted to arbitrary $p$- and $q$-norms. The main contributions of our paper are:

  • We propose an algorithm for generating universal adversarial perturbations, using the generalized power method for computing the $(p, q)$-singular vectors of the Jacobian matrices of the feature maps.

  • Our method is able to produce relatively good universal adversarial perturbations using only a small number of images from a dataset.

  • We investigate the correlation between the largest $(p, q)$-singular value and the fooling rate of the generated adversarial perturbations; this suggests that the singular value can be used as a quantitative measure of the robustness of a given neural network and can in principle be incorporated as a regularizer for DNNs.

  • We analyze various properties of the computed adversarial perturbations such as generalization across networks and dependence of the fooling rate on the number of images used for construction of the perturbation.

2 Problem statement

Suppose that we have a standard feed-forward DNN which takes a vector $x$ as the input and outputs a vector $f(x)$ of probabilities for the class labels. Our goal, given the parameters $p$ and $L$, is to produce a vector $\varepsilon$ such that

$$\arg\max f(x + \varepsilon) \neq \arg\max f(x), \quad \|\varepsilon\|_{p} = L,$$

for as many $x$ in a dataset as possible. The efficiency of a given universal adversarial perturbation $\varepsilon$ for a dataset $X$ of size $N$ is called the fooling rate and is defined as

$$\mathrm{FoolingRate}(\varepsilon) = \frac{\left|\{x \in X : \arg\max f(x + \varepsilon) \neq \arg\max f(x)\}\right|}{N}. \qquad (2)$$
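As a small illustration (ours, not from the paper), this quantity can be estimated in a few lines of NumPy; here `model_predict` is a hypothetical helper mapping a batch of images to class probabilities.

```python
import numpy as np

def fooling_rate(model_predict, images, eps):
    """Fraction of images whose top-1 prediction changes when the
    universal perturbation eps is added to every image."""
    clean = np.argmax(model_predict(images), axis=-1)
    attacked = np.argmax(model_predict(images + eps), axis=-1)
    return float(np.mean(clean != attacked))
```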

Let us denote the output of the $i$-th hidden layer of the network by $f_i(x)$. Then for a small vector $\varepsilon$ we have

$$f_i(x + \varepsilon) - f_i(x) \approx J_i(x)\, \varepsilon,$$

where

$$J_i(x) = \frac{\partial f_i}{\partial x}\Big|_{x}$$

is the Jacobian matrix of $f_i$. Thus, for any $q$-norm,

$$\|f_i(x + \varepsilon) - f_i(x)\|_q \approx \|J_i(x)\, \varepsilon\|_q.$$

We can conclude that, for perturbations of small magnitude, in order to sufficiently perturb the output of a hidden layer it is sufficient to maximize the right-hand side of this approximation. It seems reasonable to expect that such a perturbation of the hidden layer, when propagated further through the network, will dramatically change the predicted label of $x$.

Thus, to construct an adversarial perturbation for an individual image $x$ we need to solve

$$\|J_i(x)\, \varepsilon\|_{q} \rightarrow \max, \quad \|\varepsilon\|_{p} = L, \qquad (3)$$

and due to the homogeneity of this problem it is sufficient to solve it for $L = 1$. The solution of (Equation 3) is defined up to multiplication by $-1$ and is called the $(p, q)$-singular vector of $J_i(x)$. Its computation in the general case is a well-known hard problem [2]. In several special cases algorithms for finding the exact solution of problem (Equation 3) are known [22]; they are based on finding the element of maximal absolute value in each row (column) of the matrix. However, this approach requires iterating over all elements of the matrix and thus has $O(nm)$ complexity for a matrix of size $n \times m$. A typical matrix appearing in our setting, e.g. for the VGG-19 network, the output of the first pooling layer, and a batch of 64 images (the usage of a batch of images is explained further in the text), would require tens of terabytes of memory to store, which makes these algorithms completely impractical. In order to avoid these problems, we switch to iterative methods. Instead of evaluating and storing the full matrix $A$, we use only the matvec function of $A$, which is the function that, given an input vector $v$, computes the ordinary product $Av$ without forming the full matrix $A$, and which can typically be evaluated at a small fraction of the cost of constructing $A$ explicitly. In many applications that deal with extremely large matrices, using matvec functions is essentially mandatory.
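As a toy illustration of the matvec abstraction (ours, not part of the original method), the linear map below is a circular shift represented purely through its action on vectors, so the corresponding matrix is never formed; the actual Jacobian matvecs used in this work are constructed in Section 5.

```python
import numpy as np

n = 150_528  # e.g. a flattened 224 x 224 x 3 input image

def matvec(v):
    """Computes A v for the circular-shift matrix A without forming A."""
    return np.roll(v, 1)

def rmatvec(u):
    """Computes A^T u; for this permutation matrix it is the inverse shift."""
    return np.roll(u, -1)

v = np.random.randn(n)
assert np.allclose(rmatvec(matvec(v)), v)  # A^T A = I for a permutation
```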

For computing the $(p, q)$-singular vectors there exists a well-known power method algorithm originally developed by Boyd [3], which we explain in the next section. We also present a modification of this method for constructing universal adversarial perturbations.

3 Generalized power method

Suppose that for some linear map $A$ we are given the matvec functions of $A$ and $A^{\top}$. Given a parameter $q$ we also define the function

$$\psi_q(x) = \operatorname{sign}(x)\, |x|^{q-1},$$

which is applied to vectors element-wise. As usual, for $p$ we also define $p'$ such that $1/p + 1/p' = 1$. Then, given some initial condition $v$ with $\|v\|_p = 1$, one can apply the following iteration to obtain a solution of (Equation 3):

$$v \leftarrow \psi_{p'}\bigl(A^{\top} \psi_q(A v)\bigr), \qquad v \leftarrow \frac{v}{\|v\|_p}.$$

In the case $p = q = 2$ it becomes the familiar power method for obtaining the largest eigenvalue and the corresponding eigenvector, applied to the matrix $A^{\top} A$.
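A minimal NumPy sketch of this generalized power method is shown below (our illustration, not the authors' code). `matvec` and `rmatvec` are assumed to compute $Av$ and $A^{\top}u$, and $p > 1$ or $p = \infty$ is assumed so that the dual exponent $p'$ is well defined.

```python
import numpy as np

def psi(x, r):
    """psi_r(x) = sign(x) * |x|^(r - 1), applied element-wise."""
    return np.sign(x) * np.abs(x) ** (r - 1)

def pq_singular_vector(matvec, rmatvec, n, p, q, n_iter=30, seed=0):
    """Boyd-style power method for the (p, q)-singular vector of a linear map
    available only through matvec (A v) and rmatvec (A^T u).
    Returns a unit p-norm vector v and the singular value ||A v||_q."""
    p_dual = 1.0 if p == np.inf else p / (p - 1.0)  # 1/p + 1/p' = 1
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v, ord=p)
    for _ in range(n_iter):
        v = psi(rmatvec(psi(matvec(v), q)), p_dual)
        v /= np.linalg.norm(v, ord=p)
    return v, np.linalg.norm(matvec(v), ord=q)
```

In our setting the two closures are the Jacobian matvecs constructed in Section 5, and $n$ is the dimension of the input image.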
The discussion so far applies to finding an adversarial perturbation for an individual instance $x$. To produce a universal adversarial perturbation we would like to maximize the left-hand side of (Equation 3) uniformly across all the images in the dataset $X$. For this we introduce a new optimization problem:

$$\sum_{x_j \in X} \|J_i(x_j)\, \varepsilon\|_{q}^{q} \rightarrow \max, \quad \|\varepsilon\|_{p} = 1. \qquad (4)$$

A solution of the problem defined by (Equation 4) uniformly perturbs the output of the $i$-th layer of the DNN, and thus can serve as a universal adversarial perturbation due to the reasons discussed in the introduction. Note that the problem given in (Equation 4) is exactly equivalent to

$$\|J_i(X)\, \varepsilon\|_{q} \rightarrow \max, \quad \|\varepsilon\|_{p} = 1,$$

where $J_i(X)$ is the matrix obtained by stacking the Jacobians $J_i(x_j)$ vertically for each $x_j \in X$. To make this optimization problem tractable, we apply the same procedure to some randomly chosen subset of images (batch) $X_b \subset X$, obtaining

$$\|J_i(X_b)\, \varepsilon\|_{q} \rightarrow \max, \quad \|\varepsilon\|_{p} = 1,$$

and hypothesize that the obtained solution will be a good approximation to the exact solution of (Equation 4). We present this approach in more detail in the next section.

4 Stochastic power method

Let us choose a fixed batch of images $X_b = \{x_1, x_2, \dots, x_b\}$ from the dataset and fix a hidden layer of the DNN, defining the map $f_i(x)$. Denote the sizes of $x$ and $f_i(x)$ by $n$ and $m$ correspondingly. Then, using the notation introduced above, we can compute the Jacobian $J_i(x_j)$ for each $x_j \in X_b$. Let us now stack these Jacobian matrices vertically, obtaining the matrix $J_i(X_b)$ of size $bm \times n$:

$$J_i(X_b) = \begin{bmatrix} J_i(x_1) \\ J_i(x_2) \\ \vdots \\ J_i(x_b) \end{bmatrix}.$$

Note that to compute the matvec functions of $J_i(X_b)$ and $J_i^{\top}(X_b)$ it suffices to be able to compute the individual matvec functions of $J_i(x_j)$ and $J_i^{\top}(x_j)$. We present an algorithm for that in the next section; for now let us assume that these matvec functions are given. We can now apply the generalized power method to the matrix $J_i(X_b)$, obtaining the Stochastic Power Method (SPM).

Note that in SPM we could in principle change the batch $X_b$ between iterations of the power method in order to compute “more general” singular vectors. However, in our experiments we discovered that this almost does not affect the fooling rate of the generated universal perturbation.

5 Efficient implementation of the matvec functions

Matrices of the form $J_i(X_b)$ for typical DNNs are too large to be formed explicitly. However, using the automatic differentiation available in most deep learning packages, it is possible to construct matvec functions which are then evaluated in a fraction of a second. To compute the matvecs we follow the well-known approach based on Pearlmutter's R-operator [17], which can be briefly explained as follows. Suppose that we are given an operation $\operatorname{grad}_x g(x)\big|_{x_0}$, which computes the gradient of a scalar function $g$ with respect to the vector variable $x$ at the point $x_0$. Let $f_i$ be some fixed layer of the DNN; for vectors $u$ and $v$ of appropriate sizes we would like to compute $J_i^{\top}(x_0)\, u$ and $J_i(x_0)\, v$ at some fixed point $x_0$. The first product is an ordinary vector-Jacobian product directly available via backpropagation,

$$J_i^{\top}(x_0)\, u = \operatorname{grad}_x \langle f_i(x), u \rangle \big|_{x_0},$$

and the second one is obtained by differentiating this expression, which is linear in $u$, once more:

$$J_i(x_0)\, v = \operatorname{grad}_u \bigl\langle \operatorname{grad}_x \langle f_i(x), u \rangle \big|_{x_0},\, v \bigr\rangle.$$

For a given batch of images this construction is performed only once.
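A sketch of how these matvec functions could be set up with TensorFlow's automatic differentiation is given below (our illustration of the grad-of-grad trick described above, not the authors' exact code). `layer_fn` is assumed to map a batch of images to the chosen hidden-layer activations, and the returned closures match the interface of the power-method sketch from Section 3.

```python
import tensorflow as tf

def make_matvecs(layer_fn, x_batch):
    """Matvec closures for the stacked Jacobian J = [J_i(x_1); ...; J_i(x_b)]
    of layer_fn at the images in x_batch, without ever forming J."""
    x_batch = tf.convert_to_tensor(x_batch)
    y_shape = tf.shape(layer_fn(x_batch))  # shape of the stacked layer output

    def rmatvec(u):
        # J^T u: an ordinary vector-Jacobian product (one backward pass),
        # summed over the batch because the perturbation is shared by all images.
        u = tf.reshape(tf.cast(u, x_batch.dtype), y_shape)
        with tf.GradientTape() as tape:
            tape.watch(x_batch)
            y = layer_fn(x_batch)
        g = tape.gradient(y, x_batch, output_gradients=u)
        return tf.reshape(tf.reduce_sum(g, axis=0), [-1]).numpy()

    def matvec(v):
        # J v via the identity J v = grad_u <J^T u, v>: the inner backward
        # pass (linear in the dummy u) is differentiated once more w.r.t. u.
        v = tf.reshape(tf.cast(v, x_batch.dtype), tf.shape(x_batch)[1:])
        u = tf.zeros(y_shape, dtype=x_batch.dtype)
        with tf.GradientTape() as outer:
            outer.watch(u)
            with tf.GradientTape() as inner:
                inner.watch(x_batch)
                y = layer_fn(x_batch)
            g = inner.gradient(y, x_batch, output_gradients=u)  # J^T u
            s = tf.reduce_sum(g * v[tf.newaxis, ...])           # <J^T u, v>
        return tf.reshape(outer.gradient(s, u), [-1]).numpy()   # = J v

    return matvec, rmatvec
```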

Let us summarize our approach for generating universal perturbations. Suppose that we have some dataset $X$ of natural images and a fixed deep neural network trained to perform image classification. At first we choose a fixed random batch $X_b$ of images from $X$ and specify a hidden layer of the DNN. Then, using the construction above, we build the matvec functions of the stacked Jacobian matrix $J_i(X_b)$. Finally, we run the Stochastic Power Method to obtain the perturbation $\varepsilon$, and then rescale it if necessary.
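Putting the pieces together, an end-to-end sketch might look as follows; it reuses the hypothetical `make_matvecs` and `pq_singular_vector` helpers from the earlier snippets, and the Keras model, layer name, and data file are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

# Assumptions: a pretrained Keras VGG-19, its 'block2_pool' layer, and a batch
# of 64 preprocessed ImageNet images of shape (64, 224, 224, 3).
base = tf.keras.applications.VGG19(weights="imagenet")
layer_fn = tf.keras.Model(base.input, base.get_layer("block2_pool").output)
x_batch = np.load("batch64.npy").astype("float32")  # hypothetical file

matvec, rmatvec = make_matvecs(layer_fn, x_batch)
n = int(np.prod(x_batch.shape[1:]))  # input dimension, 224 * 224 * 3

# Stochastic Power Method: (infinity, 10)-singular vector of the stacked Jacobian.
v, sigma = pq_singular_vector(matvec, rmatvec, n, p=np.inf, q=10, n_iter=20)

# Rescale the unit infinity-norm vector to the desired magnitude and
# reshape it back into image form.
eps = 10.0 * np.reshape(v, x_batch.shape[1:])
```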

6 Experiments

In this section we analyze various adversarial perturbations constructed as discussed above. For testing purposes we use the ILSVRC 2012 validation dataset [18] (50000 images).

6.1 Adversarial perturbations

In our experiments we chose $p = \infty$, $q = 10$ and computed the $(\infty, 10)$-singular vectors for various layers of VGG-16 and VGG-19 [19] and ResNet50 [9]. The value $q = 10$ was chosen to smooth the optimization problem and effectively serves as a replacement for $q = \infty$, for which the highest fooling rates were reported in [14]. We also investigate other values of $q$ in Section 6.3. The batch size in the Stochastic Power Method was chosen to be 64, and we used the same 64 images to construct all the adversarial perturbations.

Figure 1: VGG-16
Figure 2: VGG-19
Figure 3: ResNet50

Some of the computed singular vectors are presented in Figures 1–3. We observe that the computed singular vectors look visually appealing and present interesting visual patterns. A possible interpretation of these patterns can be given if we note that extremely similar images were computed in [15] in relation to feature visualization. Namely, for various layers in GoogLeNet [20] the images which activate a particular neuron were computed. In particular, the visualization of the layer conv2d0, which corresponds to edge detection, looks surprisingly similar to several of our adversarial perturbations. Informally speaking, this might indicate that adversarial perturbations constructed as the $(p, q)$-singular vectors attack a network by ruining a certain level of image understanding, where in particular the first layers correspond to edge detection. This is partly supported by the fact that the approach used for feature visualization in [15] is based on computing the Jacobian matrix of a hidden layer and maximizing the response of a fixed neuron, which is in spirit related to our method.
To measure how strongly the $(\infty, 10)$-singular vector disturbs the output of the hidden layer based on which it was constructed, we evaluate the corresponding singular value. We have computed it for all the layers of VGG-16, VGG-19 and ResNet50; results for several layers are reported in Tables 1–3. Note that in general the singular values of the layers of ResNet50 are much smaller in magnitude than those of the VGG nets, which is further shown to roughly correspond to the obtained fooling rates.
Convergence of the power method is analyzed in Figure 4. We observe that a relatively low number of iterations is required to achieve good accuracy. In particular, if each evaluation of the matvec functions takes $O(T)$ operations, the total complexity is $O(rT)$ for $r$ iterations, which for small $r$ is a big improvement compared to the $O(nm)$ cost of the exact algorithm.

Figure 4: Convergence of the $(\infty, 10)$-singular value for VGG-16, VGG-19 and ResNet50. The relative error of the singular value w.r.t. the iteration number is shown.

6.2 Fooling rate and singular values

As a next experiment we computed and compared the fooling rates of the perturbations given by the various computed singular vectors. We chose $\|\varepsilon\|_{\infty} = 10$ (see the footnote below); recall that this can be achieved simply by multiplying the computed unit-norm singular vector by the factor of 10. Results are given in Tables 1–3.

Table 1: Fooling rates for VGG-16
Layer name      block2_pool  block3_conv1  block3_conv2  block3_conv3
Singular value  1165.74      2200.08       3146.66       6282.64
Fooling rate    0.52         0.39          0.50          0.50
Table 2: Fooling rates for VGG-19
Layer name      block2_pool  block3_conv1  block3_conv2  block3_conv3
Singular value  784.82       1274.99       1600.77       3063.72
Fooling rate    0.60         0.33          0.50          0.52
Table 3: Fooling rates for ResNet50
Layer name      conv1   res3c_branch2a  bn5a_branch2c  activation_8
Singular value  59.69   19.21           138.81         15.55
Fooling rate    0.44    0.35            0.34           0.34


We see that using only 64 images allowed us to achieve fooling rates of more than 40% for all the investigated networks on the dataset containing 50000 images of 1000 different classes. This means that by analyzing only a tiny fraction of the dataset it is possible to design strong universal adversarial attacks that generalize to many unseen classes and images. The fooling rates reported in [14] required several thousand images to achieve (see Section 6.4 for a further comparison). Examples of images after addition of the adversarial perturbation with the highest fooling rate for VGG-19 are given in Figure 7, and their predicted classes for various adversarial attacks (for each network we choose the adversarial perturbation with the highest fooling rate) are reported in Tables 5–7. We note that the top-1 class probability for images after the adversarial attack is relatively low in most cases, which might indicate that images are moved away from the decision boundary. We test this behavior by computing the top-5 probabilities for several values of the $\infty$-norm of the adversarial perturbation. Results are given in Figure 5. We see that the top-1 probability decreases significantly and becomes roughly equal to the top-2 probability. Similar behavior was noticed in some of the cases when the adversarial example failed to fool the DNN: the top-1 probability still decreased significantly. It is also interesting to note that such an adversarial attack indeed introduces many new edges in the image, which supports the claim made in the previous section.

Figure 5: Top-5 probabilities predicted by VGG-19 w.r.t. the $\infty$-norm of the universal adversarial perturbation. Tests were run for image 3 from Figure 7. The universal adversarial perturbation with the highest fooling rate in Table 2 was chosen.


As a next experiment we investigate the dependence of the achieved fooling rate on the batch size used in the Stochastic Power Method. Some of the results are given in Figure 6. Surprisingly, increasing the batch size does not significantly affect the fooling rate, and even with very small batches it is possible to construct adversarial perturbations with a comparable fooling rate. This suggests that the singular vector constructed using the Stochastic Power Method approximates the solution of the general optimization problem (Equation 4) reasonably well.

Figure 6: Dependence of the fooling rate on the batch size. The block2_pool layer of VGG-19 was used for the experiment.


It appears that a higher singular value of a layer does not necessarily indicate a higher fooling rate of the corresponding singular vector. However, the singular values of the layers of VGG-19 are in general larger than those of VGG-16, which in turn are in general larger than the singular values of ResNet50, and this roughly corresponds to the maximal fooling rates we obtained for these networks. Moreover, layers closer to the input of the DNN seem to produce better adversarial perturbations than those closer to the end.
Based on this observation, we hypothesize that to defend the DNN against this kind of adversarial attack one can choose some subset of the layers of the DNN (preferably closer to the input) and include the sum of the corresponding $(p, q)$-singular values, computed for the current learning batch, as an additional term in the regularizer. We plan to analyze this approach in future work.
Finally, we investigate whether our adversarial perturbations generalize across different networks. For each DNN we have chosen the adversarial perturbation with the highest fooling rate from Tables 1–3 and tested it against the other networks. Results are given in Table 4. We see that these adversarial perturbations are indeed doubly universal, generalizing reasonably well to other architectures. Surprisingly, in some cases the fooling rate of the adversarial perturbation constructed using another network was higher than that of the network's own adversarial perturbation. This universality might be explained by the fact that if Deep Neural Networks, independently of the specifics of their architecture, indeed learn to detect low-level patterns such as edges, then adding an edge-like noise has a high chance of ruining the prediction. It is interesting to note that the adversarial perturbation obtained using the block2_pool layer of VGG-19 is the most efficient one, in correspondence with its interesting edge-like structure.

Table 4: Generalization of the adversarial perturbations across networks. Columns indicate the DNN for which the adversarial perturbation was computed, rows indicate the network on which it was tested. The adversarial perturbations with the highest fooling rates in Tables 1–3 were chosen.
            VGG-16  VGG-19  ResNet50
VGG-16      0.52    0.60    0.39
VGG-19      0.48    0.60    0.38
ResNet50    0.41    0.47    0.44

6.3 Dependence of the perturbation on q

Figure 7: Examples of images misclassified after the adversarial attack (the attack based on the block2_pool layer of VGG-19 is shown). Predicted classes are given in Tables 5–7.
Table 5: Predicted classes before and after the attack, VGG-16
          image_1        image_2        image_3   image_4
Original  mashed_potato  pole           fountain  goblet
Attacked  head_cabbage   rubber_eraser  carousel  bucket
Table 6: Predicted classes before and after the attack, VGG-19
          image_1        image_2        image_3   image_4
Original  mashed_potato  flagpole       fountain  coffee_mug
Attacked  flatworm       letter_opener  pillow    candle
Table 7: Predicted classes before and after the attack, ResNet50
          image_1        image_2         image_3   image_4
Original  mashed_potato  totem_pole      flagpole  chocolate_sauce
Attacked  stole          fountain_pen    monitor   goblet

In the analysis so far we have chosen $q = 10$ as an approximation to $q = \infty$. However, any value of $q$ can be used for constructing the adversarial perturbations, and in this subsection we investigate how the choice of $q$ affects the fooling rate and the generated perturbations (while keeping $p = \infty$). Perturbations computed for several different values of $q$ are presented in Figure 8, and the corresponding fooling rates are reported in Figure 9. We observe that bigger values of $q$ produce clearer edge-like patterns, which is reflected in the increase of the fooling rate. However, the maximal fooling rate seems to be achieved at $q = 5$, probably because it is a 'smoother' substitute for $q = \infty$, which might be important in such large-scale problems.

Figure 8: Adversarial perturbations constructed for various values of $q$. Presented images correspond to values of $q$ uniformly increasing from 1.0 to 5.0. The block2_pool layer of VGG-19 was used.
Figure 9: Dependence of the fooling rate on the value of $q$. As before, $p = \infty$ and the norm of the adversarial perturbation was set to 10.

6.4 Comparison of the algorithms

In this subsection we compare the algorithm presented by Moosavi-Dezfooli et al. [14], which we refer to as UAP, with our method. For the former we use the Python implementation https://github.com/LTS4/universal/. Since one of the main features of our method is the extremely low number of images used for constructing the perturbation, we decided to compare the fooling rates of universal perturbations constructed using these two methods for various batch sizes. Results are presented in Figure 10. Note that our method indeed captures the universal attack vector relatively fast: its fooling rate stabilizes quickly, while the fooling rate of the perturbation constructed by the UAP method starts low and then gradually increases as more images are added. The running time of our algorithm depends on which hidden layer we use. As an example, for the block2_pool layer of VGG-19 the running time per iteration of the power method for a batch of one image was on the order of seconds (one NVIDIA Tesla K80 GPU was used and the algorithm was implemented using the TensorFlow [1] and numpy libraries). Since the running time per iteration depends linearly on the batch size, the total running time can be estimated as the per-image iteration time multiplied by the batch size and the number of iterations; with the batch size and iteration count used here, the total running time to generate a universal perturbation with a high fooling rate on the whole dataset is on the order of a minute (we did not include the time required to construct the symbolic Jacobian matvecs, since this is performed only once and is also required in the implementation of UAP, though a different layer is used). In our hardware setup the running time of the UAP algorithm was substantially longer, and according to [14] several thousand images are required to obtain fooling rates of the same order.

Figure 10: Dependence of the fooling rate on the number of images used for constructing the universal perturbation. SingularFool denotes the method proposed in the current paper, UAP denotes the algorithm presented in [14]. $q = 5$ and the block2_pool layer of VGG-19 were used.

7 Related work

Many different methods [6] have been proposed to perform adversarial attacks on Deep Neural Networks in the white-box setting, where the DNN is fully available to the attacker. Two works are especially relevant for the present paper. Goodfellow et al. [6] propose the fast gradient sign method, which is based on computing the gradient of the loss function at some image and taking its sign as the adversarial perturbation. This approach allows one to construct rather efficient adversarial perturbations for individual images and can be seen as a particular case of our method: indeed, if we take the batch size to be equal to 1 and the loss function as the hidden layer, then the sign of the gradient is exactly the solution of the problem (Equation 3) with $p = \infty$ (and since the output of the loss is a scalar, this problem does not depend on $q$). The second work is that of Moosavi-Dezfooli et al. [14], where universal adversarial perturbations were proposed. It is based on a sequential solution of nonlinear optimization problems followed by a projection onto the $p$-norm sphere, which iteratively computes the 'worst' possible direction towards the decision boundary. The optimization problems proposed in the current work are simpler in nature and well studied, and due to their homogeneity the adversarial perturbation with an arbitrary norm is obtained by simply rescaling the once-computed perturbation, in contrast with the algorithm in [14].

8 Conclusion

In this work we explored a new algorithm for generating universal adversarial perturbations and analyzed their main properties, such as generalization across networks, the dependence of the fooling rate on various hyperparameters, and their characteristic visual patterns. We have shown that by using only 64 images a single perturbation fooling the network in roughly 60% of the cases can be constructed, while the previously known approach required several thousand images to obtain such fooling rates. In future work we plan to address the relation between feature visualization [15] and adversarial perturbations, as well as to analyze the defense approach discussed in Section 6.2.

Acknowledgements

This study was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001), by RFBR grants 16-31-60095-mol-a-dk, 16-31-00372-mola and by Skoltech NGP program.

Footnotes

  1. Pixels in the images from the dataset are normalized to be in the $[0, 255]$ range, so by choosing $\|\varepsilon\|_{\infty} = 10$ we make the adversarial perturbations quasi-imperceptible to the human eye.

References

  1. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
  2. A. Bhaskara and A. Vijayaraghavan. Approximating matrix p-norms. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, pages 497–511. SIAM, 2011.
  3. D. W. Boyd. The power method for lp norms. Linear Algebra and its Applications, 9:95–101, 1974.
  4. F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. 1999.
  5. I. Goodfellow, N. Papernot, and P. McDaniel. cleverhans v0.1: an adversarial machine learning library. arXiv preprint arXiv:1610.00768, 2016.
  6. I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  7. A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
  8. K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435, 2016.
  9. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  10. A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
  11. Y. LeCun, Y. Bengio, et al. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10), 1995.
  12. Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
  13. T. Mikolov, S. Kombrink, L. Burget, J. Černocky, and S. Khudanpur. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5528–5531. IEEE, 2011.
  14. S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.
  15. C. Olah, A. Mordvintsev, and L. Schubert. Feature visualization. Distill, 2017.
  16. N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Security and Privacy (SP), 2016 IEEE Symposium on, pages 582–597. IEEE, 2016.
  17. B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160, 1994.
  18. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
  19. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  20. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
  21. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  22. L. N. Trefethen and D. Bau III. Numerical Linear Algebra, volume 50. SIAM, 1997.