Eli commented:

This paper examines the effectiveness of MLPs compared to CNNs on images whose pixels have been randomly permuted. A couple …

ivan replied 2 days ago:

The study of CNN performance on pixel-permuted images was motivated by the question of how convolutions find correlations inside the data. The intuition was that CNNs should not be able to classify randomized images, because there is no local correlation left to capture. The study shows otherwise. A further motivation is that in some areas of science where these methods have only recently been adopted, the data does not contain locally correlated information, due to the physical processes involved in the measurements, so it is important to understand the limits within which neural networks can be used.

As is well known, the MNIST dataset is well behaved: the images are centered, there is no background, most digits from the same class are very similar to each other, and the mean and standard deviation are very stable.

MLPs are not permutation sensitive, because every neuron in a given layer is connected to every neuron in the previous layer; a fixed permutation therefore does not change the structure of the data the MLP "perceives". Given the simplicity of the MNIST images, one can argue that CNNs remain performant because the fully connected layers at the end of the series of convolutional layers do the hard work of classification: the dataset is simple enough even if the convolutions find no relevant correlations.

CIFAR10 is a more difficult dataset, with uncentered images, complex backgrounds, more channels, etc. Because the local correlations (per channel) are destroyed by the permutation, there is a significant difference in behaviour between natural and permuted images.

Cheers, Cristian
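The permutation insensitivity of MLPs can be sketched in a few lines. The idea: for a fully connected layer, a fixed pixel permutation applied to every input can be absorbed into a reordering of the weight columns, so the layer's output is unchanged. This is a minimal numpy illustration (the layer size and seed are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 784                        # e.g. a flattened 28x28 MNIST image
x = rng.normal(size=n_pixels)         # one flattened image (toy data)
W = rng.normal(size=(128, n_pixels))  # first fully connected layer

perm = rng.permutation(n_pixels)      # one fixed random pixel permutation

original = W @ x                      # pre-activation on the natural image
permuted = W[:, perm] @ x[perm]       # same layer with matching column order

# The two are identical: the permutation is invisible to a fully
# connected layer, since a reindexed sum is still the same sum.
print(np.allclose(original, permuted))
```

In other words, an MLP trained on consistently permuted images is equivalent (up to weight reordering) to one trained on the originals, which is why its accuracy does not degrade.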
