Abstract
High-dimensional data and high-dimensional representations of reality are inherent features of modern Artificial Intelligence systems and applications of machine learning. The well-known phenomenon of the “curse of dimensionality” states that many problems become exponentially difficult in high dimensions. Recently, the other side of the coin, the “blessing of dimensionality”, has attracted much attention. It turns out that generic high-dimensional datasets exhibit fairly simple geometric properties. Thus, there is a fundamental trade-off between complexity and simplicity in high-dimensional spaces. Here we present a brief explanatory review of recent ideas, results and hypotheses about the blessing of dimensionality and related simplifying effects relevant to machine learning and neuroscience.
Title: High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality
Authors: Alexander N. Gorban, Valery A. Makarov and Ivan Y. Tyukin
Correspondence: a.n.gorban@le.ac.uk
1 Introduction
During the last two decades, the curse of dimensionality in data analysis has been complemented by the blessing of dimensionality: if a dataset is essentially high-dimensional then, surprisingly, some problems get easier and can be solved by simple and robust old methods. The curse and the blessing of dimensionality are closely related, like two sides of the same coin. The research landscape of these phenomena is gradually becoming more complex and rich. New theoretical achievements and applications provide a new context for old results. The single-cell revolution in neuroscience and the phenomena of grandmother cells and sparse coding discovered in the human brain meet the new mathematical ‘blessing of dimensionality’ ideas. In this mini-review, we aim to provide a short guide to new results on the blessing of dimensionality and to highlight the path from the curse of dimensionality to the blessing of dimensionality. The selection of material and the angle of view are based on our own experience. We do not try to cover everything on the subject but rather fill in the gaps in existing tutorials and surveys.
R. Bellman Bellman (1957), in the preface to his book, discussed the computational difficulties of multidimensional optimization and summarized them under the heading “curse of dimensionality”. He proposed to re-examine the situation, not as a mathematician, but as a “practical man” Bellman (1954), and concluded that the price of excessive dimensionality “arises from a demand for too much information”. Dynamic programming was considered a method of dimensionality reduction in the optimization of a multi-stage decision process. Bellman returned to the problem of dimensionality reduction many times in different contexts Bellman and Kalaba (1961). Now, dimensionality reduction is an essential element of the engineering (the “practical man”) approach to mathematical modeling Gorban et al. (2006). Many model reduction methods were developed and successfully implemented in applications, from various versions of principal component analysis to approximation by manifolds, graphs, and complexes Jolliffe (1993); Gorban et al. (2008); Gorban and Zinovyev (2010), and low-rank tensor network decompositions Cichocki et al. (2016, 2017).
Various reasons and forms of the curse of dimensionality were classified and studied, from the obvious combinatorial explosion (for example, for n binary Boolean attributes, to check all the combinations of values we have to analyze 2^n cases) to the more sophisticated distance concentration: in a high-dimensional space, the distances between randomly selected points tend to concentrate near their mean value, and the neighbor-based methods of data analysis become useless in their standard forms Beyer et al. (1999); Pestov (2013). Many “good” polynomial-time algorithms become useless in high dimensions.
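Distance concentration is easy to observe numerically. The following sketch (our illustration, not code from the cited works; the function name and sample sizes are arbitrary) samples uniform points in a cube and shows that the relative spread of pairwise distances shrinks as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(n_dim, n_points=500):
    """Std/mean ratio of pairwise distances for uniform points in the unit cube."""
    X = rng.random((n_points, n_dim))
    sq = np.sum(X**2, axis=1)
    # squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 (a, b)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    d = np.sqrt(np.clip(d2, 0.0, None))
    iu = np.triu_indices(n_points, k=1)   # each unordered pair once
    return d[iu].std() / d[iu].mean()

# the relative spread of distances shrinks as the dimension grows
spreads = {n: distance_spread(n) for n in (2, 10, 100, 1000)}
```

In dimension 1000 the pairwise distances differ from their mean by only a couple of percent, which is exactly why naive nearest-neighbor reasoning degrades.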
Surprisingly, however, and despite the expected challenges and difficulties, common-sense heuristics based on the simplest and most straightforward methods “can yield results which are almost surely optimal” for high-dimensional problems Kainen (1997). Following this observation, the term “blessing of dimensionality” was introduced Kainen (1997); Brown et al. (1997). It was clearly articulated as a basis of future data mining in Donoho’s “Millennium manifesto” Donoho (2000). After that, the effects of the blessing of dimensionality were discovered in many applications, for example in face recognition Chen et al. (2013), in analysis and separation of mixed data that lie on a union of multiple subspaces from their corrupted observations Liu et al. (2016), in multidimensional cluster analysis Murtagh (2009), in learning large Gaussian mixtures Anderson et al. (2014), in correction of errors of multidimensional machine learning systems Gorban et al. (2016), in evaluation of statistical parameters Li et al. (2018), and in the development of generalized principal component analysis that provides low-rank estimates of the natural parameters by projecting the saturated model parameters Landgraf and Lee (2019).
Ideas of the blessing of dimensionality became popular in signal processing, for example in compressed sensing Donoho (2006); Donoho and Tanner (2009) or in recovering a vector of signals from corrupted measurements Candes et al. (2005), and even in such specific problems as analysis and classification of EEG patterns for attention deficit hyperactivity disorder diagnosis Pereda et al. (2018).
There exist exponentially large sets of pairwise almost orthogonal vectors (‘quasi-orthogonal’ bases, Kainen and Kůrková (1993)) in high-dimensional Euclidean space. It was noticed in the analysis of samples of n-dimensional random vectors drawn from the standard Gaussian distribution with zero mean and identity covariance matrix that all the rays from the origin to the data points have approximately equal length, are nearly orthogonal, and the distances between data points are all about √2 times larger Hall et al. (2005). This observation holds even for exponentially large samples (of size b^n for some b > 1, which depends on the degree of the approximate orthogonality) Gorban et al. (2016). Projection of a finite data set on random bases can reduce dimension with preservation of the ratios of distances (the Johnson–Lindenstrauss lemma Dasgupta and Gupta (2003)).
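These observations can be checked directly (an illustrative sketch; sizes and seed are arbitrary): i.i.d. Gaussian vectors have nearly equal lengths, about √n, and are pairwise almost orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)

n_dim, n_vec = 5000, 1000
V = rng.standard_normal((n_vec, n_dim))

# rays from the origin have nearly equal lengths, about sqrt(n_dim)
lengths = np.linalg.norm(V, axis=1)

# pairwise cosines concentrate near zero: the vectors are quasi-orthogonal
U = V / lengths[:, None]
C = np.abs(U @ U.T)
np.fill_diagonal(C, 0.0)
worst_cosine = C.max()
```

Near-orthogonality plus equal lengths imply the √2 factor above: if (x, y) ≈ 0 and ‖x‖ ≈ ‖y‖, then ‖x − y‖² ≈ ‖x‖² + ‖y‖², i.e. the distance is about √2 times the length of a ray.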
Such an intensive flux of works assures us that we should not fear or avoid large dimensionality. We just have to use it properly. Each application requires a specific balance between the extraction of important low-dimensional structures (‘reduction’) and the use of the remarkable properties of high-dimensional geometry that underlie statistical physics and other fundamental results Gorban and Tyukin (2018); Vershynin (2018).
Both the curse and the blessing of dimensionality are consequences of the measure concentration phenomena Giannopoulos and Milman (2000); Ledoux (2005); Gorban and Tyukin (2018); Vershynin (2018). These phenomena were discovered in the development of the statistical foundations of thermodynamics. Maxwell, Boltzmann, Gibbs, and Einstein found that for systems of many particles the distribution functions have surprising properties. For example, the Gibbs theorem of ensemble equivalence Gibbs (1960) states that a physically natural microcanonical ensemble (with fixed energy) is statistically equivalent (provides the same averages of physical quantities in the thermodynamic limit) to a maximum entropy canonical ensemble (the Boltzmann distribution). A simple geometric example of similar equivalence is given by the ‘thin shell’ concentration for balls: the volume of a high-dimensional ball is concentrated near its surface. Moreover, a high-dimensional sphere is concentrated near any equator (waist concentration; the general theory of such phenomena was elaborated by M. Gromov Gromov (2003)). P. Lévy Lévy (1951) analysed these effects and proved the first general concentration theorem. Modern measure concentration theory is a mature mathematical discipline with many deep results, comprehensive reviews Giannopoulos and Milman (2000), books Ledoux (2005); Dubhashi and Panconesi (2009), advanced textbooks Vershynin (2018), and even elementary geometric introductions Ball (1997). Nevertheless, surprising counterintuitive results continue to appear and push new achievements in machine learning, Artificial Intelligence (AI), and neuroscience.
This mini-review focuses on several novel results: stochastic separation theorems and evaluation of the goodness of clustering in high dimensions, and their applications to the correction of AI errors. Several possible applications to the dynamics of selective memory in the real brain and the ‘simplicity revolution in neuroscience’ are also briefly discussed.
2 Stochastic Separation Theorems
2.1 Blessing of Dimensionality Surprises and Correction of AI Mistakes
D. Donoho and J. Tanner Donoho and Tanner (2009) formulated several ‘blessing of dimensionality’ surprises. In most cases, they considered M points sampled independently from a standard normal distribution in dimension n. Intuitively, we expect that some of the points will lie on the boundary of the convex hull of these points, and the others will be inside the interior of the hull. However, for large n and M, this expectation is wrong. This is the main surprise: with high probability, all M random points are vertices of their convex hull. It is sufficient that M grows with n not faster than some exponential, M ≤ ab^n, for constants a > 0 and b > 1 that depend on the required probability only Gorban et al. (2018, 2019). Moreover, with high probability, each segment connecting a pair of vertices is also an edge of the convex hull, and any simplex with k + 1 vertices from the sample is a k-dimensional face of the convex hull for some range of values of k. For uniform distributions in a ball, similar results were proved earlier by I. Bárány and Z. Füredi Bárány and Füredi (1988). According to these results, each point of a random sample can be separated from all other points by a linear functional, even if the set is exponentially large.
Such separability is important for the solution of the technological problem of fast, robust and non-damaging correction of AI mistakes Gorban and Tyukin (2018); Gorban et al. (2018, 2019). AI systems make mistakes and will make mistakes in the future. If a mistake is detected, then it should be corrected. The complete retraining of the system requires too many resources and is rarely applicable to the correction of a single mistake. We proposed to use additional simple machine learning systems, correctors, for separation of the situations with a higher risk of mistake from the situations with normal functioning Gorban et al. (2016); Gorban and Tyukin (2017) (Figure 1). The decision rules should be changed for situations with higher risk. Inputs for correctors are: the inputs of the original AI system, the outputs of this system, and (some) internal signals of this system Gorban et al. (2018, 2019). The construction of correctors for AI systems is crucial in the development of future AI ecosystems.
Correctors should Gorban and Tyukin (2018):

be simple;

not damage the existing skills of the AI system;

allow fast noniterative learning;

correct new mistakes without destroying the previous fixes.
Of course, if an AI system makes too many mistakes, then its correctors could conflict. In such a case, retraining is needed with the inclusion of new samples.
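A minimal sketch of a one-neuron corrector in the spirit described above (the class name and synthetic data are our illustration, not the authors’ code): the Fisher direction computed from the statistics of correctly processed situations separates a single known mistake, and the chosen threshold corresponds to Fisher separability with threshold 1/2 in whitened coordinates:

```python
import numpy as np

class FisherCorrector:
    """One-neuron corrector (sketch): a linear functional flagging inputs that
    resemble a known mistake of a legacy AI system."""

    def __init__(self, correct_data, mistake, reg=1e-6):
        self.mean = correct_data.mean(axis=0)
        cov = np.cov(correct_data - self.mean, rowvar=False)
        # classical Fisher direction: w = S^(-1) (mistake - mean)
        m = mistake - self.mean
        self.w = np.linalg.inv(cov + reg * np.eye(cov.shape[0])) @ m
        # fire halfway to the mistake: Fisher separability with threshold 1/2
        self.threshold = 0.5 * self.w @ m

    def flags(self, X):
        """True for inputs that should be sent to the corrected decision rule."""
        return (X - self.mean) @ self.w > self.threshold
```

Learning is non-iterative (one matrix inversion and one matrix–vector product), the legacy system is untouched, and new correctors can be stacked without destroying previous fixes, matching the four requirements above.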
2.2 Fisher Separability
Linear separation of data points from datasets Bárány and Füredi (1988); Donoho and Tanner (2009) is a good candidate for the development of AI correctors. Nevertheless, from the ‘practical man’ point of view, one particular case, Fisher’s discriminant Fisher (1936), is much preferable to the general case because it allows one-shot and explicit creation of the separating functional.
Consider a finite data set Y ⊂ R^n without any hypothesis about the probability distribution. Let (x, y) be the standard inner product in R^n. Let us define Fisher’s separability following Gorban et al. (2018). Definition. A point x ∈ R^n is Fisher-separable from a finite set Y ⊂ R^n with a threshold α (0 < α < 1) if

(x, y) ≤ α (x, x) for all y ∈ Y.  (1)
This definition coincides with the textbook definition of Fisher’s discriminant if the data set is whitened, which means that the mean point is at the origin and the sample covariance matrix is the identity matrix. Whitening is often a simple byproduct of principal component analysis (PCA) because, on the basis of principal components, whitening is just the normalization of coordinates to unit variance. Again, following the ‘practical’ approach, we stress that precise PCA and whitening are not necessary; rather, an a priori bounded condition number is needed: the ratio of the maximal and the minimal eigenvalues of the empirical covariance matrix after whitening should not exceed a given number κ, independently of the dimension.
A finite set Y is called Fisher-separable if each point x ∈ Y is Fisher-separable from the rest of the set (Definition 3, Gorban et al. (2018)).
A finite set Y ⊂ R^n is called Fisher-separable with threshold α if inequality (1) holds for all x, y ∈ Y such that x ≠ y. The set Y is called Fisher-separable if there exists some α (0 < α < 1) such that Y is Fisher-separable with threshold α.
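Inequality (1) can be checked directly on data; a sketch (the function name is ours), with optional PCA-based whitening as discussed above:

```python
import numpy as np

def fisher_separable_fraction(X, alpha=0.8, whiten=True):
    """Fraction of points x_i with (x_i, x_j) <= alpha (x_i, x_i) for all j != i,
    i.e. Fisher-separable from the rest of the set by inequality (1)."""
    X = np.asarray(X, dtype=float)
    if whiten:
        X = X - X.mean(axis=0)
        # PCA-based whitening: rotate to principal axes, normalise variances
        _, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (X @ Vt.T) * (np.sqrt(len(X) - 1) / s)
    G = X @ X.T                    # all inner products (x_i, x_j)
    diag = np.diag(G).copy()
    np.fill_diagonal(G, -np.inf)   # exclude j == i
    return float(np.mean(G.max(axis=1) <= alpha * diag))
```

For an i.i.d. Gaussian sample, the separable fraction is essentially 1 in dimension 100 and essentially 0 in dimension 2, in line with the theorems below.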
2.3 Stochastic Separation for Distributions with Bounded Support
Let us analyse the separability of a random point from a finite set in the n-dimensional unit ball B_n. Consider distributions that can deviate from the equidistribution, where the deviations can grow with dimension n but not faster than a geometric progression with common ratio 1/r > 1 (0 < r < 1); hence, the maximal density satisfies:

ρ(y) ≤ C / (r^n V_n(B_n)),  (3)

where V_n(B_n) is the volume of the unit ball and the constant C > 0 does not depend on n.
For such a distribution in the unit ball, the probability to find a random point in the excluded volume (Figure 2) tends to 0 as a geometric progression with common ratio 1/(2rα) when n → ∞, provided that 2rα > 1.
(Theorem 1, Gorban et al. (2018)) Let 1/2 ≤ α < 1, 2rα > 1, and let Y ⊂ B_n be a finite set with M elements. Assume that a probability distribution in the unit ball has a density ρ with maximal value ρ_max, which satisfies inequality (3). Then the probability p that a random point x from this distribution is Fisher-separable from Y with threshold α is p ≥ 1 − ϑ, where the probability of inseparability ϑ ≤ MC (2rα)^(−n).
Let us evaluate the probability that a random set Y = {x_1, …, x_M} is Fisher-separable. Assume that each point x_i of Y is randomly and independently selected from a distribution that satisfies (3). These distributions could be different for different x_i.
(Theorem 2, Gorban et al. (2018)) Assume that each of these probability distributions in the unit ball has a density with maximal value ρ_max, which satisfies inequality (3). Let 1/2 ≤ α < 1 and 2rα > 1. Then the probability p that Y is Fisher-separable with threshold α is p ≥ 1 − ϑ, where the probability of inseparability ϑ ≤ M(M − 1)C (2rα)^(−n).
The difference from Theorem 1 is that here ϑ is proportional to M(M − 1), whereas in Theorem 1 it is proportional to M. Again, M can grow exponentially with the dimension n as a geometric progression with common factor q provided that q² < 2rα, while in Theorem 1 M may grow faster, as a geometric progression with any common factor q < 2rα.
For illustration, if Y is an i.i.d. sample from the uniform distribution in the 100-dimensional ball, then even for very large sample sizes M this set is Fisher-separable with probability close to one Gorban and Tyukin (2017).
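A quick numerical experiment agrees with these theorems (our illustration; uniform ball sampling via the standard Gaussian-direction trick): in dimension 100 a sample of thousands of points is typically Fisher-separable, while in dimension 3 it is not:

```python
import numpy as np

rng = np.random.default_rng(4)

def ball_sample(m, n):
    """m i.i.d. points, uniform in the unit n-ball (Gaussian direction, radius u^(1/n))."""
    g = rng.standard_normal((m, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return g * rng.random((m, 1)) ** (1.0 / n)

def is_fisher_separable(X, alpha=0.8):
    """Whether every point satisfies (x_i, x_j) <= alpha (x_i, x_i) for all j != i."""
    G = X @ X.T
    diag = np.diag(G).copy()
    np.fill_diagonal(G, -np.inf)
    return bool(np.all(G.max(axis=1) <= alpha * diag))

high_dim_ok = is_fisher_separable(ball_sample(3000, 100))  # expected: separable
low_dim_ok = is_fisher_separable(ball_sample(3000, 3))     # expected: not separable
```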
2.4 Generalisations
V. Kůrková Kůrková (2019) emphasized that many attractive measure concentration results are formulated for i.i.d. samples from very simple distributions (Gaussian, uniform, etc.), whereas the reality of big data is very different: the data are not i.i.d. samples from simple distributions. The machine learning theory based on the i.i.d. assumption should indeed be revised Gorban et al. (2019). In the theorems above, two main restrictions were employed: a set occupying relatively small volume cannot have large probability (3), and the support of the distribution is bounded. The requirement of identical distribution of different points is not needed. The independence of the data points can be relaxed Gorban et al. (2018). The boundedness of the support of the distribution can be transformed into a ‘not-too-heavy-tail’ condition. The condition ‘sets of relatively small volume should not have large probability’ remains in most generalisations. It can be considered ‘smeared absolute continuity’ because absolute continuity means that sets of zero volume have zero probability. Theorems 1 and 2 have numerous generalisations Gorban et al. (2018, 2019); Grechuk (2019); Kůrková and Sanguineti (2019). Let us briefly list some of them:

Log-concave distributions (a distribution with density ρ is log-concave if the support D = {x : ρ(x) > 0} is convex and −log ρ(x) is a convex function on D). In this case, the possibility of an exponential (non-Gaussian) tail brings a surprise: the upper size bound of the random set Y, sufficient for Fisher-separability in high dimensions with high probability, grows with dimension as an exponential of √n, i.e. slower than exponential in n (Theorem 5, Gorban et al. (2018)).

Strongly log-concave distributions. A log-concave distribution is strongly log-concave if there exists a constant c > 0 such that

log ρ(x) + log ρ(y) − 2 log ρ((x + y)/2) ≤ −c ‖x − y‖² for all x, y ∈ D.
In this case, we return to the exponential estimate of the maximal allowed size of Y (Corollary 4, Gorban et al. (2018)). The comparison theorems Gorban et al. (2018) allow us to combine different distributions, for example, the distribution from Theorem 1 in a ball with a log-concave or strongly log-concave tail outside the ball.

The kernel versions of the stochastic separation theorems were found, proved, and applied to some real-life problems Tyukin et al. (2019).

There are also various estimations beyond the standard i.i.d. hypothesis Gorban et al. (2018) but the general theory is yet to be developed.
2.5 Some Applications
The correction methods were tested on various AI applications for video stream processing: detection of faces for security applications and detection of pedestrians Gorban et al. (2018); Meshkinfamfard et al. (2018); Gorban et al. (2019), translation of sign language into text for communication of deaf-mute people Tyukin et al. (2019), knowledge transfer between AI systems Tyukin et al. (2018), medical image analysis, scanning and classifying archaeological artifacts Allison et al. (2018), etc., and even some industrial systems with a relatively high level of errors Tyukin et al. (2019).
Application of the corrector technology to image processing was patented together with industrial partners Romanenko et al. (2019). A typical test of correctors’ performance is described below. For more detail of this test, we refer to Gorban et al. (2019). A convolutional neural network (CNN) was trained to detect pedestrians in images. A set of 114,000 positive pedestrian and 375,000 negative non-pedestrian RGB images, resized to a fixed input size, was collected and used as a training set. The testing set comprised 10,000 positives and 10,000 negatives. The training and testing sets did not intersect.
In computational experiments, we investigated whether it is possible to take one of the cutting-edge CNNs and train a one-neuron corrector to eliminate all the false positives produced. We also looked at what effect this corrector had on the true positive numbers.
For each positive and false positive, we extracted the second-to-last fully connected layer from the CNN. These extracted feature vectors have dimension 4096. We applied PCA to reduce the dimension and analyzed how the effectiveness of the correctors depends on the number of principal components retained. This number varied in our experiments from 50 to 2000. The 25 false positives, taken from the testing set, were chosen at random to model single mistakes of the legacy classifier. Several such samples were chosen. For data projected on more than the first 87 principal components, one neuron with weights selected by the Fisher linear discriminant formula corrected 25 errors without doing any damage to the classification capabilities (original skills) of the legacy AI system on the training set. For 50 or fewer principal components, this separation is not perfect.
Single false positives were corrected successfully without any loss of true positive detections. We removed more than 10 false positives at no cost to true positive detections in the street video data (Nottingham) by the use of a single linear function. Further increasing the number of corrected false positives demonstrated that a single-neuron corrector could result in gradual deterioration of the true positive rates.
3 Clustering in High Dimensions
Producing a special corrector for every single mistake seems to be a non-optimal approach, despite some successes. Happily, in practice one corrector often improves performance and prevents the system from some new mistakes because the mistakes are correlated. Moreover, mistakes can be grouped in clusters, and we can create correctors for clusters of situations rather than for single mistakes. Here we meet another measure concentration ‘blessing’. In high dimensions, clusters are good (well-separated) even in situations when one could expect their strong intersection. For example, consider two clusters and distance-based clustering. Let r1² and r2² be the mean squared Euclidean distances between the centroids of the clusters and their data points, and let d be the distance between the two centroids. The standard criteria of clusters’ quality Xu and Wunsch (2008) compare r1 + r2 with d and assume that for ‘good’ clusters r1 + r2 < d. Assume the opposite, r1 + r2 > d, and evaluate the volume of the intersection of two balls with radii r1, r2. The intersection of the spheres (boundaries of the balls) is an (n − 2)-dimensional sphere with centre y0 (Figure 3). Assume |r1² − r2²| ≤ d², which means that y0 is situated between the centers of the balls (otherwise, the biggest ball includes more than a half of the volume of the smallest one). The intersection of clusters belongs to a ball of radius ρ:
ρ² = r1² − ((d² + r1² − r2²) / (2d))²,  (4)
and the fraction of the volume of each of the two initial balls in the intersection is less than (ρ/r_i)^n (i = 1, 2). These fractions evaluate the probability of confusing points between the clusters (for uniform distributions; for Gaussian distributions the estimates are similar). We can measure the goodness of high-dimensional clusters by

g = ρ / min{r1, r2}.

Note that g^n exponentially tends to zero with n increasing whenever g < 1. Small g means ‘good’ clustering.
If g < 1 then the probability to find a data point in the intersection of the balls (the ‘area of confusion’ between clusters) is negligible in high dimensions for uniform distributions in balls, for isotropic Gaussian distributions, and, in general, whenever small volume implies small probability. Therefore, the clustering of mistakes for the correction of high-dimensional machine learning systems gives good results even if the clusters are not very good by the standard measures, and correction of clustered mistakes requires much fewer correctors for the same or even better accuracy Tyukin et al. (2019).
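The geometric computation above is short enough to spell out (a sketch; the function name is ours). It returns the ratio g = ρ/min(r1, r2), where ρ is the radius of a ball containing the intersection of the two cluster balls, so that at most a g^n fraction of either ball lies in the overlap:

```python
import numpy as np

def cluster_goodness(c1, r1, c2, r2):
    """g = rho / min(r1, r2), where rho is the radius of a ball containing the
    intersection of the balls (centre c_i, radius r_i); see Eq. (4)."""
    d = float(np.linalg.norm(np.asarray(c1) - np.asarray(c2)))
    assert abs(r1**2 - r2**2) <= d**2 <= (r1 + r2)**2, "balls must overlap 'between' the centres"
    x1 = (d**2 + r1**2 - r2**2) / (2.0 * d)   # distance from c1 to the intersection plane
    rho = np.sqrt(r1**2 - x1**2)
    return rho / min(r1, r2)

# two unit balls at distance 1.2: 'bad' clusters by the r1 + r2 > d criterion,
# yet in dimension 100 the overlapping fraction of each ball is at most g**100
g = cluster_goodness(np.zeros(100), 1.0, np.r_[1.2, np.zeros(99)], 1.0)
```

Here g = 0.8, so the ‘area of confusion’ holds less than a 10⁻⁹ fraction of each ball in dimension 100, although the balls overlap heavily by the low-dimensional intuition.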
We implemented the correctors with separation of clustered false-positive mistakes from the set of true positives and tested them on the classical face detection task Tyukin et al. (2019). The legacy object detector was an OpenCV implementation of the Haar face detector. It was applied to video footage capturing traffic and pedestrians on the streets of Montreal. The powerful MTCNN face detector was used to generate ground truth data. The total number of true positives was 21,896, and the total number of false positives was 9372. The training set contained a randomly chosen 50% of the positives and false positives. PCA was used for dimensionality reduction, with 200 principal components retained. A single-cluster corrector allows one to filter 90% of all errors at the cost of missing 5% of true positives. In dimension 200, a cluster of errors is sufficiently well-separated from the true positives. A significant classification performance gain was observed with more clusters, up to 100.
A further increase of dimension (the number of principal components retained) can even damage the performance because the number of features does not coincide with the dimension of the dataset, and whitening with retained minor components can lead to ill-posed problems and loss of stability. For more detail, we refer to Tyukin et al. (2019).
4 What Does ‘High Dimensionality’ Mean?
The dimensionality of data should not be naively confused with the number of features. Let us have N objects with m features. The usual data matrix in statistics is a 2D array with N rows and m columns. The rows give the values of the features for an individual sample, and the columns give the values of a feature for different objects. In classical statistics, we assume that N ≫ m and even study asymptotic estimates for N → ∞ and fixed m. But the modern ‘post-classical’ world is different Donoho (2000): the situation with m > N (and even m ≫ N) is not anomalous anymore. Moreover, it can be considered in some sense as the generic case: we can measure a very large number of attributes for a relatively small number of individual cases.
In such a situation, the following default preprocessing method could be recommended Moczko et al. (2016): transform the N × m data matrix with m > N into the square N × N matrix of inner products (or correlation coefficients) between the individual data vectors. After that, apply PCA and all the standard machinery of machine learning. New data will be represented by their projections on the old samples. (A detailed description of this preprocessing and the following steps is presented in Moczko et al. (2016) with an applied case study.) Such preprocessing reduces the apparent dimension of the data space to N.
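This preprocessing is a one-liner in practice (a sketch; the function name is ours, and the sizes are arbitrary):

```python
import numpy as np

def gram_preprocess(X_train, X_new=None):
    """Represent each sample by its inner products with the N training samples:
    an (N, m) matrix with m >> N becomes an (N, N) one (sketch)."""
    if X_new is None:
        return X_train @ X_train.T        # N x N matrix of inner products
    return X_new @ X_train.T              # new data: projections on the old samples

rng = np.random.default_rng(5)
X = rng.standard_normal((40, 10000))      # N = 40 cases, m = 10000 attributes
K = gram_preprocess(X)                    # 40 x 40, ready for PCA and the rest
P = gram_preprocess(X, rng.standard_normal((3, 10000)))
```

The symmetric matrix K can be fed to PCA directly; replacing inner products with correlation coefficients amounts to row-centering and row-normalising X first.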
PCA gives us a tool for estimating the linear dimension of the dataset. Dimensionality reduction is achieved by using only the first few principal components. Several heuristics are used for evaluation of how many principal components should be retained:

The classical Kaiser rule recommends retaining the principal components corresponding to eigenvalues λ of the correlation matrix with λ ≥ 1 (or λ ≥ θ, where θ is a selected threshold; often θ = 1 is selected). This is, perhaps, the most popular choice.

Control of the fraction of variance unexplained. This approach is also popular, but it can retain too many minor components that can be considered ‘noise’.

Condition number control Gorban et al. (2018) recommends retaining the principal components corresponding to eigenvalues λ ≥ λ_max/κ, where λ_max is the maximal eigenvalue of the correlation matrix and κ is the upper border of the condition number (the recommended values are κ < 10 Dormann et al. (2013)). This recommendation is very useful because it provides direct control of multicollinearity.
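The three heuristics can be sketched as follows (function and rule names are ours; the eigenvalue spectrum is synthetic):

```python
import numpy as np

def components_to_retain(eigvals, rule="kaiser", threshold=1.0, kappa=10.0, fvu=0.1):
    """Number of principal components kept by each heuristic
    (eigvals: eigenvalues of the correlation matrix)."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    if rule == "kaiser":        # keep eigenvalues >= threshold (classically 1)
        return int(np.sum(lam >= threshold))
    if rule == "fvu":           # keep enough components to leave <= fvu unexplained
        explained = np.cumsum(lam) / lam.sum()
        return int(np.searchsorted(explained, 1.0 - fvu) + 1)
    if rule == "condition":     # keep eigenvalues >= lambda_max / kappa
        return int(np.sum(lam >= lam[0] / kappa))
    raise ValueError(rule)

lam = np.array([5.0, 2.0, 1.2, 0.9, 0.5, 0.25, 0.1, 0.05])  # synthetic spectrum
```

On this spectrum the rules disagree (3, 4 and 5 retained components, respectively), which is typical: the choice of heuristic is part of the modeling decision.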
After dimensionality reduction, we can perform whitening of data and apply the stochastic separation theorems. This requires a hypothesis about the distribution of data: sets of a relatively small volume should not have a high probability, and there should be no ‘heavy tails’. Unfortunately, this assumption is not always true in the practice of big data analysis. (We are grateful to G. Hinton and V. Kůrková for this comment.)
The separability properties can be affected by various violations of the i.i.d. structure of data, inhomogeneity of data, small clusters and fine-grained lumping, and other peculiarities Albergante et al. (2019). Therefore, the notion of dimension should be revisited. We proposed to use the Fisher separability of data to estimate the dimension Gorban et al. (2018). For regular probability distributions, this estimate gives the standard geometric dimension, whereas, for complex (and often more realistic) cases, it provides a more useful dimension characteristic. This approach was tested Albergante et al. (2019) on many bioinformatic datasets.
For the analysis of Fisher’s separability and the related estimation of dimensionality for general distributions and empirical datasets, an auxiliary random variable p_α(x) is used Gorban et al. (2018); Albergante et al. (2019). This is the probability that a randomly chosen point y is not Fisher-separable with threshold α from a given data point x by the discriminant (1):

p_α(x) = ∫_{(x, y) > α(x, x)} ρ(y) dy,  (5)

where ρ(y) dy is the probability measure for y.
If x is selected at random (not necessarily with the same distribution as y), then p_α(x) is a random variable. For a finite dataset Y with M points, the probability that a data point x_i ∈ Y is not Fisher-separable with threshold α from Y can be evaluated by the sum of p_α(x_i) over the remaining M − 1 points:

P(x_i is not Fisher-separable from Y) ≤ (M − 1) p_α(x_i).  (6)
Comparison of the empirical distribution of p_α(x) to the distribution evaluated for the high-dimensional sphere can be used as information about the ‘effective’ dimension of data. The probability p_α(x) is the same for all x ∈ S^{n−1} and exponentially decreases for large n. We assume that y is sampled randomly from the rotationally invariant distribution on the unit sphere S^{n−1}. For large n, the asymptotic formula holds Gorban et al. (2018); Albergante et al. (2019):

p_α ≈ (1 − α²)^{(n−1)/2} / (α √(2πn)).  (7)

Here f ≈ g means that f/g → 1 when n → ∞ (the functions here are strictly positive). It was noticed that the asymptotically equivalent formula with the denominator α √(2π(n − 1)) performs better in small dimensions Albergante et al. (2019).
The introduced measure of dimension performs competitively with other state-of-the-art measures for simple i.i.d. data situated on manifolds Gorban et al. (2018); Albergante et al. (2019). It was shown to perform better in the case of noisy samples and allows estimation of the intrinsic dimension in situations where the intrinsic manifold, regular distribution, and i.i.d. assumptions are not valid Albergante et al. (2019).
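A sketch of a separability-based dimension estimate (ours, simplified relative to Albergante et al. (2019); function names are illustrative): compare the empirical fraction of pairs violating (1) with the sphere prediction (7), and pick the best-matching dimension:

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_inseparability(X, alpha=0.8):
    """Empirical probability that (x_i, x_j) > alpha (x_i, x_i) over ordered pairs."""
    X = X - X.mean(axis=0)
    G = X @ X.T
    diag = np.diag(G).copy()
    np.fill_diagonal(G, -np.inf)
    return (G > alpha * diag[:, None]).mean()

def sphere_p(alpha, n):
    """Asymptotic inseparability probability (7) on the unit sphere S^{n-1}."""
    return (1.0 - alpha**2) ** ((n - 1) / 2) / (alpha * np.sqrt(2 * np.pi * n))

def effective_dimension(X, alpha=0.8, n_max=200):
    """Dimension whose sphere prediction best matches the empirical value (sketch)."""
    p = mean_inseparability(X, alpha)
    dims = np.arange(2, n_max + 1)
    return int(dims[np.argmin(np.abs(sphere_p(alpha, dims) - p))])

# data on a 10-dimensional sphere embedded in R^100: the estimate should
# recover a value close to the intrinsic dimension 10, not the ambient 100
Y = rng.standard_normal((2000, 10))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
X = np.hstack([Y, np.zeros((2000, 90))])
d_est = effective_dimension(X)
```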
After this revision of the definition of data dimension, we can answer the question from the title of this section: What does ‘high dimensionality’ mean? The answer is given by the stochastic separation estimates for the uniform distribution on the unit sphere S^{n−1}. Let x, y ∈ S^{n−1}, and let A_{n−1} denote the surface area of S^{n−1}. The points y ∈ S^{n−1} that are not Fisher-separable from x with a given threshold α form a spherical cap with base radius √(1 − α²) (Figure 4). The area of this cap is estimated from above by the lateral surface area of the cone with the same base, which is tangent to the sphere at the base points (see Figure 4). Therefore, the probability p that a point selected randomly from the rotationally invariant distribution on S^{n−1} is not Fisher-separable from x is estimated from above as

p ≤ A_{n−2} (1 − α²)^{(n−1)/2} / ((n − 1) α A_{n−1}).  (8)

The surface area of S^{n−1} is

A_{n−1} = 2 π^{n/2} / Γ(n/2),  (9)

where Γ is Euler’s gamma function.
Rewrite the estimate (8) as

p ≤ (1 − α²)^{(n−1)/2} Γ(n/2) / ((n − 1) α √π Γ((n − 1)/2)).  (10)

Recall that Γ is a monotonically increasing, logarithmically convex function for arguments x ≥ 2 Artin (2015). Therefore, by logarithmic convexity,

Γ(n/2) ≤ √(Γ((n − 1)/2) Γ((n + 1)/2)) = Γ((n − 1)/2) √((n − 1)/2)

(because Γ((n + 1)/2) = ((n − 1)/2) Γ((n − 1)/2)). Take into account that √((n − 1)/2) / ((n − 1)√π) = 1/√(2π(n − 1)). After elementary transforms, we get an elementary estimate for p from above:

p ≤ (1 − α²)^{(n−1)/2} / (α √(2π(n − 1))).  (11)

Compared to (7), this estimate from above is asymptotically exact.
Estimate from above the probability of separability violations using (11) and an elementary rule: for any family of events A_1, …, A_M,

P(A_1 ∪ … ∪ A_M) ≤ P(A_1) + … + P(A_M).  (12)
According to (11) and (12), if {x_1, …, x_M} is an i.i.d. sample from a rotationally invariant distribution on S^{n−1} and

M < ϑ α √(2π(n − 1)) (1 − α²)^{−(n−1)/2},  (13)

then, with probability greater than 1 − ϑ, all sample points are Fisher-separable from a given point x ∈ S^{n−1} with threshold α. Similarly, if

M(M − 1) < ϑ α √(2π(n − 1)) (1 − α²)^{−(n−1)/2},  (14)

then, with probability greater than 1 − ϑ, each sample point is Fisher-separable from the rest of the sample with threshold α.
Estimates (13) and (14) provide sufficient conditions for separability. Table 1 illustrates these estimates (the upper borders of M in these estimates are presented in the table with three significant figures). For an illustration of the separability properties, we estimated from above the sample size M for which the Fisher-separability is guaranteed with probability 0.99 (ϑ = 0.01) and threshold α = 0.8 (Table 1). These sample sizes grow fast with dimension. From the Fisher-separability point of view, dimensions 30 or 50 are already large. The effects of high-dimensional stochastic separability emerge with increasing dimensionality much earlier than, for example, the appearance of exponentially large quasi-orthogonal bases Gorban et al. (2016).
Table 1. Upper borders of the sample size M from estimates (13) and (14) for ϑ = 0.01 and α = 0.8.

            n = 10   n = 20     n = 30     n = 40     n = 50      n = 60      n = 70      n = 80
Est. (13)   5.97     1.43×10^3  2.93×10^5  5.62×10^7  1.04×10^10  1.89×10^12  3.38×10^14  5.98×10^16
Est. (14)   2.44     37.9       541        7.50×10^3  1.02×10^5   1.38×10^6   1.84×10^7   2.45×10^8
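For the parameter choice ϑ = 0.01 and α = 0.8 used here, the estimates (13) and (14) are straightforward to evaluate (a sketch; the function name is ours):

```python
import numpy as np

def max_sample_size(n, alpha=0.8, theta=0.01, pairwise=True):
    """Upper border of M from estimate (14) (pairwise=True: every point separable
    from the rest) or estimate (13) (separability from one given point)."""
    bound = theta * alpha * np.sqrt(2 * np.pi * (n - 1)) * (1 - alpha**2) ** (-(n - 1) / 2)
    return np.sqrt(bound) if pairwise else bound

# dimensions 30-50 already behave as 'high-dimensional' for Fisher separability
sizes = {n: max_sample_size(n) for n in (10, 20, 30, 50)}
```

The guaranteed sample sizes grow roughly as (1 − α²)^{−n/2} for (13) and as its square root for (14), which is the exponential growth with dimension discussed above.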
5 Discussion: The Heresy of Unheard-of Simplicity and the Single-Cell Revolution in Neuroscience
V. Kreinovich Kreinovich (2019) summarised the impression from the effective AI correctors based on Fisher’s discriminant in high dimensions as “the heresy of unheard-of simplicity”, quoting the famous Pasternak poem. Such simplicity also appears in brain functioning. Despite our expectation that complex intellectual phenomena are the result of a perfectly orchestrated collaboration between many different cells, there is the phenomenon of sparse coding, concept cells, or so-called ‘grandmother cells’, which selectively react to specific concepts like a grandmother or a well-known actress (‘Jennifer Aniston cells’) Quian Quiroga et al. (2005). These experimental results continue the single neuron revolution in sensory psychology Barlow (1972).
The idea of grandmother or concept cells was proposed in the late 1960s. In 1972, Barlow published a manifesto of the single-neuron revolution in sensory psychology Barlow (1972). He suggested that "our perceptions are caused by the activity of a rather small number of neurons selected from a very large population of predominantly silent cells." Barlow presented much experimental evidence of single-cell perception. In all these examples, neurons reacted selectively to key patterns (called 'trigger features'), and this reaction was invariant to various changes in conditions.
The modern point of view on the single-cell revolution was briefly summarised recently by R. Quian Quiroga Quian Quiroga (2019). He mentioned that 'grandmother cells' were invented by Lettvin "to ridicule the idea that single neurons can encode specific concepts". Later discoveries changed the situation and added more meaning and detail to these ideas. The idea of concept cells has evolved over the decades. According to Quian Quiroga, these cells are not involved in identifying a particular stimulus or concept. They are rather involved in creating and retrieving associations and can be seen as the "building blocks of episodic memory". Many recent discoveries used data recorded from intracranial electrodes implanted for clinical reasons in the medial temporal lobe (MTL; the hippocampus and surrounding cortex) of patients. The activity of dozens of neurons can be recorded while patients perform different tasks. Neurons with high selectivity and invariance were found. In particular, one neuron fired at the presentation of seven different pictures of Jennifer Aniston and of her spoken and written name, but not at 80 pictures of other persons. The emergence of associations between images was also discovered.
Some important memory functions are performed by stratified brain structures, such as the hippocampus. The CA1 region of the hippocampus includes a monolayer of morphologically similar pyramidal cells oriented parallel to the main axis (Figure 5). In humans, the CA1 region of the hippocampus contains millions of pyramidal neurons. Excitatory inputs to these neurons come from the CA3 regions (ipsi- and contralateral). Each CA3 pyramidal neuron sends an axon that bifurcates and leaves multiple collaterals in the CA1 (Figure 5b). This structural organization allows the transmission of multidimensional information from the CA3 region to neurons in the CA1 region. Thus, we have simultaneous convergence and divergence of the information content (Figure 5b, right). A single pyramidal cell can receive around 30,000 excitatory and 1700 inhibitory inputs (data for rats Megías et al. (2001)). Moreover, the numbers of synaptic contacts vary greatly between neurons, with non-uniform and clustered connectivity patterns Druckmann et al. (2014). This variability is considered part of the mechanism enhancing neuronal feature selectivity Druckmann et al. (2014). However, anatomical connectivity is not automatically translated into functional connectivity, and a realistic model should reduce the number of functional connections significantly, by several orders of magnitude (see, for example, Brivanlou et al. (2004)). Nevertheless, even several dozens of effective functional connections are sufficient for the application of stochastic separation theorems (see Table 1).
For sufficiently high-dimensional sets of input signals, a fairly simple functional neuronal model with Hebbian learning (the generalized Oja rule Tyukin et al. (2019); Gorban et al. (2019)) is capable of explaining the following phenomena:

the extreme selectivity of single neurons to the information content of high-dimensional data (Figure 5(c1)),

simultaneous separation of several uncorrelated informational items from a large set of stimuli (Figure 5(c2)),

dynamic learning of new items by associating them with already known ones (Figure 5(c3)).

These results constitute a basis for the organization of complex memories in ensembles of single neurons.
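The single-neuron selectivity above can be illustrated with a toy model. The sketch below uses the classic Oja rule, $\Delta w = \eta\, y\,(x - y\,w)$, as a stand-in for the generalized Oja rule of Tyukin et al. (2019), which differs in detail; the stimulus set, input dimension, and learning rate are hypothetical choices for illustration only.

```python
import numpy as np

def oja_train(stimuli, eta=0.01, epochs=200, seed=1):
    """Single linear neuron y = <w, x> trained with Oja's rule,
    dw = eta * y * (x - y * w); the -y**2 * w term keeps ||w|| bounded."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(stimuli.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in stimuli[rng.permutation(len(stimuli))]:
            y = w @ x
            w += eta * y * (x - y * w)
    return w

rng = np.random.default_rng(0)
n = 200                                                  # high-dimensional input
background = rng.standard_normal((50, n)) / np.sqrt(n)   # 50 distractor stimuli
target = rng.standard_normal(n)
target /= np.linalg.norm(target)                         # the "concept" to learn
stimuli = np.vstack([background, np.tile(target, (50, 1))])  # target shown often
w = oja_train(stimuli)
target_response = abs(w @ target)
max_background_response = float(np.abs(background @ w).max())
```

After training, the neuron responds strongly to the repeated target stimulus and only weakly to every background stimulus, a minimal analogue of the extreme selectivity in Figure 5(c1).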
Retraining large ensembles of neurons is extremely time- and resource-consuming, both in the brain and in machine learning. In many real-life situations and applications, such retraining is, in fact, impossible to realize. "The existence of high discriminative units and a hierarchical organization for error correction are fundamental for effective information encoding, processing and execution, also relevant for fast learning and to optimize memory capacity" Varona (2019).
The multidimensional brain is the most puzzling example of the 'heresy of unheard-of simplicity', but the same phenomenon has been observed in social sciences and in many other disciplines Kreinovich (2019).
There is a fundamental difference and complementarity between the analysis of essentially high-dimensional datasets, where simple linear methods are applicable, and reducible datasets, for which nonlinear methods are needed both for reduction and for analysis Gorban and Tyukin (2018). In neuroscience, this alternative was described as the high-dimensional 'brainland' versus the low-dimensional 'flatland' Barrio (2019). The specific multidimensional effects of the 'blessing of dimensionality' can be considered as the deepest reason for the discovery of small groups of neurons that control important physiological phenomena. On the other hand, even low-dimensional data often live in a higher-dimensional space, and the dynamics of low-dimensional models should be naturally embedded into the high-dimensional 'brainland'. Thus, a "crucial problem nowadays is the 'game' of moving from 'brainland' to 'flatland' and backward" Barrio (2019).
C. van Leeuwen formulated a radically opposite point of view van Leeuwen (2019): neither high-dimensional linear models nor low-dimensional nonlinear models have a serious relation to the brain.
The devil is in the detail. First of all, preprocessing is always needed to extract the relevant features. The linear method of choice is PCA, and various versions of nonlinear PCA can also be useful Gorban et al. (2008). After that, nobody has a guarantee that the dataset is either essentially high-dimensional or reducible. It can be a mixture of both alternatives; therefore, the extraction of a reducible lower-dimensional subset for nonlinear analysis and the linear analysis of the high-dimensional residuals may be needed together.
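This splitting can be sketched in a few lines: center the data, take the leading principal components as the candidate reducible part, and keep the residual for linear analysis. The function name, the 0.95 variance threshold, and the synthetic mixture below are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

def pca_split(data, var_threshold=0.95):
    """Center the data, compute PCA via SVD, and split it into a low-dimensional
    'reducible' part (leading components explaining var_threshold of the
    variance) and a high-dimensional residual."""
    x = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    var = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var), var_threshold)) + 1
    low = x @ vt[:k].T            # coordinates for nonlinear analysis
    residual = x - low @ vt[:k]   # high-dimensional remainder for linear methods
    return low, residual, k

# Mixture: a 3-dimensional signal embedded in 100 dimensions plus small noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 100))
noise = 0.05 * rng.standard_normal((500, 100))
low, residual, k = pca_split(signal + noise)
```

Here PCA recovers the three-dimensional reducible part, while the small high-dimensional residual is exactly the component for which linear, separability-based methods remain appropriate.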
Conceptualization, ANG, VAM and IYT; Methodology, ANG, VAM and IYT; Writing–Original Draft Preparation, ANG; Writing–Editing, ANG, VAM and IYT; Visualization, ANG, VAM and IYT.
The work was supported by the Ministry of Science and Higher Education of the Russian Federation (Project No. 14.Y26.31.0022). Work of ANG and IYT was also supported by Innovate UK (Knowledge Transfer Partnership grants KTP009890; KTP010522) and University of Leicester. VAM acknowledges support from the Spanish Ministry of Economy, Industry, and Competitiveness (grant FIS201782900P).
The authors declare no conflict of interest. \reftitleReferences
References
 Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957.
 Bellman, R. The theory of dynamic programming. Bull. Amer. Math. Soc. 1954, 60, 503–515.
 Bellman, R.; Kalaba, R. Reduction of dimensionality, dynamic programming, and control processes. J. Basic Eng. 1961, 83, 82–84, https://doi.org/10.1115/1.3658896.
 Gorban, A.N.; Kazantzis, N.; Kevrekidis, I.G.; Öttinger, H.C.; Theodoropoulos, C. (Eds.) Model Reduction and Coarse–Graining Approaches for Multiscale Phenomena; Springer: Berlin/Heidelberg, Germany, 2006.
 Jolliffe, I. Principal Component Analysis; Springer: Berlin/Heidelberg, Germany, 1993.
 Gorban, A.N.; Kégl, B.; Wunsch, D.; Zinovyev, A. (Eds.) Principal Manifolds for Data Visualisation and Dimension Reduction; Springer: Berlin/Heidelberg, Germany, 2008; https://doi.org/10.1007/978-3-540-73750-6.
 Gorban, A.N.; Zinovyev, A. Principal manifolds and graphs in practice: from molecular biology to dynamical systems. Int. J. Neural Syst. 2010, 20, 219–232, https://doi.org/10.1142/S0129065710002383.
 Cichocki, A.; Lee, N.; Oseledets, I.; Phan, A.H.; Zhao, Q.; Mandic, D.P. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Found. Trends Mach. Learn. 2016, 9, 249–429, https://doi.org/10.1561/2200000059.
 Cichocki, A.; Phan, A.H.; Zhao, Q.; Lee, N.; Oseledets, I.; Sugiyama, M.; Mandic, D.P. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Found. Trends Mach. Learn. 2017, 9, 431–673, https://doi.org/10.1561/2200000067.
 Beyer, K.; Goldstein, J.; Ramakrishnan, R.; Shaft, U. When is “nearest neighbor” meaningful? In Proceedings of the 7th International Conference on Database Theory (ICDT), Jerusalem, Israel, 10–12 January 1999; pp. 217–235, https://doi.org/10.1007/3-540-49257-7_15.
 Pestov, V. Is the kNN classifier in high dimensions affected by the curse of dimensionality? Comput. Math. Appl. 2013, 65, 1427–1437, https://doi.org/10.1016/j.camwa.2012.09.011.
 Kainen, P.C. Utilizing geometric anomalies of high dimension: when complexity makes computation easier. In Computer-Intensive Methods in Control and Signal Processing: The Curse of Dimensionality; Warwick, K., Kárný, M., Eds.; Springer: New York, NY, USA, 1997; pp. 283–294, https://doi.org/10.1007/978-1-4612-1996-5_18.
 Brown, B.M.; Hall, P.; Young, G.A. On the effect of inliers on the spatial median. J. Multivar. Anal. 1997, 63, 88–104, https://doi.org/10.1006/jmva.1997.1691.
 Donoho, D.L. High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality; Invited lecture at Mathematical Challenges of the 21st Century, AMS National Meeting, Los Angeles, CA, USA, August 6–12, 2000; CiteSeerX 10.1.1.329.3392.
 Chen, D.; Cao, X.; Wen, F.; Sun, J. Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3025–3032, https://doi.org/10.1109/CVPR.2013.389.
 Liu, G.; Liu, Q.; Li, P. Blessing of dimensionality: Recovering mixture data via dictionary pursuit. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 47–60, https://doi.org/10.1109/TPAMI.2016.2539946.
 Murtagh, F. The remarkable simplicity of very high dimensional data: application of model-based clustering. J. Classif. 2009, 26, 249–277, https://doi.org/10.1007/s00357-009-9037-9.
 Anderson, J.; Belkin, M.; Goyal, N.; Rademacher, L.; Voss, J. The More, the Merrier: the Blessing of Dimensionality for Learning Large Gaussian Mixtures. In Proceedings of The 27th Conference on Learning Theory, Barcelona, Spain, 13–15 June 2014; Balcan, M.F.; Feldman, V.; Szepesvári, C., Eds.; PMLR: Barcelona, Spain, 2014; Volume 35, pp. 1135–1164.
 Gorban, A.N.; Tyukin, I.Y.; Romanenko, I. The blessing of dimensionality: Separation theorems in the thermodynamic limit. IFAC-PapersOnLine 2016, 49, 64–69, https://doi.org/10.1016/j.ifacol.2016.10.755.
 Li, Q.; Cheng, G.; Fan, J.; Wang, Y. Embracing the blessing of dimensionality in factor models. J. Am. Stat. Assoc. 2018, 113, 380–389, https://doi.org/10.1080/01621459.2016.1256815.
 Landgraf, A.J.; Lee, Y. Generalized principal component analysis: Projection of saturated model parameters. Technometrics 2019, https://doi.org/10.1080/00401706.2019.1668854.
 Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306, https://doi.org/10.1109/TIT.2006.871582.
 Donoho, D.; Tanner, J. Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing. Phil. Trans. R. Soc. A 2009, 367, 4273–4293, https://doi.org/10.1098/rsta.2009.0152.
 Candes, E.; Rudelson, M.; Tao, T.; Vershynin, R. Error correction via linear programming. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05), Pittsburgh, PA, USA, 23–25 October 2005; pp. 668–681, https://doi.org/10.1109/SFCS.2005.5464411.
 Pereda, E.; García-Torres, M.; Melián-Batista, B.; Mañas, S.; Méndez, L.; González, J.J. The blessing of dimensionality: Feature selection outperforms functional connectivity-based feature transformation to classify ADHD subjects from EEG patterns of phase synchronisation. PLoS ONE 2018, 13, e0201660, https://doi.org/10.1371/journal.pone.0201660.
 Kainen, P.; Kůrková, V. Quasiorthogonal dimension of Euclidean spaces. Appl. Math. Lett. 1993, 6, 7–10, https://doi.org/10.1016/0893-9659(93)90023-G.
 Hall, P.; Marron, J.; Neeman, A. Geometric representation of high dimension, low sample size data. J. Royal Stat. Soc. B 2005, 67, 427–444, https://doi.org/10.1111/j.1467-9868.2005.00510.x.
 Gorban, A.N.; Tyukin, I.; Prokhorov, D.; Sofeikov, K. Approximation with random bases: Pro et contra. Inf. Sci. 2016, 364–365, 129–145, https://doi.org/10.1016/j.ins.2015.09.021.
 Dasgupta, S.; Gupta, A. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Struct. Algor. 2003, 22, 60–65, https://doi.org/10.1002/rsa.10073.
 Gorban, A.N.; Tyukin, I.Y. Blessing of dimensionality: mathematical foundations of the statistical physics of data. Phil. Trans. R. Soc. A 2018, 376, 20170237, https://doi.org/10.1098/rsta.2017.0237.
 Vershynin, R. High-Dimensional Probability: An Introduction with Applications in Data Science; Cambridge Series in Statistical and Probabilistic Mathematics; Cambridge University Press: Cambridge, UK, 2018.
 Giannopoulos, A.A.; Milman, V.D. Concentration property on probability spaces. Adv. Math. 2000, 156, 77–106, https://doi.org/10.1006/aima.2000.1949.
 Ledoux, M. The Concentration of Measure Phenomenon; Number 89 in Mathematical Surveys & Monographs; AMS: Providence, RI, USA, 2005.
 Gibbs, J.W. Elementary Principles in Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics; Dover Publications: New York, NY, USA, 1960.
 Gromov, M. Isoperimetry of waists and concentration of maps. Geom. Funct. Anal. 2003, 13, 178–215, https://doi.org/10.1007/s0003900907031.
 Lévy, P. Problèmes Concrets D’analyse Fonctionnelle; GauthierVillars: Paris, France, 1951.
 Dubhashi, D.P.; Panconesi, A. Concentration of Measure for the Analysis of Randomized Algorithms; Cambridge University Press: Cambridge, UK, 2009.
 Ball, K. An Elementary Introduction to Modern Convex Geometry. In Flavors of Geometry; Cambridge University Press: Cambridge, UK, 1997; Volume 31.
 Gorban, A.N.; Golubkov, A.; Grechuk, B.; Mirkes, E.M.; Tyukin, I.Y. Correction of AI systems by linear discriminants: Probabilistic foundations. Inf. Sci. 2018, 466, 303–322, https://doi.org/10.1016/j.ins.2018.07.040.
 Gorban, A.N.; Makarov, V.A.; Tyukin, I.Y. The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Phys. Life Rev. 2019, 29, 55–88, https://doi.org/10.1016/j.plrev.2018.09.005.
 Bárány, I.; Füredi, Z. On the shape of the convex hull of random points. Probab. Theory Relat. Fields 1988, 77, 231–240, https://doi.org/10.1007/BF00334039.
 Gorban, A.N.; Tyukin, I.Y. Stochastic separation theorems. Neural Netw. 2017, 94, 255–259, https://doi.org/10.1016/j.neunet.2017.07.014.
 Tyukin, I.Y.; Gorban, A.N.; McEwan, A.A.; Meshkinfamfard, S. Blessing of dimensionality at the edge. arXiv 2019, arXiv:1910.00445.
 Gorban, A.N.; Burton, R.; Romanenko, I.; Tyukin, I.Y. Onetrial correction of legacy AI systems and stochastic separation theorems. Inf. Sci. 2019, 484, 237–254, https://doi.org/10.1016/j.ins.2019.02.001.
 Fisher, R.A. The Use of Multiple Measurements in Taxonomic Problems. Ann. Eugenics 1936, 7, 179–188, https://doi.org/10.1111/j.1469-1809.1936.tb02137.x.
 Kůrková, V. Some insights from high-dimensional spheres: Comment on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by Alexander N. Gorban et al. Phys. Life Rev. 2019, 29, 98–100, https://doi.org/10.1016/j.plrev.2019.03.014.
 Gorban, A.N.; Makarov, V.A.; Tyukin, I.Y. Symphony of high-dimensional brain. Reply to comments on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain”. Phys. Life Rev. 2019, 29, 115–119, https://doi.org/10.1016/j.plrev.2019.06.003.
 Grechuk, B. Practical stochastic separation theorems for product distributions. In Proceedings of the IEEE IJCNN 2019—International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; https://doi.org/10.1109/IJCNN.2019.8851817.
 Kůrková, V.; Sanguineti, M. Probabilistic Bounds for Binary Classification of Large Data Sets. In Proceedings of the International Neural Networks Society, Genova, Italy, 16–18 April 2019; Oneto, L., Navarin, N., Sperduti, A., Anguita, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 1, pp. 309–319, https://doi.org/10.1007/978-3-030-16841-4_32.
 Tyukin, I.Y.; Gorban, A.N.; Grechuk, B. Kernel Stochastic Separation Theorems and Separability Characterizations of Kernel Classifiers. In Proceedings of the IEEE IJCNN 2019—International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; https://doi.org/10.1109/IJCNN.2019.8852278.
 Meshkinfamfard, S.; Gorban, A.N.; Tyukin, I.V. Tackling Rare False-Positives in Face Recognition: A Case Study. In Proceedings of the 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS); IEEE: Exeter, United Kingdom, 2018; pp. 1592–1598, https://doi.org/10.1109/HPCC/SmartCity/DSS.2018.00260.
 Tyukin, I.Y.; Gorban, A.N.; Green, S.; Prokhorov, D. Fast construction of correcting ensembles for legacy artificial intelligence systems: Algorithms and a case study. Inf. Sci. 2019, 485, 230–247, https://doi.org/10.1016/j.ins.2018.11.057.
 Tyukin, I.Y.; Gorban, A.N.; Sofeikov, K.; Romanenko, I. Knowledge transfer between artificial intelligence systems. Front. Neurorobot. 2018, 12, https://doi.org/10.3389/fnbot.2018.00049.
 Allison, P.M.; Sofeikov, K.; Levesley, J.; Gorban, A.N.; Tyukin, I.; Cooper, N.J. Exploring automated pottery identification [ArchIScan]. Internet Archaeol. 2018, 50, https://doi.org/10.11141/ia.50.11.
 Romanenko, I.; Gorban, A.; Tyukin, I. Image Processing. US Patent 10,489,634 B2, Nov. 26, 2019. Available online: https://patents.google.com/patent/US10489634B2/en (accessed on 5 January 2020).
 Xu, R.; Wunsch, D. Clustering; Wiley: Hoboken, NJ, USA, 2008.
 Moczko, E.; Mirkes, E.M.; Cáceres, C.; Gorban, A.N.; Piletsky, S. Fluorescencebased assay as a new screening tool for toxic chemicals. Sci. Rep. 2016, 6, 33922, https://doi.org/10.1038/srep33922.
 Dormann, C.F.; Elith, J.; Bacher, S.; Buchmann, C.; Carl, G.; Carré, G.; Marquéz, J.R.; Gruber, B.; Lafourcade, B.; Leitão, P.J.; et al. Collinearity: A review of methods to deal with it and a simulation study evaluating their performance. Ecography 2013, 36, 27–46, https://doi.org/10.1111/j.1600-0587.2012.07348.x.
 Albergante, L.; Bac, J.; Zinovyev, A. Estimating the effective dimension of large biological datasets using Fisher separability analysis. In Proceedings of the IEEE IJCNN 2019—International Joint Conference on Neural Networks, Budapest, Hungary, 14–19 July 2019; https://doi.org/10.1109/IJCNN.2019.8852450.
 Artin, E. The Gamma Function; Courier Dover Publications: Mineola, NY, USA, 2015.
 Kreinovich, V. The heresy of unheard-of simplicity: Comment on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by A.N. Gorban, V.A. Makarov, and I.Y. Tyukin. Phys. Life Rev. 2019, 29, 93–95, https://doi.org/10.1016/j.plrev.2019.04.006.
 Quian Quiroga, R.; Reddy, L.; Kreiman, G.; Koch, C.; Fried, I. Invariant visual representation by single neurons in the human brain. Nature 2005, 435, 1102–1107, https://doi.org/10.1038/nature03687.
 Barlow, H.B. Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1972, 1, 371–394, https://doi.org/10.1068/p010371.
 Quian Quiroga, R. Akakhievitch revisited: Comment on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by Alexander N. Gorban et al. Phys. Life Rev. 2019, 29, 111–114, https://doi.org/10.1016/j.plrev.2019.02.014.
 Megías, M.; Emri, Z.S.; Freund, T.F.; Gulyás, A.I. Total number and distribution of inhibitory and excitatory synapses on hippocampal CA1 pyramidal cells. Neuroscience 2001, 102, 527–540, https://doi.org/10.1016/S0306-4522(00)00496-6.
 Druckmann, S.; Feng, L.; Lee, B.; Yook, C.; Zhao, T.; Magee, J.C.; Kim, J. Structured synaptic connectivity between hippocampal regions. Neuron 2014, 81, 629–640, https://doi.org/10.1016/j.neuron.2013.11.026.
 Brivanlou, I.H.; Dantzker, J.L.; Stevens, C.F.; Callaway, E.M. Topographic specificity of functional connections from hippocampal CA3 to CA1. Proc. Natl. Acad. Sci. USA 2004, 101, 2560–2565, https://doi.org/10.1073/pnas.0308577100.
 Tyukin, I.; Gorban, A.N.; Calvo, C.; Makarova, J.; Makarov, V.A. High-dimensional brain: A tool for encoding and rapid learning of memories by single neurons. Bull. Math. Biol. 2019, 81, 4856–4888, https://doi.org/10.1007/s11538-018-0415-5.
 Varona, P. High and low dimensionality in neuroscience and artificial intelligence: Comment on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by A.N. Gorban et al. Phys. Life Rev. 2019, 29, 106–107, https://doi.org/10.1016/j.plrev.2019.02.008.
 Barrio, R. “Brainland” vs. “flatland”: how many dimensions do we need in brain dynamics? Comment on the paper “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by Alexander N. Gorban et al. Phys. Life Rev. 2019, 29, 108–110, https://doi.org/10.1016/j.plrev.2019.02.010.
 van Leeuwen, C. The reasonable ineffectiveness of biological brains in applying the principles of high-dimensional cybernetics: Comment on “The unreasonable effectiveness of small neural ensembles in high-dimensional brain” by Alexander N. Gorban et al. Phys. Life Rev. 2019, 29, 104–105, https://doi.org/10.1016/j.plrev.2019.03.005.