Classification of Fixed Point Network Dynamics From Multiple Node Timeseries Data
Fixed point networks are dynamic networks that encode stimuli via distinct output patterns. Although such networks are omnipresent in neural systems, their structures are typically unknown or poorly characterized. It is therefore valuable to use a supervised approach for resolving how a network encodes distinct inputs of interest, and the superposition of those inputs, from sampled multiple-node time series. In this paper we show that accomplishing such a task involves finding a low-dimensional state space from supervised recordings. We demonstrate that standard methods for dimension reduction are unable to provide the desired functionality of optimal separation of the fixed points and the transient trajectories leading to them. However, the combination of dimension reduction with selection and optimization can provide such functionality. Specifically, we propose two methods, Exclusive Threshold Reduction (ETR) and Optimal Exclusive Threshold Reduction (OETR), for finding a basis for the classification state space. We show that the classification space constructed upon the combination of dimension reduction and optimal separation can directly facilitate recognition of stimuli and classify complex inputs (mixtures) into similarity classes. We test our methodology, and compare it to standard state-of-the-art methods, on a benchmark dataset of recordings that we obtained from an experimental neuronal network (the olfactory system). We show that our methods are capable of providing a basis for the classification space in such a network and of performing recognition at a significantly better rate than previously proposed approaches.
Robust neural networked systems encode their dynamics by producing attractors in a low-dimensional state space. The attractors represent the neural network response to various inputs and reflect particular states of the system. Such networks are common in neuronal systems that process sensory stimuli, command motor systems, or store memory [2, 6, 20]. The manifestation of attractors ensures robustness of network performance, such that, for a range of network initializations and stimuli, the network exhibits reliable dynamics. The simplest type of attractor is the input-induced fixed point: input signals (e.g., step functions) are triggered into a subset of network nodes, and as a response, after transient dynamics, a subset of the network output nodes produces a steady-state pattern. In neuronal networks, these patterns are identified as neural codes. The network is considered selective when it is capable of distinguishing between stimuli by producing distinct fixed points. In particular, the network will produce similar fixed points for similar stimuli and distinguishable fixed points for distinct stimuli, where similarity is defined by a metric in a low-dimensional space. Such functionality is the primary principle upon which fixed-point networks incorporate recognition and quantification of mixed stimuli. In addition, these networks are typically ‘robust’, as they are capable of performing under high dynamic input variability, i.e., low signal-to-noise ratio (SNR).
A fascinating question is how to infer the low-dimensional state space, and the fixed points within it, from sampled time-series data of the network. It is a particularly relevant problem when it is of interest to characterize the functionality of black-box networks: networks whose connectivity and node dynamics are unknown, so that probing the network response to various stimuli is the only resort. A structural approach for construction of the low-dimensional state space is supervised classification. In such an approach, a set of distinct inputs is applied to the network independently, and samples of network activity, i.e., time series of multiple node dynamics, are obtained. Thereby, for each input, the fragment of sampled time series corresponds to a matrix with ‘nodes’ and ‘time’ dimensions, and the whole database of sampled responses corresponds to a collection of matrices (see Fig. 1). The classification task for such a collection is to find the most informative low-dimensional state space in which the number of distinct fixed points equals the number of presented distinct inputs, such that the low-rank space can be used for examining the transient dynamics reaching these fixed points and their superpositions. Such a space would be considered optimally selective when both the fixed points and the transient dynamics leading to them are maximally separated (orthogonal). For example, for a collection that constitutes responses to three inputs, we expect to find three basis vectors, each representing a single stimulus.
For neuronal sensory networks, the collection would typically correspond to instantaneous firing rates of neurons (peri-stimulus time histograms) obtained from multi-neuron recordings when the neural system is stimulated. A particularly intriguing system is the olfactory neuronal network in insects. Within the network, the antennal lobe, the primary processing unit for olfactory neural responses, receives input from olfactory receptors and responds with output activity that discriminates between odorants and classifies them into similarity classes. Experiments have shown that such classification is associated with behavior, and can adapt over time or after training [14, 15]. Furthermore, analysis of multi-unit electrophysiological recordings from output (projection) neurons shows that the network employs input-induced fixed points to classify the olfactory stimuli.
Since the collection consists of matrices, a natural methodology for inference of the low-dimensional space is structuring all sampled response matrices into a single matrix and employing classical multivariate matrix decomposition methods, such as Singular Value Decomposition (SVD), or sparse representations computed by L1 minimization, that identify dominant orthogonal patterns [18, 19]. However, in practice, due to the ambiguity of SVD (i.e., its indifference to the structure of the data and its sensitivity to low SNR in the inputs), the resulting modes are mixed and do not uniquely discriminate single inputs from the collection. Alternatively, application of dimension reduction methods to each of the sub-matrices individually produces unambiguous low-rank representations; however, it introduces the problem of gathering the obtained patterns into a single, joint, meaningful representation. This is a generic problem of finding an optimal low-rank representation of a multiple-instance data matrix.
To address the problem, here we propose a classification method that leverages the benefits of the two approaches: orthogonality and non-ambiguity with respect to the structure of the data. The method, called exclusive threshold reduction (ETR), operates on the low-rank representations obtained from individual applications of data reduction to each sub-matrix, and creates an orthogonal basis. Effectively, the method exclusively associates each neuron with one of the stimuli and assigns a weight to the association. We show that such an approach can successfully separate response trajectories to the various stimuli when the response matrix is projected onto the basis. In addition, we formulate an optimization routine, called optimal exclusive threshold reduction (OETR), which allows us to achieve maximal separation of fixed points. For evaluation and demonstration of our methodology we have obtained a benchmark database of multi-unit time series recordings from the antennal lobe projection neurons in the Manduca sexta moth. The recordings were obtained from the neuronal network responses to stimulation with constituents of the Datura flower scent (single molecules and mixtures), a major food resource for the moth. The dataset is divided into a training set for construction of the classification space, and a testing set to evaluate our methodology and compare it with other approaches. In particular, an important test is determining how well the approach discriminates between distinct stimuli and captures the similarity of various mixtures; for example, the method should be able to classify the network response to a stimulus as a behavioral or non-behavioral neural code. We show that the OETR method allows such performance for various numbers of dimensions of the classification space and mixture inputs.
II-A Classification from a Multiple Node Collection
For supervised classification we consider a set of input nodes which receive a set of distinct stimuli. To monitor the network response, we consider a set of output nodes. In the training stage of supervised classification, independent stimuli are applied to the network, and the time series of output nodes that produce dynamics attracted to stable fixed points are recorded. For each stimulus, the recorded output node time series correspond to a matrix with rows being the nodes and columns being the recording time stamps. Therefore, the training stage of supervised classification produces a collection of such matrices, one multi-node time series of network responses per stimulus (see Fig. 1 for an example). The goal of the classification is to find an optimal representation such that the responses to distinct stimuli in the collection are separable. Effectively, the problem is related to finding a separating hyperplane between each fixed point, with its associated trajectories that approach/leave it, and all other fixed points with their associated trajectories. In practice, the recordings are noisy, since they incorporate large variability in time and space of the input.
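As an illustration, the training collection can be laid out as a set of node-by-time matrices. A minimal NumPy sketch, with hypothetical sizes and names (random data stands in for recordings):

```python
import numpy as np

# Hypothetical sizes: N output nodes, T recording time stamps, K stimuli.
N, T, K = 106, 100, 3
rng = np.random.default_rng(0)

# The training collection: one N x T response matrix per stimulus,
# rows = output nodes, columns = recording time stamps.
collection = {f"stimulus_{k}": rng.random((N, T)) for k in range(K)}
```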
To quantify the success of various classification methods we have established a benchmark dataset for classification and recognition. The benchmark consists of a collection of responses of neurons within a neurobiological network that exhibits fixed point dynamics. Specifically, to create the benchmark, we obtained electrophysiological multi-neuron recordings from projection (output) neurons within the antennal lobe neuronal network, the primary olfactory processing unit in insects. It was shown that the response of this neuronal network to odor stimuli is expressed in terms of fixed point dynamics of projection neurons. To obtain the collection, spiking activity of multiple projection neurons (N=106) was recorded from the antennal lobe of a Manduca sexta moth subject to distinct odor molecules (odorants) and their mixtures extracted from the Datura wrightii flower, a major food resource for the moth. Since realistic response times are on the order of a few hundred milliseconds (200-400 msec), each response was recorded for 1 sec. The recordings were repeated over 5 trials with restoration intervals in between the trials. The benchmark includes responses to 8 odorants (labeled as: bea, bol, lin, car, ner, far, myr, ger), 1 control odor (labeled as: crl), and 8 mixture odors (labeled as: e2, e3, e3b, p2, p3, p4, p5, p9); see the Table in Fig. 2 for more details. Mixtures are labeled as ‘behavioral’, identifying that the behavioral response to these stimuli is similar to the response to the Datura floral scent, or ‘non-behavioral’, to denote the mixtures and odorants which do not elicit a significant behavioral response. To work with continuous time series, neural spiking activity was transformed to peri-stimulus time histograms (instantaneous firing rates), which are nonnegative values.
With the established benchmark we first demonstrate that finding a hyperplane that separates the data points is not obvious, due to the high dimensionality and variability (noise) of the data, thus precluding direct classification. Therefore, methods that directly identify such a separation are not expected to yield a precise classification. To test several standard methods with the benchmark, we first used the Support Vector Machine (SVM) classifier [13, 5] with a binary classification scheme in which the data was separated into two classes: responses triggered by stimuli of interest and responses triggered by other stimuli. SVM is supposed to classify the responses by finding the optimal hyperplane that separates all data points into one of the two classes (see further details on the method in the Appendix). For this binary classification problem we found the performance to be low, with average classification accuracies around 50% for various binary classes, as we show in Fig. 3. We find that both precision (the percentage of true positives out of instances classified as positive) and recall (the percentage of true positives out of instances expected to be positive) are low, indicating poor performance in both the usefulness of the classification (precision) and its completeness (recall), comparable to a random guessing strategy for the binary test sets that we created.
A possible explanation for the failure of SVM on the benchmark is that the points are not linearly separable. We therefore tested nonlinear versions of the classifier, e.g., using a Gaussian kernel; however, they produced similar classification errors. Another hypothesis for the low performance of SVM is that the response dynamics reside in a lower dimension than both the nodes dimension and the number of data points upon which the separating hyperplane is computed. This creates a situation where the data is overfit. Another obstacle could be class imbalance: when SVM classifiers are trained on an imbalanced dataset, they can produce biases towards the majority class and misrepresent the minority class. We therefore tested other techniques designed to deal with class imbalance, such as SMOTE and RUSBoost, with the benchmark. However, we did not obtain significant improvement with these methods over SVM classification (Fig. 3). We therefore conclude that direct classification from the complete dataset is difficult, and approaches that incorporate pre-processing of the data are necessary. In particular, we identify that finding an appropriate low-dimensional state space, in which distinct fixed points and their associated dynamics are easily separable, could significantly simplify classification.
II-B Classification Using Matrix Decomposition Applied to the Timeseries Collection
Since classification methods applied to the high-dimensional collection are unable to provide a robust representation, an alternative approach is to represent the data in a low-dimensional space. The structure of the collection as a set of matrices suggests that matrix decomposition could be a useful tool for finding such a low-dimensional basis. There are several possibilities for creating a matrix to be used for decomposition from the collection. The first possibility is to format the collection into a single concatenated matrix consisting of the response sub-matrices placed side by side.
Data reduction methods can then be applied to the concatenated matrix to decompose it into time-dependent coefficient vectors and spatial patterns (neural codes).
Our goal is to find the most informative decomposition, in which the number of patterns equals the number of distinct presented inputs, and the patterns span a low-rank space of projections in which their dynamics, i.e., the time-dependent coefficients, and their superpositions can be examined. For example, for a matrix that constitutes the responses to three inputs we expect to get three pattern vectors, each representing a single stimulus. Classical multivariate decompositions of the concatenated matrix, e.g., Singular Value Decomposition (SVD) / Principal Component Analysis (PCA), or sparse representations computed by L1 minimization, can identify dominant orthogonal patterns [7, 18].
When computing the SVD of the concatenated matrix, a procedure we call SVDCon, the column vectors of the left orthonormal matrix correspond to the pattern vectors: the k-th column vector is the k-th pattern. Each pattern has an associated time-dependent coefficient vector, given by the k-th row vector of the right orthonormal matrix scaled by the k-th singular value. The singular values indicate the weight of each pattern vector, since they are nonnegative and ordered in decreasing magnitude, and since the left and right singular vectors are unit vectors. To determine the relative weights of the pattern vectors it is possible to define the relative energy of each pattern as its squared singular value normalized by the sum of all squared singular values, which yields a total energy normalized to one, with each relative energy indicating the ‘percentage’ of the energy in its pattern. The distribution of relative energies provides an estimate of the effectiveness of the decomposition in finding a low-dimensional representation. When the first several values stand out, the patterns corresponding to these singular values are more dominant than the others, and truncation of the remaining modes maintains a reasonable approximation of the original matrix.
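The relative-energy diagnostic can be sketched in a few lines of NumPy (the concatenated matrix here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((106, 300))  # concatenated response matrix (nodes x time)

# SVD: M = U @ diag(S) @ Vt; columns of U are the pattern vectors, and the
# k-th time-dependent coefficient vector is S[k] * Vt[k, :].
U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Relative energy of each pattern: squared singular value over the total.
E = S**2 / np.sum(S**2)
```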
We apply SVDCon to a concatenated matrix generated from three response matrices corresponding to three distinct odorant stimuli from the benchmark set. We observe that the distribution of relative energies is such that only the first singular value stands out and all others are significantly smaller. Such a distribution typically indicates that SVD was not able to capture the variability in the data. Furthermore, examination of the time-dependent coefficients associated with the first three patterns shows that they are overlapping and indistinguishable (Fig. 3). Effectively, these results indicate a discrepancy between the expected three-dimensional state space and the outcome of SVDCon.
Another possibility for formatting the collection into matrix form is to consider each response matrix separately, keeping the collection as a set of individual matrices.
Using this form, SVD can be applied to each matrix, a procedure we call SVDSep. In this procedure, one decomposition is obtained per distinct stimulus. We apply SVDSep to the same three-stimuli collection as for SVDCon, here formatted as three separate response matrices, and obtain 3 decompositions. We observe that each decomposition is dominated by a single pattern for each of the three matrices.
Therefore, collecting the dominant pattern vectors from all decompositions into a common set can provide a projection space for the distinct responses. More generally, a procedure following SVDSep takes the first pattern vector (i.e., the most dominant mode) from each decomposition and stores them as column vectors in a library matrix, with one column per stimulus and one row for every node measured in the network. Notably, however, the column vectors of the library are taken from separate decompositions and are therefore non-orthogonal to each other and do not share the same Euclidean space. Therefore, projections onto the column vectors of the library are not guaranteed to be non-overlapping. Indeed, we observe that projection of the response matrices onto the library constructed from the benchmark experiment of three distinct odorants does not produce separable fixed points and associated trajectories (see Fig. 3). The non-orthogonality of the basis warrants development of an approach to transform the library so that the projected fixed points and their trajectories are maximally separable.
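The SVDSep library construction described above can be sketched as follows; the synthetic response matrices here (disjoint spatial patterns plus noise) are hypothetical stand-ins for the recordings:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, K = 106, 100, 3

# One response matrix per stimulus; each has a distinct dominant spatial
# pattern plus noise, mimicking stimulus-specific responses.
responses = []
for k in range(K):
    pattern = np.zeros(N)
    pattern[k::K] = 1.0  # hypothetical stimulus-specific spatial code
    responses.append(np.outer(pattern, np.ones(T)) + 0.1 * rng.random((N, T)))

# SVDSep: decompose each matrix separately and keep the dominant mode
# (first left singular vector) as one column of the library matrix.
library = np.column_stack(
    [np.linalg.svd(X, full_matrices=False)[0][:, 0] for X in responses]
)  # N x K library, one column per stimulus

# Columns come from separate decompositions: unit norm, but not orthogonal.
gram = library.T @ library
```

Inspecting `gram` shows nonzero off-diagonal entries, i.e., the library columns overlap, which is exactly the non-orthogonality issue discussed above.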
II-C Optimal Exclusive Threshold Reduction Method
To find separation between fixed points associated with distinct stimuli, we propose a simple method that obtains an orthogonal set of vectors from the library matrix. The method, called exclusive threshold reduction (ETR), operates on the nodes dimension of the library (its rows): it selects the maximal value in each row and sets all other elements in that row to zero. The procedure is performed on each row and, once completed, leaves a new matrix with one nonzero element per row, effectively associating each node with a single stimulus. For example, if for a given node the element in the third column is the maximal element, then that node is associated with the third stimulus and the weight of the association is the value of that element; the surviving maximum plays the role of the threshold value (see the top row of Fig. 4 for a graphical illustration of the method). Therefore, the ETR method both associates nodes/neurons with stimuli, the different axes in Fig. 4, and assigns weights to each association. By definition, ETR guarantees an orthogonal set of vectors, i.e., a matrix with mutually orthogonal columns.
Provided each stimulus is associated with at least one node, as in many applications, the generated matrix is a basis for the nodes space and a projection space for the trajectories associated with the stimuli. The matrix defines a mapping from the high- to the low-dimensional system. Such a mapping was used to recover network connectivity in conjunction with proper orthogonal decomposition of population model equations for the antennal lobe neuronal network. The mapping is supposed to group node responses and capture the exclusive features associated with each stimulus. When we test the projection of the three odorant benchmark matrices onto the ETR matrix, producing time-dependent trajectories in stimulus space, we observe that the trajectories are separate from each other. We also locate the fixed points, whose coordinates are defined by projecting the steady-state responses onto the basis, and find their locations to be close to the stimulus axis of each trajectory (Fig. 4A). These experiments indicate that the ETR method is effective in mapping distinct trajectories to their own axes, and can facilitate a distinct classification space.
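A minimal sketch of the ETR row-wise selection rule (the library matrix here is hypothetical random data):

```python
import numpy as np

def etr(library):
    """Exclusive Threshold Reduction: keep only the row-wise maximum.

    Each node (row) is exclusively associated with the stimulus (column)
    where its entry is maximal; all other entries in the row are zeroed,
    so the resulting columns are mutually orthogonal by construction.
    """
    out = np.zeros_like(library)
    rows = np.arange(library.shape[0])
    cols = np.argmax(library, axis=1)  # winning stimulus per node
    out[rows, cols] = library[rows, cols]
    return out

rng = np.random.default_rng(3)
L = rng.random((106, 3))  # hypothetical N x K library matrix
Phi = etr(L)
```

Because every row of `Phi` has a single nonzero element, distinct columns have disjoint supports, hence exactly zero inner products.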
To test the scalability of the approach we apply ETR to various subsets of responses to odorants in the benchmark set. We start with 2 odorants and increment the number of included matrices by one up to 8 odorants. Application of ETR to the latter produces an eight-dimensional space (an eight-column matrix). In these experiments we further confirm the scalable performance of the approach and its ability to produce separate fixed points and trajectories (as shown in Fig. 4A).
While ETR turns out to be valuable for separating trajectories, it does not generically ensure, especially when the number of matrices in the collection grows, that the fixed points are optimally separated and hence that noisy trajectories are attracted to the correct fixed points. To optimally separate the fixed points we propose to introduce additional weights contained in a diagonal scaling matrix applied to the ETR basis.
The formulation assigns weights that scale the weight of each individual node obtained by ETR to ensure that the fixed points are orthonormal (Fig. 4B), and thus satisfies the assumption of distinct stimuli used in supervised classification. With such scaling, the fixed point coordinates are computed by projecting the fixed points onto the weighted basis, and for them to be orthonormal they are expected to be exclusive to the axis associated with each stimulus. We therefore require that the coordinates of the fixed points be represented by the identity matrix. To solve for the weights, we formulate a convex optimization problem that minimizes the distance between the matrix of projected fixed point coordinates and the identity matrix, subject to nonnegative weights.
We denote the generalized approach of solving the optimization problem above, in conjunction with the ETR method, as the Optimal Exclusive Threshold Reduction (OETR) method. It is expected to produce optimally separable projected trajectories for multiple distinct stimuli, and hence the resulting vectors are optimal axes of the classification state space. To solve the optimization problem, computational packages for convex optimization can be used; for example, we used the CVX package implemented in MATLAB. Application of OETR to the three distinct stimuli benchmark set indeed produces more optimally separable fixed points and their associated trajectories, as shown in Fig. 4B.
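As a hedged sketch of the optimization step: when the ETR basis is already orthogonal, the diagonal weights admit a row-wise least-squares approximation in closed form (a convex solver such as CVX would additionally enforce the nonnegativity constraint exactly; all names and the toy data below are hypothetical):

```python
import numpy as np

def oetr_weights(Phi, F):
    """Diagonal weights w such that diag(w) @ Phi.T @ F is as close as
    possible, in the least-squares sense, to the identity matrix.

    Phi : N x K ETR basis; F : N x K matrix whose columns are fixed points.
    Row-wise least squares gives w[i] = P[i, i] / ||P[i, :]||^2, P = Phi.T @ F.
    """
    P = Phi.T @ F
    w = np.diag(P) / np.sum(P**2, axis=1)
    return np.clip(w, 0.0, None)  # keep weights nonnegative, as in OETR

# Toy example: an ETR-like basis with two nodes assigned to each stimulus,
# and fixed points perfectly aligned with the basis columns.
rng = np.random.default_rng(4)
Phi = np.zeros((6, 3))
Phi[np.arange(6), np.repeat(np.arange(3), 2)] = rng.random(6) + 0.5
F = Phi.copy()
w = oetr_weights(Phi, F)
coords = np.diag(w) @ Phi.T @ F  # weighted fixed point coordinates
```

In this aligned toy case the weighted coordinates reduce exactly to the identity matrix, i.e., each fixed point becomes exclusive to its own stimulus axis.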
Next, we compare OETR with other approaches, such as Independent Component Analysis (ICA), to exclude the possibility that the benchmark set poses a separation problem that is too simple. The ICA method that we apply is an information-based algorithm (Infomax) particularly designed to obtain separable signals from a collection of inseparable signals, such as the library matrix; see Methods for more details on our ICA implementation, based on [10, 9, 11]. With ICA, our results show that projections onto the obtained vectors remain overlapping and do not produce efficient classification (Fig. 4C).
II-D Recognition Metrics and Classifiers Based on the Classification State Space
With the classification space constructed using ETR and OETR, we consider how it can be utilized for recognition and classification of novel stimuli. These stimuli include mixtures of the odorants upon which the axes of the state space were constructed, and responses to odorant stimuli on a trial-by-trial basis, where each trial is a stochastic realization of the trajectory distinct from the one used for space construction. Since the fixed points and their associated trajectories are well separated in the classification space, we propose a convex hull metric defined by each fixed point and its associated trajectory. A simplification of the convex hull metric (and a more robust one) is a hyperellipse centered at the fixed point.
The dimension of the hyperellipse equals the dimension of the classification state space. The metric is a pointwise metric which indicates, for each point of the tested projected trajectory sampled in time, whether it lies inside the hyperellipse (value one) or outside (value zero). Integration of the pointwise metric over a time interval of interest, typically the time of the response (for the benchmark here, the time during which the stimulus is ON), provides a total score of the trajectory being located within the hyperellipse.
This total score indicates the similarity of the tested trajectory with the stimulus and its associated fixed point and hyperellipse. Normalization of the total score over the total number of points provides a recognition ratio score. Such a metric is simple to implement and is specially designed to harness the optimal separation of fixed points in the classification space; it is expected to provide consistent scores for testing similarity between different trajectories in the classification space by being robust to different stochastic realizations of the trajectories (due to noise and other perturbation effects). Other metrics, measuring the time scales of convergence to the fixed point or focusing on specific properties of the trajectories, are not expected to be robust for low signal-to-noise ratio (SNR) dynamics (see Fig. 5). Such properties of the dynamics are typical of multi-unit recordings from neuronal networks. For example, in the benchmark set the SNR is less than 3, and stimuli trajectories exhibit short timescales and do not necessarily converge to the fixed points; rather, they only approach their vicinity.
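A minimal sketch of the pointwise hyperellipse metric and the recognition ratio score (the fixed point, radii, and trajectory below are hypothetical):

```python
import numpy as np

def pointwise_metric(points, center, radii):
    """Return 1 for each sampled point inside the hyperellipse centered at
    the fixed point, 0 otherwise. points: T x n; center, radii: length n."""
    z = (points - center) / radii
    return (np.sum(z**2, axis=1) <= 1.0).astype(int)

def recognition_ratio(points, center, radii):
    """Total score (integrated pointwise metric) normalized by the number
    of sampled points."""
    m = pointwise_metric(points, center, radii)
    return m.sum() / len(m)

# Toy trajectory in a 3-dimensional classification space that approaches a
# fixed point at (1, 0, 0) along its stimulus axis.
t = np.linspace(0.0, 1.0, 50)[:, None]
trajectory = t * np.array([1.0, 0.0, 0.0])
fixed_point = np.array([1.0, 0.0, 0.0])
radii = np.array([0.5, 0.5, 0.5])  # a sphere, as in the benchmark tests
score = recognition_ratio(trajectory, fixed_point, radii)
```

Here the second half of the sampled points fall inside the sphere, giving a recognition ratio of 0.5.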
To investigate the performance of recognition and classification using the hyperellipse metric, we computed the similarity of various stimuli from the benchmark set with the B1 mixture, consisting of the odors Benzaldehyde (S1), Benzyl Alcohol (S2), and Linalool (S3). This mixture was shown to include the dominant constituents of the Datura scent, an important flower nectar source for the moths, and moths stimulated with it elicit a ‘behavioral’ response similar to stimulation by the full Datura scent, as do mixtures B2 and B3. The goal of the classification is to examine the mixture trajectories and determine which are similar to B1. Strong similarity indicates that the mixture is classified as ‘behavioral’ as well. The goal of recognition is to decide instantaneously, given a single trial, whether the stimulus is B1. Notably, behavioral experiments show that the similarity in responses to these different mixtures was not solely due to the concentrations of the constituents making up the mixture, and therefore classification methods based on neural dynamics are needed. For example, experiments show that when Linalool (the component with the least concentration) is taken out of the B1 mixture, the new mixture becomes non-behavioral. On the other hand, adding other constituents from the Datura scent to replace Linalool does not restore the ability of the mixture to elicit behavior.
To compare the different approaches, we constructed classification spaces using four methods: SVDSep, ICA, ETR and OETR for the odorants S1–S8 (see the table in Fig. 3). We also varied the dimension of the classification space incrementally. For each method and dimension of the classification space we computed the score for each trajectory (5 trials of 17 stimuli = 85 trajectories) with respect to the hyperellipse that corresponds to the B1 stimulus (a sphere with fixed radius). We show our results in Fig. 5. In classification, for each dimension of the space, scores were computed for all stimuli and trials. The average score was then computed over the trials of each stimulus. The distribution of average scores for the 17 stimuli is then normalized by its maximal value. A decision line for binary classification into ‘behavioral’ (B class) or ‘non-behavioral’ (S or E classes) is set as the midpoint between the mean of the distribution and the maximal value (depicted as a blue line in the inset distributions in Fig. 5). Values above the line are classified as behavioral, while values below are classified as non-behavioral. In Fig. 5 the bars that correspond to erroneous classification are marked in red and correctly classified bars are marked in green. We show the total accuracies of the four methods (combining precision and recall) for each dimension as a bar plot in the middle plot of Fig. 5, and also show the score distributions below it.
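The decision-line rule above, sketched on hypothetical max-normalized average scores:

```python
import numpy as np

# Hypothetical max-normalized average scores for 17 stimuli:
# 8 odorants, 3 behavioral (B) mixtures, 6 non-behavioral (E) mixtures.
scores = np.array([0.10, 0.15, 0.20, 0.10, 0.05, 0.12, 0.18, 0.08,  # S1..S8
                   0.90, 1.00, 0.85,                                # B1..B3
                   0.20, 0.10, 0.15, 0.25, 0.10, 0.05])             # E1..E6

# Decision line: midpoint between the mean of the distribution and its max.
theta = 0.5 * (scores.mean() + scores.max())

# Values above the line are classified as behavioral.
behavioral = scores > theta
```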
The classification results demonstrate that ETR and OETR are significant improvements over other dimension reduction techniques, which in turn are a significant improvement over methods that do not employ dimension reduction. All methods perform poorly when the dimension of the classification space is smaller than the number of constituents of the tested mixture. Indeed, since B1 consists of 3 independent stimuli, the results indicate that there is not enough data for discrimination based on fewer dimensions than the constituents of the tested stimulus. As the dimension increases, OETR achieves perfect accuracies. ETR achieves high accuracy at first and then stabilizes at a lower value, with high precision but lower recall rates. ICA achieves reasonable accuracy at low dimensions, but as the dimension increases both precision and recall rates drop. SVDSep produces consistently low accuracies for all dimensions. From the distributions we get insights on the differences in the performance of the methods. In particular, two stimuli are used for validation: B1, which is expected to produce a high score, and E6 (control), which is expected to produce a low score. We observe that the scores for B1 are high across methods; however, in SVDSep B1 does not cross the threshold line. For E6, only ETR and OETR successfully assign a significantly lower score. As expected, the individual odorants S1,…,S8 and non-behavioral mixtures E1,…,E6 are far from the threshold with the OETR method and mostly far from the threshold with ETR. The form of the distribution transforms from nearly uniform (for SVDSep) to a spiked distribution with high values for B stimuli and low values for the others (for OETR). The latter form allows the decision line to easily separate the B class from the other classes. Indeed, we observe that while ETR is able to perform with high precision, it is unable to recall one of the behavioral stimuli (B2). Next, we proceed to test whether the methods are sensitive to the choice of the hyperellipse.
In particular, we vary the radius of the B1 sphere over a range of values for a fixed dimension and compute classification accuracies. We observe a clear separation between the SVDSep/ICA and ETR/OETR groups of methods. We find that recall rates are stable across methods; however, precision rates are sensitive. The precision rates of SVDSep and ICA are low overall and indicate that these methods are not robust. By contrast, ETR and OETR are more robust: ETR achieves high precision for a range of radii, and OETR achieves high precision over a much wider range.
For the recognition task, we compute the score in the classification space for each trial (85 trials). If the score crosses a threshold relative to the score of a target averaged trajectory (i.e., for more than 400 msec the trajectories are in the same hyperellipse), the tested trajectory is recognized as associated with the target stimulus. Using this method, we test recognition with the B1 hyperellipse (a sphere of radius 0.65 in our case). As in classification, our results show that OETR is more accurate than the other methods: the recall and precision rates of OETR and ETR are higher than those of SVDSep and ICA. Interestingly, when we test for recognition of the B1 stimulus only (i.e., we expect to recognize only the 5 noisy trajectories of the B1 stimulus as positive), precision rates are imperfect in all of the methods; there are other trajectories marked positively as B1 although they are not B1. These trajectories turn out to be mainly from the B class. We show this by expanding the recognition class to allow any stimulus in the B class to be marked as B1; in such a case, we obtain high precision for OETR and ETR. These results demonstrate that noisy B trajectories appear extremely similar to each other in our recognition scheme. OETR is hence consistent with experimental observations and metrics which indicate that B class stimuli are behaviorally indistinguishable.
Classification of fixed point network responses from sampled network activity is a challenging fundamental problem, especially when the inputs to the network are highly variable and noisy. We have addressed this problem by proposing novel supervised classification methods that find a low-dimensional representation from a collection of matrices of multiple node time-series data from the network. To test our methods we have established a benchmark database of multichannel time-series recordings from a real neural fixed point network: the antennal lobe of the Manduca sexta moth. We have shown that the OETR method that we introduced allowed us to create a classification space that separates fixed points and their transient trajectories with high accuracy, and furthermore allows us to represent and classify noisy mixtures of stimuli. In contrast, we have shown that traditional methods such as SVM and boosting that do not rely on dimension reduction, as well as classical matrix decomposition and reduction methods such as SVD and ICA, do not succeed at classifying fixed point network dynamics with performance comparable to OETR.
Recordings from the olfactory network are a valuable benchmark for testing classification. The network is known for its robustness in discriminating scents composed of numerous distinct molecules in a highly dynamic environment. It has been established that the network employs fixed points for odor representation, which are read and processed by higher-layer olfactory processing units (the mushroom body in insects). In this respect, the ETR and OETR methods that we have proposed are simple to implement, since they rely on the dominant mean pattern (first SVD mode) associated with each distinct odorant, a maximum rule, and linear weights. Such components are natural to neural networks. Application of the proposed methods could therefore help in understanding the processes that higher-layer units employ to instantaneously read information from fixed point networks. Here we focused on fixed point networks, the simplest and most fundamental attractor networks. It is thereby plausible that our methodology will lay the foundation for future work extending it to more complex attractor networks, such as limit cycle networks and line-attractor (lines of fixed points) networks, which are also ubiquitous in neural systems.
IV Methods and Procedures
The benchmark collection and the code that implements the various methods are available online on GitHub: https://github.com/shlizee/fpnets-classification.
Below we include further information on the algorithms that we have applied and compared with the ETR and OETR methods.
IV-A Class Imbalances
If the error or misclassification percentage is the only metric considered, learning algorithms like SVM appear to work well, but this is misleading: the error is low because the first class is not classified at all while the second class is perfectly classified. Therefore, different tactics are needed when working with imbalanced data. The first tactic is to change the performance metric. Two informative measures are precision (the positive predictive value: what fraction of the examples labeled as the class actually belong to it) and recall (the sensitivity: what fraction of the relevant class was retrieved). Precision ($P$) and recall ($R$) are defined as $P = tp/(tp + fp)$ and $R = tp/(tp + fn)$, where $tp$ is the number of true positives, $fp$ false positives, and $fn$ false negatives. Another tactic is re-sampling the data set: oversampling the class with fewer instances to achieve an equal balance between the classes, or undersampling to reduce the number of instances in the larger class. However, over- or undersampling can bias the results, since oversampling can cause the model to overfit, while undersampling discards information. Another technique for improving classification performance under class imbalance is boosting. Boosting can improve the performance of weak classifiers, such as decision trees, by re-sampling the training data according to assigned weights, iteratively adjusting the weak learners until they combine into a strong learner.
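The precision and recall measures above can be computed directly from the confusion counts; a minimal pure-Python sketch (function name ours):

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for a binary classification, computed from
    true-positive, false-positive, and false-negative counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

On an imbalanced set where a classifier predicts only the majority class, the error rate looks low while recall of the minority class is zero, which is exactly the failure mode described above.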
IV-B SVM Binary Classification from Raw Data
In our application of the SVM approach, a hyperplane is created to divide the classes into their proper labels ($-1$ or $1$) and is designed to give the largest margin between these classes. The hyperplane is constructed as $w^\top x + b = 0$, where $b$ sets the offset of the hyperplane from the origin and $w$ is the vector of weights for the features in $x$. To obtain the optimal hyperplane, the margin $2/\|w\|$ must be maximized, or equivalently $\|w\|^2$ must be minimized. Thus, the soft-margin SVM weights are found by minimizing $\frac{1}{2}\|w\|^2 + C \sum_i \max\left(0, 1 - y_i(w^\top x_i + b)\right)$, where $y_i$ is the class label of example $x_i$ and $C$ is the penalty parameter on margin violations.
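The soft-margin quantity being minimized can be evaluated directly; a minimal NumPy sketch (interface ours, not the paper's implementation):

```python
import numpy as np

def svm_objective(w, b, X, y, C=1.0):
    """Soft-margin SVM objective: 0.5*||w||^2 plus C times the total
    hinge loss. y holds labels in {-1, +1}; X is (examples x features)."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * float(np.dot(w, w)) + C * float(hinge.sum())
```

Training amounts to searching for the $(w, b)$ that minimize this quantity; points classified correctly with margin at least 1 contribute zero hinge loss.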
RUSBoost is a hybrid of random undersampling and boosting, designed to improve performance when classifying imbalanced data. RUSBoost randomly removes instances of the majority class to equalize it with the minority class, and has been shown to perform well while remaining simple and fast. Seiffert et al. demonstrated that RUSBoost achieves performance similar to SMOTEBoost, a common algorithm for managing class imbalance, but does so faster and more simply. Given a training set $S$ of examples $(x_i, y_i)$ with a minority class, at each round $t$ a new set $S'_t$ is generated by random undersampling, and a weak learner (e.g., a decision tree) is called to create a hypothesis $h_t$. The pseudo-loss is then calculated on $S$ with weight distribution $D_t$ as $\epsilon_t = \sum_{(i,y): y_i \neq y} D_t(i)\left(1 - h_t(x_i, y_i) + h_t(x_i, y)\right)$.
The weight update parameter is then derived as $\alpha_t = \epsilon_t / (1 - \epsilon_t)$, and the example weights are updated via $D_{t+1}(i) = D_t(i)\,\alpha_t^{\frac{1}{2}\left(1 + h_t(x_i, y_i) - h_t(x_i, y)\right)}$ for $y \neq y_i$, then normalized so that $D_{t+1}$ is a distribution. The final hypothesis is $H(x) = \operatorname{arg\,max}_y \sum_{t=1}^{T} h_t(x, y) \log \frac{1}{\alpha_t}$.
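The random-undersampling step at the core of RUSBoost can be sketched as follows. This is a simplified illustration that balances the classes once, whereas the full algorithm resamples at every boosting round and carries the weights $D_t$ described above; the function name is ours:

```python
import random

def random_undersample(X, y, minority_label, seed=0):
    """Randomly drop majority-class examples until the two classes are
    balanced; returns the reduced data set in shuffled order."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    kept = rng.sample(majority, k=len(minority))  # keep as many as the minority
    balanced = minority + kept
    rng.shuffle(balanced)
    Xb, yb = zip(*balanced)
    return list(Xb), list(yb)
```

A weak learner would then be trained on `(Xb, yb)` at each boosting round.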
We compare our methods with the Infomax ICA algorithm, which, given observed components, in our case the matrix $X$, finds a linear transformation from $X$ to a matrix of sources $S$, i.e., $S = WX$. The matrix $X$ is thereby decomposed into independent components that minimize mutual information; the assumption of the ICA model is that the components in $S$ are statistically independent. We use a maximum likelihood formulation of Infomax, in which the estimate is found from the log-likelihood $\log L = \sum_{t=1}^{T} \sum_{i} \log p_i\left(w_i^\top x(t)\right) + T \log |\det W|$, where the $p_i$ are the density functions of the independent components and the $w_i$ are the rows of $W$. This is equivalent to the Infomax principle, in which the network outputs $g_i(w_i^\top x)$ pass through non-linear scalar functions $g_i$. To solve the Infomax algorithm we used an openly available Python software package.
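A minimal sketch of the Bell–Sejnowski natural-gradient Infomax update for super-Gaussian sources is shown below; the learning rate, iteration count, and initialization are illustrative assumptions, not the settings of the package used here:

```python
import numpy as np

def infomax_unmix(X, n_iter=200, lr=0.01, seed=0):
    """Estimate an unmixing matrix W with the Bell-Sejnowski
    natural-gradient Infomax rule. X is (components x samples)."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        U = W @ X
        # score for the logistic nonlinearity: 1 - 2*sigmoid(U) = -tanh(U/2)
        G = -np.tanh(U / 2.0)
        # natural-gradient update: W <- W + lr * (I + E[G U^T]) W
        W += lr * (np.eye(n) + (G @ U.T) / T) @ W
    return W
```

The recovered sources `W @ X` approximate the independent components up to permutation and scaling, the usual ICA ambiguities.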
The work was supported by the National Science Foundation under Grant Nos. DMS-1361145 (ES and JAR) and IOS-1354159 (JAR), the Air Force Office of Scientific Research under Grant No. FA95501610167 (JAR and ES), and the Washington Research Foundation Innovation Fund (ES). The authors would also like to acknowledge the partial support of the Departments of Electrical Engineering, Applied Mathematics, and Biology at the University of Washington.
-  A.A.F. Higuera, R.A.B. Garcia, and R.V.G. Bermudez. Python version of Infomax independent component analysis. https://github.com/alvarouc/ica/, 2015–2016.
-  Daniel J Amit. Modeling brain function: The world of attractor neural networks. Cambridge University Press, 1992.
-  Bruno B Averbeck, Peter E Latham, and Alexandre Pouget. Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5):358–366, 2006.
-  Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129–1159, 1995.
-  Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006.
-  Mark M Churchland, John P Cunningham, Matthew T Kaufman, Justin D Foster, Paul Nuyujukian, Stephen I Ryu, and Krishna V Shenoy. Neural population dynamics during reaching. Nature, 487(7405):51–56, 2012.
-  Gene H Golub and Charles F Van Loan. Matrix computations, volume 3. JHU Press, 2012.
-  Michael Grant, Stephen Boyd, and Yinyu Ye. Cvx: Matlab software for disciplined convex programming, 2008.
-  Aapo Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. Neural Networks, IEEE Transactions on, 10(3):626–634, 1999.
-  Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural networks, 13(4):411–430, 2000.
-  Dominic Langlois, Sylvain Chartier, and Dominique Gosselin. An introduction to independent component analysis: Infomax and fastica algorithms. Tutorials in Quantitative Methods for Psychology, 6(1):31–38, 2010.
-  Ofer Mazor and Gilles Laurent. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron, 48(4):661–673, 2005.
-  Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
-  Jeffrey A Riffell, Hong Lei, Leif Abrell, and John G Hildebrand. Neural basis of a pollinator’s buffet: olfactory specialization and learning in manduca sexta. Science, 339(6116):200–204, 2013.
-  Jeffrey A Riffell, Eli Shlizerman, Elischa Sanders, Leif Abrell, Billie Medina, Armin J Hinterwirth, and J Nathan Kutz. Flower discrimination by pollinators in a dynamic chemical environment. Science, 344(6191):1515–1518, 2014.
-  Chris Seiffert, Taghi M Khoshgoftaar, Jason Van Hulse, and Amri Napolitano. Rusboost: A hybrid approach to alleviating class imbalance. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 40(1):185–197, 2010.
-  Eli Shlizerman, Jeffrey A Riffell, and J Nathan Kutz. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe. Frontiers in computational neuroscience, 8, 2014.
-  Eli Shlizerman, Konrad Schroder, and J Nathan Kutz. Neural activity measures and their dynamics. SIAM Journal on Applied Mathematics, 72(4):1260–1291, 2012.
-  Lawrence Sirovich. Turbulence and the dynamics of coherent structures. part i: Coherent structures. Quarterly of applied mathematics, 45(3):561–571, 1987.
-  Tom J Wills, Colin Lever, Francesca Cacucci, Neil Burgess, and John O’Keefe. Attractor dynamics in the hippocampal representation of the local environment. Science, 308(5723):873–876, 2005.