Geometric Scattering on Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of the success of convolutional neural networks (ConvNets) in image data analysis and other tasks. Inspired by recent interest in geometric deep learning, which aims to generalize ConvNets to manifold and graph-structured domains, we generalize the scattering transform to compact manifolds. Similar to the Euclidean scattering transform, our geometric scattering transform is based on a cascade of designed filters and pointwise nonlinearities, which enables rigorous analysis of the feature extraction provided by scattering layers. Our main focus here is on theoretical understanding of this geometric scattering network, while setting aside implementation aspects, although we remark that the application of similar transforms to graph data analysis has been studied recently in related work. Our results establish conditions under which geometric scattering provides localized isometry-invariant descriptions of manifold signals, and conditions under which the filters used in our network are stable to families of diffeomorphisms formulated in intrinsic manifold terms. These results not only generalize the deformation stability and local roto-translation invariance of Euclidean scattering, but also demonstrate the importance of linking the filter structures used (e.g., in geometric deep learning) to the underlying manifold geometry, or the data geometry it represents.
Michael Perlmutter (corresponding author), Dept. of Computational Mathematics, Science & Engineering (CMSE), Michigan State University, East Lansing, MI, USA. Guy Wolf (http://guywolf.org/), Dept. of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada. Matthew Hirn (https://matthewhirn.com/), Dept. of CMSE and Dept. of Mathematics, Michigan State University, East Lansing, MI, USA.
NeurIPS 2018, Integration of Deep Learning Theories Workshop, Montréal, Canada (Extended Version).
1 Introduction
Characterizing variability in data is one of the most fundamental aspects of modern data analysis, appearing at the core of both supervised and unsupervised learning tasks. In particular, a crucial part of typical machine learning methods is to separate informative sources of variability from disruptive ones, which are modeled via deformation groups or noise models. For example, linear classification methods aim to find a separating hyperplane that captures undesired (i.e., irrelevant for discriminating between classes) variance, and then eliminate it by projecting the data on normal directions of the hyperplane. Principal component analysis (PCA), on the other hand, aims to find a hyperplane that maximizes the captured variance, while treating directions of minimal variance as noise directions. Nonlinear methods extend such notions to consider richer models (beyond linear hyperplanes and projections) for capturing or eliminating variability, either task-dependent or following an assumed model for the data.
In the past decade, deep learning methods have shown impressive potential in capturing complex nonlinear variability by using a cascade of linear transformations and simple nonlinear activations that together form approximations of target functions. While there is a multitude of applications where such methods are effective, some of their most popular achievements are in fields traditionally associated with signal processing in general, and image processing (i.e., computer vision) in particular. In such settings, the data has an inherent spatial (or temporal) structure, and collected observations form signals over it. Convolutional neural networks (ConvNets) utilize this structure to treat their linear transformations as convolutions of input signals with filters that are learned in the training process. In classification applications, the resulting deep cascade of convolutional filters (and nonlinearities) eliminates intra-class heterogeneity and results in high accuracy classifiers. Furthermore, hidden ConvNet layers yield coefficients that isolate semantic features and can be utilized for unsupervised feature extraction, e.g., as part of dimensionality reduction or generative models.
In an effort to improve mathematical understanding of deep convolutional networks and their learned features, Mallat (2010, 2012) presented the scattering transform. This transform has an architecture similar to ConvNets, based on a cascade of convolutional filters and simple pointwise nonlinearities in the form of complex modulus or absolute value. However, unlike deep learning methods, this transform does not learn its filters from data, but rather has them designed to provide guaranteed stability to a given family of deformations. As shown in Mallat (2012), under some admissibility conditions, one can use appropriate wavelet filter banks in the scattering transform to provide invariance to the actions of Lie groups. Moreover, the resulting scattering features also provide Lipschitz stability to small diffeomorphisms, where the size of a diffeomorphism is quantified by its deviation from a translation. These notions were made concrete in Bruna and Mallat (2011, 2013), Sifre and Mallat (2012, 2013, 2014) and Oyallon and Mallat (2015) using groups of translations, rotations, and scaling operations, with applications in image and texture classification. Further applications of the scattering transform and its deep filter bank approach were shown effective in several fields, such as audio processing (Andén and Mallat, 2011, 2014; Wolf et al., 2014, 2015; Andén et al., 2018), medical signal processing (Chudácek et al., 2014), and quantum chemistry (Hirn et al., 2017; Eickenberg et al., 2017, 2018; Brumwell et al., 2018).
Another structure often found in modern data is that of a graph between collected observations or features. Such structure naturally arises, for example, in transportation and social networks, where weighted edges represent connections between locations (e.g., roads or other routes) or user profiles (e.g., friendships or exchanged communications). They are also common when representing molecular structures in chemical data, or biological interactions in biomedical data. Relatedly, signals supported on manifolds, as well as manifold-valued data, are becoming increasingly prevalent, in particular in shape matching and computer graphics. Manifold models for high dimensional data also arise in the field of manifold learning (e.g., Tenenbaum et al., 2000; Coifman and Lafon, 2006a; van der Maaten and Hinton, 2008), in which unsupervised algorithms infer data-driven geometries and use them to capture intrinsic structure and patterns in data. As such, a large body of work has emerged to explore the generalization of spectral and signal processing notions to manifolds (e.g., Coifman and Lafon, 2006b) and graphs (Shuman et al., 2013, and references therein). In these settings, functions are supported on the manifold or the vertices of the graph, and the eigenfunctions of the Laplace-Beltrami operator or the eigenvectors of the graph Laplacian serve as the Fourier harmonics.
This increasing interest in non-Euclidean data, particularly graphs and manifolds, has led to a new research direction known as geometric deep learning, which aims to generalize convolutional networks to graph and manifold structured data (Bronstein et al., 2017, and references therein). Unlike classical ConvNets, in which filters are learned on collected data features (i.e., in spatial or temporal domain), many geometric deep learning approaches for manifolds learn spectral coefficients of their filters (Bruna et al., 2014; Defferrard et al., 2016; Levie et al., 2017; Yi et al., 2017). The frequency spectrum is defined by the eigenvalues of the Laplace-Beltrami operator on the manifold, which links geometric deep learning with manifold and graph signal processing.
Inspired by geometric deep learning, recent works have also proposed an extension of the scattering transform to graph domains. These mostly focused on finding features that represent a graph structure (given a fixed set of signals on it) while being stable to graph perturbations. In Gama et al. (2018), a cascade of diffusion wavelets from Coifman and Maggioni (2006) was proposed, and its Lipschitz stability was shown with respect to a global diffusion-inspired distance between graphs. A similar construction discussed in Zou and Lerman (2018) was shown to be stable to permutations of vertex indices, and to small perturbations of edge weights. Finally, Gao et al. (2018) established the viability of scattering coefficients as universal graph features for data analysis tasks (e.g., in social networks and biochemistry data).
In this paper, we take the less charted path and consider the manifold aspect of geometric deep learning. In this setting, one needs to process signals over a manifold, and in particular, to represent them with features that are stable to orientations, noise, or deformations over the manifold geometry. In order to work towards these aims, we define a scattering transform on compact smooth Riemannian manifolds without boundary, which we call geometric scattering. Our construction is based on convolutional filters defined spectrally via the eigendecomposition of the Laplace-Beltrami operator over the manifold, as discussed in Section 3. We show that these convolutional operators can be used to construct a frame, which, with appropriately chosen low-pass and high-pass filters, forms a wavelet frame similar to the diffusion wavelets constructed in Coifman and Maggioni (2006). Then, in Section 4, a cascade of these generalized convolutions and pointwise absolute value operations is used to map signals on the manifold to scattering coefficients that encode approximate local invariance to isometries, which correspond to translations, rotations, and reflections in Euclidean space. In Section 5, we analyze the commutators of the filters used in our network with the action of diffeomorphisms. We believe that the results presented there will allow us to study the stability of our scattering network to these diffeomorphisms in future work, using a notion of stability that is analogous to the Lipschitz stability considered in Mallat (2012) on Euclidean space. As we discuss in Section 5, this requires a quantitative notion of deformation size. We consider three formulations of such a notion, which measure how far a given diffeomorphism is from being an isometry, and explore the type of stability that can be achieved for the geometric scattering transform with respect to each of them.
Our results provide a path forward for utilizing the scattering mathematical framework to analyze and understand geometric deep learning, while also shedding light on the challenges involved in such generalization to non-Euclidean domains.
1.1 Notation
Let $\mathcal{M}$ be a smooth, compact, and connected, $d$-dimensional Riemannian manifold without boundary contained in $\mathbb{R}^n$. Let $r(x, y)$ denote the geodesic distance between two points $x, y \in \mathcal{M}$, and let $\Delta$ be the Laplace-Beltrami operator on $\mathcal{M}$. The eigenfunctions and (non-unique) eigenvalues of $-\Delta$ are denoted by $\{\varphi_k\}_{k \geq 0}$ and $\{\lambda_k\}_{k \geq 0}$, respectively. Since $\mathcal{M}$ is compact, the spectrum of $-\Delta$ is countable and we may assume that $\{\varphi_k\}_{k \geq 0}$ forms an orthonormal basis for $\mathbf{L}^2(\mathcal{M})$. We define the Fourier transform of $f \in \mathbf{L}^2(\mathcal{M})$ as the sequence $\widehat{f}$ defined by $\widehat{f}(k) = \langle f, \varphi_k \rangle$. The set of unique eigenvalues of $-\Delta$ is denoted by $\{\bar{\lambda}_\kappa\}_{\kappa \geq 0}$, and for each $\kappa$ we let $m_\kappa$ and $E_\kappa$ denote the corresponding multiplicities and eigenspaces. The diffeomorphism group of $\mathcal{M}$ is $\mathrm{Diff}(\mathcal{M})$, and the isometry group is $\mathrm{Isom}(\mathcal{M})$. For a diffeomorphism $\zeta \in \mathrm{Diff}(\mathcal{M})$, we let $V_\zeta$ be the operator $V_\zeta f(x) = f(\zeta^{-1}(x))$ and let $\|\zeta\|_\infty = \sup_{x \in \mathcal{M}} r(\zeta(x), x)$. We let $C(\mathcal{M})$ denote a constant which depends only on the manifold $\mathcal{M}$, and for two operators $A$ and $B$ defined on $\mathbf{L}^2(\mathcal{M})$ we let $[A, B] = AB - BA$ denote their commutator.
2 Problem Setup
We consider signals $f \in \mathbf{L}^2(\mathcal{M})$ and representations $\Phi f$ which encode information about the signal, often referred to as the “features” of $f$. A common goal in machine learning tasks is to classify or cluster signals based upon these features. However, we often want to consider two signals, or even two manifolds, to be equivalent if they differ by the action of a global isometry. Therefore, we seek to construct a family of representations $\{\Phi_t\}_{t > 0}$ which are invariant to isometric transformations of any $f \in \mathbf{L}^2(\mathcal{M})$ up to the scale $t$. Such a representation should satisfy a condition similar to
$$\|\Phi_t f - \Phi_t V_\zeta f\| \leq \alpha(t)\, \beta(\zeta)\, \|f\|_2 \quad \text{for all } \zeta \in \mathrm{Isom}(\mathcal{M}),$$
where $\beta(\zeta)$ measures the size of the isometry $\zeta$, with $\beta(\zeta) \geq 0$, and $\alpha(t)$ decreases to zero as the scale $t$ grows to infinity.
Along similar lines, it is desirable that the action of small diffeomorphisms on $f$ or on the underlying manifold should not have a large impact on the representation of the inputted signal for tasks such as classification or regression. However, $\mathrm{Diff}(\mathcal{M})$ is a large group, and invariance over all of $\mathrm{Diff}(\mathcal{M})$ would collapse the variability even between vastly different signals. Thus, in this case, we want a family $\{\Phi_t\}_{t > 0}$ that is stable to diffeomorphism actions on $f$, but not invariant. This leads to a condition such as:
$$\|\Phi_t f - \Phi_t V_\zeta f\| \leq C\, A(\zeta)\, \|f\|_2 \quad \text{for all } \zeta \in \mathrm{Diff}(\mathcal{M}),$$
where $A(\zeta)$ measures how much $\zeta$ differs from being an isometry, with $A(\zeta) = 0$ if $\zeta \in \mathrm{Isom}(\mathcal{M})$ and $A(\zeta) > 0$ if $\zeta \notin \mathrm{Isom}(\mathcal{M})$.
At the same time, the representations should not be trivial. Indeed, distinguishing different classes or types of signals often requires leveraging subtle information in the signals. This information is often stored in the high frequencies, i.e., in $\widehat{f}(k)$ for large $\lambda_k$. Our problem is thus to find a family of representations that are stable to diffeomorphisms, discriminative between different types of signals, that allow one to control the scale of isometric invariance, and that do so for data supported on a manifold. The wavelet scattering transform of Mallat (2012) achieves goals analogous to the ones presented here, but for Euclidean supported signals. Therefore, we seek to construct a geometric version of the scattering transform, using filters corresponding to the spectral geometry of $\mathcal{M}$, and to show that it has similar properties to its Euclidean counterpart.
3 Spectral Integral Operators
Similar to traditional ConvNets, the Euclidean scattering transform constructed in Mallat (2012) consists of an alternating cascade of convolutions and nonlinearities. In the manifold setting, it is not immediately clear how to define convolution operators, because translation is not well defined. In this section, we introduce a family of operators on $\mathbf{L}^2(\mathcal{M})$ that are analogous to convolution operators on Euclidean space, motivated by the characterization of Euclidean convolution operators as Fourier multipliers. These operators will then be used in Sec. 4 to construct the geometric scattering transform.
For a function $\eta : \{\bar{\lambda}_\kappa\}_{\kappa \geq 0} \to \mathbb{C}$, we define a spectral kernel $K_\eta$ by
$$K_\eta(x, y) = \sum_{k \geq 0} \eta(\lambda_k)\, \varphi_k(x)\, \overline{\varphi_k(y)},$$
and refer to the integral operator $T_\eta f(x) = \int_{\mathcal{M}} K_\eta(x, y) f(y)\, dy$ with kernel $K_\eta$ as a spectral integral operator. Grouping together the $\varphi_k$ belonging to each eigenspace, we write
$$K_\eta(x, y) = \sum_{\kappa \geq 0} \eta(\bar{\lambda}_\kappa)\, P_\kappa(x, y),$$
where $P_\kappa(x, y) = \sum_{\varphi_k \in E_\kappa} \varphi_k(x)\, \overline{\varphi_k(y)}$. Using the fact that $\{\varphi_k\}_{k \geq 0}$ is an orthonormal basis for $\mathbf{L}^2(\mathcal{M})$, a simple calculation shows that
$$\widehat{T_\eta f}(k) = \eta(\lambda_k)\, \widehat{f}(k),$$
and so, if $\sup_\kappa |\eta(\bar{\lambda}_\kappa)| < \infty$, then $T_\eta$ is a bounded operator on $\mathbf{L}^2(\mathcal{M})$ with operator norm $\sup_\kappa |\eta(\bar{\lambda}_\kappa)|$. In particular, if $\sup_\kappa |\eta(\bar{\lambda}_\kappa)| \leq 1$, then $T_\eta$ is nonexpansive. Operators of this form are analogous to convolution operators defined on $\mathbb{R}^d$, since the latter are diagonalized in the Fourier basis. To further emphasize this connection, we note the following theorem, which shows that spectral integral operators are equivariant with respect to isometries.
For every spectral integral operator $T_\eta$ and for every $\zeta \in \mathrm{Isom}(\mathcal{M})$,
$$V_\zeta T_\eta = T_\eta V_\zeta.$$
Let $\zeta$ be an isometry and let $\{\varphi_{\kappa,i} : 1 \leq i \leq m_\kappa\}$ be an orthonormal basis for $E_\kappa$. For each $\kappa \geq 0$ and $1 \leq i \leq m_\kappa$, we define $\psi_{\kappa,i}$ by
$$\psi_{\kappa,i}(x) = \varphi_{\kappa,i}(\zeta(x)).$$
Since $\zeta$ is an isometry, $\{\varphi_{\kappa,i}\}_i$ and $\{\psi_{\kappa,i}\}_i$ are both orthonormal bases for $E_\kappa$. Therefore, there exists an $m_\kappa \times m_\kappa$ unitary matrix $U^{(\kappa)}$ (that does not depend upon $x$) such that $\psi_{\kappa,i} = \sum_{j=1}^{m_\kappa} U^{(\kappa)}_{i,j} \varphi_{\kappa,j}$. Using this fact, we see
$$P_\kappa(\zeta(x), \zeta(y)) = \sum_{i=1}^{m_\kappa} \psi_{\kappa,i}(x)\, \overline{\psi_{\kappa,i}(y)} = \sum_{j, j'} \Big( \sum_{i} U^{(\kappa)}_{i,j}\, \overline{U^{(\kappa)}_{i,j'}} \Big) \varphi_{\kappa,j}(x)\, \overline{\varphi_{\kappa,j'}(y)} = P_\kappa(x, y).$$
Therefore, we see that $K_\eta(\zeta(x), \zeta(y)) = K_\eta(x, y)$ for all $x, y \in \mathcal{M}$. Now, letting $u = \zeta^{-1}(y)$ and changing variables (isometries preserve the Riemannian volume), we have
$$T_\eta V_\zeta f(x) = \int_{\mathcal{M}} K_\eta(x, y)\, f(\zeta^{-1}(y))\, dy = \int_{\mathcal{M}} K_\eta(\zeta^{-1}(x), u)\, f(u)\, du = V_\zeta T_\eta f(x),$$
as desired. ∎
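To make the definitions above concrete, here is a minimal numerical sketch (an illustrative assumption, not part of the paper): the circle is discretized by a uniform grid, the corresponding graph Laplacian stands in for the Laplace-Beltrami operator, and a spectral integral operator acts by filtering the eigenvalues. The names `spectral_operator` and `eta` are hypothetical.

```python
import numpy as np

# Illustrative discretization: on a uniform N-point grid on the circle, the
# (unnormalized) graph Laplacian L stands in for the Laplace-Beltrami operator.
N = 64
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
lam, phi = np.linalg.eigh(L)  # eigenvalues lam_k, eigenvectors phi_k (columns)

def spectral_operator(eta, f):
    """Apply T_eta f = sum_k eta(lam_k) <f, phi_k> phi_k."""
    f_hat = phi.T @ f                # "Fourier" coefficients of f
    return phi @ (eta(lam) * f_hat)  # multiply by eta and invert

f = np.random.randn(N)
g = spectral_operator(lambda l: np.exp(-l), f)  # heat-kernel-style low-pass

# With sup |eta| <= 1, the operator is nonexpansive on the grid:
assert np.linalg.norm(g) <= np.linalg.norm(f) + 1e-10
```

Applying the constant filter $\eta \equiv 1$ recovers $f$ itself, mirroring the Fourier multiplier property $\widehat{T_\eta f}(k) = \eta(\lambda_k)\widehat{f}(k)$.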
We will consider frame analysis operators that are constructed using a countable family of spectral integral operators. We call a spectral function $g$ a low-pass filter if
$$g(0) \neq 0 \quad \text{and} \quad \lim_{\lambda \to \infty} g(\lambda) = 0.$$
Similarly, we call a spectral function $h$ a high-pass filter if
$$h(0) = 0.$$
We will assume that we have a low-pass filter $g$ and a family of high-pass filters $\{h_j\}_{j \in \mathbb{Z}}$ which satisfy a Littlewood-Paley type condition
$$A \leq m_\kappa \Big( |g(\bar{\lambda}_\kappa)|^2 + \sum_{j} |h_j(\bar{\lambda}_\kappa)|^2 \Big) \leq B \quad \text{for all } \kappa \geq 0, \qquad (8)$$
for some $0 < A \leq B < \infty$. A frame analysis operator $U$ is then defined by
$$U f = \big( T_g f, \{ T_{h_j} f \}_{j} \big),$$
where $T_g$ and $T_{h_j}$ are the spectral integral operators corresponding to $g$ and $h_j$, respectively. The Littlewood-Paley type condition (8) implies that the filters used to define $U$ evenly cover the frequencies of $\mathbf{L}^2(\mathcal{M})$. The following proposition shows that if $g$ and $\{h_j\}_j$ satisfy (8), then $U$ has corresponding upper and lower frame bounds.
Under the Littlewood-Paley condition (8), $U$ is a bounded operator from $\mathbf{L}^2(\mathcal{M})$ to $\bigoplus_j \mathbf{L}^2(\mathcal{M})$ and
$$A\, \|f\|_2^2 \leq \|U f\|^2 \leq B\, \|f\|_2^2.$$
In particular, if $A = B = 1$, then $U$ is an isometry.
We then group together the terms corresponding to each eigenspace, apply (8), and then use Parseval’s Identity. Note that for each $\kappa$, the eigenvalue $\bar{\lambda}_\kappa$ appears $m_\kappa$ times in the above sum, once for each $\varphi \in E_\kappa$. It is for this reason that the multiplicity term $m_\kappa$ is needed in (8). For full details of the proof, see Appendix A.
3.1 Geometric Wavelet Transforms
As an important example of the frame analysis operator $U$, we define a geometric wavelet transform in terms of a single low-pass filter. Given a spectral function $\eta$, we define the dilation of $\eta$ at scale $2^j$ by
$$\eta_j(\lambda) = \eta(2^j \lambda),$$
and a normalized spectral function $\tilde{\eta}$ by
$$\tilde{\eta}(\bar{\lambda}_\kappa) = \frac{\eta(\bar{\lambda}_\kappa)}{\sqrt{m_\kappa}}.$$
The normalization factor of $m_\kappa^{-1/2}$ will ensure that the wavelet frame constructed below satisfies the Littlewood-Paley condition (8).
Let $g$ be a nonincreasing low-pass filter with $g(0) = 1$. In a manner analogous to classical wavelet constructions (see, for example, Meyer, 1993), we define a high-pass filter $h_j$ at each scale $2^j$ by
$$h_j(\lambda) = \Big( |g(2^{j-1}\lambda)|^2 - |g(2^{j}\lambda)|^2 \Big)^{1/2},$$
and observe that
$$h_j(0) = 0,$$
so that each $h_j$ is indeed a high-pass filter.
To emphasize the connection with classical wavelet transforms, we denote $A_J = T_{\tilde{g}_J}$ and $W_j = T_{\tilde{h}_j}$. The geometric wavelet transform is then defined as
$$\mathcal{W}_J f = \big( A_J f, \{ W_j f \}_{j \leq J} \big).$$
The following proposition can be proved by observing that
$$m_\kappa \Big( |\tilde{g}_J(\bar{\lambda}_\kappa)|^2 + \sum_{j \leq J} |\tilde{h}_j(\bar{\lambda}_\kappa)|^2 \Big) = |g(2^J \bar{\lambda}_\kappa)|^2 + \sum_{j \leq J} |h_j(\bar{\lambda}_\kappa)|^2$$
forms a telescoping sum equal to $|g(0)|^2 = 1$, and that, therefore, $\mathcal{W}_J$ satisfies the Littlewood-Paley condition (8) with $A = B = 1$. We give a complete proof in Appendix B.
For all $J \in \mathbb{Z}$, the geometric wavelet transform $\mathcal{W}_J$ is an isometry from $\mathbf{L}^2(\mathcal{M})$ into $\bigoplus_{j \leq J} \mathbf{L}^2(\mathcal{M})$, i.e.,
$$\|A_J f\|_2^2 + \sum_{j \leq J} \|W_j f\|_2^2 = \|f\|_2^2.$$
An important example is $g(\lambda) = e^{-\lambda}$. In this case, the low-pass kernel corresponding to $g_J(\lambda) = e^{-2^J \lambda}$ is the heat kernel on $\mathcal{M}$ at time $t = 2^J$, the kernel corresponding to $\tilde{g}_J$ is a normalized version of the heat kernel, and the corresponding wavelet operators are similar to the diffusion wavelets introduced in Coifman and Maggioni (2006). The key difference between the construction presented here and the original diffusion wavelet construction is the normalization of the spectral function according to the multiplicity of each eigenvalue, which is needed to obtain an isometric wavelet transform.
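As a sanity check on this example, the following sketch (illustrative only, with $g(\lambda) = e^{-\lambda}$ and hypothetical truncation parameters) numerically verifies that the dyadic filter bank telescopes to the constant 1, which is the content of the Littlewood-Paley bounds before the multiplicity normalization.

```python
import numpy as np

# Telescoping check for the diffusion low-pass g(lam) = exp(-lam):
# h_j(lam)^2 = g(2^(j-1) lam)^2 - g(2^j lam)^2, so the partial sums collapse.
g = lambda lam: np.exp(-lam)

def h(j, lam):
    return np.sqrt(g(2.0 ** (j - 1) * lam) ** 2 - g(2.0 ** j * lam) ** 2)

lam = np.linspace(0.0, 20.0, 500)  # sample "eigenvalues"
J, j0 = 3, -30                     # coarsest scale 2^J and scale truncation
lp = g(2.0 ** J * lam) ** 2 + sum(h(j, lam) ** 2 for j in range(j0, J + 1))

# The sum telescopes to g(2^(j0-1) lam)^2, which tends to g(0)^2 = 1:
assert np.allclose(lp, 1.0, atol=1e-6)
```

The truncation index `j0` only needs to be small enough that the residual $g(2^{j_0-1}\lambda)^2 - 1$ is negligible on the sampled eigenvalue range.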
4 The Geometric Scattering Transform
The geometric scattering transform is a nonlinear operator constructed through an alternating cascade of spectral integral operators and nonlinearities. Let $M f = |f|$ be the modulus operator, and for each $j \leq J$ we let
$$U_j f = M W_j f = |W_j f|.$$
For $m \geq 1$, let $\mathcal{P}_m = \{ p = (j_1, \ldots, j_m) : j_i \leq J \}$ be the set of all paths of the form $p = (j_1, \ldots, j_m)$, and let $\mathcal{P} = \bigcup_{m} \mathcal{P}_m$ denote the set of all finite paths. For $p = (j_1, \ldots, j_m) \in \mathcal{P}$, we define an operator $U[p]$, called the scattering propagator, by
$$U[p] f = U_{j_m} U_{j_{m-1}} \cdots U_{j_1} f.$$
In a slight abuse of notation, we include the “empty path” $p = \emptyset$ of length zero in $\mathcal{P}$ and define $U[p]$ to be the identity operator if $p$ is the empty path.
The scattering transform over a path $p$ is defined as the integration of $U[p] f$ against the low-pass integral operator $A_J$, i.e., $S[p] f = A_J U[p] f$. The operator $S$ given by
$$S f = \big\{ S[p] f \big\}_{p \in \mathcal{P}}$$
is referred to as the geometric scattering transform. It is a mathematical model for geometric ConvNets that are constructed out of spectral filters. The following proposition shows that $S$ is nonexpansive if the Littlewood-Paley condition holds with $B = 1$, which we assume for the rest of the paper. The proof is nearly identical to that of Proposition 2.5 of Mallat (2012), and is thus omitted.
If the Littlewood-Paley condition (8) holds with $B = 1$, then
$$\|S f_1 - S f_2\| \leq \|f_1 - f_2\|_2 \quad \text{for all } f_1, f_2 \in \mathbf{L}^2(\mathcal{M}).$$
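The cascade just described can be sketched in a few lines on the same kind of grid discretization used earlier. All of this is an illustrative stand-in: the grid Laplacian replaces $\Delta$, the filter bank is truncated to finitely many scales, and the function names (`T`, `scattering`) are hypothetical.

```python
import numpy as np
from itertools import product

# Discrete sketch of the scattering cascade S[p] f = A_J |W_jm ... |W_j1 f| ... |.
N = 64
I = np.eye(N)
Lap = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
lam, phi = np.linalg.eigh(Lap)

def T(eta_vals, f):  # spectral integral operator with filter values eta_vals
    return phi @ (eta_vals * (phi.T @ f))

J = 2
g = lambda l: np.exp(-l)
low = g(2.0 ** J * lam)  # low-pass A_J
high = {j: np.sqrt(np.maximum(g(2.0 ** (j - 1) * lam) ** 2
                              - g(2.0 ** j * lam) ** 2, 0.0))
        for j in range(-2, J + 1)}  # truncated high-pass bank W_j

def scattering(f, depth=2):
    """Return {path p: A_J U[p] f} for all paths up to the given depth."""
    coeffs = {(): T(low, f)}
    for m in range(1, depth + 1):
        for path in product(sorted(high), repeat=m):
            u = f
            for j in path:
                u = np.abs(T(high[j], u))  # U_j = modulus after wavelet filtering
            coeffs[path] = T(low, u)       # final low-pass averaging
    return coeffs

S1 = scattering(np.random.randn(N))
```

Because this truncated bank still satisfies $B \leq 1$ on the grid, the map is nonexpansive over the computed paths, mirroring Proposition 4.1.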
The scattering transform is invariant to the action of the isometry group on the inputted signal up to a factor that depends upon the decay of the low-pass spectral function $g$. If $|g(\lambda)| \leq C e^{-t\lambda}$ for some constant $C > 0$ and some $t > 0$ (e.g., the heat kernel), then the following theorem establishes isometric invariance up to the scale $t$.
Let $\zeta \in \mathrm{Isom}(\mathcal{M})$, and suppose $|g(\lambda)| \leq C e^{-t\lambda}$ for some constant $C > 0$ and some $t > 0$. Then there exists a constant $C(\mathcal{M}, t) > 0$, which decreases as the scale $t$ increases, such that
$$\|S f - S V_\zeta f\| \leq C(\mathcal{M}, t)\, \|\zeta\|_\infty\, \|f\|_2.$$
There exists a constant $C(\mathcal{M})$ such that, for every spectral integral operator $T_\eta$ and for every $\zeta \in \mathrm{Diff}(\mathcal{M})$,
$$\|V_\zeta T_\eta f - T_\eta f\|_2 \leq C(\mathcal{M})\, \|\nabla_x K_\eta\|_\infty\, \|\zeta\|_\infty\, \|f\|_2.$$
Moreover, if $|\eta(\lambda)| \leq C e^{-t\lambda}$ for some constant $C > 0$ and some $t > 0$, then there exists a constant $C(\mathcal{M}, t)$ such that for any $\zeta \in \mathrm{Diff}(\mathcal{M})$,
$$\|V_\zeta T_\eta f - T_\eta f\|_2 \leq C(\mathcal{M}, t)\, \|\zeta\|_\infty\, \|f\|_2.$$
5 Stability to Diffeomorphisms
In the previous section, we showed that the scattering transform is invariant to the action of isometries up to a factor depending on the scale $t$. In Mallat (2012), it is shown that the Euclidean scattering transform is stable to the action of certain diffeomorphisms that are close to being translations. In an effort to prove analogous results for the geometric scattering transform, we introduce three quantities, $A_1(\zeta)$, $A_2(\zeta)$, and $A_3(\zeta)$, which measure how far away a diffeomorphism $\zeta$ is from being an isometry. If $\zeta$ is an isometry, then by definition $r(\zeta(x), \zeta(y)) = r(x, y)$ for all $x$ and $y$ in $\mathcal{M}$. This motivates us to define
$$A_1(\zeta) = \sup_{x, y \in \mathcal{M}} \big| r(\zeta(x), \zeta(y)) - r(x, y) \big|.$$
It is also known (see, for example, Kobayashi and Nomizu, 1963) that isometries are volume preserving, in the sense that $|\zeta^{-1}(\Omega)| = |\Omega|$ for all measurable $\Omega \subseteq \mathcal{M}$. Therefore, we introduce the quantity
$$A_2(\zeta) = \sup_{\Omega \subseteq \mathcal{M}} \left| \frac{|\zeta^{-1}(\Omega)|}{|\Omega|} - 1 \right|,$$
which measures how much $\zeta$ distorts volumes. Lastly,
$$A_3(\zeta) = \inf_{\zeta' \in \mathrm{Isom}(\mathcal{M})} \sup_{x \in \mathcal{M}} r(\zeta(x), \zeta'(x))$$
measures the distance from $\zeta$ to the isometry group $\mathrm{Isom}(\mathcal{M})$ in the $\|\cdot\|_\infty$ norm. We note that if $\zeta = \zeta_1 \circ \zeta_2$ with $\zeta_1 \in \mathrm{Isom}(\mathcal{M})$, then
$$A_3(\zeta) \leq \sup_{x \in \mathcal{M}} r\big(\zeta_1(\zeta_2(x)), \zeta_1(x)\big) = \|\zeta_2\|_\infty.$$
Thus, $A_3(\zeta)$ will be small if $\zeta$ can be factored into a global isometry $\zeta_1$ and a small perturbation $\zeta_2$.
A key step in showing that the Euclidean scattering transform is stable to the action of certain diffeomorphisms is a bound on the commutator norm $\|[V_\zeta, T_\eta]\|$. This bound is then combined with a bound on $\|V_\zeta A_J - A_J\|$, which is obtained by methods similar to those used to prove Theorem 4.2, and with the triangle inequality, to produce a stability bound. This motivates us to study the commutator of spectral integral operators with $V_\zeta$ for diffeomorphisms that are close to being isometries. To this end, in the two theorems below, we show that $\|[V_\zeta, T_\eta]\|$ can be controlled in terms of the quantities $A_1(\zeta)$, $A_2(\zeta)$, and $A_3(\zeta)$, as well as $\eta$-dependent quantities.
There exists a constant $C(\mathcal{M})$ such that if $T_\eta$ is a spectral integral operator and $\zeta \in \mathrm{Diff}(\mathcal{M})$, then for all $f \in \mathbf{L}^2(\mathcal{M})$, the commutator norm $\|[V_\zeta, T_\eta] f\|_2$ is bounded by $C(\mathcal{M})\, \|f\|_2$ times a sum of terms proportional to $A_2(\zeta)$ and $A_3(\zeta)$, with constants depending on the kernel $K_\eta$.
On certain manifolds, we can prove a modified version of Theorem 5.1 that replaces the term involving $A_3(\zeta)$ with one involving $A_1(\zeta)$.
A manifold $\mathcal{M}$ is said to be homogeneous (with respect to the action of the isometry group) if, for any two points $x, y \in \mathcal{M}$, there exists an isometry $\zeta \in \mathrm{Isom}(\mathcal{M})$ such that $\zeta(x) = y$.
A manifold $\mathcal{M}$ is said to be two-point homogeneous if, for any two pairs of points $(x_1, y_1)$ and $(x_2, y_2)$ such that $r(x_1, y_1) = r(x_2, y_2)$, there exists an isometry $\zeta \in \mathrm{Isom}(\mathcal{M})$ such that $\zeta(x_1) = x_2$ and $\zeta(y_1) = y_2$.
If $\mathcal{M}$ is two-point homogeneous, then there exists a constant $C(\mathcal{M})$ such that if $T_\eta$ is a spectral integral operator and $\zeta \in \mathrm{Diff}(\mathcal{M})$, then for all $f \in \mathbf{L}^2(\mathcal{M})$, the commutator norm $\|[V_\zeta, T_\eta] f\|_2$ is bounded by $C(\mathcal{M})\, \|f\|_2$ times a sum of terms proportional to $A_1(\zeta)$ and $A_2(\zeta)$, with constants depending on the kernel $K_\eta$.
The proofs of Theorems 5.1 and 5.2 are in Appendices D and E. The keys to these proofs are Lemmas 5.3, 5.4, and 5.5 below, as well as the fact that spectral kernels are radial on two-point homogeneous manifolds. For proofs of these lemmas, please see Appendices F, G, and H. We note that the assumption that the kernel is radial, made in Lemma 5.4, is equivalent to the isometry-invariance assumption made in Lemma 5.3 on two-point homogeneous manifolds, but is a stronger assumption in general.
There exists a constant $C(\mathcal{M})$ such that if $\zeta \in \mathrm{Diff}(\mathcal{M})$ and $T$ is a kernel integral operator on $\mathbf{L}^2(\mathcal{M})$ with a kernel $K$ that is invariant to the action of isometries, i.e., $K(\zeta'(x), \zeta'(y)) = K(x, y)$ for all $\zeta' \in \mathrm{Isom}(\mathcal{M})$, then $\|[V_\zeta, T]\|$ is bounded by $C(\mathcal{M})$ times terms controlled by $A_2(\zeta)$ and $A_3(\zeta)$, with constants depending on $K$.
There exists a constant $C(\mathcal{M})$ such that if $T$ is a kernel integral operator with a radial kernel $K(x, y) = k(r(x, y))$ and $\zeta \in \mathrm{Diff}(\mathcal{M})$, then $\|[V_\zeta, T]\|$ is bounded by $C(\mathcal{M})$ times terms controlled by $A_1(\zeta)$ and $A_2(\zeta)$, with constants depending on $k$.
There exists a constant $C(\mathcal{M})$ such that $\|\varphi_k\|_\infty \leq C(\mathcal{M})\, (1 + \lambda_k)^{(d-1)/4}$ for all $k \geq 0$. As a consequence, if $K_\eta$ is a spectral kernel, then
$$\|K_\eta\|_\infty \leq C(\mathcal{M}) \sum_{\kappa \geq 0} m_\kappa\, |\eta(\bar{\lambda}_\kappa)|\, (1 + \bar{\lambda}_\kappa)^{(d-1)/2}.$$
Furthermore, if $\mathcal{M}$ is homogeneous, then $P_\kappa(x, x) = m_\kappa / \mathrm{vol}(\mathcal{M})$ for all $x \in \mathcal{M}$, and
$$\|K_\eta\|_\infty \leq \frac{1}{\mathrm{vol}(\mathcal{M})} \sum_{\kappa \geq 0} m_\kappa\, |\eta(\bar{\lambda}_\kappa)|.$$
The generalization of convolutional neural networks to non-Euclidean domains has been the focus of a significant body of work recently, generally referred to as geometric deep learning. In the process, various challenges have arisen, both computational and mathematical, with the fundamental question being whether the success of ConvNets on traditional signals can be replicated in more complicated geometric settings such as graphs and manifolds. We have presented geometric scattering as a mathematical framework for analyzing the theoretical potential of deep convolutional networks in such settings, in a manner analogous to the approach used by Mallat (2010, 2012) in the Euclidean setting. Our construction is also closely related to recent attempts at defining finite-graph scattering (Gama et al., 2018; Zou and Lerman, 2018; Gao et al., 2018). Our construction is defined in a continuous setting and, therefore, we view it as a more direct generalization of the original Euclidean scattering transform. Datasets defined on certain graphs can then be viewed as finite subsamples of the underlying manifold, and the graph scattering transform presented in Gao et al. (2018) can be viewed as a discretized version of the construction presented here. Finally, the stability results provided here not only establish the potential of deep cascades of designed filters to enable rigorous analysis in geometric deep learning; they also provide insights into the challenges that arise in non-Euclidean settings, which we expect will motivate further investigation in future work.
This research was partially funded by: IVADO (l’institut de valorisation des données) [G.W.]; the Alfred P. Sloan Fellowship (grant FG-2016-6607), the DARPA Young Faculty Award (grant D16AP00117), and NSF grant 1620216 [M.H.].
References
- Andén and Mallat [2011] Joakim Andén and Stéphane Mallat. Multiscale scattering for audio classification. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), pages 657–662, 2011.
- Andén and Mallat [2014] Joakim Andén and Stéphane Mallat. Deep scattering spectrum. IEEE Transactions on Signal Processing, 62(16):4114–4128, August 2014.
- Andén et al. [2018] Joakim Andén, Vincent Lostanlen, and Stéphane Mallat. Classification with joint time-frequency scattering. arXiv:1807.08869, 2018.
- Bérard et al. [1994] Pierre Bérard, Gérard Besson, and Sylvestre Gallot. Embedding Riemannian manifolds by their heat kernel. Geometric and Functional Analysis, 4(4):373–398, 1994.
- Bronstein et al. [2017] Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
- Brumwell et al. [2018] Xavier Brumwell, Paul Sinz, Kwang Jin Kim, Yue Qi, and Matthew Hirn. Steerable wavelet scattering for 3D atomic systems with application to Li-Si energy prediction. In NeurIPS Workshop on Machine Learning for Molecules and Materials, 2018. arXiv:1812.02320.
- Bruna and Mallat [2011] Joan Bruna and Stéphane Mallat. Classification with scattering operators. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1561–1566, 2011.
- Bruna and Mallat [2013] Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872–1886, August 2013.
- Bruna et al. [2014] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and deep locally connected networks on graphs. In International Conference on Learning Representations (ICLR), 2014.
- Chudácek et al. [2014] Václav Chudácek, Ronen Talmon, Joakim Andén, Stéphane Mallat, Ronald R. Coifman, Patrice Abry, and Muriel Doret. Low dimensional manifold embedding for scattering coefficients of intrapartum fetal heart rate variability. In 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 6373–6376, 2014.
- Coifman and Lafon [2006a] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21:5–30, 2006a.
- Coifman and Lafon [2006b] Ronald R. Coifman and Stéphane Lafon. Geometric harmonics: A novel tool for multiscale out-of-sample extension of empirical functions. Applied and Computational Harmonic Analysis, 21(1):31–52, July 2006b.
- Coifman and Maggioni [2006] Ronald R. Coifman and Mauro Maggioni. Diffusion wavelets. Applied and Computational Harmonic Analysis, 21(1):53–94, 2006.
- Defferrard et al. [2016] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems 29, pages 3844–3852, 2016.
- Eickenberg et al. [2017] Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, and Stéphane Mallat. Solid harmonic wavelet scattering: Predicting quantum molecular energy from invariant descriptors of 3D electronic densities. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 6540–6549, 2017.
- Eickenberg et al. [2018] Michael Eickenberg, Georgios Exarchakis, Matthew Hirn, Stéphane Mallat, and Louis Thiry. Solid harmonic wavelet scattering for predictions of molecule properties. Journal of Chemical Physics, 148:241732, 2018.
- Gama et al. [2018] Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Diffusion scattering transforms on graphs. arXiv:1806.08829, 2018.
- Gao et al. [2018] Feng Gao, Guy Wolf, and Matthew Hirn. Graph classification with geometric scattering. arXiv:1810.03068, 2018.
- Giné [1975] Evarist Giné. The addition formula for the eigenfunctions of the Laplacian. Advances in Mathematics, 18(1):102–107, 1975.
- Hirn et al. [2017] Matthew Hirn, Stéphane Mallat, and Nicolas Poilvert. Wavelet scattering regression of quantum chemical energies. Multiscale Modeling and Simulation, 15(2):827–863, 2017.
- Hörmander [1968] Lars Hörmander. The spectral function of an elliptic operator. Acta Mathematica, 121:193–218, 1968.
- Kobayashi and Nomizu [1963] Shoshichi Kobayashi and Katsumi Nomizu. Foundations of Differential Geometry, Volume 1. Number 15 in Interscience Tracts in Pure and Applied Math. John Wiley and Sons Inc., New York, 1963.
- Levie et al. [2017] Ron Levie, Federico Monti, Xavier Bresson, and Michael M. Bronstein. CayleyNets: Graph convolutional neural networks with complex rational spectral filters. arXiv:1705.07664, 2017.
- Mallat [2010] Stéphane Mallat. Recursive interferometric representations. In 18th European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, 2010.
- Mallat [2012] Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, October 2012.
- Meyer [1993] Yves Meyer. Wavelets and Operators, volume 1. Cambridge University Press, 1993.
- Oyallon and Mallat [2015] Edouard Oyallon and Stéphane Mallat. Deep roto-translation scattering for object classification. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2865–2873, 2015.
- Shi and Xu [2010] Yiqian Shi and Bin Xu. Gradient estimate of an eigenfunction on a compact Riemannian manifold without boundary. Annals of Global Analysis and Geometry, 38:21–26, 2010.
- Shuman et al. [2013] David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
- Sifre and Mallat [2012] Laurent Sifre and Stéphane Mallat. Combined scattering for rotation invariant texture analysis. In Proceedings of the 20th European Symposium on Artificial Neural Networks (ESANN 2012), 2012.
- Sifre and Mallat [2013] Laurent Sifre and Stéphane Mallat. Rotation, scaling and deformation invariant scattering for texture discrimination. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013.
- Sifre and Mallat [2014] Laurent Sifre and Stéphane Mallat. Rigid-motion scattering for texture classification. arXiv:1403.1687, 2014.
- Tenenbaum et al. [2000] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
- Turk and Levoy [2005] Greg Turk and Marc Levoy. The Stanford bunny, 2005.
- van der Maaten and Hinton [2008] Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
- Wolf et al. [2014] Guy Wolf, Stéphane Mallat, and Shihab A. Shamma. Audio source separation with time-frequency velocities. In 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Reims, France, 2014.
- Wolf et al. [2015] Guy Wolf, Stéphane Mallat, and Shihab A. Shamma. Rigid motion model for audio source separation. IEEE Transactions on Signal Processing, 64(7):1822–1831, 2015.
- Yi et al. [2017] Li Yi, Hao Su, Xingwen Guo, and Leonidas Guibas. SyncSpecCNN: Synchronized spectral CNN for 3D shape segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Zou and Lerman [2018] Dongmian Zou and Gilad Lerman. Graph convolutional neural networks via scattering. arXiv:1804.00099, 2018.
Appendix A Proof of Proposition 3.2
We note that
$$\|U f\|^2 = \|T_g f\|_2^2 + \sum_{j} \|T_{h_j} f\|_2^2 = \sum_{\kappa \geq 0} \Big( |g(\bar{\lambda}_\kappa)|^2 + \sum_{j} |h_j(\bar{\lambda}_\kappa)|^2 \Big) \|\Pi_\kappa f\|_2^2,$$
where $\Pi_\kappa$ denotes projection onto the eigenspace $E_\kappa$. Therefore, applying the Fourier multiplier property $\widehat{T_\eta f}(k) = \eta(\lambda_k)\, \widehat{f}(k)$ to $T_g f$ and to each $T_{h_j} f$, and noting that each $\bar{\lambda}_\kappa$ appears $m_\kappa$ times when the sum over $k$ is grouped by eigenspace, we see that the result follows from Parseval’s Identity and the assumption that $g$ and $\{h_j\}_j$ satisfy the Littlewood-Paley condition (8).
Appendix B Proof of Proposition 3.3
We will show that for every $\lambda \geq 0$,
$$|g(2^J \lambda)|^2 + \sum_{j \leq J} |h_j(\lambda)|^2 = \lim_{j \to -\infty} |g(2^{j} \lambda)|^2 = |g(0)|^2 = 1.$$
The result will then follow from Proposition 3.2.