Wall Stress Estimation of Cerebral Aneurysm based on Zernike Convolutional Neural Networks
Abstract
Convolutional neural networks (ConvNets) have demonstrated an exceptional capacity to discern visual patterns from digital images and signals. Unfortunately, such powerful ConvNets do not generalize well to arbitrary-shaped manifolds, where the data representation does not fit into a tensor-like grid. Hence, many fields of science and engineering, where data points possess some manifold structure, cannot enjoy the full benefits of the recent advances in ConvNets. The aneurysm wall stress estimation problem introduced in this paper is one of many such problems. The problem is well known to be of paramount clinical importance, yet traditional ConvNets cannot be applied due to the manifold structure of the data, nor do the state-of-the-art geometric ConvNets perform well. Motivated by this, we propose a new geometric ConvNet method named ZerNet, which builds upon our novel mathematical generalization of the convolution and pooling operations on manifolds. Our study shows that ZerNet outperforms the other state-of-the-art geometric ConvNets in terms of accuracy.
1 Introduction
Many areas of science and engineering have to deal with geometry data of some kind, because shape often encompasses a great deal of critical information for understanding phenomena. Such problems often boil down to identifying latent geometric patterns behind a variety of shapes and correlating those patterns with certain physical phenomena. In this regard, convolutional neural networks, also known as CNNs or ConvNets, have demonstrated a phenomenal capacity for capturing and recognizing important visual features from images and signals, enabling unprecedented advances in machine learning and artificial intelligence research.
However, as already pointed out by Bronstein et al. bronstein2017geometric (), geometric features on free-form surfaces are not trivial to describe numerically and, hence, do not enjoy the luxury of such powerful ConvNets. That is, in contrast to plain signals or images, where a tensor-like grid structure is readily available, free-form surfaces, or more formally non-Euclidean manifolds, do not possess such a grid structure, rendering significant difficulties in consistent data representation. For this reason, many of the core operations of ConvNets, including the very notion of function convolution, cannot be nicely generalized, which is the current bottleneck preventing the widespread use of ConvNets in computational geometry and many other relevant research areas.
The problem we address in this paper, estimating the wall stress distribution on a cerebral aneurysm surface, is one such problem that cannot be solved using traditional ConvNets. We hypothesize that the magnitude of the wall stress applied to a cerebral aneurysm is correlated with the local surface features of the aneurysm and the semantic context of how these features are aligned. If this hypothesis is true, the magnitude of the wall stress on an aneurysm could be estimated fairly quickly based on its geometric shape alone. As such, the use of computationally heavy, finite-element stress analysis methods would become unnecessary, which presents substantial clinical advantages in patient care. However, testing this hypothesis requires building a statistical model that accounts for surface features, which, unfortunately, is not possible with traditional ConvNets because of the manifold structure of the data, as previously mentioned.
Therefore, we introduce in this paper a novel concept of Zernike convolution on manifold surfaces in an attempt to rigorously extend ConvNets to surface manifolds. Although we target a rather specific application, aneurysm wall stress estimation, we keep the mathematical concepts and theorems behind our method general, such that it is seamlessly scalable to other problems. The main scientific contributions of our work can be summarized as follows:

A novel concept of Zernike convolution generalizing the (planar) convolution operation is proposed. In particular, by using the Zernike polynomials as the function bases, the tensor field defined on the manifold is locally approximated. By proving that the function convolution is equivalent to the inner product between the Zernike coefficients, a simple and efficient convolution operation is rigorously defined on manifold domains.

Compared with the current state-of-the-art methods for manifold convolution masci2015geodesic (); boscaini2016learning (); monti2017geometric (), our approach comes with a smaller number of neural weights, as we demonstrate in Section 3.4, benefiting computational efficiency.

We propose ZerNet: a neural network architecture that generalizes ConvNets to solve feature-based regression problems correlated to geometric properties residing in a manifold structure, showing superior accuracy compared with the other state-of-the-art geometric ConvNets.
2 Related works
One of the pioneering works in this area is due to Bruna et al. bruna2013spectral (); henaff2015deep (). They generalized the convolution operation lecun1998gradient () to graph-structured data by representing the graph convolution operation in terms of spectral bases of the graph Laplacian. Defferrard et al. defferrard2016convolutional () share a similar idea of using spectral bases for the definition of graph convolution. However, they introduced Chebyshev polynomials to avoid the explicit computation of the graph-Laplacian eigenbases, which can be highly time-consuming for large-scale problems. Later, Kipf et al. kipf2016semi () further improved this strategy by defining more concise filters based only on the one-ring neighborhoods of the graph. In the meantime, there has also been a branch of research that attempts to generalize ConvNets based on a spatial formulation duvenaud2015convolutional (); niepert2016learning (); hechtlinger2017generalization (), instead of using spectral bases. These spatial approaches generalize the convolution as an inner product of the parameters (filter coefficients) with spatially close neighbors, relying on the graph's spatial structure. The aforementioned graph-based generalizations of ConvNets act as paradigms in the attempts to generalize ConvNets to non-Euclidean structured data, providing some insight toward the application of ConvNets to surface geometry data.
With a more direct connection to surface geometry processing, several manifold-based approaches have also been reported. Similar to the spectral formulations above, Boscaini et al. boscaini2015learning () utilized the windowed Fourier transform shuman2016vertex () to define the convolution on localized spectral bases of a manifold. However, a fundamental issue of the spectral formulation of convolution is its domain dependence: the spectral filters learned with respect to the Fourier basis on one domain cannot be applied to another domain in a trivial way. Although, in boscaini2015learning (), some spatial interpretation on the manifold is given by the localized spectral filters, which to some extent addresses this issue, shift invariance, a critical property that should be preserved through the convolution operation, remained unaddressed.
Therefore, alternative approaches generalizing the convolution purely in the spatial domain on manifolds have emerged. Masci et al. masci2015geodesic () proposed Geodesic CNN (GCNN), the first spatially formulated geometric ConvNet on manifolds. In masci2015geodesic (), the filters are applied to an extracted patch represented as a combination of Gaussian weights defined on local geodesic polar coordinates. As a follow-up work, Boscaini et al. boscaini2016learning () adopted an alternative approach to local patch construction by using anisotropic heat kernels andreux2014anisotropic (). Monti et al. monti2017geometric () further generalized the patch construction by introducing learnable parameters to parametrize the weighting function defined on a local system of pseudo-coordinates. The strength of spatially based ConvNets on manifolds is the ability to adapt the model across different domains while preserving shift invariance during the convolution operation. This property is crucial for applications in geometry processing, satisfying the requirement that a ConvNet model trained on one shape can be applied to another.
3 Our approach
In this section, we introduce the new concept of Zernike convolution to generalize ConvNets on manifolds. A conventional convolution operation defined on an image domain amounts to extracting a local patch of pixels and taking the weighted sum over a template, or convolutional filter. The inherent grid-like structure of image domains (i.e., Euclidean domains) provides a consistent global parametrization, which allows the application of a template in a consistent orientation across the pixels. This, however, is not necessarily true on manifold domains, where the Euclidean grid structure is not available. Hence, the main challenge in this regard is how to formulate a strategy to apply convolutional filters in a consistent but effective manner.
To this end, we propose the following approach. First, for each point sampled on a given manifold, we define a local patch around the sampled point. Then, at each point, the tensor field defined on the manifold is locally approximated by using the Zernike polynomials as the function bases (Figure 1). Under this setting, we prove that the function convolution becomes merely the inner product between the Zernike coefficients, permitting a simple and efficient computation. Further, under this setting, a change of patch orientation can be achieved by simply multiplying by a typical $2 \times 2$ planar rotation matrix, which facilitates the angular operations (e.g., angular pooling) that mitigate the global parametrization issue.
3.1 Zernike polynomials
By definition, Zernike polynomials von1934beugungstheorie () form an orthogonal polynomial sequence defined over the unit disk. The Zernike polynomials are separated into even and odd polynomials, denoted as $Z_n^m$ and $Z_n^{-m}$ respectively:
$$Z_n^m(\rho, \varphi) = R_n^m(\rho)\cos(m\varphi), \qquad Z_n^{-m}(\rho, \varphi) = R_n^m(\rho)\sin(m\varphi) \quad (1)$$
where $m$ and $n$ are nonnegative integers with $n \ge m$; $\rho$ ($0 \le \rho \le 1$) is the radial distance; and $\varphi$ is the azimuthal angle. Furthermore, the Zernike radial polynomial $R_n^m$ is defined, for $n-m$ even, as:
$$R_n^m(\rho) = \sum_{k=0}^{(n-m)/2} \frac{(-1)^k (n-k)!}{k!\left(\frac{n+m}{2}-k\right)!\left(\frac{n-m}{2}-k\right)!}\, \rho^{n-2k} \quad (2)$$
and $R_n^m(\rho) = 0$ for $n-m$ odd.
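As a concrete reference, the radial polynomial in (2) can be evaluated directly from the factorial formula. The following is a minimal Python sketch (our own illustration, not part of the paper's implementation; the function name is ours):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Zernike radial polynomial R_n^m(rho), for integers n >= m >= 0."""
    if (n - m) % 2 != 0:
        return 0.0  # R_n^m vanishes when n - m is odd
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

For example, `zernike_radial(2, 0, rho)` evaluates $R_2^0(\rho) = 2\rho^2 - 1$, and every $R_n^m$ evaluates to 1 at $\rho = 1$.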
In order to achieve a sequence of orthonormal bases on the unit disk, the Zernike polynomials can be further normalized with a normalization factor $N_n^m$:
$$\tilde{Z}_n^{\pm m}(\rho,\varphi) = N_n^m\, Z_n^{\pm m}(\rho,\varphi), \qquad N_n^m = \sqrt{\frac{2(n+1)}{1+\delta_{m0}}} \quad (3)$$
where $\delta_{m0}$ denotes the Kronecker delta.
The normalized Zernike polynomials are useful for decomposing complex functions or data due to their orthonormality over the unit disk. Any function $f(\rho,\varphi)$ defined on the unit-disk domain can be expressed as a sum of Zernike modes:
$$f(\rho,\varphi) = \sum_{n,m} c_n^m\, \tilde{Z}_n^m(\rho,\varphi) \quad (4)$$
For simplicity, in the rest of the paper we use the term "Zernike functions" for the normalized Zernike polynomials, denoted $\mathcal{Z}_k$ with a single index $k$ corresponding to a pair $(n,m)$ in (4); then (4) can be expressed as:
$$f = \sum_k c_k\, \mathcal{Z}_k \quad (5)$$
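To illustrate the decomposition in (4)-(5) numerically, a smooth field sampled on the unit disk can be fit with a truncated Zernike basis by least squares (an illustrative NumPy sketch with our own helper names, not the paper's code):

```python
import numpy as np
from math import factorial, sqrt

def radial(n, m, rho):
    # Zernike radial polynomial R_n^m; assumes m >= 0, n - m even
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k) for k in range((n - m) // 2 + 1))

def zernike_basis(rho, phi, n_max):
    """Design matrix: one column per normalized Zernike function up to order n_max."""
    cols = []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:
                continue  # R_n^m vanishes for odd n - m
            N = sqrt(2 * (n + 1) / (1 + (m == 0)))
            R = radial(n, abs(m), rho)
            cols.append(N * R * (np.cos(m * phi) if m >= 0 else np.sin(-m * phi)))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
rho = np.sqrt(rng.uniform(0, 1, 500))   # uniform random sampling of the unit disk
phi = rng.uniform(0, 2 * np.pi, 500)
f = rho ** 2 * np.cos(phi) ** 2         # an example smooth field on the disk

A = zernike_basis(rho, phi, n_max=4)
coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
print(np.abs(A @ coeffs - f).max())     # tiny: f lies in the span of the basis
```

Because this example field is a degree-2 polynomial on the disk, the truncated expansion reproduces it to machine precision; a generic field would be approximated.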
3.2 Zernike convolution
Building upon the above definition of (normalized) Zernike polynomials, we define the key building blocks of Zernike convolutional neural networks. On a manifold domain $X$, the convolutional response of a function $f$ at a point $x \in X$ with respect to a convolutional filter $g$ is defined as follows:
$$(f \star g)(x) = \int_X f(x')\, g_x(x')\, dx' \quad (6)$$
where $f$ and $g$ are functions defined on the manifold, and $g_x$ denotes the convolutional filter centered at $x$.
For a local extrinsic patch extraction on the manifold, we adopt a method similar to masci2015geodesic (). For a local neighborhood of $x$, a geodesic ball $B_r(x)$ of radius $r$ is defined around $x$. Then, given that the radius $r$ is sufficiently small w.r.t. the local convexity radius of the manifold, the geodesic ball is homeomorphic to a topological disk, and thus one can define a bijective mapping from $B_r(x)$ to the unit disk. That is, there always exists a mapping $x' \mapsto (\rho(x'), \varphi(x'))$ in the polar-coordinate convention. This permits the mapping of a function $f$ on $B_r(x)$ to a function on the polar coordinates, as well as the convolutional kernel $g_x$:
$$f(\rho,\varphi) = \sum_k a_k\, \mathcal{Z}_k(\rho,\varphi) \quad (7)$$
$$g_x(\rho,\varphi) = \sum_k b_k\, \mathcal{Z}_k(\rho,\varphi) \quad (8)$$
with the coefficient sets $\{a_k\}$ and $\{b_k\}$, respectively. Under this notion, the convolution operation on the manifold is defined as:
$$(f \star g)(x) = \int_0^{2\pi}\!\!\int_0^1 \bigg(\sum_k a_k\, \mathcal{Z}_k(\rho,\varphi)\bigg)\bigg(\sum_l b_l\, \mathcal{Z}_l(\rho,\varphi)\bigg)\, \rho\, d\rho\, d\varphi \quad (9)$$
Then, due to the orthonormality of the Zernike functions over the unit disk, (9) can be further simplified as:
$$(f \star g)(x) = \sum_k a_k b_k = \mathbf{a} \cdot \mathbf{b} \quad (10)$$
which indicates that the convolution operation defined on the manifold is equivalent to the dot product of the Zernike coefficient vectors $\mathbf{a}$ and $\mathbf{b}$.
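The equivalence of the disk integral (9) and the coefficient dot product (10) can be checked numerically on a quadrature grid over the unit disk (again an illustrative sketch with our own helper names):

```python
import numpy as np
from math import factorial, sqrt

def radial(n, m, rho):
    # Zernike radial polynomial R_n^m; assumes m >= 0, n - m even
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k) for k in range((n - m) // 2 + 1))

def basis(rho, phi, n_max):
    # columns: Zernike functions orthonormal w.r.t. the (1/pi) * rho drho dphi measure
    cols = []
    for n in range(n_max + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:
                continue
            N = sqrt(2 * (n + 1) / (1 + (m == 0)))
            R = radial(n, abs(m), rho)
            cols.append(N * R * (np.cos(m * phi) if m >= 0 else np.sin(-m * phi)))
    return np.stack(cols, axis=1)

# midpoint quadrature grid on the unit disk
Nr, Np = 800, 64
r = (np.arange(Nr) + 0.5) / Nr
p = (np.arange(Np) + 0.5) * 2 * np.pi / Np
R, P = np.meshgrid(r, p, indexing="ij")
w = R.ravel() * (1.0 / Nr) * (2 * np.pi / Np) / np.pi  # weights for the normalized measure

B = basis(R.ravel(), P.ravel(), n_max=3)
rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, B.shape[1]))

lhs = np.sum((B @ a) * (B @ b) * w)  # disk integral of f * g, as in (9)
rhs = a @ b                          # dot product of coefficients, as in (10)
print(abs(lhs - rhs))                # small quadrature error
```

The residual is pure quadrature error; refining the radial grid drives it toward zero.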
Angular operation
In order to overcome the lack of an orientation-consistent global parametrization, we define the Zernike angular pooling operation in a similar spirit to masci2015geodesic (). From the definition of the Zernike decomposition in (7), a Zernike patch after undergoing a rotation by an angle $\theta$ can be represented as:
$$f(\rho, \varphi+\theta) = \sum_{n,m} N_n^m R_n^m(\rho)\Big[a_n^m \cos\big(m(\varphi+\theta)\big) + a_n^{-m}\sin\big(m(\varphi+\theta)\big)\Big] \quad (11)$$
However, it is easy to show from the trigonometric sum and product formulae that
$$\begin{aligned}\cos\big(m(\varphi+\theta)\big) &= \cos(m\theta)\cos(m\varphi) - \sin(m\theta)\sin(m\varphi)\\ \sin\big(m(\varphi+\theta)\big) &= \cos(m\theta)\sin(m\varphi) + \sin(m\theta)\cos(m\varphi)\end{aligned} \quad (12)$$
Therefore, the function after the rotation is merely a rotational transform of the Zernike coefficients $a_n^m$ and $a_n^{-m}$, for all pairs of $(n,m)$:
$$\begin{pmatrix} \hat{a}_n^m \\ \hat{a}_n^{-m} \end{pmatrix} = \begin{pmatrix} \cos(m\theta) & \sin(m\theta) \\ -\sin(m\theta) & \cos(m\theta) \end{pmatrix} \begin{pmatrix} a_n^m \\ a_n^{-m} \end{pmatrix} \quad (13)$$
Thus, in the Zernike convolution setting, the angular pooling operation can be achieved with any desired angle step without losing any mathematical rigor, as opposed to the fixed angular-bin heuristic used in masci2015geodesic ().
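The coefficient rotation in (13) can be verified for a single $(n, \pm m)$ pair with a few lines of NumPy (illustrative code; the names are ours):

```python
import numpy as np

def rotate_coeffs(a_cos, a_sin, m, theta):
    """Coefficients of f(rho, phi + theta) given those of f, for one (n, +-m) pair."""
    c, s = np.cos(m * theta), np.sin(m * theta)
    return c * a_cos + s * a_sin, -s * a_cos + c * a_sin

# check against direct evaluation for m = 2
m, theta = 2, 0.7
a_cos, a_sin = 1.3, -0.4
phi = np.linspace(0, 2 * np.pi, 100)
f_rot = a_cos * np.cos(m * (phi + theta)) + a_sin * np.sin(m * (phi + theta))
b_cos, b_sin = rotate_coeffs(a_cos, a_sin, m, theta)
g = b_cos * np.cos(m * phi) + b_sin * np.sin(m * phi)
print(np.abs(f_rot - g).max())  # ~0: rotating coefficients rotates the function
```

Rotating by theta and then by -theta recovers the original coefficients, as expected of a rotation.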
Patch operator
We introduce a term called the Zernike patch operator $D(x)$:
$$D(x)f = \big(a_1(x),\, a_2(x),\, \dots,\, a_K(x)\big) \quad (14)$$
which describes the operation that derives the set of coefficients $\{a_k\}$ in (7). We can regard $D(x)f$ as an intrinsic 'patch' extracted on the manifold through the aforementioned operation.
By introducing the Zernike patch operator and the Zernike angular rotator, we define the Zernike convolution:
$$(f \star g)(x, \theta) = \sum_{k=1}^{K} \big(R_\theta\, \mathbf{g}\big)_k\, \big(D(x)f\big)_k \quad (15)$$
where $\mathbf{g} = (g_1, \dots, g_K)$ denotes the filter applied on the extracted 'patch' on the manifold, which can be rotated by an arbitrary angle $\theta$ (via the rotation operator $R_\theta$ of (13)) to resolve the issue of angular coordinate ambiguity, and $K$ denotes the finite number of Zernike functions $\mathcal{Z}_k$ used for the decomposition.
3.3 ZerNet
With the convolution operation on manifolds, the Zernike convolution, now well defined, we are ready to extend ConvNets for geometric feature learning on 3D graphical models. We propose a framework for constructing the Zernike convolutional neural network (ZerNet), which extracts geometric features from 3D graphical models and learns the correlations between such features and target properties distributed over the models. ZerNet can take any geometry vectors distributed over the model surface (e.g., the coordinates of mesh vertices) as input, and finally outputs the target properties as a regressed scalar field. As the major building blocks of ZerNet, we introduce the following types of layers:
Zernike-patch extraction
layer is a fixed layer placed before each Zernike convolution layer, which typically takes the initial geometric vectors from the input layer, or the evolved feature vectors from the previous layer, as input. In this layer, the input vectors are partitioned to the positions in each local extrinsic patch (predefined local unit disks distributed over the manifold). Then, by plugging in the precalculated Zernike bases on each local patch, an overdetermined system of linear equations is solved, which outputs the Zernike coefficients as the extracted intrinsic 'patch'.
Zernike convolution
layer takes the extracted intrinsic 'patch' outputted from the Zernike-patch extraction layer as input. By specifying the number of filters $M$ and the number $N_\theta$ of rotations of each individual filter in the range $[0, 2\pi)$, the operation described in (15) is conducted in this layer, which outputs $(M \times N_\theta)$-dimensional vectors as the input of the Angular max-pooling layer.
Angular max-pooling
layer is an internal layer embedded in conjunction with the Zernike convolution layer, which takes the maximum over the $N_\theta$ filter rotations and outputs the evolved geometric feature vectors in an $M$-dimensional space.
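Putting the pieces together, a Zernike convolution layer followed by angular max-pooling can be sketched in NumPy as below. The coefficient matrix, filter bank, index list, and all names are hypothetical stand-ins for illustration, not the paper's Keras implementation:

```python
import numpy as np

# (n, m) per coefficient; m > 0 pairs with (n, -m) under rotation, m == 0 is fixed
def rotation_matrix(index, theta):
    """Block rotation of Zernike coefficients by angle theta, as in (13)."""
    K = len(index)
    R = np.zeros((K, K))
    pos = {nm: i for i, nm in enumerate(index)}
    for i, (n, m) in enumerate(index):
        if m == 0:
            R[i, i] = 1.0
        elif m > 0:
            j = pos[(n, -m)]
            R[i, i] = np.cos(m * theta);  R[i, j] = np.sin(m * theta)
            R[j, i] = -np.sin(m * theta); R[j, j] = np.cos(m * theta)
    return R

index = [(0, 0), (1, 1), (1, -1), (2, 0), (2, 2), (2, -2)]
A = np.random.default_rng(1).normal(size=(50, 6))  # 50 patches x 6 Zernike coeffs
G = np.random.default_rng(2).normal(size=(6, 4))   # filter bank: 6 coeffs x 4 filters

# responses for each sampled rotation angle, then max over rotations (angular pooling)
thetas = np.linspace(0, 2 * np.pi, 16, endpoint=False)
responses = np.stack([(A @ rotation_matrix(index, t).T) @ G for t in thetas])
pooled = responses.max(axis=0)                     # shape (50, 4): one value per filter
```

Each response is a dot product of rotated coefficients with filter coefficients, i.e. the Zernike convolution of (15), and the pooling keeps the best-matching orientation per point.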
Feature-patch normalization
layer is an optional layer that can be embedded into the Zernike-patch extraction layer; it normalizes the partitioned feature vectors distributed in each local patch by subtracting the feature vector located at the patch center. The normalized vectors in each local patch are then used to derive the Zernike coefficients.
Patch linear/regression
layer is used to linearly transform the geometric feature vectors to match a desired output dimension. For regression purposes, each geometric feature vector is regressed to a scalar that matches the target output at the corresponding position. An identical linear transformation is shared across the patches.
For the implementation of ZerNet, we use Keras chollet2015keras () and TensorFlow tensorflow2015whitepaper (). The Conv1D layer in Keras is utilized as our Patch linear/regression layer due to its weight-sharing mechanism. The rest of the aforementioned layers are implemented as customized Keras layers to meet our needs.
3.4 Comparison to other geometric ConvNets
In previous efforts at defining the convolution on manifolds, an interpolating approach based on spatial neighbors was commonly used for patch extraction. By defining a set of weighting functions $w_1(x), \dots, w_J(x)$, the function $f$ at the position $x$ can be interpolated from its neighbors spatially centered at $x$. In this manner, the local patch can be obtained by a mapping from the manifold into some local system of coordinates around $x$:
$$D_j(x)f = \int_X w_j(x, x')\, f(x')\, dx', \qquad j = 1, \dots, J \quad (16)$$
Then a spatial definition of the convolution on the manifold is given by following a template-matching procedure:
$$(f \star g)(x) = \sum_{j=1}^{J} g_j\, D_j(x)f \quad (17)$$
where $g_j$ denote the filter (template) coefficients applied on the extracted patch at position $x$. Geodesic CNN masci2015geodesic (), Anisotropic CNN boscaini2016learning (), and MoNet monti2017geometric (), the pioneering and state-of-the-art methods for defining convolution on manifolds, all adopt such a weight-interpolation mechanism, differing only in how the weighting functions are defined.
Compared to the above methods, instead of interpolating $f$ on a local patch from surrounding neighbors through weighting functions, we introduce a function-decomposition approach using a sequence of orthogonal bases (the Zernike functions). In terms of parametrization, our approach usually comes with lower complexity. The number of parameters required by the charting-based methods equals the number of weighting functions, which essentially has to match the number of discretized samples in a local patch. In our case, the required number of parameters is the number of orthogonal bases chosen for the decomposition, which in practice can be much smaller than the number of discretized samples.
4 Case study  wall stress estimation in aneurysm dataset
As a case study, we construct a ZerNet to learn the correlations between geometric features and the wall stress distribution in cerebral aneurysms. Once training is completed, given the 3D mesh of any individual cerebral aneurysm, our ZerNet model can estimate the magnitude of the wall stress distributed over the mesh surface. Figure 2 shows a schematic diagram of our approach for such scalar-field regression problems.
4.1 Data set
Our data set contains cerebral aneurysm models, each represented as a triangular mesh. For each individual model, the wall stress at each of the mesh vertices is provided. All the mesh models have different mesh topologies. To obtain a consistent reference for setting the radius $r$ of the geodesic ball for local patch extraction (mentioned in Section 3.2) across all models, we normalize all mesh models to have an identical mesh surface area. Empirically, we choose a sufficiently small $r$ such that each extracted local patch covers a small, fixed fraction of the total mesh surface area. To resolve the issue of inconsistent mesh topologies across the models in our data set, we uniformly sample random points on each mesh surface by using a sampling algorithm for arbitrary surfaces in gptoolbox (). The coordinates of each sampled point are obtained by interpolation of the three surrounding face vertices on the original mesh. Using the same interpolating weights, the wall stress at the sampled points can be derived.
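The area-weighted surface sampling with barycentric interpolation described above can be sketched as follows (a minimal NumPy stand-in for the gptoolbox routine; the function and variable names are ours):

```python
import numpy as np

def sample_surface(V, F, S, n, seed=0):
    """Area-weighted random sampling on a triangle mesh.
    V: (v, 3) vertices, F: (t, 3) faces, S: (v,) per-vertex scalar (e.g. wall stress).
    Returns sampled points and the scalar interpolated with the same barycentric weights."""
    rng = np.random.default_rng(seed)
    tri = V[F]                                   # (t, 3, 3) triangle corner coordinates
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    face = rng.choice(len(F), size=n, p=areas / areas.sum())  # faces picked by area
    r1, r2 = rng.uniform(size=(2, n))
    s = np.sqrt(r1)
    w = np.stack([1 - s, s * (1 - r2), s * r2], axis=1)  # uniform barycentric weights
    pts = np.einsum("ni,nij->nj", w, V[F[face]])         # interpolated positions
    vals = np.einsum("ni,ni->n", w, S[F[face]])          # same weights on the scalar
    return pts, vals

# usage on a toy mesh: a unit square split into two triangles
V = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
F = np.array([[0, 1, 2], [0, 2, 3]])
S = V[:, 0].copy()                    # a linear "stress" field equal to x
pts, vals = sample_surface(V, F, S, 1000)
```

Because the toy field is linear, the barycentric interpolation reproduces it exactly at every sampled point, which makes the scheme easy to sanity-check.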
4.2 Input data processing
For the input data, rather than using any sophisticated hand-crafted geometry descriptors, we deliberately use trivial ones to demonstrate ZerNet's value in discovering latent geometric patterns. The XYZ coordinates (probably the most standard representation of any 3D graphical model) of the sampled points of each individual aneurysm model are used as the initial input geometric vectors. For the initial Zernike-patch extraction, after the XYZ-coordinate vectors are partitioned to the positions in each local patch and pass through the standard Feature-patch normalization process, the normalized vectors within each local patch are further transformed by aligning their Z-axis with the normal vector at the patch center, to achieve a position- and orientation-invariant representation of each input aneurysm model. For the Zernike bases defined in each local patch, we used the first normalized Zernike polynomials ($n$ up to 5 in (3)), calculated using the algorithm implemented in fricker2008analyzing ().
4.3 Results
Table 1: Quantitative evaluation of ZerNet compared to ACNN on the validation models (MAPE: mean absolute percentage error; PCC: Pearson correlation coefficient; HR: hit rate).

              ZerNet                              ACNN
Model ID      MAPE     PCC    HR()    HR()        MAPE     PCC    HR()    HR()
TPIa105I      13.13%   0.88   58.55%  85.25%      18.51%   0.71   41.45%  70.33%
TPIa166I      10.75%   0.85   61.47%  90.07%      21.53%   0.64   35.19%  61.47%
TPIa182I      15.43%   0.84   43.62%  75.65%      23.37%   0.75   28.75%  53.59%
TPIa32I       17.45%   0.85   50.02%  78.34%      32.17%   0.68   26.53%  48.14%
TPIa33I       9.25%    0.89   68.42%  88.86%      21.08%   0.58   35.65%  58.38%
We constructed a ZerNet architecture containing three Zernike convolution layers. For further evolving the geometric feature vectors, we appended one more Patch linear layer in conjunction with the last Zernike convolution layer. Finally, a Patch regression layer producing a scalar output was set as the output layer. Among the aneurysm models, we randomly selected five for performing leave-one-out cross-validations: for each of the selected models, we left it out as the test/validation set and used the remaining models in our data set to train the ZerNet. Training was performed using the Adam optimizer DBLP:journals/corr/KingmaB14 () to minimize the mean squared error between the network output and the provided wall stress distribution of each training model.
For the performance evaluation, we compared our method to the current state-of-the-art method, Anisotropic Convolutional Neural Networks (ACNN) boscaini2016anisotropic (). Because the ACNN method requires a consistent triangular mesh topology across models, we used the original mesh of one aneurysm model as a template to register the remaining aneurysm models using a deformable registration algorithm li08global (). For the input of the ACNN, we used the local SHOT descriptor salti2014shot (), as suggested in bronstein2017geometric (). For the architecture of the ACNN, we constructed three anisotropic convolution layers with the same number of filters per layer as in ZerNet; the other layers were set equivalently to those in ZerNet as well. Finally, for each validation model, the predicted wall stress outputted by ZerNet and ACNN was mapped back to every vertex on the original triangular mesh via nearest-neighbor search. The quantitative evaluation of our method compared to ACNN is summarized in Table 1. Figure 3 shows qualitative visualizations of the wall stress distributions estimated by our method and ACNN. We can clearly see that our proposed ZerNet performs well on the aneurysm wall-stress estimation task, significantly outperforming ACNN.
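The evaluation metrics reported in Table 1 can be computed as below (an illustrative sketch: the exact hit-rate error tolerances used in the paper are not given here, so `tol` is a hypothetical parameter):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((y_pred - y_true) / y_true))

def pcc(y_true, y_pred):
    """Pearson correlation coefficient between prediction and ground truth."""
    return np.corrcoef(y_true, y_pred)[0, 1]

def hit_rate(y_true, y_pred, tol):
    """Percentage of points whose relative error is within tol (e.g. tol=0.10)."""
    return 100 * np.mean(np.abs((y_pred - y_true) / y_true) <= tol)
```

All three operate per-vertex on the predicted and reference wall stress fields; note MAPE and the hit rate assume strictly positive reference values, which holds for stress magnitudes.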
5 Conclusion
In this paper, we introduced the new concept of Zernike convolution by generalizing the convolution operation to surface manifolds. Building upon it, we proposed the novel ZerNet architecture, which extends conventional ConvNets to learn geometric features from 3D graphical models. Experiments have shown that ZerNet is indeed capable of capturing the local geometric features correlated to the defined task (wall stress on cerebral aneurysms), showing a promising estimation of the stress distribution trend. No problem-specific assumptions were made in the definition of ZerNet, such that many other problems of a similar kind can benefit from our new method.
To our knowledge, our work is the first to generalize ConvNets for local-feature regression on manifold domains. Unlike the many classification-based problems addressed by previous works, which are challenging for computers but relatively easy for humans perceptually (such as finding shape correspondence, shape retrieval, etc.), our work acts as a paradigm for how "geometric deep learning" can facilitate the solving of perceptually difficult tasks.
References
 Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Largescale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
 Mathieu Andreux, Emanuele Rodola, Mathieu Aubry, and Daniel Cremers. Anisotropic laplace-beltrami operators for shape analysis. In European Conference on Computer Vision, pages 299–312. Springer, 2014.
 Davide Boscaini, Jonathan Masci, Simone Melzi, Michael M Bronstein, Umberto Castellani, and Pierre Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In Computer Graphics Forum, volume 34, pages 13–23. Wiley Online Library, 2015.
 Davide Boscaini, Jonathan Masci, Emanuele Rodolà, and Michael Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189–3197, 2016.
 Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Michael M Bronstein, and Daniel Cremers. Anisotropic diffusion descriptors. In Computer Graphics Forum, volume 35, pages 431–441. Wiley Online Library, 2016.
 Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.
 Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
 François Chollet et al. Keras. https://github.com/keras-team/keras, 2015.
 Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.
 David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pages 2224–2232, 2015.
 Paul Fricker. Analyzing lasik optical data using zernike functions. Matlab Digest, 1, 2008.
 Yotam Hechtlinger, Purvasha Chakravarti, and Jining Qin. A generalization of convolutional neural networks to graph-structured data. arXiv preprint arXiv:1704.08165, 2017.
 Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
 Alec Jacobson et al. gptoolbox: Geometry processing toolbox, 2016. http://github.com/alecjacobson/gptoolbox.
 Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
 Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
 Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Hao Li, Robert W. Sumner, and Mark Pauly. Global correspondence optimization for nonrigid registration of depth scans. Computer Graphics Forum (Proc. SGP’08), 27(5), July 2008.
 Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pages 37–45, 2015.
 Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proc. CVPR, volume 1, page 3, 2017.
 Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014–2023, 2016.
 Samuele Salti, Federico Tombari, and Luigi Di Stefano. Shot: Unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125:251–264, 2014.
 David I Shuman, Benjamin Ricaud, and Pierre Vandergheynst. Vertex-frequency analysis on graphs. Applied and Computational Harmonic Analysis, 40(2):260–291, 2016.
 F. Zernike. Beugungstheorie des schneidenverfahrens und seiner verbesserten form, der phasenkontrastmethode. Physica, 1(7–12):689–704, 1934.