On Universal Equivariant Set Networks


Nimrod Segol & Yaron Lipman
Department of Computer Science and Applied Mathematics
Weizmann Institute of Science
Rehovot, Israel
Abstract

Using deep neural networks that are either invariant or equivariant to permutations in order to learn functions on unordered sets has become prevalent. The most popular, basic models are DeepSets (zaheer2017deep) and PointNet (qi2017pointnet). While known to be universal for approximating invariant functions, DeepSets and PointNet are not known to be universal when approximating equivariant set functions. On the other hand, several recent equivariant set architectures have been proven equivariant universal (sannai2019universal; keriven2019universal); however, these models either use layers that are not permutation equivariant (in the standard sense) and/or use higher-order tensor variables, which are less practical. There is, therefore, a gap in understanding the universality of popular equivariant set models versus theoretical ones.

In this paper we close this gap by proving that: (i) PointNet is not equivariant universal; and (ii) adding a single linear transmission layer makes PointNet universal. We call this architecture PointNetST and argue it is the simplest permutation equivariant universal model known to date. Another consequence is that DeepSets is universal, and that PointNetSeg, a popular point cloud segmentation network (used, e.g., in qi2017pointnet), is also universal.

The key theoretical tool used to prove the above results is an explicit characterization of all permutation equivariant polynomial layers. Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models.

1 Introduction

Many interesting tasks in machine learning can be described by functions that take as input a set, $X=(x_1,\ldots,x_n)^T\in\mathbb{R}^{n\times k}$, and output some per-element features or values, $F(X)\in\mathbb{R}^{n\times l}$. Permutation equivariance is the property required of $F$ so that it is well-defined. Namely, it assures that reshuffling the elements in $X$ and applying $F$ results in the same output, reshuffled in the same manner. For example, if $\sigma\in S_n$ permutes the rows of $X$ then $F(\sigma\cdot X)=\sigma\cdot F(X)$.

Building neural networks that are permutation equivariant by construction has proved extremely useful in practice, where the most popular models, DeepSets (zaheer2017deep) and PointNet (qi2017pointnet), enjoy a small number of parameters, a low memory footprint and computational efficiency, along with high empirical expressiveness. Although both DeepSets and PointNet are known to be invariant universal (i.e., can approximate arbitrary invariant continuous functions) they are not known to be equivariant universal (i.e., can approximate arbitrary equivariant continuous functions).

On the other hand, several researchers have suggested theoretical permutation equivariant models and proved they are equivariant universal. sannai2019universal builds a universal equivariant network by taking $n$ copies of $S_{n-1}$-invariant networks and combining them with a layer that is not permutation equivariant in the standard (above mentioned) sense. keriven2019universal solves a more general problem of building networks that are equivariant universal over arbitrary high-order input tensors (including graphs); their construction, however, uses higher-order tensors as hidden variables, which is of less practical value. yarotsky2018universal proves that neural networks constructed using a finite set of invariant and equivariant polynomial layers are also equivariant universal; however, his network is not explicit (i.e., the polynomials are not characterized for the equivariant case) and is also of less practical interest due to the high-degree polynomial layers.

In this paper we close the gap between the practical and theoretical permutation equivariant constructions and prove:

Theorem 1.
  1. PointNet is not equivariant universal.

  2. Adding a single linear transmission layer (i.e., $X\mapsto\mathbf{1}\mathbf{1}^TX$) to PointNet makes it equivariant universal.

  3. Using ReLU activations, the minimal width required for a universal permutation equivariant network satisfies an explicit bound in terms of $n$, $k_{\mathrm{in}}$ and $k_{\mathrm{out}}$.

This theorem suggests that, arguably, PointNet with the addition of a single linear transmission layer is the simplest universal equivariant network, able to learn arbitrary continuous equivariant functions of sets. An immediate corollary of this theorem is

Corollary 1.

DeepSets and PointNetSeg are universal.

PointNetSeg is a network often used for point cloud segmentation (e.g., in qi2017pointnet). One of the benefits of our result is that it provides a simple characterization of universal equivariant architectures that can be used in the network design process to guarantee universality.

The theoretical tool used for the proof of Theorem 1 is an explicit characterization of the permutation equivariant polynomials over sets of vectors in using power-sum multi-symmetric polynomials. We prove:

Theorem 2.

Let $P:\mathbb{R}^{n\times k}\to\mathbb{R}^{n\times l}$ be a permutation equivariant polynomial map. Then,

$$P(X)=\sum_{|\alpha|\le n} X^{\alpha}\, q_\alpha(X), \qquad\qquad (1)$$

where $X^{\alpha}=(x_1^{\alpha},\ldots,x_n^{\alpha})^T\in\mathbb{R}^n$, $\alpha\in\mathbb{N}^k$ is a multi-index with $|\alpha|\le n$, and $q_\alpha:\mathbb{R}^{n\times k}\to\mathbb{R}^{1\times l}$ have invariant polynomial entries; every such invariant polynomial is a polynomial in the power-sum multi-symmetric polynomials $s_1(X),\ldots,s_m(X)$. On the other hand every polynomial map satisfying Equation 1 is equivariant.

This theorem, which extends a result in Golubitsky2002TheSP to sets of vectors using multivariate polynomials, lends itself to expressing arbitrary equivariant polynomials as a composition of entry-wise continuous functions and a single linear transmission, which in turn facilitates the proof of Theorem 1.

We conclude the paper by numerical experiments validating the theoretical results and testing several permutation equivariant networks for the tasks of set classification and regression.

2 Preliminaries

Equivariant maps.

Vectors are by default column vectors; $\mathbf{0}$, $\mathbf{1}$ are the all zero and all one vectors/tensors; $e_i$ is the $i$-th standard basis vector; $I$ is the identity matrix; all dimensions are inferred from context or mentioned explicitly. We represent a set of vectors in $\mathbb{R}^k$ as a matrix $X\in\mathbb{R}^{n\times k}$ and denote $X=(x_1,\ldots,x_n)^T$, where $x_i\in\mathbb{R}^k$, $i\in[n]=\{1,\ldots,n\}$, are the columns of $X^T$. We denote by $S_n$ the permutation group of $[n]$; its action on $X\in\mathbb{R}^{n\times k}$ is defined by $(\sigma\cdot X)_{ij}=X_{\sigma^{-1}(i)\,j}$, $\sigma\in S_n$. That is, $\sigma$ is reshuffling the rows of $X$. The natural class of maps assigning a value or feature vector to every element in an input set is the class of permutation equivariant maps:

Definition 1.

A map $F:\mathbb{R}^{n\times k}\to\mathbb{R}^{n\times l}$ satisfying $F(\sigma\cdot X)=\sigma\cdot F(X)$ for all $\sigma\in S_n$ and $X\in\mathbb{R}^{n\times k}$ is called permutation equivariant.
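As a concrete illustration of Definition 1, the following minimal numpy sketch (ours, not from the paper) checks that the transmission map $X\mapsto\mathbf{1}\mathbf{1}^TX$ is permutation equivariant, while a map that treats rows differently, such as a cumulative sum over rows, is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
X = rng.normal(size=(n, k))
perm = rng.permutation(n)           # a permutation sigma acting on the rows

def transmission(X):
    # F(X) = 1 1^T X: every row is replaced by the column sums (equivariant).
    return np.ones((X.shape[0], 1)) @ X.sum(axis=0, keepdims=True)

def cumsum_rows(X):
    # G(X)_i = x_1 + ... + x_i depends on the row order, hence not equivariant.
    return np.cumsum(X, axis=0)

print(np.allclose(transmission(X[perm]), transmission(X)[perm]))  # True
print(np.allclose(cumsum_rows(X[perm]), cumsum_rows(X)[perm]))    # False (generically)
```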

Power-sum multi-symmetric polynomials.

For a vector $x\in\mathbb{R}^k$ and a multi-index vector $\alpha=(\alpha_1,\ldots,\alpha_k)\in\mathbb{N}^k$ we define $x^{\alpha}=x_1^{\alpha_1}x_2^{\alpha_2}\cdots x_k^{\alpha_k}$, and $|\alpha|=\sum_{j=1}^k\alpha_j$. Given a vector $y\in\mathbb{R}^n$ the power-sum symmetric polynomials $p_j(y)=\sum_{i=1}^n y_i^j$, with $j\in[n]$, uniquely characterize $y$ up to permuting its entries. In other words, for $y,y'\in\mathbb{R}^n$ we have $y'=\sigma\cdot y$ for some $\sigma\in S_n$ if and only if $p_j(y)=p_j(y')$ for all $j\in[n]$. An equivalent property is that every invariant polynomial $q(y)$ can be expressed as a polynomial in the power-sum symmetric polynomials, i.e., $q(y)=r\big(p_1(y),\ldots,p_n(y)\big)$, see rydh2007minimal Corollary 8.4, Briand04 Theorem 3.

A generalization of the power-sum symmetric polynomials to matrices exists and is called the power-sum multi-symmetric polynomials, defined with a bit of notation abuse: $p_{\alpha}(X)=\sum_{i=1}^n x_i^{\alpha}$, where $\alpha\in\mathbb{N}^k$ is a multi-index satisfying $|\alpha|\le n$. Note that the number of power-sum multi-symmetric polynomials acting on $X\in\mathbb{R}^{n\times k}$ is the number of multi-indices with $1\le|\alpha|\le n$, namely $m=\binom{n+k}{k}-1$. For notational simplicity let $\alpha_1,\ldots,\alpha_m$ be a list of all $\alpha\in\mathbb{N}^k$ with $1\le|\alpha|\le n$. Then we index the collection of power-sum multi-symmetric polynomials as $s_j(X)=\sum_{i=1}^n x_i^{\alpha_j}$, $j\in[m]$.

Similarly to the vector case, the numbers $s_j(X)$, $j\in[m]$, characterize $X$ up to permutation of its rows. That is, $X'=\sigma\cdot X$ for some $\sigma\in S_n$ iff $s_j(X)=s_j(X')$ for all $j\in[m]$. Furthermore, every invariant polynomial $q:\mathbb{R}^{n\times k}\to\mathbb{R}$ can be expressed as a polynomial in the power-sum multi-symmetric polynomials (see (rydh2007minimal) Corollary 8.4), i.e.,

$$q(X)=r\big(s_1(X),\ldots,s_m(X)\big). \qquad\qquad (2)$$

These polynomials were recently used to encode multi-sets in Maron2019provably.
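As a small numpy sketch (ours), the power-sum multi-symmetric polynomials can be computed by enumerating all multi-indices $\alpha$ with $1\le|\alpha|\le n$; permuting the rows of $X$ leaves their values unchanged.

```python
import numpy as np
from itertools import product

def multi_indices(k, n):
    """All multi-indices alpha in N^k with 1 <= |alpha| <= n."""
    return [a for a in product(range(n + 1), repeat=k) if 1 <= sum(a) <= n]

def power_sums(X):
    """Power-sum multi-symmetric polynomials s_j(X) = sum_i x_i^alpha_j."""
    n, k = X.shape
    return np.array([np.prod(X ** np.array(a), axis=1).sum() for a in multi_indices(k, n)])

rng = np.random.default_rng(0)
X = rng.uniform(size=(4, 2))                              # n = 4 points in R^2
perm = rng.permutation(4)

print(np.allclose(power_sums(X), power_sums(X[perm])))    # True: invariant to row permutations
print(len(multi_indices(2, 4)))                           # 14 = C(4+2, 2) - 1 power sums
```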

3 Equivariant multi-symmetric polynomial layers

In this section we develop the main theoretical tool of this paper, namely, a characterization of all permutation equivariant polynomial layers. As far as we know, these layers were not fully characterized before.

Theorem 2 provides an explicit representation of arbitrary permutation equivariant polynomial maps using the basis of power-sum multi-symmetric polynomials, $s_1,\ldots,s_m$. The particular use of power-sum polynomials has the advantage that it can be encoded efficiently using a neural network: as we will show, they can be approximated using a PointNet with a single linear transmission layer. This allows approximating an arbitrary equivariant polynomial map using PointNet with a single linear transmission layer. In contrast, yarotsky2018universal also provides a polynomial characterization of equivariant maps (see Lemma 2.1 and Proposition 2.4 in (yarotsky2018universal)); however, no formula is given for the invariant/equivariant generating set of polynomials, and their efficient implementation/approximation is therefore questionable.

A version of this theorem for vectors instead of matrices (i.e., the case $k=1$) appears in Golubitsky2002TheSP; we extend their proof to matrices, which is the relevant scenario for ML applications as it allows working with sets of vectors.

First, note that it is enough to prove Theorem 2 for $l=1$ and apply it to every column of $P$. Hence, we deal with a vector of polynomials $P=(P_1,\ldots,P_n)^T$ and need to prove it can be expressed as in Equation 1, for invariant polynomials $q_\alpha$.

Given a polynomial $q:\mathbb{R}^{n\times k}\to\mathbb{R}$ and the cyclic permutation $\sigma=(1\,2\,\cdots\,n)$, the following operation, taking a polynomial to a vector of polynomials, is useful in characterizing equivariant polynomial maps:

$$b(q)(X)=\big(q(X),\,q(\sigma\cdot X),\,\ldots,\,q(\sigma^{n-1}\cdot X)\big)^T. \qquad\qquad (3)$$

Theorem 2 will be proved using the following two lemmas:

Lemma 1.

Let $P:\mathbb{R}^{n\times k}\to\mathbb{R}^{n}$ be an equivariant polynomial map. Then, there exists a polynomial $q:\mathbb{R}^{n\times k}\to\mathbb{R}$, invariant to $S_{n-1}$ (permuting the last $n-1$ rows of $X$), so that $P=b(q)$.

Proof.

Equivariance of $P$ means that for all $\sigma\in S_n$ it holds that

$$\sigma\cdot P(X)=P(\sigma\cdot X). \qquad\qquad (4)$$

Choosing an arbitrary permutation fixing the first row, namely a permutation satisfying $\sigma(1)=1$, and observing the first row in Equation 4 we get $P_1(X)=P_1(\sigma\cdot X)$. Since this is true for all such $\sigma$, $q:=P_1$ is $S_{n-1}$ invariant. Next, applying a permutation $\sigma_i$ that places the $i$-th row of $X$ first to Equation 4 and observing the first row again we get $P_i(X)=P_1(\sigma_i\cdot X)=q(\sigma_i\cdot X)$. Using the invariance of $q$ to $S_{n-1}$, the particular choice of $\sigma_i$ is immaterial, and taking $\sigma_i$ to be the appropriate power of the cyclic permutation we get $P=b(q)$. ∎

Lemma 2.

Let $q:\mathbb{R}^{n\times k}\to\mathbb{R}$ be a polynomial invariant to $S_{n-1}$ (permuting the last $n-1$ rows of $X$). Then

$$q(X)=\sum_{|\alpha|\le n} x_1^{\alpha}\, q_\alpha(X), \qquad\qquad (5)$$

where the $q_\alpha$ are invariant.

Proof.

Expanding $q$ with respect to powers of $x_1$ we get

$$q(X)=\sum_{\alpha} x_1^{\alpha}\, r_\alpha(x_2,\ldots,x_n), \qquad\qquad (6)$$

for some polynomials $r_\alpha$. We first claim the $r_\alpha$ are invariant. Indeed, note that if $q$ is invariant, i.e., invariant to permutations of $x_2,\ldots,x_n$, then also its derivatives $\partial_{x_1}^{\alpha}q$ are permutation invariant, for all $\alpha$. Taking the derivative $\partial_{x_1}^{\alpha}$ on both sides of Equation 6 we get that $r_\alpha$ is invariant to permutations of $x_2,\ldots,x_n$.

For brevity denote $Y=(x_2,\ldots,x_n)^T\in\mathbb{R}^{(n-1)\times k}$. Since $r_\alpha$ is invariant it can be expressed as a polynomial in the power-sum multi-symmetric polynomials of $Y$, i.e., $r_\alpha(Y)=t_\alpha\big(s_1(Y),\ldots,s_m(Y)\big)$. Note that $s_j(Y)=s_j(X)-x_1^{\alpha_j}$ and therefore

$$r_\alpha(Y)=t_\alpha\big(s_1(X)-x_1^{\alpha_1},\ldots,s_m(X)-x_1^{\alpha_m}\big).$$

Since $t_\alpha$ is a polynomial, expanding its monomials in $s_j(X)$ and $x_1^{\alpha_j}$ shows $r_\alpha$ can be expressed as $r_\alpha=\sum_{\beta} x_1^{\beta}\,u_{\beta}(X)$, where the $u_{\beta}$ are invariant (as products of the invariant polynomials $s_j(X)$). Plugging this in Equation 6 we get Equation 5, possibly with the sum over some $|\alpha|>n$. It remains to show $|\alpha|$ can be taken to be at most $n$. This is proved in Corollary 5 in Briand04. ∎

Proof.

(Theorem 2) Given an equivariant $P$ as above, use Lemma 1 to write $P=b(q)$ where $q$ is invariant to permuting the last $n-1$ rows of $X$. Use Lemma 2 to write $q(X)=\sum_{|\alpha|\le n} x_1^{\alpha}q_\alpha(X)$, where the $q_\alpha$ are invariant. We get,

$$P(X)=b(q)(X)=\sum_{|\alpha|\le n} q_\alpha(X)\, X^{\alpha}.$$

The converse direction is immediate after noting that the maps $X\mapsto X^{\alpha}$ are equivariant and the $q_\alpha$ are invariant. ∎

4 Universality of set equivariant neural networks

We consider equivariant deep neural networks $F:\mathbb{R}^{n\times k_{\mathrm{in}}}\to\mathbb{R}^{n\times k_{\mathrm{out}}}$,

$$F(X)=L_T\circ\nu\circ L_{T-1}\circ\cdots\circ\nu\circ L_1(X), \qquad\qquad (7)$$

where $L_i:\mathbb{R}^{n\times k_i}\to\mathbb{R}^{n\times k_{i+1}}$ are affine equivariant transformations, and $\nu$ is an entry-wise non-linearity (e.g., ReLU). We define the width of the network to be $\omega=\max_i k_i$; note that this definition is different from the one used for standard MLP, where the width would count all neurons in a layer, see e.g., hanin17. zaheer2017deep proved that affine equivariant $L_i$ are of the form

$$L_i(X)=XA+\frac{1}{n}\mathbf{1}\mathbf{1}^TXB+\mathbf{1}c^T, \qquad\qquad (8)$$

where $A,B\in\mathbb{R}^{k_i\times k_{i+1}}$ and $c\in\mathbb{R}^{k_{i+1}}$ are the layer's trainable parameters; we call the linear transformation $X\mapsto\mathbf{1}\mathbf{1}^TXB$ a linear transmission layer. Equation 7 with the choice of layers as in Equation 8 is the DeepSets architecture (zaheer2017deep). Taking $B=0$ in all layers is the PointNet architecture (qi2017pointnet). Another type of architecture of interest is PointNetSeg, appearing in (qi2017pointnet). Variations of this architecture are used for point cloud segmentation, see e.g., li2018sonet and 3dsemseg_ICCVW17. PointNetSeg uses an invariant version of PointNet (i.e., PointNet composed with an invariant max layer) and concatenates its output as a constant feature to an intermediate layer, which is then fed to another PointNet. Lastly, we will also consider a model we call PointNetST, that is, PointNet with a Single (linear) Transmission layer; in more detail, PointNetST is an equivariant model of the form Equation 7 with layers as in Equation 8 where only a single layer has a non-zero $B$ (see Equation 8). We will prove PointNetST is permutation equivariant universal and therefore arguably the simplest permutation equivariant universal model known to date.
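To make these architectures concrete, here is a minimal PyTorch sketch of our own (assuming the layer form of Equation 8, with the transmission implemented as an average over the set): setting all $B=0$ gives a PointNet, keeping all $B$ gives DeepSets, and keeping a single non-zero $B$ gives PointNetST.

```python
import torch
import torch.nn as nn

class EquivariantLinear(nn.Module):
    """L(X) = X A + (1/n) 1 1^T X B + 1 c^T, applied to each set (Equation 8)."""

    def __init__(self, k_in, k_out, transmission=True):
        super().__init__()
        self.A = nn.Linear(k_in, k_out)                          # X A + 1 c^T
        self.B = nn.Linear(k_in, k_out, bias=False) if transmission else None

    def forward(self, X):                                        # X: (batch, n, k_in)
        out = self.A(X)
        if self.B is not None:
            out = out + self.B(X.mean(dim=1, keepdim=True))      # (1/n) 1 1^T X B
        return out

def make_set_network(k_in, k_out, width=64, depth=4, transmission_layers=()):
    """PointNet: transmission_layers=(); DeepSets: all layers; PointNetST: one layer."""
    dims = [k_in] + [width] * (depth - 1) + [k_out]
    layers = []
    for i in range(depth):
        layers.append(EquivariantLinear(dims[i], dims[i + 1], i in transmission_layers))
        if i < depth - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

pointnet_st = make_set_network(k_in=3, k_out=2, transmission_layers=(1,))
X = torch.rand(8, 100, 3)                  # a batch of 8 sets of 100 points in R^3
print(pointnet_st(X).shape)                # torch.Size([8, 100, 2])
```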

Universality of equivariant deep networks is defined next.

Definition 2.

Permutation equivariant universality (or just equivariant universality, in short) of a model $F$ means that for every permutation equivariant continuous function $H$ defined over the cube $K=[0,1]^{n\times k_{\mathrm{in}}}$, and every $\epsilon>0$, there exists a choice of $T$ (i.e., network depth), $k_i$ (i.e., network width) and the trainable parameters of $F$ so that $\|F(X)-H(X)\|_\infty<\epsilon$ for all $X\in K$.

We prove our main theorem next.

Proof. (Theorem 1) Fact (i), namely that PointNet is not equivariant universal, is a consequence of the following simple lemma:

Lemma 3.

Let $G:\mathbb{R}^{n\times k}\to\mathbb{R}^{n\times k}$ be the equivariant linear function defined by $G(X)=\mathbf{1}\mathbf{1}^TX$. There is no $f:\mathbb{R}^{k}\to\mathbb{R}^{k}$ so that $\|f(x_i)-G(X)_i\|_\infty<\frac{1}{2}$ for all $X\in[0,1]^{n\times k}$ and $i\in[n]$.

Proof.

Assume such $f$ exists. Let $X=0\in\mathbb{R}^{n\times k}$ and let $X'$ be the matrix whose first row is $\mathbf{1}^T$ and whose other rows are $0$. Then, for a zero row $x_i=0$ appearing in both $X$ and $X'$,

$$1=\|G(X')_i-G(X)_i\|_\infty\le\|G(X')_i-f(0)\|_\infty+\|f(0)-G(X)_i\|_\infty<\tfrac{1}{2}+\tfrac{1}{2}=1,$$

reaching a contradiction. ∎

To prove (ii) we first reduce the problem from the class of all continuous equivariant functions to the class of equivariant polynomials. This is justified by the following lemma.

Lemma 4.

Equivariant polynomials are dense in the space of continuous equivariant functions over the cube $[0,1]^{n\times k}$.

Proof.

Take an arbitrary continuous equivariant $H:[0,1]^{n\times k}\to\mathbb{R}^{n\times l}$ and $\epsilon>0$. Consider the function $H_{ij}$, which denotes the $(i,j)$-th output entry of $H$. By the Stone-Weierstrass Theorem there exists a polynomial $p_{ij}$ such that $|H_{ij}(X)-p_{ij}(X)|<\epsilon$ for all $X$ in the cube. Consider the polynomial map $P$ defined by $P(X)_{ij}=p_{ij}(X)$. $P$ is in general not equivariant. To finish the proof we will symmetrize $P$:

$$\Big\|H(X)-\frac{1}{n!}\sum_{\sigma\in S_n}\sigma^{-1}\cdot P(\sigma\cdot X)\Big\|_\infty=\Big\|\frac{1}{n!}\sum_{\sigma\in S_n}\sigma^{-1}\cdot\big(H(\sigma\cdot X)-P(\sigma\cdot X)\big)\Big\|_\infty<\epsilon,$$

where in the first equality we used the fact that $H$ is equivariant. This concludes the proof since $\frac{1}{n!}\sum_{\sigma\in S_n}\sigma^{-1}\cdot P(\sigma\cdot X)$ is an equivariant polynomial map. ∎

Now, according to Theorem 2 an arbitrary equivariant polynomial $P$ can be written as $P(X)=\sum_{|\alpha|\le n}X^{\alpha}q_\alpha(X)$, where the $q_\alpha$ are invariant polynomials. Remember that every invariant polynomial can be expressed as a polynomial in the power-sum multi-symmetric polynomials $s_j(X)=\frac{1}{n}\sum_{i=1}^n x_i^{\alpha_j}$, $j\in[m]$ (we use the normalized version for a bit more simplicity later on). We can therefore write $P$ as a composition of three maps:

$$P=\Psi\circ L\circ\Phi, \qquad\qquad (9)$$

where $\Phi:\mathbb{R}^{n\times k}\to\mathbb{R}^{n\times(k+m)}$ is defined by applying the same map to every row,

$$\Phi(X)_i=\big(x_i^T,\,x_i^{\alpha_1},\ldots,x_i^{\alpha_m}\big);$$

$L:\mathbb{R}^{n\times(k+m)}\to\mathbb{R}^{n\times(k+m)}$ is defined as in Equation 8, with $A$ the identity on the first $k$ coordinates (and zero elsewhere) and $B$ the identity on the last $m$ coordinates (and zero elsewhere), so that the first $k$ coordinates of every row pass through unchanged while the last $m$ coordinates are averaged over the rows. Note that the output of $L\circ\Phi$ is of the form

$$(L\circ\Phi)(X)_i=\big(x_i^T,\,s_1(X),\ldots,s_m(X)\big).$$

Finally, $\Psi:\mathbb{R}^{n\times(k+m)}\to\mathbb{R}^{n\times l}$ is defined by applying the same polynomial map to every row,

$$\Psi\big(x_i^T,\,s_1,\ldots,s_m\big)=\sum_{|\alpha|\le n} x_i^{\alpha}\, r_\alpha(s_1,\ldots,s_m),$$

where the $r_\alpha$ are the polynomials expressing the invariant $q_\alpha$ in terms of the power-sum multi-symmetric polynomials, so that $\Psi\circ L\circ\Phi=P$.
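The following numpy sketch (ours, for illustration) traces this decomposition on a toy example: $\Phi$ appends per-row monomials, the transmission layer replaces them with their averages (the normalized power sums), and $\Psi$ evaluates an equivariant polynomial row-wise; the composition is permutation equivariant.

```python
import numpy as np

alphas = [(1, 0), (0, 1), (1, 1)]                   # a few multi-indices (illustrative)

def phi(X):
    # Per-row features (x_i, x_i^alpha_1, ..., x_i^alpha_m): a PointNet-style map.
    mono = np.stack([np.prod(X ** np.array(a), axis=1) for a in alphas], axis=1)
    return np.concatenate([X, mono], axis=1)

def transmission(Y, k):
    # Keep the first k coordinates, replace the rest by their average over the rows.
    Z = Y.copy()
    Z[:, k:] = Y[:, k:].mean(axis=0, keepdims=True)  # normalized power sums s_j(X)
    return Z

def psi(Z, k):
    # Row-wise polynomial in x_i and the (now global) power sums: an instance of Equation 1.
    x, s = Z[:, :k], Z[0, k:]                        # s is identical in every row
    return (s[2] * x[:, 0] + s[0] * s[1] * x[:, 0] * x[:, 1])[:, None]

rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 2))
P = lambda X: psi(transmission(phi(X), 2), 2)

perm = rng.permutation(5)
print(np.allclose(P(X[perm]), P(X)[perm]))           # True: the composition is equivariant
```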

Figure 1: The construction of the universal network (PointNetST).

The decomposition in Equation 9 of $P$ suggests that replacing $\Phi$ and $\Psi$ with Multi-Layer Perceptrons (MLPs) would lead to a universal permutation equivariant network consisting of PointNet with a single linear transmission layer, namely PointNetST.

The approximating network $F$ will be defined as

$$F=\psi\circ L\circ\phi, \qquad\qquad (10)$$

where $\phi$ and $\psi$ are both of PointNet architecture, namely there exist MLPs $\phi_0$ and $\psi_0$ so that $\phi(X)_i=\phi_0(x_i)$ and $\psi(Y)_i=\psi_0(y_i)$. See Figure 1 for an illustration of $F$.

To build the MLPs $\phi_0,\psi_0$, we will first construct $\psi_0$ to approximate $\Psi_0$, the map $\Psi$ applies to each row; that is, we use the universality of MLPs (see (hornik1991approximation; sonoda2017neural; hanin17)) to construct $\psi_0$ so that $\|\psi_0(y)-\Psi_0(y)\|_\infty<\epsilon/2$ for all $y$ in a sufficiently large compact set. Furthermore, as $\psi_0$ over this compact set is uniformly continuous, let $\delta>0$ be such that if $\|y-y'\|_\infty<\delta$, then $\|\psi_0(y)-\psi_0(y')\|_\infty<\epsilon/2$. Now, we use universality again to construct $\phi_0$ approximating $\Phi_0$, the map $\Phi$ applies to each row; that is, we take $\phi_0$ so that $\|\phi_0(x)-\Phi_0(x)\|_\infty<\delta$ for all $x$ in the cube.

First, $\|\phi(X)-\Phi(X)\|_\infty<\delta$ for all $X$ in the cube and therefore $\|L\circ\phi(X)-L\circ\Phi(X)\|_\infty<\delta$ (the transmission layer averages entries and hence does not increase the entry-wise error). Second, note that if $\|y-y'\|_\infty<\delta$ then $\|\psi_0(y)-\psi_0(y')\|_\infty<\epsilon/2$ and $\|\psi_0(y')-\Psi_0(y')\|_\infty<\epsilon/2$. Therefore by construction of $\phi_0,\psi_0$ we have $\|F(X)-P(X)\|_\infty<\epsilon$.

To prove (iii) we use the result in hanin17 (see Theorem 1 there) bounding the width of an MLP approximating a continuous function $f:[0,1]^{d_{\mathrm{in}}}\to\mathbb{R}^{d_{\mathrm{out}}}$ by $d_{\mathrm{in}}+d_{\mathrm{out}}$. Therefore, the width of the MLP $\phi_0$ is bounded in terms of its input dimension $k_{\mathrm{in}}$ and output dimension $k_{\mathrm{in}}+m$, while the width of the MLP $\psi_0$ is bounded in terms of its input dimension $k_{\mathrm{in}}+m$ and output dimension $k_{\mathrm{out}}$, proving the bound. ∎

Before we prove Corollary 1 let us recall the PointNetSeg model from qi2017pointnet in detail. We write the model as follows: first, a PointNet is applied, i.e., an MLP acting on every row of $X$ separately. Let $Y$ be its per-row output and let $u$ be the vector whose $j$-th coordinate is the maximal value over the $j$-th column of $Y$ (an invariant, max-pooled global feature). Then a second PointNet is applied to the rows of $Y$, each concatenated with the constant global feature $u$.
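A minimal PyTorch sketch of this PointNetSeg pattern (ours; the layer sizes and depths are illustrative, not those of qi2017pointnet):

```python
import torch
import torch.nn as nn

class PointNetSeg(nn.Module):
    def __init__(self, k_in, k_out, width=64):
        super().__init__()
        # First per-point MLP, shared across points.
        self.local = nn.Sequential(nn.Linear(k_in, width), nn.ReLU(),
                                   nn.Linear(width, width), nn.ReLU())
        # Second per-point MLP, applied to [per-point feature, global feature].
        self.head = nn.Sequential(nn.Linear(2 * width, width), nn.ReLU(),
                                  nn.Linear(width, k_out))

    def forward(self, X):                        # X: (batch, n, k_in)
        Y = self.local(X)                        # per-point features, (batch, n, width)
        g = Y.max(dim=1, keepdim=True).values    # invariant max-pooled feature, (batch, 1, width)
        g = g.expand(-1, X.shape[1], -1)         # concatenated as a constant per-point feature
        return self.head(torch.cat([Y, g], dim=-1))

model = PointNetSeg(k_in=3, k_out=2)
print(model(torch.rand(4, 128, 3)).shape)        # torch.Size([4, 128, 2])
```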

We can now prove Corollary 1.

Proof.

(Corollary 1)

The fact that the DeepSets model is equivariant universal is immediate. Indeed, the PointNetST model can be obtained from the DeepSets model by setting $B=0$ in all but one layer, with $B$ as in Equation 8.

For the PointNetSeg model note that by Theorem 1 in qi2017pointnet every invariant function can be approximated by a network of the form

$$g\big(\max_{i\in[n]} h(x_i)\big),$$

where $g$ and $h$ are multi-layer perceptrons and the max is taken entry-wise. In particular, for every $\epsilon>0$ there exists such a network approximating the power-sum multi-symmetric polynomials $s_1(X),\ldots,s_m(X)$ to within $\epsilon$ for every $X$ in the cube. The rest of the proof follows closely the proof of Theorem 1. ∎

5 Experiments

We conducted experiments in order to validate our theoretical observations. We compared the results of several equivariant models, as well as a baseline (full) MLP, on three equivariant learning tasks: a classification task (knapsack) and two regression tasks (squared norm and Fiedler eigenvector). For all tasks we compare results of different models: DeepSets, PointNet, PointNetSeg, PointNetST, and PointNetQT. PointNetQT is PointNet with a single quadratic equivariant transmission layer as defined in the appendix. In all experiments we used a network of the form Equation 7 with fixed depth and varying width, fixed across all layers.

Equivariant classification.

For classification, we chose to learn the multidimensional knapsack problem, which is known to be NP-hard. We are given a set of $4$-vectors, represented by $X\in\mathbb{R}^{n\times 4}$, and our goal is to learn the equivariant classification function $y^*(X)\in\{0,1\}^n$ defined by the following optimization problem:

$$y^*(X)\in\arg\max_{y\in\{0,1\}^n}\ \sum_{i=1}^n X_{i1}\,y_i \qquad \text{s.t.}\quad \sum_{i=1}^n X_{ij}\,y_i\le w_j,\ \ j=2,3,4,$$

where the $w_j$ are fixed budgets.

Intuitively, given a set of vectors $x_i\in\mathbb{R}^4$, $i\in[n]$, where each row of $X$ represents an element in a set, our goal is to find a subset maximizing the total value while satisfying the budget constraints. The first column of $X$ defines the value of each element, and the three other columns the costs.

Equivariant regression.

The first equivariant function we considered for regression is the squared-norm function $X\mapsto\mathbf{1}\|X\|^2$. hanin17 showed this function cannot be approximated by MLPs of small width. We drew training and test examples with entries i.i.d. from a fixed distribution.

The second equivariant function we considered is defined on point clouds in $\mathbb{R}^3$. For each point cloud we computed a graph by connecting every point to its nearest neighbors. We then computed the absolute value of the first non-trivial eigenvector of the graph Laplacian. We used the ModelNet dataset (Wu_2015_CVPR), which contains training and test meshes. The point clouds are generated by randomly sampling points from each mesh.
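A numpy sketch (ours) of how such a regression target can be computed, assuming a symmetrized k-nearest-neighbor graph and the unnormalized Laplacian $L=D-W$; the number of neighbors shown is an arbitrary choice, not necessarily the one used in our experiments.

```python
import numpy as np

def fiedler_target(points, num_neighbors=10):
    """|first non-trivial eigenvector| of the graph Laplacian of a kNN graph."""
    n = points.shape[0]
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :num_neighbors]                # nearest-neighbor indices
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), num_neighbors), knn.ravel()] = 1.0
    W = np.maximum(W, W.T)                                         # symmetrize the kNN graph
    L = np.diag(W.sum(axis=1)) - W                                 # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return np.abs(eigvecs[:, 1])                                   # eigenvector of second-smallest eigenvalue

points = np.random.default_rng(0).uniform(size=(512, 3))           # a sampled point cloud
print(fiedler_target(points).shape)                                # (512,)
```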

Figure 2: Classification and regression tasks with permutation equivariant models. All the universal permutation equivariant models perform similarly, while the equivariant non-universal PointNet demonstrates reduced performance consistently; the MLP baseline (with the same number of parameters as the equivariant models) performs poorly.

Figure 2 summarizes train and test accuracy of the 6 models after training (training details in Subsection 5.1) as a function of the network width. We tested 15 equidistant width values.

As can be seen in the graphs, in all three datasets the equivariant universal models (PointNetST, PointNetQT, DeepSets, PointNetSeg) achieved comparable accuracy. PointNet, which is not equivariant universal, consistently achieved inferior performance compared to the universal models, as expected by the theory developed in this paper. The non-equivariant MLP, although universal, used the same width (i.e., the same number of parameters) as the equivariant models and was able to over-fit only on one train set (the quadratic function); its performance on the test sets was inferior by a large margin to that of the equivariant models.

An interesting point is that although the width used in the experiments is much smaller than the bound established by Theorem 1, the universal models are still able to learn the functions we tested on well. This raises the question of the tightness of this bound, which we leave to future work.

Lastly, we see that adding a quadratic transmission layer (PointNetQT) has a marginal effect on model performance, while adding a single linear transmission layer to a PointNet network gives results similar to the DeepSets architecture with a lower memory footprint.

5.1 Implementation details

Knapsack data generation.

We constructed a dataset of training and test examples, each consisting of an $n\times 4$ matrix. To generate $X$, we drew an integer uniformly at random and randomly chose integers up to that value as the first column of $X$. We also randomly chose an integer and then randomly chose integers in that range as the three last columns of $X$. The labels for each input were computed by a standard dynamic programming approach, see Martello:1990:KPA:98124.
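For concreteness, the following sketch (ours) shows how such labels can be computed by exact dynamic programming over the three budgets; the item values, costs and budgets below are small made-up numbers, not the constants used to generate our dataset.

```python
import numpy as np
from itertools import product

def knapsack_labels(values, costs, budgets):
    """0/1 labels of an optimal subset for a 3-constraint knapsack (exact DP).

    values: (n,) item values; costs: (n, 3) integer costs; budgets: three integer budgets.
    Each DP state is the budget consumed so far; we store the best value and a subset achieving it.
    """
    n = len(values)
    states = list(product(*(range(b + 1) for b in budgets)))
    best = {s: 0.0 for s in states}            # best value reachable with consumption s
    chosen = {s: () for s in states}           # item indices achieving best[s]
    for i in range(n):
        new_best, new_chosen = dict(best), dict(chosen)
        for s in states:
            nxt = tuple(s[j] + int(costs[i, j]) for j in range(3))
            if all(nxt[j] <= budgets[j] for j in range(3)):
                cand = best[s] + values[i]
                if cand > new_best[nxt]:
                    new_best[nxt], new_chosen[nxt] = cand, chosen[s] + (i,)
        best, chosen = new_best, new_chosen
    opt_state = max(states, key=lambda s: best[s])
    y = np.zeros(n, dtype=int)
    y[list(chosen[opt_state])] = 1
    return y

values = np.array([6, 5, 4, 3])
costs = np.array([[3, 2, 2], [2, 3, 1], [2, 1, 3], [1, 1, 1]])
print(knapsack_labels(values, costs, budgets=(4, 4, 4)))   # [1 0 0 1], value 9 ({1, 2} ties)
```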

Optimization.

We implemented the experiments in PyTorch paszke2017automatic with the Adam kingma2014adam optimizer for learning. For the classification task we used the cross-entropy loss and trained for 150 epochs with learning rate 0.001, learning rate decay of 0.5 every 100 epochs and batch size 32. For the quadratic-function regression we trained for 150 epochs with learning rate 0.001, learning rate decay of 0.1 every 50 epochs and batch size 64; for the regression to the leading eigenvector we trained for 50 epochs with learning rate 0.001 and batch size 32.
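In PyTorch terms, the classification setup corresponds roughly to the following sketch (ours; the model and data below are stand-ins for the equivariant networks and the knapsack dataset described above):

```python
import torch
import torch.nn as nn

# Stand-in model and random data; in the experiments these are replaced by the
# equivariant set networks and the knapsack dataset described above.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
data = [(torch.rand(32, 50, 4), torch.randint(0, 2, (32, 50))) for _ in range(10)]

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)  # decay 0.5 every 100 epochs
criterion = nn.CrossEntropyLoss()

for epoch in range(150):
    for X, y in data:                                   # batches of size 32
        optimizer.zero_grad()
        loss = criterion(model(X).reshape(-1, 2), y.reshape(-1))
        loss.backward()
        optimizer.step()
    scheduler.step()
```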

References

Appendix A Appendix

One potential application of Theorem 2 is augmenting an equivariant neural network (Equation 7) with equivariant polynomial layers of some maximal degree $d$. This can be done in the following way: look for all multi-indices $\alpha$ and products of power-sum multi-symmetric polynomials whose combined degree is at most $d$. Any such solution gives a basis element of the form $s_{j_1}(X)\cdots s_{j_r}(X)\,X^{\alpha}$.

In the paper we tested PointNetQT, an architecture that adds to PointNet a single quadratic equivariant transmission layer. We opted to use only the quadratic transmission operators: for a matrix $X\in\mathbb{R}^{n\times k}$ these combine the transmission $\mathbf{1}\mathbf{1}^TX$ with quadratic (pointwise-product) terms in $X$, where $\odot$ denotes pointwise multiplication and the mixing matrices are the learnable parameters.
