Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems
Abstract
In many real-world applications, e.g. recommendation systems, certain items appear much more frequently than others. However, standard embedding methods, which form the basis of many ML algorithms, allocate the same dimension to all of the items. This leads to statistical and memory inefficiencies. In this work, we propose mixed dimension embedding layers in which the dimension of a particular embedding vector can depend on the frequency of the item. This approach drastically reduces the memory requirement for the embedding, while maintaining and sometimes improving the ML performance. We show that the proposed mixed dimension layers achieve a higher accuracy, while using 8× fewer parameters, for collaborative filtering on the MovieLens dataset. Also, they improve accuracy by 0.1% using half as many parameters, or maintain baseline accuracy using 16× fewer parameters, for the click-through rate prediction task on the Criteo Kaggle dataset.
Keywords: embedding representations, recommendation systems, collaborative filtering, click-through rate prediction
1 Introduction
It is difficult to overstate the impact of representation learning and embedding-based models in the present AI landscape. Embedding representations power state-of-the-art applications in diverse domains, including computer vision [barz2019hierarchy, vasileva2018learning], natural language processing [chiu2016train, Mikolov2013distributed, liu2015topical, shoeybi2019megatron, akbik2018contextual, peters2018dissecting, liu2019roberta], computational biology [Asgari2015], and recommendation systems [cheng2016wide, Park2018, Wu2019].
There seems to be a fundamental trade-off between the dimension of an embedding representation and the statistical performance (i.e. accuracy or loss) of embedding-based models on a particular learning task. It is well-documented that statistical performance suffers when the embedding dimension is too low [yin2018dimensionality]. Thus, we are interested in the fundamental question: Is it possible to re-architect embedding representations for a more favorable trade-off between the number of parameters and statistical performance?
This challenge is particularly prominent in recommendation systems tasks such as collaborative filtering (CF) and click-through rate (CTR) prediction problems. Recommendation models power some of the most commonplace data-driven services on the internet that benefit users across the globe with personalized experiences on a daily basis [Aggarwal2016, hazelwood2018, he2014practical, Pi2019].
At present, the outsized memory consumption of standard embedding layers in these recommendation systems is a major burden on the memory hierarchy: an embedding layer for a single recommendation engine can consume tens of gigabytes of space [Park2018, Pi2019]. This makes the engineering challenges associated with large-scale recommendation embeddings of particular importance and interest.
In many models, embedding layers are used to map input categorical values into a vector space. This feature mapping outputs a $d$-dimensional vector for each value, and is learned concurrently with the model during supervised training. However, the relationship between data distributions, embedding representations, model architectures, and optimization algorithms is highly complex. This makes it difficult to design efficient quantization and compression algorithms for embedding representations from information-theoretic principles.
Nevertheless, the distribution of accesses for users and items is often heavily skewed in many real-world applications. For instance, for the CF task on the MovieLens dataset (https://grouplens.org/datasets/movielens/), the top 10% of users receive as many queries as the remaining 90% and the top 1% of items receive as many queries as the remaining 99%. To an even greater extent, for the CTR prediction task on the Criteo AI Labs Ad Kaggle dataset (https://www.kaggle.com/c/criteo-display-ad-challenge), a small fraction of the top indices receive as many queries as the remaining 30 million, as summarized in Fig. 1. In network science, this phenomenon is referred to as popularity [cho2012wave, papadopoulos2012popularity], a terminology we adopt here.
We propose mixed dimension embedding layers, a novel architecture for embedding representations. The central thesis behind mixed dimension embedding layers is that the dimension of a particular embedding vector should not remain uniform, but should scale with that feature’s popularity. In particular, we show that popular embedding features are often not allotted sufficient parameters, whereas infrequent embeddings waste them. By architecting mixed dimension embedding layers such that the dimension of an embedding vector scales with its popularity, we can make significant improvements to statistical performance at small parameter budgets.
In order to illustrate that the proposed mixed dimension embedding layers greatly improve parameter-efficiency, we show that mixed dimension layers achieve slightly lower loss, while using 8× fewer parameters, compared to uniform dimension embeddings on the MovieLens dataset [Harper2015]. Also, we show that mixed dimension embeddings improve accuracy by 0.1% using half as many parameters, and maintain the baseline accuracy using 16× fewer parameters, on the Criteo Kaggle dataset [Criteo2014].
We point out that even though we focus on recommendation and event-probability prediction problems, our methods are applicable to representation learning in general. We state our main contributions next.
Summary of Contributions

We propose mixed dimension embedding layers, where the dimension of a particular embedding vector scales with said vector’s popularity.

We provide a simple heuristic scheme for sizing of embedding vectors given a prior distribution or training sample. The dimensions prescribed by our scheme compare favorably to uniform dimensions.

We identify two distinct mechanisms by which mixed dimension layers improve parameter-efficiency and achieve superior statistical performance at a given parameter budget.
2 Background & Related Work
In this work, we focus on explicit CF as well as CTR prediction. In explicit CF, user ratings for particular items are directly observed, and therefore the task can be formally framed as a matrix completion problem [candes2010matrix, candes2009exact, candes2010power, hastie2015matrix]. Embedding-based approaches, such as matrix factorization (MF) [hastie2015matrix, Koren2009, Rendle2019] or neural collaborative filtering (NCF) [Dacrema2019, he2017neural], are among the most popular and efficient solutions to matrix completion. The main alternative is to use a convex relaxation to find the minimum nuclear norm solution. This entails solving a semidefinite program, which is computationally prohibitive at scale [zheng2012efficient], and thus will not scale to real-world applications. Embedding/factorization approaches instead have the drawback of explicitly requiring an embedding dimension, but in practice this can be handled with cross-validation or other hyperparameter tuning techniques. In CTR prediction tasks we predict the event-probability of a click, which can also be viewed as context-based CF with binary ratings. A wide variety of models has been developed for this task over recent years, including but not limited to [cheng2016wide, guo2017deepfm, lian2018xdeepfm, naumov2019deep, zhou2018deepi, zhou2018deep]. These state-of-the-art models share many similar characteristics, and all of them without exception use memory-intensive embedding layers that dwarf the rest of the model.
In modern machine learning, embeddings are often used to represent categorical features. Embedding vectors are mined from data, with the intention that certain semantic relationships between the categorical concepts represented by the vectors are encoded by the spatial or geometric relationships and properties of the vectors [Mikolov2013distributed, naumov2019dimensionality]. Thus, large embeddings are a natural choice for recommendation systems, which require models to understand the relationships between users and items.
Many techniques have been developed to decrease the amount of memory consumed by embedding layers. They can roughly be split into two high-level classes: (i) compression algorithms and (ii) compressed architectures. Compression algorithms usually involve some form of additional processing of the model beyond standard training. They can be performed offline, when they only involve post-training processing, or online, when the compression process is interleaved with or otherwise materially alters the training process. Simple offline compression algorithms include post-training quantization, pruning, or low-rank SVD [Andrews2015, Bhavana2019, Sattigeri, Sun2016]. Model distillation techniques, such as compositional coding [Shu2017] and neural binarization [Tissier2018], are also a form of complex offline compression in which autoencoders are trained to mimic uncompressed, pre-trained embedding layers. Online compression algorithms include quantization-aware training, gradual pruning, and periodic regularization [alvarez2017compression, Elthakeb2019, frankle2018lottery, Naumov2018reg, park2018value]. We note that many of these compression algorithms are not unique to embedding layers, and are widely used in the model compression literature.
On the other hand, we also have compressed architectures, which attempt to architect embedding representations of comparable statistical quality with fewer parameters. Compressed architectures have the advantage of reducing memory requirements not only at inference time, but also at training time. This is the approach followed by hashing-based and tensor factorization methods [attenberg2009collaborative, gao2018cuckoo, karatzoglou2010collaborative, Khrulkov2019, Shi2018], which focus on reducing the number of parameters used in an embedding layer by reusing parameters in various ways. These techniques stand in contrast with our approach, which focuses on non-uniform reduction of the dimensionality of embedding vectors based on embedding popularity. In principle, nothing precludes the compound use of our proposed technique with most other compression algorithms or compressed architectures. This is an interesting direction for future investigation.
Finally, we note that non-uniform and deterministic sampling has been addressed in a line of work in the matrix completion literature [negahban2012restricted], but only in so far as how to correct for popularity so as to improve statistical recovery performance, or to build theoretical guarantees for completion under non-uniform sampling [papadopoulos2012popularity]. As far as we know, we are the first to leverage popularity to actually reduce parameter counts in large-scale embedding layers.
3 Problem and Model Formulation
We pose the problem formulations for CF and CTR tasks. For each task, we also describe the relevant models.
3.1 Collaborative Filtering Problem
Inspired by the apparent inefficiencies of using uniform dimension vectors in the non-uniform sampling regime, we formally state the central formulation of study in the noisy setting, similar to that of [negahban2012restricted].
Let $M \in \mathbb{R}^{n \times m}$, for $n$ users and $m$ items, be an unknown target matrix. Let $\Omega \subseteq [n] \times [m]$ denote a sample of indices. Define $S_{ij} = M_{ij} + \epsilon_{ij}$ for some noise term $\epsilon_{ij}$ with $\mathbb{E}[\epsilon_{ij}] = 0$. Our observation, denoted $S$, is the set of 3-tuples:

$S = \{ (i, j, S_{ij}) : (i, j) \in \Omega \}.$
We treat $\Omega$ as a set-valued random variable. In particular, $\Omega$ samples entries from $[n] \times [m]$ without replacement according to some distribution given by a probability matrix $P$. Like [negahban2012restricted], we assume that the probability matrix $P$ is rank-1: $P = f g^\top$ for some $n$-dimensional and $m$-dimensional probability vectors $f$ and $g$, respectively. In principle, if $f$ and $g$ are known, they may be used directly. In our applications, we estimate them from training data using the empirical frequencies of access $f$ for users and $g$ for items, respectively.
We refer to a collaborative filter as an algorithm that inputs the observation $S$ and outputs an estimate of $M$, denoted $\hat{M}$. The goal of the filter is to minimize the weighted Frobenius loss:

$\| \hat{M} - M \|_P^2 = \sum_{i,j} P_{ij} (\hat{M}_{ij} - M_{ij})^2.$
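To make the objective concrete, the weighted Frobenius loss under a rank-1 sampling distribution can be sketched in a few lines. All names and sizes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 6, 8, 3  # hypothetical user count, item count, and rank
M = rng.normal(size=(n, d)) @ rng.normal(size=(d, m))  # ground-truth matrix

# Rank-1 sampling distribution P = f g^T from user/item access frequencies.
f = rng.random(n); f /= f.sum()
g = rng.random(m); g /= g.sum()
P = np.outer(f, g)

def weighted_frobenius(M_hat, M, P):
    """Weighted Frobenius loss: squared error weighted by access probability."""
    return float(np.sum(P * (M_hat - M) ** 2))

# A perfect estimate has zero loss; a trivial all-zeros estimate does not.
assert weighted_frobenius(M, M, P) == 0.0
assert weighted_frobenius(np.zeros((n, m)), M, P) > 0.0
```

Note that errors on frequently accessed entries dominate the loss, which is precisely why allocating capacity by popularity is attractive.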
3.1.1 Matrix Factorization
In MF we define $\hat{M} = U V^\top$ as solving the following optimization problem

$\min_{U, V} \sum_{(i, j) \in \Omega} \big( S_{ij} - u_i^\top v_j \big)^2$ (1)

where user and item embedding matrices $U \in \mathbb{R}^{n \times d}$ and $V \in \mathbb{R}^{m \times d}$ correspond to a $d$-dimensional embedding layer. When the rank of $M$ is known a priori, it can be used to set the dimension $d$ of the embedding layer; otherwise $d$ is treated as a hyperparameter.
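A minimal sketch of fitting objective (1) by stochastic gradient descent over the observed triples follows. The sizes, learning rate, and sampling density are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 20, 15, 4  # hypothetical sizes and embedding dimension

# Synthesize a rank-d target and observe roughly half of its entries.
U_true = rng.normal(size=(n, d))
V_true = rng.normal(size=(m, d))
obs = [(i, j, float(U_true[i] @ V_true[j]))
       for i in range(n) for j in range(m) if rng.random() < 0.5]

# SGD on (S_ij - u_i^T v_j)^2 for each observed entry.
U = 0.1 * rng.normal(size=(n, d))
V = 0.1 * rng.normal(size=(m, d))
lr = 0.02
for _ in range(500):
    for i, j, s in obs:
        err = s - U[i] @ V[j]
        U[i], V[j] = U[i] + lr * err * V[j], V[j] + lr * err * U[i]

train_mse = float(np.mean([(s - U[i] @ V[j]) ** 2 for i, j, s in obs]))
assert train_mse < 0.1  # the factorization fits the observed entries
```

In practice one would add regularization and early termination, exactly the issues the experiments in Section 5 revisit.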
3.1.2 Neural Collaborative Filtering
Although NCF lacks the nice theoretical guarantees offered by MF, it empirically performs mildly better on real-world datasets [he2017neural]. Since we are primarily concerned with the embedding layer, we adopt the simplest NCF model, where user and item embeddings are concatenated to form the input to a multi-layer perceptron (MLP).
In NCF we define $\hat{M}_{ij} = \phi_\theta([u_i; v_j])$ as solving the following optimization problem

$\min_{U, V, \theta} \sum_{(i, j) \in \Omega} \big( S_{ij} - \phi_\theta([u_i; v_j]) \big)^2$ (2)

where $u_i$ and $v_j$ are user and item embeddings, while $\theta$ denotes the additional parameters of the MLP, denoted by the function $\phi_\theta$.
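One term of objective (2) can be sketched as follows: concatenate a user and an item embedding and pass them through a small MLP. Sizes, activations, and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4  # hypothetical embedding dimension
u, v = rng.normal(size=d), rng.normal(size=d)      # user and item embeddings
W1 = rng.normal(size=(8, 2 * d)); b1 = np.zeros(8)  # MLP parameters theta
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)

def phi(u, v):
    """phi_theta([u; v]): one hidden ReLU layer, scalar rating output."""
    h = np.maximum(0.0, W1 @ np.concatenate([u, v]) + b1)
    return float((W2 @ h + b2)[0])

s = 3.5                         # a hypothetical observed rating
loss = (s - phi(u, v)) ** 2     # one summand of objective (2)
assert np.isfinite(loss) and loss >= 0.0
```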
3.2 ClickThrough Rate Prediction Problem
CTR prediction tasks can be interpreted as event-probability prediction problems or as context-based CF with binary ratings. Compared to canonical CF, these tasks also include a large amount of context which can be used to better predict user and item interaction events. Therefore, this problem can be viewed as restricting the targets to binary values $\{0, 1\}$, while allowing users and items to be represented by multiple features, often expressed through sets of indices (categorical) and floating point values (continuous).
Deep Learning Recommendation Model
Facebook’s state-of-the-art deep learning recommendation model (DLRM) [naumov2019deep] allows for $\kappa$ categorical features, which can represent arbitrary details about the context of an on-click or personalization event. The $i$-th categorical feature can be represented by an index $z_i \in \{1, \ldots, n_i\}$ for $i = 1, \ldots, \kappa$. In addition to categorical features, we also have scalar features, together producing a dense feature vector $x$. Thus, given $x$ and $z_1, \ldots, z_\kappa$, we would like to predict $y \in \{0, 1\}$, which denotes an on-click event in response to a particular personalized context.
In DLRM we define the prediction $\hat{y} = \phi_\theta(x, e_1, \ldots, e_\kappa)$ as solving the following optimization problem

$\min_{E_1, \ldots, E_\kappa, \theta} \sum \big( y - \phi_\theta(x, e_1, \ldots, e_\kappa) \big)^2$ (3)

where $e_i = E_i[z_i]$ for $i = 1, \ldots, \kappa$ are embeddings, while $\theta$ denotes any additional parameters (mostly related to MLPs) in the model $\phi_\theta$. Note that we often use binary cross-entropy instead of MSE loss in practice.
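The prediction path of objective (3) can be sketched as follows: look up one embedding per categorical feature, combine with the dense vector, and map to an event probability. Here $\phi$ is simplified to a pairwise dot-product interaction followed by a sigmoid; table sizes and indices are illustrative assumptions, not the actual DLRM architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
kappa, d = 3, 4  # hypothetical number of categorical features and dimension
tables = [rng.normal(size=(10, d)) for _ in range(kappa)]  # embedding tables E_i
x = rng.normal(size=d)                                     # dense feature vector

def predict(z):
    """z: one index per categorical feature; returns a click probability."""
    vecs = [tables[i][z[i]] for i in range(kappa)] + [x]
    # Pairwise dot-product interactions between all feature vectors.
    score = sum(vi @ vj for a, vi in enumerate(vecs) for vj in vecs[a + 1:])
    return 1.0 / (1.0 + np.exp(-score))                    # sigmoid

p = predict([0, 3, 7])
assert 0.0 < p < 1.0
```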
4 Mixed Dimension Embedding Layer
Let us now define the mixed dimension embedding layer and describe the equipartition as well as popularity-based schemes used to determine its dimensionality. We will also discuss how we can apply it to CF and CTR prediction tasks.
Let a mixed dimension embedding layer $\bar{E}$ consist of $k$ blocks and be defined by $2k - 1$ matrices, so that

$\bar{E} = \big( \bar{E}^{(1)}, \ldots, \bar{E}^{(k)}, P^{(2)}, \ldots, P^{(k)} \big)$ (4)

with $\bar{E}^{(i)} \in \mathbb{R}^{n_i \times d_i}$ and $P^{(i)} \in \mathbb{R}^{d_i \times \bar{d}}$ for $i = 1, \ldots, k$, where $P^{(1)}$ is implicitly defined as the identity (so that $d_1 = \bar{d}$).
Let us assume that the dimensions of these blocks are fixed. Then, forward propagation for a mixed dimension embedding layer takes an index $x$ in the range from $1$ to $n = \sum_{i=1}^{k} n_i$, and produces an embedding vector as defined in Alg. 1. The steps involved in this algorithm are differentiable, therefore we can perform backward propagation through this layer and update the embedding and projection matrices accordingly during training. We note that Alg. 1 may be generalized to support multi-hot lookups, where embedding vectors corresponding to several query indices are fetched and reduced by a differentiable operator, such as add, multiply or concatenation.
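The lookup in the spirit of Alg. 1 can be sketched as follows: locate the block containing a global index, fetch the row, and project it to the base dimension. Block sizes, dimensions, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks = [4, 6, 10]  # rows per block (most popular block first)
dims = [8, 4, 2]       # per-block embedding dimensions
d_base = dims[0]       # every output is projected to the base dimension

# Block embedding matrices and projections; the first projection is identity.
E = [rng.normal(size=(n, d)) for n, d in zip(n_blocks, dims)]
P = [np.eye(d_base)] + [rng.normal(size=(d, d_base)) for d in dims[1:]]
offsets = np.cumsum([0] + n_blocks)  # block offsets over the global index

def md_lookup(x):
    """Map a global index to its block, look up the row, project to d_base."""
    b = int(np.searchsorted(offsets, x, side="right")) - 1
    return E[b][x - offsets[b]] @ P[b]

# Every index, regardless of block, yields a d_base-dimensional vector.
for x in range(sum(n_blocks)):
    assert md_lookup(x).shape == (d_base,)
```

Because every lookup lands in the same $d_base$-dimensional space, downstream layers are unaffected by the mixed dimensions.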
Note that we return projected embeddings for all but the first embedding matrix, and that all embedding vectors therefore share the same base dimension, which we denote $\bar{d}$. Therefore, models based on a mixed dimension embedding layer should be sized with respect to $\bar{d}$. We illustrate the matrix architecture of the mixed dimension embedding layer with two blocks in Fig. 2, where the parameter budget (total area) consumed by uniform and mixed dimension matrices is the same, but allocated differently.
Let us now focus on how to find the block structure in the mixed dimension architecture. This includes the row count as well as the dimension assigned to each block in the mixed dimension embedding layer. We restrict ourselves to the use of popularity information for sizing the mixed dimension embedding layer (i.e. the frequency $f$ of access of a particular feature, assumed here to be mostly consistent between training and test samples). We note that in principle, one could also use a related but distinct notion of importance, which refers to how statistically informative a particular feature is, on average, for the inference of the target variable. Importance could be determined either by domain experts or in a data-driven manner at training time.
4.1 Blocking Scheme for Mixed Dimensions
There is some amount of malleability and choice in the re-architecture of a uniform dimension embedding layer into a mixed dimension layer. With appropriate re-indexing, multiple embedding matrices may be stacked into a single block matrix, or a single embedding matrix may be row-wise partitioned into multiple block matrices. The point of the partitioned blocking scheme is to map the $n$ total embedding rows into $k$ blocks, with block-level row counts $(n_1, \ldots, n_k)$ and an offset vector $t$ with $t_1 = 0$ and $t_{i+1} = t_i + n_i$.
Blocking for Collaborative Filtering
In CF tasks we only have two features, corresponding to users and items, with corresponding embedding matrices $U$ and $V$, respectively. To size the mixed dimension embedding layer we apply mixed dimensions within individual embedding matrices by partitioning them. First, we sort and re-index the rows based on row-wise frequency, so that $f_1 \ge f_2 \ge \cdots \ge f_n$. Then, we partition each embedding matrix into $k$ blocks such that the total popularity (area under the frequency curve) in each block is constant, as shown in Alg. 2. For a given frequency vector the equipartition is unique and simple to compute. In our experiments, we saw that a modest number of blocks $k$ is sufficient to observe the effects induced by mixed dimensions, with diminishing effect beyond that.
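The equipartition step, in the spirit of Alg. 2, can be sketched with a cumulative-frequency scan. The Zipf-like frequency vector and the block count are illustrative assumptions.

```python
import numpy as np

def equipartition(freq, k):
    """freq: row frequencies sorted in decreasing order.
    Returns k+1 block offsets such that each block holds roughly
    an equal share of the total popularity mass."""
    n = len(freq)
    cum = np.cumsum(freq) / np.sum(freq)
    # Block i ends where cumulative popularity first reaches (i+1)/k.
    ends = [min(int(np.searchsorted(cum, (i + 1) / k)) + 1, n)
            for i in range(k)]
    return [0] + ends

# Zipf-like frequencies: few popular rows carry most of the mass.
freq = sorted((1.0 / (r + 1) for r in range(1000)), reverse=True)
offsets = equipartition(freq, 4)
assert offsets[0] == 0 and offsets[-1] == 1000
assert all(a < b for a, b in zip(offsets, offsets[1:]))  # valid partition
```

On such skewed frequencies the first blocks contain very few rows and the last block contains most of them, which is exactly the structure the mixed dimension layer exploits.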
Blocking for CTR Prediction
In CTR prediction tasks, we have several categorical features, with corresponding embedding matrices $E_1, \ldots, E_\kappa$. To size the mixed dimension embedding layer for CTR prediction applications, we apply mixed dimensions across different embedding matrices by stacking them. Therefore, the problem structure defines the number of blocks, and the number of vectors in each original embedding defines the row count of the corresponding block in the mixed dimension embedding layer.
4.2 PopularityBased Mixed Dimensions
We now assume that the number of vectors in each block of the mixed dimension embedding layer is already fixed. Therefore, it only remains to assign the dimensions to completely specify it.
We propose a popularity-based scheme that operates at the block level and that is based on a heuristic: each embedding should be assigned a dimension proportional to some fractional power of its popularity. Note that here we distinguish the block-level probability vector $p$ from the row-wise frequency $f$. Given the block boundaries, we define $p_i$ as the area under the frequency curve over the $i$-th block's interval, normalized by the total area, so that the block-level probability vector $p$ is defined by its elements $p_i$. We formalize the popularity-based scheme in Alg. 3, which assigns block dimensions $d_i \propto p_i^\alpha$, with an extra hyperparameter temperature $\alpha$.
The proposed technique requires knowledge of the probability vector $p$ that governs the feature popularity. When this distribution is unknown, we may easily replace it by the empirical distribution from the data sample (for instance, in CF we can for practical purposes approximate the popularity of the $i$-th user by the number of samples pertaining to that user). Alternatively, we can set the scaling factor in Alg. 3 to approximately constrain the embedding layer sizing to a total parameter budget $B$ (this does not account for integer rounding or for the parameters in the projection matrices $P^{(i)}$, but the projections add only a small number of parameters relative to the total when the dimensions are much smaller than the row counts, which generally holds). Furthermore, to get crisp sizing, one might also elect to round the resulting dimensions, perhaps to the nearest power of two, after applying Alg. 3.
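The sizing rule in the spirit of Alg. 3, with the scaling factor derived from a parameter budget, can be sketched as follows. All block sizes, probabilities, and the budget are illustrative assumptions.

```python
def mixed_dims(n_rows, p, alpha, budget):
    """n_rows: rows per block; p: block-level popularity; alpha: temperature.
    Assigns each block a dimension proportional to p_i^alpha, scaled so the
    total parameter count roughly meets the budget B."""
    scores = [pi ** alpha for pi in p]
    # Choose lambda so that sum_i n_i * lambda * p_i^alpha ~= budget.
    lam = budget / sum(n * s for n, s in zip(n_rows, scores))
    return [max(1, round(lam * s)) for s in scores]

n_rows = [100, 1000, 10000]   # few popular rows, many unpopular ones
p = [0.6, 0.3, 0.1]           # block-level popularity
dims = mixed_dims(n_rows, p, alpha=0.5, budget=200_000)

assert dims[0] >= dims[1] >= dims[2]  # popular blocks get larger dimensions
total = sum(n * d for n, d in zip(n_rows, dims))
assert 0.5 * 200_000 <= total <= 1.5 * 200_000  # near the requested budget
```

Note that $\alpha = 0$ recovers uniform dimensions, while larger $\alpha$ skews capacity toward popular blocks, matching the role of the temperature in the experiments.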
5 Experiments
We proceed to present our experiments validating mixed dimension embedding layers. All implementations described herein are done in the PyTorch framework [paszke2017automatic] using NVIDIA V100 32GB GPUs [NvidiaVolta100]. In total, we run experiments on two datasets with corresponding models summarized in Table 1. We note that we always use the MLP variation of the NCF model.
Dataset  Tasks  # Samples  # Categories  Models Used 

MovieLens  CF  27M  300K  MF, NCF 
Criteo Kaggle  CTR  40M  32M  DLRM 
We explore the trade-off between memory and statistical performance for mixed dimension embeddings, parameterized by varying the temperature $\alpha$. For a given $\alpha$, we vary the budget by varying the base dimension $\bar{d}$. In our setting, we measure memory in terms of the number of 32-bit floating point parameters in the embedding layer. We measure performance in terms of MSE loss for CF tasks and accuracy for CTR prediction tasks. We choose to use loss rather than accuracy for CF because we would like to differentiate predictions by how close they are to the target rating in our results. We do not have to address this issue for the binary targets in the CTR prediction problem and therefore use the more natural accuracy metric for it. In all experiments, we use the Xavier uniform initialization [glorot2010understanding] for all matrices in our models.
The parameters are allocated to embedding layers according to the parameterized rule proposed in Alg. 3. Notice that for models with uniform dimension embedding layers ($\alpha = 0$), the number of parameters directly controls the embedding dimension because the number of vectors per embedding is fixed. On the other hand, for mixed dimension layers ($\alpha > 0$) a fixed parameter budget leads to skewed dimensions: the more $\alpha$ increases, the more skewed the dimension assignments become based on popularity. We illustrate how different choices of $\alpha$ assign embedding dimensions in Fig. 3.
5.1 Matrix Factorization
MF is a canonical algorithm for CF. From the perspective of this work, the pure MF model for CF tasks is important because it offers the most distilled, simplest setting in which we can investigate the effects of mixed dimension embeddings. In our experiments we use the MovieLens dataset [Harper2015]. We train with the Amsgrad optimizer [Reddi2018] for 100 epochs, taking the model that scored the lowest validation loss. We always run 3 replicates per experiment.
In Fig. 3(a), we report learning curves for MF with three different embedding layers, plotting the number of training epochs against the validation loss for each epoch. We plot the learning curve for the uniform dimension that attains the lowest validation loss (under early termination), as well as the learning curve for a smaller uniform dimension as a baseline. Third, we plot the learning curve for mixed dimension embedding layers whose parameter count is equivalent to that of the smaller uniform dimension baseline.
Notice that for the smaller uniform dimension (orange line) the model underfits the data, because the attained loss is significantly higher than the one achieved with other hyperparameters. Also, notice that the model with the larger uniform dimension (blue line) severely overfits after a number of epochs. On the other hand, mixed dimension embeddings (green line) train well and achieve the lowest validation loss among these hyperparameters.
This is evidence that at the smaller dimension the popular embedding vectors underfit, and that at the larger dimension the unpopular embedding vectors overfit. At the same parameter budget, the mixed dimension embeddings use a larger base dimension and result in an embedding layer architecture that can more adequately fit both popular and unpopular embedding vectors.
In Fig. 3(b), we report the test loss for different $\alpha$ at varying total parameter budgets using optimal early termination. We can see that using mixed dimension embedding layers generally improves the memory-performance frontier at each parameter budget. We point out a simple trend illustrated by the uniform dimension (blue line): performance improves with the number of parameters until we reach a critical point, after which performance decreases with increasing parameters. Notice that at all memory budgets there is an $\alpha > 0$ that produces mixed dimensions that are equivalent to or better than uniform dimension embeddings ($\alpha = 0$).
Finally, we point out that the optimal $\alpha$ is dependent on the total memory budget. Thus, at any given budget, one should tune $\alpha$ as a hyperparameter. Ultimately, we are able to achieve approximately 0.02 lower MSE while using approximately 4× fewer parameters by using mixed dimension embeddings.
5.2 Neural Collaborative Filtering
NCF models represent more modern approaches for CF. From the perspective of this work, NCF models are interesting because they add a moderate degree of realism and show how the presence of the nonlinearity in neural layers affects the results.
In our experiments we use NCF with a 3-layer MLP of hidden dimension 128. We train with the Amsgrad optimizer for 100 epochs, taking the model that scored the lowest validation loss. We still use the same dataset and run 3 random seeds for each point in the experiments.
In Fig. 4(a), we report learning curves for the NCF model based on three different embedding layers, plotting the number of training epochs against the validation loss for each epoch. As before, we plot the learning curve for the uniform dimension that attains the lowest validation loss (under early termination), the learning curve for a smaller uniform dimension as a baseline, and the learning curve for mixed dimension embedding layers whose parameter count is equivalent to that of the smaller uniform baseline.
Similarly to the MF setting, the optimally terminated model using uniform embeddings has slightly higher loss than the chosen mixed dimension embeddings. However, unlike in MF, here we see that all three models overfit as training progresses, and thus early termination is essential for all three models.
Under prolonged training, the NCF model based on the larger uniform dimension overfits severely. The mixed dimension NCF model overfits significantly, but not as drastically. The NCF model with the smaller uniform embeddings overfits only slightly. Given that the latter two embedding layers, when identically architected and initialized, do not cause the model to overfit when the embedding vectors are used for MF, we can attribute the overfitting directly to the presence of the MLP layers in the NCF model.
In Fig. 4(b), we report the test loss for different $\alpha$ at varying total parameter budgets using optimal early termination. Overall, the NCF models attain slightly lower test losses than their MF counterparts, generally a small decrease in MSE. Interestingly, the optimally terminated NCF models actually suffered from less overfitting than the MF models did at higher parameter counts. The critical points corresponding to the optimal total parameter count also increased, for each $\alpha$. Yet mixed dimensions still performed best, albeit at a higher budget.
Finally, concerning the comparison between mixed and uniform dimension embeddings, we see similar trends and draw the same conclusions as in the MF setting. Ultimately, we achieve approximately 0.01 lower MSE while using approximately 8× fewer parameters by using mixed dimension embeddings.
5.3 Deep Learning Recommendation Model
CTR prediction tasks can be interpreted as event-probability prediction problems or as context-based CF with binary ratings. From the perspective of this work, using mixed dimension embedding layers on real CTR prediction data with a state-of-the-art deep recommendation model is an important experiment that shows how mixed dimension embedding layers might scale to real-world recommendation engines.
In our experiments we use Facebook's state-of-the-art DLRM [naumov2019deep] and the Criteo Kaggle dataset [Criteo2014]. We train with the Amsgrad optimizer for a single epoch. For each $\alpha$ we increase the embedding layer's parameter budget up to 32GB (the GPU memory limit). Notice that because of this limitation we do not see the overfitting behavior from earlier sections. Also, as usual we perform 3 replicates per experiment. However, for this task we report accuracy and not loss, as discussed earlier in this section.
In Fig. 5(a), we plot learning curves for the DLRM model based on three different embedding layers. We show that mixed dimension embeddings (orange line) produce a learning curve equivalent to that of much larger uniform dimension embeddings (blue line), while using a total parameter count equivalent to that of much smaller uniform dimension embeddings (green line).
In Fig. 5(b), we present test accuracy for DLRM using mixed dimension embedding layers at various $\alpha$ with varying parameter budgets. It is evident that mixed dimension embedding layers improve the memory-performance frontier. In fact, we see a very similar trend compared to the two classical CF settings. Ultimately, we achieve approximately 0.1% higher accuracy while using approximately half as many parameters, and achieve on-par accuracy with 16× fewer parameters, by using mixed dimension embeddings.
To summarize, we identify two distinct mechanisms by which mixed dimension embedding layers improve upon uniform dimension layers in our experiments.
i) Generalization – At training time, embeddings learn to represent categorical features such as users or products. We find that at a uniform dimension, frequently accessed vectors often underfit, whereas infrequently accessed vectors often overfit. Thus, mixed dimension embeddings actually learn in a more parameter-efficient manner.
ii) Allocation – Learning aside, at inference time, for a sufficiently constrained parameter budget, one faces a resource allocation trade-off. Even with an oracle factorization of the target matrix, under an expected distortion metric at a given parameter budget, it is more efficient to allocate more parameters to frequently accessed vectors than to infrequently accessed vectors.
6 Conclusion
We propose mixed dimension embedding layers in which the dimension of a particular embedding vector is based on its popularity. This approach addresses severe inefficiencies in the number of parameters used in uniform dimension embedding layers. It offers a superior trade-off between the number of parameters and the statistical performance of the model. In general, it improves the memory-performance frontier across memory budgets, but is particularly effective at small parameter budgets. For instance, in our experiments we were able to attain the same baseline accuracy with about 8× and 16× smaller models for CF on MovieLens and CTR prediction on Criteo Kaggle datasets, respectively. In the future we would like to investigate the composition of the mixed dimension embeddings technique with other compression algorithms and architectures.
Acknowledgements
The authors would like to thank Shubho Sengupta, Michael Shi, Jongsoo Park, Jonathan Frankle and Misha Smelyanskiy for helpful comments and suggestions about this work.