Learning From Ordered Sets and Applications in Collaborative Ranking


Truyen Tran, Dinh Phung, Svetha Venkatesh
Pattern Recognition and Data Analytics, Deakin University, Australia
Department of Computing, Curtin University, Australia
{truyen.tran,dinh.phung,svetha.venkatesh}@deakin.edu.au
Abstract

Ranking over sets arises when users choose between groups of items. For example, a group may be the set of movies a user deems worthy of the same star rating, or the items bundled in a customized tour package. It turns out that, to model this data type properly, we need to investigate the general combinatorics problem of partitioning a set and ordering the subsets. Here we construct a probabilistic log-linear model over a set of ordered subsets. Inference in this combinatorial space is highly challenging: the space size approaches $N!/(2(\ln 2)^{N+1})$ as $N$ approaches infinity. We propose a split-and-merge Metropolis-Hastings procedure that can explore the state-space efficiently. For discovering hidden aspects in the data, we enrich the model with latent binary variables so that the posteriors can be efficiently evaluated. Finally, we evaluate the proposed model on large-scale collaborative filtering tasks and demonstrate that it is competitive against state-of-the-art methods.

1 Introduction

Rank data has recently generated considerable interest within the machine learning community, as evidenced in ranking labels [7, 21] and ranking data instances [5, 23]. The problem is often cast as generating a list of objects (e.g., labels, documents) arranged in decreasing order of relevance with respect to some query (e.g., input features, keywords). This treatment effectively ignores the grouping property of compatible objects [22]. This phenomenon occurs when some objects are likely to be grouped with certain others. For example, a grocery basket is likely to contain a variety of goods which are complementary for household needs and, at the same time, satisfy weekly budget constraints. Likewise, a set of movies is likely to be given the same quality rating by a particular user. In these situations, it is better to consider ranking groups instead of individual objects. This is beneficial not only when we need to recommend a subset (as in the case of grocery shopping), but also when we just want to produce a ranked list (as in the case of watching movies), because we can better exploit the compatibility among grouped items.

This poses the question of how to group individual objects into subsets given a list of all possible objects. Unlike the situation where the subsets are pre-defined and fixed (e.g., sport teams in a particular season), here we need to explore the space of set partitioning and ordering simultaneously. In the grocery example, we need to partition the stock in the store into baskets and then rank the baskets with respect to their utilities; in the movie rating example, we group movies into the same quality package and then rank these groups according to their given ratings. The situation is somewhat related to multilabel learning, where the goal is to produce a subset of labels out of many for a given input, but it is inherently more complicated: not only do we need to produce all the subsets, we also need to rank them.

This paper introduces a probabilistic model for this type of situation, i.e., we want to learn the statistical patterns by which a set of objects is partitioned and ordered, and to compute the probability of any scheme of partitioning and ordering. In particular, the model imposes a log-linear distribution over the joint events of partitioning and ordering. It turns out, however, that the state-space is prohibitively large: if the space of complete rankings has size $N!$ for $N$ objects, then the space of partitioning a set and ordering the parts approaches $N!/(2(\ln 2)^{N+1})$ in size as $N$ approaches infinity [11, pp. 396–397]. Clearly, the latter grows much faster than the former by an exponential factor of roughly $1.44^{N}$. To manage the exploration of this space, we design a split-and-merge Metropolis-Hastings procedure which iteratively visits all possible ways of partitioning and ordering. The procedure randomly alternates between the split move, where a subset is split into two consecutive parts, and the merge move, where two consecutive subsets are merged. The proposed model is termed the Ordered Sets Model (OSM).

To discover hidden aspects in ordered sets (e.g., latent aspects that capture the taste of a user in his or her movie genres), we further introduce binary latent variables in a fashion similar to that of restricted Boltzmann machines (RBMs) [18]. The posteriors of the hidden units given the visible rank data can be used as a vectorial representation of the data; this can be handy in tasks such as computing distance measures or visualisation. The result is a new model called the Latent OSM.

Finally, we show how the proposed Latent OSM can be applied to collaborative filtering, e.g., when we need to take a user's grouped ratings of seen items as input and produce a ranked list of unseen items for that user. We then demonstrate and evaluate our model on large-scale public datasets. The experiments show that our approach is competitive against several state-of-the-art methods.

The rest of the paper is organised as follows. Section 2 presents the log-linear model over ordered sets (OSM) together with our main contribution, the split-and-merge procedure. Section 3 introduces the Latent OSM, which extends the OSM to incorporate latent variables in the form of a set of binary factors. An application of the proposed Latent OSM to collaborative filtering is described in Section 4. Related work is reviewed in Section 5, followed by conclusions in Section 6.

2 Ordered Set Log-linear Models

  
Figure 1: (a) Ordered Set Model; (b) the split operator; and (c) the merge operator. Panel (c) represents the result of merging the middle two subsets in (b); conversely, (b) can be considered the result of splitting the middle subset of (c). Arrows represent preference orders, not causality or conditioning.

2.1 General Description

We first present an intuitive description of the problem and our solutions in modelling, learning and inference. Fig. 1(a) depicts the problem of grouping items into subsets (represented by a box of circles) and ordering these subsets (represented by arrows which indicate the ordering directions). This looks like a high-order Markov chain in a standard setting, and thus it is tempting to impose a chain-based distribution. However, the difficulty is that the partitioning of the set into subsets is also random, and thus a simple treatment is not applicable. Recently, Truyen et al. [19] described a model in this direction with a careful treatment of the partitioning effect. However, their model does not allow fast inference since it must account for high-order dependencies among the ranked subsets.

Our solution is as follows. To capture the grouping and relative ordering, we impose on each group a subset potential function capturing the relations among compatible elements, and on each pair of subsets an ordering potential function. The distribution over the space of groupings and orderings is defined using a log-linear model, where the product of all potentials accounts for the unnormalised probability. This log-linear parameterisation allows flexible inference in the combinatorial space of all possible groupings and orderings.

In this paper, inference is carried out in an MCMC manner. At each step, we randomly choose a split or a merge operator. The split operator takes a subset at random (e.g., Fig. 1(c)) and uniformly splits it into two smaller subsets. The order between these two smaller subsets is also random, but their relative positions with respect to the other subsets remain unchanged (e.g., Fig. 1(b)). The merge operator is the reverse (e.g., converting Fig. 1(b) into Fig. 1(c)). With an appropriate acceptance probability, this procedure is guaranteed to explore the entire combinatorial space.

Armed with this sampling procedure, learning can be carried out using stochastic gradient techniques [25].

2.2 Problem Description

Given two objects $i$ and $j$, we use the notation $i \succ j$ to denote the event that $i$ is ranked higher than $j$, and $i \sim j$ to denote the event that the two objects belong to the same group. Furthermore, we use $X = \{x_1, x_2, \dots, x_N\}$ to denote a collection of $N$ objects. Assume that $X$ is partitioned into $K$ subsets $X_1, X_2, \dots, X_K$. However, unlike the usual notion of partitioning a set, we further posit an order among these subsets in which members of each subset presumably share the same rank. Therefore, our partitioning process is order-sensitive instead of being exchangeable at the partition level. Specifically, we use the indices $k = 1, 2, \dots, K$ to denote the decreasing order in rank of the subsets. These notations allow us to write the collection of objects as a union of ordered subsets (alternatively, we could have proceeded from a permutation perspective to indicate the ordering of the subsets, but we simplify the notation here for clarity):

$X = X_1 \cup X_2 \cup \cdots \cup X_K$ with $X_1 \succ X_2 \succ \cdots \succ X_K \qquad (1)$

where $X_1, X_2, \dots, X_K$ are non-empty, pairwise disjoint subsets of objects so that $\bigcup_{k=1}^{K} X_k = X$.

As a special case when $K = N$, we obtain an exhaustive ordering among objects wherein each subset has exactly one element and there is no grouping among objects. This special case is equivalent to a complete ranking scenario. To illustrate the complexity of the problem, let us characterise the state-space, or more precisely, the number of all possible ways of partitioning and ordering governed by the above definition. Recall that there are $S(N, K)$ ways to divide a set of $N$ objects into $K$ partitions, where $S(N, K)$ denotes the Stirling numbers of the second kind [20, p. 105]. Therefore, for each pair $(N, K)$, there are $K!\,S(N, K)$ ways to perform the partitioning with ordering. Considering all the possible values of $K$ gives us the size of our model state-space:

$\Phi(N) = \sum_{K=1}^{N} K!\, S(N, K) \qquad (2)$

which is known in combinatorics as the Fubini number [11, pp. 396–397]. This number grows super-exponentially, and it is known that it approaches $N!/(2(\ln 2)^{N+1})$ as $N \to \infty$ [11, pp. 396–397]. Taking the logarithm, we get $\log \Phi(N) \approx \log N! + (N+1)\log(1/\ln 2) - \log 2$. As $N \to \infty$, this clearly grows faster than $\log N!$, which is the log of the size of the space of standard complete permutations.
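To make this growth concrete, the following minimal sketch (an illustration; the function names are ours, not part of the model) computes $\Phi(N)$ via the standard recurrence $\Phi(N) = \sum_{k=1}^{N}\binom{N}{k}\Phi(N-k)$ with $\Phi(0)=1$, obtained by first choosing which $k$ objects form the top-ranked block.

```python
from math import comb, factorial


def fubini(n: int) -> int:
    """Number of ordered set partitions of n objects (ordered Bell / Fubini number)."""
    a = [1] + [0] * n
    for m in range(1, n + 1):
        # choose the k objects of the top-ranked block, then order-partition the rest
        a[m] = sum(comb(m, k) * a[m - k] for k in range(1, m + 1))
    return a[n]


if __name__ == "__main__":
    for n in (5, 10, 15):
        ratio = fubini(n) / factorial(n)          # how much larger than the space of permutations
        print(n, fubini(n), f"{ratio:.1f}x larger than N!")
```

For instance, $\Phi(3) = 13$, already larger than $3! = 6$, and the gap widens exponentially with $N$.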

2.3 Model Specification

Denote by $\phi(X_k) > 0$ a potential function over a single subset $X_k$ (we do not consider empty sets in this paper, but it can be assumed that $\phi(\emptyset) = 1$), and by $\psi(X_k, X_l)$ a potential function over an ordered pair of subsets $(X_k, X_l)$ where $k < l$. Our intention is to use $\phi(X_k)$ to encode the compatibility among all members of $X_k$, and $\psi(X_k, X_l)$ to encode the ordering properties between $X_k$ and $X_l$. Writing $\pi = (X_1 \succ X_2 \succ \cdots \succ X_K)$ for a configuration of partitioning and ordering, we then impose a distribution over the collection of objects as:

$P(\pi) = \frac{1}{Z} \prod_{k=1}^{K} \phi(X_k) \prod_{k<l} \psi(X_k, X_l) \qquad (3)$

where $Z$ is the partition function. We further posit the following factorisation for the potential functions:

$\phi(X_k) = \prod_{i, j \in X_k,\; i < j} \phi_{ij}, \qquad \psi(X_k, X_l) = \prod_{i \in X_k} \prod_{j \in X_l} \psi_{ij} \qquad (4)$

where $\phi_{ij}$ captures the effect of grouping objects $i$ and $j$, and $\psi_{ij}$ captures the relative ordering $i \succ j$ between objects $i$ and $j$. Hereafter, we shall refer to this proposed model as the Ordered Set Model (OSM).
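As an illustration of Eqs. (3)–(4), the sketch below evaluates the unnormalised log-probability of a given ordered partition. It assumes the pairwise log-potentials are stored as dense matrices; these data structures and names are illustrative choices, not part of the model specification.

```python
import numpy as np


def unnorm_log_prob(ordered_subsets, log_phi, log_psi):
    """Unnormalised log-probability of an ordered partition (cf. Eqs. 3-4).
    ordered_subsets: list of lists of object indices, highest-ranked subset first.
    log_phi[i, j]: log grouping potential (assumed symmetric); log_psi[i, j]: log potential for i ranked above j."""
    score = 0.0
    for k, subset in enumerate(ordered_subsets):
        # grouping potentials: each unordered pair within the subset counted once
        for a, i in enumerate(subset):
            for j in subset[a + 1:]:
                score += log_phi[i, j]
        # ordering potentials against every lower-ranked subset
        for lower in ordered_subsets[k + 1:]:
            for i in subset:
                for j in lower:
                    score += log_psi[i, j]
    return score


# toy usage: 4 objects, ranked as {0, 1} > {2} > {3}
rng = np.random.default_rng(0)
log_phi = rng.normal(size=(4, 4))
log_psi = rng.normal(size=(4, 4))
print(unnorm_log_prob([[0, 1], [2], [3]], log_phi, log_psi))
```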

2.4 Split-and-Merge MCMC Inference

In order to evaluate $Z$ we need to sum over all possible configurations $\pi$, whose number is the Fubini number $\Phi(N)$ of the set of $N$ objects (cf. Section 2.2, Eq. 2). We develop a Metropolis-Hastings (MH) procedure for sampling $\pi$. Recall that MH sampling involves a proposal distribution $Q(\pi' \mid \pi)$ that allows drawing a new sample $\pi'$ from the current state $\pi$. The move is then accepted with probability

$A(\pi' \mid \pi) = \min\left\{1,\ \frac{P(\pi')\, Q(\pi \mid \pi')}{P(\pi)\, Q(\pi' \mid \pi)}\right\} \qquad (5)$

To evaluate the likelihood ratio $P(\pi')/P(\pi)$ we use the model specification defined in Eq. (3). We then need to compute the proposal probability ratio $Q(\pi \mid \pi')/Q(\pi' \mid \pi)$. The key intuition is to design a random local move from $\pi$ to $\pi'$ that makes a relatively small change to the current partitioning and ordering. If the change is large, then the rejection rate is high, leading to high cost (typically the computational cost increases with the step size of the local moves). On the other hand, if the change is small, then the random walk will explore the state-space too slowly.

We propose two operators to enable the proposal move: the split operator takes a non-singleton subset $X_k$ and randomly splits it into two sub-subsets $X_{k_1}$ and $X_{k_2}$, where $X_{k_2}$ is inserted right next to $X_{k_1}$; and the merge operator takes two consecutive subsets and merges them. This dual procedure will guarantee exploration of all possible configurations of partitioning and ordering, given enough time (see Figure 1 for an illustration).

2.4.1 Split Operator

Assume that among the $K$ subsets, there are $K_{>1}$ non-singleton subsets from which we randomly select one subset to split, and let this be $X_k$. Since we want the resulting sub-subsets $X_{k_1}$ and $X_{k_2}$ to be non-empty, we first randomly draw two distinct objects from $X_k$ and place one into each sub-subset. Then, for each remaining object, there is an equal chance of going to either $X_{k_1}$ or $X_{k_2}$. Let $m = |X_k|$; the probability of this drawing is $\frac{1}{K_{>1}}\,\frac{1}{m(m-1)}\left(\frac{1}{2}\right)^{m-2}$. Since the probability that these two sub-subsets will be merged back is $\frac{1}{K}$ (after the split there are $K+1$ subsets and hence $K$ consecutive pairs), the proposal probability ratio can be computed as in Eq. (6). Since our potential functions depend only on the relative orders between subsets and between objects in the same subset, the likelihood ratio due to the split operator does not depend on the other subsets; it is given in Eq. (7). This is because the members of $X_{k_1}$ are now ranked higher than those of $X_{k_2}$, whereas they previously shared the same rank.

$\frac{Q(\pi \mid \pi')}{Q(\pi' \mid \pi)} = \frac{1/K}{\frac{1}{K_{>1}}\,\frac{1}{m(m-1)}\left(\frac{1}{2}\right)^{m-2}} = \frac{K_{>1}\, m(m-1)\, 2^{m-2}}{K} \qquad (6)$
$\frac{P(\pi')}{P(\pi)} = \prod_{i \in X_{k_1}} \prod_{j \in X_{k_2}} \frac{\psi_{ij}}{\phi_{ij}} \qquad (7)$

2.4.2 Merge Operator

For $K$ subsets, the probability of merging two consecutive subsets is $\frac{1}{K-1}$, since there are $K-1$ consecutive pairs and each pair can be merged in exactly one way. Let $K'_{>1}$ be the number of non-singleton subsets after the merge, and let $m_1$ and $m_2$ be the sizes of the two merged subsets $X_k$ and $X_{k+1}$, respectively. Let $m = m_1 + m_2$; the probability of recovering the state before the merge (by applying the split operator) is $\frac{1}{K'_{>1}}\,\frac{1}{m(m-1)}\left(\frac{1}{2}\right)^{m-2}$. Consequently, the proposal probability ratio is given in Eq. (8), and the likelihood ratio is the inverse of the split case, as shown in Eq. (9).

$\frac{Q(\pi \mid \pi')}{Q(\pi' \mid \pi)} = \frac{\frac{1}{K'_{>1}}\,\frac{1}{m(m-1)}\left(\frac{1}{2}\right)^{m-2}}{1/(K-1)} = \frac{K-1}{K'_{>1}\, m(m-1)\, 2^{m-2}} \qquad (8)$
$\frac{P(\pi')}{P(\pi)} = \prod_{i \in X_k} \prod_{j \in X_{k+1}} \frac{\phi_{ij}}{\psi_{ij}} \qquad (9)$

Finally, the pseudo-code of the split-and-merge Metropolis-Hastings procedure for the OSM is presented in Algorithm 1.

1. Given an initial state $\pi$.

2. Repeat until convergence

 2a. Draw a random number $u \sim \mathrm{Uniform}[0, 1]$.
 2b. If $u < 0.5$ {Split}
i. Randomly choose a non-singleton subset.
ii. Split it into two sub-subsets and insert one sub-subset right after the other.
iii. Evaluate the acceptance probability $A$ using Eqs. (6,7,5).
iv. Accept the move with probability $A$.
Else {Merge}
i. Randomly choose two consecutive subsets.
ii. Merge them into one, keeping the relative orders with the other subsets unchanged.
iii. Evaluate the acceptance probability $A$ using Eqs. (8,9,5).
iv. Accept the move with probability $A$.
End

End

Algorithm 1 Pseudo-code of the split-and-merge Metropolis-Hastings for OSM.
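The following Python sketch mirrors Algorithm 1. It is an illustration rather than the authors' code: it reuses unnorm_log_prob from the earlier snippet as the model score, and the proposal bookkeeping follows the drawing probabilities described in Sections 2.4.1–2.4.2.

```python
import math
import random


def split_merge_step(state, log_phi, log_psi):
    """One split-and-merge Metropolis-Hastings step.
    state: list of lists of object indices, highest-ranked subset first."""
    K = len(state)
    cur_score = unnorm_log_prob(state, log_phi, log_psi)
    if random.random() < 0.5:                             # --- Split move ---
        big = [k for k, s in enumerate(state) if len(s) > 1]
        if not big:                                       # nothing to split
            return state
        k = random.choice(big)
        pool = state[k][:]
        random.shuffle(pool)
        left, right = [pool[0]], [pool[1]]                # keep both parts non-empty
        for obj in pool[2:]:                              # remaining objects: fair coin
            (left if random.random() < 0.5 else right).append(obj)
        proposal = state[:k] + [left, right] + state[k + 1:]
        m = len(pool)
        # forward: choose a subset (1/|big|), then this drawing (1/(m(m-1)) * (1/2)^(m-2));
        # reverse: merge one of the K consecutive pairs among the K+1 new subsets (1/K)
        log_q_fwd = -math.log(len(big)) - math.log(m * (m - 1)) + (m - 2) * math.log(0.5)
        log_q_rev = -math.log(K)
    else:                                                 # --- Merge move ---
        if K < 2:
            return state
        k = random.randrange(K - 1)
        merged = state[k] + state[k + 1]
        proposal = state[:k] + [merged] + state[k + 2:]
        big_after = sum(1 for s in proposal if len(s) > 1)
        m = len(merged)
        log_q_fwd = -math.log(K - 1)
        log_q_rev = -math.log(big_after) - math.log(m * (m - 1)) + (m - 2) * math.log(0.5)
    new_score = unnorm_log_prob(proposal, log_phi, log_psi)
    log_accept = new_score - cur_score + log_q_rev - log_q_fwd
    if log_accept >= 0 or random.random() < math.exp(log_accept):
        return proposal
    return state
```

Starting from all singletons, e.g. state = [[0], [1], [2], [3]], and iterating split_merge_step yields a Markov chain over ordered partitions of the four objects.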

2.5 Estimating Partition Function

To estimate the normalisation constant $Z$, we employ an efficient procedure called Annealed Importance Sampling (AIS) [12]. More specifically, AIS introduces an inverse temperature $\beta \in [0, 1]$ into the model, that is, $P_\beta(\pi) \propto \left[\prod_{k} \phi(X_k) \prod_{k<l} \psi(X_k, X_l)\right]^{\beta}$.

Let $0 = \beta_0 < \beta_1 < \cdots < \beta_S = 1$ be a (slowly) increasing sequence of inverse temperatures. At $\beta_0$ we have a uniform distribution, and at $\beta_S$ we obtain the desired distribution. At each step $s$, we draw a sample $\pi^{(s)}$ from the distribution $P_{\beta_s}$ (e.g., using the split-and-merge procedure). Let $P^{*}_{\beta}$ be the unnormalised distribution, that is, $P_{\beta}(\pi) = P^{*}_{\beta}(\pi)/Z_{\beta}$. The final weight after the annealing process is computed as

$w = \prod_{s=1}^{S} \frac{P^{*}_{\beta_s}\!\left(\pi^{(s-1)}\right)}{P^{*}_{\beta_{s-1}}\!\left(\pi^{(s-1)}\right)} \qquad (10)$

The above procedure is repeated $R$ times. Finally, the normalisation constant at $\beta_S = 1$ is computed as $Z_1 \approx Z_0\, \frac{1}{R} \sum_{r=1}^{R} w^{(r)}$, where $Z_0 = \Phi(N)$ is the number of configurations of the model state variables (the partition function of the uniform distribution at $\beta_0 = 0$).
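The sketch below shows how such an AIS estimate of $\log Z$ can be assembled. It is illustrative only: it reuses fubini, unnorm_log_prob and split_merge_step from the earlier snippets, and the number of runs, the annealing schedule and the chain initialisation are simplifying assumptions rather than the settings used in the experiments.

```python
import math
import random


def ais_log_z(n_objects, log_phi, log_psi, betas, n_runs=10):
    """Annealed Importance Sampling estimate of log Z (cf. Section 2.5)."""
    log_weights = []
    for _ in range(n_runs):
        # beta_0 = 0 gives a uniform distribution; for simplicity each chain starts from
        # singletons in random order (an exact uniform draw over ordered partitions would
        # be needed for a strictly unbiased estimate).
        state = [[i] for i in random.sample(range(n_objects), n_objects)]
        log_w = 0.0
        for b_prev, b in zip(betas[:-1], betas[1:]):
            score = unnorm_log_prob(state, log_phi, log_psi)
            log_w += (b - b_prev) * score                       # ratio of tempered potentials, Eq. (10)
            state = split_merge_step(state, b * log_phi, b * log_psi)
        log_weights.append(log_w)
    log_z0 = math.log(fubini(n_objects))                        # Z at beta = 0: number of configurations
    top = max(log_weights)                                      # log-sum-exp for numerical stability
    return log_z0 + top + math.log(sum(math.exp(w - top) for w in log_weights) / n_runs)


# usage sketch: betas = [i / 1000 for i in range(1001)]
```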

2.6 Log-linear Parameterisation and Learning

Here we assume that the model is in log-linear form, that is, $\phi_{ij} = \exp\{\theta^{\top} f(i, j)\}$ and $\psi_{ij} = \exp\{\lambda^{\top} g(i, j)\}$, where $f(i, j)$ and $g(i, j)$ are sufficient statistics (or feature functions) and $(\theta, \lambda)$ are free parameters.

Learning by maximising the (log-)likelihood in log-linear models with respect to the free parameters often reduces to computing expectations of the sufficient statistics. For example, the expectation of $g(i, j)$ under the pairwise marginal $P(i \succ j)$ is needed in the gradient of the log-likelihood with respect to $\lambda$. Unfortunately, computing this expectation is inherently hard, and running a full MCMC chain to estimate it is too expensive for practical purposes. Here we follow the stochastic approximation proposed in [25], in which we iteratively update the parameters after very short MCMC chains (e.g., using Algorithm 1).
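For concreteness, here is a sketch of one stochastic-approximation update in the simplest fully parameterised case, where every pair $(i, j)$ owns its own free log-potentials (our simplification; the feature-based parameterisation above would replace the tables by $\theta^{\top} f(i, j)$ and $\lambda^{\top} g(i, j)$). It reuses split_merge_step from the earlier snippet to advance a persistent chain.

```python
import numpy as np


def learning_step(observed, chain_state, log_phi, log_psi, lr=0.01):
    """One gradient-ascent update on the pairwise log-potential tables.
    observed / chain_state: ordered partitions (lists of lists of item indices)."""
    chain_state = split_merge_step(chain_state, log_phi, log_psi)   # very short persistent chain

    def pair_counts(state):
        tie = np.zeros_like(log_phi)
        order = np.zeros_like(log_psi)
        for k, subset in enumerate(state):
            for a, i in enumerate(subset):
                for j in subset[a + 1:]:
                    tie[i, j] += 1.0                 # i and j share a rank
            for lower in state[k + 1:]:
                for i in subset:
                    for j in lower:
                        order[i, j] += 1.0           # i ranked above j
        return tie, order

    tie_data, order_data = pair_counts(observed)
    tie_model, order_model = pair_counts(chain_state)
    log_phi += lr * (tie_data - tie_model)           # data statistics minus model statistics
    log_psi += lr * (order_data - order_model)
    return chain_state
```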

3 Introducing Latent Variables to OSMs

Figure 2: (a) A Semi-Restricted Boltzmann Machine representation of vectorial data: each shaded node represents a visible variable and empty nodes the hidden units. (b) A Latent OSM for representing ordered sets: each box represents a subset of objects.

In this section, we further extend the proposed OSM by introducing latent variables into the model. The latent variables serve multiple purposes. First, in collaborative filtering, each person chooses only a small subset of objects; thus the specific choice of objects and their ranking reflects personal taste, which cannot be discovered by the standard OSM. Second, if we want to measure the distance or similarity between two ordered partitioned sets, e.g., for clustering or visualisation, it may be useful to first transform the data into some vectorial representation.

3.1 Model Specification

Denote by $h = (h_1, h_2, \dots, h_H) \in \{0, 1\}^H$ the hidden units to be used in conjunction with the ordered sets. The idea is to estimate the posterior $P(h_a = 1 \mid \pi)$, the probability that hidden unit $a$ will be activated by the input $\pi$. Thus, the requirement is that the model should allow the evaluation of $P(h_a = 1 \mid \pi)$ efficiently. Borrowing from the Restricted Boltzmann Machine architecture [18, 24], we can extend the model potential function as follows:

$\Phi(\pi, h) = \Psi(\pi) \prod_{a=1}^{H} \Psi_a(\pi, h_a) \qquad (11)$

where $\Psi(\pi) = \prod_{k} \phi(X_k) \prod_{k<l} \psi(X_k, X_l)$ is the OSM potential of Eq. (3), and $\Psi_a(\pi, h_a)$ admits a similar factorisation to $\Psi(\pi)$, i.e., $\Psi_a(\pi, h_a) = \prod_{k} \phi_a(X_k, h_a) \prod_{k<l} \psi_a(X_k, X_l, h_a)$, with

$\phi_a(X_k, h_a) = \prod_{i, j \in X_k,\; i<j} \phi^{a}_{ij}(h_a), \qquad \psi_a(X_k, X_l, h_a) = \prod_{i \in X_k} \prod_{j \in X_l} \psi^{a}_{ij}(h_a) \qquad (12)$

where $\phi^{a}_{ij}(h_a)$ and $\psi^{a}_{ij}(h_a)$ capture the events of a tie and of relative ordering between objects $i$ and $j$ in the presence of the hidden unit $h_a$, respectively.

We then define the model with hidden variables as $P(\pi, h) = \frac{1}{Z} \Phi(\pi, h)$, where $Z = \sum_{\pi', h'} \Phi(\pi', h')$. A graphical representation is given in Figure 2(b). Hereafter, we shall refer to this proposed model as the Latent OSM.

3.2 Inference

The posteriors are indeed efficient to evaluate:

$P(h_a = 1 \mid \pi) = \frac{\Psi_a(\pi, 1)}{\Psi_a(\pi, 0) + \Psi_a(\pi, 1)} \qquad (13)$

Denoting by $\hat{h}_a$ the shorthand for $P(h_a = 1 \mid \pi)$, the vector $\hat{h} = (\hat{h}_1, \hat{h}_2, \dots, \hat{h}_H)$ can then be used as a latent representation of the configuration $\pi$.
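A sketch of evaluating these posteriors is given below. It assumes, as in our earlier table-based illustrations, that each hidden unit $a$ owns pairwise log-potential tables u[a] for ties and v[a] for orderings plus a bias c[a]; these names and the bias term are assumptions of the sketch rather than part of the paper's specification.

```python
import numpy as np


def hidden_posteriors(ordered_subsets, u, v, c):
    """P(h_a = 1 | ordered subsets) for every hidden unit a (cf. Eq. 13).
    u, v: arrays of shape (H, N, N); c: bias vector of shape (H,)."""
    act = c.copy()
    for k, subset in enumerate(ordered_subsets):
        for idx, i in enumerate(subset):
            for j in subset[idx + 1:]:
                act += u[:, i, j]                # evidence from ties within the subset
        for lower in ordered_subsets[k + 1:]:
            for i in subset:
                for j in lower:
                    act += v[:, i, j]            # evidence from orderings against lower subsets
    return 1.0 / (1.0 + np.exp(-act))            # logistic sigmoid, one value per hidden unit
```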

The generation of $\pi$ given $h$ is, however, much more involved, as we need to explore the whole space of subset partitionings and orderings:

$P(\pi \mid h) = \frac{\Phi(\pi, h)}{\sum_{\pi'} \Phi(\pi', h)} \qquad (14)$

For inference, since we have two layers $\pi$ and $h$, we can alternate between them in a Gibbs sampling manner, that is, sampling $h$ from $P(h \mid \pi)$ and then $\pi$ from $P(\pi \mid h)$. Since sampling from $P(h \mid \pi)$ is straightforward, it remains to sample from $P(\pi \mid h)$. Since $P(\pi \mid h)$ has the same factorisation structure into a product of pairwise potentials as $P(\pi)$, we can employ the split-and-merge technique described in the previous section in a similar manner.

To see how, let $\tilde{\phi}_{ij} = \phi_{ij} \prod_{a} \phi^{a}_{ij}(h_a)$ and $\tilde{\psi}_{ij} = \psi_{ij} \prod_{a} \psi^{a}_{ij}(h_a)$. Then, from Eqs. (4, 11, 12), the joint potential $\Phi(\pi, h)$ factorises into products of $\tilde{\phi}_{ij}$ and $\tilde{\psi}_{ij}$ in exactly the same way as $\Psi(\pi)$ factorises into products of $\phi_{ij}$ and $\psi_{ij}$.
Estimating the normalisation constant can be performed using the AIS procedure described earlier (cf. Section 2.5), except that the unnormalised distribution is now $P^{*}_{\beta}(\pi) = \sum_{h} \left[\Phi(\pi, h)\right]^{\beta} = \left[\Psi(\pi)\right]^{\beta} \prod_{a=1}^{H} \left( \left[\Psi_a(\pi, 0)\right]^{\beta} + \left[\Psi_a(\pi, 1)\right]^{\beta} \right)$, which can be computed efficiently for each $\pi$.

For sampling from the Latent OSM, one way is to sample $\pi$ directly from $P(\pi)$ in a Rao-Blackwellised fashion (i.e., by marginalising over $h$ we obtain the unnormalised $P(\pi) \propto \sum_{h} \Phi(\pi, h)$). A more straightforward way is to alternate between $\pi$ and $h$ as usual. Although the former would give lower variance, we implement the latter for simplicity. The rest is similar to the case without hidden variables, and we note that the base partition function should be modified to $Z_0 = 2^{H}\, \Phi(N)$, taking into account the binary hidden variables. A pseudo-code of the split-and-merge algorithm for the Latent OSM is given in Algorithm 2.

1. Given an initial state $\pi$.

2. Repeat until convergence

 2a. Sample $h$ from $P(h \mid \pi)$ using Eq. (13).
 2b. Sample $\pi$ from $P(\pi \mid h)$ using Eq. (14) and Algorithm 1.
End

Algorithm 2 Pseudo-code of the split-and-merge Gibbs/Metropolis-Hastings for Latent OSM.

3.3 Parameter Specification and Learning

Like the OSM, we also assume a log-linear parameterisation. In addition to the potentials shared with the OSM, here we specify hidden-specific potentials of the form $\phi^{a}_{ij}(h_a) = \exp\{h_a\, u^{a}_{ij}\}$ and $\psi^{a}_{ij}(h_a) = \exp\{h_a\, v^{a}_{ij}\}$, where $u^{a}_{ij}$ and $v^{a}_{ij}$ are free parameters. Now $h_a \mathbb{I}[i \sim j]$ and $h_a \mathbb{I}[i \succ j]$ are the new sufficient statistics. As before, we need to estimate the expectations of the sufficient statistics, e.g., the expectation of $h_a \mathbb{I}[i \succ j]$ under the model. Equipped with Algorithm 2, the stochastic gradient trick of Section 2.6 can then be used, that is, parameters are updated after very short chains (with respect to the model distribution $P(\pi, h)$).

4 Application in Collaborative Filtering

In this section, we present one specific application of our Latent OSM in collaborative filtering. Recall that in this application, each user has usually expressed their preferences over a set of items by rating them (e.g., by assigning each item a small number of stars). Since it is cumbersome to rank all the items completely, the user often joins items into groups of similar ratings. As each user often rates only a handful of items out of thousands (or even millions), this creates a sparse ordering of subsets. Our goal is to first discover the latent taste factors for each user from their given ordered subsets, and then use these factors to recommend new items for each individual.

4.1 Rank Reconstruction and Completion

In this application, we limit ourselves to producing a complete ranking over objects instead of a full subset partitioning and ordering. Here we consider two tasks: (i) rank completion, where we want to rank unseen items given a partially ranked set (this is important in recommendation, as we shall see in the experiments); and (ii) rank reconstruction, where we want to reconstruct the complete rank from the posterior vector $\hat{h}$ (this would be useful in data compression settings).

Rank completion.

Assume that an unseen item $j$ might be ranked higher than any seen item $i$. Let us start from a mean-field approximation to the joint distribution over the unseen items and the hidden units.

From mean-field theory, we arrive at Eq. (15), which resembles the factorisation in (11).

(15)

Now, assuming that the observed ordered subsets are sufficiently informative to estimate the hidden units, we make the further approximation of replacing each hidden unit by its posterior $\hat{h}_a$. Finally, due to the factorisation in (12), this reduces to a score for each unseen item $j$ that depends only on $\hat{h}$ and the pairwise potentials involving $j$; this score can be used for the purpose of ranking among the new items.

Rank reconstruction.

The rank reconstruction task can be thought of as estimating $\pi^{*} = \arg\max_{\pi} P(\pi \mid \hat{h})$, where $\hat{h}$ is the posterior vector computed from the original data. Since this maximisation is generally intractable, we may approximate it by treating every item as an unseen item and applying the mean-field technique used in the completion task.

4.2 Model Implementation

To enable fast recommendation, we use a rather simple scheme: each item $i$ is assigned a worth $\mu_i$ which can be used for ranking purposes. Under the Latent OSM, the worth is also associated with a hidden unit, e.g., item $i$ has worth $\mu^{a}_{i}$ in the context of hidden unit $a$. The events of grouping and ordering can then be simplified in terms of these worths.

The grouping potential includes a factor signifying the contribution of item compatibility to the model probability. Basically, it says that if two items are compatible, their worths should be positively correlated. The ordering potential asserts that if there is an ordering, the item with the higher worth should be preferred. This reduces to the tie model of [6] when there are only two items.
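To make the simplification concrete, here is a hedged sketch of one possible instantiation (it is not taken from the paper; the functional forms are assumptions modelled on Davidson's tie model [6]): grouping rewards the geometric mean of the item worths, and ordering rewards the worth of the higher-ranked item.

```python
import math


def log_phi_pair(mu_i, mu_j, log_nu=0.0):
    """Log grouping potential: high when two compatible items have high, similar worth."""
    return log_nu + 0.5 * (mu_i + mu_j)


def log_psi_pair(mu_i, mu_j):
    """Log ordering potential for 'i ranked above j': driven by the worth of item i only."""
    return mu_i


# with exactly two items, normalising exp(log_phi_pair(i, j)), exp(log_psi_pair(i, j)) and
# exp(log_psi_pair(j, i)) recovers Davidson's paired-comparison model with ties.
print(math.exp(log_phi_pair(1.0, 0.5)))
```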

For learning, we parameterise the item worths and the hidden-unit interactions by free parameters. The Latent OSM is trained using stochastic gradient ascent with a few samples per user to approximate the gradient (see Section 3.3). To speed up learning, parameters are updated after every block of users. Figure 3(a) shows the learning progress with a fixed learning rate, using parallel persistent Markov chains, one chain per user [25]. The samples get closer to the observed data as the model is updated, while the acceptance rates of the split-and-merge moves decrease, possibly because the samplers are near the region of attraction. A notable effect is that the split-and-merge dual operators favour subsets of small size, due to the fact that there are far more ways to split a big subset than to merge two subsets back. For the AIS, we follow previous practice in setting the annealing schedule and the number of runs (e.g., see [16]).

For comparison, we implemented existing methods including the Probabilistic Matrix Factorisation (PMF) [15], where the predicted rating is used as the scoring function; the Probabilistic Latent Preference Analysis (pLPA) [10]; ListRank.MF [17]; and the matrix-factored Plackett-Luce model (Plackett-Luce.MF) [19]. For the pLPA we did not use the MM algorithm but resorted to simple gradient ascent for the inner loop of the EM algorithm. We also ran the CoFiRank variants [23] with code provided by the authors (http://cofirank.org). We found that ListRank.MF and Plackett-Luce.MF are very sensitive to initialisation, and good results can be obtained by randomly initialising the user-based parameter matrix with non-negative entries. To create a rank for Plackett-Luce.MF, we order the ratings using quicksort.

Performance is judged based on the correlation between the predicted rank and the ground-truth ratings. Two performance metrics are reported: the Normalised Discounted Cumulative Gain at truncated position $T$ (NDCG@T) [9], and the Expected Reciprocal Rank (ERR) [4]:

$\mathrm{NDCG@T} = \frac{1}{\kappa(T)} \sum_{p=1}^{T} \frac{2^{r_p} - 1}{\log_2(p + 1)}, \qquad \mathrm{ERR} = \sum_{p \ge 1} \frac{1}{p}\, R_p \prod_{q < p} (1 - R_q), \quad R_p = \frac{2^{r_p} - 1}{2^{r_{\max}}}$

where $r_p$ is the relevance judgment of the movie at position $p$, and $\kappa(T)$ is a normalisation constant which ensures that the gain is $1$ if the rank is correct. Both metrics put more emphasis on top-ranked items.
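For reference, the two metrics can be computed as follows (a sketch using the standard definitions from [9] and [4]; the gain and discount conventions are the usual ones, and the maximum grade g_max passed as an argument is an assumption of the sketch).

```python
import math


def ndcg_at(relevances, T):
    """NDCG@T; relevances: graded relevance of items in predicted order (best first)."""
    def dcg(rels):
        return sum((2 ** g - 1) / math.log2(r + 2) for r, g in enumerate(rels[:T]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0


def err(relevances, g_max=5):
    """Expected Reciprocal Rank (Chapelle et al., 2009)."""
    not_stopped = 1.0
    score = 0.0
    for p, g in enumerate(relevances, start=1):
        R = (2 ** g - 1) / (2 ** g_max)          # probability the user is satisfied at position p
        score += not_stopped * R / p
        not_stopped *= (1 - R)
    return score
```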

4.3 Results

Figure 3: Results with MovieLens data. (a) Learning progress over time: Dist(sample, observed) is the portion of pairwise orders being incorrectly sampled by the split-and-merge Markov chains. (b) Rank completion, as measured by NDCG@T. (c) Rank reconstruction, trained on one set of users and tested on a held-out set of users.

We evaluate our proposed model and inference on large-scale collaborative filtering datasets: the MovieLens 10M dataset (http://www.grouplens.org/node/12) and the Netflix challenge dataset (http://www.netflixprize.com). The MovieLens dataset consists of slightly over 10 million half-integer ratings (from 0.5 to 5, in increments of 0.5) applied to roughly 10,000 movies by about 70,000 users. We divide the rating range into segments of equal length, and ratings from the same segment share the same rank. The Netflix dataset has slightly over 100 million ratings applied to about 17,000 movies by nearly half a million users, where ratings are integers on a 5-star ordinal scale.

Data likelihood estimation. Table 1 shows the log-likelihood of test data averaged over users with different numbers of movies per user. Results for the Latent OSM are estimated using the AIS procedure.

Plackett-Luce.MF -14.7 -41.3 -72.6
Latent OSM -9.8 -37.9 -73.4
Table 1: Average log-likelihood per user on test data (MovieLens 10M dataset), after training on the seen movies of each user. Columns correspond to increasing numbers of test movies per user. The Plackett-Luce.MF and the Latent OSM are comparable because both are probabilistic in ranks and can capture latent aspects of the data. The main difference is that Plackett-Luce.MF does not handle groupings or ties.

Rank reconstruction. Given the posterior vector, we ask whether we can reconstruct the original rank of movies for that data instance. For simplicity, we only wish to obtain a complete ranking, since it is very efficient to compute per user. Figure 3(c) indicates that high-quality rank reconstruction (on both training and test data) is possible given enough hidden units. This suggests an interesting way to store and process rank data by using a vectorial representation.

Figure 4: Rank completion quality vs. number of hidden units on (a) MovieLens and (b) Netflix. Note that the PMF is not defined at one of the hidden sizes shown, so we substitute the result from a neighbouring hidden size.

Rank completion. In collaborative filtering settings, we are interested in ranking unseen movies for a given user. To highlight the disparity between user tastes, we remove movies whose quality is inherently good or bad, that is, when there is general agreement among users. More specifically, we compute the movie entropy as $H_m = -\sum_{r} p_{mr} \log p_{mr}$, where $p_{mr}$ is estimated as the proportion of users who rate movie $m$ with $r$ points. We then remove the half of the movies with the lowest entropy. For each dataset, we split the data into a training set and a test set as follows. For each user, we randomly choose a small number of items for training (with several settings of increasing size), and the rest for testing. To ensure that each user has enough test items, we keep only those users with a sufficient number of ratings under each setting.

Figs. 3(b), 4(a) and Table 2 report the results on the MovieLens 10M dataset; Fig. 4(b) and Table 3 show the results for the Netflix dataset. It can be seen that the Latent OSM performs better than its rivals when the training size per user is moderate. For large training sizes, the rating-based method (PMF) seems to work better, possibly because converting ratings into orderings loses too much information in this case, and it is more difficult for the Latent OSM to explore the hyper-exponential state-space.

ERR N@5 ERR N@5 ERR N@5
PMF 0.673 0.603 0.687 0.612 0.717 0.638
pLPA 0.674 0.596 0.684 0.601 0.683 0.595
ListRank.MF 0.683 0.603 0.682 0.601 0.684 0.595
Plackett-Luce.MF 0.663 0.586 0.677 0.591 0.681 0.586
CoFi.Regress 0.675 0.597 0.681 0.598 0.667 0.572
CoFi.Ordinal 0.623 0.530 0.621 0.522 0.622 0.515
CoFi.N@10 0.615 0.522 0.623 0.517 0.602 0.491
Latent OSM 0.690 0.619 0.708 0.632 0.710 0.629
Table 2: Model comparison on the MovieLens data for rank completion; column groups correspond to increasing numbers of training items per user. N@T is shorthand for NDCG@T.
ERR N@1 N@5 N@10 ERR N@1 N@5 N@10
PMF 0.678 0.586 0.607 0.649 0.691 0.601 0.624 0.661
ListRank.MF 0.656 0.553 0.579 0.623 0.658 0.553 0.577 0.617
Latent OSM 0.694 0.611 0.628 0.666 0.714 0.638 0.648 0.680
Table 3: Model comparison on the Netflix data for rank completion; column groups correspond to increasing numbers of training items per user.

5 Related Work

This work is closely related to the emerging concept of preferences over sets in AI [3, 22] and in social choice and utility theories [1]. However, most existing work has focused on representing preferences and computing the optimal set under preference constraints [2]. These goals differ from ours, which are to model a distribution over all possible set orderings and to learn from example orderings. Learning from expressed preferences has been studied intensively in AI and machine learning, but it is often limited to pairwise preferences or complete orderings [5, 23].

On the other hand, there has been very little work on learning from ordered sets [26, 22]. The most recent and closest to ours is the PMOP, which models ordered sets as a locally normalised high-order Markov chain [19]. This contrasts with our setting, which involves a globally normalised log-linear solution. Note that the high-order Markov chain involves all previously ranked subsets, while our OSM involves only pairwise comparisons; hence the former is not a special case of ours. Our additional contribution is that we model the space of partitioning and ordering directly and offer sampling tools to explore the space. This ease of inference is not readily available for the PMOP. Finally, our solution easily leads to the introduction of latent variables, while their approach lacks that capacity.

Our split-and-merge sampling procedure bears some similarity to the one proposed in [8] for mixture assignment. The main difference is that we need to handle the extra orderings between partitions, whereas partitions are assumed to be exchangeable in [8]. This causes a subtle difference in generating proposal moves. Likewise, a similar method is employed in [14] for mapping a set of observations into a set of landmarks, but again, ranking is not considered.

With respect to collaborative ranking, there has been work focusing on producing a set of items instead of just ranking individual ones [13]. This can be considered a special case of the OSM where there are only two subsets (the selected items and the rest).

6 Conclusion and Future Work

We have introduced a latent variable approach to modelling ranked groups. Our main contribution is an efficient split-and-merge MCMC inference procedure that can effectively explore the hyper-exponential state-space. We demonstrate how the proposed model can be useful in collaborative filtering. The empirical results suggest that the proposed model is competitive against state-of-the-art rivals on a number of large-scale collaborative filtering datasets.

References

  • [1] S. Barberà, W. Bossert, and P.K. Pattanaik. Ranking sets of objects. Handbook of Utility Theory: Extensions, 2:893, 2004.
  • [2] M. Binshtok, R.I. Brafman, S.E. Shimony, A. Martin, and C. Boutilier. Computing optimal subsets. In Proceedings of the National Conference on Artificial Intelligence (AAAI), volume 22, page 1231, 2007.
  • [3] R.I. Brafman, C. Domshlak, S.E. Shimony, and Y. Silver. Preferences over sets. In Proceedings of the National Conference on Artificial Intelligence (AAAI), volume 21, page 1101, 2006.
  • [4] O. Chapelle, D. Metlzer, Y. Zhang, and P. Grinspan. Expected reciprocal rank for graded relevance. In CIKM, pages 621–630. ACM, 2009.
  • [5] W.W. Cohen, R.E. Schapire, and Y. Singer. Learning to order things. J Artif Intell Res, 10:243–270, 1999.
  • [6] R.R. Davidson. On extending the Bradley-Terry model to accommodate ties in paired comparison experiments. Journal of the American Statistical Association, 65(329):317–328, 1970.
  • [7] O. Dekel, C. Manning, and Y. Singer. Log-linear models for label ranking. Advances in Neural Information Processing Systems, 16, 2003.
  • [8] S. Jain and R.M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158–182, 2004.
  • [9] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):446, 2002.
  • [10] N.N. Liu, M. Zhao, and Q. Yang. Probabilistic latent preference analysis for collaborative filtering. In CIKM, pages 759–766. ACM, 2009.
  • [11] M. Mureşan. A concrete approach to classical analysis. Springer Verlag, 2008.
  • [12] R.M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
  • [13] R. Price and P.R. Messinger. Optimal recommendation sets: Covering uncertainty over user preferences. In Proceedings of the National Conference on Artificial Intelligence (AAAI), volume 20, page 541, 2005.
  • [14] A. Ranganathan, E. Menegatti, and F. Dellaert. Bayesian inference in the space of topological maps. Robotics, IEEE Transactions on, 22(1):92–107, 2006.
  • [15] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. Advances in neural information processing systems, 20:1257–1264, 2008.
  • [16] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning, pages 872–879. ACM, 2008.
  • [17] Y. Shi, M. Larson, and A. Hanjalic. List-wise learning to rank with matrix factorization for collaborative filtering. In ACM RecSys, pages 269–272. ACM, 2010.
  • [18] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Parallel distributed processing: Explorations in the microstructure of cognition, 1:194–281, 1986.
  • [19] T. Truyen, D.Q Phung, and S. Venkatesh. Probabilistic models over ordered partitions with applications in document ranking and collaborative filtering. In Proc. of SIAM Conference on Data Mining (SDM), Mesa, Arizona, USA, 2011. SIAM.
  • [20] J.H. van Lint and R.M. Wilson. A course in combinatorics. Cambridge University Press, 1992.
  • [21] S. Vembu and T. Gärtner. Label ranking algorithms: A survey. Preference Learning, page 45, 2010.
  • [22] K.L. Wagstaff, Marie desJardins, and E. Eaton. Modelling and learning user preferences over sets. Journal of Experimental & Theoretical Artificial Intelligence, 22(3):237–268, 2010.
  • [23] M. Weimer, A. Karatzoglou, Q. Le, and A. Smola. CoFi-maximum margin matrix factorization for collaborative ranking. Advances in neural information processing systems, 20:1593–1600, 2008.
  • [24] M. Welling, M. Rosen-Zvi, and G. Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in NIPS, volume 17, pages 1481–1488. 2005.
  • [25] L. Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability Theory and Related Fields, 82(4):625–645, 1989.
  • [26] Y. Yue and T. Joachims. Predicting diverse subsets using structural SVMs. In Proceedings of the 25th international conference on Machine learning, pages 1224–1231. ACM, 2008.