UTApoly and UTAsplines: additive value functions with polynomial marginals
Abstract
Additive utility function models are widely used in multiple criteria decision analysis. In such models, a numerical value is associated with each alternative involved in the decision problem. It is computed by aggregating the scores of the alternative on the different criteria of the decision problem. The score of an alternative on a criterion is determined by a marginal value function that evolves monotonically as a function of the performance of the alternative on this criterion. Determining the shape of the marginals is not easy for a decision maker. It is easier for him/her to make statements such as “alternative a is preferred to alternative b”. In order to help the decision maker, UTA disaggregation procedures use linear programming to approximate the marginals by piecewise linear functions based only on such statements. In this paper, we propose to infer polynomials and splines instead of piecewise linear functions for the marginals. To this end, we use semidefinite programming instead of linear programming. We illustrate this new elicitation method and present some experimental results.
keywords:
Multiple criteria decision analysis, UTA method, Additive value function model, Preference learning, Disaggregation, Ordinal regression, Semidefinite programming
1 Introduction
The theory of value functions aims at assigning a number to each alternative in such a way that the decision maker’s preference order on the alternatives is the same as the order on the numbers associated with the alternatives. The number or value associated with an alternative is a monotone function of its evaluations on the various relevant criteria. For preferences satisfying some additional properties (including preferential independence), the value of an alternative can be obtained as the sum of marginal value functions, each depending only on a single criterion (keeneyraiffa1976, Chapter 6).
These functions usually are monotone, i.e., marginal value functions either increase or decrease with the assessment of the alternative on the associated criterion. Many questioning protocols have been proposed aiming to elicit an additive value function (keeneyraiffa1976, ; fishburn67, ) through interactions with the decision maker (DM). These direct elicitation methods are time-consuming and require a substantial cognitive effort from the DM. Therefore, in certain cases, an indirect approach may prove fruitful. The latter consists in learning an additive value model (or a set of such models) from a set of declared or observed preferences. In case we know that the DM prefers alternative a to alternative b for some pairs (a, b), we may infer a model that is compatible with these preferences. Learning approaches have been proposed not only for inferring an additive value function that is used to rank all other alternatives. They have also been used for sorting alternatives in ordered categories (yu1992, ; roybouyssou1993, ; zopounidisdoumpos2002, ). In this model, an alternative is assigned to a category (e.g. “Satisfactory”, “Intermediate”, “Not satisfactory”) whenever its value passes some threshold and does not exceed some other, which are respectively the lower and upper value limits of the alternatives to be assigned to this category.
The UTA method (jaquetlsiskos1982, ) was the original proposal for this purpose. It uses a linear programming formulation to determine piecewise linear marginal value functions that are compatible with the DM’s known preferences. Several variants of this idea for learning a piecewise linear additive value function on the basis of examples of ordered pairs of alternatives are described in jaquetlsiskos2001 (). The variant used for inferring a rule to assign alternatives to ordered categories on the basis of assignment examples is called UTADIS in zopounidisdoumpos99 () (see also zopounidisdoumpos2002 ()). The interested reader is referred to SiskosInErgFigGre05 () for a comprehensive review of UTA methods, their variants and developments.
A problem with these methods is that, often, the information available about the DM’s preferences is far from determining a single additive value function. In general, the set of piecewise linear value functions compatible with the partial knowledge of the DM’s preferences is a polytope in an appropriate space. Therefore the learning methods that have been proposed either select a “representative” value function or they work with all possible value functions and derive robust conclusions, i.e. information on the DM’s preference that does not depend on the particular choice of a value function in the polytope. Among the latter, one may cite UTAGMS (grecoetal2008, ; grecoetal2010, ) and GRIP (figueiraetal2009b, ). This research avenue is known under the name robust ordinal regression methods.
The original approach has to face the issue of defining what is a “representative” or default value function. UTASTAR (jaquetlsiskos1982, ; siskosyanacopoulos1985, ) solves the problem implicitly by returning an “average solution” computed as the mean of “extreme” solutions (this approach is sometimes referred to as “post-optimality analysis” (DoumposEtAl2014, )). Although UTASTAR does not give any formal definition of a representative solution, it returns a solution that tends to lie “in the middle” of the polytope determined by the constraints. The idea of centrality, as a definition of representativeness, has been illustrated with the ACUTA method bousetal2010 (), in which the selected value function corresponds to the analytic center of the polytope, and with another formulation using the Chebyshev center (DoumposEtAl2014, ). On the other hand, KadzinskiEtal2012 () propose a completely different approach to the idea of representativeness. They define five targets and select a representative value function taking into account a prioritization of the targets by the DM in the context of robust ordinal regression methods. The same authors also proposed a method for selecting a representative value function for robust sorting of alternatives in ordered categories (GrecoEtAlReprVFSorting2011, ).
In all the approaches aiming to return a “representative” value function, the marginal value functions are piecewise linear. The choice of such functions is historically motivated by the opportunity of using linear programming solvers (except for ACUTA bousetal2010 ()). Although piecewise linear functions are well-suited for approximating monotone continuous functions, their lack of smoothness (differentiability) may make them seem “unnatural” in some contexts, especially for economists. Abrupt changes in slope at the breakpoints are difficult to explain and justify. Therefore, using smooth functions as marginals is advantageous from an interpretative point of view.
The MIIDAS system siskosetalejor99 () proposes tools to model marginal value functions. Possibly nonlinear (and even nonmonotone) shapes of marginals can be chosen from parameterized families of curves. The value of the parameters is adjusted by using ad hoc techniques such as the midpoint value. In burgeraetal2002 (), the authors propose an inference method based on a linear program that infers quadratic utility functions in the context of an application to the banking sector.
In this paper, we propose another approach to build the marginals, which is based on semidefinite programming. It allows for learning marginals which are composed of one or several polynomials of degree d, d being fixed a priori. Besides facilitating the interpretation of the returned marginals, using such functions increases the descriptive power of the model, which is of secondary importance for decision aiding but may be valuable in other applications. In particular, in machine learning, learning sets may involve thousands of pairs of ordered alternatives or assignment examples, which may provide an advantage to more flexible models. Beyond these advantages, the most striking aspect of this work is the fact that a single new optimization technique allows us to deal with polynomial marginals of any degree and piecewise polynomial marginals instead of piecewise linear marginals. The semidefinite programming approach used in this paper for UTA might open new perspectives for the elicitation of other preference models based on additive or partly additive value structures, such as additive difference models (MACBETH bana1994macbeth (); bana2005 ()) and GAI networks (GonzalesEtAl2011, ).
This paper contributes to the field of preference elicitation by proposing a new way to model marginal value functions using polynomials or splines instead of piecewise linear value functions. The paper is organized as follows. Section 2 recalls the principles of UTA methods. We then describe a new method called UTApoly which computes each marginal as a polynomial of degree d instead of a piecewise linear function. Section 4 introduces another approach called UTAsplines, which is a generalization of UTA and UTApoly. The marginals used by UTAsplines are piecewise polynomials or polynomial splines. These methods can be used either for ranking alternatives or for sorting them in ordered categories. The next section gives an illustrative example of the use of UTApoly and UTAsplines. Finally, we present experimental results comparing the new methods with UTA in terms of accuracy, model retrieval and computational effort.
2 UTA methods
In this section we briefly recall the basics of the additive value function model (see keeneyraiffa1976 () for a classical exposition) and two inference methods that are based on this model.
2.1 Additive utility function models
Let ≿ denote the preference relation of a DM on a set of alternatives. We assume that each of these alternatives is fully described by an n-dimensional vector the components of which are the evaluations of the alternative w.r.t. n criteria or attributes. Under some conditions, among which preferential independence (see keeneyraiffa1976 (), p. 110), such a preference can be represented by means of an additive value function. To be more precise, let a (resp. b) denote an alternative described by the vector (a_1, …, a_n) (resp. (b_1, …, b_n)) of its evaluations on the n criteria. The preference ≿ of the DM is representable by an additive value function if there is a function U which associates a value (or score) U(a) to each alternative a in such a way that U(a) ≥ U(b) whenever the DM prefers a to b (a ≿ b) and
(1)  U(a) = \sum_{i=1}^{n} w_i u_i(a_i)
where u_i is a marginal value function defined on the scale or range X_i of criterion i and w_i ≥ 0 is a weight or tradeoff associated with criterion i. Weights can be normalized w.l.o.g., i.e. \sum_{i=1}^{n} w_i = 1.
In the sequel, we assume that the range X_i of each criterion is an interval [α_i, β_i] of an ordered set, e.g. the real line. We assume w.l.o.g. that, along each criterion, the DM’s preference increases with the evaluation (the larger the better). We also assume that the marginal value functions are normalized, i.e. u_i(α_i) = 0 and u_i(β_i) = 1 for all i = 1, …, n.
Model (1) can be rewritten by integrating the weights in the marginal value functions, setting v_i(x_i) = w_i u_i(x_i) for all i. Equation (1) can then be reformulated as follows:
(2)  U(a) = \sum_{i=1}^{n} v_i(a_i)
The marginal value functions v_i, or, more briefly, the marginals, take their values in the interval [0, w_i], for all i = 1, …, n. Note that a preference that can be represented by a value function is necessarily a weak order, i.e. a transitive and complete relation. Such a relation is also called a ranking (ties are allowed).
2.2 UTA methods for ranking and sorting problems
The UTA method was originally designed jaquetlsiskos1982 () to learn the preference relation of the DM on the basis of partial knowledge of this preference. It is supposed that the DM is able to rank some pairs of alternatives a priori, without further analysis. Assuming that the DM’s preference on the set of all alternatives is a ranking which is representable by an additive value function, UTA is a method for learning one such function which is compatible with the DM’s a priori ranking of certain pairs of alternatives.
Let P denote the set of pairs of alternatives (a, b) such that the DM knows a priori that he/she strictly prefers a to b. More precisely, if (a, b) ∈ P, we have a ≻ b, which means a ≿ b and not b ≿ a. The DM may also know that he/she is indifferent between some pairs of alternatives. These constitute the set I. Whenever (a, b) ∈ I, we have a ∼ b, i.e. a ≿ b and b ≿ a. We denote by A* the set containing the learning alternatives, i.e. those used for the comparisons in sets P and I. These two sets and the vectors of performances of the alternatives contained in them constitute the learning set which serves as input to the learning algorithm.
Linear programming is used to infer the parameters of the UTA model. Each pairwise comparison of the sets P and I is translated into a constraint. For each pair of alternatives (a, b) ∈ P, we have U(a) > U(b), and for each pair of alternatives (a, b) ∈ I, we have U(a) = U(b). Note that these constraints may prove incompatible. In order to have a feasible linear program in all cases, two positive slack variables, σ⁺(a) and σ⁻(a), are introduced for each alternative a in A*. The objective function of UTA is given by:
(3)  \min \sum_{a \in A^*} \left( \sigma^+(a) + \sigma^-(a) \right)
and the constraints by:
(4)
\[
\begin{aligned}
& U(a) + \sigma^+(a) - \sigma^-(a) \ge U(b) + \sigma^+(b) - \sigma^-(b) + \delta && \forall (a, b) \in P \\
& U(a) + \sigma^+(a) - \sigma^-(a) = U(b) + \sigma^+(b) - \sigma^-(b) && \forall (a, b) \in I \\
& v_i \text{ nondecreasing on } [\alpha_i, \beta_i] && i = 1, \dots, n \\
& v_i(\alpha_i) = 0, \quad \textstyle\sum_{i=1}^{n} v_i(\beta_i) = 1 \\
& \sigma^+(a) \ge 0, \; \sigma^-(a) \ge 0 && \forall a \in A^*
\end{aligned}
\]
where δ > 0 is a small positive constant.
If we assume that the unknown marginals are piecewise linear, all the constraints above can be formulated in linear fashion and the corresponding optimization program can be handled by a LP solver. Note that the range of each criterion has to be split in a number of segments that have to be fixed a priori (i.e. they are not variables in the program).
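As an illustration, the linear-programming structure of such a UTA model can be sketched in a few lines. The following Python fragment (assuming SciPy is available) encodes a deliberately simplified variant, not the full formulation above: two criteria on [0, 100], breakpoints fixed at 0, 50 and 100, and a single nonnegative slack per preference statement instead of the σ⁺/σ⁻ pair.

```python
from scipy.optimize import linprog

# Simplified UTA sketch: variables x = [u1(50), u1(100), u2(50), u2(100), s],
# with u_i(0) = 0 and one slack s for the single preference statement.
delta = 0.05  # preference threshold

# DM statement: a = (100, 0) is strictly preferred to b = (50, 50),
# i.e. U(a) - U(b) + s >= delta with U(a) = u1(100), U(b) = u1(50) + u2(50).
A_ub = [
    [1, -1, 1, 0, -1],   # -(U(a) - U(b)) - s <= -delta
    [1, -1, 0, 0, 0],    # monotonicity: u1(50) <= u1(100)
    [0, 0, 1, -1, 0],    # monotonicity: u2(50) <= u2(100)
]
b_ub = [-delta, 0, 0]
A_eq = [[0, 1, 0, 1, 0]]  # normalization: u1(100) + u2(100) = 1
b_eq = [1]
c = [0, 0, 0, 0, 1]       # minimize the total slack

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4 + [(0, None)])
# The statement is representable by the model, so the optimal slack is 0.
```

Intermediate performance levels (e.g. 75) would be handled by linear interpolation between breakpoint values, which remains linear in the variables.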
A variant of UTA for learning to sort alternatives in ordered categories is known as UTADIS. The idea was formulated in the initial paper jaquetlsiskos1982 () and further used and developed in doumposzopounidis2002 (); zopounidisdoumpos99 (). Let C_1, …, C_q denote the categories. They are numbered in increasing order of preference, i.e., an alternative assigned to C_h is preferred to any alternative assigned to C_k whenever h > k. It is assumed that the alternatives’ assignment is compatible with the dominance relation, i.e., an alternative which is at least as good as another on all criteria is not assigned to a lower category. The learning set consists of a subset of alternatives A* of which the assignment to one of the categories is known (or the DM is able to assign these alternatives a priori). The problem is to learn an additive value function and thresholds b_1 < … < b_{q−1} such that alternative a is assigned to category C_h if b_{h−1} ≤ U(a) < b_h, for h = 1 to q (setting b_0 to 0 and b_q to infinity, i.e. a sufficiently large value). A mathematical programming formulation of this problem is easily obtained by substituting the first two lines of (4) by the following three sets of constraints:
(5)
\[
\begin{aligned}
& U(a) + \sigma^+(a) \ge b_{h-1} + \delta && \forall a \in A^*_h, \; h = 2, \dots, q \\
& U(a) - \sigma^-(a) \le b_h - \delta && \forall a \in A^*_h, \; h = 1, \dots, q-1 \\
& b_h \ge b_{h-1} + \varepsilon && h = 2, \dots, q-1
\end{aligned}
\]
where A*_h denotes the set of alternatives in the learning set that are assigned to category C_h. Assuming that marginals are piecewise linear allows for a linear programming formulation, as is the case with UTA.
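Once a value function and the thresholds have been learned, the UTADIS assignment rule itself reduces to locating U(a) among the thresholds. A minimal sketch, with hypothetical threshold values and category names:

```python
import bisect

# Hypothetical thresholds b_1 < b_2 separating three ordered categories.
thresholds = [0.35, 0.7]
labels = ["Not satisfactory", "Intermediate", "Satisfactory"]

def assign(value, thresholds, labels):
    """Return the category whose interval [b_{h-1}, b_h) contains the value."""
    return labels[bisect.bisect_right(thresholds, value)]

assert assign(0.2, thresholds, labels) == "Not satisfactory"
assert assign(0.5, thresholds, labels) == "Intermediate"
assert assign(0.9, thresholds, labels) == "Satisfactory"
```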
3 UTApoly: additive value functions with polynomial marginals
In this section we present a new way to elicit marginal value functions using semidefinite programming. We first give the motivations for this new method. Then we describe it.
3.1 Motivation
UTA methods use piecewise linear functions to model the marginal value functions. Opting for such functions makes it possible to use the linear programs presented in the previous section and linear programming solvers to infer an additive value ranking or sorting model. However, by considering piecewise linear marginals with breakpoints at predefined places, the original UTA methods have two important drawbacks: these options limit the interpretability and the flexibility of the additive value model.
Interpretability. There is a longstanding tradition in Economics, especially in the classical theory of consumer behavior (see e.g. SilberbergSuen2001 ()), which assumes that utility (or value) functions are differentiable and interprets their first and second (partial) derivatives in relation with the preferences and behavior of the consumer. Multiple criteria decision analysis, based on value functions, stems from the same tradition. Tradeoffs or marginal rates of substitution are generally thought of as changing smoothly (see e.g. keeneyraiffa1976 (), p. 83: “Throughout we assume that we are in a well-behaved world where all functions have smooth second derivatives”). Although piecewise linear marginals can provide good approximations of any differentiable function, they are not fully satisfactory as an explanatory model. This is especially the case when the breakpoints are fixed arbitrarily (e.g. equally spaced in the criterion domains). Such a choice may well fail to correctly reflect the DM’s feelings about where the marginal rate of substitution starts to grow more quickly (resp. to diminish) or shows an inflexion. In other words, the qualitative behavior of the first and second derivatives of the “true” marginal value function might be poorly approximated by resorting to piecewise linear models, while this behavior might have an intuitive meaning for the DM. Therefore, considering piecewise linear marginals might lead to final models that fail to convince the DM even though they fit the learning set accurately.
Flexibility. Restricting the shape of the marginals to piecewise linear functions with a fixed number of pieces may hamper the expressivity of the additive value function model. This is especially detrimental when large learning sets are available, as is the case in machine learning applications (it is seldom so in MCDA applications, where the size of the learning set rarely exceeds a few dozen records).
The following ad hoc case aims to illustrate the loss in flexibility incurred due to the piecewise linear hypothesis. We hereafter illustrate the case of a single piece, i.e. the linear case, whereas the same issue arises whatever the fixed number of segments. Consider a ranking problem in which alternatives are assessed on two criteria. The DM states that the top-ranked alternatives are a_1, a_2, which are tied (rank 1), followed by a_3 (rank 2), while a_4 is strictly less preferred than the others (rank 3). The evaluations and ranks of these alternatives are displayed in Table 1.
alternative  criterion 1  criterion 2  rank
a_1  100  0  1
a_2  0  100  1
a_3  25  75  2
a_4  75  25  3
Assume that we plan to use a UTA model with marginals involving a single linear piece (i.e. a weighted sum). Such a UTA model cannot at the same time distinguish a_3 from a_4 and express that a_1 and a_2 are tied. The fact that a_1 and a_2 are tied indeed implies that the criteria weights are equal (we can set them to 0.5 w.l.o.g.). The value on each marginal varies from 0 to 0.5. The worst value (0) corresponds to the worst performance (0) and the best value (0.5) to the best performance (100) on each criterion (see the marginal value functions represented by dashed lines in Figure 1). Using these marginals, the scores of the four alternatives are obtained through linear interpolation and displayed in Table 2. We observe that all alternatives receive the same value 0.5. It is therefore not possible to discriminate alternatives a_3 and a_4 without increasing the number of linear pieces or considering nonlinear marginals. In this case, we shall consider using nonlinear marginals.
  a_1  a_2  a_3  a_4
UTA score  0.5  0.5  0.5  0.5
UTApoly score  0.5  0.5  0.46  0.33
In case polynomials are allowed instead of piecewise linear functions to model the marginals, the DM’s preferences can be accurately represented. Figure 1 shows the case of polynomials of degree 3 used as marginals (solid line). The scores of the alternatives computed with these marginals are displayed in Table 2. They comply with the DM’s preferences.
Obviously, it would have been possible to reproduce the DM’s ranking using marginals with more than one linear piece in a UTA model. However, when the breakpoints are fixed in advance, it is easy to construct an example, similar to the above one, in which the DM’s ranking cannot be reproduced using a linear function between successive breakpoints while a polynomial spline will do.
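The claim made by this example can be checked numerically. The following sketch uses hypothetical monotone polynomial marginals of degree at most 3 (not the exact functions plotted in Figure 1 or scored in Table 2), each ranging from 0 to 0.5, and verifies that they reproduce the DM’s ranking where the weighted sum could not:

```python
# Hypothetical monotone polynomial marginals of degree <= 3, each with
# range [0, 0.5]; they must differ, otherwise a_3 and a_4 stay tied.
def u1(t):
    return 0.5 * (t / 100.0) ** 3   # convex cubic marginal on criterion 1

def u2(t):
    return 0.5 * (t / 100.0)        # linear marginal on criterion 2

alts = {"a1": (100, 0), "a2": (0, 100), "a3": (25, 75), "a4": (75, 25)}
scores = {name: u1(g1) + u2(g2) for name, (g1, g2) in alts.items()}
# a1 and a2 are tied at 0.5, and a3 now beats a4: the ranking is reproduced.
```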
The two methods introduced below, UTApoly in the rest of this section and UTAsplines in Section 4, replace the piecewise linear marginals of UTA by polynomials and polynomial splines, respectively.
3.2 Basic facts about nonnegative polynomials
In the last few years, significant improvements have been made in formulating and solving optimization problems in which constraints are expressed in the form of polynomial (in)equalities and with a polynomial objective function; see, e.g., gloptipoly (); gloptipoly3 (). These new techniques are useful for various applications; see Las09 () and the references therein. A problem arising in many applications, including the present one, is to guarantee the nonnegativity of functions of several variables. In our case, we have to make sure not only that the marginals are nonnegative but also that they are nondecreasing, i.e. that their derivative is nonnegative. Testing the nonnegativity of a polynomial of several variables and of degree equal to or greater than 4 is NP-hard murtykabadi1987 (). In parillo2003 (), an approach based on convex optimization techniques has been proposed in order to find an approximate solution to this problem.
The approach proposed in parillo2003 () is based on the following theorem about nonnegative polynomials.
Theorem 1 (Hilbert).
A polynomial p is nonnegative if it can be decomposed as a sum of squares (SOS):
(6)  p(x) = \sum_{j=1}^{m} q_j(x)^2
The condition given above is sufficient but not necessary: there exist nonnegative polynomials that cannot be decomposed as a sum of squares blekherman2006 (). However, it has been proved by Hilbert that a nonnegative polynomial of one variable is always a sum of squares parillo2003 (). We give the proof here because it is remarkably simple and elegant.
Theorem 2 (Hilbert).
A nonnegative polynomial in one variable is always a SOS.
Proof.
Consider a polynomial p of degree n, p(x) = c_n x^n + … + c_1 x + c_0. Since p is nonnegative, n must be even. The value of c_n should be greater than 0, otherwise lim_{x→+∞} p(x) = −∞. As every polynomial of degree n admits n (complex) roots, one can write p as follows:
p(x) = c_n \prod_{k=1}^{r} (x - a_k - i b_k)^{n_k} (x - a_k + i b_k)^{n_k} \prod_{j=1}^{s} (x - \lambda_j)^{m_j}
in which a_k ± i b_k for k = 1, …, r are pairs of conjugate complex roots and λ_j for j = 1, …, s are distinct real roots, where 2(n_1 + … + n_r) + m_1 + … + m_s = n. All the exponents m_j are even. Indeed, suppose some of the m_j were odd and order the corresponding real roots increasingly; between two consecutive such roots (or below the smallest one), p would change sign, so we would have p(x) < 0 for some x, a contradiction. As all the m_j are even, we can rewrite p as follows:
p(x) = c_n \Big( \prod_{k=1}^{r} (x - a_k - i b_k)^{n_k} \prod_{j=1}^{s} (x - \lambda_j)^{m_j/2} \Big) \Big( \prod_{k=1}^{r} (x - a_k + i b_k)^{n_k} \prod_{j=1}^{s} (x - \lambda_j)^{m_j/2} \Big)
in which the two factors are complex conjugate polynomials. Write the first factor as G(x) + i H(x), where G and H are two polynomials with real coefficients and H gathers the imaginary parts. Finally, since c_n > 0, the product of these two conjugate terms gives a sum of two squares: p(x) = (√c_n G(x))² + (√c_n H(x))². ∎
Let us consider the problem of determining a nonnegative polynomial p of one variable and of degree 2d. We use the following canonical form to represent this polynomial:
(7)  p(x) = \sum_{k=0}^{2d} c_k x^k
To guarantee the nonnegativity of this polynomial, we have to ensure that it can be represented as a sum of squares as in Equation (6). Note that a nonnegative polynomial always has an even degree, since the limit of a polynomial of odd degree at either positive or negative infinity is negative. Let z = (1, x, x², …, x^d)^T. Each square in (6) is of the form q_j(x)² with q_j(x) = q_j^T z for some coefficient vector q_j (where ^T stands for the matrix transposition operation), so we can express p as follows:
p(x) = \sum_j (q_j^T z)^2 = z^T \Big( \sum_j q_j q_j^T \Big) z = z^T Q z
Note that the matrix Q = \sum_j q_j q_j^T is symmetric and positive semidefinite (PSD), which we denote Q ⪰ 0, since y^T Q y = \sum_j (q_j^T y)^2 ≥ 0 for all y. Therefore, to ensure that p is nonnegative, it is enough to find a matrix Q of dimension (d+1) × (d+1) such that Q ⪰ 0 and p(x) = z^T Q z. That any such Q indeed yields an SOS decomposition follows from the following lemma.
Lemma 3.
A symmetric matrix Q is positive semidefinite if and only if there exists a matrix L such that Q = L L^T.
The above decomposition is called the Cholesky decomposition of matrix Q; see B. Given such a decomposition, p(x) = z^T L L^T z = ‖L^T z‖², i.e. a sum of squares of polynomials. To summarize, a polynomial p of degree 2d in one variable is nonnegative if and only if there exists Q ⪰ 0 such that p(x) = z^T Q z.
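This characterization is easy to check numerically. The sketch below builds a PSD matrix as Q = L L^T from an arbitrary factor L and samples the induced polynomial z^T Q z on a grid; by the argument above, no sampled value can be negative:

```python
import numpy as np

# Any Q = L L^T is PSD, and p(x) = z^T Q z with z = (1, x, x^2) is then a
# sum of squares, hence nonnegative everywhere.
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
Q = L @ L.T                       # PSD by construction (Cholesky-style factor)

def p(x):
    z = np.array([1.0, x, x ** 2])
    return z @ Q @ z              # a degree-4 polynomial in x

xs = np.linspace(-10.0, 10.0, 2001)
assert min(p(x) for x in xs) >= 0.0   # nonnegative on the sampled grid
```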
The coefficients of the polynomial expressed in its canonical form (7) are obtained by summing the entries of the matrix Q along its anti-diagonals, as follows:
(8)  c_k = \sum_{i + j = k, \; 0 \le i, j \le d} Q_{ij}, \quad k = 0, \dots, 2d
The value of p(x) can be computed with both expressions, the canonical form (7) and the quadratic form z^T Q z. Finding a nonnegative univariate polynomial thus consists in finding a positive semidefinite matrix Q; summing the anti-diagonal entries of this matrix allows to control the coefficients of the polynomial.
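The correspondence between the matrix Q and the canonical coefficients can likewise be verified numerically; the fragment below computes the coefficients by anti-diagonal sums, as in Equation (8), and compares the two ways of evaluating the polynomial:

```python
import numpy as np

# Recover the canonical coefficients c_k of p(x) = z^T Q z, z = (1, x, x^2),
# by summing Q along its anti-diagonals, then cross-check against the
# quadratic form at an arbitrary point.
rng = np.random.default_rng(1)
F = rng.normal(size=(3, 3))
Q = F @ F.T                                        # PSD matrix

c = [sum(Q[i, k - i] for i in range(3) if 0 <= k - i <= 2) for k in range(5)]

x = 1.7
z = np.array([1.0, x, x ** 2])
assert abs(z @ Q @ z - np.polynomial.polynomial.polyval(x, c)) < 1e-9
```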
In some applications, it is not necessary to ensure the nonnegativity of the polynomial on the whole real line but only in an interval [a, b]. If the nonnegativity constraint has to be guaranteed only in a given interval [a, b] for a polynomial p, then the following theorem holds.
Theorem 4 (Hilbert).
A polynomial p in one variable is nonnegative in the interval [a, b] if and only if it can be written as p(x) = s(x) + (x − a)(b − x) t(x) when its degree is even, and as p(x) = (x − a) s(x) + (b − x) t(x) when its degree is odd, where s and t are SOS.
Given the above theorem, if we want to ensure the nonnegativity of a polynomial of degree n on the interval [a, b], we have to find two positive semidefinite matrices Q¹ and Q² of appropriate sizes, one for each of the SOS polynomials s and t. We denote these matrices and their entries as follows:
Since Q¹ and Q² are positive semidefinite, the quadratic forms z^T Q¹ z and z^T Q² z are always nonnegative.
To obtain a polynomial that is nonnegative in the interval , its coefficients have to be chosen such that:
If the degree of the polynomial is even then the value of is equal to 0. The values can be expressed in the following more compact form:
3.3 Semidefinite programming applied to UTA methods
In the perspective of building more natural marginal value functions, we use semidefinite programming (SDP) to learn polynomial marginals instead of piecewise linear ones. SDP has become a standard tool in convex optimization, being a generalization of linear programming and second-order cone programming. It allows optimizing linear functions over an affine subspace of the cone of positive semidefinite matrices; see, e.g., vdb1996sdp () and the references therein.
There are two variants of the new UTApoly method. First, we describe the approach that consists in using polynomials that are globally monotone, i.e. monotone on the set of all real numbers. Then we describe the second approach, considering polynomials that are monotone only on a given interval.
3.3.1 Enforcing monotonicity of the marginals on the set of real numbers
In the new proposed model, we define the value function on each criterion i as a polynomial of degree d:
(9)  v_i(x) = \sum_{k=0}^{d} c_{i,k} x^k
To be compliant with the requirements of the theory of additive value functions, the polynomials used as marginals should be nonnegative and monotone over the criteria domains. To ensure monotonicity, the derivative of the marginal value function has to be nonnegative, hence we impose that the derivative of each value function is a sum of squares. The degree of the derivative is therefore even, which implies that d is odd. This requirement reads:
v_i'(x) = z^T Q^i z, with Q^i a PSD matrix of dimension (d+1)/2 × (d+1)/2 and z = (1, x, …, x^{(d−1)/2})^T a vector of size (d+1)/2.
By using SDP, we impose the matrix Q^i to be positive semidefinite and we set the following constraints on the coefficient values, for k = 1, …, d:
k \, c_{i,k} = \sum_{l + m = k - 1} Q^i_{lm}
In UTApoly, the marginal value functions and the monotonicity conditions on marginals given in Equations (4) and (5) are replaced by the following constraints:
(10)
\[
\begin{aligned}
& k \, c_{i,k} = \textstyle\sum_{l+m=k-1} Q^i_{lm} && i = 1, \dots, n, \; k = 1, \dots, d \\
& Q^i \succeq 0 && i = 1, \dots, n \\
& v_i(\alpha_i) = 0 \text{ for all } i, \quad \textstyle\sum_{i=1}^{n} v_i(\beta_i) = 1
\end{aligned}
\]
The optimization program composed of the objective given in Equation (3) and the set of constraints given in Equations (4) and (10) can be solved using convex programming, more precisely semidefinite programming parillo2003 (). We refer to this new mathematical program as UTApoly. An explicit UTApoly formulation for a simple problem involving 2 criteria and 3 alternatives is provided in A for illustrative purposes.
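In practice such a program is handed to an SDP solver through a modeling layer (e.g. CVXPY or YALMIP; the text does not prescribe a particular solver, so this is an assumption). Without invoking a solver, the structure of the constraints can be illustrated on a hand-built feasible point for a single marginal of degree d = 3 on the normalized domain [0, 1]:

```python
import numpy as np

# Hand-built feasible point of the UTApoly constraints for one marginal on
# [0, 1] (degree d = 3, so the derivative has degree 2 and z = (1, x)^T):
# take v'(x) = z^T Q z with Q = [[0, 0], [0, 3]], i.e. v'(x) = 3 x^2 >= 0.
Q = np.array([[0.0, 0.0], [0.0, 3.0]])
assert np.all(np.linalg.eigvalsh(Q) >= 0)      # Q is PSD

# Anti-diagonal sums k c_k = sum_{l+m=k-1} Q_lm give c_1 = 0, c_2 = 0 and
# 3 c_3 = Q[1,1] = 3, hence v(x) = x^3 with v(0) = 0.
c = [0.0, 0.0, 0.0, 1.0]

xs = np.linspace(0.0, 1.0, 101)
v = np.polynomial.polynomial.polyval(xs, c)
assert np.all(np.diff(v) > 0)                  # v is increasing on [0, 1]
assert abs(v[-1] - 1.0) < 1e-12                # normalization v(1) = 1
```

A solver would search over all PSD matrices Q^i (and coefficients c_{i,k}) satisfying these constraints while minimizing the slack objective (3).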
3.3.2 Enforcing monotonicity of the marginals on the criteria domains
Ensuring the monotonicity of each marginal on the domain of each criterion (instead of the whole real line) is sufficient to satisfy the requirements of the additive value function model. To do so, we use Theorem 4 and only impose the nonnegativity of the marginal’s derivative on the domain [α_i, β_i] of each criterion. This results in the following condition on the derivative of the polynomial v_i, for all x ∈ [α_i, β_i]:
In the above equation, Q^{i,1} and Q^{i,2} are two PSD matrices of appropriate sizes and z is a vector of monomials z = (1, x, …)^T:
The value for are obtained as follows:
If the degree is odd, then we have since .
In the convex program, in order to have polynomial marginals that are monotone on an interval, the monotonicity constraints of UTA have to be replaced by the following ones:
(11) 
4 UTAsplines: additive value functions with splines marginals
In this section we describe a variant of UTApoly which consists in using several polynomials for each value function. We first recall some theory about splines. Then we describe the new method called UTAsplines.
4.1 Splines
We recall here the definition of a spline. We detail the ones that are the most commonly used.
4.1.1 Definition
A spline of degree d is a function S that interpolates a set of points (x_j, y_j) for j = 0, …, ℓ, with x_0 < x_1 < … < x_ℓ, such that:

- S(x_j) = y_j for j = 0, …, ℓ;

- S coincides, on each interval [x_j, x_{j+1}], with a polynomial of degree equal to or smaller than d (at least one of the polynomials has a degree equal to d);

- the derivatives of S are continuous up to a given order on [x_0, x_ℓ].
The degree of a spline corresponds to its highest polynomial degree. If all the polynomials have the same degree, the spline is said to be uniform.
The continuity of the spline at the connection points is ensured up to a given derivative. Usually, the continuity of the spline is guaranteed up to the second derivative. This ensures the continuity of the slope and of the curvature at the connection points.
4.1.2 Cubic splines
The most common uniform splines are those of degree 3 (d = 3), also called cubic splines. A cubic spline consists of a set of third-degree polynomials which are continuous up to the second derivative at their connection points.
We denote by P_j the polynomial of the spline going from connection point (x_j, y_j) to connection point (x_{j+1}, y_{j+1}). Formally, each polynomial of the spline has the following form:
P_j(x) = a_j x^3 + b_j x^2 + c_j x + d_j
The use of cubic splines requires the determination of four parameters per piece: a_j, b_j, c_j and d_j. If the spline interpolates ℓ + 1 points, there are 4ℓ parameters to determine overall.
Imposing equality up to the second derivative at the connection points amounts to enforcing the following constraints:
(12)
\[
\begin{aligned}
& P_j(x_j) = y_j, \quad P_j(x_{j+1}) = y_{j+1} && j = 0, \dots, \ell - 1 \\
& P_j'(x_{j+1}) = P_{j+1}'(x_{j+1}) && j = 0, \dots, \ell - 2 \\
& P_j''(x_{j+1}) = P_{j+1}''(x_{j+1}) && j = 0, \dots, \ell - 2
\end{aligned}
\]
Since there are 4ℓ − 2 constraints and 4ℓ parameters, two degrees of freedom remain. They can be set in different ways. For instance, one can impose P_0''(x_0) = 0 and P_{ℓ−1}''(x_ℓ) = 0. This corresponds to imposing zero curvature at both endpoints of the spline.
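For illustration, SciPy’s `CubicSpline` with `bc_type='natural'` implements exactly this choice of the two remaining degrees of freedom (zero second derivative at both endpoints). Note that this is plain interpolation through given breakpoint values, whereas UTAsplines learns those values; the breakpoint values below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Natural cubic spline through hypothetical breakpoint values of a marginal.
x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
y = np.array([0.0, 0.1, 0.3, 0.7, 1.0])        # increasing marginal values
s = CubicSpline(x, y, bc_type="natural")       # zero curvature at endpoints

# The second derivative (order-2 evaluation) vanishes at both endpoints.
assert abs(s(0.0, 2)) < 1e-9
assert abs(s(100.0, 2)) < 1e-9
```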
4.2 UTAsplines: using splines as marginals
We give some details on how to use splines to model the marginal value functions of an additive value function model. We formulate a semidefinite program that learns the parameters of such a model.
4.2.1 Overview
Using splines continuous up to either the first or the second derivative instead of piecewise linear functions for the marginal value functions aims at obtaining more natural functions around the breakpoints.
With UTApoly, the flexibility of the model is improved by using polynomials of higher degrees. In order to further improve the flexibility of the model, we propose now to hybridize the original UTA method, which splits the criterion domain into equal parts, with the UTApoly approach, which uses polynomials to model the marginal value functions. We call this new disaggregation procedure UTAsplines. The UTAsplines method combines the use of piecewise functions for the marginals (as in UTA) and the use of polynomials (as in UTApoly) for each piece of the function.
Compared to UTA, in UTAsplines the continuity of the marginal can be ensured up to any derivative at the connection points. This enables obtaining more natural marginals which have a continuous curvature.
Constraints concerning the concavity/convexity of the marginal value functions on some subintervals can also be specified, if the information is available or if the decision maker is able to specify such constraints. This makes it possible to “control” the shape of the obtained model and improve its interpretability by the decision maker.
4.2.2 Description of UTAsplines
In UTAsplines, we model the marginal of each criterion i as a uniform spline s_i of degree d composed of ℓ pieces: v_i(x) = s_i(x). Each piece of the spline is a polynomial of degree d denoted by P_{i,j}, j = 1, …, ℓ. Formally it reads:
The pairs (x_{i,j−1}, y_{i,j−1}) and (x_{i,j}, y_{i,j}) denote respectively the coordinates of the initial and final points of the j-th piece of the spline. The points