
# UTA-poly and UTA-splines: additive value functions with polynomial marginals

Olivier Sobrie, Nicolas Gillis, Vincent Mousseau, Marc Pirlot

Université de Mons, Faculté Polytechnique, 9 rue de Houdain, 7000 Mons, Belgium

CentraleSupélec, Laboratoire Génie Industriel, Grande Voie des Vignes, 92295 Châtenay-Malabry, France
###### Abstract

Additive utility function models are widely used in multiple criteria decision analysis. In such models, a numerical value is associated with each alternative involved in the decision problem. It is computed by aggregating the scores of the alternative on the different criteria of the decision problem. The score of an alternative on a criterion is determined by a marginal value function that evolves monotonically as a function of the performance of the alternative on this criterion. Determining the shape of the marginals is not easy for a decision maker. It is easier for him/her to make statements such as “alternative $a$ is preferred to alternative $b$”. In order to help the decision maker, UTA disaggregation procedures use linear programming to approximate the marginals by piecewise linear functions based only on such statements. In this paper, we propose to infer polynomials and splines instead of piecewise linear functions for the marginals. To this end, we use semidefinite programming instead of linear programming. We illustrate this new elicitation method and present some experimental results.

###### keywords:
Multiple criteria decision analysis, UTA method, Additive value function model, Preference learning, Disaggregation, Ordinal regression, Semidefinite programming

## 1 Introduction

The theory of value functions aims at assigning a number to each alternative in such a way that the decision maker’s preference order on the alternatives is the same as the order on the numbers associated with the alternatives. The number or value associated with an alternative is a monotone function of its evaluations on the various relevant criteria. For preferences satisfying some additional properties (including preferential independence), the value of an alternative can be obtained as the sum of marginal value functions, each depending only on a single criterion (keeneyraiffa1976, Chapter 6).

These functions are usually monotone, i.e., marginal value functions either increase or decrease with the assessment of the alternative on the associated criterion. Many questioning protocols have been proposed aiming to elicit an additive value function (keeneyraiffa1976, ; fishburn67, ) through interactions with the decision maker (DM). These direct elicitation methods are time-consuming and require a substantial cognitive effort from the DM. Therefore, in certain cases, an indirect approach may prove fruitful. The latter consists in learning an additive value model (or a set of such models) from a set of declared or observed preferences. In case we know that the DM prefers alternative $a$ to alternative $b$ for some pairs $(a,b)$, we may infer a model that is compatible with these preferences. Learning approaches have been proposed not only for inferring an additive value function that is used to rank all other alternatives. They have also been used for sorting alternatives in ordered categories (yu1992, ; roybouyssou1993, ; zopounidisdoumpos2002, ). In this model, an alternative is assigned to a category (e.g. “Satisfactory”, “Intermediate”, “Not satisfactory”) whenever its value passes some threshold and does not exceed some other, these thresholds being respectively the lower and upper values of the alternatives to be assigned to this category.

The UTA method (jaquetlsiskos1982, ) was the original proposal for this purpose. It uses a linear programming formulation to determine piecewise linear marginal value functions that are compatible with the DM’s known preferences. Several variants of this idea for learning a piecewise linear additive value function on the basis of examples of ordered pairs of alternatives are described in jaquetlsiskos2001 (). The variant used for inferring a rule to assign alternatives to ordered categories on the basis of assignment examples is called UTADIS in zopounidisdoumpos99 () (see also zopounidisdoumpos2002 ()). The interested reader is referred to SiskosInErgFigGre05 () for a comprehensive review of UTA methods, their variants and developments.

A problem with these methods is that, often, the information available about the DM’s preferences is far from determining a single additive value function. In general, the set of piecewise linear value functions compatible with the partial knowledge of the DM’s preferences is a polytope in an appropriate space. Therefore the learning methods that have been proposed either select a “representative” value function or they work with all possible value functions and derive robust conclusions, i.e. information on the DM’s preference that does not depend on the particular choice of a value function in the polytope. Among the latter, one may cite UTA-GMS (grecoetal2008, ; grecoetal2010, ) and GRIP (figueiraetal2009b, ). This research avenue is known under the name robust ordinal regression methods.

The original approach has to face the issue of defining what a “representative” or default value function is. UTA-STAR (jaquetlsiskos1982, ; siskosyanacopoulos1985, ) solves the problem implicitly by returning an “average solution” computed as the mean of “extreme” solutions (this approach is sometimes referred to as “post-optimality analysis” (DoumposEtAl2014, )). Although UTA-STAR does not give any formal definition of a representative solution, it returns a solution that tends to lie “in the middle” of the polytope determined by the constraints. The idea of centrality, as a definition of representativeness, has been illustrated with the ACUTA method bousetal2010 (), in which the selected value function corresponds to the analytic center of the polytope, and with another formulation using the Chebyshev center (DoumposEtAl2014, ). On the other hand, KadzinskiEtal2012 () propose a completely different approach to the idea of representativeness. They define five targets and select a representative value function taking into account a prioritization of the targets by the DM in the context of robust ordinal regression methods. The same authors also proposed a method for selecting a representative value function for robust sorting of alternatives in ordered categories (GrecoEtAlReprVFSorting2011, ).

In all the approaches aiming to return a “representative” value function, the marginal value functions are piecewise linear. The choice of such functions is historically motivated by the opportunity of using linear programming solvers (except for ACUTA bousetal2010 ()). Although piecewise linear functions are well-suited for approximating monotone continuous functions, their lack of smoothness (differentiability) may make them seem “not natural” in some contexts, especially for economists. Abrupt changes in slope at the breakpoints are difficult to explain and justify. Therefore, using smooth functions as marginals is advantageous from an interpretative point of view.

The MIIDAS system siskosetalejor99 () proposes tools to model marginal value functions. Possibly non-linear (and even non-monotone) shapes of marginals can be chosen from parameterized families of curves. The values of the parameters are adjusted using ad hoc techniques such as the midpoint value technique. In burgeraetal2002 (), the authors propose an inference method based on a linear program that infers quadratic utility functions in the context of an application to the banking sector.

In this paper, we propose another approach to build the marginals, which is based on semidefinite programming. It allows for learning marginals which are composed of one or several polynomials of degree $D$, $D$ being fixed a priori. Besides facilitating the interpretation of the returned marginals, using such functions increases the descriptive power of the model, which is of secondary importance for decision aiding but may be valuable in other applications. In particular, in machine learning, learning sets may involve thousands of pairs of ordered alternatives or assignment examples, which may provide an advantage to more flexible models. Beyond these advantages, the most striking aspect of this work is the fact that a single new optimization technique allows us to deal with polynomial marginals of any degree and piecewise polynomial marginals instead of piecewise linear marginals. The semidefinite programming approach used in this paper for UTA might open new perspectives for the elicitation of other preference models based on additive or partly additive value structures, such as additive differences models (MACBETH bana1994macbeth (); bana2005 ()), and GAI networks (GonzalesEtAl2011, ).

This paper contributes to the field of preference elicitation by proposing a new way to model marginal value functions using polynomials or splines instead of piecewise linear value functions. The paper is organized as follows. Section 2 recalls the principles of UTA methods. Section 3 then describes a new method called UTA-poly which computes each marginal as a polynomial of degree $D$ instead of a piecewise linear function. Section 4 introduces another approach called UTA-splines which is a generalization of UTA and UTA-poly. The marginals used by UTA-splines are piecewise polynomials, or polynomial splines. These methods can be used either for ranking alternatives or for sorting them in ordered categories. The next section gives an illustrative example of the use of UTA-poly and UTA-splines. Finally, we present experimental results comparing the new methods with UTA in terms of accuracy, model retrieval and computational effort.

## 2 UTA methods

In this section we briefly recall the basics of the additive value function model (see keeneyraiffa1976 () for a classical exposition) and two inference methods that are based on this model.

### 2.1 Additive utility function models

Let $\succsim$ denote the preference relation of a DM on a set of alternatives. We assume that each of these alternatives is fully described by an $n$-dimensional vector the components of which are the evaluations of the alternative w.r.t. $n$ criteria or attributes. Under some conditions, among which preferential independence (see keeneyraiffa1976 (), p.110), such a preference can be represented by means of an additive value function. To be more precise, let $a$ (resp. $b$) denote an alternative described by the vector $(a_1, \dots, a_n)$ (resp. $(b_1, \dots, b_n)$) of its evaluations on the $n$ criteria. The preference $\succsim$ of the DM is representable by an additive value function if there is a function $U$ which associates a value (or score) $U(a)$ to each alternative $a$ in such a way that $U(a) > U(b)$ whenever the DM prefers $a$ to $b$ ($a \succ b$) and

$$U(a) = \sum_{j=1}^{n} w_j u_j(a_j), \qquad (1)$$

where $u_j$ is a marginal value function defined on the scale or range of criterion $j$ and $w_j$ is a weight or tradeoff associated with criterion $j$. Weights can be normalized w.l.o.g., i.e. $\sum_{j=1}^{n} w_j = 1$.

In the sequel, we assume that the range of each criterion is an interval of an ordered set, e.g. the real line. We assume w.l.o.g. that, along each criterion, the DM’s preference increases with the evaluation (the larger the better). We also assume that the marginal value functions are normalized, i.e. $u_j(v_{1,j}) = 0$ and $u_j(v_{2,j}) = 1$ for all $j$, where $v_{1,j}$ and $v_{2,j}$ denote respectively the worst and best evaluations on criterion $j$.

Model (1) can be rewritten by integrating the weights into the marginal value functions, setting $u^*_j = w_j u_j$ for all $j$. Equation (1) can then be reformulated as follows:

$$U(a) = \sum_{j=1}^{n} u^*_j(a_j). \qquad (2)$$

The marginal value functions $u^*_j$, or, more briefly, the marginals, take their values in the interval $[0, w_j]$, for all $j$. Note that a preference that can be represented by a value function is necessarily a weak order, i.e. a transitive and complete relation. Such a relation is also called a ranking (ties are allowed).
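As a toy numerical illustration of model (2), the snippet below aggregates two hypothetical marginals into a global value $U$; the shapes and numbers are assumptions of ours, chosen only so that the worst profile scores 0 and the best scores 1:

```python
# Toy illustration of the additive model (2): U(a) = sum_j u*_j(a_j).
# Both marginals below are hypothetical, monotone, and already weighted
# (u*_1 ranges over [0, 0.6], u*_2 over [0, 0.4], so U ranges over [0, 1]).

def u1(x):
    # concave marginal on criterion 1, domain [0, 100]
    return 0.6 * (x / 100.0) ** 0.5

def u2(x):
    # linear marginal on criterion 2, domain [0, 100]
    return 0.4 * (x / 100.0)

def U(a):
    """Global value of an alternative a = (a1, a2)."""
    return u1(a[0]) + u2(a[1])

print(U((100, 100)), U((0, 0)))  # 1.0 0.0 by construction
```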

### 2.2 UTA methods for ranking and sorting problems

The UTA method was originally designed jaquetlsiskos1982 () to learn the preference relation of the DM on the basis of partial knowledge of this preference. It is supposed that the DM is able to rank some pairs of alternatives a priori, without further analysis. Assuming that the DM’s preference on the set of all alternatives is a ranking which is representable by an additive value function, UTA is a method for learning one such function which is compatible with the DM’s a priori ranking of certain pairs of alternatives.

Let $P$ denote the set of pairs $(a,b)$ of alternatives such that the DM knows a priori that he/she strictly prefers $a$ to $b$. More precisely, if $(a,b) \in P$, we have $a \succ b$, which means $a \succsim b$ and not $b \succsim a$. The DM may also know that he/she is indifferent between some pairs of alternatives. These constitute the set $I$. Whenever $(a,b) \in I$, we have $a \sim b$, i.e. $a \succsim b$ and $b \succsim a$. We denote by $A^*$ the set containing the learning alternatives, i.e. those used for the comparisons in sets $P$ and $I$. These two sets and the vectors of performances of the alternatives contained in them constitute the learning set which serves as input to the learning algorithm.

Linear programming is used to infer the parameters of the UTA model. Each pairwise comparison of the sets $P$ and $I$ is translated into a constraint. For each pair of alternatives $(a,b) \in P$, we have $U(a) > U(b)$ and for each pair of alternatives $(a,b) \in I$, we have $U(a) = U(b)$. Note that these constraints may prove incompatible. In order to have a feasible linear program in all cases, two non-negative slack variables, $\sigma^+(a)$ and $\sigma^-(a)$, are introduced for each alternative $a$ in $A^*$. The objective function of UTA is given by:

$$\min_{u^*_j} \sum_{a \in A^*} \left( \sigma^+(a) + \sigma^-(a) \right) \qquad (3)$$

and the constraints by:

$$\begin{cases}
U(a) - U(b) + \sigma^+(a) - \sigma^-(a) - \sigma^+(b) + \sigma^-(b) > 0 & \forall (a,b) \in P,\\
U(a) - U(b) + \sigma^+(a) - \sigma^-(a) - \sigma^+(b) + \sigma^-(b) = 0 & \forall (a,b) \in I,\\
\sum_{j=1}^{n} u^*_j(v_{2,j}) = 1, &\\
u^*_j(v_{1,j}) = 0 & \forall j \in N,\\
\sigma^+(a) \ge 0 & \forall a \in A^*,\\
\sigma^-(a) \ge 0 & \forall a \in A^*,\\
u^*_j \text{ monotonic} & \forall j \in N.
\end{cases} \qquad (4)$$

If we assume that the unknown marginals are piecewise linear, all the constraints above can be formulated in a linear fashion and the corresponding optimization program can be handled by an LP solver. Note that the range of each criterion has to be split into a number of segments that are fixed a priori (i.e. they are not variables in the program).
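To make this concrete, here is a minimal sketch of how a piecewise linear marginal is represented and evaluated; the breakpoints and marginal values below are hypothetical stand-ins for what the LP would return:

```python
# A UTA marginal: the criterion range is split into segments fixed a priori,
# and the decision variables of the LP are the marginal values at the
# breakpoints. Evaluation between breakpoints is by linear interpolation.

def pwl_marginal(breakpoints, values):
    """breakpoints: increasing abscissae; values: marginal values there."""
    def u(x):
        if x <= breakpoints[0]:
            return values[0]
        for x0, x1, y0, y1 in zip(breakpoints, breakpoints[1:],
                                  values, values[1:]):
            if x <= x1:
                return y0 + (x - x0) / (x1 - x0) * (y1 - y0)
        return values[-1]
    return u

# Hypothetical LP solution: u(0) = 0, u(50) = 0.4, u(100) = 0.5.
u = pwl_marginal([0, 50, 100], [0.0, 0.4, 0.5])
print(u(25))  # 0.2, halfway along the first segment
```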

A variant of UTA for learning to sort alternatives in ordered categories is known as UTADIS. The idea was formulated in the initial paper jaquetlsiskos1982 () and further used and developed in doumposzopounidis2002 (); zopounidisdoumpos99 (). Let $C_1, \dots, C_p$ denote the categories. They are numbered in increasing order of preference, i.e., an alternative assigned to $C_h$ is preferred to any alternative assigned to $C_{h'}$ for $h > h'$. It is assumed that the alternatives’ assignment is compatible with the dominance relation, i.e., an alternative which is at least as good as another on all criteria is not assigned to a lower category. The learning set consists of a subset $A^*$ of alternatives of which the assignment to one of the categories is known (or the DM is able to assign these alternatives a priori). The problem is to learn an additive value function $U$ and thresholds $U_1 \le \dots \le U_{p-1}$ such that alternative $a$ is assigned to category $C_h$ if $U_{h-1} \le U(a) < U_h$ for $h = 1$ to $p$ (setting $U_0$ to 0 and $U_p$ to infinity, i.e. a sufficiently large value). A mathematical programming formulation of this problem is easily obtained by substituting the first two lines of (4) by the following sets of constraints:

$$\begin{cases}
U(a) + \sigma^+(a) \ge U_{h-1} & \forall a \in A^*_h,\ h = 2, \dots, p,\\
U(a) - \sigma^-(a) < U_h & \forall a \in A^*_h,\ h = 1, \dots, p-1,
\end{cases} \qquad (5)$$

where $A^*_h$ denotes the alternatives in the learning set that are assigned to category $C_h$. Assuming that the marginals are piecewise linear allows for a linear programming formulation, as is the case with UTA.

## 3 UTA-poly: additive value functions with polynomial marginals

In this section we present a new way to elicit marginal value functions using semidefinite programming. We first give the motivations for this new method. Then we describe it.

### 3.1 Motivation

UTA methods use piecewise linear functions to model the marginal value functions. Opting for such functions makes it possible to use the linear programs presented in the previous section and linear programming solvers to infer an additive value ranking or sorting model. However, by considering piecewise linear marginals with breakpoints at predefined places, the original UTA methods have two important drawbacks: these options limit the interpretability and the flexibility of the additive value model.

Interpretability. There is a longstanding tradition in Economics, especially in the classical theory of consumer behavior (see e.g. SilberbergSuen2001 ()), which assumes that utility (or value) functions are differentiable and interprets their first and second (partial) derivatives in relation to the preferences and behavior of the consumer. Multiple criteria decision analysis, based on value functions, stems from the same tradition. Tradeoffs or marginal rates of substitution are generally thought of as changing smoothly (see e.g. keeneyraiffa1976 (), p. 83: “Throughout we assume that we are in a well-behaved world where all functions have smooth second derivatives”). Although piecewise linear marginals can provide good approximations for the value of any differentiable function, they are not fully satisfactory as an explanatory model. This is especially the case when the breakpoints are fixed arbitrarily (e.g. equally spaced in the criterion domains). Such a choice may well fail to correctly reflect the DM’s feelings about where the marginal rate of substitution starts to grow more quickly (resp. to diminish) or shows an inflexion. In other words, the qualitative behavior of the first and second derivatives of the “true” marginal value function might be poorly approximated by resorting to piecewise linear models, while this behavior might have an intuitive meaning for the DM. Therefore, considering piecewise linear marginals might lead to final models that fail to convince the DM even though they fit the learning set accurately.

Flexibility. Restricting the shape of the marginals to piecewise linear functions with a fixed number of pieces may hamper the expressivity of the additive value function model. This is especially detrimental when large learning sets are available, as is the case in Machine Learning applications (it is seldom so in MCDA applications, where the size of the learning set rarely exceeds a few dozen records).

The following ad hoc case aims to illustrate the loss in flexibility incurred due to the piecewise linear hypothesis. We hereafter illustrate the case of a single piece, i.e. the linear case, whereas the same question arises whatever the fixed number of segments. Consider a ranking problem in which alternatives are assessed on two criteria. The DM states that the top-ranked alternatives are $a$ and $b$, which are tied (rank 1), followed by $c$ (rank 2), while $d$ is strictly less preferred than the others (rank 3). The evaluations and ranks of these alternatives are displayed in Table 1.

Assume that we plan to use a UTA model with marginals involving a single linear piece (i.e. a weighted sum). Such a UTA model cannot at the same time distinguish $c$ and $d$ and express that $a$ and $b$ are tied. The fact that $a$ and $b$ are tied indeed implies that the criteria weights are equal (we can set them to $0.5$ w.l.o.g.). The value on each marginal varies from 0 to 0.5. The worst value (0) corresponds to the worst performance (0) and the best value (0.5) to the best performance (100) on each criterion (see the marginal value functions represented by dashed lines in Figure 1). Using these marginals, the scores of the four alternatives are obtained through linear interpolation and displayed in Table 2. We observe that all alternatives receive the same value 0.5. It is therefore not possible to discriminate alternatives $c$ and $d$ without increasing the number of linear pieces or considering nonlinear marginals. In this case, we shall consider using non-linear marginals.

Figure 1: Example of UTA and UTA-poly value functions. The dashed lines correspond to the UTA piecewise linear functions and the plain lines correspond to polynomials of degree 3.

In case polynomials are allowed for, instead of piecewise linear functions, to model the marginals, the DM’s preferences can be accurately represented. Figure 1 shows the case of polynomials of degree 3 used as marginals (plain line). The scores of the alternatives computed with these marginals are displayed in Table 2. They comply with the DM’s preferences.

Obviously it would have been possible to reproduce the DM’s ranking using marginals with more than one linear piece in a UTA model. However, when the breakpoints are fixed in advance, it is easy to construct an example, similar to the above one, in which the DM’s ranking cannot be reproduced using a linear function between successive breakpoints while a polynomial spline will do.

The two methods introduced below, UTA-poly in the rest of this section and UTA-splines in Section 4, replace the piecewise linear marginals of UTA by polynomials and polynomial splines, respectively.

### 3.2 Basic facts about non-negative polynomials

In the last few years, significant improvements have been made in formulating and solving optimization problems in which constraints are expressed in the form of polynomial (in)equalities and with a polynomial objective function; see, e.g., gloptipoly (); gloptipoly3 (). These new techniques are useful for various applications; see Las09 () and the references therein. A problem arising in many applications, including the present one, is to guarantee the non-negativity of functions of several variables. In our case, we have to make sure not only that the marginals are non-negative but also that they are nondecreasing, i.e. that their derivative is non-negative. Testing the non-negativity of a polynomial of several variables and of a degree equal to or greater than 4 is NP-hard murtykabadi1987 (). In parillo2003 (), an approach based on convex optimization techniques has been proposed in order to find an approximate solution to this problem.

The approach proposed in parillo2003 () is based on the following theorem about non-negative polynomials.

###### Theorem 1 (Hilbert).

A polynomial $F$ is non-negative if it is possible to decompose it as a sum of squares (SOS):

$$F(z) = \sum_s f_s^2(z) \quad \text{with } z \in \mathbb{R}^n. \qquad (6)$$

The condition given above is sufficient but not necessary: there exist non-negative polynomials that cannot be decomposed as a sum of squares blekherman2006 (). However, it has been proved by Hilbert that a non-negative polynomial of one variable is always a sum of squares parillo2003 (). We give the proof here because it is remarkably simple and elegant.

###### Theorem 2 (Hilbert).

A non-negative polynomial in one variable is always a SOS.

###### Proof.

Consider a polynomial $p$ of degree $D$, $p(x) = \sum_{i=0}^{D} p_i x^i$. Since $p$ is non-negative, $D$ must be even. The leading coefficient $p_D$ must be greater than 0, otherwise $\lim_{x \to +\infty} p(x) = -\infty$. As every polynomial of degree $D$ admits $D$ roots, one can write $p$ as follows:

$$p(x) = p_D \prod_{i=1}^{m} (x - z_i)(x - \bar{z}_i) \prod_{j=1}^{n} (x - t_j)^{\alpha_j}$$

in which $z_i$ and $\bar{z}_i$ for $i = 1, \dots, m$ are pairs of conjugate complex numbers and $t_j$ for $j = 1, \dots, n$ are distinct real numbers, where $2m + \sum_{j=1}^{n} \alpha_j = D$. All the values of the exponents $\alpha_j$ are even. Indeed, suppose that some exponents $\alpha_{j_1}, \dots, \alpha_{j_k}$ are odd, the corresponding roots being ordered so that $t_{j_1} < \dots < t_{j_k}$. The sign of $p$ changes exactly at the roots of odd multiplicity, so for $x$ immediately below $t_{j_k}$ (and above any other root) we would have $p(x) < 0$, a contradiction. As all the values $\alpha_j$ are even, we can rewrite $p$ as follows:

$$p(x) = \left( \sqrt{p_D} \prod_{i=1}^{l} (x - z_i) \right) \left( \sqrt{p_D} \prod_{i=1}^{l} (x - \bar{z}_i) \right)$$

in which some pairs $z_i$, $\bar{z}_i$ have no imaginary part (they account for the real roots, each of even multiplicity, so that $l = D/2$). Let $g(x)$ and $h(x)$ denote the real and imaginary parts of $\sqrt{p_D} \prod_{i=1}^{l} (x - z_i)$, i.e. two polynomials with real coefficients such that $\sqrt{p_D} \prod_{i=1}^{l} (x - z_i) = g(x) + i\,h(x)$. Then $\sqrt{p_D} \prod_{i=1}^{l} (x - \bar{z}_i) = g(x) - i\,h(x)$, and the product of these two terms gives a sum of two squares: $p(x) = g^2(x) + h^2(x)$. ∎
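The construction in this proof can be checked numerically. As an example of ours, take $p(x) = (x^2 + 1)(x^2 + 2x + 2)$, whose roots form the conjugate pairs $\pm i$ and $-1 \pm i$; keeping one root per pair yields the two real polynomials $g$ and $h$ with $p = g^2 + h^2$:

```python
import cmath

# p(x) = (x^2 + 1)(x^2 + 2x + 2) is positive on R; its roots form the
# conjugate pairs {i, -i} and {-1+i, -1-i}. Keeping one root per pair gives
# c(x) = sqrt(pD) * (x - i)(x - (-1 + i)); for real x, g = Re(c), h = Im(c)
# and p(x) = g(x)^2 + h(x)^2, as in the proof.
roots = [1j, -1 + 1j]   # one representative per conjugate pair
pD = 1.0

def c(x):
    prod = cmath.sqrt(pD)
    for z in roots:
        prod *= (x - z)
    return prod

def p(x):
    return (x**2 + 1) * (x**2 + 2*x + 2)

for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    g, h = c(x).real, c(x).imag
    assert abs(p(x) - (g**2 + h**2)) < 1e-9  # sum of two squares
```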

Let us consider the problem of determining a non-negative polynomial $p$ of one variable and degree $D$. We use the following canonical form to represent this polynomial:

$$p(x) = p_0 + p_1 x + p_2 x^2 + \dots + p_D x^D = \sum_{i=0}^{D} p_i x^i. \qquad (7)$$

To guarantee the non-negativity of this polynomial, we have to ensure that it can be represented as a sum of squares as in Equation (6). Note that a non-negative polynomial always has an even degree, since a polynomial of odd degree tends to $-\infty$ either at $+\infty$ or at $-\infty$. Let $D = 2d$; as a sum of squares of polynomials $q_s$ of degree at most $d$, the polynomial reads:

$$p(x) = \sum_s q_s^2(x) = \sum_s \left[ \sum_{i=0}^{d} b_{is} x^i \right]^2.$$

Defining $\bar{x} = (1, x, \dots, x^d)^T$ and $b_s = (b_{0s}, b_{1s}, \dots, b_{ds})^T$ (where $T$ stands for the matrix transposition operation), we can express $p$ as follows:

$$p(x) = \sum_s (b_s^T \bar{x})^2 = \sum_s \bar{x}^T b_s b_s^T \bar{x} = \bar{x}^T \left[ \sum_s b_s b_s^T \right] \bar{x} = \bar{x}^T Q \bar{x} = \begin{pmatrix} 1 \\ x \\ \vdots \\ x^d \end{pmatrix}^T \begin{pmatrix} q_{0,0} & q_{0,1} & \cdots & q_{0,d} \\ q_{1,0} & q_{1,1} & \cdots & q_{1,d} \\ \vdots & \vdots & \ddots & \vdots \\ q_{d,0} & q_{d,1} & \cdots & q_{d,d} \end{pmatrix} \begin{pmatrix} 1 \\ x \\ \vdots \\ x^d \end{pmatrix}.$$

Note that the matrix $Q = \sum_s b_s b_s^T$ is symmetric and positive semidefinite (PSD), which we denote $Q \succeq 0$, since $y^T Q y = \sum_s (b_s^T y)^2 \ge 0$ for all $y$. Therefore, to ensure that $p$ is non-negative, it is necessary to find a matrix $Q$ of dimension $(d+1) \times (d+1)$ such that $Q \succeq 0$ and $p(x) = \bar{x}^T Q \bar{x}$. It turns out that this condition is also sufficient. This follows from the following lemma.

###### Lemma 3.

A symmetric matrix $Q$ is positive semidefinite if and only if there exists a matrix $L$ such that $Q = L L^T$.

The above decomposition is called the Cholesky decomposition of matrix $Q$; see Appendix B. To summarize, a polynomial $p$ in one variable is non-negative if and only if there exists $Q \succeq 0$ such that $p(x) = \bar{x}^T Q \bar{x}$.
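As a quick numerical check of this equivalence (with an arbitrary positive definite matrix of our choosing), a pure-Python Cholesky factorization $Q = LL^T$ turns $\bar{x}^T Q \bar{x}$ into an explicit sum of squares:

```python
import math

# Cholesky factorization of a positive definite symmetric matrix: Q = L Lᵀ
# with L lower triangular. Then x̄ᵀQx̄ = ‖Lᵀx̄‖² = Σ_s (Σ_i L[i][s] x̄_i)²,
# i.e. an explicit SOS decomposition of the polynomial p(x) = x̄ᵀQx̄.
def cholesky(Q):
    n = len(Q)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(Q[i][i] - s)
            else:
                L[i][j] = (Q[i][j] - s) / L[j][j]
    return L

Q = [[2.0, 1.0], [1.0, 2.0]]       # an arbitrary positive definite example
L = cholesky(Q)
for x in [-1.5, 0.0, 0.3, 2.0]:
    xbar = [1.0, x]                 # x̄ = (1, x) for d = 1
    quad = sum(Q[i][j] * xbar[i] * xbar[j] for i in range(2) for j in range(2))
    sos = sum(sum(L[i][s] * xbar[i] for i in range(2)) ** 2 for s in range(2))
    assert abs(quad - sos) < 1e-9   # p(x) equals the sum of squares
```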

The coefficients of the polynomial expressed in its canonical form (7) are obtained by summing the entries of the matrix $Q$ along its anti-diagonals, as follows:

$$\begin{cases}
p_0 = q_{0,0},\\
p_1 = q_{1,0} + q_{0,1},\\
p_2 = q_{2,0} + q_{1,1} + q_{0,2},\\
\quad\vdots\\
p_{2d-1} = q_{d,d-1} + q_{d-1,d},\\
p_{2d} = q_{d,d}.
\end{cases}$$

We can express the value of the coefficients of the polynomial as follows:

$$p_i = \begin{cases} \sum_{g=0}^{i} q_{g,i-g} & i = 0, \dots, d,\\ \sum_{g=i-d}^{d} q_{g,i-g} & i = d, \dots, 2d. \end{cases} \qquad (8)$$

The value of $p_d$ can be computed with either expression. Finding a non-negative univariate polynomial thus consists in finding a positive semidefinite matrix $Q$; summing the entries of this matrix along its anti-diagonals gives control over the coefficients of the polynomial.
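The correspondence (8) between $Q$ and the coefficients of $p$ is easy to verify numerically; the rank-one matrix $Q = bb^T$ below is an arbitrary PSD example of ours, for which $p(x) = (1 + 2x + 3x^2)^2$:

```python
# Coefficients of p(x) = x̄ᵀQx̄ (x̄ = (1, x, ..., x^d)) as anti-diagonal sums
# of Q, following Eq. (8), and a direct check against the quadratic form.
def coeffs_from_Q(Q):
    d = len(Q) - 1
    return [sum(Q[g][i - g] for g in range(max(0, i - d), min(i, d) + 1))
            for i in range(2 * d + 1)]

def p_direct(Q, x):
    d = len(Q) - 1
    xbar = [x ** i for i in range(d + 1)]
    return sum(Q[g][h] * xbar[g] * xbar[h]
               for g in range(d + 1) for h in range(d + 1))

b = [1, 2, 3]
Q = [[bi * bj for bj in b] for bi in b]   # Q = b bᵀ is PSD (a single square)
pc = coeffs_from_Q(Q)
print(pc)  # [1, 4, 10, 12, 9], i.e. p(x) = (1 + 2x + 3x^2)^2
for x in [-2.0, 0.0, 0.7, 1.5]:
    assert abs(sum(c * x ** i for i, c in enumerate(pc)) - p_direct(Q, x)) < 1e-9
    assert p_direct(Q, x) >= 0            # PSD Q guarantees non-negativity
```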

In some applications, it is not necessary to ensure the non-negativity of the polynomial on $\mathbb{R}$ but only on an interval $[v_1, v_2]$. If the non-negativity constraint has to be guaranteed only on a given interval $[v_1, v_2]$ for a polynomial $p$, then the following theorem holds.

###### Theorem 4 (Hilbert).

A polynomial $p$ in one variable is non-negative in the interval $[v_1, v_2]$, if and only if it can be written as $p(x) = (x - v_1) \cdot q(x) + (v_2 - x) \cdot r(x)$, where $q$ and $r$ are SOS.

Given the above theorem, if we want to ensure the non-negativity of the polynomial $p$ of degree $D$ on the interval $[v_1, v_2]$, we have to find two matrices $Q$ and $R$ of size $(d+1) \times (d+1)$, with $d = \lfloor D/2 \rfloor$, that are positive semidefinite. We denote these matrices and their entries as follows:

$$Q = \begin{pmatrix} q_{0,0} & q_{0,1} & \cdots & q_{0,d} \\ q_{1,0} & q_{1,1} & \cdots & q_{1,d} \\ \vdots & \vdots & \ddots & \vdots \\ q_{d,0} & q_{d,1} & \cdots & q_{d,d} \end{pmatrix}, \quad R = \begin{pmatrix} r_{0,0} & r_{0,1} & \cdots & r_{0,d} \\ r_{1,0} & r_{1,1} & \cdots & r_{1,d} \\ \vdots & \vdots & \ddots & \vdots \\ r_{d,0} & r_{d,1} & \cdots & r_{d,d} \end{pmatrix}.$$

Since $Q$ and $R$ are positive semidefinite, the products $\bar{x}^T Q \bar{x}$ and $\bar{x}^T R \bar{x}$, with $\bar{x} = (1, x, \dots, x^d)^T$, are always non-negative.

To obtain a polynomial $p$ that is non-negative in the interval $[v_1, v_2]$, its coefficients have to be chosen such that:

$$\begin{cases}
p_0 = v_2 \cdot r_{0,0} - v_1 \cdot q_{0,0},\\
p_1 = q_{0,0} - r_{0,0} + v_2 \cdot (r_{1,0} + r_{0,1}) - v_1 \cdot (q_{1,0} + q_{0,1}),\\
p_2 = (q_{1,0} + q_{0,1}) - (r_{1,0} + r_{0,1}) + v_2 \cdot (r_{2,0} + r_{1,1} + r_{0,2}) - v_1 \cdot (q_{2,0} + q_{1,1} + q_{0,2}),\\
\quad\vdots\\
p_{2d-1} = (q_{d,d-2} + q_{d-1,d-1} + q_{d-2,d}) - (r_{d,d-2} + r_{d-1,d-1} + r_{d-2,d}) + v_2 \cdot (r_{d,d-1} + r_{d-1,d}) - v_1 \cdot (q_{d,d-1} + q_{d-1,d}),\\
p_{2d} = (q_{d,d-1} + q_{d-1,d}) - (r_{d,d-1} + r_{d-1,d}) + v_2 \cdot r_{d,d} - v_1 \cdot q_{d,d},\\
p_{2d+1} = q_{d,d} - r_{d,d}.
\end{cases}$$

If the degree $D$ of the polynomial is even, then the value of $p_{2d+1}$ is equal to 0. The values $p_i$ can be expressed in the following more compact form:

$$p_i = \begin{cases}
v_2 \cdot r_{0,0} - v_1 \cdot q_{0,0} & i = 0,\\
\sum_{g=0}^{i-1} (q_{g,i-1-g} - r_{g,i-1-g}) + \sum_{g=0}^{i} (v_2 \cdot r_{g,i-g} - v_1 \cdot q_{g,i-g}) & i = 1, \dots, d,\\
\sum_{g=i-d-1}^{d} (q_{g,i-1-g} - r_{g,i-1-g}) + \sum_{g=i-d}^{d} (v_2 \cdot r_{g,i-g} - v_1 \cdot q_{g,i-g}) & i = d+1, \dots, 2d,\\
q_{d,d} - r_{d,d} & i = 2d+1.
\end{cases}$$
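The following numerical sketch (with two arbitrary rank-one, hence PSD, matrices of our choosing) illustrates the construction: the polynomial $(x - v_1)\,\bar{x}^T Q \bar{x} + (v_2 - x)\,\bar{x}^T R \bar{x}$ is non-negative on $[v_1, v_2]$, even though it may become negative outside the interval:

```python
# Non-negativity on an interval [v1, v2] via Theorem 4: with Q, R PSD,
# p(x) = (x - v1) * x̄ᵀQx̄ + (v2 - x) * x̄ᵀRx̄ is non-negative on [v1, v2].
v1, v2 = 0.0, 1.0

def quad_form(M, x):
    d = len(M) - 1
    xbar = [x ** i for i in range(d + 1)]
    return sum(M[g][h] * xbar[g] * xbar[h]
               for g in range(d + 1) for h in range(d + 1))

bq, br = [1.0, -2.0], [0.5, 1.0]          # arbitrary choices
Q = [[u * v for v in bq] for u in bq]     # rank-one, hence PSD
R = [[u * v for v in br] for u in br]

def p(x):
    return (x - v1) * quad_form(Q, x) + (v2 - x) * quad_form(R, x)

grid = [v1 + k * (v2 - v1) / 200 for k in range(201)]
assert min(p(x) for x in grid) >= 0       # non-negative on [v1, v2] ...
assert p(-1.0) < 0                        # ... but not on the whole real line
```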

### 3.3 Semidefinite programming applied to UTA methods

In the perspective of building more natural marginal value functions, we use semidefinite programming (SDP) to learn polynomial marginals instead of piecewise linear ones. SDP has become a standard tool in convex optimization, being a generalization of linear programming and second-order cone programming. It makes it possible to optimize linear functions over an affine subspace of the set of positive semidefinite matrices; see, e.g., vdb1996sdp () and the references therein.

There are two variants of the new UTA-poly method. Firstly, we describe the approach that consists in using polynomials that are overall monotone, i.e. monotone on the set of all real numbers. Then we describe the second approach considering polynomials that are monotone only on a given interval.

#### 3.3.1 Enforcing monotonicity of the marginals on the set of real numbers

In the new proposed model, we define the value function on each criterion $j$ as a polynomial of degree $D_j$:

$$u^*_j(a_j) = \sum_{i=0}^{D_j} p_{j,i} \cdot a_j^i. \qquad (9)$$

To be compliant with the requirements of the theory of additive value functions, the polynomials used as marginals should be non-negative and monotone over the criteria domains. To ensure monotonicity, the derivative of the marginal value function has to be non-negative, hence we impose that the derivative of each value function is a sum of squares. The degree of the derivative is therefore even, which implies that $D_j$ is odd. This requirement reads:

$$\frac{d u^*_j}{d a_j}(a_j) = \bar{a}_j^T Q_j \bar{a}_j,$$

with $Q_j$ a PSD matrix of dimension $(d_j+1) \times (d_j+1)$ and $\bar{a}_j$ a vector of size $d_j+1$, with $d_j = (D_j - 1)/2$:

$$Q_j = \begin{pmatrix} q_{j,0,0} & q_{j,0,1} & \cdots & q_{j,0,d_j} \\ q_{j,1,0} & q_{j,1,1} & \cdots & q_{j,1,d_j} \\ \vdots & \vdots & \ddots & \vdots \\ q_{j,d_j,0} & q_{j,d_j,1} & \cdots & q_{j,d_j,d_j} \end{pmatrix}, \quad \bar{a}_j = \begin{pmatrix} 1 \\ a_j \\ \vdots \\ a_j^{d_j} \end{pmatrix}.$$

By using SDP, we impose the matrix $Q_j$ to be positive semidefinite and we set the following constraints on the values $p_{j,i}$, for all $j \in N$:

$$\begin{cases}
p_{j,1} = q_{j,0,0},\\
2 p_{j,2} = q_{j,1,0} + q_{j,0,1},\\
3 p_{j,3} = q_{j,2,0} + q_{j,1,1} + q_{j,0,2},\\
\quad\vdots\\
2 d_j \, p_{j,2d_j} = q_{j,d_j,d_j-1} + q_{j,d_j-1,d_j},\\
(2 d_j + 1) \, p_{j,2d_j+1} = q_{j,d_j,d_j}.
\end{cases}$$

In UTA-poly, the marginal value functions and the monotonicity conditions on the marginals given in Equations (4) and (5) are replaced by the following constraints:

$$\begin{cases}
U(a) = \sum_{j=1}^{n} \sum_{i=0}^{D_j} p_{j,i} \cdot a_j^i & \forall a \in A,\\
Q_j \text{ PSD} & \forall j \in N,\\
(i+1) \, p_{j,i+1} = \sum_{g=0}^{i} q_{j,g,i-g} & i = 0, \dots, d_j,\ \forall j \in N,\\
(i+1) \, p_{j,i+1} = \sum_{g=i-d_j}^{d_j} q_{j,g,i-g} & i = d_j+1, \dots, 2d_j,\ \forall j \in N.
\end{cases} \qquad (10)$$
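As a sketch of the last two constraint sets of (10) for a single criterion (the PSD matrix below is an arbitrary choice of ours, not the output of a solver): deriving the coefficients $p_{j,i}$ from the anti-diagonal sums of $Q_j$ yields a marginal whose derivative equals $\bar{a}_j^T Q_j \bar{a}_j$, and is therefore non-negative, so the marginal is nondecreasing:

```python
# One criterion j with D_j = 3, hence d_j = 1. Pick a PSD matrix Q_j, derive
# the marginal's coefficients from (i+1) p_{j,i+1} = anti-diagonal sums of
# Q_j, and check that u' equals the quadratic form (hence u is monotone).
d = 1
Qj = [[1.0, 0.5], [0.5, 1.0]]  # symmetric, diagonally dominant => PSD

# derivative coefficients c_i = sum of anti-diagonal i of Qj
c = [sum(Qj[g][i - g] for g in range(max(0, i - d), min(i, d) + 1))
     for i in range(2 * d + 1)]
# marginal coefficients: p_0 is free (set to 0), (i+1) p_{i+1} = c_i
p = [0.0] + [ci / (i + 1) for i, ci in enumerate(c)]

def u(x):
    return sum(pi * x ** i for i, pi in enumerate(p))

def u_prime(x):
    return sum((i + 1) * p[i + 1] * x ** i for i in range(len(p) - 1))

for x in [-1.0, 0.0, 0.5, 2.0]:
    abar = [1.0, x]
    quad = sum(Qj[g][h] * abar[g] * abar[h] for g in range(2) for h in range(2))
    assert abs(u_prime(x) - quad) < 1e-9  # derivative is the quadratic form
    assert u_prime(x) >= 0                # hence non-negative: u is monotone
```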

The optimization program composed of the objective given in Equation (3) and the set of constraints given in Equations (4) and (10) can be solved using convex programming, more precisely, semidefinite programming parillo2003 (). We refer to this new mathematical program as UTA-poly. An explicit UTA-poly formulation for a simple problem involving 2 criteria and 3 alternatives is provided in Appendix A for illustrative purposes.

#### 3.3.2 Enforcing monotonicity of the marginals on the criteria domains

Ensuring the monotonicity of each marginal on the domain of each criterion (instead of the whole real line) is sufficient to satisfy the requirements of the additive value function model. To do so, we use Theorem 4 and only impose the non-negativity of the marginal derivative on the domain $[v_{1,j}, v_{2,j}]$ of each criterion. This results in the following condition on the derivative of the polynomial $u^*_j$, for all $j \in N$:

$$\frac{d u^*_j}{d a_j}(a_j) = (a_j - v_{1,j}) \, \bar{a}_j^T Q_j \bar{a}_j + (v_{2,j} - a_j) \, \bar{a}_j^T R_j \bar{a}_j.$$

In the above equation, $Q_j$ and $R_j$ are two PSD matrices of size $(d_j+1) \times (d_j+1)$ and $\bar{a}_j$ a vector of size $d_j+1$, where $d_j = \lfloor (D_j - 1)/2 \rfloor$:

$$Q_j = \begin{pmatrix} q_{j,0,0} & q_{j,0,1} & \cdots & q_{j,0,d_j} \\ q_{j,1,0} & q_{j,1,1} & \cdots & q_{j,1,d_j} \\ \vdots & \vdots & \ddots & \vdots \\ q_{j,d_j,0} & q_{j,d_j,1} & \cdots & q_{j,d_j,d_j} \end{pmatrix}, \quad R_j = \begin{pmatrix} r_{j,0,0} & r_{j,0,1} & \cdots & r_{j,0,d_j} \\ r_{j,1,0} & r_{j,1,1} & \cdots & r_{j,1,d_j} \\ \vdots & \vdots & \ddots & \vdots \\ r_{j,d_j,0} & r_{j,d_j,1} & \cdots & r_{j,d_j,d_j} \end{pmatrix}.$$

The values $p_{j,i}$, for all $j \in N$, are obtained as follows:

$$\begin{cases}
p_{j,1} = v_{2,j} \cdot r_{j,0,0} - v_{1,j} \cdot q_{j,0,0},\\
2 p_{j,2} = q_{j,0,0} - r_{j,0,0} + v_{2,j} \cdot (r_{j,1,0} + r_{j,0,1}) - v_{1,j} \cdot (q_{j,1,0} + q_{j,0,1}),\\
3 p_{j,3} = (q_{j,1,0} + q_{j,0,1}) - (r_{j,1,0} + r_{j,0,1}) + v_{2,j} \cdot (r_{j,2,0} + r_{j,1,1} + r_{j,0,2}) - v_{1,j} \cdot (q_{j,2,0} + q_{j,1,1} + q_{j,0,2}),\\
\quad\vdots\\
2 d_j \, p_{j,2d_j} = (q_{j,d_j,d_j-2} + q_{j,d_j-1,d_j-1} + q_{j,d_j-2,d_j}) - (r_{j,d_j,d_j-2} + r_{j,d_j-1,d_j-1} + r_{j,d_j-2,d_j}) + v_{2,j} \cdot (r_{j,d_j,d_j-1} + r_{j,d_j-1,d_j}) - v_{1,j} \cdot (q_{j,d_j,d_j-1} + q_{j,d_j-1,d_j}),\\
(2 d_j + 1) \, p_{j,2d_j+1} = (q_{j,d_j,d_j-1} + q_{j,d_j-1,d_j}) - (r_{j,d_j,d_j-1} + r_{j,d_j-1,d_j}) + v_{2,j} \cdot r_{j,d_j,d_j} - v_{1,j} \cdot q_{j,d_j,d_j},\\
(2 d_j + 2) \, p_{j,2d_j+2} = q_{j,d_j,d_j} - r_{j,d_j,d_j}.
\end{cases}$$

If the degree $D_j$ is odd, then we have $p_{j,2d_j+2} = 0$ since $D_j = 2 d_j + 1$.

In convex programming, in order to have polynomial marginals that are monotone on an interval, the monotonicity constraints in UTA have to be replaced by the following ones:

$$\begin{cases}
U(a) = \sum_{j=1}^{n} \sum_{i=0}^{D_j} p_{j,i} \cdot a_j^i & \forall a \in A,\\
Q_j, R_j \text{ PSD} & \forall j \in N,\\
p_{j,1} = v_{2,j} \cdot r_{j,0,0} - v_{1,j} \cdot q_{j,0,0} & \forall j \in N,\\
(i+1) \, p_{j,i+1} = \sum_{g=0}^{i-1} (q_{j,g,i-1-g} - r_{j,g,i-1-g}) + \sum_{g=0}^{i} (v_{2,j} \cdot r_{j,g,i-g} - v_{1,j} \cdot q_{j,g,i-g}) & i = 1, \dots, d_j,\ \forall j \in N,\\
(i+1) \, p_{j,i+1} = \sum_{g=i-d_j-1}^{d_j} (q_{j,g,i-1-g} - r_{j,g,i-1-g}) + \sum_{g=i-d_j}^{d_j} (v_{2,j} \cdot r_{j,g,i-g} - v_{1,j} \cdot q_{j,g,i-g}) & i = d_j+1, \dots, 2d_j,\ \forall j \in N,\\
(2 d_j + 2) \, p_{j,2d_j+2} = q_{j,d_j,d_j} - r_{j,d_j,d_j} & \forall j \in N.
\end{cases} \qquad (11)$$

The optimization program composed of the objective given in Equation (3) and the sets of constraints given in Equations (4) and (11) can be solved using semidefinite programming.

## 4 UTA-splines: additive value functions with splines marginals

In this section we describe a variant of UTA-poly which consists in using several polynomials for each value function. We first recall some theory about splines. Then we describe the new method called UTA-splines.

### 4.1 Splines

We recall here the definition of a spline. We detail the ones that are the most commonly used.

#### 4.1.1 Definition

A spline of degree $k$ is a function $S$ that interpolates the set of points $(x_i, y_i)$ for $i = 0, \ldots, q$, with $x_0 < x_1 < \cdots < x_q$, such that:

• $S(x_i) = y_i$ for $i = 0, \ldots, q$;

• $S$ coincides on each interval $[x_i, x_{i+1}]$ with a polynomial of degree equal to or smaller than $k$ (at least one of the polynomials has a degree equal to $k$);

• the derivatives of $S$ are continuous up to a given order on $[x_0, x_q]$.

The degree of a spline corresponds to its highest polynomial degree. If all the polynomials have the same degree, the spline is said to be uniform.

The continuity of the spline at the connection points is ensured up to a given derivative. Usually, the continuity of the spline is guaranteed up to the second derivative (i.e. the spline is of class $C^2$). This ensures the continuity of the slope and of the concavity at the connection points.

#### 4.1.2 Cubic splines

The most common uniform splines are the ones of degree 3 ($k = 3$), also called cubic splines. A cubic spline consists of a set of third degree polynomials which are continuous up to the second derivative at their connection points.

We denote by $s_i$ the polynomial of the spline going from connection point $(x_i, y_i)$ to connection point $(x_{i+1}, y_{i+1})$. Formally, each polynomial of the spline has the following form:

$$s_i(x) = s_{i,0} + s_{i,1}\, x + s_{i,2}\, x^2 + s_{i,3}\, x^3.$$

The use of cubic splines requires the determination of four parameters per polynomial: $s_{i,0}$, $s_{i,1}$, $s_{i,2}$ and $s_{i,3}$. If the spline interpolates $q + 1$ points, there are overall $4q$ parameters to determine.

Imposing the equality up to the second derivative at the connection points amounts to enforcing the following constraints:

$$\left\{\begin{array}{ll}
s_i(x_i) = y_i & i = 0, \ldots, q-1,\\
s_i(x_{i+1}) = y_{i+1} & i = 0, \ldots, q-1,\\
s'_i(x_{i+1}) = s'_{i+1}(x_{i+1}) & i = 0, \ldots, q-2,\\
s''_i(x_{i+1}) = s''_{i+1}(x_{i+1}) & i = 0, \ldots, q-2.
\end{array}\right. \tag{12}$$

Since there are $4q - 2$ constraints and $4q$ parameters, two degrees of freedom remain. They can be set in different ways. For instance, one can impose $s''_0(x_0) = 0$ and $s''_{q-1}(x_q) = 0$. This corresponds to imposing zero curvature at both endpoints of the spline.
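As an illustration of how the constraints (12), together with the two zero-curvature conditions, pin down a cubic spline, the following Python sketch (our own example, with hand-picked interpolation points) assembles them into a $4q \times 4q$ linear system and solves it by Gaussian elimination:

```python
# Sketch: fit a natural cubic spline through three points by writing the
# interpolation/continuity constraints plus the two zero-curvature
# conditions as a dense linear system. Points are illustrative choices.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]   # (x_i, y_i)
q = len(pts) - 1                              # number of polynomials s_i

def row(i, coeffs):
    """Row of the 4q x 4q system: `coeffs` applies to piece i's 4 unknowns."""
    r = [0.0] * (4 * q)
    r[4 * i:4 * i + 4] = coeffs
    return r

A, b = [], []
for i in range(q):                            # interpolation constraints
    x0, y0 = pts[i]
    x1, y1 = pts[i + 1]
    A.append(row(i, [1, x0, x0**2, x0**3])); b.append(y0)  # s_i(x_i)     = y_i
    A.append(row(i, [1, x1, x1**2, x1**3])); b.append(y1)  # s_i(x_{i+1}) = y_{i+1}
for i in range(q - 1):                        # continuity at connection points
    x1 = pts[i + 1][0]
    r1 = row(i, [0, 1, 2 * x1, 3 * x1**2])                 # s_i'  = s_{i+1}'
    r2 = row(i + 1, [0, 1, 2 * x1, 3 * x1**2])
    A.append([u - v for u, v in zip(r1, r2)]); b.append(0.0)
    r3 = row(i, [0, 0, 2, 6 * x1])                         # s_i'' = s_{i+1}''
    r4 = row(i + 1, [0, 0, 2, 6 * x1])
    A.append([u - v for u, v in zip(r3, r4)]); b.append(0.0)
A.append(row(0, [0, 0, 2, 6 * pts[0][0]])); b.append(0.0)      # s_0''(x_0) = 0
A.append(row(q - 1, [0, 0, 2, 6 * pts[q][0]])); b.append(0.0)  # s_{q-1}''(x_q) = 0

def solve(A, b):
    """Plain Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [Ai[:] + [bi] for Ai, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

s = solve(A, b)
s0, s1 = s[0:4], s[4:8]
# For this symmetric data set, the first piece is 1.5*x - 0.5*x^3.
ok = (abs(s0[0]) < 1e-9 and abs(s0[1] - 1.5) < 1e-9
      and abs(s0[2]) < 1e-9 and abs(s0[3] + 0.5) < 1e-9)
print(ok)
```

The two extra rows at the end are exactly the zero-curvature conditions discussed above; replacing them with other choices (e.g. prescribed end slopes) would fix the two remaining degrees of freedom differently.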

### 4.2 UTA-splines: using splines as marginals

We give some detail on how to use splines to model the marginal value functions of an additive value function model. We formulate a semidefinite program that learns the parameters of such a model.

#### 4.2.1 Overview

Using splines continuous up to either the first or the second derivative instead of piecewise linear functions for the marginal value functions aims at obtaining more natural functions around the breakpoints.

With UTA-poly, the flexibility of the model is improved by using polynomials of higher degrees. In order to further improve the flexibility of the model, we now propose to hybridize the original UTA method, which splits the criterion domain into equal parts, with the UTA-poly approach, which uses polynomials to model the marginal value functions. We call this new disaggregation procedure UTA-splines. The UTA-splines method combines the use of piecewise functions for the marginals (as in UTA) with the use of polynomials (as in UTA-poly) for each piece of the function.

Compared to UTA, in UTA-splines the continuity of the marginals can be ensured up to any derivative at the connection points. This makes it possible to obtain more natural marginals with a continuous curvature.

Constraints concerning the concavity/convexity of the marginal value functions on some sub-intervals can also be specified, if the information is available or if the decision maker is able to specify such constraints. This makes it possible to “control” the shape of the obtained model and improve its interpretability by the decision maker.

#### 4.2.2 Description of UTA-splines

In UTA-splines, we model marginals as uniform splines of degree $D_s$. Formally the marginal of criterion $j$ reads:

$$u^*_j(a_j) = Sp^{D_s, k_j}(a_j)$$

where $Sp^{D_s, k_j}$ denotes a uniform spline of degree $D_s$ composed of $k_j$ pieces. Each piece of the spline is a polynomial of degree $D_s$ denoted by $s_{j,l}$, $l = 1, \ldots, k_j$. Formally it reads: