

TSINGHUA SCIENCE AND TECHNOLOGY
ISSN 1007-0214 0?/?? pp ???-???
Volume 18, Number 3, June 2013

Ranking with Adaptive Neighbors

Muge Li, Liangyue Li, and Feiping Nie

Abstract: Retrieving the most similar objects in a large-scale database for a given query is a fundamental building block in many application domains, ranging from web search to visual, cross-media, and document retrieval. State-of-the-art approaches have mainly focused on capturing the underlying geometry of the data manifolds. Graph-based approaches, in particular, define various diffusion processes on weighted data graphs. Despite their success, these approaches rely on fixed-weight graphs, making the ranking sensitive to the input affinity matrix. In this study, we propose a new ranking algorithm that simultaneously learns the data affinity matrix and the ranking scores. The proposed optimization formulation assigns adaptive neighbors to each point in the data based on the local connectivity, and the smoothness constraint assigns similar ranking scores to similar data points. We develop a novel and efficient algorithm to solve the optimization problem. Evaluations using synthetic and real datasets suggest that the proposed algorithm can outperform existing methods.
Key words: ranking; adaptive neighbors; manifold structure

Muge Li is with Cixi Hanvos Yucai High School, Ningbo, China, 315300. E-mail: 1606024250@qq.com.
Liangyue Li is with the School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, US, 85281. E-mail: liangyue@asu.edu.
Feiping Nie is with the School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi’an, China, 710072. E-mail: feipingnie@gmail.com.
To whom correspondence should be addressed.
Manuscript received: year-month-day; revised: year-month-day; accepted: year-month-day

1 Introduction

Retrieving the most similar objects in a large-scale database for a given query is a fundamental building block in many application domains, ranging from web search [page1999pagerank] and visual retrieval [he2004manifold, tong2006manifold, bai2017regularized, donoser2013diffusion, iscenefficient] to cross-media retrieval [yang2009ranking] and document retrieval [Cao:2006:ARS:1148170.1148205]. The most straightforward approach to such retrieval tasks is to compute the pairwise similarities between objects in the Euclidean space and use them as the ranking scores, as sketched below. Nonetheless, high-dimensional data often lie on a nonlinear manifold [roweis2000nonlinear, tenenbaum2000global]. A Euclidean-distance-based approach largely ignores this intrinsic manifold structure and can degrade the retrieval performance.
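For concreteness, a minimal sketch of this Euclidean baseline follows; the function name and interface are ours, not part of any cited method.

```python
import numpy as np

def euclidean_ranking(X: np.ndarray, query_idx: int) -> np.ndarray:
    """Rank all points by Euclidean distance to the query.

    X : (n, d) data matrix; query_idx : row index of the query.
    Returns the indices of the other points, most similar first.
    """
    dists = np.linalg.norm(X - X[query_idx], axis=1)  # distance of every point to the query
    order = np.argsort(dists)                         # ascending: nearest first
    return order[order != query_idx]                  # drop the query itself
```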

State-of-the-art methods mainly focus on capturing the underlying geometry of the data manifold. The most common way is to first represent the data manifold using a weighted graph, wherein each vertex is a data object and the edge weights are proportional to the pairwise similarities. All the vertices then repeatedly spread their affinities to their neighborhoods via the weighted graph until a global stable state is reached. The various diffusion processes mainly differ in the transition matrix and the affinity update scheme [donoser2013diffusion]. Among others, the random-walk transition matrix is widely used in PageRank [page1999pagerank], random walk with restart [tong2006fast], self-diffusion [wang2012affinity], label propagation [zhu2003semi], and graph transduction [bai2010learning]. The random-walk transition matrix is a row-stochastic matrix in which each transition probability is proportional to the corresponding edge weight. A slight variant is the symmetric normalized transition matrix used in the Ranking on Data Manifold method [Zhou:2003:RDM:2981345.2981367]; both normalizations are sketched below. To reduce the effect of noisy nodes, random walks can be restricted to the nearest neighbors by sparsifying the original weighted graph [Szummer:2001:PLC:2980539.2980661, 5206844]. For iterative update of the affinities, the random walk with restart allows the random surfer to jump to an arbitrary node. A modified diffusion process on the standard graph captures high-order relations [5206844] and is equivalent to a diffusion process on the Kronecker product graph [yang2013affinity]. Despite their success, graph-based ranking methods rely on fixed-weight graphs, making the ranking results sensitive to the input affinity matrix.
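For illustration, the two normalizations mentioned above can be sketched as follows, assuming a precomputed nonnegative affinity matrix W with nonzero row sums (our notation, not taken from the cited papers):

```python
import numpy as np

def transition_matrices(W: np.ndarray):
    """Build the two common normalizations of an affinity matrix W."""
    d = W.sum(axis=1)                    # node degrees d_i = sum_j w_ij
    P = W / d[:, None]                   # row-stochastic: p_ij = w_ij / d_i
    S = W / np.sqrt(np.outer(d, d))      # symmetric normalized: D^{-1/2} W D^{-1/2}
    return P, S
```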

In this study, we propose the Ranking with Adaptive Neighbors (RAN) algorithm, which simultaneously learns the data affinity matrix and the ranking scores. The proposed optimization pursues two objectives. First, data points separated by smaller distances in the Euclidean space are more likely to be neighbors, i.e., more similar. In contrast to other graph-based ranking methods, the similarity is not computed a priori but is learned jointly with the ranking scores; consequently, the neighbors of each datum are adaptively assigned. Second, similar data points should have similar ranking scores. This is essentially the smoothness constraint in graph transduction methods [wang2008graph]. We develop a novel and efficient algorithm to solve the optimization problem. Evaluations using synthetic and real datasets suggest that the proposed ranking algorithm outperforms existing methods.

In Section 2, we present the proposed RAN algorithm. Next, in Section 3, we discuss the empirical evaluation results, and in Section 4 we summarize the conclusions.

Notations: Throughout the paper, matrices are written as upper-case letters. For a matrix $S$, the $i$-th row and the $(i,j)$-th element of $S$ are denoted by $s_i$ and $s_{ij}$, respectively. An identity matrix is denoted by $I$, and $\mathbf{1}$ denotes the column vector with all elements equal to one. For a vector $s$ and a matrix $S$, $s \ge 0$ and $S \ge 0$ mean that all the elements of $s$ and $S$ are nonnegative.

2 Ranking with Adaptive Neighbors

In this section, we present the RAN formulation and then the optimization approach for solving the objective function.

2.1 Proposed Formulation

Given a set of $n$ data points $\{x_1, x_2, \dots, x_n\}$ with a query indicator vector $y \in \{0, 1\}^n$, where $y_i = 1$ if $x_i$ is a query and $y_i = 0$ otherwise, the task is to find a function that assigns each data point a ranking score $f_i$ according to its relevance to the queries. We explore the local connectivity of each point for ranking purposes and, in particular, consider the $k$-nearest points as the neighbors of a specific node.

Data points separated by small distances in the Euclidean space have a high chance of being neighbors. We denote by $s_{ij}$ the probability that the $i$-th data point $x_i$ and the $j$-th data point $x_j$ are neighbors. Intuitively, if the two data points are separated by a small distance, i.e., $d_{ij}^x = \|x_i - x_j\|_2^2$ is small, then their probability of being connected should be high. One way to find such probabilities is to solve the following optimization problem:

$$\min_{s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{j=1}^{n} d_{ij}^x\, s_{ij} \qquad (1)$$

where $s_i \in \mathbb{R}^{n}$ is a vector with the $j$-th element $s_{ij}$. Nonetheless, the above optimization problem has a trivial solution: $s_{ij} = 1$ for the data point nearest to $x_i$, and $s_{ij} = 0$ otherwise. This can be addressed by adding an $\ell_2$-norm regularization on $s_i$ to drag $s_i$ closer to the center of mass of the simplex defined by $s_i^T \mathbf{1} = 1$, $s_i \ge 0$. This slight modification gives the following optimization problem:

$$\min_{s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 \right) \qquad (2)$$

where the second term is the regularization term and $\gamma$ is the regularization parameter.

For each data point $x_i$, we compute its probabilities of connecting to the other data points using Eq. (2). As a result, we assign the neighbors of all the data points by solving the following problem:

$$\min_{\forall i,\; s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{i=1}^{n} \sum_{j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 \right) \qquad (3)$$

Similar data points should have similar ranking scores; this is essentially a smoothness constraint over the data graph. Let $S \in \mathbb{R}^{n \times n}$ be the similarity matrix obtained from the neighbor assignment, whose $i$-th row is $s_i$. We write the smoothness constraint as

$$\frac{1}{2} \sum_{i,j=1}^{n} (f_i - f_j)^2\, s_{ij} = f^T L_S\, f \qquad (4)$$

where $f \in \mathbb{R}^{n}$ is the vector of ranking scores for all the data points, $L_S = D_S - \frac{S^T + S}{2}$ is the Laplacian matrix of the affinity matrix, and the degree matrix $D_S$ is a diagonal matrix with the $i$-th diagonal element defined as $\sum_j \frac{s_{ij} + s_{ji}}{2}$.
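As a sanity check on the notation, here is a small NumPy sketch of Eq. (4), assuming a learned similarity matrix S whose i-th row is s_i (helper names are ours):

```python
import numpy as np

def laplacian(S: np.ndarray) -> np.ndarray:
    """L_S = D_S - (S + S^T)/2, where the degree matrix D_S has
    i-th diagonal element sum_j (s_ij + s_ji)/2."""
    A = (S + S.T) / 2.0                # symmetrized affinity matrix
    return np.diag(A.sum(axis=1)) - A  # D_S - A

def smoothness(f: np.ndarray, S: np.ndarray) -> float:
    """The quadratic form f^T L_S f of Eq. (4); it is small exactly
    when similar points (large s_ij) receive similar scores."""
    return float(f @ laplacian(S) @ f)
```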

Combining the above with the query information, we derive the final objective function:

$$\min_{f,\; \forall i,\; s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{i,j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 \right) + \lambda f^T L_S\, f + (f - y)^T U (f - y) \qquad (5)$$

where $U$ is a diagonal matrix with $u_{ii} = u_\infty$ (a large constant) if $x_i$ is a query and $u_{ii} = 1$ otherwise. The last term is equivalent to $\sum_{i} u_{ii} (f_i - y_i)^2$, which makes the ranking results consistent with the queries. The queries receive much larger weights because they reflect the user's search intentions; for the non-queried examples, we do not know a priori whether they meet the user's intentions, so we give them lower weights. Eq. (5) is not easy to solve because both the smoothness term and the optimal scores depend on the similarity matrix $S$, which is itself a variable. In the next subsection, we propose a novel and efficient algorithm to solve this problem.

2.2 Optimization Solutions

We solve Eq. (5) via an alternating optimization approach. We first fix $S$; the problem then transforms to:

$$\min_{f}\; \lambda f^T L_S\, f + (f - y)^T U (f - y) \qquad (6)$$

Taking the derivative of the above objective function w.r.t. $f$ and setting it to zero gives the following linear equation:

$$(\lambda L_S + U)\, f = U y \qquad (7)$$

The solution is easily obtained as $f = (\lambda L_S + U)^{-1} U y$.
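In code, the f-update of Eq. (7) is a single linear solve. A minimal sketch follows; the constant u_inf standing in for the large diagonal weight of U is our choice:

```python
import numpy as np

def update_f(L: np.ndarray, y: np.ndarray, lam: float,
             u_inf: float = 1e8) -> np.ndarray:
    """Solve (lam * L + U) f = U y, with u_ii = u_inf for queries
    (y_i = 1) and u_ii = 1 otherwise."""
    u = np.where(y > 0, u_inf, 1.0)   # diagonal of U
    A = lam * L + np.diag(u)
    return np.linalg.solve(A, u * y)  # linear solve instead of an explicit inverse
```

Using a linear solve rather than forming the inverse explicitly is both faster and numerically safer.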

When $f$ is fixed, Eq. (5) transforms to:

$$\min_{S}\; \sum_{i,j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 \right) + \lambda f^T L_S\, f \qquad (8)$$

$$\text{s.t.}\;\; \forall i,\; s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1 \qquad (9)$$

Based on Eq. (4), this can be written as

$$\min_{\forall i,\; s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{i,j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 + \frac{\lambda}{2} (f_i - f_j)^2\, s_{ij} \right) \qquad (10)$$

Because the summations are independent of each other given $f$, we can solve the following sub-problem individually for each $i$:

$$\min_{s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \sum_{j=1}^{n} \left( d_{ij}^x\, s_{ij} + \gamma s_{ij}^2 + \frac{\lambda}{2} (f_i - f_j)^2\, s_{ij} \right) \qquad (11)$$

Recall $d_{ij}^x = \|x_i - x_j\|_2^2$, denote $d_{ij}^f = (f_i - f_j)^2$, and let $d_i \in \mathbb{R}^{n}$ be the vector with the $j$-th element $d_{ij} = d_{ij}^x + \frac{\lambda}{2} d_{ij}^f$. Then Eq. (11) is reformulated as:

$$\min_{s_i^T \mathbf{1} = 1,\; 0 \le s_{ij} \le 1}\; \left\| s_i + \frac{d_i}{2\gamma} \right\|_2^2 \qquad (12)$$

Next, we show how to solve this problem in closed form using the method of Lagrange multipliers. The Lagrangian function of problem (12) is

$$\mathcal{L}(s_i, \eta, \beta_i) = \frac{1}{2} \left\| s_i + \frac{d_i}{2\gamma} \right\|_2^2 - \eta \left( s_i^T \mathbf{1} - 1 \right) - \beta_i^T s_i \qquad (13)$$

where $\eta$ and $\beta_i \ge 0$ are the Lagrangian multipliers.

According to the KKT conditions, the optimal solution is

$$s_{ij} = \left( -\frac{d_{ij}}{2\gamma} + \eta \right)_+ \qquad (14)$$

where $(v)_+$ is shorthand for $\max(v, 0)$.

It is often desirable to focus on the locality of each point, as this can reduce the effect of noisy data and boost performance in practice [Nie:2014:CPC:2623330.2623726]. In this study, we learn a sparse vector $s_i$ and allow $x_i$ to connect only to its $k$-nearest neighbors. Such sparsification of $S$ also reduces the computational cost.

Suppose $d_{i1}, d_{i2}, \dots, d_{in}$ are sorted in ascending order, i.e., $d_{i1} \le d_{i2} \le \dots \le d_{in}$. To learn a sparse $s_i$ with exactly $k$ nonzero elements, from Eq. (14) we need $s_{ik} > 0$ and $s_{i,k+1} = 0$. Therefore, with a point-specific regularizer $\gamma_i$,

$$-\frac{d_{ik}}{2\gamma_i} + \eta > 0, \qquad -\frac{d_{i,k+1}}{2\gamma_i} + \eta \le 0 \qquad (15)$$

Considering the constraint $s_i^T \mathbf{1} = 1$, we obtain

$$\sum_{j=1}^{k} \left( -\frac{d_{ij}}{2\gamma_i} + \eta \right) = 1 \;\;\Longrightarrow\;\; \eta = \frac{1}{k} + \frac{1}{2k\gamma_i} \sum_{j=1}^{k} d_{ij} \qquad (16)$$

Substituting Eq. (16) into Eq. (15), we obtain the following inequality for $\gamma_i$:

$$\frac{k}{2} d_{ik} - \frac{1}{2} \sum_{j=1}^{k} d_{ij} \;<\; \gamma_i \;\le\; \frac{k}{2} d_{i,k+1} - \frac{1}{2} \sum_{j=1}^{k} d_{ij} \qquad (17)$$

Therefore, for the objective function in Eq. (12) to have an optimal solution $s_i$ with exactly $k$ nonzero elements, we set $\gamma_i$ to

$$\gamma_i = \frac{k}{2} d_{i,k+1} - \frac{1}{2} \sum_{j=1}^{k} d_{ij} \qquad (18)$$

The overall $\gamma$ is set to the mean of all the $\gamma_i$:

$$\gamma = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{k}{2} d_{i,k+1} - \frac{1}{2} \sum_{j=1}^{k} d_{ij} \right) \qquad (19)$$
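Combining Eqs. (14), (16), and (18) gives the closed-form row update $s_{ij} = (d_{i,k+1} - d_{ij}) / (k\,d_{i,k+1} - \sum_{j'=1}^{k} d_{ij'})$ for the $k$ smallest $d_{ij}$, and $s_{ij} = 0$ otherwise. A minimal sketch of this update, assuming the combined distances $d_{ij}$ of Eq. (12) are precomputed:

```python
import numpy as np

def update_s_row(d: np.ndarray, k: int) -> np.ndarray:
    """Closed-form solution of Eq. (12) with exactly k nonzero entries.

    d : combined distances d_ij = ||x_i - x_j||^2 + (lambda/2) (f_i - f_j)^2
        for a fixed i, with d[i] set to np.inf so a point is not its own neighbor.
    """
    idx = np.argsort(d)                       # ascending order d_{i1} <= ... <= d_{in}
    ds = d[idx]
    denom = k * ds[k] - ds[:k].sum()          # equals 2 * gamma_i from Eq. (18)
    s = np.zeros_like(d)
    s[idx[:k]] = (ds[k] - ds[:k]) / max(denom, 1e-12)  # guard against ties
    return s                                  # entries are nonnegative and sum to one
```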

The algorithm for solving the optimization problem in Eq. (5) is summarized in Algorithm 1.

Input: (1) Data matrix $X \in \mathbb{R}^{n \times d}$, (2) query indicator vector $y$, (3) parameters $\lambda$, $k$.
Output: The ranking scores $f$.
1:  Initialize $S$ by solving Eq. (3) and compute $L_S$ accordingly;
2:  while not converged do
3:     Define the diagonal matrix $U$ as: $u_{ii} = u_\infty$ if $y_i = 1$ and $u_{ii} = 1$ otherwise;
4:     Update $f$ by solving Eq. (7) as $f = (\lambda L_S + U)^{-1} U y$;
5:     for $i = 1$ to $n$ do
6:        Update the $i$-th row of $S$ by solving Eq. (12);
7:     end for
8:  end while
Algorithm 1: Algorithm to solve the problem in Eq. (5)
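Putting the pieces together, here is a minimal end-to-end sketch of Algorithm 1, reusing laplacian, update_f, and update_s_row from the sketches above; the fixed iteration count stands in for the convergence test:

```python
import numpy as np

def ran(X: np.ndarray, y: np.ndarray, lam: float = 1.0,
        k: int = 10, n_iter: int = 30) -> np.ndarray:
    """Alternating optimization of Eq. (5); returns the ranking scores f."""
    n = X.shape[0]
    dx = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=2))  # ||x_i - x_j||^2
    np.fill_diagonal(dx, np.inf)             # a point is not its own neighbor
    S = np.vstack([update_s_row(dx[i], k) for i in range(n)])  # init S from Eq. (3)
    for _ in range(n_iter):
        f = update_f(laplacian(S), y, lam)                     # f-step, Eq. (7)
        df = np.square(f[:, None] - f[None, :])                # (f_i - f_j)^2
        d = dx + 0.5 * lam * df                                # combined distance of Eq. (12)
        S = np.vstack([update_s_row(d[i], k) for i in range(n)])  # S-step, Eq. (12)
    return f
```

As a usage example, f = ran(X, y) with y the query indicator ranks all points against the query; higher scores mean more relevant.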

3 Experiments

In this section, we show the performance of the proposed ranking algorithm RAN (Algorithm 1) on synthetic and real-world datasets.

3.1 Synthetic datasets

We generate two synthetic datasets with two-moons (Fig. 1) and three-rings (Fig. 2) patterns. A query, marked with a red cross, is placed in the upper moon and in the innermost ring, respectively. The task is to rank the remaining data points according to their relevance to the query. We represent the ranking scores returned by RAN by the diameter of the data points, so that larger points are more relevant. From Fig. 1, we observe that the ranking scores gradually decrease along the upper moon; the same decreasing trend is observed in the lower moon. In addition, the ranking scores in the upper moon are generally much higher than those in the lower moon. This ranking outcome matches intuition. We make similar observations for the three rings in Fig. 2: the data points in the innermost ring are more relevant than those in the middle ring, which are in turn more relevant than those in the outermost ring. These results show that the proposed RAN captures the underlying manifold well.

Fig. 1: Ranking example on the two-moons dataset.
Fig. 2: Ranking example on the three-rings dataset.

3.2 Real datasets

We compare the retrieval performance on three real image datasets: Yale [georghiades2001few], ORL [samaria1994parameterisation], and USPS [hull1994database]. For each dataset, the bandwidth used to construct the weighted graph for the graph-based baselines and the parameters $\lambda$ and $k$ for RAN are fixed in advance.

YALE: The Yale dataset contains face images of subjects under different poses and illumination conditions. We extract 11 images under different conditions for each of 15 subjects. Each image is down-sampled and normalized to zero mean and unit variance.

ORL: The ORL dataset contains 400 images, ten different images for each of 40 subjects.

USPS: This dataset collects images of handwritten digits (0-9) from envelopes of the U.S. Postal Service. We extract 40 images for each digit and normalize them to 16 × 16 pixels in grayscale.

On each dataset, we use every image in turn as the query and measure the retrieval accuracy by ranking all the other images. We compare the proposed RAN algorithm with the Euclidean-distance baseline and several diffusion methods, including Self-Diffusion (SD) [wang2012affinity], Personalized PageRank (PPR) [haveliwala2002topic], Manifold Ranking [Zhou:2003:RDM:2981345.2981367], and Graph Transduction (GT) [bai2010learning]. The results are shown in Tables 1, 2, and 3. From the results, we can see that the proposed RAN algorithm consistently outperforms all the other methods. The straightforward Euclidean-distance baseline is the worst because it ignores the manifold structure of the data. The various diffusion-based methods capture the manifold information to a certain extent, but they assume the weighted data graph is fixed; we instead adaptively learn a localized weighted graph optimized for the ranking. To study how the locality of the graph, i.e., the number of neighbors $k$, affects the retrieval performance, we vary the number of neighbors on the USPS dataset (Fig. 3). As can be seen, it is important to select a reasonable value of $k$ for the retrieval; for USPS, the best performance is achieved at an intermediate value of $k$.

Methods Precision@10 Recall@10
Euclidean Distance 66.61 60.55
SD [wang2012affinity] 69.03 62.75
PPR [haveliwala2002topic] 69.03 62.75
Manifold Ranking [Zhou:2003:RDM:2981345.2981367] 68.85 62.59
GT [bai2010learning] 68.91 62.65
RAN (ours) 72.00 65.45
Table 1: Retrieval performance (%) for YALE.
Methods Precision@15 Recall@15
Euclidean Distance 41.56 62.35
SD [wang2012affinity] 46.87 70.30
PPR [haveliwala2002topic] 47.15 70.73
Manifold Ranking [Zhou:2003:RDM:2981345.2981367] 47.35 71.02
GT [bai2010learning] 48.97 73.45
RAN (ours) 49.02 73.53
Table 2: Retrieval performance (%) for ORL.
Methods Precision@50 Recall@50
Euclidean Distance 45.53 56.91
SD [wang2012affinity] 47.42 59.27
PPR [haveliwala2002topic] 47.39 59.24
Manifold Ranking [Zhou:2003:RDM:2981345.2981367] 47.42 59.28
GT [bai2010learning] 46.18 57.72
RAN (ours) 56.19 70.23
Table 3: Retrieval performance (%) for USPS.
Fig. 3: Retrieval performance (%) vs. the number of neighbors $k$ on USPS.
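The Precision@m and Recall@m figures in Tables 1-3 follow the usual definition: of the top-m retrieved images, the fraction belonging to the query's class, and the fraction of that class that is retrieved. A minimal sketch of this protocol, with class labels standing in for ground-truth relevance (helper names are ours):

```python
import numpy as np

def precision_recall_at_m(scores: np.ndarray, labels: np.ndarray,
                          query_idx: int, m: int):
    """Evaluate one query given ranking scores (higher = more relevant)."""
    order = np.argsort(-scores)                          # best first
    order = order[order != query_idx][:m]                # top-m, excluding the query
    hits = labels[order] == labels[query_idx]            # same class counts as relevant
    n_relevant = (labels == labels[query_idx]).sum() - 1  # relevant items, minus the query
    return hits.mean(), hits.sum() / n_relevant
```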

4 Conclusions

We study the data ranking problem by capturing the underlying geometry of the data manifold. Instead of relying on fixed-weight data graphs, we propose a new ranking algorithm that learns the data affinity matrix and the ranking scores simultaneously. The proposed optimization formulation assigns adaptive neighbors to each data point based on the local connectivity, and the smoothness constraint assigns similar ranking scores to similar data points. An efficient algorithm is developed to solve the optimization problem. Evaluations using synthetic and real datasets demonstrate the superior performance of the proposed algorithm.

References

