Learning Theory and Algorithms for Revenue Management in Sponsored Search

Lulu Wang, Huahui Liu, Guanhao Chen, Shaola Ren, Xiaonan Meng, Yi Hu
Alibaba Group, Hangzhou, 310052, China
{sengyun.wll, huahui.lhh, lea.cgh, shaola.rs, xiaonan.mengxn, erwin.huy}@alibaba-inc.com
Abstract

Online advertising is the main source of revenue for Internet businesses. Advertisers are typically ranked according to a score that takes into account their bids and potential click-through rates (eCTR). Generally, the likelihood that a user clicks on an ad is modeled by optimizing for the accuracy of the click-through rates rather than for the performance of the auction in which those rates will be used. This paper attempts to eliminate this disconnect by proposing loss functions for click modeling that are based on final auction performance. We introduce two feasible metrics ($\mathrm{AUC}^R$ and $\mathrm{AUC}^S$) that evaluate the online RPM (revenue per mille) directly rather than the CTR. We then design an explicit ranking function that incorporates a calibration factor and a price-squashed factor to maximize revenue. Given the power of deep networks, we also explore an implicit optimal ranking function with a deep model. Lastly, various experiments with two real-world datasets are presented. In particular, our proposed methods perform better than state-of-the-art methods with regard to the revenue of the platform.

1 Introduction

Online advertising is the main source of revenue for many Internet businesses. After years of evolution, the mechanism has changed from pre-allocated placements to the keyword-based matching of sponsored (or paid) search. Sponsored search, such as Google AdWords and Bing's Paid Search, is search advertising that shows ads alongside algorithmic search results on search engine results pages (SERPs). It has evolved to satisfy users' need for relevant search results and advertisers' desire for qualified traffic to their websites, and it is now considered to be among the most effective marketing vehicles available.

Generally, the problem of revenue optimization is framed as finding the revenue-maximizing incentive-compatible auction [Medina and Mohri, 2014; Thompson and Leyton-Brown, 2013; Drutsa, 2017; Rong et al., 2017; Shen and Su, 2007]. Advertisers are typically ranked according to a score that takes into account their bids and potential click-through rates. The revenue space in sponsored search can be divided into two parts: click-through rate prediction (eCTR) and optimal bidding. Prediction tries to estimate the user's behavior as accurately as possible and occupies an important position in the advertising system. Bidding tries to find an optimal price for each impression and is closely related to the ROI (return on investment) of advertisers and the revenue of advertising platforms. Previous work has also focused on the relationship between eCTR and bid and attempted to maximize revenue through it [Lahaie and Pennock, 2007; Lahaie and Mcafee, 2011]. A number of parametrized variants have been used in practice, and almost all of these works explore this space of parametrized mechanisms, searching for the optimal designs. However, most of the methods and their variants stay at the level of theoretical analysis. Advertising systems, including our platforms, tune the parameters in a sandbox environment until performance converges. The efficiency of this approach is extremely low, and usually we cannot reach the optimal point.

This paper mainly focuses on the problem of revenue management. Our main contributions are summarized as follows:

  1. We propose loss functions for click modeling that are based on final auction performance. From the view of the loss function, we introduce two metrics ($\mathrm{AUC}^R$ and $\mathrm{AUC}^S$) that indicate the online RPM. To our knowledge, this is the first paper in the open literature that tries to evaluate the online RPM directly rather than the CTR.

  2. We explore the implicit and explicit ranking functions to maximize the RPM in sponsored search. Experiments and discussions on two real-world advertising platforms show consistent improvement over existing methods.

2 Preliminaries

2.1 Related Work

The target application of our study is online advertising, and some of the issues discussed might be specific to this domain. In this section, we briefly review previous work related to the revenue management problem.

Offline Evaluation Metric

We studied papers from the proceedings of the International World Wide Web Conference (WWW), the ACM International Conference on Web Search and Data Mining (WSDM), the International Joint Conference on Artificial Intelligence (IJCAI), and the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (SIGKDD) in 2016 and 2017 in the areas of algorithmic search and online advertising. We manually categorized the topic areas of the papers and the evaluation metrics they used, and found that AUC (the Area Under the Receiver Operating Characteristic Curve) is the most widely used evaluation metric. (As far as we know, most production advertising platforms also apply AUC as the offline evaluation metric.)

In fact, AUC is potentially suboptimal because the goal in sponsored search is not to optimize the click action but to optimize the performance of the auction in which the click-through rates will be used. Because of bidding, we often observe considerable discrepancy between offline performance (AUC for eCTR) and online performance (RPM) [Yi et al., 2013]. In this paper, we attempt to eliminate this disconnect by proposing loss functions for click modeling that are based on final auction performance. The experimental results show that the proposed evaluation metrics are highly promising.

Ranking Functions and Auction Mechanism

Several works investigate how to learn a reasonable ranking function to maximize revenue [Lahaie and Pennock, 2007; Zhu et al., 2009a; Zhu et al., 2009b; Roberts et al., 2016; Thompson and Leyton-Brown, 2013]. Revenue management in sponsored search is closely related to the auction mechanism. As the stakes have grown, the auction mechanism in sponsored search has seen several revisions over the years to improve efficiency and revenue. When first introduced by GoTo in 1998, ads were ranked purely by bid. In cost-per-click (CPC) advertising systems, a parametrized family of ranking rules that order ads according to a rank score is now shared by every major search engine:

$s_i = b_i \cdot \hat{p}_i$  (1)

where $b_i$ is the bid amount and $\hat{p}_i$ is the estimated position-unbiased CTR [Chen and Yan, 2012]. The rank score is thus the estimated CTR weighted by the cost-per-click bid. In the last decade, there has been intense research activity in the study of CTR prediction [McMahan et al., 2013; He et al., 2014; Juan et al., 2016; Guo et al., 2017].

Under the assumption that CTRs are measured exactly, it is simple to verify that ranking ads in order of eCTR times bid is economically efficient. However, it is hard to measure CTRs exactly no matter how much effort is invested [Lahaie and Mcafee, 2011; Puhr et al., 2017]. To address this, previous work has mainly focused on two points:

  1. Calibration methods In [McMahan et al., 2013], a calibration layer is used to match predicted CTRs to observed click-through rates. The calibration is improved by applying a correction function $\tau_d(\hat{p})$, where $\hat{p}$ is the predicted CTR and $d$ indicates a partition of the training data; isotonic regression on aggregated data is used to learn $\tau_d$.

  2. Price-squashed factor Lahaie and Pennock propose a parametrized family of ranking rules that order ads according to scores of the form

    $s_i = b_i \cdot \hat{p}_i^{\,\alpha}$  (2)

    where $\alpha$ is a parameter called the price-squashed factor or click investment power [Lahaie and Pennock, 2007]. If $\alpha > 1$, the auction prefers ads with higher estimated CTRs; otherwise, it prefers ads with higher bids. Further, Lahaie and Mcafee show that, in the presence of CTR uncertainty, an $\alpha$ less than 1 can be justified on efficiency grounds [Lahaie and Mcafee, 2011].

In this paper, we incorporate both of these two factors to develop an explicit ranking function for revenue management. Further, given the power of deep networks, we also try to learn a reasonable ranking function with a deep model.

2.2 Problem Formulation

Suppose that the training data is given as lists of feature vectors $\mathbf{x}^{(i)} = (x^{(i)}_1, \dots, x^{(i)}_{n_i})$ (the features here are, in particular, the eCTR and the bid) and their corresponding lists of labels $\mathbf{y}^{(i)}$, $i = 1, \dots, m$. We are to learn a ranking model $f(x)$ defined on objects (feature vectors) $x$. Given a new list of objects $\mathbf{x}$, the learned ranking model assigns a score $f(x_j)$ to each $x_j$, $j = 1, \dots, n$, and the objects are then sorted by score to generate a ranking list (permutation) $\pi$. The evaluation is conducted at the list level; specifically, an evaluation measure $E(\pi, \mathbf{y})$ is utilized.

Definition. Revenue management is to optimize the ranking accuracy in terms of a performance measure $E$ on the training data:

$f^* = \arg\max_{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} E\big(\pi(f, \mathbf{x}^{(i)}), \mathbf{y}^{(i)}\big)$  (3)

where $\mathcal{F}$ is the set of possible ranking functions (e.g., function (2)), $m$ denotes the number of samples, and $E$ denotes the offline evaluation metric.

From the definition, the key points of revenue management are: 1) an optimal ranking function $f$; and 2) a metric $E$ that indicates the online performance. This paper focuses on these two aspects.

3 Methodology

3.1 Offline Evaluation Metric

AUC and Loss Function

AUC is defined as the expectation of ranking a randomly chosen positive sample above a randomly chosen negative one. It is a popular metric for ranking performance evaluation and is extensively used in problems with binary-labeled samples (e.g. CTR prediction). We can further understand AUC from the viewpoint of the loss function. The loss in organic search (which models the click action only) is shown in Table 1. In the table, $y = 1$ means an ad is clicked and $y = 0$ means it is not. The pair $(y_i = 0, y_j = 1)$ means that a negative sample ranks above a positive one, and we miss a click.

Group (upper, lower)    Loss
(y_i = 1, y_j = 1)      0
(y_i = 1, y_j = 0)      0
(y_i = 0, y_j = 1)      1
(y_i = 0, y_j = 0)      0
Table 1: Loss in organic search ranking

Let $x_i$ be the features of sample $i$. Let $y_i$ be the label of sample $i$, with $y_i = 1$ and $y_i = 0$ for positive and negative samples respectively. Let $s_i = f(x_i)$ be the predicted ranking score of sample $i$. Generalizing the above table from the viewpoint of the loss function, we can give a formal definition of AUC:

$\mathrm{AUC} = \frac{1}{MN} \sum_{i:\, y_i = 1} \sum_{j:\, y_j = 0} \Big( \mathbb{I}(s_i > s_j) + \frac{1}{2}\,\mathbb{I}(s_i = s_j) \Big)$  (4)

where $\mathbb{I}(\cdot)$ is the indicator function, so the sum is the empirical expectation of a positive sample outranking a negative one; the tie term $\frac{1}{2}\,\mathbb{I}(s_i = s_j)$ is included for rigor and can be ignored in practice. $M$ and $N$ are the numbers of positive and negative samples respectively.
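
To make equation (4) concrete, the following minimal Python sketch computes AUC by enumerating all positive-negative pairs; the function name and the toy data are illustrative only.

import numpy as np

def pairwise_auc(scores, labels):
    # AUC as the fraction of (positive, negative) pairs ranked correctly,
    # counting ties as 1/2, matching the tie term in equation (4).
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]  # all M x N pairwise score differences
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.3, 0.7, 0.2])
labels = np.array([1, 0, 0, 1, 1])
print(pairwise_auc(scores, labels))  # 3 of the 6 pairs are ordered correctly -> 0.5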

Due to its discrete nature, AUC is neither applicable to samples with real-valued labels (e.g. RPM ranking) nor directly optimizable. However, the formalization of AUC sheds light on two straightforward extensions.

  1. Real-Valued AUC ($\mathrm{AUC}^R$): the label need not be binary, and AUC can be naturally extended to problems with real-valued labels.

  2. Soft AUC ($\mathrm{AUC}^S$): by replacing the discrete indicator function with a continuous approximation (e.g. the sigmoid), the AUC itself can be optimized with gradient-based methods.

Real-Valued AUC and Loss Function

Different from ranking by pure eCTRs, the problem in sponsored search involves the bid factor (Table 2). In the table, $b_i$ is the bid for ad $i$ and $y_i = 1$ means the ad is clicked; the pair $((y_i = 0, b_i), (y_j = 1, b_j))$ means that a negative sample ranks above a positive one, and we lose revenue $b_j$.

Group (upper, lower)                Loss
((y_i = 1, b_i), (y_j = 1, b_j))    max(b_j - b_i, 0)
((y_i = 1, b_i), (y_j = 0, b_j))    0
((y_i = 0, b_i), (y_j = 1, b_j))    b_j
((y_i = 0, b_i), (y_j = 0, b_j))    0
Table 2: Loss in sponsored search ranking

Inspired by the formulation of AUC, at first glance we can define a Real-Valued AUC ($\mathrm{AUC}^R$) by relaxing the label in the original AUC from the binary $y_i$ to the real value $v_i = y_i \cdot b_i$:

$\mathrm{AUC}^R = \frac{1}{N_p} \sum_{i,j:\, v_i > v_j} \mathbb{I}(s_i > s_j)\,(v_i - v_j)$, where $N_p = |\{(i,j): v_i > v_j\}|$  (5)

Note that the above definition is asymmetric: a correct ranking action is rewarded with $v_i - v_j$ while an incorrect one stays unpunished. This asymmetry degenerates in the binary-valued case, since the reward is either 1 or 0; however, it can be problematic in the real-valued case. To address this issue, the original $\mathrm{AUC}^R$ can be fixed as follows (unless otherwise specified, $\mathrm{AUC}^R$ refers to this expression hereafter).

$\mathrm{AUC}^R = 1 - \frac{\sum_{i,j:\, v_i > v_j} \mathbb{I}(s_i < s_j)\,(v_i - v_j)}{\sum_{i,j:\, v_i > v_j} (v_i - v_j)}$  (6)

In sponsored search, AUC, especially AUC measured only on eCTR, may exhibit discrepancies and even produce misleading estimates when used as an indicator of online RPM. Instead of characterizing click-through, $\mathrm{AUC}^R$ depicts the online RPM directly. This makes it more suitable for advertising scenarios, where it can serve as an offline measure of online RPM. The general computation of $\mathrm{AUC}^R$ is described in Algorithm 1; a Python sketch follows the listing. The metric is bounded between 0 and 1, so it can be used as an offline evaluation of online RPM.

1: A sequence of triples $(s_i, y_i, b_i)$, $i = 1, \dots, n$
2: The offline evaluation metric $\mathrm{AUC}^R$
3: Sort the sequence by $s$ in descending order; let $v_i = y_i \cdot b_i$ and $L = 0$
4: for $i = 1$; $i \le n$; $i{+}{+}$ do
5:      for $j = i + 1$; $j \le n$; $j{+}{+}$ do
6:          $L \leftarrow L + \max(v_j - v_i, 0)$
7:      end for
8: end for
9: $\mathrm{AUC}^R = 1 - L / L_{\max}$, where $L_{\max}$ is the total loss of the worst-case ranking (the sequence sorted by $v$ in ascending order)
10: return $\mathrm{AUC}^R$
Algorithm 1 The algorithm of $\mathrm{AUC}^R$
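
The following is a direct, quadratic-time transcription of Algorithm 1 as a Python sketch; the per-impression value $v_i = y_i \cdot b_i$ and the worst-case normalization by the value-ascending order are the assumptions spelled out above.

import numpy as np

def auc_r(scores, clicks, bids):
    # Real-valued AUC per Algorithm 1: 1 - L / L_max, where L is the pairwise
    # revenue loss of the ranking induced by `scores` and L_max is the loss
    # of the worst-case ranking (values sorted ascending).
    v = clicks * bids  # assumed per-impression value v_i = y_i * b_i

    def ranking_loss(ranked_values):
        loss = 0.0
        n = len(ranked_values)
        for i in range(n):
            for j in range(i + 1, n):
                # Item j sits below item i; revenue is lost if it is worth more.
                loss += max(ranked_values[j] - ranked_values[i], 0.0)
        return loss

    loss = ranking_loss(v[np.argsort(-scores)])  # rank by score, descending
    worst = ranking_loss(np.sort(v))             # worst case: value ascending
    return 1.0 - loss / worst if worst > 0 else 1.0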

Soft AUC

$\mathrm{AUC}^S$ can be defined by replacing the hard indicator function in the original AUC with a soft one, e.g. the sigmoid $\sigma(x) = 1/(1 + e^{-x})$. In this way, $\mathrm{AUC}^S$ is differentiable with respect to the model parameters.

$\mathrm{AUC}^S = \frac{1}{Z} \sum_{i,j:\, v_i > v_j} \sigma(s_i - s_j)\,(v_i - v_j)$  (7)

An empirical $\mathrm{AUC}^S$, involving a predictor with parameters $\theta$ on a sample set $S$, can then be defined as follows, with $Z$ as the normalizer.

$\mathrm{AUC}^S(\theta; S) = \frac{1}{Z} \sum_{i,j \in S:\, v_i > v_j} \sigma\big(f_\theta(x_i) - f_\theta(x_j)\big)\,(v_i - v_j)$, where $Z = \sum_{i,j \in S:\, v_i > v_j} (v_i - v_j)$  (8)
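
As a minimal sketch, the empirical $\mathrm{AUC}^S$ of equation (8) can be computed in vectorized form as follows, again assuming the per-impression value $v_i = y_i \cdot b_i$.

import numpy as np

def soft_auc(scores, clicks, bids):
    # Soft AUC per equations (7)-(8): a sigmoid replaces the hard indicator,
    # each pair is weighted by its value gap v_i - v_j, normalized by Z.
    v = clicks * bids
    dv = np.maximum(v[:, None] - v[None, :], 0.0)  # only pairs with v_i > v_j
    ds = scores[:, None] - scores[None, :]
    z = dv.sum()  # the normalizer Z
    return (dv / z * (1.0 / (1.0 + np.exp(-ds)))).sum()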

3.2 Optimal Ranking Function

Explicit Ranking Function

Based on previous research results [McMahan et al., 2013; Lahaie and Pennock, 2007], we combine the calibration method and the price-squashed factor to develop an explicit ranking function for revenue management. In practice, the explicit ranking function is designed as follows.

$f(b_i, \hat{p}_i) = b_i \cdot \big(\omega(\hat{p}_i)\,\hat{p}_i\big)^{\alpha}$  (9)

where $\omega(\cdot)$ is a calibration factor, $\hat{p}_i$ is the predicted CTR, $\alpha$ is the price-squashed factor that tunes the weight between eCTR and bid, and $\omega(\hat{p}_i)\,\hat{p}_i$ is the calibrated eCTR. We use a piecewise linear function for $\omega(\cdot)$ to cope with the complicated shapes of the bias curves. In this way, the definition of the problem (function (3)) can be further refined as follows,

$(\omega^*, \alpha^*) = \arg\max_{\omega,\, \alpha} \frac{1}{m} \sum_{i=1}^{m} E\big(\pi(f_{\omega,\alpha}, \mathbf{x}^{(i)}), \mathbf{y}^{(i)}\big)$  (10)

where $f_{\omega,\alpha}$ is the ranking function of the form of function (9). Two algorithms are presented to find the optimal parameters.
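
A minimal sketch of function (9), assuming the calibration factor $\omega(\cdot)$ is represented by linear interpolation over a set of knots; the knot positions and weights below are hypothetical.

import numpy as np

def explicit_rank_score(ectr, bid, knots, omega, alpha):
    # Equation (9): score = bid * (omega(ectr) * ectr) ** alpha, with the
    # calibration factor omega(.) interpolated piecewise-linearly over `knots`.
    w = np.interp(ectr, knots, omega)
    calibrated_ectr = w * ectr
    return bid * calibrated_ectr ** alpha

# Hypothetical calibration curve, with the squashing value found in Section 4.3.
knots = np.array([0.0, 0.01, 0.1, 1.0])
omega = np.array([1.3, 1.2, 1.0, 0.9])
scores = explicit_rank_score(np.array([0.02, 0.05]), np.array([1.5, 0.8]),
                             knots, omega, alpha=0.43)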

Grid Search Method Since $\mathrm{AUC}^R$ is not differentiable with respect to the parameters, we can only use grid search to solve this problem. The detailed algorithm is shown in Algorithm 2; a Python sketch follows the listing.

1: The sequence $\{(\hat{p}_1, y_1, b_1), \dots, (\hat{p}_n, y_n, b_n)\}$
2: Optimal $\alpha^*$
3: Initialization: initialize $\alpha^* = 0$, $A^* = 0$, step $\Delta\alpha = 0.1$ and $\alpha_{\max} = 2.0$
4: for $\alpha = 0$; $\alpha \le \alpha_{\max}$; $\alpha \leftarrow \alpha + \Delta\alpha$ do
5:      Calculate $A = \mathrm{AUC}^R$ of the ranking induced by $f_\alpha$ with Algorithm 1
6:      if $A > A^*$ then
7:          $A^* \leftarrow A$
8:          $\alpha^* \leftarrow \alpha$
9:      end if
10: end for
11: return $\alpha^*$
Algorithm 2 The grid search method for revenue management
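
With the calibration curve held fixed, Algorithm 2 reduces to a one-dimensional sweep over $\alpha$. A sketch, reusing the auc_r function above; the step size is the assumed $\Delta\alpha = 0.1$.

import numpy as np

def grid_search_alpha(ectr, clicks, bids, alpha_max=2.0, step=0.1):
    # Algorithm 2: sweep the price-squashed factor alpha and keep the value
    # that maximizes AUC^R (Algorithm 1). Calibration is omitted for brevity.
    best_alpha, best_auc = 0.0, 0.0
    for alpha in np.arange(step, alpha_max + step, step):
        scores = bids * ectr ** alpha   # ranking function (2)
        a = auc_r(scores, clicks, bids)  # auc_r as sketched in Section 3.1
        if a > best_auc:
            best_alpha, best_auc = alpha, a
    return best_alpha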

Gradient Descent Method Different from $\mathrm{AUC}^R$, $\mathrm{AUC}^S$ is differentiable with respect to $\theta$, so it can be maximized with gradient-based methods. However, computing the full pairwise objective is unacceptable for industrial problems with hundreds of millions of samples. To tackle this issue, we adopt a mini-batched gradient method (Algorithm 3; a sketch follows the listing). At the beginning, the whole sample set is randomly divided into a series of subsets, each of which contains a tractable number (e.g. 100) of samples. These subsets are then repeatedly fed into the optimizer and $\theta$ is updated until convergence. Experimental results show that our method converges to a $\theta^*$ with both $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$ maximized.

1: Sample set $S$
2: $\theta^*$ which minimizes $-\mathrm{AUC}^S$
3: Initialize $\theta$ randomly
4: Split $S$ into subsets $S_1, \dots, S_k$
5: while not converged yet do
6:      for $i = 1$; $i \le k$; $i{+}{+}$ do
7:          Compute $g = \nabla_\theta \big({-}\mathrm{AUC}^S(\theta; S_i)\big)$
8:          Update $\theta \leftarrow \theta - \eta \cdot g$
9:      end for
10: end while
11: return $\theta$
Algorithm 3 Mini-batched gradient descent method for revenue management
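
Algorithm 3 can be sketched with an automatic-differentiation framework; the sketch below assumes PyTorch and a differentiable score_fn, and uses the AdaGrad optimizer mentioned in the next subsection.

import torch

def maximize_soft_auc(theta, batches, score_fn, lr=0.05, epochs=20):
    # Algorithm 3 as a sketch: mini-batched gradient descent on -AUC^S.
    # `theta` is a parameter tensor with requires_grad=True and
    # `score_fn(theta, x)` returns differentiable rank scores; both names
    # are assumptions made for this sketch.
    opt = torch.optim.Adagrad([theta], lr=lr)
    for _ in range(epochs):
        for x, clicks, bids in batches:
            opt.zero_grad()
            s = score_fn(theta, x)
            v = clicks * bids                            # per-impression value
            dv = (v[:, None] - v[None, :]).clamp(min=0)  # value gaps v_i - v_j
            ds = s[:, None] - s[None, :]
            z = dv.sum().clamp(min=1e-8)                 # normalizer Z
            soft_auc = (dv / z * torch.sigmoid(ds)).sum()
            (-soft_auc).backward()                       # descend on -AUC^S
            opt.step()
    return theta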

Implicit Ranking Function

The popularity of deep learning has attracted the attention of countless researchers. One of the most impressive facts about neural networks is that they can fit any function: no matter what the function, there is guaranteed to be a neural network such that for every possible input $x$, the value $f(x)$ (or some close approximation) is output from the network [Nielsen, 2015]. Given the power of deep networks, we explore learning a reasonable ranking function with a deep model. The structure of the model is shown in Figure 1. In practice, we use the wide & deep network [Cheng et al., 2016] to train a CTR prediction model; the estimated CTR and the bid are then fed into a fully connected network with 3 hidden layers. The loss function of the task is the proposed $-\mathrm{AUC}^S$, and we use AdaGrad to learn the implicit function of eCTR and bid.

Figure 1: The deep networks to learn an implicit ranking function
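
Figure 1 does not fix every architectural detail; as a sketch, the ranking head could look as follows in PyTorch, with hypothetical layer widths and the eCTR produced upstream by the wide & deep model.

import torch
import torch.nn as nn

class ImplicitRankNet(nn.Module):
    # Sketch of the implicit ranking function: the estimated CTR and the bid
    # pass through 3 fully connected hidden layers (widths are assumptions)
    # and yield a scalar rank score, trained on -AUC^S with AdaGrad.
    def __init__(self, hidden=(32, 16, 8)):
        super().__init__()
        dims = (2,) + hidden
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(dims[-1], 1))
        self.net = nn.Sequential(*layers)

    def forward(self, ectr, bid):
        x = torch.stack([ectr, bid], dim=-1)
        return self.net(x).squeeze(-1)  # scalar rank score per impression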

3.3 Summary of The Methods

This paper has two major innovations: 1) from the view of the loss function, we propose metrics for click modeling that are based on final auction performance; the metrics are designed to indicate the online RPM directly instead of the CTR, which is significant in sponsored search. 2) We explore implicit and explicit ranking functions to maximize the online RPM. The summary of the methods is shown in Table 3.

Methods     Metric   Rank Func   Optimization
Method 1    AUC^R    Explicit    Grid Search
Method 2    AUC^S    Explicit    Gradient Descent
Method 3    AUC^S    Implicit    Gradient Descent
Table 3: Summary of the methods

4 Experiments and benchmarking

We describe our experiments for benchmarking the proposed methods in this section. The experimental results indicate that our proposed metrics are more effective than existing state-of-the-art metrics. Further, we provide an exploration and discussion of the optimal ranking function.

4.1 Experimental Data Set

Throughout the paper we show motivating examples and analyses of model performance on two e-commerce search engines, www.alibaba.com and www.aliexpress.com. (We intend to make the data and code available for open research.) In particular, www.alibaba.com is a b2b (business-to-business) e-commerce search engine while www.aliexpress.com is mainly b2c (business-to-consumer). The experimental results on the two cross-domain platforms demonstrate the generality of the presented methods.

4.2 Performance of the Evaluation Metrics

In this section, we evaluate the proposed metrics and existing metrics (mainly AUC) on the two data sets listed in Section 4.1, and compare their performance. We use a confusion matrix to measure the performance of each offline evaluation metric (Table 4): Offline+/- indicates whether the offline metric improved, and Online+/- indicates whether the online RPM improved, so the matrix intuitively shows the consistency between offline and online performance. Our platforms have hundreds of pages and traffic sources; we select the most important 50 pages from the Aliexpress platform and 30 pages from the Alibaba platform as the experimental environment. The selected pages contribute the main revenue of the platforms (91.5% and 70.6% respectively), and their traffic distributions are relatively stable, which is suitable for comparison experiments. We train a new model and tune parameter settings based on historical log data to collect a set of experimental data. In order to verify the stability and effectiveness of the proposed metrics, we collected experimental data from the production system for 14 days, yielding 1120 sets of experimental data in total. Table 4 summarizes the offline-online confusion matrices, measured in online A/B testing environments on Alibaba and Aliexpress with real-time user traffic.

(a) AUC matrix
            Online+   Online-
Offline+    504       89
Offline-    22        505

(b) AUC^R matrix
            Online+   Online-
Offline+    581       12
Offline-    4         523

(c) AUC^S matrix
            Online+   Online-
Offline+    582       11
Offline-    4         523

Table 4: The confusion matrices of different evaluation metrics

As Table 4 shows, while AUC is a quite reliable method for assessing the performance of predictive models, it still suffers from drawbacks in sponsored search. From the table, we can draw the following conclusions:

  1. A higher AUC does not always mean a better ranking. In our observation, 8% (89/1120) of the samples perform well offline but poorly in the online system.

  2. On the other hand, a lower AUC does not necessarily mean worse performance in the online environment. The 2% (22/1120) of inconsistencies in the other direction may be quite misleading, and we may miss the optimal solution (in practice, with AUC as the indicator, this 2% would not be released online). Despite its widespread use, AUC is, in general, neither sufficient nor necessary for online performance in sponsored search.

  3. The proposed metrics greatly reduce the discrepancy between online and offline. $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$ have comparable performance, and both clearly beat AUC. The main reason is that $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$ try to model the performance of the whole ranking function, while AUC merely measures the accuracy of the click-through rates without considering the bid.

4.3 Convergence of $\mathrm{AUC}^S$

Figure 2: Convergence of $\mathrm{AUC}^S$ and the equivalence of Algorithm 2 and Algorithm 3
                                              Alibaba.com                 Aliexpress.com
Methods    Rank function   Objective metric   RPM      CTR      CPC      RPM      CTR      CPC
Baseline   Manual tuning   -                  -        -        -        -        -        -
Method 1   Explicit        AUC^R              +9.65%   +11.68%  -2.14%   +10.01%  +28.48%  -14.98%
Method 2   Explicit        AUC^S              +9.92%   +12.71%  -3.68%   +12.97%  +31.97%  -14.60%
Table 5: Performance of A/B test with real-world traffic

$\mathrm{AUC}^S$ is proposed to solve the non-differentiability of $\mathrm{AUC}^R$, so that we can use gradient descent to optimize the parameters. Theoretically, $\mathrm{AUC}^S$ is approximately equivalent to $\mathrm{AUC}^R$. We design an experiment to verify the convergence of $\mathrm{AUC}^S$ and the equivalence of $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$. In the experiment, the ranking function takes the form of function (2), and we use Algorithm 2 and Algorithm 3 to optimize the parameter $\alpha$ respectively. The experimental result is shown in Figure 2. Comparing Figure 2(a) and Figure 2(b), we conclude that $\mathrm{AUC}^S$ converges well. Comparing Figure 2(b) and Figure 2(c), we further conclude that Algorithm 2 and Algorithm 3 have equivalent performance, and the optimal value of $\alpha$ is 0.43.

4.4 Exploration of Ranking Function

Motivated by [McMahan et al., 2013; Lahaie and Pennock, 2007; Lahaie and Mcafee, 2011], we design an explicit ranking function (function (9)). In view of the power of deep networks, we also design a deep model to learn the optimal implicit ranking function. Comparing these two methods yields some interesting findings, which are shown in Figure 3. From the figure, we can draw the following conclusions:

  1. Theoretically, the explicit ranking function is a special case of the implicit ranking function. Experimental results show that the hand-designed ranking function and the model-based function have comparable performance: the two approaches converge to the same optimal value.

  2. The implicit ranking function converges faster than the explicit one. Multiple rounds of experiments show that deep networks make it easier to capture the functional relationship between eCTR and bid.

Figure 3: Exploration of Ranking Function

4.5 Performance of Online A/B Test

We have deployed our proposed strategies on the Alibaba and AliExpress platforms, two mainstream platforms in the global e-commerce market. $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$ are used as the final evaluation metrics in our system. In view of the overall performance, we finally use the explicit ranking function in our production system. The proposed approach can be regarded as a post-processing step on top of the existing click-through prediction model. For the sake of comparability, the baseline models and our proposed model are constructed on the same feature representation. Parameters are tuned separately and we report the best results.

Table 5 shows the A/B test results of the online systems over three metrics. The experimental results show that the methods described in this paper outperform the state-of-the-art models on RPM, CTR and CPC, bringing significant improvement to the RPM of the platforms. It is worth noting that our direct optimization objective is the platform's RPM; the results show, however, that we achieved this goal without compromising the advertisers' benefit or the customers' search experience. On the contrary, we improved the CTR and the advertisers' ROI at the same time.

5 Conclusion and discussion

In this work we looked into the revenue management problem, which contains Alibaba and Aliexpress as special cases. From the view of the loss function, we propose two metrics, $\mathrm{AUC}^R$ and $\mathrm{AUC}^S$, for click modeling that are based on final auction performance. The metrics are potentially more suitable than AUC because the goal is to depict the online RPM directly. Theoretical analysis and experimental results verify the superiority of the proposed metrics as indicators of online RPM. We also explored ranking functions, both implicit and explicit, to maximize the revenue in sponsored search. The methods are deployed on two production platforms, and outstanding profit gains over the baseline were observed in online A/B tests with real-world traffic.

For future work, we will analyze the factor of position bias [Chen and Yan, 2012; Hofmann et al., 2014] in modeling revenue management. We also plan to further explore implicit ranking functions to maximize online revenue.

References

  • [Chen and Yan, 2012] Ye Chen and Tak W Yan. Position-normalized click prediction in search advertising. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 795–803. ACM, 2012.
  • [Cheng et al., 2016] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10. ACM, 2016.
  • [Dhar and Ghose, 2010] Vasant Dhar and Anindya Ghose. Research commentary—sponsored search and market efficiency. Information Systems Research, 21(4):760–772, 2010.
  • [Drutsa, 2017] Alexey Drutsa. Horizon-independent optimal pricing in repeated auctions with truthful and strategic buyers. In Proceedings of the 26th International Conference on World Wide Web, pages 33–42. International World Wide Web Conferences Steering Committee, 2017.
  • [Fain and Pedersen, 2006] Daniel C Fain and Jan O Pedersen. Sponsored search: A brief history. Bulletin of the Association for Information Science and Technology, 32(2):12–13, 2006.
  • [Guo et al., 2017] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. Deepfm: A factorization-machine based neural network for ctr prediction. In Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 1725–1731, 2017.
  • [He et al., 2014] Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, et al. Practical lessons from predicting clicks on ads at facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, pages 1–9. ACM, 2014.
  • [Hofmann et al., 2014] Katja Hofmann, Anne Schuth, Alejandro Bellogin, and Maarten De Rijke. Effects of position bias on click-based recommender evaluation. In European Conference on Information Retrieval, pages 624–630. Springer, 2014.
  • [Juan et al., 2016] Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. Field-aware factorization machines for ctr prediction. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 43–50. ACM, 2016.
  • [Lahaie and Mcafee, 2011] Sébastien Lahaie and R Preston Mcafee. A Bayesian Approach to Efficient Ranking in Sponsored Search. Springer Berlin Heidelberg, 2011.
  • [Lahaie and Pennock, 2007] Sébastien Lahaie and David M Pennock. Revenue analysis of a family of ranking rules for keyword auctions. In Proceedings of the 8th ACM conference on Electronic commerce, pages 50–56. ACM, 2007.
  • [Lahaie et al., 2007] Sébastien Lahaie, David M Pennock, Amin Saberi, and Rakesh V Vohra. Sponsored search auctions. Algorithmic game theory, pages 699–716, 2007.
  • [McMahan et al., 2013] H Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, et al. Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1222–1230. ACM, 2013.
  • [Medina and Mohri, 2014] Andres M Medina and Mehryar Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 262–270, 2014.
  • [Nielsen, 2015] Michael A Nielsen. Neural networks and deep learning. Determination Press, 2015.
  • [Puhr et al., 2017] Rainer Puhr, Georg Heinze, Mariana Nold, Lara Lusa, and Angelika Geroldinger. Firth’s logistic regression with rare events: accurate effect estimates and predictions? Statistics in medicine, 36(14):2302–2317, 2017.
  • [Roberts et al., 2016] Ben Roberts, Dinan Gunawardena, Ian A Kash, and Peter Key. Ranking and tradeoffs in sponsored search auctions. ACM Transactions on Economics and Computation (TEAC), 4(3):17, 2016.
  • [Rong et al., 2017] Jiang Rong, Tao Qin, Bo An, and Tie-Yan Liu. Revenue maximization for finitely repeated ad auctions. In AAAI, pages 663–669, 2017.
  • [Shen and Su, 2007] Zuo-Jun Max Shen and Xuanming Su. Customer behavior modeling in revenue management and auctions: A review and new research opportunities. Production and operations management, 16(6):713–728, 2007.
  • [Thompson and Leyton-Brown, 2013] David RM Thompson and Kevin Leyton-Brown. Revenue optimization in the generalized second-price auction. In Proceedings of the fourteenth ACM conference on Electronic commerce, pages 837–852. ACM, 2013.
  • [Yi et al., 2013] Jeonghee Yi, Ye Chen, Jie Li, Swaraj Sett, and Tak W Yan. Predictive model performance: Offline and online evaluations. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1294–1302. ACM, 2013.
  • [Zhu et al., 2009a] Yunzhang Zhu, Gang Wang, Junli Yang, Dakan Wang, Jun Yan, and Zheng Chen. Revenue optimization with relevance constraint in sponsored search. In Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising, pages 55–60. ACM, 2009.
  • [Zhu et al., 2009b] Yunzhang Zhu, Gang Wang, Junli Yang, Dakan Wang, Jun Yan, Jian Hu, and Zheng Chen. Optimizing search engine revenue in sponsored search. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 588–595. ACM, 2009.