Modeling Generalized Rate-Distortion Functions
Many multimedia applications require a precise understanding of rate-distortion characteristics, measured by the function relating visual quality to media attributes, which we term the generalized rate-distortion (GRD) function. In this study, we explore the GRD behavior of compressed digital videos in a three-dimensional space of bitrate, resolution, and viewing device/condition. Our analysis of a large-scale video dataset reveals that empirical parametric models are systematically biased, while exhaustive search methods require excessive computation time to depict the GRD surfaces. By exploiting the properties that all GRD functions share, we develop a Robust Axial-Monotonic Clough-Tocher (RAMCT) interpolation method to model the GRD function. This model allows us to accurately reconstruct the complete GRD function of a source video from a moderate number of measurements. To further reduce the computational cost, we present a novel sampling scheme based on a probabilistic model and an information measure. The proposed sampling method constructs a sequence of quality queries by minimizing the overall uncertainty in the remaining samples. Experimental results show that the proposed algorithm significantly outperforms state-of-the-art approaches in accuracy and efficiency. Finally, we demonstrate the use of the proposed model in three applications: rate-distortion curve prediction, per-title encoding profile generation, and video encoder comparison.
Rate-distortion theory provides the theoretical foundation for lossy data compression and is widely employed in image and video compression schemes. Many multimedia applications require precise measurements of rate-distortion functions to characterize the source signal and maximize user Quality-of-Experience (QoE). Examples of applications that explicitly use rate-distortion measurements are codec evaluation, rate-distortion optimization, video quality assessment (VQA), encoding representation recommendation [41, 34, 15, 10], and QoE optimization of streaming videos [38, 11].
Digital videos usually undergo a variety of transforms and processes in the content delivery chain, as shown in Fig. 1. To address the growing heterogeneity of display devices, contents, and access network capacity, source videos are encoded into different bitrates, spatial resolutions, frame rates, and bit depths before being transmitted to the client. In an adaptive streaming environment, based on bandwidth, buffering, and computation constraints, client devices adaptively select a proper video representation on a per-segment basis to download and render. Each process influences the visual quality of a video in a different way, and their joint effect can be characterized by a generalized rate-distortion (GRD) function. In general, this attribute-distortion mapping comprises several complex factors, such as source content, operation mode/type of encoder, rendering system, and human visual system (HVS) characteristics. In this work, we treat the GRD surface as a function whose input is the video representation, consisting of bitrate and spatial resolution, and whose output is the perceptual video quality under each of the viewing devices/conditions of interest. Furthermore, the GRD function is content- and encoder-dependent.
Despite the tremendous growth in computational multimedia over the last few decades, estimating a GRD function remains difficult, expensive, and time-consuming. Specifically, probing the quality of a single sample in the GRD space involves sophisticated video encoding and quality assessment, both of which are expensive processes. For example, the recently announced, highly competitive AV1 video encoder and the video quality assessment model VMAF can be over 1000 times and 10 times slower than real time, respectively, for full high-definition (1920×1080) video content. Given the massive volume of multimedia data on the Internet, the real challenge is to produce an accurate estimate of the GRD function with a minimal number of samples.
We aim to develop a GRD function estimation framework with three desirable properties:
Accuracy: It produces asymptotically unbiased estimates of the GRD function, independent of the source video complexity and the encoder mechanism.
Speed: It requires a minimal number of samples to reconstruct a full GRD function.
Mathematical soundness: The GRD model has to be mathematically well-behaved, making it readily applicable to a variety of computational multimedia applications.
To achieve accuracy, we analyze the properties that all GRD functions share, based on which we formulate GRD function approximation as a quadratic programming problem. The solution of the optimization problem provides an optimal interpolation model lying in the theoretical GRD function space. To achieve speed, we propose an efficient sampling algorithm that constructs a set of queries to maximize the expected information gain. The sampling scheme results in a unique sampling sequence invariant to source content, enabling parallel encoding and quality assessment. To achieve mathematical soundness, the GRD model inherits from the Clough-Tocher (CT) interpolation method, and the resulting function is differentiable everywhere on the domain of interest. Extensive experiments demonstrate that the resulting GRD function estimation framework achieves consistent improvements in speed and accuracy over existing methods. The superiority and usefulness of the proposed algorithm are also evidenced by three applications.
II Related Work
Although rate-distortion theory has been successfully employed in many multimedia applications, research on GRD surface modeling has only emerged as a field of study in the past decade. Existing methods can be roughly categorized based on their assumptions about the shape of a GRD function. The first model class makes only weak assumptions about the properties of GRD functions. For example, one approach assumes the continuity of GRD functions and applies linear interpolation to estimate the response function after densely sampling the video representation space. However, the exhaustive search process is computationally expensive, and the number of samples required increases exponentially with the dimension of the input space.
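As a rough sketch of this exhaustive-sampling baseline, a dense set of (bitrate, resolution) probes can be linearly interpolated with SciPy's `griddata`. The probe values below are a toy monotone stand-in for measured quality, not the output of a real VQA model:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical dense probes of the (bitrate, resolution) space; in practice each
# probe costs one video encode plus one objective quality measurement.
rng = np.random.default_rng(0)
probes = rng.uniform([100, 240], [9000, 1080], size=(200, 2))
# Add the corners so the query point is guaranteed to lie inside the hull.
probes = np.vstack([probes, [[100, 240], [100, 1080], [9000, 240], [9000, 1080]]])
# Toy monotone stand-in for measured quality (a real probe would run a VQA model).
quality = 100 * (1 - np.exp(-probes[:, 0] / (3 * probes[:, 1])))

# Piecewise-linear interpolant over the Delaunay triangulation of the probes.
estimate = griddata(probes, quality, (1500.0, 480.0), method="linear")
```

The estimate is only as good as the probe density, which is exactly the cost problem discussed above.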
By contrast, the second class of models makes strong a priori assumptions about the form of the GRD function to alleviate the need for excessive training samples. For example, one model assumes that video quality exhibits an exponential relationship with respect to the quantization step, spatial resolution, and frame rate. Alternatively, Toni et al. [34, 25] derived a reciprocal function to model the GRD function. Similarly, another work modeled the rate-quality curve at each spatial resolution with a logarithmic function. A significant limitation of these models is that the domains of the analytic functional forms are restricted to the bitrate dimension and several discrete resolutions, lacking both the flexibility to incorporate other dimensions, such as frame rate and bit depth, and the capability to predict GRD behavior at novel resolutions.
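For instance, the logarithmic rate-quality form can be fit per resolution with ordinary least squares on log-bitrate. The sample points below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical measured (bitrate, quality) pairs at one spatial resolution.
bitrates = np.array([300.0, 600.0, 1200.0, 2400.0, 4800.0])   # kbps
quality = np.array([40.0, 55.0, 68.0, 79.0, 88.0])            # 0-100 scale

# Least-squares fit of the logarithmic form q(b) = a * log(b) + c.
a, c = np.polyfit(np.log(bitrates), quality, deg=1)

def predict(b):
    """Predicted quality at bitrate b under the fitted logarithmic model."""
    return a * np.log(b) + c
```

Such a model is cheap to fit from a handful of samples, but it is tied to one resolution and cannot extrapolate across the resolution axis, which is the limitation noted above.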
Beyond their respective limitations, both classes of models suffer from the same problem: the training samples in the GRD space are either manually picked or randomly selected, neglecting differences in the informativeness of samples. While many recent works acknowledge the importance of the GRD function [41, 34, 15, 10], a careful analysis and modeling of the response has yet to be done. We wish to address this void. In doing so, we seek a good compromise between 1) global, rigid models that depend on random training samples and 2) local, indefinite models that require exhaustive search over the video representation space.
III Modeling Generalized Rate-Distortion Functions
We begin by stating our assumptions. Our first assumption is that the GRD function is smooth. In theory, the Shannon lower bound, the infimum of the bitrate required to achieve a certain quality, is guaranteed to be continuous with respect to the target distortion. Moreover, successive changes in spatial resolution gradually shift the frequency components and entropy of the source signal, resulting in smooth transitions in perceived quality. In practice, many subjective experiments have empirically confirmed the smoothness of GRD functions [29, 40]. Furthermore, it is beneficial to impose continuity on GRD functions for better mathematical properties. For instance, the GRD surface is desired to be differentiable in many multimedia applications [34, 11].
Our second assumption is that the GRD function is axial-monotonic. (In this work, we use rate-distortion function and rate-quality function interchangeably. Without loss of generality, we assume the function to be axial monotonically increasing; if it is decreasing, we replace the given response with its difference from the maximum quality value.) According to rate-distortion theory, video quality increases monotonically with the amount of resources spent in lossy compression. However, such a monotonicity constraint may not apply to the spatial resolution. It has been demonstrated that, at the same bitrate, encoding at a high spatial resolution may even result in lower video quality than encoding at a low spatial resolution combined with upsampling and interpolation. Specifically, encoding at high resolution with insufficient bitrate produces artifacts such as blocking, ringing, and contouring, whereas encoding at low resolution with upsampling and interpolation introduces blurring. The resulting distortions are further amplified or alleviated by the characteristics of the viewing device and viewing conditions, which interplay with HVS features such as the contrast sensitivity function. A few sample GRD surfaces with their corresponding source videos are illustrated in Fig. 2.
Our third assumption is that the quality measurements are precise. Because the HVS is the ultimate receiver in most applications, subjective evaluation is a straightforward and reliable approach to evaluating the quality of digital videos. Traditional subjective experiment protocols model a subject’s perceived quality as a random variable, assuming the quality labeling process to be stochastic. Because subjective experiments are expensive and time-consuming, they are rarely used in GRD function approximation. In practice, objective VQA methods that produce precise quality predictions are often employed to generate ground-truth samples of the GRD function. Therefore, a GRD function should pass through the quality scores of objective VQA models evaluated on the encoded video representations.
Under these assumptions, we define the space of GRD functions as

𝓕 = { f : f is smooth and axial-monotonic, and f(b_i, s_i) = q_i for i = 1, …, N },

where N, b_i, s_i, and q_i represent the total number of training samples and the bitrate, spatial resolution, and quality of the i-th training sample, respectively.
In the subsequent sections, we introduce the proposed GRD model. Sections III-A and III-C review the traditional CT method and the monotonicity condition of cubic Bézier functions, on which the proposed model relies. The proposed continuity condition, optimization framework, and robust axial-monotonic CT algorithm are novel contributions, detailed in Sections III-B, III-D, and III-E, respectively.
III-A Review of the Clough-Tocher Method
Since its introduction in the 1960s, the CT method has been the most widely used multi-dimensional scattered-data interpolant, thanks to its continuity and low computational complexity [2, 4]. Consider a set of scattered points in the plane together with their values: a triangulation of the points induces a piecewise triangular surface whose nodes are the given points. The CT method employs a piecewise cubic function as the interpolant for each triangle. Specifically, each triangle is further divided into three equivalent subtriangles, on each of which a cubic function in the form of a Bézier surface is estimated. Hereafter, we refer to the overall triangle as the macrotriangle and to its subtriangles as microtriangles. The split triangular net is known as the control net, illustrated in Fig. 3. For clarity and brevity, we refer to each macrotriangle edge by the vertex opposite to it, and to each internal microtriangle edge by its two endpoints. Mathematically, a cubic Bézier surface on a microtriangle can be formulated as
where each coefficient represents the control-net value at the corresponding node, and the arguments are the barycentric coordinates with respect to the three vertices of the microtriangle. The barycentric coordinates of a point with respect to a triangle can be defined as
where the directed area of a triangle is positive when its vertices are ordered counter-clockwise. The conversion from Cartesian to barycentric coordinates is lengthy and thus omitted here; interested readers may refer to the literature for more details.
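The area-ratio definition above can be sketched directly; the function and symbol names below are ours, introduced for illustration:

```python
import numpy as np

def signed_area(p, q, r):
    """Directed area of triangle (p, q, r); positive when counter-clockwise."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

def barycentric(p, v1, v2, v3):
    """Barycentric coordinates of p w.r.t. triangle (v1, v2, v3) as area ratios."""
    total = signed_area(v1, v2, v3)
    b1 = signed_area(p, v2, v3) / total
    b2 = signed_area(v1, p, v3) / total
    b3 = signed_area(v1, v2, p) / total
    return np.array([b1, b2, b3])
```

The three coordinates always sum to one; the centroid of a triangle maps to (1/3, 1/3, 1/3), and each vertex maps to a unit coordinate vector.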
We note that 10 parameters are required to define a Bézier surface on each microtriangle. Three of these parameters are associated with the vertices of the microtriangle, and the remaining parameters are associated with the 6 trisection points on its three edges. At first glance, we need to determine 30 parameters for the interpolant in one macrotriangle, with only 3 equality constraints given by the known values at its vertices. Fortunately, the degrees of freedom can be dramatically reduced with certain smoothness constraints. Under the continuity assumption within macrotriangles, each pair of adjacent Bézier surfaces shares the same curve at their common boundary, leaving 19 free parameters in the macrotriangle. The inner-triangle continuity removes 7 additional degrees of freedom by enforcing the shaded neighboring microtriangles in Fig. 3 to be coplanar. To ensure inter-triangle continuity, a standard approach is to assume the cross-boundary derivatives of neighboring macrotriangles to be collinear, which further reduces the degrees of freedom to 9. Taking into account the three known values at the vertices, we eventually have 6 unknown parameters in each macrotriangle. Although the gradients at the vertices are not always available in practice, in most cases they can be estimated by considering the known values not only at the vertices of the triangle in question but also at those of its neighbors. The most commonly used method estimates the gradients by minimizing the second-order derivatives along all Bézier curves. Readers interested in the details of the CT method may refer to [28, 19, 2, 4].
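For reference, SciPy ships an off-the-shelf implementation of the standard CT scheme reviewed above, building a C1 piecewise-cubic interpolant over a Delaunay triangulation. A minimal sketch on synthetic data (the surface below is a toy stand-in, not a GRD measurement):

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# Synthetic scattered data standing in for (bitrate, resolution) -> quality probes.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(30, 2))
vals = np.sin(4.0 * pts[:, 0]) + pts[:, 1]    # smooth test surface

# C1 piecewise-cubic interpolant over the Delaunay triangulation of `pts`.
ct = CloughTocher2DInterpolator(pts, vals)
z = ct(0.5, 0.5)                              # query a point inside the hull
```

The interpolant reproduces the data exactly at the scattered sites, but, as discussed next, it is neither affine-invariant nor axial-monotonic.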
The original CT method suffers from at least three limitations in approximating GRD functions. First, it picks the normal of each edge as the direction of the cross-boundary derivative. However, this choice gives an interpolant that is not invariant under affine transforms, which has undesirable consequences: for a very narrow triangle, the spline can develop huge oscillations. Second, the interpolant composed of piecewise Bézier polynomials is not axial-monotonic, even when the given points are. Third, the CT algorithm achieves external smoothness by estimating the gradients at the three vertices and assuming the normal derivative along each triangle boundary to be linear. The linearity assumption is somewhat arbitrary and may violate the monotonicity we want to achieve. We address these three limitations in the subsequent sections.
III-B Affine-Invariant C1 Continuity
In this section, we propose an affine-invariant CT interpolant. Instead of the normal derivative at the triangle boundary, we consider the derivative along a direction parallel to the corresponding internal edge, i.e.,
where the quantities are partial derivatives of the Bézier surface at the boundary and the indices form a cyclic permutation. Since this quantity transforms in the same way as the gradient under affine transforms, the resulting interpolant is affine-invariant.
We also lift the unwanted linear constraints on the cross-boundary derivatives, raising the number of free parameters in a macrotriangle back to 9. In summary, the equality constraints in (2) can be written compactly in matrix form
where the vectors collect the control-net values and the unknown derivatives, respectively. Therefore, finding the interpolant of a macrotriangle corresponds to determining its 9 unknown parameters.
Besides the inner-macrotriangle constraints, we also want the cross-boundary derivatives to be consistent across triangle boundaries to ensure external smoothness. As a result, the following equality constraints are added for each edge shared by adjacent triangles
III-C Axial Monotonicity
This section derives sufficient constraints for the Bézier surface in a macrotriangle to be axial-monotonic. In general, an interpolant composed of piecewise Bézier polynomials is not monotonic even when the sampled points are. Several studies have derived sufficient monotonicity conditions for univariate or bivariate Bézier functions [21, 24]. We adopt a sufficient condition from the literature, which proves that the cubic Bézier surface in a microtriangle is axial-monotonic when all 6 triangular patches of its control net are axial-monotonic. By combining the sufficient conditions in all three microtriangles with the inner-triangle continuity, we obtain
We can summarize the monotonicity constraint in matrix form
More details on how we construct the constraint matrices are given in the Appendix.
III-D Optimization-Based Solutions
To determine the unknown derivatives, we propose to minimize the total curvature of the interpolated surface, in accordance with the smoothness assumption. Directly computing the total curvature is computationally intractable; instead, we minimize the curvature of the Bézier curves along the edges of each microtriangle as an approximation. Specifically, the objective function is written as
where a weight is introduced to cancel the double counting of the external edges.
Consider an external boundary with four Bézier control-net coefficients. The integral of the second-order derivative of the Bézier curve along this boundary can be expressed in terms of the four coefficients and the length of the boundary.
Similarly, we obtain the remaining part of the objective function from each internal boundary, expressed in terms of its control-net coefficients and its length.
Substituting into (11), we get
In summary, finding the axial-monotonic interpolant corresponds to solving the following optimization problem
Note that the constraints are linear in the unknowns and the quadratic form is positive-semidefinite. Thus, finding the interpolant reduces to a standard quadratic programming problem, which can be efficiently solved by existing convex programming packages.
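The structure of such a quadratic program can be sketched on a toy problem: minimize a positive-definite quadratic form subject to a linear equality (continuity-like) constraint and a linear inequality (monotonicity-like) constraint. The matrices below are illustrative stand-ins, not the paper's actual constraint matrices, and SciPy's SLSQP is used in place of a dedicated QP package:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins: H approximates a curvature quadratic form (positive definite),
# A_eq/b_eq an equality constraint, and G/h an inequality constraint.
H = np.array([[2.0, 0.5], [0.5, 1.0]])
A_eq, b_eq = np.array([[1.0, 1.0]]), np.array([1.0])   # theta_0 + theta_1 = 1
G, h = np.array([[-1.0, 0.0]]), np.array([0.0])        # theta_0 >= 0

res = minimize(
    lambda t: 0.5 * t @ H @ t, x0=np.zeros(2), method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda t: A_eq @ t - b_eq},
                 {"type": "ineq", "fun": lambda t: h - G @ t}],
)
theta = res.x   # analytic optimum of this toy problem is (0.25, 0.75)
```

For large, sparse constraint sets, a dedicated operator-splitting solver such as OSQP (used later in the experiments) is the more practical choice.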
III-E Robust Axial-Monotonic Clough-Tocher Method
Here we propose our Robust Axial-Monotonic Clough-Tocher (RAMCT) method. The inequality constraints in (6) are sufficient conditions for axial monotonicity. However, these sufficient conditions cannot be satisfied in some extreme cases, making the primal problem infeasible. To relax the constraints, we introduce hinge losses on some of these inequalities, motivated by the success of support vector machines. Specifically, the modified inequality constraints are formulated as
Note that (14a) and (14c) are identical to (5a) and (5c) because they are necessary conditions of axial monotonicity (see the Appendix for a proof). Rewriting these constraints in matrix form, we obtain
By substituting (3) into the inequality above, we finally obtain the inequality constraints in terms of the unknowns and the auxiliary slack variables as
The objective function is then modified accordingly,
where the weighting parameter balances the curvature term and the hinge penalties. Substituting into (16), we get
IV Information-Theoretic Sampling
In this section, we first explore the informativeness of samples in the GRD space via a probabilistic model. We then present an information-theoretic sampling strategy that optimally selects the samples, offering enormous savings in time and computational resources.
Let x be a vector of discrete samples of a GRD function, uniformly distributed in the bitrate-resolution space, with dimension equal to the total number of sample points on the grid. Given that the GRD function is smooth, these discrete samples provide a good description of the continuous GRD function when the sampling grid is dense. In particular, when the GRD function is band-limited, it can be fully recovered from these samples provided the sampling density exceeds the Nyquist rate. Assuming x is generated from GRD functions of real-world video content, we model x as a random vector whose probability density function follows a multivariate normal distribution. The total uncertainty of x is characterized by its joint entropy, given by
where det denotes the determinant operator. If the full vector x is divided into an observed portion and an unobserved portion, where the observed portion has been resolved by measurement, then the remaining uncertainty is given by the conditional entropy
As a special case, we aim to find the single sample that most efficiently reduces the uncertainty of the GRD estimate. This is found by minimizing the log-determinant of the conditional covariance matrix
where the subscript denotes the row index of the candidate sample.
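The conditional covariance used here follows the standard Gaussian identity Σ_u|o = Σ_uu − Σ_uo Σ_oo⁻¹ Σ_ou, and the joint entropy is 0.5·log((2πe)^n·det Σ). A minimal sketch (function names are ours):

```python
import numpy as np

def conditional_cov(cov, observed):
    """Covariance of the unobserved entries given the observed ones (Gaussian)."""
    n = cov.shape[0]
    u = np.setdiff1d(np.arange(n), observed)   # unobserved indices
    o = np.asarray(observed)
    S_uu = cov[np.ix_(u, u)]
    S_uo = cov[np.ix_(u, o)]
    S_oo = cov[np.ix_(o, o)]
    # Schur complement: S_uu - S_uo @ inv(S_oo) @ S_ou
    return S_uu - S_uo @ np.linalg.solve(S_oo, S_uo.T)

def gaussian_entropy(cov):
    """Joint differential entropy 0.5 * log((2*pi*e)^n * det(cov))."""
    n = cov.shape[0]
    return 0.5 * (n * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])
```

Conditioning on any correlated sample can only shrink the covariance, so the entropy of the unobserved portion never increases after an observation.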
Minimizing (21) directly is computationally expensive, especially when the dimensionality is high. Alternatively, we minimize an upper bound of the conditional entropy
where I denotes the identity matrix. The sample with the minimum average loss in (22) over all viewing devices is the most informative. Once the optimal sample index is obtained, we encode the video at that representation, evaluate its quality with objective VQA algorithms, and update the conditional covariance matrix in (20). The process is applied iteratively until the overall uncertainty in the system falls below a certain threshold. We summarize the proposed uncertainty sampling method in Algorithm 1, where each index corresponds to the bitrate and spatial resolution of a representation.
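The greedy loop of Algorithm 1 can be sketched as follows. This simplified version scores each candidate by the total variance reduction of a rank-one Gaussian update rather than the exact bound in (22), but it shares the key property that the query order depends only on the covariance matrix, never on the measured values:

```python
import numpy as np

def greedy_query_order(cov, n_queries):
    """Precompute a greedy query order from a covariance matrix alone."""
    cov = np.array(cov, dtype=float)
    remaining = list(range(cov.shape[0]))
    order = []
    for _ in range(n_queries):
        # Observing index j shrinks the total variance by
        # sum_i cov[i, j]^2 / cov[j, j]   (rank-one Gaussian update).
        gains = [cov[:, j] @ cov[:, j] / cov[j, j] for j in remaining]
        j = remaining.pop(int(np.argmax(gains)))
        order.append(j)
        # Rank-one downdate of the covariance after observing index j;
        # the update does not involve the measured value itself.
        cov -= np.outer(cov[:, j], cov[:, j]) / cov[j, j]
    return order

# Toy covariance with smooth local correlation over a 1-D grid of 5 samples.
x = np.arange(5.0)
toy_cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
order = greedy_query_order(toy_cov, 3)   # the central sample is queried first
```

Because the update never uses the observed quality values, the resulting order can be tabulated once and reused for every source video, which is what makes the encoding and quality-assessment steps parallelizable.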
Remark: To get a sense of what type of samples will be chosen by the proposed algorithm, we analyze several influencing factors in the objective function (22):
By the basic properties of trace, the objective function in the uncertainty sampling can be factorized as
Thus, the objective is a decreasing function of a sample’s variance. This indicates that samples with large uncertainty are more likely to be selected than those with small uncertainty.
According to (20), the reduction in the uncertainty of a sample is proportional to its squared correlation with the sample selected in the current iteration. Fig. 4 shows an empirical covariance matrix estimated from our video dataset (detailed in the next section), from which we observe that GRD functions typically exhibit high correlation within a local region. Combined with the first observation above, we conclude that the next optimal sample should be selected from a region where labeled samples are sparse.
Note that observing a sample alters the variances of the remaining samples, but the updated variances do not depend on the specific value observed. This independence has two important consequences. First, the proposed sampling scheme is general enough to accommodate GRD estimators of all classes. More importantly, the algorithm yields a unique sampling sequence for all GRD functions. In other words, we can generate a lookup table of the optimal querying order, making the sampling process fully parallelizable.
[Tables I and II: prediction accuracy (vs. number of samples) for the Reciprocal, Logarithmic, PCHIP, CT, and RAMCT models.]
V Experiments
In this section, we first describe the experimental setups, including our GRD function database, the implementation details of the proposed algorithm, and the evaluation criteria. We then compare the proposed algorithm with existing GRD estimation methods.
V-A Experimental Setups
GRD Function Database: We construct a new video database containing 250 pristine videos that span a great diversity of content. An important consideration in selecting the videos is that they be representative of the videos we see in daily life. Therefore, we resort to the Internet and carefully select 200 keywords to search for Creative Commons licensed videos. We initially obtain more than 700 4K videos. Many of these contain significant distortions, including heavy compression artifacts, noise, blur, and other distortions caused by improper operations during acquisition and sharing. To ensure the videos are of pristine quality, we carefully inspect each video multiple times by zooming in and remove those with visible distortions. We further reduce artifacts and other unwanted contamination by downsampling the videos to 1920×1080 pixels, from which we extract 10-second, semantically coherent clips. Eventually, we end up with 250 high-quality videos.
Using the aforementioned sequences as sources, each video is distorted by the following processes sequentially:
Spatial downsampling: We downsample the source videos with a bicubic filter to six spatial resolutions (1920×1080, 1280×720, 720×480, 512×384, 384×288, 320×240) according to the list of Netflix certified devices.
In total, we obtain 540 (hypothetical reference circuits) × 250 (sources) × 3 (encoders) = 405,000 test samples (currently the largest in the VQA community). We evaluate the quality of each video at a given spatial resolution and bitrate on five commonly used display devices (cellphone, tablet, laptop, desktop, and TV) using SSIMplus, for the following reasons. First, SSIMplus is currently the only HVS-motivated, spatial-resolution- and display-device-adapted VQA model, and it has been shown to outperform other state-of-the-art quality measures in accuracy and speed [30, 17]. Second, the simpler VQA model SSIM has been demonstrated to perform well in estimating GRD functions. The resulting dense SSIMplus samples are regarded as the ground truth of the GRD functions (SSIMplus ranges from 0 to 100, with 100 indicating perfect quality). Our GRD modeling approach is not tied to any specific VQA method; when other ways of generating dense ground-truth samples are available, the same approach may be applied.
Implementation Details: We initialize the scattered network with Delaunay triangulation, inherited from the CT method. The balance weight in (16) is fixed in advance; in our current experiments, the performance of the proposed RAMCT is fairly insensitive to variations of this value. We employ OSQP to solve the quadratic programming problem, with a fixed maximum number of iterations. The stopping threshold is set to 540 (the total number of representation samples in the discretized GRD function space) × 10 (the standard deviation of mean opinion scores in the LIVE Video Quality Assessment database), resulting in an average of 38 samples. When the uncertainty falls below this threshold, we conclude that the remaining uncertainty in the system can be explained by the disagreement between subjects, so further improvement in prediction accuracy may not be meaningful. Since a triangulation covers only the convex hull of the scattered point set, extrapolation beyond the convex hull is not possible. To make a fair comparison, we initialize the training set with the representations of maximum and minimum bitrate at every spatial resolution. To construct the covariance matrix described in Section IV and to test the proposed algorithm, we randomly segregate the database into a training set of 200 GRD functions and a testing set of 50 GRD functions. The random split is repeated 50 times, and the median performance is reported.
Evaluation Criteria: We test the performance of the GRD estimators in terms of both accuracy and rate of convergence. Specifically, we use two accuracy metrics: the mean squared error (MSE) and the maximum (worst-case) norm of the error values between the estimated function and the actual function for each source content. The median results are then computed over all testing functions. All interpolation models can fit increasingly complex GRD functions at the cost of using many parameters; what distinguishes these models from each other is the rate and manner with which the quality of the approximation varies with the number of training samples.
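A minimal sketch of the two accuracy metrics, assuming both surfaces are sampled on the same dense grid (the function name is ours):

```python
import numpy as np

def grd_errors(estimate, truth):
    """MSE and worst-case deviation between two surfaces on the same grid."""
    err = np.asarray(estimate, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.mean(err ** 2)), float(np.max(np.abs(err)))
```

On a toy pair of surfaces, `grd_errors([1, 2, 3], [1, 2, 5])` yields an MSE of 4/3 and a worst-case error of 2; the worst-case metric is what exposes locally biased fits that the MSE can average away.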
We test five GRD function models on the aforementioned database: reciprocal regression, logarithmic regression, 1D piecewise cubic Hermite interpolating polynomial (PCHIP), CT interpolation, and the proposed RAMCT. To evaluate the uncertainty sampling algorithm, we apply it to the five GRD models above and compare its performance with a random sampling baseline. For random sampling, the initial training set is likewise the representations of maximum and minimum bitrate at every spatial resolution to allow fair comparison. The training process with random sampling is repeated 50 times, and the median performance is reported.
Tables I and II show the prediction accuracy on the database, from which the key observations are summarized as follows. First, the models that assume a certain analytic functional form are consistently biased, failing to accurately fit GRD functions even with all samples probed. The existing interpolation models, although asymptotically unbiased, usually take more than 100 random samples to converge. By contrast, the proposed RAMCT model converges with only a moderate number of samples. Second, we analyze the core contributors to RAMCT with a deliberate selection of competing models. Specifically, the 1D monotonic interpolant PCHIP significantly outperforms the 2D generic interpolant CT, suggesting the importance of axial monotonicity; RAMCT achieves even better performance by exploiting the 2D structure and jointly modeling the GRD functions. Third, we observe strong generalizability of the proposed uncertainty sampling strategy, evidenced by the significant improvement over random sampling for all models. The improvement is most salient for the proposed model. In general, RAMCT is able to accurately model GRD functions with only 30 labeled samples, with which the reciprocal model barely has enough known variables to initialize fitting. To give a concrete impression, we also recorded the execution time of the entire GRD estimation pipeline, including video encoding, objective VQA, and GRD function approximation, for the competing algorithms on a computer with a 3.6 GHz CPU and 16 GB RAM. RAMCT with uncertainty sampling takes around 10 minutes to reduce the error below 5, which is more than 100 times faster than the traditional regression models with random sampling.
VI Applications
The application scope of the GRD model is much broader than VQA. Here we demonstrate three use cases of the proposed model.
VI-A Rate-Distortion Curve at Novel Resolutions
Given a set of rate-distortion curves at multiple resolutions, it is desirable to predict the rate-distortion performance at novel resolutions, especially when there is a mismatch between the viewing devices supported by the downstream content delivery network and the recommended encoding profiles. Traditional methods linearly interpolate the rate-distortion curve at novel resolutions, neglecting the characteristics of GRD functions. Fig. 5 compares the linearly interpolated and RAMCT-interpolated rate-distortion curves at 960×540 with the ground-truth SSIMplus curve, from which we make several observations. First, the linearly interpolated curve shares its intersections with the neighboring curves at 720×480 and 1280×720, inducing a consistent bias into the prediction. The proposed RAMCT model accurately predicts the quality at the intersection of the neighboring curves by taking all known rate-distortion curves into consideration. Second, the linearly interpolated rate-distortion curve always lies between its neighboring curves, so its predicted quality at any bitrate is lower than the quality on one of the neighboring curves. This behavior contradicts the fact that each resolution may have a bitrate region in which it outperforms the other resolutions. By contrast, RAMCT better preserves the general trend of the resolution-quality curve across bitrates, thanks to the regularization imposed by the continuity condition at the given nodes. Third, RAMCT outperforms the linear interpolation model in predicting the ground-truth rate-distortion curve across all bitrates. The experimental results also justify the effectiveness of the smoothness prior used in RAMCT.
To further validate the performance of the proposed GRD model at novel spatial resolutions, we predict the rate-distortion curves of 20 randomly selected source videos from the dataset at three novel resolutions (640×360, 960×540, and 1600×900). The evaluated bitrates range from 100 kbps to 9 Mbps with a step size of 100 kbps. The results are listed in Table III. We observe that RAMCT outperforms the linear model by a clear margin at novel resolutions.
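The prediction task above can be sketched with a simple stand-in for RAMCT: multilinear interpolation over a (resolution height, log-bitrate) grid. The grid values below are hypothetical; interpolating in log-bitrate is a common recipe because rate-quality curves are closer to linear on a logarithmic rate axis.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical GRD samples: quality[i, j] at resolution heights[i], bitrates[j].
heights = np.array([360.0, 480.0, 720.0, 1080.0])
bitrates = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0])
quality = np.array([
    [48.0, 60.0, 68.0, 73.0, 76.0, 78.0],
    [42.0, 58.0, 70.0, 78.0, 83.0, 86.0],
    [30.0, 50.0, 66.0, 79.0, 87.0, 91.0],
    [18.0, 40.0, 58.0, 74.0, 86.0, 93.0],
])

# Interpolate on (height, log-bitrate) to reduce interpolation bias.
grd = RegularGridInterpolator((heights, np.log(bitrates)), quality)

def predict_curve(height, query_bitrates):
    """Predict the rate-quality curve at a novel resolution height."""
    pts = np.column_stack([np.full(len(query_bitrates), height),
                           np.log(query_bitrates)])
    return grd(pts)

curve_540 = predict_curve(540.0, np.array([500.0, 1000.0, 4000.0]))
```

Note that this linear stand-in inherits exactly the bias discussed above (the predicted curve is confined between its neighbors); RAMCT's monotonicity- and smoothness-regularized interpolant is what removes it.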
VI-B Per-Title Encoding Profile Generation
To overcome the heterogeneity in users’ network conditions and display devices, video service providers often encode videos at multiple bitrates and spatial resolutions. However, the selection of the encoding profiles is either hard-coded, resulting in sub-optimal QoE because differences in source video complexity are neglected, or made through interactive objective measurement and subjective judgment, which is inconsistent and time-consuming. To deliver the best quality video to consumers, each title should receive a unique bitrate ladder, tailored to its specific complexity characteristics. We introduce a quality-driven per-title optimization framework to automatically select the best encoding configurations, in which the proposed GRD model serves as the key component.
Content delivery networks often aim to deliver videos at certain quality levels to satisfy different viewers. While achieving this objective, it is beneficial to minimize the bitrate usage in the encoding profile. Mathematically, the quality-driven bitrate ladder selection problem can be formulated as a constrained optimization problem. Specifically, for the $i$-th representation,
\[
\min_{R_i,\, s_i} \; R_i \quad \text{subject to} \quad f(R_i, s_i) \ge q_i, \qquad i = 1, \dots, N,
\]
where $R_i$, $s_i$, $f$, $q_i$, and $N$ represent the bitrate, the spatial resolution, the GRD function, the target quality level of video representation $i$, and the total number of video representations, respectively. Solving the optimization problem requires precise knowledge of the GRD function. Thanks to the effectiveness and differentiability of RAMCT, the proposed model can be combined with gradient-based optimization tools to solve the per-title optimization problem. (Interested readers may refer to the appendix for more details on how we solve the optimization problem.)
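Since the GRD function is monotonically increasing in bitrate, each per-representation subproblem reduces to finding the smallest bitrate that reaches the target quality at some candidate resolution. The sketch below solves this with bisection rather than the gradient-based solver used in the paper; the toy GRD function, the candidate resolutions, and the target qualities are all illustrative assumptions.

```python
import numpy as np

def min_bitrate_for_quality(grd, height, target_q,
                            r_lo=100.0, r_hi=20000.0, tol=1.0):
    """Bisection on bitrate: smallest R with grd(R, height) >= target_q.
    Relies on the GRD function being monotonically increasing in bitrate."""
    if grd(r_hi, height) < target_q:
        return None  # target unreachable at this resolution
    while r_hi - r_lo > tol:
        mid = 0.5 * (r_lo + r_hi)
        if grd(mid, height) >= target_q:
            r_hi = mid
        else:
            r_lo = mid
    return r_hi

def build_ladder(grd, heights, targets):
    """For each target quality, pick the (bitrate, resolution) pair that
    reaches the target at the lowest bitrate -- the convex-hull choice."""
    ladder = []
    for q in targets:
        candidates = [(min_bitrate_for_quality(grd, h, q), h) for h in heights]
        candidates = [(r, h) for r, h in candidates if r is not None]
        ladder.append(min(candidates))  # lowest bitrate wins
    return ladder

# Toy GRD stand-in: quality saturates with bitrate, and higher resolutions
# pay a bitrate overhead but reach a higher quality ceiling.
def toy_grd(rate, height):
    ceiling = 60.0 + 10.0 * np.log2(height / 270.0)
    return ceiling * (1.0 - np.exp(-rate / (2.0 * height)))

ladder = build_ladder(toy_grd, [360.0, 540.0, 720.0, 1080.0],
                      [50.0, 65.0, 72.0])
```

With this toy surface, the low target is served most cheaply by a low resolution and the high target by a high resolution, mirroring the content-adaptive ladder behavior described below.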
To validate the proposed per-title encoding profile selection algorithm, we apply it to generate bitrate ladders using H.264 for 50 randomly selected videos in the aforementioned dataset. We set the target quality levels to cover a diverse quality range and to match the total number of representations in standard recommendations. For simplicity, we optimize the representation sets for only one viewing device (cellphone), although the procedure can be readily extended to multiple devices to generate a more comprehensive representation set. In Fig. 6, we compare the rate-quality curves of representation sets generated by the proposed algorithm and by the recommendations of Netflix, Apple, and Microsoft for three videos with different complexities, from which the key observations are as follows. First, in contrast to the hand-crafted bitrate ladders, the encoding profile generated by the proposed algorithm is content adaptive. Specifically, the encoding bitrate increases with the complexity of the source video, as illustrated from Fig. 6(a) to Fig. 6(c). Second, the proposed method achieves the highest quality at all bitrate levels. The performance improvement is mainly attributable to encoding along the convex hull of the individual per-resolution rate-distortion curves. Table IV provides a full summary of the Bjøntegaard-Delta bitrate (BD-Rate), indicating the bitrate overhead required to achieve the same SSIMplus values. We observe that the proposed framework outperforms the existing hard-coded representation sets by at least 47%.
VI-C Codec Comparison
In the past decade, there has been tremendous growth in video compression algorithms, thanks to the fast development of computational multimedia. With many video encoders at hand, it becomes pivotal to compare their performance, so as to find the best algorithm as well as directions for further advancement. The Bjøntegaard-Delta model [8, 9] has become the most commonly used objective coding efficiency measurement. Bjøntegaard-Delta PSNR (BD-PSNR) and BD-Rate are typically computed as the difference in quality (measured in PSNR) and in bitrate, respectively, based on the interpolated rate-distortion curves
\[
\text{BD-PSNR} = \frac{1}{r_H - r_L} \int_{r_L}^{r_H} \big(Q_T(r) - Q_R(r)\big)\, \mathrm{d}r,
\qquad
\text{BD-Rate} = \frac{1}{q_H - q_L} \int_{q_L}^{q_H} \big(r_T(q) - r_R(q)\big)\, \mathrm{d}q,
\]
where $r$ denotes the logarithmic-scale bitrate, $Q_R(r)$ and $Q_T(r)$ are the quality of the interpolated reference and test rate-distortion curves, respectively, $r_R(q)$ and $r_T(q)$ are their inverses, and $[r_L, r_H]$ and $[q_L, q_H]$ are the effective domain and range of the rate-distortion curves. However, BD-PSNR and BD-Rate do not take spatial resolution and viewing condition into consideration. Fig. 7 shows two GRD surfaces generated by the H.264 and HEVC encoders for a source video. Although H.264 performs on par with HEVC at low resolutions, it requires a higher bitrate to achieve the same target quality at high resolutions. Therefore, applying BD-Rate at a single resolution is not sufficient to fairly compare the overall performance of encoders. To this end, we propose generalized quality gain and rate gain models as
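For concreteness, the standard BD-Rate computation can be sketched as follows. The recipe (interpolate log-rate as a function of quality, average the difference over the shared quality range) is the conventional one; the curve data and the use of PCHIP rather than the original cubic fit are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def bd_rate(qual_ref, rates_ref, qual_test, rates_test, n=1000):
    """Bjøntegaard-Delta rate: average bitrate difference (in percent) of the
    test encoder over the reference at equal quality, averaged over the
    overlapping quality range. Interpolates log-rate against quality."""
    log_r_ref = PchipInterpolator(qual_ref, np.log(rates_ref))
    log_r_test = PchipInterpolator(qual_test, np.log(rates_test))
    q_lo = max(qual_ref.min(), qual_test.min())
    q_hi = min(qual_ref.max(), qual_test.max())
    q = np.linspace(q_lo, q_hi, n)  # dense uniform grid over shared range
    avg_diff = float(np.mean(log_r_test(q) - log_r_ref(q)))
    return (np.exp(avg_diff) - 1.0) * 100.0

# Toy curves: the test encoder needs 20% less bitrate at every quality level.
qual = np.array([60.0, 70.0, 80.0, 88.0])
rates_ref = np.array([1000.0, 2000.0, 4000.0, 8000.0])
rates_test = 0.8 * rates_ref

gain = bd_rate(qual, rates_ref, qual, rates_test)  # ~ -20.0 (percent)
```

A negative BD-Rate means the test encoder achieves the same quality at a lower bitrate; the generalized gains proposed here extend this single-resolution comparison over the full GRD surface.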