A user model for JND-based video quality assessment: theory and applications

Abstract

Video quality assessment (VQA) technology has attracted much attention in recent years due to the increasing demand for video streaming services. Existing VQA methods are designed to predict video quality in terms of the mean opinion score (MOS) calibrated by humans in subjective experiments. However, they cannot predict the satisfied user ratio (SUR) of an aggregated viewer group. Furthermore, they provide little guidance for video coding parameter selection, e.g. the quantization parameter (QP) of a set of consecutive frames, in practical video streaming services. To overcome these shortcomings, the just-noticeable-difference (JND) based VQA methodology has been proposed as an alternative. It has been observed experimentally that the JND location is a normally distributed random variable. In this work, we explain this distribution by proposing a user model that takes both subject and content variabilities into account. The model is built upon a user's capability to discern the quality difference between video clips encoded with different QPs, and it analyzes video content characteristics to account for inter-content variability. The proposed user model is validated on the data collected in the VideoSet. It is demonstrated that the model is flexible enough to predict the SUR distribution of a specific user group.

Keywords: Video Quality Assessment, Just Noticeable Difference, Satisfied User Ratio

Further author information: (Send correspondence to Haiqiang Wang)
Haiqiang Wang: E-mail: haiqianw@usc.edu

1 Introduction

Although expensive in both time and money, the subjective experiment is the ultimate method for quantifying the perceptual quality of compressed video. Obtaining accurate and robust labels from votes provided by human observers is a critical step in Quality of Experience (QoE) evaluation. A typical subjective experiment [assembly2003methodology] involves: 1) selecting several representative stimuli, 2) presenting them to a group of subjects, and 3) collecting the quality scores assigned by the subjects. The collected subjective scores should go through a cleaning and modeling process before being used to validate the performance of objective video quality assessment metrics.

Absolute Category Rating (ACR) is one of the most commonly used subjective test methods. Test video clips are displayed on a screen for a certain amount of time, and observers rate their perceived quality on an abstract scale [itu1999subjective], such as “Excellent (5)”, “Good (4)”, “Fair (3)”, “Poor (2)” and “Bad (1)”. There are two approaches to aggregating multiple scores on a given clip: the mean opinion score (MOS) and the difference mean opinion score (DMOS). The MOS is computed as the average score over all subjects, while the DMOS is computed from the per-subject differences between the raw quality scores of the reference and the test stimuli.
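Both aggregates reduce to simple averaging. The following minimal sketch (with illustrative array inputs, not data from any cited study) makes the two definitions concrete:

```python
import numpy as np

def mos(scores):
    """Mean opinion score: average of the raw scores one stimulus received."""
    return float(np.mean(scores))

def dmos(ref_scores, test_scores):
    """Difference mean opinion score: average per-subject difference between
    the raw scores of the reference and of the test stimulus."""
    return float(np.mean(np.asarray(ref_scores) - np.asarray(test_scores)))

# Example: five subjects rate a reference and a test clip on the 5-level scale.
print(mos([5, 4, 4, 5, 3]))                     # 4.2
print(dmos([5, 4, 4, 5, 3], [3, 3, 4, 4, 2]))   # 1.0
```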

Both MOS and DMOS are popular in the quality assessment community. However, they have several limitations [chen2009crowdsourceable, ye2014active]. First, the MOS scale is treated as an interval scale rather than an ordinal one: a linear relationship is assumed between the MOS distance and the cognitive distance. For example, a quality drop from “Excellent” to “Good” is treated the same as one from “Poor” to “Bad”. To a metric learning system there is no difference, since the same ordinal distance is preserved (i.e. the quality distance is 1 in both cases on the aforementioned 5-level scale). However, the human viewing experience differs considerably when quality changes at different levels. Moreover, it is rare to find a video clip exhibiting poor or bad quality in real-life video applications. As a consequence, the number of useful quality levels drops from five to three, which is too coarse for video quality measurement.

The second challenge is that scores from subjects are typically assumed to be independent and identically distributed (i.i.d.) random variables. This assumption rarely holds. Given multiple quality votes on the same content, each individual vote contributes equally in the MOS aggregation method [whitehill2009whose], yet subjects may have different levels of expertise in perceived video quality. A critical viewer may give low quality ratings to coded clips whose quality is still good to the majority [li2014confidence], and this bias persists across all presented stimuli. The absolute category rating method is also confusing to subjects, as they understand and interpret the rating scale differently.

To overcome the limitations of the MOS method, the JND-based VQA methodology was proposed in [lin2015experimental] as an alternative. A viewer is asked to compare a pair of coded clips and determine whether a noticeable difference can be observed. The pair consists of two stimuli, i.e. a distorted stimulus (the comparison) and an anchor preserving the targeted quality. A bisection search is adopted to reduce the number of pair comparisons. The JND reflects the boundary between perceived quality levels, which is well suited for determining the optimal image/video quality at minimum bit rates. For example, the first JND, whose anchor is the source clip, is the boundary between the “Excellent” and “Good” categories. This boundary is decided subjectively rather than selected empirically by the experiment designer.

In MOS- or JND-based VQA methods, subjective data are noisy due to the nature of “subjective opinion”. In the extreme case, some subjects submit random answers rather than good-faith labels. Even worse, adversarial votes may occur due to malice or a systematic misinterpretation of the task. Thus, it is critical to study subject capability and reliability to alleviate their effects on the VQA task.

In this work, we propose a user model that takes subject bias and inconsistency into account. The perceived quality of compressed video is characterized by the satisfied user ratio (SUR). The SUR value is a continuous random variable depending on subject and content factors. We study how the SUR varies with the user profile as well as with contents of varying difficulty. The proposed model aggregates quality ratings per user group to address inter-group differences. The proposed user model is validated on the data collected in the VideoSet [wang2017videoset]. It is demonstrated that the model is flexible enough to predict the SUR distribution of a specific user group.

The rest of this paper is organized as follows. Related work is reviewed in Sec. 2. The proposed user model is presented in Sec. 3. Experimental results are shown in Sec. 4. Finally, concluding remarks are given in Sec. 5.

2 Related work

Several popular datasets are available in the video quality assessment community, such as LIVE [seshadrinathan2010study], VQEG-HD [video2010report], MCL-V [Lin20151] and NETFLIX-TEST [li2017recover], all using the MOS aggregation approach. Recently, efforts have been made to examine MOS-based subjective test methods, and various methods have been proposed from different perspectives to address the limitations mentioned in Section 1.

A theoretical subject model [janowski2015accuracy] was proposed to capture the three major factors that influence MOS accuracy: subject bias, subject inaccuracy, and stimulus scoring difficulty. It was reported that the distributions of these three factors spanned a substantial portion of the rating scale. In particular, the subject error terms explained previously observed inconsistencies both within a single subject’s data and across labs. A perceptually weighted rank correlation indicator [8272001] was proposed, which rewards the correct ranking of high-quality images and suppresses attention to insensitive rank mistakes. A generative model [li2017recover] was proposed to jointly recover content and subject factors by solving a maximum likelihood estimation problem. However, these models target the traditional MOS-based approaches.

Recently, there have been substantial efforts in JND-based video quality analysis. The human visual system (HVS) cannot perceive small pixel variations in coded video until the difference reaches a certain level. In the traditional MOS-based framework, however, the quality differences between the contents selected for rating were sufficiently large to be visible to the majority of subjects. By directly measuring the JND threshold of each subject, we can conduct fine-grained quality analysis. Several datasets [wang2017videoset, jin2016jndhvei, mcl_jcv] were built with the JND methodology, and corresponding JND prediction methods were proposed in [huang2017measure, wang2017prediction]. However, the JND location was analyzed in a data-driven fashion: it was simply modeled by the mean value of multiple JND samples with a heuristic subject rejection approach.

A probability model [wang2018jnd] was proposed to offer new insights into the JND phenomenon. Inspired by [li2017recover], the generative model decomposes the JND-based video quality score into subject and content factors. A closed-form expression was derived to estimate the JND location by aggregating multiple binary decisions. It was shown that the JND samples follow a normal distribution parameterized by the subject and content factors. These unknown factors were jointly estimated by solving a maximum likelihood estimation (MLE) problem.

3 Proposed user model

In this section, we present the proposed user model based on the JND methodology. Let $s$ denote a reference video content, which can be compressed into a set of clips $c_q$, $q \in \{0, 1, \ldots, 51\}$, where $q$ is the quantization parameter (QP) index used in H.264/AVC. Typically, clip $c_{q_1}$ has a higher PSNR value than clip $c_{q_2}$ if $q_1 < q_2$, and $c_0$ is the losslessly coded copy of $s$.

The JND of coded clips characterizes the distortion visibility threshold with respect to a given anchor $c_{q_a}$. In the subjective experiment, JND points are obtained from a sequence of consecutive noticeable/unnoticeable difference tests on clip pairs $(c_{q_a}, c_q)$ with $q > q_a$. For example, the anchor for the first JND point is $c_0$, and it remains the same while searching for the first JND point. A bisection search is adopted to update $q$ effectively and reduce the total number of comparisons.
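To make the search concrete, the sketch below simulates the bisection procedure for one JND point. The oracle `sees_difference`, standing in for a subject's binary response, is a hypothetical placeholder rather than part of the actual VideoSet protocol.

```python
def find_jnd(sees_difference, q_anchor, q_max=51):
    """Bisection search for one JND point.

    sees_difference(q) -> True if the subject notices a difference between
    the anchor clip c_{q_anchor} and the comparison clip c_q.
    Returns the smallest QP at which the difference becomes noticeable.
    """
    lo, hi = q_anchor, q_max     # invariant: unnoticed at lo, assumed noticed at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if sees_difference(mid):
            hi = mid             # difference visible: JND is at or below mid
        else:
            lo = mid             # difference invisible: JND is above mid
    return hi

# Toy subject whose true JND sits at QP 32.
print(find_jnd(lambda q: q >= 32, q_anchor=0))  # 32
```

With the full QP range $[0, 51]$, the loop terminates after about six comparisons, consistent with the six rounds of comparison mentioned below.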

Consider a VQA dataset consisting of $J$ contents and $I$ subjects. The JND data matrix is modeled as $X = \{x_{ij}\}$, where the individual JND location $x_{ij}$, for $i = 1, \ldots, I$ and $j = 1, \ldots, J$, is obtained through six rounds of comparison. The following analysis is conducted on the data matrix $X$ to recover the underlying subject and content factors.

It was demonstrated in [wang2018jnd] that the perceived video quality depends on several causal factors: 1) the bias of a subject, 2) the inconsistency of a subject, 3) the average JND location of a content, and 4) the difficulty of a content to evaluate. The JND location of content $j$ from subject $i$ can be expressed as

$$X_{ij} = x_j + B_i + A_{ij}, \qquad B_i \sim N(b_i, v_i^2), \qquad A_{ij} \sim N(0, a_j^2), \tag{1}$$

where $x_j$ and $a_j$ are content factors while $b_i$ and $v_i$ are subject factors. The difficulty of content $j$ is modeled by $a_j$. A larger $a_j$ value means that the content's masking effects are stronger, so that even the most experienced experts have difficulty spotting artifacts in the compressed clips. The bias of subject $i$ is modeled by the parameter $b_i$. If $b_i < 0$, the subject is more sensitive to quality degradation in compressed video clips; if $b_i > 0$, the subject is less sensitive to distortions. The bias of an average subject is around zero. Moreover, the subject variance $v_i^2$ captures the inconsistency of the quality votes from subject $i$: a consistent subject (small $v_i$) evaluates all sequences attentively.
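As a sanity check of Eq. (1), the sketch below draws synthetic JND samples from the generative model; every numeric range is an illustrative assumption, not a value estimated from the VideoSet.

```python
import numpy as np

rng = np.random.default_rng(0)

I, J = 30, 15                        # subjects, contents (illustrative sizes)
x = rng.uniform(28, 35, size=J)      # x_j: average JND location per content
a = rng.uniform(1, 4, size=J)        # a_j: content difficulty
b = rng.normal(0, 2, size=I)         # b_i: subject bias (centered around zero)
v = rng.uniform(1, 3, size=I)        # v_i: subject inconsistency

# Eq. (1): X_ij = x_j + B_i + A_ij, with B_i ~ N(b_i, v_i^2), A_ij ~ N(0, a_j^2).
B = rng.normal(b[:, None], v[:, None], size=(I, J))
A = rng.normal(0.0, a[None, :], size=(I, J))
X = x[None, :] + B + A               # I x J matrix of JND locations
```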

3.1 Satisfied user ratio for a specific user group

Under the assumption that content and subject factors independently affect the perceived video quality, the JND position follows a Gaussian distribution of the form

$$X_{ij} \sim N(\mu_{ij}, \sigma_{ij}^2), \tag{2}$$

where $\mu_{ij} = x_j + b_i$ and $\sigma_{ij}^2 = a_j^2 + v_i^2$. The unknown parameters are $\theta = (\{x_j\}, \{a_j\}, \{b_i\}, \{v_i\})$ for $i = 1, \ldots, I$ and $j = 1, \ldots, J$, where $\theta$ denotes the corresponding parameter set. All unknown parameters can be jointly estimated via the Maximum Likelihood Estimation (MLE) method given the subjective data matrix $X$. This is a well-formulated parameter inference approach, and we refer interested readers to [li2017recover, wang2018jnd] for more details.
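A minimal sketch of the MLE step under the Gaussian form of Eq. (2): the negative log-likelihood of the JND matrix is minimized with a generic optimizer. This follows the spirit of [li2017recover, wang2018jnd] rather than reproducing their exact solver; in particular, a zero-mean constraint on the biases $b_i$, which removes the shift ambiguity between $b_i$ and $x_j$, is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X):
    """theta packs (x_j, a_j) for J contents and (b_i, v_i) for I subjects."""
    I, J = X.shape
    x, a = theta[:J], theta[J:2 * J]
    b, v = theta[2 * J:2 * J + I], theta[2 * J + I:]
    mu = x[None, :] + b[:, None]                       # Eq. (2): x_j + b_i
    var = a[None, :] ** 2 + v[:, None] ** 2 + 1e-6     # Eq. (2): a_j^2 + v_i^2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var)

def fit(X):
    I, J = X.shape
    theta0 = np.concatenate([X.mean(axis=0),   # x_j init: per-content mean
                             np.ones(J),       # a_j init
                             np.zeros(I),      # b_i init: zero bias
                             np.ones(I)])      # v_i init
    res = minimize(neg_log_likelihood, theta0, args=(X,), method="L-BFGS-B")
    return res.x
```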

Among the four types of parameters in $\theta$, we have limited control over the content factors, i.e. $x_j$ and $a_j$; they should be treated as independent inputs to a quality model. In practice, it is also difficult, sometimes even impossible, to model subject inconsistency (i.e., the $v_i$ term), since it is the viewer's freedom to decide how much attention to pay to the video content.

On the other hand, the subject bias term $b_i$ is a consistent prior of each subject. It is therefore reasonable to model the subject bias and integrate it into an SUR model. We can roughly classify users into three groups based on the bias estimated from the MLE (a simple grouping rule is sketched after the list below). The user model aims to provide a flexible system that accommodates different viewer groups:

  • Viewers who are easy to satisfy (ES), corresponding to a larger $b_i$;

  • Viewers who have normal sensitivity (NS), corresponding to a neutral $b_i$ (around zero);

  • Viewers who are hard to satisfy (HS), corresponding to a smaller $b_i$.
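A possible grouping rule is sketched below; the threshold value is an illustrative assumption, not one prescribed by our experiments.

```python
import numpy as np

def group_by_bias(b, threshold=2.0):
    """Split subject indices into HS / NS / ES groups by estimated bias b_i."""
    b = np.asarray(b)
    return {"HS": np.where(b < -threshold)[0],        # hard to satisfy: smaller b_i
            "NS": np.where(np.abs(b) <= threshold)[0],
            "ES": np.where(b > threshold)[0]}         # easy to satisfy: larger b_i
```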

Figure 1: Consecutive frames of the two example contents analyzed in Sec. 4, one per row.

Furthermore, a viewer is said to be satisfied if he or she cannot perceive the quality difference between the compressed clip and its anchor. The Satisfied User Ratio (SUR) of content $j$ coded at QP $q$ on user group $G_k$ can be expressed as

$$\mathrm{SUR}_j^{(k)}(q) = \frac{1}{|G_k|} \sum_{i \in G_k} p_{ij}(q), \tag{3}$$

where $G_k$ is the $k$th group of subjects and $|\cdot|$ denotes the cardinality. Here $p_{ij}(q) = 0$ or $1$ if the $i$th subject can or cannot see the difference between the compressed clip and its anchor, respectively. The summation term on the right-hand side of Eq. (3) is the empirical complementary cumulative distribution function (CDF) of the JND random variable $X_{ij}$, since a subject is satisfied at QP $q$ exactly when his or her JND location exceeds $q$. Then, by substituting Eq. (2) into Eq. (3), we obtain a compact expression for the SUR curve as

$$\mathrm{SUR}_j^{(k)}(q) = Q\!\left(\frac{q - (x_j + b^{(k)})}{\sqrt{a_j^2 + (v^{(k)})^2}}\right), \tag{4}$$

where $Q(\cdot)$ is the Q-function of the standard normal distribution, and $b^{(k)}$ and $v^{(k)}$ are the representative bias and inconsistency of group $G_k$. By dividing users into different groups, the model achieves small intra-group variance and large inter-group variance, so that JND and SUR can be modeled more precisely. Alternatively, a universal model is obtained by replacing $G_k$ with the union of all subjects, i.e. $G = \bigcup_k G_k$.
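Both SUR definitions translate directly into code; the survival function `norm.sf` plays the role of the Q-function after standardization. A minimal sketch under the notation above:

```python
import numpy as np
from scipy.stats import norm

def empirical_sur(jnd_group, q):
    """Eq. (3): fraction of subjects in a group whose JND location exceeds QP q,
    i.e. who cannot see a difference at q and are therefore satisfied."""
    return float(np.mean(np.asarray(jnd_group) > q))

def model_sur(q, x_j, a_j, b_k, v_k):
    """Eq. (4): SUR(q) = Q((q - (x_j + b_k)) / sqrt(a_j^2 + v_k^2))."""
    mu = x_j + b_k
    sigma = np.sqrt(a_j ** 2 + v_k ** 2)
    return norm.sf(q, loc=mu, scale=sigma)   # Q-function = 1 - CDF
```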

4 Experimental results

We evaluate the performance of the proposed user model using real JND data from the VideoSet [wang2017videoset] and compare it with the MOS method. The VideoSet contains 220 video contents in four resolutions, with three JND points per resolution per content. During the subjective test, the dataset was split into 15 subsets, and each subset was evaluated independently by a group of around 35 subjects. We adopt a subset of the first JND point on 720p video in our experiment; it contains 15 video contents evaluated by 37 subjects.

Figure 2: Visualization of the cleaned JND data and the estimated subject and content factors: (a) cleaned JND data, where each pixel represents one JND location and a brighter pixel means that the JND occurs at a larger QP; (b) estimated content difficulty (i.e. $a_j$) using the MLE method; (c) estimated subject bias and inconsistency (i.e. $b_i$ and $v_i$); and (d) estimated JND locations using the MLE and MOS methods, respectively. The error bars in the subfigures represent confidence intervals.

4.1 Parameter Inference

The cleaned JND scores are shown in Figure 2(a), and the estimated subject bias and inconsistency are shown in Figure 2(c). Note that 5 subjects were identified as unreliable and their quality votes were removed; these subjects had a large bias value or inconsistent votes. We refer interested readers to [wang2018jnd] for further details.

Figure 2(b) shows the estimated content difficulty. The most difficult content is a scene of toddlers playing in a fountain. Its masking effect is strong due to the water drops in the background and the moving objects; compression artifacts are thus difficult to perceive, and it has the highest content difficulty. In contrast, the easiest content is a scene captured by a still camera, focusing on speakers against a blurred still background. Its content difficulty is low because the masking effect is weak, so compression artifacts are more noticeable. Representative frame thumbnails are given in Fig. 1.

The JND locations estimated with the MLE method and the MOS method are compared in Figure 2(d). The MLE approach offers more reliable estimates, as its confidence intervals are much tighter than those of the MOS method.

4.2 SUR on different viewer groups

We classify viewers into different groups based on the subject bias estimated from the cleaned JND data. The distributions of subject bias and inconsistency are given in Figure 3. The left and middle panels show the histograms of the two statistics, respectively; for a large percentage of viewers, both the bias and the inconsistency fall within a reasonable range. The right panel shows the scatter plot of the two factors, and we observe no strong correlation between them.

Figure 3: Illustration of subject factors. Left: the histogram of the subject bias. Middle: the histogram of subject inconsistency. Right: the scatter plot of subject inconsistency versus the subject bias.

In the following, we use the two contents examined in Sec. 4.1 as input to demonstrate the effectiveness of the proposed user model. Under fixed content factors, we compare the SUR differences between viewer groups. The fountain content has a strong masking effect and is difficult to evaluate (HC, “Hard Content”), while the still-camera content has a weak masking effect and is easy to evaluate (EC, “Easy Content”).

Figure 4 shows the effect of the subject factors on the SUR curve. There are six SUR curves, obtained by combining the different content and subject factors. The input parameters, obtained from the MLE, are set as follows (a numerical sketch follows the list).

  • The subject bias $b^{(k)}$ is set to -4, 0 and 4 for the HS, NS and ES groups, respectively.

  • The subject inconsistency $v^{(k)}$ is set to 2 for all groups.

  • The average JND locations $x_j$ are set to 31.7 and 30.39 for the HC and EC clips, respectively.

  • The content difficulty levels $a_j$ are set to 3.962 and 1.326 for the HC and EC clips, respectively.
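Plugging these parameters into Eq. (4) reproduces the six curves. The sketch below reuses `model_sur` from Sec. 3.1 and prints, for each content/group combination, the QP at which the curve crosses SUR = 0.5 together with the local steepness; with these parameters, the EC curve drops roughly twice as fast as the HC curve around the 0.5 point.

```python
contents = {"HC": (31.70, 3.962), "EC": (30.39, 1.326)}   # (x_j, a_j)
groups = {"HS": -4.0, "NS": 0.0, "ES": 4.0}               # b_k; v_k = 2 for all

for cname, (x_j, a_j) in contents.items():
    for gname, b_k in groups.items():
        q_half = x_j + b_k   # Gaussian model: SUR crosses 0.5 at its mean
        # SUR loss incurred by coding one QP step beyond the 0.5 point.
        drop = 0.5 - model_sur(q_half + 1.0, x_j, a_j, b_k, 2.0)
        print(f"{cname}/{gname}: SUR = 0.5 at QP {q_half:.2f}, "
              f"drop ~ {drop:.3f} per QP")
```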

We have the following two observations.

  1. SUR difference for normal users
    Consider the middle curves of the EC and HC contents. Subjects in this group have normal sensitivity, and we use this group to represent the majority. Intuitively, the content diversity is large if we visually examine the two clips. However, if we target $\mathrm{SUR} = 0.5$, which is the counterpart of the mean value in the MOS method, the QP locations on the modeled SUR curves are quite close. The difference increases as the SUR deviates from the 0.5 location. Contents with a weak masking effect (blue curve) are less resistant to compression distortion, so the SUR drops sharply once artifacts become noticeable. In contrast, contents with a strong masking effect (red curve) have better discriminatory power with respect to subject capability, so the SUR curve drops slowly. Given the same extra bitrate quota, we could therefore expect a higher SUR gain from EC than from HC, and it takes much more effort to satisfy critical users when the content has a strong masking effect. We conclude that it is essential to study content difficulty and subject capability to better model the perceived quality of compressed video.

  2. SUR difference for different user groups
    The SUR difference between user groups on the same content is considerably large: we observe a clear gap between the three curves for both contents. Each group's SUR curve is shifted from that of the normal users by the subject bias in Eq. (4). Although the neutral user group covers the majority of users, we believe that a quality model better characterizes QoE by taking user capability into consideration.

Both observations are directly explained by the proposed user model, which demonstrates the value of our study.

Figure 4: Illustration of the proposed user model. The blue and red curves demonstrate the SUR of EC and HC contents, respectively. For each content, the three curves show the SUR difference between different user groups.

5 Conclusion and future work

A flexible user model was proposed in this work by considering subject and content factors in the JND framework. The QoE of a group of users was characterized by the Satisfied User Ratio (SUR), while the JND location of a content for a given subject was modeled as a random variable parameterized by the subject and content factors. The model parameters can be estimated by the MLE method from a set of JND-based subjective test data. As an application of the proposed user model, we studied SUR curves influenced by different user profiles and by contents of different difficulty levels. It was shown that subject capability significantly affects the SUR curves, especially in the middle range of the quality curve.

The proposed user model provides valuable insights into the quality assessment problem. We would like to exploit these insights for better SUR prediction on new contents in future work.

References
