Full Reference Objective Quality Assessment for Reconstructed Background Images

Aditee Shrotre and Lina J. Karam

Copyright (c) 2017 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
The authors are with the School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287. E-mail: {ashrotre,karam}@asu.edu.
Abstract

With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied for evaluating the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings in existing metrics and propose a full reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality in the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark to evaluate the performance of future metrics that are developed to evaluate the perceived quality of reconstructed background images.

Background Reconstruction, Image Quality Assessment, Image Database, Subjective Evaluation, Perceptual Quality, Objective Quality Metric

I Introduction

A clean background image has great significance in multiple applications. It can be used for video surveillance [1], activity recognition [2], object detection and tracking [3], [4], street view imaging and location-based services on web-based maps [5, 6], and texturing 3D models obtained from multiple photographs or videos [7]. But acquiring a clean photograph of a scene is seldom possible; there are always some unwanted objects occluding the background of interest. The technique of acquiring a clean background image by removing the occlusions using frames from a video or multiple views of a scene is known as background reconstruction or background initialization. Many algorithms have been proposed for initializing background images from videos, for example [8, 9, 10, 11, 12, 13, 14], as well as from multiple images of a scene, for example [15, 16, 17].

Background initialization or reconstruction faces multiple challenges. Pseudo-stationary backgrounds (e.g., waving trees, waves in water) make it difficult to separate the moving foreground objects from the relatively stationary background pixels. The illumination conditions can vary across the images, thus changing the global characteristics of each image. The illumination changes also cause local phenomena such as shadows, reflections and shading, which change the local characteristics of the background across the images or frames in a video. Finally, the removal of ‘foreground’ objects from the scene creates holes in the background that need to be filled in with pixels that maintain the continuity of the background texture and structures in the recovered image. Thus, background reconstruction algorithms can be characterized by two main tasks: 1. foreground detection, in which the foreground is separated from the background by classifying pixels as foreground or background; and 2. background recovery, in which the holes formed due to foreground removal are filled.

The performance of a background reconstruction algorithm depends on two factors: 1. its ability to detect the foreground objects in the scene and completely eliminate them; and 2. the perceived quality of the reconstructed background image. Traditional statistical techniques such as the Peak Signal to Noise Ratio (PSNR), Average Gray-level Error (AGE), total number of Error Pixels (EPs), percentage of EPs (pEPs), number of Clustered Error Pixels (CEPs) and percentage of CEPs (pCEPs) [18] quantify, to a certain extent, the ability of an algorithm to remove foreground objects from a scene, but they do not give an indication of the perceived quality of the generated background image. On the other hand, existing Image Quality Assessment (IQA) techniques such as the Multi-Scale Structural Similarity index (MS-SSIM) [19] and the Color image Quality Measure (CQM) [20], used by the authors in [21] to compare different background reconstruction algorithms, are not designed to identify any residual foreground objects in the scene. The lack of a quality metric that can reliably assess the performance of background reconstruction algorithms by quantifying both aspects of a reconstructed background image motivated the development of the proposed Reconstructed Background Quality Index (RBQI). RBQI uses contrast, structure and color information to determine the presence of any residual foreground objects in the reconstructed background image as compared to the reference background image, and to detect any unnaturalness introduced by the reconstruction algorithm that affects the perceived quality of the reconstructed background image.

This paper also presents two datasets that are constructed to assess the performance of the proposed as well as popular existing objective quality assessment methods in predicting the perceived visual quality of reconstructed background images. The datasets consist of reconstructed background images generated using different background reconstruction algorithms in the literature, along with the corresponding subjective ratings. Some of the existing datasets, such as video surveillance datasets (Wallflower [22], I2R [23]), background subtraction datasets (UCSD [24], CMU [25]) and the object tracking evaluation dataset “Performance Evaluation of Tracking and Surveillance” (PETS), are not suited for this application as they do not provide reconstructed background images but only foreground masks as ground truth. The more recent “Scene Background Modeling Net” (SBMNet) database [26] is targeted at comparing the performance of background initialization algorithms, but it does not provide any subjective ratings for the reconstructed background images. Hence, the SBMNet database [26] is not suited for benchmarking the performance of objective background visual quality assessment methods. The datasets proposed in this work are the first and currently the only datasets that can be used for benchmarking existing and future metrics developed to assess the quality of reconstructed background images.

The rest of the paper is organized as follows. In Section II, we highlight the limitations of existing popular assessment methods [27]. We introduce the new benchmarking datasets in Section III, along with the details of the subjective tests. In Section IV, we propose a new index that makes use of a probability summation model to combine structure and color characteristics at multiple scales for quantifying the perceived quality of reconstructed background images. Performance evaluation results for the existing and proposed objective visual quality assessment methods on reconstructed background images are presented in Section V. Finally, we conclude the paper in Section VI and provide directions for future research.

II Existing Full Reference Background Quality Assessment Techniques and Their Limitations

Existing background reconstruction quality metrics can be classified into two categories: statistical and image quality assessment (IQA) techniques, depending on the type of features used for measuring the similarity between the reconstructed background image and reference background image.

II-A Statistical Techniques

Statistical techniques use the intensity values at co-located pixels in the reference and reconstructed background images to measure their similarity. Popular statistical techniques [18] that have traditionally been used for judging the performance of background initialization algorithms are briefly described below; a computational sketch follows the list.

(i) Average Gray-level Error (AGE): AGE is calculated as the average of the absolute differences between the gray levels of co-located pixels in the reference and reconstructed background images.

(ii) Error Pixels (EPs): EPs gives the total number of error pixels. A pixel is classified as an error pixel if the absolute difference between the corresponding pixels in the reference and reconstructed background images is greater than an empirically selected threshold $\tau$.

(iii) Percentage Error Pixels (pEPs): the percentage of error pixels, calculated as EPs$/N$, where $N$ is the total number of pixels in the image.

(iv) Clustered Error Pixels (CEPs): CEPs gives the total number of clustered error pixels. A clustered error pixel is defined as an error pixel whose 4-connected pixels are also classified as error pixels.

(v) Percentage Clustered Error Pixels (pCEPs): the percentage of clustered error pixels, calculated as CEPs$/N$, where $N$ is the total number of pixels in the image.
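As a rough computational sketch of the above measures, assuming 8-bit grayscale images of equal size, the following Python snippet computes AGE, EPs, pEPs, CEPs and pCEPs; the default threshold value and the use of binary erosion for the 4-connectivity test are implementation choices, not values prescribed in [18].

```python
import numpy as np
from scipy.ndimage import binary_erosion

def statistical_measures(reference, reconstructed, tau=20):
    # Absolute gray-level error at every pixel (both images assumed grayscale).
    abs_err = np.abs(reference.astype(np.float64) - reconstructed.astype(np.float64))

    age = abs_err.mean()                        # Average Gray-level Error (AGE)
    error_mask = abs_err > tau                  # error pixels for threshold tau
    eps = int(error_mask.sum())                 # total number of Error Pixels (EPs)
    peps = eps / error_mask.size                # percentage of Error Pixels (pEPs)

    # A clustered error pixel is an error pixel whose 4-connected neighbors are
    # also error pixels; erosion with a 4-connected structuring element keeps
    # exactly those pixels.
    four_connected = np.array([[0, 1, 0],
                               [1, 1, 1],
                               [0, 1, 0]], dtype=bool)
    ceps_mask = binary_erosion(error_mask, structure=four_connected)
    ceps = int(ceps_mask.sum())                 # Clustered Error Pixels (CEPs)
    pceps = ceps / error_mask.size              # percentage of CEPs (pCEPs)
    return age, eps, peps, ceps, pceps
```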

Though these techniques have been used to judge the quality of the reconstructed background images, their performance has not been previously evaluated. As we show in Section V and as noted by the authors in [27], the statistical techniques were found to not correlate well with the subjective quality scores.

II-B Image Quality Assessment

The existing Full Reference Image Quality Assessment (FR-IQA) techniques use perceptually inspired features for measuring the similarity between two images. Though these techniques have been shown to work reasonably well while assessing images affected by distortions such as blur, compression artifacts and noise, these techniques have not been designed for assessing the quality of reconstructed background images. In [21] popular FR-IQA techniques including Peak Signal to Noise ratio (PSNR), Multi-scale Similarity metric (MS-SSIM) [19] and Color image Quality Measure (CQM) [20], were adopted for objectively comparing the performance of the different background reconstruction algorithms; however, no performance evaluation was carried out to support the choice of these techniques. Other popular IQA techniques include Structural Similarity Index (SSIM) [28], visual signal-to-noise ratio (VSNR) [29], visual information fidelity (VIF) [30], pixel-based VIF (VIFP) [30], universal quality index (UQI) [31], image fidelity criterion (IFC) [32], noise quality measure (NQM) [33], weighted signal-to-noise ratio (WSNR) [34], feature similarity index (FSIM) [35], FSIM with color (FSIMc) [35], spectral residual based similarity (SR-SIM) [36] and saliency-based SSIM (SalSSIM) [37]. The suitability of these techniques for evaluating the quality of reconstructed background images remains unexplored.

As the first contribution of this paper we present two benchmarking datasets that can be used for comparing the performance of different techniques in objectively assessing the perceived quality of the reconstructed background images. These datasets contain reconstructed background images along with their subjective ratings, details of which are discussed in Section III-A. When the statistical and IQA techniques were tested on these datasets, none of the techniques were found to correlate well with the subjective scores as discussed in Section V. This motivated our second contribution, the objective Reconstructed Background Quality Index (RBQI) that is shown to outperform all the existing techniques in assessing the perceived visual quality of reconstructed background images.

III Subjective Quality Assessment of Reconstructed Background Images

III-A Databases

In this section we present two different datasets constructed as part of this work to serve as benchmarks for comparing existing and future techniques developed for assessing the quality of reconstructed background images. The images and subjective experiments for both datasets are described in the subsequent subsections.

Each dataset contains the original sequence of images or videos that are used as inputs to the different reconstruction algorithms, the background images reconstructed by the different algorithms and the corresponding subjective scores.

III-A1 Reconstructed Background Quality (ReBaQ) Dataset

Fig. 1: Reference background images for the different scenes in the Reconstructed Background Quality (ReBaQ) Dataset; each reference background image corresponds to a captured scene background without foreground objects. (a) Scenes with static backgrounds: Street (outdoor scene), Hall (indoor scene), Wall (textured background), WetFloor (water as low-contrast foreground). (b) Scenes with pseudo-stationary backgrounds: Building (reflective), Escalator (large motion), Illumination (illumination variations), Park (small motion).

This database consists of sequences of multiple images for eight different scenes. Every image sequence consists of 8 different views such that the background is visible at every pixel in at least one of the views. A reference background image that is free of any foreground objects is also captured for every scene. Figure 1 shows the reference images corresponding to each of the eight different scenes in this database.

Each of the image sequences is used as input to twelve different background reconstruction algorithms [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. The 144 background images generated by these algorithms, along with the corresponding reference images for each scene, are then used for the subjective evaluation. Each of the scenes poses a different challenge for the background reconstruction algorithms. For example, “Street” and “Wall” are outdoor sequences with textured backgrounds, while “Hall” is an indoor sequence with a textured background. The “WetFloor” sequence challenges the underlying principle of many background reconstruction algorithms with water appearing as a low-contrast foreground object. The “Escalator” sequence has large motion in the background due to the moving escalator, while “Park” has smaller motion in the background due to waving trees. The “Illumination” sequence exhibits changing light sources, directions and intensities, while the “Building” sequence has changing reflections in the background. Broadly, the dataset contains two categories based on the scene characteristics: (i) Static, the scenes for which all the pixels in the background are stationary; and (ii) Dynamic, the scenes for which there are non-stationary background pixels (e.g., moving escalator, waving trees, varying reflections). Four out of the eight scenes in the ReBaQ dataset are categorized as Static and the remaining four are categorized as Dynamic. The reference background images corresponding to the static scenes are shown in Figure 1(a). Although there are reflections on the floor in the “WetFloor” sequence, they did not vary at the time of recording, and hence the scene is categorized as a static background scene. The reference background images corresponding to the dynamic background scenes are shown in Figure 1(b).

III-A2 SBMNet-based Reconstructed Background Quality (S-ReBaQ) Dataset

Fig. 2: Reference background images for the different scenes in the SBMNet-based Reconstructed Background Quality (S-ReBaQ) Dataset, with the SBMNet category of each scene in brackets: 511 (Basic), Advertisement Board (Background Motion), AVSS2007 (Intermittent Motion), Badminton (Jitter), Blurred (Basic), Board (Clutter), Boulevard (Jitter), Boulevard Jam (Clutter), Bus Station (Intermittent Motion), Bus Stop in Morning (Very Long), Camera Parameter (Illumination Changes), CUHK Square (Very Short), Dynamic Background (Very Short). Each reference background image corresponds to a captured scene background without foreground objects.

This dataset is created from the videos in the Scene Background Modeling Net (SBMNet) dataset [26] used for the Scene Background Modeling Challenge (SBMC) 2016 [38]. SBMNet consists of image sequences corresponding to a total of 79 scenes. These image sequences are representative of typical indoor and outdoor visual data captured in surveillance, smart environment, and video database scenarios. The spatial resolutions of the sequences corresponding to the different scenes vary from 240x240 to 800x600, and the length of the sequences varies from 6 to 9,370 images. The authors of SBMNet categorize these scenes into eight different classes based on the challenges posed [26]: (a) the Basic category represents a mixture of mild challenges typical of the Shadows, Dynamic Background, Camera Jitter and Intermittent Object Motion categories; (b) the Background Motion category includes scenes with strong (parasitic) background motion; for example, in the “Advertisement Board” sequence the advertisement board in the scene changes periodically; (c) the Intermittent Motion category includes sequences with scenarios known for causing “ghosting” artifacts in the detected motion; (d) the Jitter category contains indoor and outdoor sequences captured by unstable cameras; (e) the Clutter category includes sequences containing a large number of moving foreground objects occluding a large portion of the background; (f) the Illumination Changes category contains indoor sequences with strong and mild illumination changes; (g) the Very Long category contains sequences with more than 3,500 images each; (h) the Very Short category contains sequences with a limited number of images (less than 20). The authors of SBMNet [26] provide reference background images for only 13 of the 79 scenes, with at least one scene per category having a reference background image available. We use only these 13 scenes for which the reference background images are provided. Figure 2 shows the reference background images corresponding to the scenes in this dataset, with the categories from SBMNet [26] in brackets. Background images reconstructed by 14 algorithms submitted to SBMC [16, 12, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48] for the selected 13 scenes were used in this work for conducting the subjective tests. As a result, a total of 182 (14 x 13) reconstructed background images along with their corresponding subjective scores form the S-ReBaQ dataset.

III-B Subjective Evaluation

Fig. 3: Subjective test Graphical User Interface (GUI).
Fig. 4: Example of excellent image quality but with all of the foreground visible.

The subjective ratings are obtained by asking the human subjects to rate the similarity of the reconstructed background images to the reference background images. The subjects had to score the images based on three aspects: 1) overall perceived visual image quality; 2) visibility or presence of foreground objects; and 3) perceived background reconstruction quality. The subjects had to score the image quality on a 5-point scale, with 1 being assigned to the lowest rating of ‘Bad’ and 5 assigned to the highest rating of ‘Excellent’. The second aspect was determining the presence of foreground objects. For our application, we defined a foreground object as any object that is not present in the reference image. The foreground visibility was scored on a 5-point scale marked as: ‘1-All foreground visible’, ‘2-Mostly visible’, ‘3-Partly visible but annoying’, ‘4-Partly visible but not annoying’ and ‘5-None visible’. The background reconstruction quality was also measured using a 5-point scale similar to that of the image quality, but the choices were limited based on how the first two aspects of an image were scored. For example, as illustrated in Figure 4, if the image quality was rated as excellent but the foreground object visibility was rated 1 (all visible), the reconstructed background quality cannot be rated very high. The background reconstruction quality scores, referred to as raw scores in the rest of the paper, are used for calculating the Mean Opinion Score (MOS).

We adopted a double-stimulus technique in which the reference and the reconstructed background images were presented side-by-side [49] to each subject, as shown in Figures 3 and 4. Though the same testing strategy and setup were used for the ReBaQ and S-ReBaQ datasets described in Section III-A, the tests for each dataset were conducted in separate sessions.

As discussed in [27], the subjective experiments were carried out on a 23-inch Alienware monitor with a resolution of 1920x1080. Before the experiment, the monitor was reset to its factory settings. The setup was placed in a laboratory under normal office illumination conditions. Subjects were asked to sit at a viewing distance of 2.5 times the monitor height.

Seventeen subjects participated in the subjective test for the ReBaQ dataset, while sixteen subjects participated in the subjective test for the S-ReBaQ dataset. The subjects were tested for visual acuity and color blindness using the Snellen chart [50] and the Ishihara color vision test [51], respectively. A training session was conducted before the actual subjective testing, in which the subjects were shown a few images covering different quality levels and distortions of the reconstructed background images, and their responses were noted to confirm their understanding of the tests.

Fig. 5: Example input sequence and recovered background images with corresponding MOS scores from the ReBaQ dataset. (a) Four out of the eight images from the input sequence “Escalator”. (b) Background images reconstructed by the algorithms in [11] (MOS = 1.5882), [12] (MOS = 2.2353), [13] (MOS = 2.2941) and [16] (MOS = 4.1176).

Since the number of participating subjects was less than 20 for each of the datasets, the raw scores obtained by subjective evaluation were screened using the procedure in ITU-R BT.500-13 [49]. The kurtosis of the scores is determined as the ratio of the fourth-order moment to the square of the second-order moment. If the kurtosis lies between 2 and 4, the distribution of the scores can be assumed to be normal. If more than 5% of the scores given by a particular subject lie outside the range of 2 standard deviations from the mean scores in the case of normally distributed scores, that subject is rejected. For scores that are not normally distributed, the range is determined as $\sqrt{20}$ times the standard deviation. In our study, two subjects were found to be outliers for the ReBaQ dataset and the corresponding scores were rejected, while no subject was rejected for the S-ReBaQ dataset. The MOS scores were calculated as the average of the raw scores retained after outlier removal. The raw scores and the MOS scores with their standard deviations are provided along with the database.
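A simplified sketch of this screening procedure is shown below, assuming the raw scores are stored in a NumPy array of shape (subjects, images); the full ITU-R BT.500-13 procedure includes an additional asymmetry check that is omitted here.

```python
import numpy as np

def screen_subjects(scores, reject_fraction=0.05):
    # scores: array of shape (num_subjects, num_images) with the raw ratings.
    mean = scores.mean(axis=0)
    std = scores.std(axis=0, ddof=1)
    m2 = ((scores - mean) ** 2).mean(axis=0)
    m4 = ((scores - mean) ** 4).mean(axis=0)
    kurtosis = m4 / np.maximum(m2 ** 2, 1e-12)

    # 2*sigma bounds when the per-image score distribution is close to normal
    # (kurtosis between 2 and 4), sqrt(20)*sigma bounds otherwise.
    factor = np.where((kurtosis >= 2) & (kurtosis <= 4), 2.0, np.sqrt(20.0))
    lower, upper = mean - factor * std, mean + factor * std

    outliers = (scores < lower) | (scores > upper)
    keep = outliers.mean(axis=1) <= reject_fraction   # reject subjects with >5% outliers
    mos = scores[keep].mean(axis=0)                   # MOS from the retained raw scores
    return keep, mos
```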

Figure 5 shows an input sequence for a scene in the ReBaQ dataset together with reconstructed background images using different algorithms and corresponding MOS scores.

IV Proposed Reconstructed Background Quality Index

In this section we propose a full-reference quality index that can automatically assess the perceived quality of the reconstructed background images. The proposed Reconstructed Background Quality Index (RBQI) uses a probability summation model to combine visual characteristics at multiple scales and quantify the deterioration in the perceived quality of the reconstructed background image due to the presence of any residual foreground objects or unnaturalness that may be introduced by the background reconstruction algorithm. The motivation for RBQI comes from the fact that the quality of a reconstructed background image depends on two factors namely: (i) the visibility of the foreground objects, and (ii) the visible artifacts introduced while reconstructing the background image.

Fig. 6: Block diagram describing the computation of the proposed Reconstructed Background Quality Index (RBQI).

A block diagram of the proposed quality index (RBQI) is shown in Figure 6. An $L$-level multi-scale decomposition of the reference and reconstructed background images is obtained through lowpass filtering using an averaging filter [19] and downsampling, where $l = 0$ corresponds to the finest scale and $l = L-1$ corresponds to the coarsest scale. For each level $l$, contrast, structure and color differences are computed locally at each pixel to produce a contrast-structure difference map and a color difference map. The difference maps are combined in local regions within each scale and later across scales using a ‘probability summation model’ to predict the perceived quality of the reconstructed background image. More details about the computation of the difference maps and the proposed RBQI based on a probability summation model are provided below.
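A minimal sketch of this multi-scale decomposition is given below, assuming a single-channel image; the 2x2 averaging kernel and dyadic downsampling follow the spirit of [19], but the exact filter size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_pyramid(image, num_levels=3):
    # levels[0] is the finest scale (the image itself); each subsequent level is
    # lowpass filtered with an averaging kernel and downsampled by a factor of 2.
    levels = [image.astype(np.float64)]
    for _ in range(1, num_levels):
        smoothed = uniform_filter(levels[-1], size=2)
        levels.append(smoothed[::2, ::2])
    return levels
```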

IV-A Structure Difference Map ($D_S$)

An image can be decomposed into three different components: luminance, contrast and structure [28]. By comparing these components, similarity between two images can be calculated [28, 19]. A reconstructed background image is formed by mosaicing together parts of different input images, hence, preservation of the local luminance from the reference background image is of low relevance as long as the structure continuity is maintained. Any sudden variation in the local luminance across the reconstructed background image manifests itself as contrast or structure deviation from the reference image. Thus, in our application we consider only contrast and structure for comparing the reference and reconstructed background images while leaving out the luminance component. These contrast and structure differences between the reference and the reconstructed background images, calculated at each pixel, give us the ‘contrast-structure difference map’ referred to as ‘structure map’ for short in the rest of the paper.

First, the structure similarity between the reference and the reconstructed background images, referred to as the Structure Index ($SI$), is calculated using [28]:

$$SI(i,j) = \frac{2\,\sigma_{RT}(i,j) + C}{\sigma_{R}^{2}(i,j) + \sigma_{T}^{2}(i,j) + C} \qquad (1)$$

where $R$ is the reference background image, $T$ is the reconstructed background image, and $\sigma_{R}(i,j)$ and $\sigma_{T}(i,j)$ are the local standard deviations of the reference and reconstructed background images, respectively. $\sigma_{RT}(i,j)$ is the cross-correlation between the reference and reconstructed background images at location $(i,j)$. $C$ is a small constant to avoid instability and is calculated as $C = (KM)^{2}$, where $K$ is set to $0.03$ and $M$ is the maximum possible value of the pixel intensity ($255$ in this case) [28]. A higher $SI$ value indicates a higher similarity between the pixels in the reference and reconstructed background images.

Background scenes often contain pseudo-stationary elements such as waving trees or escalators, as well as local and global illumination changes. Even though these pseudo-stationary pixels belong to the background, because of the presence of motion, they are likely to be classified as foreground pixels. For this reason, pseudo-stationary backgrounds pose an additional challenge for quality assessment algorithms. Since just comparing co-located pixel neighborhoods in the two considered images is not sufficient in the presence of such dynamic backgrounds, our algorithm uses a search window of size $W \times W$ centered at the current pixel in the reconstructed image, where $W$ is an odd value. The $SI$ is calculated between the pixel at location $(i,j)$ in the reference image and the pixels within the search window centered at pixel $(i,j)$ in the reconstructed image. The resulting matrix is of size $W \times W$. The modified Equation (1) used to calculate $SI$ for every pixel location $(p,q)$ in the window $W(i,j)$ centered at $(i,j)$ is given as:

$$SI_{W}(i,j,p,q) = \frac{2\,\sigma_{RT}\big((i,j),(p,q)\big) + C}{\sigma_{R}^{2}(i,j) + \sigma_{T}^{2}(p,q) + C}, \qquad (p,q) \in W(i,j) \qquad (2)$$

where $\sigma_{RT}\big((i,j),(p,q)\big)$ is the cross-correlation between the neighborhood centered at $(i,j)$ in the reference image and the neighborhood centered at $(p,q)$ in the reconstructed image, and $W(i,j)$ denotes the $W \times W$ search window centered at $(i,j)$.

The maximum value of this matrix is taken to be the final $SI$ value for the pixel at location $(i,j)$, as given below:

$$SI(i,j) = \max_{(p,q) \in W(i,j)} SI_{W}(i,j,p,q) \qquad (3)$$

The $SI$ map takes on values in $[-1, 1]$.

Fig. 7: $SI$ quality maps at scales $l = 0$, $l = 1$ and $l = 2$ for the “Escalator” background image shown in Figure 5 and reconstructed using the method in [12]. The darker regions indicate larger structure differences between the reference and the reconstructed background images.

In the proposed method, the $SI$ map is computed at different scales denoted as $l = 0, 1, \ldots, L-1$. The quality maps generated at three different scales for the background image shown in Figure 5 and reconstructed using the method in [12] are shown in Figure 7. The darker regions in these maps indicate larger structure differences between the reference and the reconstructed background images, while the lighter regions indicate higher similarities.

The structure difference map $D_S$ is calculated from the $SI$ map at each scale $l$ as follows:

$$D_{S}(i,j,l) = \frac{1 - SI(i,j,l)}{2} \qquad (4)$$

$D_S$ takes on values in $[0, 1]$, where a value of $0$ corresponds to no difference and $1$ corresponds to the largest difference.
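The following sketch illustrates the contrast-structure comparison with a search window and the resulting structure difference map, assuming single-channel inputs; the 11x11 local statistics window and the brute-force search over shifts are implementation assumptions rather than the authors' exact procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter, shift as nd_shift

def cs_map(ref, rec, K=0.03, max_val=255.0, win=11):
    # Local contrast-structure comparison (luminance term omitted), cf. Eq. (1).
    C = (K * max_val) ** 2
    mu_r, mu_t = uniform_filter(ref, win), uniform_filter(rec, win)
    var_r = uniform_filter(ref * ref, win) - mu_r ** 2
    var_t = uniform_filter(rec * rec, win) - mu_t ** 2
    cov = uniform_filter(ref * rec, win) - mu_r * mu_t
    return (2 * cov + C) / (var_r + var_t + C)           # values in [-1, 1]

def structure_difference_map(ref, rec, search=17):
    ref, rec = ref.astype(np.float64), rec.astype(np.float64)
    half = search // 2
    best = np.full(ref.shape, -1.0)
    # Maximize the structure index over all displacements inside the search
    # window centered on the current pixel, cf. Eqs. (2)-(3).
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            shifted = nd_shift(rec, (dy, dx), order=0, mode='nearest')
            best = np.maximum(best, cs_map(ref, shifted))
    return (1.0 - best) / 2.0                             # structure difference, cf. Eq. (4)
```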

IV-B Color Distance ($\Delta E$)

The $D_S$ map is vulnerable to failures when detecting differences in areas of the background images with no texture or structural information and/or with objects of the same luminance but different color. Hence, we incorporate color information at every scale while calculating the RBQI. The reference and the reconstructed images are converted to the CIELAB ($L^{*}a^{*}b^{*}$) color space and filtered using a lowpass Gaussian filter. The color difference between the filtered reference and reconstructed background images at each scale is then calculated as the Euclidean distance between the $L^{*}a^{*}b^{*}$ values of co-located pixels as follows:

$$\Delta E(i,j,l) = \sqrt{\big(L_{R}(i,j) - L_{T}(i,j)\big)^{2} + \big(a_{R}(i,j) - a_{T}(i,j)\big)^{2} + \big(b_{R}(i,j) - b_{T}(i,j)\big)^{2}} \qquad (5)$$

In (5), the scale index $l$ was dropped from the notation of the color space components for convenience.
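A possible implementation of this color difference map is sketched below, assuming 8-bit sRGB inputs and using scikit-image for the CIELAB conversion; the Gaussian filter width is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.color import rgb2lab

def color_difference_map(ref_rgb, rec_rgb, sigma=1.0):
    def smoothed_lab(img):
        # Lowpass Gaussian filtering followed by conversion to CIELAB.
        img = gaussian_filter(img.astype(np.float64), sigma=(sigma, sigma, 0))
        return rgb2lab(img / 255.0)
    lab_r, lab_t = smoothed_lab(ref_rgb), smoothed_lab(rec_rgb)
    return np.sqrt(((lab_r - lab_t) ** 2).sum(axis=-1))   # per-pixel Euclidean distance
```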

IV-C Computation of the Reconstructed Background Quality Index (RBQI) Based on Probability Summation

As indicated previously, the reference and reconstructed background images are each decomposed into a multi-scale pyramid with $L$ levels. Structure difference maps $D_S$ and color difference maps $\Delta E$ are computed at every level as described in Equations (4) and (5), respectively. These difference maps are pooled together within each scale and then across all scales using a probability summation model [52] to give the final RBQI.

The probability summation model described in [52] considers an ensemble of independent difference detectors at every pixel location in the image. These detectors predict the probability of perceiving the difference between the reference and the reconstructed background images at the corresponding pixel location based on its neighborhood characteristics in the reference image. Using this model, the probability of the structure difference detector signaling the presence of a structure difference at pixel location $(i,j)$ at level $l$ can be modeled as an exponential of the form:

$$P_{S}(i,j,l) = 1 - \exp\!\left(-\left(\frac{D_{S}(i,j,l)}{\tau_{S}(i,j)}\right)^{\beta_{S}}\right) \qquad (6)$$

where $\beta_{S}$ is a parameter chosen to increase the correspondence of RBQI with experimentally determined MOS scores on a training dataset, and $\tau_{S}(i,j)$ is a parameter whose value depends upon the texture characteristics of the neighborhood centered at $(i,j)$ in the reference image. The value of $\tau_{S}(i,j)$ is chosen to take into account that differences in structure are less perceptible in textured areas as compared to non-textured areas.

In order to determine the value of $\tau_{S}(i,j)$, every pixel in the reference image is classified as textured or non-textured using the technique in [53]. This method first calculates the local variance at each pixel using a 3x3 window centered around it. Based on the computed variance, a pixel is classified as edge, texture or uniform. By considering the number of edge, texture and uniform pixels in the 8x8 neighborhood of the pixel, it is further classified into one of six types: uniform, uniform/texture, texture, edge/texture, medium edge and strong edge. For our application, we label the pixels classified as ‘texture’ and ‘edge/texture’ as ‘textured’ pixels and we label the rest as ‘non-textured’ pixels.
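A loose sketch of this texture labelling is given below; the variance thresholds and the neighborhood decision rules are illustrative assumptions and not the actual values or rules used in [53].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_mask(image, t_uniform=25.0, t_edge=400.0):
    img = image.astype(np.float64)
    # Local 3x3 variance around every pixel.
    local_var = uniform_filter(img * img, 3) - uniform_filter(img, 3) ** 2
    is_uniform = local_var < t_uniform
    is_edge = local_var > t_edge
    is_texture = ~(is_uniform | is_edge)

    # Fractions of texture and edge pixels in the 8x8 neighborhood of each pixel.
    frac_texture = uniform_filter(is_texture.astype(np.float64), 8)
    frac_edge = uniform_filter(is_edge.astype(np.float64), 8)

    # Pixels dominated by texture, or by a mix of edges and texture, are labelled
    # 'textured'; the remaining pixels are 'non-textured'.
    return (frac_texture > 0.5) | ((frac_edge > 0.25) & (frac_texture > 0.25))
```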

Let $T(i,j)$ be a flag indicating that the pixel at location $(i,j)$ is textured. The values of $\tau_{S}(i,j)$ can then be expressed as:

$$\tau_{S}(i,j) = \begin{cases} k, & \text{if } T(i,j) = 1 \ \text{(textured)} \\ 1, & \text{otherwise} \end{cases} \qquad (7)$$

In our implementation, we chose the value of $k$ such that $P_{S}(i,j,l)$ is close to zero when a pixel is classified as textured.

Similarly, the probability of the color difference detector signaling the presence of a color difference at pixel location $(i,j)$ at level $l$ can be modeled as:

$$P_{C}(i,j,l) = 1 - \exp\!\left(-\left(\frac{\Delta E(i,j,l)}{\tau_{C}(i,j)}\right)^{\beta_{C}}\right) \qquad (8)$$

where $\beta_{C}$ is found in a similar way to $\beta_{S}$, and $\tau_{C}(i,j)$ corresponds to the Adaptive Just Noticeable Color Difference (AJNCD) calculated at every pixel in the $L^{*}a^{*}b^{*}$ color space as given in [54]:

$$\tau_{C}(i,j) = JNCD_{Lab} \cdot S_{C}(i,j) \cdot S_{L}\big(bg(i,j), G(i,j)\big) \qquad (9)$$

where $JNCD_{Lab}$ is set to 2.3 [55], $bg(i,j)$ is the mean background luminance of the pixel at $(i,j)$, and $G(i,j)$ is the maximum luminance gradient across pixel $(i,j)$. In Equation (9), $S_{C}(i,j)$ is the scaling factor used for adjusting the dimension of the ellipsoid along the chroma axis and is given by [54]:

$$S_{C}(i,j) = 1 + 0.045\,\sqrt{a(i,j)^{2} + b(i,j)^{2}} \qquad (10)$$

where $a(i,j)$ and $b(i,j)$ correspond to the $a^{*}$ and $b^{*}$ color values of the pixel located at $(i,j)$ in the $L^{*}a^{*}b^{*}$ color space, respectively. $S_{L}(i,j)$ is the scaling factor that simulates the local luminance texture masking and is given by:

(11)

where $w$ is the weighting factor described in [54]. Thus, $\tau_{C}(i,j)$ varies at every pixel location based on the distance between the chroma values and the texture masking properties of its neighborhood.

A pixel at the $l$-th level is said to have no distortion if and only if neither the structure difference detector nor the color difference detector at location $(i,j)$ signals the presence of any differences. Thus, the probability of detecting no difference between the reference and reconstructed background images at pixel $(i,j)$ and level $l$ can be written as:

$$P_{ND}(i,j,l) = \big(1 - P_{S}(i,j,l)\big)\big(1 - P_{C}(i,j,l)\big) \qquad (12)$$

Substituting Equations (6) and (8) for $P_{S}(i,j,l)$ and $P_{C}(i,j,l)$, respectively, in the above equation, we get:

$$P_{ND}(i,j,l) = \exp\!\left(-\left(\frac{D_{S}(i,j,l)}{\tau_{S}(i,j)}\right)^{\beta_{S}}\right)\exp\!\left(-\left(\frac{\Delta E(i,j,l)}{\tau_{C}(i,j)}\right)^{\beta_{C}}\right) \qquad (13)$$
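Given the difference maps and the threshold maps, the pixel-level detection and no-detection probabilities can be sketched as follows; the exponent values shown are placeholders, since the trained values of the probability summation parameters are not reproduced here.

```python
import numpy as np

def no_detection_map(d_struct, tau_s, d_color, tau_c, beta_s=4.0, beta_c=4.0):
    p_s = 1.0 - np.exp(-(d_struct / tau_s) ** beta_s)   # structure detector, cf. Eq. (6)
    p_c = 1.0 - np.exp(-(d_color / tau_c) ** beta_c)    # color detector, cf. Eq. (8)
    return (1.0 - p_s) * (1.0 - p_c)                    # no-detection probability, cf. Eqs. (12)-(13)
```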

A less localized probability of difference detection can be computed by adopting the probability summation hypothesis, which pools the localized detection probabilities over a region $r$ [52]. The probability summation hypothesis is based on the following two assumptions: 1) no difference is detected if none of the detectors in the region sense the presence of distortion, and 2) the probabilities of detection at all locations in the region are independent. Then the probability of detecting no difference over the region $r$ is given by:

$$P_{ND}(r,l) = \prod_{(i,j) \in r} P_{ND}(i,j,l) \qquad (14)$$

Substituting Equation (12) in the above equation gives:

$$P_{ND}(r,l) = \exp\!\left(-\big(D_{S}(r,l)\big)^{\beta_{S}}\right)\exp\!\left(-\big(D_{C}(r,l)\big)^{\beta_{C}}\right) \qquad (15)$$

where

$$D_{S}(r,l) = \left[\sum_{(i,j) \in r}\left(\frac{D_{S}(i,j,l)}{\tau_{S}(i,j)}\right)^{\beta_{S}}\right]^{1/\beta_{S}} \qquad (16)$$

$$D_{C}(r,l) = \left[\sum_{(i,j) \in r}\left(\frac{\Delta E(i,j,l)}{\tau_{C}(i,j)}\right)^{\beta_{C}}\right]^{1/\beta_{C}} \qquad (17)$$

In the human visual system, the highest visual acuity is limited to the size of the foveal region, which covers approximately $2^{\circ}$ of visual angle. In our work, we consider the image regions $r$ to be foveal regions approximated by non-overlapping image blocks.

The probability of no distortion detection over the $l$-th level is obtained by pooling the no-detection probabilities over all the regions $r$ and is given by:

$$P_{ND}(l) = \prod_{r} P_{ND}(r,l) \qquad (18)$$

or

$$P_{ND}(l) = \exp\!\left(-\big(D_{S}(l)\big)^{\beta_{S}}\right)\exp\!\left(-\big(D_{C}(l)\big)^{\beta_{C}}\right) \qquad (19)$$

where

$$D_{S}(l) = \left[\sum_{r}\big(D_{S}(r,l)\big)^{\beta_{S}}\right]^{1/\beta_{S}} \qquad (20)$$

$$D_{C}(l) = \left[\sum_{r}\big(D_{C}(r,l)\big)^{\beta_{C}}\right]^{1/\beta_{C}} \qquad (21)$$

Thus, the final probability of detecting no distortion in a reconstructed background image is obtained by pooling the no-detection probabilities over all scales $l$, $l = 0, 1, \ldots, L-1$, as follows:

$$P_{ND} = \prod_{l=0}^{L-1} P_{ND}(l) \qquad (22)$$

or

$$P_{ND} = \exp\!\left(-D_{S}^{\beta_{S}}\right)\exp\!\left(-D_{C}^{\beta_{C}}\right) \qquad (23)$$

where

$$D_{S} = \left[\sum_{l=0}^{L-1}\big(D_{S}(l)\big)^{\beta_{S}}\right]^{1/\beta_{S}} \qquad (24)$$

$$D_{C} = \left[\sum_{l=0}^{L-1}\big(D_{C}(l)\big)^{\beta_{C}}\right]^{1/\beta_{C}} \qquad (25)$$

From Equations (24) and (25), it can be seen that $D_{S}$ and $D_{C}$ take the form of a Minkowski metric with exponents $\beta_{S}$ and $\beta_{C}$, respectively.

By substituting Equations (16), (17), (20), (21), (24) and (25) in Equation (23) and simplifying, we get:

$$P_{ND} = \exp(-D) \qquad (26)$$

where

$$D = D_{S}^{\beta_{S}} + D_{C}^{\beta_{C}} \qquad (27)$$

Thus the probability of detecting a difference between the reference image and a reconstructed background image is given as:

$$P_{D} = 1 - P_{ND} = 1 - \exp(-D) \qquad (28)$$

As can be seen from Equation (28), a lower value of $D$ results in a lower probability of difference detection, while a higher value results in a higher probability of difference detection. Therefore, $D$ can be used to assess the perceived quality of the reconstructed background image, with a lower value of $D$ corresponding to a higher perceived quality.

The final Reconstructed Background Quality Index (RBQI) for a reconstructed background image is calculated using the logarithm of $D$ as follows:

$$\mathrm{RBQI} = \log(1 + D) \qquad (29)$$

As $D$ increases, the value of RBQI increases, implying more perceived distortion and thus a lower quality of the reconstructed background image. The logarithmic mapping models the saturation effect, i.e., beyond a certain point the maximum annoyance level is reached and additional distortion does not affect the perceived quality.
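Assuming the per-level difference maps and threshold maps are available, the pooling chain can be sketched as below. Because the Minkowski pools over regions, levels and scales are raised back to the same exponents, $D$ collapses into a single accumulated sum; the exponent values and the $\log(1 + D)$ mapping are assumptions consistent with the equations as reconstructed above, not values reported by the authors.

```python
import numpy as np

def rbqi(struct_maps, tau_s_maps, color_maps, tau_c_maps, beta_s=4.0, beta_c=4.0):
    # struct_maps/color_maps: lists of per-scale difference maps; tau_*_maps: the
    # corresponding per-scale threshold maps.
    D = 0.0
    for ds, ts, dc, tc in zip(struct_maps, tau_s_maps, color_maps, tau_c_maps):
        # Contribution of one scale: the Minkowski pools over regions, levels and
        # scales collapse into a plain sum of the normalized differences raised to
        # their exponents, cf. Eqs. (16)-(27).
        D += np.sum((ds / ts) ** beta_s) + np.sum((dc / tc) ** beta_c)
    # The probability of perceiving a difference would be 1 - exp(-D), cf. Eq. (28);
    # the index itself maps D logarithmically to model saturation, cf. Eq. (29).
    return np.log(1.0 + D)
```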

V Results

In this section, we analyze the performance of RBQI in terms of its ability to predict the subjective ratings for the perceived quality of reconstructed background images. We evaluate the performance of the proposed quality index in terms of its prediction accuracy, prediction monotonicity and prediction consistency, and provide comparisons with the existing statistical and IQA techniques. In our implementation, we set the number of scales to $L = 3$, the neighborhood search window size to $W = 17$, and the values of $\beta_{S}$ and $\beta_{C}$ that maximized the correlation on the training data, as described in Section V-B. We also evaluate the performance of RBQI for different scales and neighborhood search windows. We conduct a series of hypothesis tests based on the prediction residuals (errors in predictions) after nonlinear regression. These tests help in making statistically meaningful conclusions on the index's performance.

We use the two databases ReBaQ and S-ReBaQ described in Section III-A to quantify and compare the performance of RBQI. For performance evaluation, we employ the three most commonly used metrics: (i) the Spearman rank-order correlation coefficient (SROCC); (ii) the Pearson correlation coefficient (PCC); and (iii) the root mean squared error (RMSE). A 4-parameter regression function [56] is applied to the objective scores to provide a non-linear mapping between the objective scores and the subjective mean opinion scores (MOS):

$$Q_{i} = \beta_{2} + \frac{\beta_{1} - \beta_{2}}{1 + \exp\!\big(-(q_{i} - \beta_{3})/|\beta_{4}|\big)} \qquad (30)$$

where $q_{i}$ denotes the predicted (objective) quality score for the $i$-th image, $Q_{i}$ denotes the quality score after fitting, and $\beta_{1}, \beta_{2}, \beta_{3}, \beta_{4}$ are the regression model parameters.
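A sketch of this evaluation protocol, assuming NumPy arrays of objective scores and MOS values and the common VQEG form of the 4-parameter logistic, is shown below; SROCC is computed on the raw objective scores since it is invariant to monotonic mappings.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(q, b1, b2, b3, b4):
    # 4-parameter logistic mapping from objective scores to the MOS scale.
    return b2 + (b1 - b2) / (1.0 + np.exp(-(q - b3) / abs(b4)))

def evaluate(objective, mos):
    p0 = [mos.max(), mos.min(), np.median(objective), np.std(objective) + 1e-6]
    params, _ = curve_fit(logistic4, objective, mos, p0=p0, maxfev=20000)
    fitted = logistic4(objective, *params)
    pcc = pearsonr(fitted, mos)[0]                        # prediction accuracy
    srocc = spearmanr(objective, mos)[0]                  # prediction monotonicity
    rmse = float(np.sqrt(np.mean((fitted - mos) ** 2)))   # prediction consistency
    return pcc, srocc, rmse
```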

Fig. 8: Scatter plots of the MOS vs. metric scores on (a) the ReBaQ-Static, (b) the ReBaQ-Dynamic, and (c) the S-ReBaQ datasets.

Figure 8 shows the scatter plots of MOS versus the prediction scores using the proposed technique along with the corresponding fitting curve calculated using (30).

V-A Performance Comparison

                ReBaQ-Static                                      ReBaQ-Dynamic
                PCC     SROCC   RMSE    P(PCC)    P(SROCC)        PCC     SROCC   RMSE    P(PCC)    P(SROCC)

Statistical Measures

AGE             0.7776  0.6348  0.6050  0.000000  0.000000        0.4999  0.2303  0.7644  0.005000  0.051600
EPs             0.3976  0.5093  0.8829  0.000000  0.000000        0.1208  0.2771  0.8761  0.007600  0.018500
pEPs            0.8058  0.6170  0.5698  0.000000  0.000000        0.4734  0.2771  0.8825  0.007600  0.018500
CEPs            0.5719  0.6939  0.7893  0.000000  0.000000        0.5951  0.7549  0.7092  0.000000  0.000000
pCEPs           0.6281  0.7843  0.9622  0.000000  0.000000        0.6418  0.7940  0.8826  0.000000  0.000000

Image Quality Assessment Metrics

PSNR            0.8324  0.7040  0.5331  0.000000  0.000000        0.5133  0.4179  0.7575  0.000004  0.000263
SSIM[28]        0.5914  0.5168  0.7759  0.000000  0.000177        0.0135  0.0264  0.8826  0.910238  0.822439
MS-SSIM[19]     0.7230  0.7085  0.6648  0.000000  0.000000        0.5087  0.4466  0.7598  0.000005  0.000085
VSNR[29]        0.5216  0.3986  0.8209  0.000003  0.000531        0.5090  0.1538  0.7597  0.000005  0.198310
VIF[30]         0.3625  0.0843  0.8968  0.001754  0.484429        0.3103  0.3328  0.8390  0.199921  0.236522
VIFP[30]        0.5122  0.3684  0.8265  0.000004  0.001470        0.4864  0.1004  0.7711  0.000015  0.403684
UQI[31]         0.6197  0.7581  0.9622  0.000000  0.000000        0.6262  0.7450  0.8826  0.000000  0.000000
IFC[32]         0.5003  0.3771  0.8331  0.000008  0.001105        0.4306  0.1024  0.7966  0.000160  0.394409
NQM[33]         0.8251  0.8602  0.5437  0.000000  0.000000        0.6898  0.6600  0.6390  0.000000  0.000000
WSNR[34]        0.8013  0.7389  0.5756  0.000000  0.000000        0.6409  0.5760  0.6775  0.000000  0.000000
FSIM[35]        0.7209  0.6970  0.6668  0.000000  0.000000        0.5131  0.3283  0.7575  0.000004  0.004922
FSIMc[35]       0.7274  0.7033  0.6603  0.000000  0.000000        0.5144  0.3310  0.7568  0.000004  0.004559
SRSIM[36]       0.7906  0.7862  0.5892  0.000000  0.000000        0.5512  0.5376  0.7364  0.000001  0.000001
SalSSIM[37]     0.5983  0.5217  0.7710  0.000000  0.000003        0.4866  0.3200  0.7710  0.000015  0.006198
CQM[20]         0.6401  0.5755  0.7393  0.000000  0.000000        0.7050  0.7610  0.6259  0.000000  0.000000
RBQI            0.9006  0.8592  0.4183  0.000000  0.000000        0.7908  0.6773  0.5402  0.000000  0.000000

TABLE I: Comparison of RBQI vs. Statistical measures and IQA techniques on the ReBaQ dataset. P(PCC) and P(SROCC) denote the P-values associated with the PCC and SROCC, respectively.
                S-ReBaQ
                PCC     SROCC   RMSE    P(PCC)    P(SROCC)

Statistical Measures

AGE             0.6453  0.6238  2.2373  0.392900  0.000000
EPs             0.4202  0.1426  1.2049  0.000000  0.000000
pEPs            0.0505  0.4990  1.6676  0.498331  0.000000
CEPs            0.6283  0.6666  0.8491  0.000000  0.000000
pCEPs           0.8346  0.8380  0.6011  0.000000  0.000000

Image Quality Assessment Metrics

PSNR            0.7099  0.6834  0.7686  0.000000  0.000000
SSIM[28]        0.5975  0.5827  0.8751  0.000000  0.000000
MS-SSIM[19]     0.8048  0.8030  0.6478  0.000000  0.000000
VSNR[29]        0.0850  0.1717  1.0874  0.253675  0.486686
VIF[30]         0.1027  0.2064  1.0914  0.167842  0.005305
VIFP[30]        0.6081  0.6240  0.8664  0.000000  0.000000
UQI[31]         0.6316  0.5932  0.8461  0.000000  0.000000
IFC[32]         0.6235  0.6020  0.8533  0.000000  0.000000
NQM[33]         0.7950  0.7816  0.6621  0.000000  0.000000
WSNR[34]        0.7176  0.6888  0.7601  0.000000  0.000000
FSIM[35]        0.7243  0.7157  0.7525  0.000000  0.000000
FSIMc[35]       0.7278  0.7172  0.7484  0.000000  0.000000
SRSIM[36]       0.7853  0.7538  0.6757  0.000000  0.000000
SalSSIM[37]     0.7356  0.7300  0.7393  0.000000  0.000000
CQM[20]         0.2634  0.3645  1.0531  0.000327  0.000276
RBQI            0.8613  0.8222  0.5545  0.000000  0.000000

TABLE II: Comparison of RBQI vs. Statistical measures and IQA techniques on the S-ReBaQ dataset. P(PCC) and P(SROCC) denote the P-values associated with the PCC and SROCC, respectively.

Tables I and II show the performance evaluation results of the proposed RBQI technique on the ReBaQ and S-ReBaQ datasets, respectively, as compared to the existing statistical and FR-IQA algorithms. The results show that the proposed quality index yields a higher correlation with the subjective scores than any other existing technique. The statistical techniques are shown to not correlate well with the subjective scores on either of the datasets. Among the FR-IQA algorithms, the performance of NQM [33] comes close to that of the proposed technique for scenes with static backgrounds, i.e., on the ReBaQ-Static subset, as it considers the effects of contrast sensitivity, luminance variations, contrast interaction between spatial frequencies and contrast masking while weighting the SNR between the ground truth and the reconstructed image. The more popular MS-SSIM [19] technique is shown to not correlate well with the subjective scores for the ReBaQ database. This is because MS-SSIM calculates the final quality index by simply averaging over the entire image; in the problem of background reconstruction, the error might occupy a relatively small area compared to the image size, thereby under-penalizing the residual foreground. RBQI provides a higher correlation with the subjective scores by a margin of 6% over MS-SSIM on S-ReBaQ. None of the FR-IQA or statistical techniques were found to correlate well with the scores for the dynamic scenes in the ReBaQ dataset. This is because the assumption of pixel-to-pixel correspondence is no longer valid in the presence of pseudo-stationary backgrounds. The proposed RBQI technique uses a neighborhood search window to handle such backgrounds, thereby improving the performance over NQM [33] by a margin of 10% and over MS-SSIM [19] by 30%. CQM [20], which was used in the Scene Background Modeling Challenge 2016 (SBMC) [38] and in [21] to compare the performance of the algorithms, is shown to perform very poorly on all three datasets; hence, it is not a good choice for evaluating the quality of reconstructed background images and is not suitable for comparing the performance of background reconstruction algorithms.

The P-value is the probability of obtaining a correlation as large as the observed value by random chance when the variables are actually independent. If the P-value is less than 0.05, then the correlation is considered significant. The P-values (P(PCC) and P(SROCC)) reported in Tables I and II indicate that most of the correlation scores are statistically significant.

                ReBaQ-Static                    ReBaQ-Dynamic
                PCC     SROCC   RMSE            PCC     SROCC   RMSE
nhood=1         0.7931  0.8314  0.5077          0.6395  0.6539  0.5662
nhood=9         0.9015  0.8581  0.4911          0.7834  0.6683  0.5394
nhood=17        0.9006  0.8581  0.4837          0.7908  0.6762  0.4374
nhood=33        0.9001  0.8581  0.4896          0.7818  0.6683  0.4769
((a)) Simulation results with different neighborhood search window sizes

                ReBaQ-Static                    ReBaQ-Dynamic
                PCC     SROCC   RMSE            PCC     SROCC   RMSE
L=1             0.8190  0.8183  0.6667          0.5561  0.5520  0.7335
L=2             0.8597  0.8310  0.5521          0.7281  0.6482  0.6050
L=3             0.9006  0.8592  0.5077          0.7908  0.6773  0.5662
L=4             0.9006  0.8581  0.4915          0.7954  0.6797  0.5350
L=5             0.9006  0.8581  0.4883          0.8087  0.6881  0.5191
((b)) Simulation results with different numbers of scales

TABLE III: Performance comparison for different values of the parameters on the ReBaQ dataset.

V-B Model Parameter Selection

The proposed quality index accepts four parameters: 1) $W$, the dimension of the search window centered around the current pixel for calculating the $SI$; 2) $L$, the number of multi-scale levels; 3) $\beta_{S}$, used in the calculation of $P_{S}$ in Equation (6); and 4) $\beta_{C}$, used in the calculation of $P_{C}$ in Equation (8). In Table III, we evaluate our algorithm with different values for these parameters. These simulations were run only on the ReBaQ dataset. Table III(a) shows the effect of varying $W$ on the performance of RBQI. The performance of RBQI on the ReBaQ-Static subset did not change significantly with an increase in the neighborhood search window size, as expected, but the performance on the ReBaQ-Dynamic subset increased drastically from $W = 1$ to $W = 17$ before starting to drop at $W = 33$. Thus, we chose $W = 17$ for all our experiments. Table III(b) gives the performance results for different numbers of scales. As a tradeoff between computational complexity and prediction accuracy, we chose the number of scales to be $L = 3$. The probability summation model parameters $\beta_{S}$ and $\beta_{C}$ were found such that they maximized the correlation between RBQI and MOS scores on a training dataset consisting of randomly selected images from the ReBaQ dataset; the selected values were found to correlate well with the subjective scores.

These parameters remained unchanged for the experiments conducted on the S-ReBaQ dataset to obtain the values in Table II.

VI Conclusion

In this paper we addressed the problem of quality evaluation of reconstructed background images. We first proposed two different datasets for benchmarking the performance of existing and future techniques proposed to evaluate the quality of reconstructed background images. Then we proposed the first full-reference Reconstructed Background Quality Index (RBQI) to objectively measure the perceived quality of the reconstructed background images.

The RBQI uses the probability summation model to combine visual characteristics at multiple scales in order to quantify the deterioration in the perceived quality of the reconstructed background image due to the presence of any residual foreground objects or unnaturalness that may be introduced by the background reconstruction algorithm. The use of a neighborhood search window while calculating the contrast and structure differences provides a further boost in performance in the presence of pseudo-stationary backgrounds, while not affecting the performance on scenes with static backgrounds. The probability summation model penalizes only the perceived differences between the reference and reconstructed background images, while the unperceived differences do not affect the RBQI, thereby giving a better correlation with the subjective scores. Experimental results on the benchmarking datasets showed that the proposed measure outperformed all the existing statistical and IQA techniques in estimating the perceived quality of reconstructed background images.

References

  • [1] R. M. Colque and G. Cámara-Chávez, “Progressive background image generation of surveillance traffic videos based on a temporal histogram ruled by a reward/penalty function,” in Conference on Graphics, Patterns and Images, Aug 2011, pp. 297–304.
  • [2] C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 747–757, Aug 2000.
  • [3] L. Li, W. Huang, I. Y. H. Gu, and Q. Tian, “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Transactions on Image Processing, vol. 13, no. 11, pp. 1459–1472, Nov 2004.
  • [4] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, “Multicamera people tracking with a probabilistic occupancy map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 267–282, Feb 2008.
  • [5] A. Flores and S. Belongie, “Removing pedestrians from google street view images,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2010, pp. 53–58.
  • [6] W. D. Jones, “Microsoft and google vie for virtual world domination,” IEEE Spectrum, vol. 43, no. 7, pp. 16–18, 2006.
  • [7] E. Zheng, Q. Chen, X. Yang, and Y. Liu, “Robust 3d modeling from silhouette cues,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, April 2009, pp. 1265–1268.
  • [8] L. Maddalena and A. Petrosino, “A self-organizing approach to background subtraction for visual surveillance applications,” IEEE Transactions on Image Processing,, vol. 17, no. 7, pp. 1168–1177, July 2008.
  • [9] S. Varadarajan, L. Karam, and D. Florencio, “Background subtraction using spatio-temporal continuities,” in Proc. European Workshop on Visual Information Processing, July 2010, pp. 144–148.
  • [10] D. Farin, P. de With, and W. Effelsberg, “Robust background estimation for complex video sequences,” in Proc. IEEE International Conference on Image Processing, vol. 1, Sept 2003, pp. 145–148.
  • [11] H.-H. Hsiao and J.-J. Leou, “Background initialization and foreground segmentation for bootstrapping video sequences,” EURASIP Journal on Image and Video Processing, vol. 2013, p. 12.
  • [12] V. Reddy, C. Sanderson, and B. Lovell, “A low-complexity algorithm for static background estimation from cluttered image sequences in surveillance contexts,” EURASIP Journal on Image and Video Processing, pp. 1:1–1:14, Oct 2010.
  • [13] J. Yao and J. Odobez, “Multi-layer background subtraction based on color and texture,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition,, June 2007, pp. 1–8.
  • [14] A. Colombari and A. Fusiello, “Patch-based background initialization in heavily cluttered video,” IEEE Transactions on Image Processing, vol. 19, no. 4, pp. 926–933, April 2010.
  • [15] C. Herley, “Automatic occlusion removal from minimum number of images,” in Proc. IEEE International Conference on Image Processing, vol. 2, Sept 2005, pp. I1046–1049.
  • [16] A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, “Interactive digital photomontage,” ACM Transactions on Graphics, vol. 23, no. 3, pp. 294–302, Aug 2004.
  • [17] A. Shrotre and L. Karam, “Background recovery from multiple images,” in Proc. IEEE Digital Signal Processing and Signal Processing Education Meeting, Aug 2013, pp. 135–140.
  • [18] L. Maddalena and A. Petrosino, “Towards benchmarking scene background initialization,” in Proc. International Conference on Image Analysis and Processing, Sept 2015, pp. 469–476.
  • [19] Z. Wang, E. Simoncelli, and A. Bovik, “Multiscale structural similarity for image quality assessment,” in Proc. Asilomar Conference on Signals, Systems and Computers, vol. 2, Nov 2003, pp. 1398–1402.
  • [20] Y. Yalman and İ. Ertürk, “A new color image quality measure based on YUV transformation and PSNR for human vision system,” Turkish Journal of Electrical Engineering & Computer Sciences, vol. 21, no. 2, pp. 603–612, 2013.
  • [21] T. Bouwmans, L. Maddalena, and A. Petrosino, “Scene background initialization: A taxonomy,” Pattern Recognition Letters, vol. 96, pp. 3–11, Sept 2017.
  • [22] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers, “Wallflower: principles and practice of background maintenance,” in Proc. IEEE International Conference on Computer Vision,, vol. 1, 1999, pp. 255–261.
  • [23] L. Li, W. Huang, I.-H. Gu, and Q. Tian, “Statistical modeling of complex backgrounds for foreground object detection,” IEEE Transactions on Image Processing,, vol. 13, no. 11, pp. 1459–1472, Nov 2004.
  • [24] V. Mahadevan and N. Vasconcelos, “Spatiotemporal saliency in dynamic scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 171–177, Jan 2010.
  • [25] Y. Sheikh and M. Shah, “Bayesian modeling of dynamic scenes for object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1778–1792, Nov 2005.
  • [26] P. Jodoin, L. Maddalena, and A. Petrosino. (2016) Scene Background Modeling dataset. [Online]. Available: www.SceneBackgroundModeling.net
  • [27] A. Shrotre and L. Karam, “Visual quality assessment of reconstructed background images,” in Proc. International Conference on Quality of Multimedia Experience, June 2016, pp. 1–6.
  • [28] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.
  • [29] D. Chandler and S. Hemami, “VSNR: A wavelet-based visual signal-to-noise ratio for natural images,” IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2284–2298, Sept 2007.
  • [30] H. Sheikh and A. Bovik, “Image information and visual quality,” IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, Feb 2006.
  • [31] Z. Wang and A. Bovik, “A universal image quality index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81–84, March 2002.
  • [32] H. Sheikh, A. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2117–2128, Dec 2005.
  • [33] N. Damera-Venkata, T. Kite, W. Geisler, B. Evans, and A. Bovik, “Image quality assessment based on a degradation model,” IEEE Transactions on Image Processing, vol. 9, no. 4, pp. 636–650, April 2000.
  • [34] T. Mitsa and K. Varkur, “Evaluation of contrast sensitivity functions for the formulation of quality measures incorporated in halftoning algorithms,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 5, April 1993, pp. 301–304.
  • [35] L. Zhang, D. Zhang, X. Mo, and D. Zhang, “FSIM: A feature similarity index for image quality assessment,” IEEE Transactions on Image Processing, vol. 20, no. 8, pp. 2378–2386, Aug 2011.
  • [36] L. Zhang and H. Li, “SR-SIM: A fast and high-performance IQA index based on spectral residual,” in Proc. IEEE International Conference on Image Processing, Sept 2012, pp. 1473–1476.
  • [37] W. Akamine and M. Farias, “Incorporating visual attention models into video quality metrics,” in SPIE Proceedings, vol. 9016, 2014, pp. 90 160O–1–90 160O–9.
  • [38] (2016) Scene Background Modeling Challenge. [Online]. Available: http://www.icpr2016.org/site/session/scene-background-modeling-sbmc2016/
  • [39] B. Laugraud, S. Piérard, and M. Van Droogenbroeck, “LaBGen-P: A pixel-level stationary background generation method based on LaBGen,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 107–113.
  • [40] L. Maddalena and A. Petrosino, “Extracting a background image by a multi-modal scene background model,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 143–148.
  • [41] S. Javed, S. K. Jung, A. Mahmood, and T. Bouwmans, “Motion-Aware Graph Regularized RPCA for background modeling of complex scenes,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 120–125.
  • [42] W. Liu, Y. Cai, M. Zhang, H. Li, and H. Gu, “Scene background estimation based on temporal median filter with gaussian filtering,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 132–136.
  • [43] G. Ramirez-Alonso, J. A. Ramirez-Quintana, and M. I. Chacon-Murguia, “Temporal weighted learning model for background estimation with an automatic re-initialization stage and adaptive parameters update,” Pattern Recognition Letters, vol. 96, no. Supplement C, pp. 34–44, 2017.
  • [44] T. Minematsu, A. Shimada, and R. I. Taniguchi, “Background initialization based on bidirectional analysis and consensus voting,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 126–131.
  • [45] M. Piccardi, “Background subtraction techniques: A review,” in Proc. IEEE International Conference on Systems, Man and Cybernetics, vol. 4, Oct 2004, pp. 3099–3104.
  • [46] I. Halfaoui, F. Bouzaraa, and O. Urfalioglu, “CNN-based initial background estimation,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 101–106.
  • [47] M. I. Chacon-Murguia, J. A. Ramirez-Quintana, and G. Ramirez-Alonso, “Evaluation of the background modeling method auto-adaptive parallel neural network architecture in the SBMnet dataset,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 137–142.
  • [48] D. Ortego, J. C. SanMiguel, and J. M. Martínez, “Rejection based multipath reconstruction for background estimation in SBMnet 2016 dataset,” in Proc. International Conference on Pattern Recognition, Dec 2016, pp. 114–119.
  • [49] “Methodology for the subjective assessment of the quality of television pictures,” International Telecommunications Union, Tech. Rep. ITU-R BT.500-13, Jan 2012.
  • [50] H. Snellen, “Probebuchstaben zur bestimmung der sehschärfe,” Utrecht, 1862.
  • [51] Ishihara color vision test. [Online]. Available: http://colorvisiontesting.com/ishihara.htm
  • [52] J. Robson and N. Graham, “Probability summation and regional variation in contrast sensitivity across the visual field,” Vision Research, vol. 21, no. 3, pp. 409–418, 1981.
  • [53] J. Su and R. Mersereau, “Post-processing for artifact reduction in JPEG-compressed images,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1995, pp. 2363–2366.
  • [54] C.-H. Chou and K.-C. Liu, “Colour image compression based on the measure of just noticeable colour difference,” IET Image Processing, vol. 2, no. 6, pp. 304–322, Dec 2008.
  • [55] M. Mahy, L. Eycken, and A. Oosterlinck, “Evaluation of uniform color spaces developed after the adoption of cielab and cieluv,” Color Research & Application, vol. 19, no. 2, pp. 105–121, 1994.
  • [56] Final report from the Video Quality Experts Group on the validation of objective models of video quality assessment, VQEG Std., 2000.