Empirical evaluation of full-reference image quality metrics on MDID database

Domonkos Varga
Department of Networked Systems and Services
Budapest University of Technology and Economics
Abstract

In this study, our goal is to give a comprehensive evaluation of 32 state-of-the-art full-reference image quality assessment (FR-IQA) metrics using the recently published MDID database. This database contains distorted images derived from a set of pristine reference images using random types and levels of distortions. Specifically, Gaussian noise, Gaussian blur, contrast change, JPEG noise, and JPEG2000 noise were considered.

Keywords: Full-reference image quality assessment

1 Introduction

The goal of objective image quality assessment is to design mathematical models that are able to predict the perceptual quality of digital images. Objective image quality assessment algorithms are classified by the accessibility of the reference image. If the reference image is unavailable, the task is considered no-reference (NR) image quality assessment. Reduced-reference (RR) methods have only partial information about the reference image, while full-reference (FR) algorithms have full access to the reference image.

The research of objective image quality assessment demands databases that contain images with the corresponding mean opinion score (MOS) values. To this end, a number of image quality databases have been made publicly available. Roughly speaking, these databases can be categorized into three groups. The first group contains a smaller set of pristine, reference digital images and artificially distorted images derived from them by applying different artificial distortions at different intensity levels. The second group contains only digital images with authentic distortions collected from photographers, so pristine images cannot be found in such databases. Virtanen et al. [37] were the first to introduce this type of database for images by releasing CID2013. As a consequence, the development of FR methods is tied to the first group of databases. The third group, comprising the Waterloo Exploration [19] and KADIS-700k [17] databases, is meant to provide an alternative evaluation of objective image quality assessment models by means of paired comparisons. To this end, these databases contain a set of reference (pristine) images, distorted images, and distortion levels but, in contrast to the other databases, do not provide MOS values. Information about major publicly available image quality assessment databases is summarized in Table 1.

In this study, we provide a comprehensive evaluation of 32 full-reference image quality assessment (FR-IQA) algorithms on the MDID database. In contrast to other available image quality databases, the images in MDID contain multiple types of distortions simultaneously.

The rest of this study is organized as follows. A number of publicly available image quality databases exist, such as IVC [15], LIVE IQA [30], A57 [5], Toyama [36], TID2008 [24], CSIQ [14], IVC-LAR [3], MMSP 3D [12], IRSQ [21], [20], TID2013 [23], CID2013 [37], LIVE In the Wild [11], Waterloo Exploration [19], MDID [31], KonIQ-10k [16], KADID-10k [17], and KADIS-700k [17]. In Section 2, we give a brief introduction to each of them. In Section 3, we present a comprehensive evaluation of 32 FR-IQA algorithms on the MDID database. Finally, a conclusion is drawn in Section 4.

2 Image quality databases

The IVC database (http://www2.irccyn.ec-nantes.fr/ivcdb/) [15] consists of 10 pristine images and 235 distorted images, covering four types of distortions (JPEG, JPEG2000, locally adaptive resolution coding, blurring). Quality ratings (1 to 5) are provided in the form of MOS.

The LIVE Image Quality Database (http://www.live.ece.utexas.edu/research/quality/subjective.htm) (LIVE IQA) [30] has two releases, Release 1 and Release 2. The Laboratory for Image and Video Engineering (University of Texas at Austin) conducted an extensive experiment to obtain scores from human subjects for a number of images distorted with different distortion types. Release 2 covers more distortion types: JPEG (169 images), JPEG2000 (175 images), Gaussian blur (145 images), white noise (145 images), and bit errors in the JPEG2000 bit stream (145 images). The subjective quality scores in this database are DMOS (differential MOS) values, ranging from 0 to 100.

The A57 Database (http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=26) [5] has 3 pristine images and 54 distorted images, covering six types of distortions (JPEG, JPEG2000, JPEG2000 with dynamic contrast-based quantization, quantization of the LH subbands of the DWT, additive Gaussian white noise, Gaussian blurring). Quality ratings (0 to 1) are provided in the form of DMOS.

The Toyama Database [36] consists of 14 pristine images and 168 distorted images, covering two types of distortions (JPEG, JPEG2000). Quality ratings (1 to 5) are provided in the form of MOS.

Tampere Image Database 2008 (http://www.ponomarenko.info/tid2008.htm) (TID2008) [24] contains 25 reference images and 1,700 distorted images (25 reference images × 17 types of distortions × 4 levels of distortions). The MOS was obtained from the results of 838 experiments carried out by observers from three countries. The observers performed 256,428 comparisons of the visual quality of distorted images, or 512,856 evaluations of relative visual quality in image pairs. A higher MOS value (0 minimal, 9 maximal; the MSE of each score is 0.019) corresponds to higher visual quality of the image. The enclosed file "mos.txt" contains the MOS for each distorted image.

The Computational and Subjective Image Quality (http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23) (CSIQ) [14] database consists of 30 original images, each distorted using one of six types of distortions at four to five different levels. The images were subjectively rated based on a linear displacement of the images across four calibrated monitors placed side-by-side at equal viewing distance from the observer. The database contains 5,000 subjective ratings from 35 different male and female observers. Quality ratings (0 to 1) are provided in the form of DMOS.

The IVC-LAR database (http://ivc.univ-nantes.fr/en/databases/LAR/) [3] contains 8 pristine images (4 natural images and 4 art images) and 120 distorted images, covering three types of distortions (JPEG, JPEG2000, locally adaptive resolution coding). Quality ratings (1 to 5) are provided in the form of MOS.

The Wireless Imaging Quality (WIQ) Database (https://computervisiononline.com/dataset/1105138665) [8], [9] consists of 7 reference images and 80 distorted images. The subjective quality scores are given as DMOS values, ranging from 0 to 100.

In contrast to other publicly available image quality databases, the MMSP 3D Image Quality Assessment Database (https://mmspg.epfl.ch/downloads/3diqa/) [12] consists of stereoscopic images. Specifically, 10 indoor and outdoor scenes were captured with a wide variety of colors, textures, and depth structures. Furthermore, 6 different stimuli corresponding to different camera distances (10, 20, 30, 40, 50, and 60 cm) were considered for each scene.

The Image Retargeting Subjective Quality (http://ivp.ee.cuhk.edu.hk/projects/demo/retargeting/index.html) (IRSQ) Database [21], [20] consists of 57 reference images grouped into four attributes, specifically face and people, clear foreground object, natural scenery, and geometric structure. Moreover, ten different retargeting methods (cropping, seam carving, scaling, shift-map editing, scale and stretch, etc.) were applied to generate the retargeted images. In total, 171 test images can be found in this database.

Tampere Image Database 2013 (http://www.ponomarenko.info/tid2013.htm) (TID2013) [23] contains 25 reference images and 3,000 distorted images (25 reference images × 24 types of distortions × 5 levels of distortions). MOS values are provided as subjective scores, ranging from 0 to 9.

The CID2013 database (http://www.helsinki.fi/psychology/groups/visualcognition/) [37] contains 474 images with authentic distortions captured by 79 imaging devices, such as mobile phones, digital still cameras, and digital single-lens reflex cameras.

The LIVE In the Wild Image Quality Challenge Database (http://live.ece.utexas.edu/research/ChallengeDB/) [11] contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. The database has over 350,000 opinion scores on 1,162 images evaluated by over 8,100 unique human observers.

The Waterloo Exploration database (https://ece.uwaterloo.ca/~k29ma/exploration/) [19] consists of 4,744 reference images and 94,880 distorted images created from them. Instead of collecting MOS values for each test image, the authors introduced three alternative criteria to evaluate the performance of IQA models: the discriminability test (D-test), the listwise ranking consistency test (L-test), and the pairwise preference consistency test (P-test).

In contrast to other databases built on artificial distortions, MDID (https://www.sz.tsinghua.edu.cn/labs/vipl/mdid.html) [31] derives distorted images from reference images using random types and levels of distortions. In this way, each distorted image contains multiple types of distortions simultaneously. Gaussian noise, Gaussian blur, contrast change, JPEG noise, and JPEG2000 noise were considered.
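To make the multiple-distortion idea concrete, the following minimal Python sketch chains four of the five MDID distortion types, each applied with random probability and at a random level (JPEG2000 compression can be handled analogously). This is not the authors' generation code; the probability of 0.5, the distortion parameter ranges, and the file name reference.png are illustrative assumptions.

```python
import io

import numpy as np
from PIL import Image, ImageEnhance, ImageFilter


def multiply_distort(img, rng):
    """Apply a random chain of MDID-style distortions to a PIL RGB image."""
    if rng.random() < 0.5:  # Gaussian blur at a random radius
        img = img.filter(ImageFilter.GaussianBlur(radius=float(rng.uniform(0.5, 3.0))))
    if rng.random() < 0.5:  # contrast change by a random factor
        img = ImageEnhance.Contrast(img).enhance(float(rng.uniform(0.4, 1.6)))
    if rng.random() < 0.5:  # additive Gaussian white noise
        arr = np.array(img, dtype=np.float32)
        arr += rng.normal(0.0, float(rng.uniform(2.0, 15.0)), size=arr.shape)
        img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if rng.random() < 0.5:  # JPEG compression at a random quality
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=int(rng.integers(10, 80)))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img


rng = np.random.default_rng(0)
reference = Image.open("reference.png").convert("RGB")  # hypothetical file name
distorted = multiply_distort(reference, rng)
distorted.save("distorted.png")
```

Because every distortion is sampled independently, a single distorted image may carry, for example, both blur and compression artifacts at once, which is exactly the property that sets MDID apart from single-distortion databases.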

The main challenge in applying state-of-the-art deep learning methods to predict image quality in the wild is the relatively small size of existing quality-scored datasets. The reason for the lack of larger datasets is the massive resources required to generate diverse and publishable content. KonIQ-10k (http://database.mmsp-kn.de/koniq-10k-database.html) [16] presents a new systematic and scalable approach to creating large-scale, authentic image datasets for image quality assessment. KonIQ-10k [16] consists of 10,073 images, on which large-scale crowdsourcing experiments were carried out to obtain reliable quality ratings from 1,467 crowd workers (1.2 million ratings) [29]. During the test, users exhibiting unusual scoring behavior were removed.

KADID-10k (http://database.mmsp-kn.de/kadid-10k-database.html) [17] consists of 81 pristine images and 10,125 distorted images, derived from the pristine images by applying 25 different distortion types at 5 intensity levels (81 × 25 × 5 = 10,125). In contrast, KADIS-700k [17] contains 140,000 pristine images and 700,000 distorted images (five distorted versions of each pristine image), but MOS values are not given in this database.

Database | Year | Reference images | Test images | Distortion type | Subjective score
IVC [15] | 2005 | 10 | 235 | artificial | MOS (1-5)
LIVE IQA [30] | 2006 | 29 | 779 | artificial | DMOS (0-100)
A57 [5] | 2007 | 3 | 54 | artificial | DMOS (0-1)
Toyama [36] | 2008 | 14 | 168 | artificial | MOS (1-5)
TID2008 [24] | 2008 | 25 | 1,700 | artificial | MOS (0-9)
CSIQ [14] | 2009 | 30 | 866 | artificial | DMOS (0-1)
IVC-LAR [3] | 2009 | 8 | 120 | artificial | MOS (1-5)
WIQ [8], [9] | 2009 | 7 | 80 | artificial | DMOS (0-100)
MMSP 3D [12] | 2009 | 9 | 54 | artificial | MOS (0-100)
IRSQ [21], [20] | 2011 | 57 | 171 | artificial | MOS (0-5)
TID2013 [23] | 2013 | 25 | 3,000 | artificial | MOS (0-9)
CID2013 [37] | 2013 | 8 | 474 | authentic | MOS (0-9)
LIVE In the Wild [11] | 2016 | - | 1,162 | authentic | MOS (1-5)
Waterloo Exploration [19] | 2016 | 4,744 | 94,880 | artificial | -
MDID [31] | 2017 | 20 | 1,600 | artificial | MOS (0-8)
KonIQ-10k [16] | 2018 | - | 10,073 | authentic | MOS (1-5)
KADID-10k [17] | 2019 | 81 | 10,125 | artificial | MOS (1-5)
KADIS-700k [17] | 2019 | 140,000 | 700,000 | artificial | -
Table 1: Major publicly available image quality assessment databases. These databases can be divided into three groups. The first group contains a smaller set of reference images and artificially distorted images derived from them using different noise types at different intensity levels. The second group contains images with authentic distortions and no reference images. The third group contains only pristine images, distorted images, and distortion levels, without MOS values.

3 Experimental results

Method | Year | PLCC | SROCC | KROCC
BLeSS-SR-SIM [33] | 2016 | 0.7535 | 0.8148 | 0.6258
BLeSS-FSIM [33] | 2016 | 0.8193 | 0.8467 | 0.6576
BLeSS-FSIMc [33] | 2016 | 0.8527 | 0.8827 | 0.7018
CBM [10] | 2005 | 0.7367 | 0.7212 | 0.5306
CSV [34] | 2016 | 0.8785 | 0.8814 | 0.6998
CW-SSIM [28] | 2009 | 0.5900 | 0.6148 | 0.4450
DSS [4] | 2015 | 0.8714 | 0.8661 | 0.6793
ESSIM [47] | 2013 | 0.6694 | 0.8253 | 0.6349
FSIM [45] | 2011 | 0.8591 | 0.8870 | 0.7074
FSIMc [45] | 2011 | 0.8639 | 0.8902 | 0.7122
GMSD [42] | 2013 | 0.8544 | 0.8617 | 0.6797
HaarPSI [27] | 2018 | 0.9051 | 0.9028 | 0.7340
MAD [14] | 2010 | 0.7439 | 0.7243 | 0.5327
MCSD [38] | 2016 | 0.8386 | 0.8457 | 0.6622
MDSI ('mult') [22] | 2016 | 0.8130 | 0.8278 | 0.6441
MDSI ('sum') [22] | 2016 | 0.8249 | 0.8363 | 0.6527
MS-SSIM [41] | 2003 | 0.7884 | 0.8292 | 0.6360
MS-UNIQUE [26] | 2017 | 0.8604 | 0.8712 | 0.6893
NQM [6] | 2000 | 0.6177 | 0.5869 | 0.4143
PerSIM [32] | 2015 | 0.8282 | 0.8196 | 0.6296
PSNR-HVS [7] | 2006 | 0.6790 | 0.6637 | 0.4845
PSNR-HVS-M [25] | 2007 | 0.6875 | 0.6739 | 0.4944
QILV [1] | 2006 | 0.3296 | 0.4592 | 0.3214
QSSIM [13] | 2011 | 0.8022 | 0.8014 | 0.6074
RFSIM [46] | 2010 | 0.7035 | 0.6758 | 0.4884
SCIELAB [48] | 1997 | 0.2552 | 0.1232 | 0.0824
SR-SIM [43] | 2012 | 0.7948 | 0.8517 | 0.6683
SSIM [39] | 2004 | 0.5798 | 0.5761 | 0.4105
SSIM CNN [2] | 2018 | 0.8706 | 0.8804 | 0.6992
SUMMER [35] | 2019 | 0.7427 | 0.7343 | 0.5434
UQI [40] | 2002 | 0.2175 | 0.3608 | 0.2476
VSI [44] | 2014 | 0.7883 | 0.8570 | 0.6710
Table 2: Performance comparison of 32 FR-IQA algorithms on the MDID database.

The evaluation of objective visual quality assessment is based on the correlation between the predicted and the ground-truth quality scores. Pearson's linear correlation coefficient (PLCC) and Spearman's rank order correlation coefficient (SROCC) are widely applied to this end. Furthermore, some authors report Kendall's rank order correlation coefficient (KROCC) as well.

The PLCC between data sets $A$ and $B$ is defined as

$$\mathrm{PLCC}(A,B)=\frac{\sum_{i=1}^{n}(A_i-\bar{A})(B_i-\bar{B})}{\sqrt{\sum_{i=1}^{n}(A_i-\bar{A})^2}\sqrt{\sum_{i=1}^{n}(B_i-\bar{B})^2}}, \tag{1}$$

where $\bar{A}$ and $\bar{B}$ stand for the averages of sets $A$ and $B$, and $A_i$ and $B_i$ denote the $i$th elements of sets $A$ and $B$, respectively. For two ranked sets $A$ and $B$, SROCC is defined as

$$\mathrm{SROCC}(A,B)=\frac{\sum_{i=1}^{n}(A_i-\hat{A})(B_i-\hat{B})}{\sqrt{\sum_{i=1}^{n}(A_i-\hat{A})^2}\sqrt{\sum_{i=1}^{n}(B_i-\hat{B})^2}}, \tag{2}$$

where $\hat{A}$ and $\hat{B}$ are the middle ranks of sets $A$ and $B$. The KROCC between data sets $A$ and $B$ can be calculated as

$$\mathrm{KROCC}(A,B)=\frac{2(n_c-n_d)}{n(n-1)}, \tag{3}$$

where $n$ is the length of the input vectors, $n_c$ is the number of concordant pairs between $A$ and $B$, and $n_d$ is the number of discordant pairs between $A$ and $B$.
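All three coefficients are available off the shelf; the short Python sketch below computes them with SciPy on made-up score vectors (in practice, mos would hold the ground-truth MOS values and pred the predictions of one metric). Note that scipy.stats.kendalltau returns the tau-b variant, which coincides with Eq. (3) when there are no ties.

```python
import numpy as np
from scipy import stats

# Illustrative toy data: six images, ground-truth MOS and metric predictions.
mos = np.array([3.1, 4.5, 2.2, 3.8, 1.9, 4.1])
pred = np.array([0.62, 0.88, 0.41, 0.75, 0.35, 0.81])

plcc, _ = stats.pearsonr(mos, pred)     # Eq. (1): linear correlation
srocc, _ = stats.spearmanr(mos, pred)   # Eq. (2): PLCC computed on ranks
krocc, _ = stats.kendalltau(mos, pred)  # Eq. (3): concordant vs. discordant pairs
print(f"PLCC={plcc:.4f}  SROCC={srocc:.4f}  KROCC={krocc:.4f}")
```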

We collected 31 FR-IQA metrics whose source codes are available online and, in addition, reimplemented SSIM CNN (https://github.com/Skythianos/Pretrained-CNNs-for-full-reference-image-quality-assessment) [2] in MATLAB R2019a, giving 32 evaluated methods in total. In Table 2, we present the PLCC, SROCC, and KROCC values measured on the MDID database. It can be clearly seen from the results that there is still considerable room for improving FR-IQA algorithms, because only HaarPSI [27] was able to produce PLCC and SROCC values higher than 0.9. Furthermore, only three methods (FSIM [45], FSIMc [45], and HaarPSI [27]) were able to produce KROCC values higher than 0.7. A sketch of the evaluation protocol follows.
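For readers who wish to reproduce this kind of evaluation, the following hedged Python sketch outlines the protocol: compute one FR-IQA metric for every distorted image against its reference, then correlate the predictions with the MOS values. PSNR from scikit-image stands in for the 32 evaluated metrics, and the directory layout ("mdid/reference", "mdid/distorted") and CSV column names are hypothetical assumptions, not the format of the actual MDID release.

```python
import csv

from scipy import stats
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio

mos, pred = [], []
# Assumed CSV columns: dist_name, ref_name, mos (hypothetical layout).
with open("mdid/mos.csv", newline="") as f:
    for row in csv.DictReader(f):
        dist = imread(f"mdid/distorted/{row['dist_name']}")
        ref = imread(f"mdid/reference/{row['ref_name']}")
        pred.append(peak_signal_noise_ratio(ref, dist))  # stand-in FR metric
        mos.append(float(row["mos"]))

print("PLCC :", stats.pearsonr(mos, pred)[0])
print("SROCC:", stats.spearmanr(mos, pred)[0])
print("KROCC:", stats.kendalltau(mos, pred)[0])
```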

4 Conclusion

First, we gave an overview of the most widely used image quality databases. Subsequently, we extensively evaluated 32 state-of-the-art FR-IQA methods on the MDID database, whose images contain multiple types of distortions simultaneously. We demonstrated that there is still considerable room for improving FR-IQA algorithms, because only HaarPSI [27] was able to produce PLCC and SROCC values higher than 0.9.

References

  • [1] S. Aja-Fernandez, R. S. J. Estepar, C. Alberola-Lopez, and C. Westin (2006) Image quality assessment based on local variance. In 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4815–4818. Cited by: Table 2.
  • [2] S. A. Amirshahi, M. Pedersen, and A. Beghdadi (2018) Reviving traditional image quality metrics using cnns. In Color and Imaging Conference, Vol. 2018, pp. 241–246. Cited by: Table 2, §3.
  • [3] M. Babel (2009) Subjective quality assessment of lar coded art images. Note: http://www.irccyn.ec-nantes.fr/~autrusse/Databases/ Cited by: §1, Table 1, §2.
  • [4] A. Balanov, A. Schwartz, Y. Moshe, and N. Peleg (2015) Image quality assessment based on dct subband similarity. In 2015 IEEE International Conference on Image Processing (ICIP), pp. 2105–2109. Cited by: Table 2.
  • [5] D. M. Chandler and S. S. Hemami (2007) VSNR: a wavelet-based visual signal-to-noise ratio for natural images. IEEE transactions on image processing 16 (9), pp. 2284–2298. Cited by: §1, Table 1, §2.
  • [6] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik (2000) Image quality assessment based on a degradation model. IEEE transactions on image processing 9 (4), pp. 636–650. Cited by: Table 2.
  • [7] K. Egiazarian, J. Astola, N. Ponomarenko, V. Lukin, F. Battisti, and M. Carli (2006) New full-reference quality metrics based on hvs. In Proceedings of the Second International Workshop on Video Processing and Quality Metrics, Vol. 4. Cited by: Table 2.
  • [8] U. Engelke, M. Kusuma, H. Zepernick, and M. Caldera (2009) Reduced-reference metric design for objective perceptual quality assessment in wireless imaging. Signal Processing: Image Communication 24 (7), pp. 525–547. Cited by: Table 1, §2.
  • [9] U. Engelke, H. Zepernick, and T. M. Kusuma (2010) Subjective quality assessment for wireless image communication: the wireless imaging quality database. In International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), Cited by: Table 1, §2.
  • [10] X. Gao, T. Wang, and J. Li (2005) A content-based image quality metric. In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing, pp. 231–240. Cited by: Table 2.
  • [11] D. Ghadiyaram and A. C. Bovik (2015) Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing 25 (1), pp. 372–387. Cited by: §1, Table 1, §2.
  • [12] L. Goldmann, F. De Simone, and T. Ebrahimi (2010) Impact of acquisition distortion on the quality of stereoscopic images. In Proceedings of the International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Cited by: §1, Table 1, §2.
  • [13] A. Kolaman and O. Yadid-Pecht (2011) Quaternion structural similarity: a new quality index for color images. IEEE Transactions on Image Processing 21 (4), pp. 1526–1536. Cited by: Table 2.
  • [14] E. C. Larson and D. M. Chandler (2010) Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging 19 (1), pp. 011006. Cited by: §1, Table 1, §2, Table 2.
  • [15] P. Le Callet and F. Autrusseau (2005) Subjective quality assessment irccyn/ivc database. Note: http://www.irccyn.ec-nantes.fr/ivcdb/ Cited by: §1, Table 1, §2.
  • [16] H. Lin, V. Hosu, and D. Saupe (2018) KonIQ-10k: towards an ecologically valid and large-scale iqa database. arXiv preprint arXiv:1803.08489. Cited by: §1, Table 1, §2.
  • [17] H. Lin, V. Hosu, and D. Saupe (2019) KADID-10k: a large-scale artificially distorted iqa database. In 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), pp. 1–3. Cited by: §1, Table 1, §2.
  • [19] K. Ma, Z. Duanmu, Q. Wu, Z. Wang, H. Yong, H. Li, and L. Zhang (2017) Waterloo exploration database: New challenges for image quality assessment models. IEEE Transactions on Image Processing 26 (2), pp. 1004–1016. Cited by: §1, §1, Table 1, §2.
  • [20] L. Ma, W. Lin, C. Deng, and K. N. Ngan (2012) Study of subjective and objective quality assessment of retargeted images. In Circuits and Systems (ISCAS), 2012 IEEE International Symposium on, pp. 2677–2680. Cited by: §1, Table 1, §2.
  • [21] L. Ma, W. Lin, C. Deng, and K. N. Ngan (2012) Image retargeting quality assessment: a study of subjective scores and objective metrics. IEEE Journal of Selected Topics in Signal Processing 6 (6), pp. 626–639. Cited by: §1, Table 1, §2.
  • [22] H. Z. Nafchi, A. Shahkolaei, R. Hedjam, and M. Cheriet (2016) Mean deviation similarity index: efficient and reliable full-reference image quality evaluator. IEEE Access 4, pp. 5579–5590. Cited by: Table 2.
  • [23] N. Ponomarenko, O. Ieremeiev, V. Lukin, K. Egiazarian, L. Jin, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al. (2013) Color image database tid2013: peculiarities and preliminary results. In Visual Information Processing (EUVIP), 2013 4th European Workshop on, pp. 106–111. Cited by: §1, Table 1, §2.
  • [24] N. Ponomarenko, V. Lukin, A. Zelensky, K. Egiazarian, M. Carli, and F. Battisti (2009) TID2008-a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics 10 (4), pp. 30–45. Cited by: §1, Table 1, §2.
  • [25] N. Ponomarenko, F. Silvestri, K. Egiazarian, M. Carli, J. Astola, and V. Lukin (2007) On between-coefficient contrast masking of dct basis functions. In Proceedings of the third international workshop on video processing and quality metrics, Vol. 4. Cited by: Table 2.
  • [26] M. Prabhushankar, D. Temel, and G. AlRegib (2017) Ms-unique: multi-model and sharpness-weighted unsupervised image quality estimation. Electronic Imaging 2017 (12), pp. 30–35. Cited by: Table 2.
  • [27] R. Reisenhofer, S. Bosse, G. Kutyniok, and T. Wiegand (2018) A haar wavelet-based perceptual similarity index for image quality assessment. Signal Processing: Image Communication 61, pp. 33–43. Cited by: Table 2, §3, §4.
  • [28] M. P. Sampat, Z. Wang, S. Gupta, A. C. Bovik, and M. K. Markey (2009) Complex wavelet structural similarity: a new image similarity index. IEEE transactions on image processing 18 (11), pp. 2385–2401. Cited by: Table 2.
  • [29] D. Saupe, F. Hahn, V. Hosu, I. Zingman, M. Rana, and S. Li (2016) Crowd workers proven useful: a comparative study of subjective video quality assessment. In QoMEX 2016: 8th International Conference on Quality of Multimedia Experience, Cited by: §2.
  • [30] H. R. Sheikh, M. F. Sabir, and A. C. Bovik (2006) A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on image processing 15 (11), pp. 3440–3451. Cited by: §1, Table 1, §2.
  • [31] W. Sun, F. Zhou, and Q. Liao (2017) MDID: a multiply distorted image database for image quality assessment. Pattern Recognition 61, pp. 153–168. Cited by: §1, Table 1, §2.
  • [32] D. Temel and G. AlRegib (2015) PerSIM: multi-resolution image quality assessment in the perceptually uniform color domain. In 2015 IEEE International Conference on Image Processing (ICIP), pp. 1682–1686. Cited by: Table 2.
  • [33] D. Temel and G. AlRegib (2016) BLeSS: bio-inspired low-level spatiochromatic similarity assisted image quality assessment. In 2016 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. Cited by: Table 2.
  • [34] D. Temel and G. AlRegib (2016) CSV: image quality assessment based on color, structure, and visual system. Signal Processing: Image Communication 48, pp. 92–103. Cited by: Table 2.
  • [35] D. Temel and G. AlRegib (2019) Perceptual image quality assessment through spectral analysis of error representations. Signal Processing: Image Communication 70, pp. 37–46. Cited by: Table 2.
  • [36] S. Tourancheau, F. Autrusseau, P. Sazzad, and Y. Horita (2008) Impact of the subjective dataset on the performance of image quality metrics. In IEEE International Conference on Image Processing 2008. ICIP 2008., Cited by: §1, Table 1, §2.
  • [37] T. Virtanen, M. Nuutinen, M. Vaahteranoksa, P. Oittinen, and J. Häkkinen (2014) CID2013: a database for evaluating no-reference image quality assessment algorithms. IEEE Transactions on Image Processing 24 (1), pp. 390–402. Cited by: §1, §1, Table 1, §2.
  • [38] T. Wang, L. Zhang, H. Jia, B. Li, and H. Shu (2016) Multiscale contrast similarity deviation: an effective and efficient index for perceptual image quality assessment. Signal Processing: Image Communication 45, pp. 1–9. Cited by: Table 2.
  • [39] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, et al. (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: Table 2.
  • [40] Z. Wang and A. C. Bovik (2002) A universal image quality index. IEEE signal processing letters 9 (3), pp. 81–84. Cited by: Table 2.
  • [41] Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2, pp. 1398–1402. Cited by: Table 2.
  • [42] W. Xue, L. Zhang, X. Mou, and A. C. Bovik (2013) Gradient magnitude similarity deviation: a highly efficient perceptual image quality index. IEEE Transactions on Image Processing 23 (2), pp. 684–695. Cited by: Table 2.
  • [43] L. Zhang and H. Li (2012) SR-sim: a fast and high performance iqa index based on spectral residual. In 2012 19th IEEE international conference on image processing, pp. 1473–1476. Cited by: Table 2.
  • [44] L. Zhang, Y. Shen, and H. Li (2014) VSI: a visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing 23 (10), pp. 4270–4281. Cited by: Table 2.
  • [45] L. Zhang, L. Zhang, X. Mou, and D. Zhang (2011) FSIM: a feature similarity index for image quality assessment. IEEE transactions on Image Processing 20 (8), pp. 2378–2386. Cited by: Table 2, §3.
  • [46] L. Zhang, L. Zhang, and X. Mou (2010) RFSIM: a feature based image quality assessment metric using riesz transforms. In 2010 IEEE International Conference on Image Processing, pp. 321–324. Cited by: Table 2.
  • [47] X. Zhang, X. Feng, W. Wang, and W. Xue (2013) Edge strength similarity for image quality assessment. IEEE Signal processing letters 20 (4), pp. 319–322. Cited by: Table 2.
  • [48] X. Zhang, D. A. Silverstein, J. E. Farrell, and B. A. Wandell (1997) Color image quality metric s-cielab and its application on halftone texture visibility. In Proceedings IEEE COMPCON 97. Digest of Papers, pp. 44–48. Cited by: Table 2.