Task-Driven Super Resolution: Object Detection in Low-resolution Images


Muhammad Haris (TTI, Japan), Greg Shakhnarovich (TTI-C, United States), and Norimichi Ukita (TTI, Japan)
{mharis, ukita}@toyota-ti.ac.jp, greg@ttic.edu
Abstract

We consider how image super resolution (SR) can contribute to an object detection task on low-resolution images. Intuitively, SR should have a positive impact on the object detection task. While several previous works demonstrated that this intuition is correct, the SR model and the detector were optimized independently in those works. This paper proposes a novel framework to train a deep neural network in which the SR sub-network explicitly incorporates a detection loss in its training objective, via a tradeoff with a traditional reconstruction loss. This end-to-end training procedure allows us to train SR preprocessing for any differentiable detector. We demonstrate that our task-driven SR consistently and significantly improves the accuracy of an object detector on low-resolution images, for a variety of conditions and scaling factors.

Keywords:
Super Resolution, Object Detection, End-to-End Learning, Task-Driven Image Processing

1 Introduction

Image Super Resolution (SR) belongs to the family of image restoration and enhancement (e.g., denoising and deblurring) algorithms widely studied in computer vision and graphics. In both communities, the goal is to reconstruct an image from a degraded version as accurately as possible. The quality of the reconstructed image is evaluated by pixel-based quantitative metrics such as PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) [1]. Recently proposed perceptual quality metrics [2, 3, 4] can also be employed for evaluation as well as for optimizing the reconstruction model. Relationships between the pixel-based and perceptual quality metrics have been investigated in the literature [5, 6] in order to harmonize these two kinds of metrics. Ultimately, the goal of SR is still to restore an image as well as possible in accordance with criteria of human visual perception.

One connection between SR (and other image restoration tools) and visual recognition is that, despite continuing advances, visual recognition remains vulnerable to a wide range of image degradations, including low resolution and blur [7, 8]. Image restoration such as SR can serve as an input enhancement step to alleviate this vulnerability. For example, the accuracy of many recognition tasks can be improved by deblurring [9, 10, 11, 12] or denoising [13]. SR has also been shown to be effective as such preprocessing for several recognition tasks [14, 15, 16, 17, 18].

Figure 1: Scale sensitivity in object recognition and the effectiveness of our proposed method (i.e., end-to-end learning in accordance with an object recognition task). The top row shows (a) an original high-resolution (HR) image, (b) its low-resolution (LR) version (padded with black), (c) the SR image obtained by bicubic interpolation (PSNR: 21.26), (d) the SR image obtained by an SR model optimized with no regard to detection (PSNR: 22.02), and (e) the SR image obtained by our proposed task-driven SR method, using the same model as in (d) (PSNR: 21.54). For each of the reconstructed HR images, PSNR is reported w.r.t. the original. Despite ostensibly lower PSNR, the TDSR result recovers the correct detection results with high scores, in this case even suppressing a false detection present in the original HR input, while at the same time producing a plausible-looking HR image.

Typically, in such applications the SR is trained in isolation from the downstream task, with the only connection through the selection of images to train or fine-tune the SR method (e.g., for character recognition, SR is trained on character images).

We propose to bridge this divide, and explicitly incorporate the objective of the downstream task (such as object detection) into training of an SR module. Figure 1 illustrates the effect of our proposed, task-driven approach to SR. SR images (c), (d), and (e) generated from a low-resolution (LR) image (b) can successfully bring recognition accuracy close to the score of their original high-resolution (HR) image (a).

Our approach is motivated by two observations:

SR is ill-posed:

many HR images when downsampled produce the same LR image. We expect that the additional cue given by the downstream task objective such as detection may help guide the SR solution.

Human perception and machine perception differ:

It is known that there are large differences between human and machine perception, particularly with highly complex deep networks. This is perhaps best exemplified by adversarial images [19, 20, 21] that can “fool” machine perception but not humans. Thus, if our goal is to super-resolve an image for machine consumption, we believe it is prudent to explicitly “cater” to machine perception when learning SR.

The two SR images in Fig. 1 (d) and (e) illustrate these points. Both look similar to the human eye, but the detection results produced by a network differ significantly. Furthermore, the conventional measure of reconstruction quality (PSNR) fails to capture the difference, assigning a significantly higher value to (d), which yields much worse detection results. The main contributions of this paper are:

  • An approach to super-resolution that uses the power of end-to-end training in deep learning to combine low-level and high-level vision objectives, leading to what we call Task-Driven Super Resolution (TDSR). As a means of increasing robustness of object detection to low resolution inputs, this approach provides results substantially better than other SR methods, and is potentially applicable to a broad range of low-level image processing tools and high-level tasks.

  • A novel view of super-resolution, explicitly acknowledging the generative or semantic aspects of SR in high scaling factors, which we hope will encourage additional work in the community to help further reduce the gap between low-level and high-level vision.

2 Related Work

While there has been much work on super-resolution and on evaluating and improving measures of perceptual quality of images, comparatively little work exists on optimizing image restoration tools for machine perception.

2.0.1 Image quality assessment

Image restoration and enhancement require appropriate quality assessment metrics both for evaluation and (when machine learning is used) as training objectives. As mentioned in Sec. 1, PSNR and SSIM [1] are widely used as such metrics, focusing on comparing a reconstructed/estimated image with its ground truth image. There exist methods for quality assessment that do not require a reference ground truth image [22, 23], including some that use deep neural networks to learn the metrics [24, 25].

Several quality assessment metrics [26, 27, 28] have been evaluated specifically for SR, including no-reference metrics [29]. However, all of these metrics are a proxy for (assumed or approximated) human judgments of perceptual quality, and do not consider high-level visual tasks such as recognition.

Some task-dependent quality assessment metrics have been proposed for certain tasks, including biometrics [30], face recognition [31], and object recognition [32], showing improvements vs. the task-agnostic metrics. None of them, however, have been used in a joint learning framework with the underlying image enhancement such as SR.

2.0.2 Image Super Resolution

A huge variety of image SR techniques have been proposed; see survey papers [33, 34, 35] for more details. While self-contained SR is attractive (e.g., self-similarity based SR [36, 37, 38]), most recent SR algorithms utilize external training images for higher performance; for example, exemplar based [39, 40, 41], regression based [42, 43], and web-retrieval based [44]. The effectiveness of using both self and external images is explored in [45, 46].

Like other vision problems, SR has benefited from recent advances in deep convolutional neural networks (DCNNs). SRCNN [47] enhances the spatial resolution of an input LR image by hand-crafted upsampling filters. The enlarged image is then improved by a DCNN. Further improvements are achieved with more advanced architectures, introducing residual connections [48, 49] and recursive layers [50], however the use of the hand-crafted upsampling filters remains an impediment. That can be alleviated by embedding an upsampling layer into a DCNN [51, 52, 53]. Progressive upsampling [54] is also effective for leveraging information from different scales. By sharing the SR features at different scales by iterative forward and backward projections, DBPN-SR [55] enables the networks to preserve the HR components by learning various up- and down-sampling operators while generating deeper features.

While deep features provided by DCNNs allow us to preserve clear high-frequency photo-realistic textures, it is difficult to completely eliminate blur artifacts. This problem has been addressed by introduction of novel objectives, such as perceptual similarity [2, 3] and adversarial losses [56, 57]. Finally, the two ideas can be combined, incorporating perceptual similarity into generative adversarial networks (GANs) in SRGAN [58].

In contrast to prior work, we explicitly incorporate the objective of a well defined, discriminative task (such as detection) into the SR framework.

2.0.3 Object detection

Most state-of-the-art object detection algorithms extract or evaluate object proposals (e.g., bounding boxes) [59, 60, 61, 62, 63] within a query image and evaluate the “objectness” of each bounding box for object detection, using DCNN features computed or pooled over each box. In many recent models, the mechanism for producing candidate boxes is incorporated into the network architecture [64, 65].

Unlike approaches using object proposals, SSD [66] and YOLO9000 [67] use pre-set default boxes (a.k.a. anchor boxes) covering a query image. The objectness score is computed for each object category in all boxes, while the boxes' spatial parameters (e.g., location, scale, and aspect ratio) are optimized. This streamlines the computation at test time and produces an extremely fast, as well as accurate, detection framework.

2.0.4 Connections to generative models

There is also an interesting connection between our approach and the gradient-based adversarial images [19] as well as the popular “neural art” technique called DeepDream [68]. In both of those, an input image (at full resolution) is modified using gradient descent with the objective to achieve certain output for an image classification network. For adversarial images the goal is to make the network predict an incorrect class, while in DeepDream the goals are aesthetic.

3 Task Driven Super-resolution

Our method relies on two building blocks: a super-resolution (SR) network $S$ and a task network $T$. The SR network maps a low-resolution image $x$ to a high-resolution image $\hat{I} = S(x;\theta)$, where $\theta$ denotes all the parameters of the network. The task network takes an image $I$ and outputs a (possibly structured) prediction $y = T(I)$. We refer to these predictors as “networks” because they are currently likely to be deep neural networks. However, our approach does not presume anything about $S$ and $T$ beyond differentiability.

We assume that the task network has been trained and its parameters remain fixed throughout (and will, for brevity, be omitted from notation). Thus, our method is applicable to any task network, and can be used to make an off-the-shelf network that fails on low-resolution inputs more robust to such inputs. It can be used for a variety of tasks, for example, depth estimation or semantic segmentation. However, in this paper we restrict our attention to the object detection task, in which $y$ consists of a set of scored bounding boxes for given object classes.

3.1 Component networks

We use the recently proposed Deep Back-Projection Networks (DBPN) [55] as the SR component. DBPN achieves state-of-the-art or competitive results on standard SR benchmarks when trained with the MSE reconstruction loss

$$\mathcal{L}_{rec}(\hat{I}, I) \;=\; \frac{1}{|I|} \sum_{(i,j)} \left( \hat{I}_{i,j} - I_{i,j} \right)^2, \qquad (1)$$

where $(i,j)$ ranges over the pixel indices in the HR image $I$, and $|I|$ is the number of pixels.
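The loss in (1) is a simple per-pixel MSE; a minimal sketch in PyTorch (the function name is illustrative, not from the paper's code):

```python
import torch

def reconstruction_loss(sr_image, hr_image):
    """MSE reconstruction loss of Eq. (1): mean squared difference
    over all pixels of the super-resolved and ground-truth HR images."""
    return torch.mean((sr_image - hr_image) ** 2)
```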

As the detector, we use the Single Shot MultiBox Detector (SSD) [66]. The SSD detector works with a set of default bounding boxes, covering a range of positions, scales, and aspect ratios; each box is scored for the presence of an object from every class. Given the ground truth $y$ for an image $I$, a subset of $N$ default boxes is matched to the ground truth boxes, these matches forming the predicted detections $\hat{y}$. The task (detection) loss of SSD combines a confidence loss and a localization loss:

$$\mathcal{L}_{task}(\hat{y}, y) \;=\; \frac{1}{N} \left( \mathcal{L}_{conf}(\hat{y}, y) + \mathcal{L}_{loc}(\hat{y}, y) \right). \qquad (2)$$

The confidence loss penalizes incorrect class predictions for the matched boxes. The localization loss penalizes displacement of boxes vs. the ground truth, using the smooth $L_1$ distance. Both losses in (2) are differentiable with respect to their inputs.
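A minimal sketch of the combined loss in (2), assuming per-box class logits and box offsets for the $N$ matched default boxes are already available (function and argument names are illustrative, not SSD's actual API):

```python
import torch
import torch.nn.functional as F

def ssd_task_loss(cls_logits, box_preds, cls_targets, box_targets):
    """Eq. (2) sketch: confidence term (cross-entropy over class scores)
    plus localization term (smooth-L1 on box offsets), averaged over
    the N matched default boxes."""
    n = cls_targets.shape[0]                                    # matched boxes N
    conf = F.cross_entropy(cls_logits, cls_targets, reduction='sum')
    loc = F.smooth_l1_loss(box_preds, box_targets, reduction='sum')
    return (conf + loc) / n
```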

Importantly, every default bounding box in SSD is associated with a set of cells in feature maps (activation layers) computed by a convolutional neural network. As a result, since the loss in (2) decomposes over boxes, it is a differentiable function of the network activations and thus a function of the pixels in the input image, allowing us to incorporate this task loss in the TDSR objective described below.

Both of our chosen component networks have code made publicly available by their authors, and can be trained end to end, providing a convenient testbed for our approach; many other choices are possible, in particular for the detector component, but we do not explore them in this paper.

3.2 Task driven training

Normally, learning-based SR systems are trained using some sort of reconstruction loss $\mathcal{L}_{rec}$, such as the mean (over pixels) squared error (MSE) between the original HR image and the result of super-resolving its downscaled version with $S$. In contrast, the detector is trained with a surrogate loss intended to improve the measure of its accuracy, typically the average precision (AP) for one class, and the mean AP (mAP) over classes for the entire data set/task.

Let $I$ be an image from the detection data set, with detection ground truth labels $y$, and let $D(\cdot)$ denote downscaling of an image by a fixed factor. We propose the compound loss, which on the example $(I, y)$ is given by

$$\mathcal{L}(\theta) \;=\; \alpha\, \mathcal{L}_{rec}\!\left( S(D(I);\theta),\, I \right) \;+\; \beta\, \mathcal{L}_{task}\!\left( T\!\left( S(D(I);\theta) \right),\, y \right), \qquad (3)$$

where $\alpha$ and $\beta$ are weights determining the relative strength of the fidelity term (reconstruction loss) and the semantic term (detection loss). Under the assumption that both $T$ and $\mathcal{L}_{task}$ are differentiable, we can use the chain rule and compute the gradient of $\mathcal{L}_{task}$ with respect to its input, the super-resolved image $S(D(I);\theta)$. This per-pixel gradient is then combined with the per-pixel gradient of the reconstruction loss $\mathcal{L}_{rec}$, and the SR parameters are updated using standard back-propagation from the combined gradient:

$$\theta \;\leftarrow\; \theta \;-\; \eta \left( \alpha\, \nabla_\theta \mathcal{L}_{rec} \;+\; \beta\, \nabla_\theta \mathcal{L}_{task} \right), \qquad (4)$$

where $\eta$ is the learning rate.
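The training step of (3)-(4) can be sketched in PyTorch; this is a minimal illustration, assuming `sr_net` (the SR network $S$), `detector` (the frozen task network $T$), and `task_loss_fn` ($\mathcal{L}_{task}$) are supplied by the caller, and the default `scale` and weights are illustrative:

```python
import torch
import torch.nn.functional as F

def tdsr_step(sr_net, detector, task_loss_fn, optimizer,
              hr_image, labels, scale=4, alpha=1.0, beta=0.01):
    """One task-driven SR update per Eqs. (3)-(4). Only the SR
    network's parameters are in the optimizer; the detector is
    assumed frozen (its parameters are never updated)."""
    # D(I): fixed bicubic downscaling of the HR image.
    lr_image = F.interpolate(hr_image, scale_factor=1.0 / scale,
                             mode='bicubic', align_corners=False)
    sr_image = sr_net(lr_image)                          # S(D(I); theta)
    rec_loss = torch.mean((sr_image - hr_image) ** 2)    # L_rec, Eq. (1)
    det_loss = task_loss_fn(detector(sr_image), labels)  # L_task, Eq. (2)
    loss = alpha * rec_loss + beta * det_loss            # Eq. (3)
    optimizer.zero_grad()
    loss.backward()    # chain rule through the detector into SR pixels
    optimizer.step()   # Eq. (4): update theta from the combined gradient
    return loss.item()
```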

3.2.1 Interpretation

As mentioned in Section 1, SR is an ill-posed problem. At sufficiently high upscaling factors, it resembles (conditional) image generation more than image restoration, since a large amount of information destroyed in the downscaling process must effectively be “hallucinated”. Most current image generation methods, such as those based on GANs or autoencoders, either do not explicitly regard the semantic content of the generated image, or “hardcode” it into the generator by training only on images of a specific class. Our objective (3) encourages the image to both look good to a human (similar to the original) and look correct to the machine (yield the same recognition results). The values of $\alpha$ and $\beta$ control this tradeoff. With $\beta = 0$, we effectively ignore the downstream task, and get the traditional, MSE-driven SR learning, with the limitations for downstream detection discussed in Section 2 and demonstrated in Section 4.

With $\alpha = 0$ we effectively ignore the original high-resolution image, and the objective is purely semantic. In this case, intuitively, if the “SR” method were to simply paste a fixed canonical object of the correct class at the appropriate location and scale in the image, and the detector correctly picked up on these objects, we would get a perfect value of the task loss. However, in this hypothetical scenario we would in effect replace the SR with an LR detector. That, of course, would bring us back to the original challenges of LR detection. We also would not get the extra benefit of creating a human-interpretable intermediate HR image, connected to the original LR input.

We expect the optimal tradeoff to be somewhere between these scenarios, incorporating meaningful contributions from both the reconstruction and the detection objectives. The precise “mixing” of the two is subject to algorithm design, as detailed below.

3.3 Training schedules

The definition of loss in (3) depends on the values of and , and we can consider a number of settings, both static (fixed weights) and dynamic (weights changing through training). We describe these here, and evaluate them in Section 4.

Fine-tune Generally, we assume that $S$ has been trained for super-resolution for a given factor on images from a domain that could be different from the domain of the task. We can simply fine-tune SR on the new domain, without incorporating the task loss: $\alpha = 1$, $\beta = 0$.

Balanced We can start with a phase of fine-tuning the SR on reconstruction only ($\alpha = 1$, $\beta = 0$) and then increase $\beta$ to a non-zero value, introducing the task-driven component. Note that the appropriate relative magnitude of $\beta$ with respect to $\alpha$ will depend not only on the desired tradeoff between the objectives, but also on the relative scale of the two loss functions.

Task only Alternatively, we can forgo the reconstruction-driven phase, and fine-tune with the task loss only: $\alpha = 0$, $\beta = 1$.

Gradual Finally, we can gradually increase $\beta$ from zero to a high value, training with each value for a number of iterations. We could expect this schedule to provide a more gentle introduction of the task objective, gradually refining the initially purely reconstruction-driven SR.
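The four schedules above can be sketched as a function from iteration count to $(\alpha, \beta)$; the stage boundaries for the gradual schedule follow the TDSR-Grad row of Table 1 (100k + 70k + 70k + 60k), while the balanced schedule's $\beta = 0.01$ is the value the experiments later identify as best:

```python
def schedule_weights(iteration, mode='gradual'):
    """Return (alpha, beta) weights for Eq. (3) at a given iteration,
    one mode per training schedule described in Sec. 3.3."""
    if mode == 'fine_tune':                  # reconstruction only
        return 1.0, 0.0
    if mode == 'balanced':                   # 100k warm-up, then add task loss
        return (1.0, 0.0) if iteration < 100_000 else (1.0, 0.01)
    if mode == 'task_only':                  # purely semantic objective
        return 0.0, 1.0
    if mode == 'gradual':                    # ramp beta up in stages (Table 1)
        if iteration < 100_000:
            return 1.0, 0.0
        if iteration < 170_000:
            return 1.0, 0.01
        if iteration < 240_000:
            return 1.0, 0.1
        return 1.0, 1.0
    raise ValueError(mode)
```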

4 Experimental Results

4.1 Implementation Details

Base networks DBPN [55] constructs mutually-connected up- and down-sampling layers, each of which represents different types of image degradation and high-resolution components. The stack of up- and down-projection units creates an efficient way to iteratively minimize the reconstruction error, to reconstruct a large variety of super-resolution features, and to enable large scaling factors such as ×8 enlargement. We used the settings recommended by the authors: a convolutional layer with striding of four and padding of two, and a convolutional layer with striding of eight and padding of two, are used to construct the projection units for ×4 and ×8 SR, respectively. For object detection, we use SSD300, where the input size is 300×300 pixels. The network uses VGG16 through the conv5_3 layer, then uses conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 as feature maps to predict the location and confidence score of each detected object. The code for both networks is publicly available on the internet.

Datasets We initialized all experiments with the DBPN model pretrained on the DIV2K data set [69], made available by the authors of [55]. We used an SSD network pretrained on PASCAL VOC0712 trainval, publicly available as well. When fine-tuning DBPN in our experiments, with or without the task-driven objective, we reused PASCAL VOC0712 trainval with data augmentation. The augmentation consists of photometric distortion, scaling, flipping, and random cropping, as recommended for training SSD. The VOC2007 test images were used for testing in all experiments. The input of DBPN was an LR image obtained by bicubic downscaling of the original HR image from the data set with a particular scaling factor (i.e., 4 or 8 in our experiments, corresponding to ×4 and ×8 SR).
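The LR inputs described above can be synthesized with a single bicubic downscaling step; a minimal sketch (the helper name is illustrative):

```python
import torch
import torch.nn.functional as F

def make_lr(hr_batch, scale):
    """Synthesize an LR training input by bicubic downscaling of an
    (N, C, H, W) HR batch with a fixed integer scaling factor."""
    return F.interpolate(hr_batch, scale_factor=1.0 / scale,
                         mode='bicubic', align_corners=False)
```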

Training setting We used a batch size of 6. The learning rate was initialized for all layers and decreased by a factor of 10 partway through training runs consisting of 300,000 iterations. For optimization, we used Adam with momentum. All experiments were conducted using PyTorch on NVIDIA TITAN X GPUs.

4.2 Comparison of Training Schedules

Following the discussion in Sec. 3.3, we investigate different settings and schedules for the values of $\alpha$ and $\beta$ that control the reconstruction-detection tradeoff in (3). Table 1 shows PSNR and mAP for a number of schedules, described on the left in (iterations : $\alpha$ : $\beta$) format, indicating training for the given number of iterations with the corresponding values of $\alpha$ (weight on reconstruction loss) and $\beta$ (weight on detection loss); '+' indicates continuation of training. The schedules are (a) SR: baseline using pretrained SR not fine-tuned on Pascal; (b) SR-FT: fine-tuned for 100k iterations; (c) SR-FT+: fine-tuned for 300k iterations; (d) TDSR-0.1: balanced schedule in which after 100k iterations of reconstruction-only training we introduce the detection loss with the constant weight $\beta = 0.1$; (e) TDSR-0.01: same but with $\beta = 0.01$; (f) TDSR-DET: $\alpha = 0$, $\beta = 1$, so only the detection loss is used to fine-tune SR for 300k iterations; and finally (g) TDSR-Grad: gradual increase of $\beta$ to 1 throughout the 300k iterations.

The values in the table provide us with multiple observations. First, it helps to fine-tune SR on the new domain: SR-FT has much higher PSNR and mAP than SR. It helps to fine-tune for longer, hence the better results with SR-FT+ (in both PSNR and mAP), but we start observing diminishing returns. Switching to variants of TDSR, we see a dramatic increase in mAP. As the relative value of $\beta$ becomes larger, we get additional improvements, but at the cost of a significant decline in PSNR (and, as we see in Fig. 2 and in Section 4.4, in visual quality). However, for a certain regime, namely TDSR-0.01, we see a much higher mAP than the no-task values, with only a marginal decline in PSNR. We thus identify this schedule as the best based on our experiments. Finally, the numbers in the table further illustrate that higher PSNR need not correspond to better detection results.

(a) HR (b) SR-FT+ (PSNR: 22.02 dB) (c) TDSR-DET (PSNR: 16.63 dB) (d) TDSR-Grad (PSNR: 19.45 dB) (e) TDSR-0.01 (PSNR: 21.54 dB)
Figure 2: Comparison of training schedules. PSNR values are for this image only.
HR: 75.78% mAP

Setting     iter : α : β                                  ×4 PSNR   ×4 mAP   ×8 PSNR   ×8 mAP
SR          0k:1:0                                        22.80     41.9     17.50     10.6
SR-FT       100k:1:0                                      26.60     52.6     22.70     22.0
SR-FT+      100k:1:0 + 200k:1:0                           26.67     53.6     22.81     22.9
TDSR-0.1    100k:1:0 + 200k:1:0.1                         25.13     61.6     21.08     36.1
TDSR-0.01   100k:1:0 + 200k:1:0.01                        24.01     62.2     22.24     37.5
TDSR-DET    300k:0:1                                      17.02     61.0     16.72     37.4
TDSR-Grad   100k:1:0 + 70k:1:0.01 + 70k:1:0.1 + 60k:1:1   21.80     61.5     19.78     37.2

Table 1: Comparison of training schedules for (3), evaluated on VOC2007 test. Each entry indicates training for the given number of iterations with the given α, β values; '+' indicates continuation of training. See text for additional explanations. The best score in each column is highlighted in red.

Table 2 shows detailed results comparing our TDSR method to other SR approaches, including the baseline bicubic SR and a recently proposed state-of-the-art SR method (SRGAN [58]). The comparison to SRGAN is particularly interesting since it uses a different kind of objective (adversarial/perceptual), which might be assumed to be better suited for task-driven SR. Note that all the other SR models were just pretrained, and not fine-tuned on Pascal. We also compared results obtained directly from LR images (padded with black to fit the pretrained SSD300 detector).

We see that the reduction in resolution has a drastic effect on the mAP of the detector, dropping it from 75.8 to 41.7 for ×4 and 16.6 for ×8. This is presumably due to both the actual loss of information and the limitations of the detector architecture, which may miss small bounding boxes. The performance is not significantly improved by non-task-driven SR methods, which in some cases actually harm it further! However, our proposed TDSR approach obtains significantly better results for both scaling factors, and recovers a significant fraction of the detection accuracy lost in LR.

Method       iter         ×4 mAP   ×8 mAP
LR           -            41.7     16.6
Bicubic      -            41.3     11.2
SRGAN [58]   -            44.6     13.4
DBPN [55]    -            41.9     10.6
SR-FT        100k         52.6     22.0
SR-FT+       100k+200k    53.6     22.9
TDSR         100k+200k    62.2     37.5

Table 2: VOC2007 test detection results on ×4 and ×8 enlargement. Note: original (HR) images obtained 75.8% mAP.

4.3 Comparison with Different SR Methods in More Difficult Scenarios

In realistic settings, images are afflicted by additional sources of corruption, which can aggravate the already serious damage from the reduction in resolution. In this final set of experiments, we evaluate our method and others in such settings. Here, the images (during both train and test phases) were also degraded by blur or noise prior to downscaling and processing by SR and detector. As in the other experiments, we kept the same originally pretrained SSD detector as before.

Blurred Images Every HR image was blurred by a Gaussian kernel. In training the SR network, both in pure SR fine-tuning and in TDSR joint optimization, the reconstruction objective $\mathcal{L}_{rec}$ was defined with respect to the original (clean) HR images.
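The blur degradation can be sketched as a depthwise convolution with a normalized Gaussian kernel; the kernel size and σ below are illustrative assumptions, since the extracted text does not restate the values used in the paper:

```python
import torch
import torch.nn.functional as F

def gaussian_blur(img, kernel_size=7, sigma=1.5):
    """Apply an isotropic Gaussian blur to an (N, C, H, W) batch.
    Kernel size and sigma are illustrative, not the paper's values."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g1d = g1d / g1d.sum()                        # normalize 1-D profile
    kernel = torch.outer(g1d, g1d)               # separable 2-D kernel
    c = img.shape[1]
    kernel = kernel.view(1, 1, kernel_size, kernel_size).repeat(c, 1, 1, 1)
    # groups=c applies the same kernel independently to each channel
    return F.conv2d(img, kernel, padding=kernel_size // 2, groups=c)
```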

The results of this experiment are shown in Table 3. As with clean images, our proposed method outperforms all other approaches for both scaling factors, even obtaining a small (and likely insignificant) improvement compared to the blurry HR inputs! This application of our method can be thought of as task-driven deblurring by super-resolution.

Method    iter         ×4 mAP   ×8 mAP
LR        -            40.1     16.2
Bicubic   -            42.9     11.8
SR-FT     -            54.7     23.9
SR-FT+    100k+200k    55.5     25.1
TDSR      100k+200k    63.8     39.1

Table 3: Analysis on blurred images. Note: original (HR+Blur) images obtained 63.3% mAP.

Noisy Images In a similar vein, we evaluate the SR methods on images affected by Gaussian noise prior to downscaling. Again, $\mathcal{L}_{rec}$ penalizes error w.r.t. the clean HR image.
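The noise degradation is simply additive zero-mean Gaussian noise applied to the HR image before downscaling; the σ below is an illustrative assumption, and clamping keeps pixel values in the valid [0, 1] range:

```python
import torch

def add_gaussian_noise(img, sigma=0.05):
    """Additive zero-mean Gaussian noise on an image tensor in [0, 1].
    Sigma is illustrative, not the paper's value."""
    noisy = img + sigma * torch.randn_like(img)
    return noisy.clamp(0.0, 1.0)
```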

The mAP on noisy HR images is 57.3, an almost 20 point drop compared to the clean HR images. The results are shown in Table 4. As with blur, our proposed method significantly outperforms all other approaches for both scaling factors.

Method    iter         ×4 mAP   ×8 mAP
LR        -            39.0     14.5
Bicubic   -            21.2     2.84
SR-FT     -            41.5     11.6
SR-FT+    100k+200k    42.7     12.6
TDSR      100k+200k    50.1     22.7

Table 4: Analysis on noisy images. Note: original (HR+Noise) images obtained 57.3% mAP.

4.4 Qualitative Analysis

Figures 3, 4, and 5 show examples of our results compared with those of other methods. The results for SRGAN [58] and SR-FT+ sometimes confuse the detector, which recognizes objects as different classes, again indicating that high PSNR does not necessarily correlate with detection accuracy. Meanwhile, the distinctive patterns produced by our proposed optimization help the detector recognize the objects better. Note that TDSR does produce, in many images, artifacts somewhat reminiscent of those in DeepDream [68], but these are mild, and are offset by a drastically increased detection accuracy.








(a) HR (b) LR (c) Bicubic (d) SRGAN [58] (e) SR-FT+ (f) TDSR
Figure 3: Sample results. Zoom in to see detection labels and scores.







(a) HR+Blur (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 4: Sample results on blurred images. Zoom in to see detection labels and scores.







(a) HR+Noise (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 5: Sample results on noisy images. Zoom in to see detection labels and scores.

5 Conclusions

We have proposed a novel objective for training super-resolution: a compound loss that caters to the downstream semantic task, and not just to the pixel-wise image reconstruction task as traditionally done. Our results, which consistently exceed alternative SR methods in all conditions, indicate that modern end-to-end training enables joint optimization of what has traditionally been separated into low-level vision (super-resolution) and high-level vision (object detection). These results also suggest some avenues for future work. The first is to investigate task-driven SR methods for additional visual tasks, such as semantic segmentation, image captioning, etc. A complementary direction is to extend the task-driven formulation to other image reconstruction and enhancement tools. For instance, we have demonstrated some success in “deblurring by SR”, and one can expect further improvement when using a properly designed deblurring network combined with task-driven objectives. Finally, the community may be well served by a continuing quest for better image quality metrics, to replace or augment simplistic reconstruction losses such as PSNR; in this context we believe adversarial loss functions to be promising.

APPENDIX: Supplementary Material

Appendix A Networks Architecture

Our method relies on two sequential building blocks: super-resolution (DBPN [55]) and a task network (SSD [66]). All network configurations remain the same as proposed by the original authors. As shown in Fig. 6, the SR network transforms a low-resolution image $x$ to a high-resolution image $S(x;\theta)$. Then, the task network takes the image produced by the SR network and outputs the prediction $y$.

Figure 6: Network Architecture

Appendix B Graphs on mAP and PSNR

Figures 7 and 8 show graphs where the vertical and horizontal axes denote mAP/PSNR and iterations, respectively, for the balanced setting. They show that the balanced setting successfully increases accuracy (mAP) while maintaining good image quality (PSNR).

Figure 7: Graph of mAP and PSNR (×4)
Figure 8: Graph of mAP and PSNR (×8)

Appendix C Visual Results

We provide more detection results in Figs. 9, 10, 11, 12, 13, and 14.








(a) HR (b) LR (c) Bicubic (d) SRGAN [58] (e) SR-FT+ (f) TDSR
Figure 9: Sample results. Zoom in to see detection labels and scores.







(a) HR (b) LR (c) Bicubic (d) SRGAN [58] (e) SR-FT+ (f) TDSR
Figure 10: Sample results. Zoom in to see detection labels and scores.







(a) HR+Blur (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 11: Sample results on blurred images. Zoom in to see detection labels and scores.







(a) HR (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 12: Sample results on blurred images. Zoom in to see detection labels and scores.







(a) HR+Noise (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 13: Sample results on noisy images. Zoom in to see detection labels and scores.







(a) HR (b) LR (c) Bicubic (d) SR-FT+ (e) TDSR
Figure 14: Sample results on noisy images. Zoom in to see detection labels and scores.

References

  • [1] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13(4) (2004) 600–612
  • [2] Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, Springer (2016) 694–711
  • [3] Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: Advances in Neural Information Processing Systems. (2016) 658–666
  • [4] Sajjadi, M.S., Schölkopf, B., Hirsch, M.: Enhancenet: Single image super-resolution through automated texture synthesis. In: Computer Vision (ICCV), 2017 IEEE International Conference on, IEEE (2017) 4501–4510
  • [5] Hanhart, P., Korshunov, P., Ebrahimi, T.: Benchmarking of quality metrics on ultra-high definition video sequences. In: Digital Signal Processing (DSP), 2013 18th International Conference on, IEEE (2013) 1–8
  • [6] Kundu, D., Evans, B.L.: Full-reference visual quality assessment for synthetic images: A subjective study. In: Image Processing (ICIP), 2015 IEEE International Conference on, IEEE (2015) 2374–2378
  • [7] Vasiljevic, I., Chakrabarti, A., Shakhnarovich, G.: Examining the impact of blur on recognition by convolutional networks. arXiv preprint arXiv:1611.05760 (2016)
  • [8] Dodge, S., Karam, L.: Understanding how image quality affects deep neural networks. In: Quality of Multimedia Experience (QoMEX), 2016 Eighth International Conference on. (2016)
  • [9] Nishiyama, M., Hadid, A., Takeshima, H., Shotton, J., Kozakaya, T., Yamaguchi, O.: Facial deblur inference using subspace analysis for recognition of blurred faces. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(4) (2011) 838–845
  • [10] Chen, X., He, X., Yang, J., Wu, Q.: An effective document image deblurring algorithm. In: Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, IEEE (2011) 369–376
  • [11] Hradiš, M., Kotera, J., Zemcík, P., Šroubek, F.: Convolutional neural networks for direct text deblurring. In: Proceedings of BMVC. Volume 10. (2015)
  • [12] Xiao, L., Wang, J., Heidrich, W., Hirsch, M.: Learning high-order filters for efficient blind deconvolution of document photographs. In: European Conference on Computer Vision, Springer (2016) 734–749
  • [13] Milani, S., Bernardini, R., Rinaldo, R.: Adaptive denoising filtering for object detection applications. In: Image Processing (ICIP), 2012 19th IEEE International Conference on, IEEE (2012) 1013–1016
  • [14] Dai, D., Wang, Y., Chen, Y., Van Gool, L.: Is image super-resolution helpful for other vision tasks? In: Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, IEEE (2016) 1–9
  • [15] Hennings-Yeomans, P.H., Baker, S., Kumar, B.V.: Simultaneous super-resolution and feature extraction for recognition of low-resolution faces. In: Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, IEEE (2008) 1–8
  • [16] Hennings-Yeomans, P.H., Kumar, B.V., Baker, S.: Robust low-resolution face identification and verification using high-resolution features. In: Image Processing (ICIP), 2009 16th IEEE International Conference on, IEEE (2009) 33–36
  • [17] Shekhar, S., Patel, V.M., Chellappa, R.: Synthesis-based recognition of low resolution faces. In: Biometrics (IJCB), 2011 International Joint Conference on, IEEE (2011) 1–6
  • [18] Bilgazyev, E., Efraty, B.A., Shah, S.K., Kakadiaris, I.A.: Sparse representation-based super resolution for face recognition at a distance. In: BMVC. (2011) 1–11
  • [19] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
  • [20] Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 427–436
  • [21] Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016)
  • [22] Luo, H.: A training-based no-reference image quality assessment algorithm. In: Image Processing, 2004. ICIP’04. 2004 International Conference on. Volume 5., IEEE (2004) 2973–2976
  • [23] Tang, H., Joshi, N., Kapoor, A.: Blind image quality assessment using semi-supervised rectifier networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2014) 2877–2884
  • [24] Kang, L., Ye, P., Li, Y., Doermann, D.: Convolutional neural networks for no-reference image quality assessment. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2014) 1733–1740
  • [25] Ma, K., Liu, W., Zhang, K., Duanmu, Z., Wang, Z., Zuo, W.: End-to-end blind image quality assessment using deep neural networks. IEEE Transactions on Image Processing 27(3) (2018) 1202–1213
  • [26] Reibman, A.R., Bell, R.M., Gray, S.: Quality assessment for super-resolution image enhancement. In: Image Processing, 2006 IEEE International Conference on, IEEE (2006) 2017–2020
  • [27] Yeganeh, H., Rostami, M., Wang, Z.: Objective quality assessment for image super-resolution: A natural scene statistics approach. In: Image Processing (ICIP), 2012 19th IEEE International Conference on, IEEE (2012) 1481–1484
  • [28] Fang, Y., Liu, J., Zhang, Y., Lin, W., Guo, Z.: Quality assessment for image super-resolution based on energy change and texture variation. In: Image Processing (ICIP), 2016 IEEE International Conference on, IEEE (2016) 2057–2061
  • [29] Ma, C., Yang, C.Y., Yang, X., Yang, M.H.: Learning a no-reference quality metric for single-image super-resolution. Computer Vision and Image Understanding 158 (2017) 1–16
  • [30] Galbally, J., Marcel, S., Fiérrez, J.: Image quality assessment for fake biometric detection: Application to iris, fingerprint, and face recognition. IEEE Trans. Image Processing 23(2) (2014) 710–724
  • [31] Pulecio, C.G.R., Benítez-Restrepo, H.D., Bovik, A.C.: Image quality assessment to enhance infrared face recognition. In: Image Processing (ICIP), 2017 IEEE International Conference on, IEEE (2017) 805–809
  • [32] Yuan, T., Zheng, X., Hu, X., Zhou, W., Wang, W.: A method for the evaluation of image quality according to the recognition effectiveness of objects in the optical remote sensing image using machine learning algorithm. PLoS One 9(1) (2014)
  • [33] van Ouwerkerk, J.D.: Image super-resolution survey. Image Vision Comput. 24(10) (2006) 1039–1052
  • [34] Nasrollahi, K., Moeslund, T.B.: Super-resolution: a comprehensive survey. Mach. Vis. Appl. 25(6) (2014) 1423–1468
  • [35] Yang, C.Y., Ma, C., Yang, M.H.: Single-image super-resolution: A benchmark. In: European Conference on Computer Vision, Springer (2014) 372–386
  • [36] Yang, C.Y., Huang, J.B., Yang, M.H.: Exploiting self-similarities for single frame super-resolution. In: Asian conference on computer vision, Springer (2010) 497–510
  • [37] Michaeli, T., Irani, M.: Nonparametric blind super-resolution. In: Computer Vision (ICCV), 2013 IEEE International Conference on, IEEE (2013) 945–952
  • [38] Huang, J.B., Singh, A., Ahuja, N.: Single image super-resolution from transformed self-exemplars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 5197–5206
  • [39] Timofte, R., Rothe, R., Van Gool, L.: Seven ways to improve example-based single image super resolution. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, IEEE (2016) 1865–1873
  • [40] Yang, J., Wang, Z., Lin, Z., Cohen, S., Huang, T.: Coupled dictionary training for image super-resolution. IEEE transactions on image processing 21(8) (2012) 3467–3478
  • [41] Bansal, A., Sheikh, Y., Ramanan, D.: PixelNN: Example-based image synthesis. In: ICLR. (2018)
  • [42] Kim, K.I., Kwon, Y.: Single-image super-resolution using sparse regression and natural image prior. IEEE transactions on pattern analysis and machine intelligence 32(6) (2010) 1127–1133
  • [43] Pérez-Pellitero, E., Salvador, J., Ruiz-Hidalgo, J., Rosenhahn, B.: PSyCo: Manifold span reduction for super resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 1837–1845
  • [44] Yue, H., Sun, X., Yang, J., Wu, F.: Landmark image super-resolution by retrieving web images. IEEE Trans. Image Processing 22(12) (2013) 4865–4878
  • [45] Yang, J., Lin, Z., Cohen, S.: Fast image super-resolution based on in-place example regression. In: Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, IEEE (2013) 1059–1066
  • [46] Wang, Z., Yang, Y., Wang, Z., Chang, S., Yang, J., Huang, T.S.: Learning super-resolution jointly from external and internal examples. IEEE Trans. Image Processing 24(11) (2015) 4359–4371
  • [47] Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38(2) (2016) 295–307
  • [48] Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (June 2016) 1646–1654
  • [49] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 1–9
  • [50] Kim, J., Kwon Lee, J., Mu Lee, K.: Deeply-recursive convolutional network for image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 1637–1645
  • [51] Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. In: European Conference on Computer Vision, Springer (2016) 391–407
  • [52] Shi, W., Caballero, J., Huszár, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., Wang, Z.: Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 1874–1883
  • [53] Lim, B., Son, S., Kim, H., Nah, S., Lee, K.M.: Enhanced deep residual networks for single image super-resolution. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. (July 2017)
  • [54] Lai, W.S., Huang, J.B., Ahuja, N., Yang, M.H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: IEEE Conference on Computer Vision and Pattern Recognition. (2017)
  • [55] Haris, M., Shakhnarovich, G., Ukita, N.: Deep back-projection networks for super-resolution. arXiv preprint arXiv:1803.02735 (2018)
  • [56] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. (2014) 2672–2680
  • [57] Yu, X., Porikli, F.: Ultra-resolving face images by discriminative generative networks. In: European Conference on Computer Vision, Springer (2016) 318–333
  • [58] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (Jul 2017)
  • [59] Uijlings, J.R.R., van de Sande, K.E.A., Gevers, T., Smeulders, A.W.M.: Selective search for object recognition. International Journal of Computer Vision 104(2) (2013) 154–171
  • [60] Zitnick, C.L., Dollár, P.: Edge boxes: Locating object proposals from edges. In: European Conference on Computer Vision, Springer (2014) 391–405
  • [61] Girshick, R.B.: Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. (2015) 1440–1448
  • [62] Hosang, J.H., Benenson, R., Dollár, P., Schiele, B.: What makes for effective detection proposals? IEEE Trans. Pattern Anal. Mach. Intell. 38(4) (2016) 814–830
  • [63] Girshick, R.B., Donahue, J., Darrell, T., Malik, J.: Region-based convolutional networks for accurate object detection and segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 38(1) (2016) 142–158
  • [64] Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(6) (2017) 1137–1149
  • [65] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Computer Vision (ICCV), 2017 IEEE International Conference on, IEEE (2017) 2980–2988
  • [66] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: SSD: Single shot multibox detector. In: European Conference on Computer Vision, Springer (2016) 21–37
  • [67] Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. arXiv preprint arXiv:1612.08242 (2016)
  • [68] Mordvintsev, A., Tyka, M., Olah, C.: Inceptionism: Going deeper into neural networks. https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html (June 2015)
  • [69] Agustsson, E., Timofte, R.: NTIRE 2017 challenge on single image super-resolution: Dataset and study. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops. (July 2017)