Spatial-Temporal Residue Network Based In-Loop Filter for Video Coding
Deep learning has demonstrated tremendous breakthroughs in image/video processing. In this paper, a spatial-temporal residue network (STResNet) based in-loop filter is proposed to suppress visual artifacts such as blocking and ringing in video coding. Specifically, spatial and temporal information is jointly exploited by taking both the current block and the co-located block in the reference frame into consideration during in-loop filtering. The architecture of STResNet consists of only four convolution layers, which keeps both memory cost and coding complexity low. Moreover, to fully adapt to the input content and improve the performance of the proposed in-loop filter, a coding tree unit (CTU) level control flag is applied in the sense of rate-distortion optimization.
Extensive experimental results show that our scheme provides up to 5.1% bit-rate reduction compared to the state-of-the-art video coding standard.
Keywords: Spatial-Temporal Network, In-Loop Filter, High Efficiency Video Coding
1 Introduction
Video compression is usually characterized by the coding bits and the perceived distortion of the reconstructed video. To alleviate the distortions caused by compression, many in-loop and out-of-loop filters have been studied for artifact reduction in compressed videos. Norkin et al. proposed a deblocking algorithm to suppress discontinuities at adjacent block boundaries. Fu et al. developed a statistical method that trains an offset at the encoder side and signals it to the decoder to compensate for the ringing effect induced by transform and quantization. Zhang et al. incorporated a low-rank algorithm into HEVC to develop non-local adaptive loop filters.
Recently, convolutional neural networks (CNN) have achieved great success in image processing and video compression. An artifacts reduction convolutional neural network (AR-CNN) was proposed for JPEG artifact reduction by Dong et al., which boosted the restoration quality of JPEG-coded images. Wang et al. also proposed another network for the same task. Park et al. directly integrated the super-resolution convolutional neural network (SR-CNN) into HEVC to replace the deblocking filter and sample adaptive offset. However, this approach may lack generalization ability because the same sequences were used for training and testing. Dai et al. investigated the variable-size transforms of HEVC and proposed a variable-filter-size residue-learning convolutional neural network (VR-CNN) for post-processing in HEVC intra coding. A decoder-side post-processing deep convolutional network was investigated by Wang et al. to boost the quality of decoded frames. Li et al. designed an auto-encoder-like CNN framework for image compression that uses an importance map for bit allocation. Ballé et al. proposed an end-to-end optimized framework for image compression together with a novel method for rate estimation.
In this paper, we propose a spatial-temporal residue network (STResNet) based in-loop filter for HEVC inter coding that utilizes both spatial and temporal information. In contrast to conventional CNN-based filters for video coding, temporal information is also taken into account to improve the quality of compressed frames. In particular, the residue learning approach is adopted in our network to accelerate training and boost coding performance. To investigate compatibility with the state-of-the-art video compression algorithm, we integrate the proposed STResNet into HM 16.15 as a novel in-loop filtering method applied after sample adaptive offset (SAO). Moreover, coding tree unit (CTU) level control flags are designed to guarantee the performance of STResNet through rate-distortion optimization. Experimental results indicate that STResNet provides higher visual quality and reduces bit-rate by 1.3% on average for HEVC in the random access configuration, with lower complexity and memory cost than previously proposed deep neural networks.
The organization of this paper is described as follows. Section 2 provides the details of the proposed STResNet in-loop filter. Section 3 discusses the training process. Experimental results are shown in Section 4. Finally, Section 5 concludes this paper.
2 Spatial Temporal Residue Network
In this section, we describe the architecture of the proposed spatial-temporal residue network (STResNet) in-loop filter in detail. First, we discuss the network structure and parameters of STResNet. Then, the details of integrating STResNet into HEVC are introduced.
2.1 Network Structure
The structure of the proposed STResNet is shown in Fig. 1. STResNet is a fully convolutional network with four layers; the feature map numbers of each layer are given in Fig. 1 and listed in Table 1. There are two inputs to STResNet: one is the current block, and the other is its co-located block in the previously coded frame, obtained from the closest reference frame of the current one. The first layer contains two parallel convolution branches that act as spatial CNNs, extracting spatial features from the two inputs respectively. In the temporal CNN layer, the outputs of the first layer are stacked through fusion of feature maps and then convolved. Two more convolution layers follow the temporal CNN layer. Rectified linear units (ReLU) are adopted as the nonlinear mapping after every convolution layer. To accelerate training, the four convolution layers are designed as residue learning, and the final output is the element-wise sum of the current block and the output of the fourth convolution layer. The detailed parameters of STResNet are listed in Table 1.
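The two-branch structure described above can be sketched in NumPy as follows. This is a minimal forward-pass illustration, not the paper's implementation: 3×3 filters are assumed throughout (the exact filter sizes are not reproduced in this copy), weights are randomly initialized, and the helper names (`conv2d`, `make_params`, `stresnet_forward`) are our own. The feature-map widths (32, 32, 16, 8, 1) follow Table 1.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same' 2-D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def make_params(rng, k=3):
    """Randomly initialized weights; k=3 is an assumed filter size."""
    def layer(c_out, c_in):
        return (rng.normal(0.0, 0.01, (c_out, c_in, k, k)), np.zeros(c_out))
    return {
        "spatial_cur": layer(32, 1),   # spatial branch for the current block
        "spatial_ref": layer(32, 1),   # spatial branch for the co-located block
        "temporal": layer(16, 64),     # fusion of the two 32-map stacks
        "conv3": layer(8, 16),
        "conv4": layer(1, 8),          # single residue map
    }

def stresnet_forward(cur, ref, params):
    """Sketch of the STResNet forward pass with residue learning."""
    s1 = relu(conv2d(cur[None], *params["spatial_cur"]))
    s2 = relu(conv2d(ref[None], *params["spatial_ref"]))
    fused = np.concatenate([s1, s2], axis=0)       # stack feature maps
    t = relu(conv2d(fused, *params["temporal"]))   # temporal CNN layer
    h = relu(conv2d(t, *params["conv3"]))
    r = conv2d(h, *params["conv4"])                # predicted residue
    return cur + r[0]                              # element-wise sum with input
```

Because only the residue is learned, the network output stays close to the input block at initialization, which is what makes the residue formulation easy to train.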
Table 1. Parameters of STResNet.

| Feature map number (per convolution) | 32 | 32 | 16 | 8 | 1 |
| Total parameter number | 11,464 |
2.2 STResNet in HEVC
We integrate the proposed STResNet into the latest version of the HEVC reference software (HM 16.15) as an additional in-loop filtering step after sample adaptive offset (SAO). To fully explore the performance of STResNet, we define the coding tree unit (CTU) as the filtering unit, which means STResNet takes each CTU and the co-located CTU in its closest reference frame as input. To control the on/off state of the proposed in-loop filter, a rate-distortion (R-D) optimization strategy is employed by comparing the two R-D costs

J_off = D_off + λ · R_off,   J_on = D_on + λ · R_on,   (1)

where D_off and D_on denote the distortion without and with the proposed in-loop filter, respectively, and R_off and R_on denote the coding bits of the two scenarios. If J_on < J_off, the flag is enabled, and vice versa. Since the in-loop filter does not introduce additional coding information, the rate term R_off equals R_on, such that computing only the distortion terms (in terms of mean squared error, MSE) is sufficient.
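Because the rate terms cancel, the CTU-level flag decision reduces to a plain MSE comparison. A sketch of that test (the function name `ctu_filter_flag` is ours):

```python
import numpy as np

def ctu_filter_flag(orig, rec_no_filter, rec_filtered):
    """Return True if the STResNet-filtered CTU should be kept.
    With equal rate terms, the R-D comparison J_on < J_off reduces
    to comparing the two MSE distortions against the original CTU."""
    d_off = np.mean((orig - rec_no_filter) ** 2)
    d_on = np.mean((orig - rec_filtered) ** 2)
    return bool(d_on < d_off)
```

At the decoder, only the signaled flag is needed to decide whether to run the network on each CTU.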
3 Network Training
This section discusses the training procedures of the proposed STResNet from three aspects. First, the generation of training data is provided. Secondly, our training strategy is described. Finally, our training hyper-parameters are listed.
3.1 Generation of Training Data
Each training sample is denoted as (X, X_r, Y), where X and X_r represent the input current block and its co-located block in the closest reference frame, as illustrated in Fig. 1, and Y denotes the original signal of X. To generate the training data, we first compress three high-definition (HD) AVS2 test sequences (taishan, beach, and pkugirls) using HM 16.15 with the random access (RA) configuration. Since our STResNet is applied after SAO, both the deblocking filter and SAO are turned on during compression of the three sequences. The first 100 frames of each sequence are used to generate the training data. Moreover, the three sequences are compressed with four different quantization parameters (QP: 22, 27, 32, 37) to generate training data for the different models. Note that we set all QP offsets to zero in the RA configuration, so all B frames are coded with the same QP as the I frame. Second, we crop each training sample as a fixed-size square block, so that X, X_r, and Y share the same spatial size. Third, for the t-th frame, X is extracted from the reconstructed frame, X_r is taken from the same spatial position in the t-th frame's closest reference frame, and Y is the corresponding original pixel block. To obtain more training data, we extract training samples with a 10-pixel overlap in the luminance channel of each frame. In this way, we obtain 313,088 training samples for each QP. Finally, all cropped training samples are randomly shuffled.
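The overlapping-crop step can be illustrated as follows. This is a sketch under stated assumptions: `patch=32` is an illustrative block size (the paper's exact value is not restated in this copy), and we interpret the "10 pixels overlap" as a stride of `patch - overlap` between adjacent crops.

```python
import numpy as np

def extract_patches(frame, patch=32, overlap=10):
    """Crop overlapping square training patches from one luma plane.
    Adjacent crops share `overlap` pixels, i.e. stride = patch - overlap."""
    stride = patch - overlap
    H, W = frame.shape
    patches = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            patches.append(frame[i:i + patch, j:j + patch])
    return np.stack(patches)
```

The same crop indices would be applied to the reconstructed frame, its closest reference frame, and the original frame to produce aligned (X, X_r, Y) triplets.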
3.2 Training Strategy
To distinguish different quality levels, the networks are individually trained for different coding configurations. For training samples (X, X_r, Y), the training objective is to minimize the following loss function:

L(Θ) = (1/N) Σ_{i=1}^{N} || F(X_i, X_{r,i}; Θ) − Y_i ||²,   (2)

where Θ represents the set of trainable parameters and F(·) denotes the STResNet.
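The training objective of Eqn. (2) is a batch-averaged squared error between the network output and the original block; a minimal NumPy version (the function name is ours):

```python
import numpy as np

def stresnet_loss(pred_batch, target_batch):
    """MSE objective of Eqn. (2): mean over the batch of the
    squared L2 distance between F(X_i, X_r_i; Theta) and Y_i."""
    n = pred_batch.shape[0]
    diff = pred_batch.reshape(n, -1) - target_batch.reshape(n, -1)
    return float(np.mean(np.sum(diff ** 2, axis=1)))
```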
We train our STResNet using the deep learning framework Caffe on an NVIDIA TITAN X GPU. Since four QPs (22, 27, 32, 37) are considered, one model is trained for each QP value.
For the training configuration, zero padding is used in each convolution layer to ensure the input and output images have the same size. Gaussian random initialization is used for all convolution weights, and all bias terms are initialized to zero. Training is performed in mini-batches of fixed size. We use the first-order gradient-based Adam optimizer to minimize the objective function in Eqn. (2). The momentum of the optimizer is set to 0.9, while the value of momentum2 is adapted to each QP. We also use different learning rates when training the models for different QPs.
3.3 Hyper-Parameters

This subsection summarizes the hyper-parameters used during training. Table 2 lists the base learning rate for each QP; for QP 32 and QP 37, all hyper-parameters are identical. As QP decreases, we increase the base learning rate and adapt the other hyper-parameters, e.g., momentum2.

Table 2. Base learning rate for each QP.

| QP | 22 | 27 | 32 | 37 |
| Base learning rate | 1e-6 | 1e-7 | 1e-8 | 1e-8 |
4 Experimental Results
Extensive experimental results are provided in this section. The proposed STResNet is integrated into the HEVC reference software as a new in-loop filtering step after SAO. Only the luminance channel is processed by the trained model. Moreover, CTU-level control flags are used to maximize the performance of STResNet based on rate-distortion optimization; in the RDO process, only the distortion of the luma channel is considered. Four typical quantization parameters are tested (22, 27, 32, 37), and for each QP the corresponding network is used. All experiments are conducted under the random access (RA) configuration with zero QP offset for all B frames, and the first 100 frames of each test sequence are compressed. Perceptual visual quality comparisons and a brief complexity analysis are also discussed in the following subsections.
4.1 Objective Quality
To evaluate coding efficiency, we compute the BD-rate of the proposed STResNet on the HEVC common test sequences (Class B to Class E). The experimental results with CTU-level control are shown in Table 3. The anchor is HM 16.15 with deblocking and SAO turned on. STResNet achieves 1.3% bit-rate savings on average in the random access configuration. We report the coding performance of the Y component only, since STResNet is applied only to the luminance channel.
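The BD-rate metric cited here follows Bjontegaard's method: fit log-rate as a cubic polynomial of PSNR for anchor and test codec, then compare the integrals over the overlapping quality range. The sketch below is a common reimplementation of that procedure, not the exact VCEG spreadsheet:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit-rate in percent (negative = bit-rate saving).
    Cubic fit of log-rate over PSNR, averaged over the shared PSNR range."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    ia, it = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
    avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0
```

A test codec that needs uniformly 10% fewer bits at the same PSNR yields a BD-rate of about −10%.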
4.2 Subjective Quality
In this subsection, we compare the visual quality of the reconstructed images, as shown in Fig. 2. A crop of each image is enlarged in its bottom-right corner. The image processed by STResNet clearly suppresses artifacts and distortions and offers better visual quality.
4.3 Complexity Analysis
To evaluate the time complexity of our algorithm, we test the proposed method and record the encoding and decoding (enc/dec) time. The test environment is an Intel i7-4770K CPU with the CPU version of Caffe; the operating system is Windows 10 Home 64-bit, and the PC has 24 GB of memory. We measure the time complexity with the time-increase ratio

ΔT = T′ / T × 100%,   (3)

where T is the original enc/dec time and T′ is the enc/dec time of the proposed method. The proposed STResNet is integrated into the HEVC reference software, and the forward pass of the network relies on libcaffe. All timing tests are conducted without GPU acceleration, i.e., the CPU version of libcaffe is integrated into HM 16.15. The ratio between the encoding time of the proposed STResNet in-loop filter and the anchor encoding time is 135.7% on average, while the corresponding decoding time ratio is 703.1% on average over all test sequences. The detailed enc/dec times for each class of the HEVC common test sequences are listed in Table 4. It is worth noting that the decoding time increase of Class D is greater than that of the other classes, because the proportion of CTUs that select STResNet is higher in Class D than in the other classes.
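The time-increase ratio used in this subsection amounts to a simple percentage of the anchor time (the function name is ours):

```python
def time_ratio(t_anchor, t_proposed):
    """Time-increase ratio: proposed enc/dec time as a percentage
    of the anchor enc/dec time (100% means no change)."""
    return t_proposed / t_anchor * 100.0
```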
5 Conclusion
In this paper, a novel in-loop filter based on spatial and temporal residue learning (STResNet) is proposed to compensate signal distortions and improve the visual quality of the HEVC standard. Specifically, spatial-temporal coherence is jointly exploited to infer the pristine visual signal, such that the current frame can be reconstructed by feeding both the current frame and the preceding reconstructed frame into the spatial-temporal residue network. In particular, the networks are individually trained for different coding configurations to distinguish different quality levels and coding strategies. To further explore the performance, a rate-distortion optimization strategy is employed in the proposed in-loop filter for CTU-level control. With the proposed STResNet in-loop filter, texture and high-frequency details can be efficiently restored, leading to better visual quality. Experimental results show that the proposed scheme achieves on average 1.3% bit-rate reduction over all test sequences.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (61632001, 61421062), the National Basic Research Program of China (973 Program, 2015CB351800), and the Top-Notch Young Talents Program of China, which are gratefully acknowledged.
References
- Wiegand, T., et al. Overview of the H.264/AVC video coding standard. IEEE Transactions on Circuits and Systems for Video Technology, 13(7), 560-576.
- Sullivan, G. J. et al. Overview of the high efficiency video coding (HEVC) standard. IEEE Transactions on circuits and systems for video technology, 22(12), 1649-1668.
- Norkin, A., et al. HEVC deblocking filter. IEEE Transactions on Circuits and Systems for Video Technology, 22(12), 1746-1754.
- Fu, C. M., et al. Sample adaptive offset in the HEVC standard. IEEE Transactions on Circuits and Systems for Video technology, 22(12), 1755-1764.
- Zhang, X., et al. High Efficiency Image Coding via Near-Optimal Filtering. IEEE Signal Processing Letters.
- Zhang, X., et al. Adaptive loop filter with temporal prediction. In Picture Coding Symposium (PCS), 2012 (pp. 437-440). IEEE.
- Zhang, X., et al. Compression artifact reduction by overlapped-block transform coefficient estimation with block similarity. IEEE transactions on image processing, 22(12), 4613-4626.
- Krutz, A., et al. Adaptive global motion temporal filtering for high efficiency video coding. IEEE Transactions on Circuits and Systems for Video Technology, 22(12), 1802-1812.
- Dong, C., et al. Compression artifacts reduction by a deep convolutional network. In Proceedings of the IEEE International Conference on Computer Vision (pp. 576-584).
- Wang, Z., et al. D3: Deep dual-domain based fast restoration of JPEG-compressed images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2764-2772).
- Dai, Y., et al. A convolutional neural network approach for post-processing in hevc intra coding. In International Conference on Multimedia Modeling (pp. 28-39). Springer, Cham.
- Park, et al. CNN-based in-loop filtering for coding efficiency improvement. In Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), 2016 IEEE 12th (pp. 1-5). IEEE.
- Dong, C., et al. Learning a deep convolutional network for image super-resolution. In European Conference on Computer Vision (pp. 184-199). Springer, Cham.
- He, K., et al. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
- Bossen, et al. HEVC reference software manual. JCTVC-D404, Daegu, Korea.
- Nair, V., et al. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10) (pp. 807-814).
- Zheng, X. AVS2-P2 common test condition. In AVS Document (AVS-N2020). Audio Video Coding Standard (AVS) meeting, Shenzhen, China.
- Jia, Y., et al. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2014.
- Kingma, D., et al. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- Bossen, F. Common test conditions and software reference configurations. JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 5th meeting, Jan. 2011.
- Wang, T., et al. A Novel Deep Learning-Based Method of Improving Coding Efficiency from the Decoder-End for HEVC. In Data Compression Conference (DCC), 2017 (pp. 410-419). IEEE.
- Bjontegaard, G. Calculation of average PSNR differences between RD-curves. In ITU-T Q. 6/SG16 VCEG, 15th Meeting, Austin, Texas, USA, April, 2001.
- Zhang, X., et al. Low-rank based nonlocal adaptive loop filter for high efficiency video compression. IEEE Transactions on Circuits and Systems for Video Technology.
- Li, M., Zuo, W., Gu, S., Zhao, D., & Zhang, D. Learning Convolutional Networks for Content-weighted Image Compression. arXiv preprint arXiv:1703.10553.
- Ballé, J., et al. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704.
- Ma, S., et al. Nonlocal in-loop filter: The way toward next-generation video coding?. IEEE MultiMedia, 23(2), 16-26.