Joint Reconstruction of Multi-view Compressed Images

Part of this work has been accepted to the European Signal Processing Conference (EUSIPCO), Bucharest, Romania, Aug. 2012 [1].


Vijayaraghavan Thirumalai and Pascal Frossard. The authors are with the Signal Processing Laboratory (LTS4), Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne 1015, Switzerland. E-mail: vijayaraghavan.thirumalai@epfl.ch; pascal.frossard@epfl.ch.
Abstract

The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where the distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among the different cameras. A central decoder first estimates the underlying correlation model from the independently compressed images; this model is then used for the joint signal recovery. The joint reconstruction is cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images that comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be consistent with their compressed versions. We show by experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our proposed algorithm compares advantageously to state-of-the-art distributed coding schemes based on disparity learning and on the DISCOVER codec.

Keywords: Distributed compression, joint reconstruction, optimization, multi-view images, depth estimation.

I Introduction

In recent years, vision sensor networks have been gaining ever-increasing popularity, driven by the availability of cheap semiconductor components. These systems usually acquire multiple correlated images of the same 3D scene from different viewpoints. Compression techniques shall exploit this correlation in order to efficiently represent the 3D scene information. The distributed coding paradigm becomes particularly attractive in such settings; it permits to efficiently exploit the correlation between images with low encoding complexity and minimal inter-sensor communication, which directly translates into power savings in sensor networks. In the distributed compression framework, a central decoder jointly reconstructs the visual information from the compressed images by exploiting the correlation between the samples. This permits to achieve a good rate-distortion tradeoff in the representation of correlated multi-view images, even if the encoding is performed independently.

The first information-theoretic results on distributed source coding (DSC) appeared in the late seventies for the noiseless [2] and noisy cases [3]. However, most results in distributed coding have remained non-constructive for about three decades. Practical DSC schemes have then been designed by establishing a relation between the Slepian-Wolf theorem and channel coding [4]. Based on the results in [4], several distributed coding schemes for video and multi-view images have been proposed in the literature [5, 6]. In such schemes, a feedback channel is generally used for accurately controlling the Slepian-Wolf coding rate. Unfortunately, this results in increased latency and bandwidth usage due to the multiple requests from the decoder. These schemes can thus hardly be used in real-time applications. One solution to avoid the feedback channel is to use a separate rate control module at the encoder to precisely control the Slepian-Wolf coding rate [7]; the overall computational complexity at the encoder, however, becomes non-negligible due to this rate control module. In this paper, we build a distributed coding scheme where the correlated compressed images are directly transmitted to the joint decoder without implementing any Slepian-Wolf coding; this avoids the need for complex estimation of the statistical correlation and of the coding rate at the encoder.

Fig. 1: Schematic representation of our proposed framework. The images $I_1$ and $I_2$ are correlated through the displacement of scene objects due to the different positions of the cameras $C_1$ and $C_2$.

We consider a scenario where a set of cameras are distributed in a 3D scene. In most practical deployments of such systems, the images captured by the different cameras are likely to be correlated. The captured images are encoded independently using standard encoding solutions and are transmitted to the central decoder. Here, we assume that the images are compressed using a balanced rate allocation, which permits to share the transmission and computational costs equally among the sensors. It thus avoids the need for a hierarchical relationship among the sensors. The central decoder builds a correlation model from the compressed images, which is used to jointly decode the multi-view images. The joint reconstruction is formulated as a convex optimization problem. It reconstructs the multi-view images that are consistent with both the underlying correlation information and the compressed image information. While reconstructing the images, we also effectively handle the occlusions that commonly arise in multi-view imaging. We solve the joint reconstruction problem using effective parallel proximal algorithms [8].

We evaluate the performance of our novel joint decoding scheme on several multi-view datasets. Experimental results demonstrate that the proposed distributed coding solution improves the rate-distortion performance of separate coding by taking advantage of the inter-view correlation. We show that the quality of the decoded images is quite balanced for a given bit rate, as expected from a symmetric coding solution. We also observe that, at low bit rates, our scheme performs close to joint encoding solutions based on H.264 for a fixed motion-compensation block size. Finally, we show that our framework outperforms state-of-the-art distributed coding solutions based on disparity learning [9] and on the DISCOVER codec [10] in terms of rate-distortion performance. It certainly provides an interesting alternative to most classical DSC solutions [5, 6, 7], since it does not require any statistical correlation information at the encoder.

Only very few works in the literature address the distributed compression problem without using a channel encoder or a feedback channel. In [11], a distributed coding technique for compressing multi-view images has been proposed, where a joint decoder reconstructs the views from low resolution images using super-resolution techniques. In more detail, each sensor transmits a low resolution compressed version of the original image to the decoder. At the decoder, these low resolution images are registered with respect to a reference image, where the image registration is performed by shape analysis and image warping. The registered low resolution images are then jointly processed to decode a high resolution image using image super-resolution techniques. However, this framework requires communication between the encoders in order to facilitate the registration, e.g., the transmission of feature points. Other works in super-resolution use multiple compressed images that are fused for improved resolution [12]. Such techniques usually target the reconstruction of a single high resolution image from multiple compressed images. Alternatively, techniques have been developed in [13, 14] to decode a single high quality image from several encoded versions of the same source image or videos. This is achieved by solving an optimization problem that enforces the final reconstructed image to be consistent with all the compressed copies. Our main target in this paper is to jointly improve the quality of multiple compressed correlated (multi-view) images, and not to increase the spatial resolution of the compressed images or to extract a single high quality image. More recently, Schenkel et al. [15] have considered a distributed representation of image pairs. In particular, they have proposed an optimization framework to enhance the quality of JPEG compressed images. This work, however, considered an asymmetric scenario that requires a reference image for joint decoding.

The rest of the paper is organized as follows. The joint decoding algorithm along with the optimization framework for joint reconstruction is described in Section II. In Section III, we present the optimization algorithm based on proximal splitting methods. In Section IV, we present the experimental results for the joint reconstruction of pairs of images. Section V describes the extension of our proposed framework to decode multiple images along with the simulation results. Finally, in Section VI we draw some concluding remarks.

II Joint Decoding of Image Pairs

We consider the scenario illustrated in Fig. 1, where a pair of cameras $C_1$ and $C_2$ project the 3D visual information on the 2D images $I_1$ and $I_2$ (with resolution $N_1 \times N_2$), respectively. The images $I_1$ and $I_2$ are compressed independently using standard encoding solutions (e.g., JPEG, H.264 intra) and are transmitted to a central decoder. The joint decoder has access to the compressed versions of the correlated images, and its main objective is to improve the quality of all the compressed views by exploiting the underlying inter-view correlation. We first propose to estimate the correlation between images from the decoded images $\hat{I}_1$ and $\hat{I}_2$, which is effectively modeled by a dense depth image $D$. The joint reconstruction stage then uses the depth information and enhances the quality of the decoded images $\hat{I}_1$ and $\hat{I}_2$. Note that one could solve a joint problem to estimate simultaneously the correlation information and the improved images. However, such a joint optimization problem would be hard to solve with a complex objective function. Therefore, we propose to split the problem in two steps: (i) we estimate the correlation information from the decoded images; and (ii) we carry out joint reconstruction using the estimated correlation information. These two steps are detailed in the rest of this section.

II-A Depth Estimation

The first task is to estimate the correlation between the images, which typically takes the form of a depth image. In general, dense depth information is estimated by matching corresponding pixels between images. Several algorithms have been proposed in the literature to compute dense depth images; for more details, we refer the reader to [16]. In this work, we estimate a dense depth image $D$ from the compressed images in a regularized energy minimization framework, where the energy is composed of a data term $E_d$ and a smoothness term $E_s$. A dense depth image is obtained by minimizing the energy function as

D^{\star} = \arg\min_{D \in \mathcal{D}} \big\{ E_d(D) + \lambda\, E_s(D) \big\} \qquad (1)

where $\lambda$ balances the importance of the data and smoothness terms, and $\mathcal{D}$ represents the set of candidate depth images. The candidate depth values for every pixel position are discrete; they are constructed by uniformly sampling the inverse depth in the range $[1/d_{\max}, 1/d_{\min}]$, where $d_{\min}$ and $d_{\max}$ are the minimal and maximal depth values in the scene, respectively [17].
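As a concrete illustration, the candidate-label construction described above can be sketched in a few lines of NumPy (the function name and the parameter values are ours; the paper does not specify the number of labels):

```python
import numpy as np

def candidate_depths(z_min, z_max, n_labels):
    # Uniformly sample the inverse depth in [1/z_max, 1/z_min] and
    # invert the samples to obtain the discrete candidate depth values.
    inv = np.linspace(1.0 / z_max, 1.0 / z_min, n_labels)
    return 1.0 / inv

depths = candidate_depths(z_min=2.0, z_max=20.0, n_labels=5)
# the candidates are densely spaced near the camera and coarse far away
```

Sampling the inverse depth (rather than the depth itself) makes the induced disparity steps roughly uniform, which is the usual motivation for this construction.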

We now discuss in more detail the components of the energy function of Eq. (1). The data term $E_d$ is used to match the pixels across views by assuming that the 3D scene surfaces are Lambertian, i.e., the intensity of a scene point is consistent irrespective of the viewpoint. It is computed as

E_d(D) = \sum_{m=1}^{N_1} \sum_{n=1}^{N_2} e\big( (m,n),\, D(m,n) \big) \qquad (2)

where $N_1$ and $N_2$ represent the image dimensions and $(m,n)$ represents a pixel position. The most commonly used pixel-based cost functions include squared intensity differences and absolute intensity differences. In this work, we use the squared intensity difference to measure the disagreement of assigning a depth value $d$ to the pixel location $(m,n)$. Mathematically, it is computed as

e\big((m,n), d\big) = \Big( \hat{I}_1(m,n) - \hat{I}_2\big( W(m,n,d) \big) \Big)^2 \qquad (3)

where $W$ is a warping function that maps the pixel $(m,n)$ to its corresponding position in the other view, using the depth value $d$. This warping, in general, is a two-step process [18]. First, the pixel position $(m,n)$ in the image $\hat{I}_1$ is projected to the world coordinate system. This projection step is represented as

\mathbf{P} = R_1^{-1} \big( d\, K_1^{-1} \, [m \;\; n \;\; 1]^T - T_1 \big) \qquad (4)

where $K_1$ is the intrinsic camera matrix of the camera $C_1$ and $(R_1, T_1)$ represent the extrinsic camera parameters with respect to the global coordinate system. Then, the 3D point $\mathbf{P}$ is projected onto the coordinates of the camera $C_2$ with the internal and external camera parameters $K_2$ and $(R_2, T_2)$, respectively. This projection step can be described as

[m' \;\; n' \;\; w']^T = K_2 \big( R_2\, \mathbf{P} + T_2 \big) \qquad (5)

Finally, the pixel location of the warped image is taken as $W(m,n,d) = \big( \langle m'/w' \rangle, \langle n'/w' \rangle \big)$, where $\langle \cdot \rangle$ rounds to the nearest integer.
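The two-step warping of Eqs. (4)-(5) can be sketched as follows; this is a minimal NumPy illustration assuming the pinhole camera model $x = K(R\mathbf{P} + T)$, with a function name of our choosing:

```python
import numpy as np

def warp_pixel(m, n, d, K1, R1, T1, K2, R2, T2):
    """Map pixel (m, n) of view 1, with depth hypothesis d, to its
    corresponding (rounded) pixel position in view 2."""
    # Back-projection (cf. Eq. (4)): pixel -> 3D world point at depth d.
    # R1.T equals the inverse of R1 since R1 is a rotation matrix.
    p_h = np.array([m, n, 1.0])
    P = R1.T @ (d * np.linalg.inv(K1) @ p_h - T1)
    # Re-projection (cf. Eq. (5)): 3D point -> view-2 image coordinates.
    q = K2 @ (R2 @ P + T2)
    return int(round(q[0] / q[2])), int(round(q[1] / q[2]))

# identical cameras: a pixel maps to itself, whatever the depth
K, R, T = np.eye(3), np.eye(3), np.zeros(3)
same = warp_pixel(3, 4, 5.0, K, R, T, K, R, T)
# a translated second camera shifts the projected position
shifted = warp_pixel(3, 4, 5.0, K, R, T, K, R, np.array([5.0, 0.0, 0.0]))
```

The shift induced by the second call decreases as the depth hypothesis grows, which is the parallax behavior the data term of Eq. (3) exploits.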

The smoothness term $E_s$ is used to enforce consistent depth values at neighboring pixel locations $\mathbf{p}$ and $\mathbf{q}$. It is measured as

E_s(D) = \sum_{(\mathbf{p}, \mathbf{q}) \in \mathcal{N}} \min\big( |D(\mathbf{p}) - D(\mathbf{q})|,\; K \big) \qquad (6)

where $\mathcal{N}$ represents the usual four-pixel neighborhood and the constant $K$ sets an upper limit on the smoothness penalty such that depth discontinuities can be preserved [19].

We can finally rewrite the regularized energy objective function for the depth estimation problem as

E(D) = \sum_{m=1}^{N_1} \sum_{n=1}^{N_2} e\big( (m,n),\, D(m,n) \big) + \lambda \sum_{(\mathbf{p}, \mathbf{q}) \in \mathcal{N}} \min\big( |D(\mathbf{p}) - D(\mathbf{q})|,\; K \big) \qquad (7)

This cost function is used in the optimization problem of Eq. (1), which is usually non-convex. Several minimization algorithms exist in the literature to solve Eq. (1), e.g., simulated annealing [20], belief propagation [21], and Graph Cuts [22, 23]. Among these solutions, the optimization techniques based on Graph Cuts compute a low-energy solution in polynomial time and generally give better results than the other techniques [16]. Motivated by this, in our work we solve the minimization problem of Eq. (1) using Graph Cut techniques.
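For clarity, evaluating the energy of Eq. (7) for a given label map can be sketched as below. This is a didactic NumPy version that assumes a precomputed per-pixel data-cost volume; the actual minimization over label maps is left to Graph Cuts:

```python
import numpy as np

def energy(depth, data_cost, lam, K):
    """Evaluate E(D) = E_d(D) + lam * E_s(D) for one depth label map.
    depth:     (H, W) integer label map
    data_cost: (H, W, L) cost e(p, d) for each of the L candidate labels
    """
    H, W = depth.shape
    rows, cols = np.indices((H, W))
    e_data = data_cost[rows, cols, depth].sum()
    # truncated-linear smoothness over the 4-neighborhood (Eq. (6))
    dh = np.minimum(np.abs(np.diff(depth, axis=1)), K).sum()
    dv = np.minimum(np.abs(np.diff(depth, axis=0)), K).sum()
    return e_data + lam * (dh + dv)

# tiny example: zero data cost, one label jump truncated at K = 1
cost = np.zeros((2, 2, 3))
depth = np.array([[0, 2], [0, 0]])
E = energy(depth, cost, lam=1.0, K=1)
```

A Graph Cut solver would search over label maps to minimize exactly this quantity; the example only shows how the two terms are accumulated.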

II-B Image Warping as Linear Transformation

Before describing our joint reconstruction problem, we show how the image warping operation in Eq. (3) can be written as a matrix multiplication of the form $A\, v(\hat{I}_2)$ (for consistency, we use the compressed image $\hat{I}_2$; however, this matrix multiplication holds even if one uses the original image for warping); this linear representation offers a more flexible formulation of our joint reconstruction problem. The reshaping operator $v$ produces a vector from the matrix $\hat{I}_2$ by stacking its rows, where $\hat{I}_2(k,:)$ represents the $k$-th row of the matrix and $T$ denotes the usual transpose operator. For convenience, we also define the operator $v^{-1}$ that takes the vector of length $N_1 N_2$ and gives back the $N_1 \times N_2$ matrix, i.e., this operator performs the inverse operation corresponding to $v$. The matrix $A$ describes the warping by re-arranging the elements of $v(\hat{I}_2)$. Its construction is described in this section.

We have shown earlier that the warping function shifts the pixel position $(m,n)$ in the reference image to the position $W(m,n,d)$ in the target image. Alternatively, this pixel shift between images can be represented using a horizontal component $v_h$ and a vertical component $v_v$ of the motion field, so that the target position is $(m + v_v(m,n),\; n + v_h(m,n))$. Note that this motion field can be easily computed from Eqs. (4) and (5), once the depth information and the camera parameters are known. Now, our goal is to represent the motion compensation operation as a linear transformation given as

v(I_w) = A\, v(\hat{I}_2) \qquad (8)

Here, $I_w$ represents the warped image and $A$ is a matrix of dimensions $N_1 N_2 \times N_1 N_2$ whose entries are determined by the horizontal and vertical components of the motion field, i.e., $v_h$ and $v_v$.

In general, the elements of the matrix $A$ can be found in two ways: (i) forward warping; and (ii) inverse warping. In this work, we propose to construct the matrix $A$ based on forward warping; this permits easier handling of the occluded pixels, as shown later. Given a motion vector, the elements of the matrix $A$ are given by

A(k, l) = \begin{cases} 1 & \text{if } l = (m-1) N_2 + n \text{ and } k = \big( m + v_v(m,n) - 1 \big) N_2 + n + v_h(m,n) \\ 0 & \text{otherwise} \end{cases} \qquad (9)

If the target position falls outside the image support (e.g., at image boundaries), the corresponding entries are set to zero so that the dimensions of the matrix $A$ stay $N_1 N_2 \times N_1 N_2$. It should be noted that the matrix formed using Eq. (9) may contain multiple entries with value '1' in a given row. This is because several pixels in the source image can be mapped to the same location in the destination image during forward warping. In such cases, for a given row index $k$ we keep only the last '1' entry in the matrix $A$, while the remaining ones in the row are set to zero. This is motivated by the fact that, during forward warping, when multiple source pixels are mapped to the same destination point, the intensity value of the last source pixel is assigned to the destination pixel (we assume that the pixels are scanned from left to right and then top to bottom). Furthermore, it is interesting to note that some of the rows in the matrix $A$ do not contain any entry with value '1', i.e., all entries in such a row are zero. This means that the corresponding set of pixel locations in the warped image has zero value; we denote by $\Omega$ the set of row indexes in the matrix $A$ that do not contain any '1' entry. These pixel positions represent holes in the warped image that define the occluded regions. Finally, the $k$-th entry of the warped image is represented as

v(I_w)(k) = \sum_{l=1}^{N_1 N_2} A(k, l)\, v(\hat{I}_2)(l) \qquad (10)

Thus, it is clear that the matrix $A$, denoted $A_{21}$ when it warps view 2 onto view 1, shifts the pixels in $\hat{I}_2$ by the corresponding motion vectors in order to form $I_w$. In a similar way, we can construct the matrix $A_{12}$, and thus we can represent the warping of the image $\hat{I}_1$ onto view 2 as $A_{12}\, v(\hat{I}_1)$. Finally, note that similar operations can also be performed with an inverse mapping. For details related to the construction of the matrix $A$ based on inverse warping, we refer the reader to [24, Ch. 6, p. 95].
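The forward construction of the matrix, including the last-pixel-wins rule and the out-of-bounds handling described above, can be sketched as follows. This is a dense toy implementation for readability; a real implementation would use a sparse matrix format:

```python
import numpy as np

def build_warp_matrix(vh, vv):
    """Forward-warping matrix A (cf. Eq. (9)) as a dense 0/1 array.
    vh, vv: (H, W) integer horizontal/vertical motion components."""
    H, W = vh.shape
    N = H * W
    A = np.zeros((N, N))
    # pixels scanned left-to-right, top-to-bottom: when several source
    # pixels map to the same target, the last one scanned wins
    for m in range(H):
        for n in range(W):
            mt, nt = m + vv[m, n], n + vh[m, n]
            if 0 <= mt < H and 0 <= nt < W:   # drop out-of-bounds targets
                k = mt * W + nt               # target (row) index
                A[k, :] = 0.0                 # keep only the last '1' in the row
                A[k, m * W + n] = 1.0
    return A

vh = np.array([[0, 1, 0]])                    # pixels 1 and 2 collide at column 2
vv = np.zeros((1, 3), dtype=int)
A = build_warp_matrix(vh, vv)
warped = A @ np.array([10.0, 20.0, 30.0])     # column 1 becomes a hole
```

In the example, row 1 of the matrix is all zeros, so the warped vector contains a zero there: this is exactly a hole in the sense of the set $\Omega$.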

II-C Joint Reconstruction

We now discuss our novel joint reconstruction algorithm that takes benefit of the estimated correlation information, given by the matrix $A_{21}$ (or $A_{12}$), in order to reconstruct the images. We propose to reconstruct an image pair $(I_1, I_2)$ as a solution to the following optimization problem:

\min_{I_1, I_2} \; \|I_1\|_{TV} + \|I_2\|_{TV} \qquad (11)
s.t. \; \|I_1 - \hat{I}_1\|_2 \le \varepsilon_1, \quad \|I_2 - \hat{I}_2\|_2 \le \varepsilon_2, \quad \|A_{21}\, v(I_2) - v(I_1)\|_2 \le \varepsilon_3

Here, $\hat{I}_1$ and $\hat{I}_2$ represent the decoded views (see Fig. 1) and $\|\cdot\|_{TV}$ represents the total-variation (TV) norm. The first two constraints of Eq. (11) force the reconstructed images $I_1$ and $I_2$ to be close to the respective decoded images $\hat{I}_1$ and $\hat{I}_2$. The last constraint encourages the reconstructed images to be consistent with the correlation information represented by $A_{21}$, i.e., the warped image $A_{21}\, v(I_2)$ should be consistent with the image $I_1$. Finally, the TV prior term ensures that the reconstructed images $I_1$ and $I_2$ are smooth. In general, the inclusion of prior knowledge brings an effective reduction of the search space, which leads to efficient optimization solutions. The optimization problem of Eq. (11) therefore reconstructs a pair of TV-smooth images that is consistent with both the compressed images and the correlation information. In our framework, we use the TV prior on the reconstructed images; however, one could also use a sparsity prior that minimizes the $\ell_1$ norm of the coefficients in a sparse representation of the images [25, 26].
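As an aside, the (isotropic) TV norm used as the objective can be computed with simple finite differences; the following is a sketch with replicated borders, since boundary conventions vary:

```python
import numpy as np

def tv_norm(img):
    """Isotropic total-variation norm: sum over pixels of the magnitude
    of the discrete gradient (last row/column replicated at the border)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    gy = np.diff(img, axis=0, append=img[-1:, :])  # vertical differences
    return np.sqrt(gx ** 2 + gy ** 2).sum()

# one vertical edge of height 2 -> TV norm of 2
t = tv_norm(np.array([[0.0, 1.0], [0.0, 1.0]]))
```

Smooth (piecewise-constant) images give small values of this functional, which is why it is a natural objective for the reconstruction.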

In the above formulation, it is clear that we measure the correlation consistency of all the pixels in the image $I_1$ and the warped image $A_{21}\, v(I_2)$. However, this assumption does not hold in multi-view imaging scenarios, as there are often problems due to occlusions. This indicates that we need to consider only the pixels that appear in both views, and we need to ignore the holes in the warped image while enforcing consistency between $A_{21}\, v(I_2)$ and $I_1$. The positions of the holes in the warped image correspond to the row indexes in the matrix $A_{21}$ that do not contain any value '1', i.e., all entries in a given row are zero. Once these rows are identified, we simply ignore their contribution while we measure the correlation consistency between the images. More formally, let $\Omega$ be the set of indexes of these rows. Let us denote by $H$ a diagonal matrix that is formed as

H(k, k) = \begin{cases} 0 & \text{if } k \in \Omega \\ 1 & \text{otherwise} \end{cases} \qquad (12)

where $k \in \{1, \ldots, N_1 N_2\}$. For effective occlusion handling, the joint reconstruction problem of Eq. (11) can be modified as

\min_{I_1, I_2} \; \|I_1\|_{TV} + \|I_2\|_{TV} \qquad \text{(OPT-1)}
s.t. \; \|I_1 - \hat{I}_1\|_2 \le \varepsilon_1, \quad \|I_2 - \hat{I}_2\|_2 \le \varepsilon_2, \quad \big\| H \big( A_{21}\, v(I_2) - v(I_1) \big) \big\|_2 \le \varepsilon_3

Note that, by setting $H$ to the identity matrix, we recover the optimization problem of Eq. (11) that considers the consistency of all the pixels in $A_{21}\, v(I_2)$ and $I_1$. We show later that the quality of the reconstructed images is improved when our joint decoding problem OPT-1 is solved with the matrix $H$ constructed using Eq. (12). Finally, the depth estimation and the joint reconstruction steps could be iterated several times. In our experiments, however, we have not observed any significant improvement in the quality of the reconstructed images by repeating these two steps.
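Given the warping matrix, the occlusion mask of Eq. (12) follows directly from its all-zero rows; a small NumPy sketch, again using a dense matrix for illustration:

```python
import numpy as np

def occlusion_mask(A):
    """Diagonal matrix H of Eq. (12): H(k, k) = 0 for every row k of the
    warping matrix A that contains no '1' (an occlusion hole), else 1."""
    visible = (A.sum(axis=1) > 0).astype(float)
    return np.diag(visible)

# row 1 of A is empty, so pixel 1 is masked out of the consistency term
Hm = occlusion_mask(np.array([[1.0, 0.0], [0.0, 0.0]]))
```

The masked consistency residual of OPT-1 is then simply `Hm @ (A @ x2 - x1)`, so occluded positions contribute nothing to the constraint.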

III Optimization Methodology

We now propose a solution for the joint reconstruction problem OPT-1. We first show that the optimization problem is convex, and we then derive an effective solution based on proximal methods.

Proposition 1

The OPT-1 optimization problem is convex.

Proof:

Our objective is to show that all the functions in the OPT-1 problem are convex. It is easy to check that the TV norm and the functions $\|I_j - \hat{I}_j\|_2$, $j = 1, 2$, are convex [27]. So, we only have to show that the last constraint is a convex function.

Let $x = [v(I_1)^T \; v(I_2)^T]^T$ and $B = H\,[-\mathbb{1} \;\; A_{21}]$, where $\mathbb{1}$ represents the identity matrix, so that $H(A_{21}\, v(I_2) - v(I_1)) = Bx$. The function in the last constraint can be represented as

f(x) = \|Bx\|_2^2 = x^T B^T B\, x.

The second derivative (Hessian) of the function is given as

\nabla^2 f(x) = 2\, B^T B.

Here, $B^T B$ is positive semi-definite, which follows from $u^T B^T B\, u = \|Bu\|_2^2 \ge 0$ for any $u$. This means that the Hessian is positive semi-definite, and thus $f$ is convex.

We now propose an optimization methodology to solve the OPT-1 convex problem with proximal splitting methods [8]. For mathematical convenience, we rewrite OPT-1 as

\min_{x} \; \|v^{-1}(x_1)\|_{TV} + \|v^{-1}(x_2)\|_{TV} \qquad (13)
s.t. \; \|x_1 - v(\hat{I}_1)\|_2 \le \varepsilon_1, \quad \|x_2 - v(\hat{I}_2)\|_2 \le \varepsilon_2, \quad \|H B x\|_2 \le \varepsilon_3

where $x = [x_1^T \; x_2^T]^T$, $x_1 = v(I_1)$, $x_2 = v(I_2)$, $B = [-\mathbb{1} \;\; A_{21}]$, and $\mathbb{1}$ represents the identity matrix. Recall that $v^{-1}$ (for simplicity, we omit the image subscript in Eq. (13)) is the operator that outputs a matrix of dimensions $N_1 \times N_2$ from a column vector of dimension $N_1 N_2$. The optimization problem of Eq. (13) can be viewed as a special case of the general convex problem

\min_{x \in \mathbb{R}^N} \; \sum_{i=1}^{5} f_i(x) \qquad (14)

where the functions $f_i \in \Gamma_0(\mathbb{R}^N)$ [8]; $\Gamma_0(\mathbb{R}^N)$ is the class of lower semicontinuous convex functions from $\mathbb{R}^N$ to $(-\infty, +\infty]$ that are not $+\infty$ everywhere. For the optimization problem given in Eq. (13), the functions in the representation of Eq. (14) are

  1. $f_1(x) = \|v^{-1}(x_1)\|_{TV}$,

  2. $f_2(x) = \|v^{-1}(x_2)\|_{TV}$,

  3. $f_3(x) = i_{C_1}(x)$, i.e., $f_3$ is the indicator function of the closed convex set $C_1 = \{ x : \|x_1 - v(\hat{I}_1)\|_2 \le \varepsilon_1 \}$,

  4. $f_4(x) = i_{C_2}(x)$, where $C_2 = \{ x : \|x_2 - v(\hat{I}_2)\|_2 \le \varepsilon_2 \}$,

  5. $f_5(x) = i_{C_3}(x)$, where $C_3 = \{ x : \|H B x\|_2 \le \varepsilon_3 \}$.

The solution to the problem of Eq. (14) can be estimated by generating a recursive sequence of the form $x_{n+1} = \mathrm{prox}_{f}(x_n)$, where $f = \sum_{i=1}^{5} f_i$. The proximity operator is defined as $\mathrm{prox}_f(z) = \arg\min_{x}\, f(x) + \frac{1}{2} \|x - z\|_2^2$. The main difficulty with these iterations is the computation of the prox operator. There is no closed-form expression for the prox, especially when the function $f$ is the cumulative sum of two or more functions. In such cases, instead of computing the prox directly for the combined function $f$, one can perform a sequence of calculations involving separately the individual operators $\mathrm{prox}_{f_i}$. The algorithms in this class are known as splitting methods [8], and they lead to easily implementable algorithms.

We describe in more detail the methodology to compute the prox of the functions $f_i$. For the function $f_1$, the prox can be computed using Chambolle's algorithm [28]. A similar approach can be used to compute the prox of $f_2$. The function $f_3$ can be represented as $f_3 = i_{B_1} \circ S_1$, where $S_1$ is the operator that selects the component $x_1$ from $x$ (so that $S_1 S_1^{*} = \mathbb{1}$) and $B_1$ represents the $\ell_2$-ball defined as $B_1 = \{ y : \|y - v(\hat{I}_1)\|_2 \le \varepsilon_1 \}$. Then, the prox of $f_3$ can be computed using the following closed-form expression:

\mathrm{prox}_{f_3}(x) = x + S_1^{*} \big( P_{B_1}(S_1 x) - S_1 x \big) \qquad (15)

[29], where $S_1^{*}$ represents the conjugate transpose of $S_1$ and $P_{B_1}$ denotes the projection onto the set $B_1$. The projection onto an $\ell_2$-ball $B = \{ y : \|y - c\|_2 \le \varepsilon \}$ can be computed using radial projection [8] as

P_{B}(y) = \begin{cases} y & \text{if } \|y - c\|_2 \le \varepsilon \\ c + \varepsilon \dfrac{y - c}{\|y - c\|_2} & \text{otherwise} \end{cases} \qquad (16)
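The radial projection of Eq. (16) translates directly into code; a NumPy sketch, which is also the proximity operator of the ball's indicator function:

```python
import numpy as np

def project_ball(y, c, eps):
    """Radial projection of y onto the l2-ball {x : ||x - c|| <= eps}."""
    r = y - c
    norm = np.linalg.norm(r)
    if norm <= eps:
        return y            # already inside: the point is left unchanged
    return c + eps * r / norm   # otherwise: rescale along the radius

inside = project_ball(np.array([3.0, 4.0]), np.zeros(2), 5.0)
outside = project_ball(np.array([3.0, 4.0]), np.zeros(2), 1.0)
```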

The prox for the function $f_4$ can also be computed using Eq. (15), by replacing $S_1$ with the operator $S_2$ that selects the component $x_2$, and $B_1$ with $B_2 = \{ y : \|y - v(\hat{I}_2)\|_2 \le \varepsilon_2 \}$. Finally, the function $f_5$ can be represented with the ball $B_3 = \{ y : \|y\|_2 \le \varepsilon_3 \}$ and the affine operator $L = HB$, i.e., $f_5 = i_{B_3} \circ L$. As the operator $L$ is not a tight frame, the prox of $f_5$ cannot be expressed in closed form and is computed using an iterative scheme [29]. Let $\underline{\beta}$ and $\bar{\beta}$ be the frame constants of $L$, with $\underline{\beta} \le \bar{\beta}$. The prox can be calculated iteratively [29] as

(17)
(18)

where the step size is chosen in accordance with the frame constants. It has been shown that the iterates converge linearly, with the best convergence rate attained when the step size is matched to $\underline{\beta}$ and $\bar{\beta}$.

In our work, we use the parallel proximal algorithm (PPXA) proposed by Combettes et al. [8] to solve Eq. (14), as this algorithm is easily implementable on multicore architectures due to its parallel structure. The PPXA algorithm starts with an initial solution and, in each iteration, computes the prox of every function $f_i$ in parallel; the results are then used to update the current solution. The prox computations and the updating steps are repeated until convergence is reached. The authors have shown that the sequence of solutions generated by the PPXA algorithm is guaranteed to converge to the solution of problems such as the one given in Eq. (14).
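To make the structure of PPXA concrete, the following toy sketch runs the algorithm on a scalar problem, minimizing $(x-1)^2 + (x-3)^2$, whose minimizer is $x = 2$; the prox of each quadratic term has a closed form. This is only a didactic illustration of the parallel prox-and-average structure, not the image reconstruction problem itself, and the weights, step size and objective are our choices:

```python
def prox_quad(y, a, step):
    # prox of step*(x - a)^2 at y:  argmin_x step*(x - a)^2 + 0.5*(x - y)^2
    return (y + 2.0 * step * a) / (1.0 + 2.0 * step)

def ppxa(a_vals, n_iter=50, gamma=1.0, lam=1.0):
    """PPXA sketch for min_x sum_i f_i(x), with f_i(x) = (x - a_i)^2.
    The prox of each f_i is evaluated independently (in parallel on a
    multicore machine), then the results are averaged."""
    m = len(a_vals)
    w = 1.0 / m                          # equal weights summing to one
    y = [0.0] * m                        # one auxiliary variable per function
    x = sum(w * yi for yi in y)
    for _ in range(n_iter):
        p = [prox_quad(y[i], a_vals[i], gamma / w) for i in range(m)]
        pbar = sum(w * pi for pi in p)   # weighted average of the prox points
        for i in range(m):
            y[i] += lam * (2.0 * pbar - x - p[i])
        x += lam * (pbar - x)
    return x

x_star = ppxa([1.0, 3.0])   # converges towards the minimizer x = 2
```

In the actual reconstruction problem, the five prox evaluations of Section III replace `prox_quad`, but the averaging-and-update skeleton is the same.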

IV Experimental Results

IV-A Setup

We now study the performance of our distributed representation scheme for the joint reconstruction of pairs of compressed images. The experiments are carried out on six natural datasets, namely Tsukuba (center and right views), Venus (views 2 and 6) [16], Plastic (views 1 and 2) [30], Flowergarden (frames 5 and 6), Breakdancers (views 0 and 2) and Ballet (views 3 and 4) [31]. The first four datasets have been captured by a camera array where the different viewpoints correspond to translating the camera along one of the image coordinate axes. In such a scenario, the motion of objects due to the viewpoint change is restricted to the horizontal direction, with no motion along the vertical direction. The depth estimation is thus a one-dimensional search problem, and the data cost function given in Eq. (2) is modified accordingly.
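For the rectified datasets, this 1D search can be illustrated with a plain winner-take-all block matcher using the squared-difference cost of Eq. (3), with no smoothness term; the function name and window size are ours:

```python
import numpy as np

def ssd_disparity(left, right, max_d, half=1):
    """Per-pixel disparity by 1D horizontal search with an SSD patch
    cost (a minimal illustration of the one-dimensional search space)."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=int)
    for m in range(half, H - half):
        for n in range(half, W - half):
            patch = left[m - half:m + half + 1, n - half:n + half + 1]
            best, best_d = np.inf, 0
            # candidate disparities kept inside the image support
            for d in range(min(max_d + 1, n - half + 1)):
                cand = right[m - half:m + half + 1, n - d - half:n - d + half + 1]
                cost = ((patch - cand) ** 2).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[m, n] = best_d
    return disp

# synthetic rectified pair: the left view is the right view shifted by 2
rng = np.random.default_rng(0)
right = rng.random((5, 8))
left = right.copy()
left[:, 2:] = right[:, :-2]
disp = ssd_disparity(left, right, max_d=3)
```

The regularized formulation of Section II-A adds the smoothness term of Eq. (6) on top of exactly this kind of per-pixel data cost.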

We compress the images independently using an H.264 intra coding scheme; this implementation is carried out using the JM reference software, version 18 [32]. The bit rate at the encoder is varied by changing the quantization parameter (QP) in the H.264 coding scheme. In our experiments, we use six different QP values in order to generate the rate-distortion (RD) plots. Also, we use the same QP value while encoding the images $I_1$ and $I_2$, in order to ensure a balanced rate allocation among the cameras. We estimate a depth image from the decoded images $\hat{I}_1$ and $\hat{I}_2$ by solving the regularized energy minimization problem of Eq. (1) using the $\alpha$-expansion algorithm in Graph Cuts [22]. Unless stated explicitly, we solve the OPT-1 optimization problem with the matrix $H$ constructed using Eq. (12). The smoothness parameters of the depth estimation problem of Eq. (7) and the parameters of the OPT-1 joint reconstruction problem are given in Table I for all six datasets; these parameters are selected based on trial-and-error experiments. The solution to the OPT-1 problem is computed by running the PPXA algorithm for 100 iterations.

We report in this section the performance of the proposed joint reconstruction scheme and highlight the benefit of exploiting the inter-view correlation while decoding the images. We then study the effect of compression on the quality of the estimated depth images. Next, we analyze the importance of the matrix $H$, which enforces correlation consistency only on the corresponding pixels (i.e., the pixels that are not occluded), on the quality of the reconstructed images. Finally, we compare the rate-distortion performance of our scheme w.r.t. state-of-the-art distributed coding solutions and joint encoding algorithms.

Dataset       | $\lambda$ | $K$ | $\varepsilon_1 = \varepsilon_2$ | $\varepsilon_3$
Tsukuba       | 190 | 4   | 3 | 2
Venus         | 220 | 4   | 1 | 2
Plastic       | 120 | 4   | 1 | 2
Flowergarden  | 170 | 3   | 1 | 1.25
Breakdancers  | 300 | 160 | 2 | 1
Ballet        | 290 | 160 | 1 | 2.2
TABLE I: The parameters $\lambda$ and $K$ in Eq. (7) and $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ in the OPT-1 problem used in our experiments.

Fig. 2: Comparison of the rate-distortion performance between the independent and the joint decoding schemes as well as H.264-based joint encoding schemes: (a) Venus dataset; and (b) Flowergarden dataset.
Fig. 3: Comparison of the rate-distortion performances between the independent and the joint decoding schemes as well as H.264-based joint encoding schemes for the Breakdancers dataset.

IV-B Performance Analysis

We first compare our joint reconstruction results with respect to a scheme where the images are reconstructed independently. Fig. 2(a), Fig. 2(b) and Fig. 3 compare the overall quality of the decoded images between the independent (denoted H.264 Intra) and the joint decoding solutions (denoted Proposed), respectively for the Venus, Flowergarden and Breakdancers datasets. The x-axis represents the total number of bits spent on encoding the images, and the y-axis represents the mean PSNR value of the reconstructed images $I_1$ and $I_2$. From the plots, we see that the proposed joint reconstruction scheme outperforms the independent reconstruction scheme by a clear margin for each of these datasets. This confirms that the proposed joint decoding framework is effective in exploiting the inter-view correlation while reconstructing the images. Similar experimental results have been observed on the other datasets. Compared to the first two datasets, the gain due to joint reconstruction for the Breakdancers dataset is smaller, as confirmed in Fig. 3. It is well known that this dataset is weakly correlated due to the large camera spacing [31]; hence the gain provided by joint decoding is small.

We then quantitatively compare the RD performances of the joint and the independent coding schemes using the Bjontegaard metric [33]. In our experiments, we use the first four points in the RD plot for the computation, in order to highlight the benefit in the low bit rate region; these correspond to the four largest QP values. The relative rate savings due to joint reconstruction for all six datasets are given in the second column of Table II. From the values in Table II, we see that the benefit of joint reconstruction depends on the correlation among the images; in general, the higher the correlation, the better the performance. For example, the Flowergarden dataset gives 22.8% rate savings on average compared to H.264 intra, due to very high correlation. On the other hand, the Breakdancers and Ballet datasets only provide about 5-6% rate savings, due to weak correlation caused mainly by the large distances between the cameras. Though the gain is small for these datasets, we show later that the performance of our scheme competes with that of joint encoding solutions based on H.264 at low bit rates.

Dataset       | Proposed | Proposed: True depth | H.264: 4x4 | H.264
Tsukuba       | 14.9 | 20.5 | 15.8 | 42.4
Venus         | 15.9 | 21.3 | 12.2 | 44.9
Plastic       | 10.3 | 11   | 8.4  | 28.7
Flowergarden  | 22.8 | 29.3 | 29.2 | 46.5
Breakdancers  | 5.8  | 6.6  | -6.5 | 8.9
Ballet        | 4.4  | 6.1  | -8.9 | 2.7
TABLE II: Rate savings (%) with respect to the independent coding scheme based on H.264 intra for the stereo images. The average rate savings is computed using the Bjontegaard metric [33].

Fig. 4: Comparison of the depth image estimated from compressed images with respect to the actual depth information for the Venus dataset. (a) Groundtruth disparity image; (b) computed disparity image at QP = 51; (c) disparity error at QP = 51, where the pixels with an absolute error greater than one are marked in white; (d) computed disparity image at a smaller QP; (e) the corresponding disparity error. The parameter $fB$ represents the product of the focal length and the baseline distance between the cameras $C_1$ and $C_2$.

Fig. 5: Comparison of the depth image estimated from compressed images with respect to the actual depth information for the Flowergarden dataset. (a) Groundtruth disparity image; (b) computed disparity image at a high QP; (c) the corresponding disparity error, where the pixels with an absolute error greater than one are marked in white; (d) computed disparity image at a smaller QP; (e) the corresponding disparity error. The parameter $fB$ represents the product of the focal length and the baseline distance between the cameras.

We then carry out the same experiments in a scenario where the images are jointly reconstructed using a correlation model that is estimated from the original images. This scheme serves as a benchmark for the joint reconstruction, since the correlation is accurately known at the decoder. The corresponding results are denoted as Proposed: True depth in Fig. 2, and the corresponding rate savings compared to the independent compression based on H.264 intra are given in the third column of Table II. At low bit rates, in general, our scheme falls short of this upper-bound performance due to the poor quality of the depth estimation from compressed images; this can be seen, for example, for the Flowergarden dataset in Fig. 2(b) at a total bit rate of 0.2 bpp. As a result, we see in Table II that the rate savings are better when the actual depth information is used for the joint reconstruction than when the depth information is estimated from compressed images. We show in Fig. 4(b) and Fig. 4(d) the inverse depth images (i.e., disparity images) estimated from the decoded images encoded at QP = 51 (total bit rate of 0.08 bpp) and at a smaller QP (total bit rate of 0.98 bpp), respectively, for the Venus dataset. Comparing the respective disparity images with the actual disparity information in Fig. 4(a), we observe poor-quality disparity results at QP = 51. Quantitatively, the errors in the disparity images, measured as the percentage of pixels with an absolute error greater than one, are clearly larger at the higher QP. This confirms that the quantization noise in the compressed images is not properly handled while estimating the correlation information. Similar conclusions can be derived for the Flowergarden dataset from Fig. 5, where, in general, the depth information estimated from highly compressed images is not accurate. Developing robust correlation estimation techniques to alleviate this problem is the target of our future work. We finally see in Fig. 2 that the reconstruction quality achieved with the correlation estimated from compressed images converges to the upper-bound performance when the rate increases or, equivalently, when the quality of the decoded images $\hat{I}_1$ and $\hat{I}_2$ improves.

Fig. 6: Importance of the matrix in the OPT-1 optimization problem. (a) Original right image; (b) reconstructed right image obtained as a solution of the OPT-1 problem when . (c) and (d) Reconstructed right and left images, respectively, obtained as solutions of the OPT-1 problem when the matrix is constructed based on Eq. (12). The PSNR values of the reconstructed images are: (b)  dB; (c)  dB; and (d)  dB. The experiments are carried out on the Tsukuba stereo dataset, where the images are encoded with a QP value of .

Fig. 7: Comparison of the rate-distortion performance between the independent (JPEG) and the joint decoding schemes as well as the DSC scheme based on disparity learning [9]: (a) Venus dataset; and (b) Flowergarden dataset. In this plot, the independent compression of the images and is performed using the JPEG coding scheme.

We now analyze the importance of the matrix in the optimization problem OPT-1, which enables us to measure the correlation consistency objective only on the non-occluded pixels, i.e., the holes in the warped image are ignored while measuring the correlation consistency between the images and . In order to highlight this benefit, we first solve the OPT-1 joint reconstruction problem by setting . The corresponding reconstructed right image is shown in Fig. 6(b). Comparing it with the original right view in Fig. 6(a), we see noticeable visual artifacts in the reconstructed right image . In particular, we notice strong artifacts along the edges of the lamp holder and in the face regions; this is mainly due to the improper handling of the occluded pixels. Quantitatively, the PSNR of the reconstructed image is  dB (the quality of the reconstructed left view is  dB). We then solve the OPT-1 optimization problem with a matrix constructed using Eq. (12). The corresponding reconstructed right and left images are shown in Fig. 6(c) and Fig. 6(d), respectively. We no longer see annoying artifacts in the reconstructed image, due to the effective handling of the occlusions via the matrix . Also, the quality of the reconstructed images becomes quite similar, with respective values of  dB and  dB for the right and left views.
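The effect of the mask can be illustrated with a small numerical sketch. The toy images, the constant disparity and the hole-detection rule below are illustrative assumptions, not the paper's exact construction of Eq. (12): pixels that receive no value during warping are treated as occlusion holes and excluded from the consistency measure.

```python
import numpy as np

def warp_with_mask(right, disparity):
    """Warp the right view toward the left viewpoint with a per-pixel
    horizontal disparity; target pixels receiving no value are holes."""
    h, w = right.shape
    warped = np.zeros_like(right)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = x + disparity[y, x]          # target column after warping
            if 0 <= xt < w:
                warped[y, xt] = right[y, x]
                filled[y, xt] = True
    return warped, filled

def masked_consistency(target, warped, filled):
    """Correlation-consistency error restricted to non-occluded pixels,
    i.e. the norm of M(w(I) - I') with M = diag(filled)."""
    diff = (warped - target) * filled          # the mask zeroes out holes
    return float(np.linalg.norm(diff))

# toy example: a constant disparity of 1 leaves the first column unfilled
right = np.arange(16.0).reshape(4, 4)
disp = np.ones((4, 4), dtype=int)
warped, filled = warp_with_mask(right, disp)
err_masked = masked_consistency(right, warped, filled)  # holes ignored
err_naive = float(np.linalg.norm(warped - right))       # holes penalized
```

Without the mask, the unfilled column contributes a large spurious error, mirroring the artifacts observed when the matrix is not used in OPT-1.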

We then compare the RD performance of our scheme to a distributed source coding (DSC) solution based on the LDPC encoding of DCT coefficients, where the disparity field is estimated at the decoder using an Expectation Maximization (EM) algorithm [9]. The resulting RD performance is given in Fig. 7(a) and Fig. 7(b) (denoted as Disparity learning) for the Venus and Flowergarden datasets, respectively. In the DSC scheme, the Wyner-Ziv image is decoded with the JPEG-coded reference image as side information. In order to have a fair comparison between the proposed scheme and this DSC scheme [9], we carry out our joint reconstruction experiments with JPEG-compressed images; that is, instead of H.264 intra we now use JPEG for independently compressing the images and . Then, from the JPEG-coded images and , we jointly reconstruct a pair of images and using the methodology described in Section II. The resulting RD performance of the proposed scheme is shown in Fig. 7(a) and Fig. 7(b) for the two datasets. We first notice that the proposed joint reconstruction scheme improves the quality of the compressed images, which is consistent with our earlier observations. We further observe that the disparity learning scheme marginally improves the quality of the compressed images only at low bit rates; it fails to perform better than the JPEG coding scheme at high bit rates. Also, we note that the DSC scheme in [9] requires a feedback channel in order to accurately control the LDPC encoding rate, while our solution requires neither statistical correlation modeling at the encoder nor a feedback channel; this clearly highlights the benefits of the proposed approach.

For the sake of completeness, we finally compare the performance of our scheme with joint encoding solutions based on H.264. In particular, the joint compression of views is carried out by setting the profile ID to 128, which corresponds to the stereo profile [32]. In this profile, one of the images (say ) is encoded as an I-frame while the remaining view (say ) is encoded as a P-frame. We consider two different settings for the H.264 motion estimation, which is performed with a variable and with a fixed macroblock size of . The RD performance corresponding to both cases (denoted as H.264 and H.264: 4×4 blocks, respectively) is shown in Fig. 2(a), Fig. 2(b) and Fig. 3 for the Venus, Flowergarden and Breakdancers datasets, respectively. Also, we report in columns 4 and 5 of Table II the rate savings of the joint encoding scheme compared to the H.264 intra scheme. First, it is interesting to note that for rectified images (or when the camera motion is horizontal), our scheme competes with the H.264 joint encoding performance when the block size is set to . However, our scheme does not perform as well at high bit rates due to the lack of texture encoding; in other words, our scheme decodes the images by exploiting the geometrical correlation information, while the visual information along textures and edges is not perfectly captured. However, for non-rectified images like the Breakdancers dataset (see Fig. 3), our scheme competes with the joint encoding solutions based on H.264. Similar conclusions can be derived for the Ballet dataset in Table II, where the proposed scheme provides rate savings of 4.4%, while H.264 saves only 2.7%. This is because, when the images are not rectified, as in the Breakdancers and Ballet datasets, block-based motion compensation is not an ideal model to capture the inter-view correlation. For the same reason, we see in Fig. 3 that the H.264 joint encoding with  blocks performs even worse than the H.264 intra coding scheme; this is indicated with a negative sign in Table II.

V Joint Reconstruction of Multiple Images

V-A Optimization Problem

So far, we have focused on the distributed representation of pairs of images. We now describe the extension of our framework to datasets with correlated images captured by cameras at different viewpoints. We further assume that the cameras are calibrated and denote the intrinsic camera matrices of the cameras as . Also, let and represent the rotation and translation, respectively, of the cameras with respect to the global coordinate system. Similarly to the stereo setup, the correlated images are compressed independently (e.g., with H.264 intra or JPEG) with a balanced rate allocation. The compressed visual information is transmitted to the central decoder, where we jointly process all the compressed views in order to benefit from the inter-view correlation for improved reconstruction quality. In particular, as in the stereo decoding framework, we first estimate a depth image from the decoded images (resp. ) and use it for joint signal recovery. The reconstructed images are denoted as .

We propose to estimate the depth image from the decoded images in a regularized energy minimization framework, as a tradeoff between a data term and a smoothness term . The depth image is estimated by minimizing the energy, which is represented as

(19)

where denotes the set of candidate depth images. Note that this formulation is similar to Eq. (1) in the stereo case.
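As a toy illustration of this tradeoff, the energy can be written down and minimized exactly on a 1-D scanline with dynamic programming. The paper relies on graph cuts for full 2-D depth images; the cost values and smoothness penalty below are made-up assumptions for the sketch.

```python
import numpy as np

def energy(depth, data_cost, lam):
    """E(d) = E_data(d) + lam * E_smooth(d) on a 1-D scanline.
    data_cost[x, l] is the photo-consistency cost of depth label l
    at pixel x; the smoothness term penalizes label jumps."""
    x = np.arange(len(depth))
    e_data = data_cost[x, depth].sum()
    e_smooth = np.abs(np.diff(depth)).sum()
    return e_data + lam * e_smooth

def minimize_dp(data_cost, lam):
    """Exact minimizer of the 1-D energy by dynamic programming
    (graph cuts play this role for 2-D depth images)."""
    n, L = data_cost.shape
    cost = data_cost[0].astype(float).copy()
    back = np.zeros((n, L), dtype=int)
    labels = np.arange(L)
    for x in range(1, n):
        # trans[l, lp]: best cost of reaching label l from previous label lp
        trans = cost[None, :] + lam * np.abs(labels[:, None] - labels[None, :])
        back[x] = trans.argmin(axis=1)
        cost = data_cost[x] + trans.min(axis=1)
    depth = np.zeros(n, dtype=int)
    depth[-1] = int(cost.argmin())
    for x in range(n - 1, 0, -1):      # backtrack the optimal labels
        depth[x - 1] = back[x, depth[x]]
    return depth

# with no smoothness the data term wins; with a strong smoothness
# weight the estimated depth becomes piecewise constant
dc = np.array([[0.0, 5.0], [5.0, 0.0], [0.0, 5.0]])
d_data_only = minimize_dp(dc, 0.0)
d_smooth = minimize_dp(dc, 10.0)
```

Raising the smoothness weight suppresses the isolated label flip, exactly the regularizing behaviour the smoothness term is meant to provide.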

The data term in the multi-view setup should measure the cost of assigning a depth image that is globally consistent with all the compressed images. Many works in the literature address the problem of finding a good multi-view data cost function with global consistency, e.g., [34, 17, 35]. In this work, for the sake of simplicity, we propose to compute the global photo consistency as the cumulative sum of the data terms given in Eq. (2). That is, the global photo consistency term is given as

(20)

where is the warping function that projects the intensity values from the view to the view using the depth information . As described in Section II-A, this warping is a two-step process: we first project the pixels from the view to the global coordinate system using Eq. (4), and then project them to the view using the camera parameters and (see Eq. (5)). The objective of the smoothness cost is to enforce consistency in the depth solution; for a candidate depth image , the smoothness energy is computed using Eq. (6). Finally, the minimization problem of Eq. (19) can be solved using powerful optimization techniques (e.g., graph cuts) in order to estimate a depth image from the decoded images. We note that one could estimate more accurate depth information by considering additional energy terms in the model of Eq. (19) in order to properly account for occlusions, global scene visibility, etc.; more details are available in the overview paper [34].
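The two-step warping can be sketched with a minimal numpy example. The identity intrinsics, the unit horizontal baseline and the convention x_cam = R X + t are assumptions made for the sketch, since the exact forms of Eqs. (4) and (5) are not reproduced here.

```python
import numpy as np

def backproject(u, v, depth, K, R, t):
    """Step 1: lift pixel (u, v) at the given camera-frame depth into
    the global coordinate system (the role of Eq. (4))."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    X_cam = depth * ray                  # 3-D point in camera coordinates
    return R.T @ (X_cam - t)             # assumes x_cam = R X + t

def project(X, K, R, t):
    """Step 2: project the global 3-D point into the target view
    (the role of Eq. (5)), returning pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

# toy setup: identity intrinsics; the second camera is shifted by a
# baseline of 1 along the x axis
K = np.eye(3)
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-1.0, 0.0, 0.0])

X = backproject(2.0, 3.0, depth=4.0, K=K, R=R1, t=t1)
u2, v2 = project(X, K, R2, t2)
# horizontal shift u1 - u2 matches the disparity (baseline x focal) / depth
```

For this rectified toy setup the resulting horizontal shift equals the classical disparity relation, i.e. the product of focal length and baseline divided by depth.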

We now focus on the joint decoding problem, where we are interested in the reconstruction of the correlated images from the compressed information ; this is carried out by exploiting the correlation given in terms of the depth information, or equivalently the operator derived from the depth as described in Section II-B. In particular, we can represent the warping operation as a matrix multiplication of the form , where represents an approximation of the image at viewpoint . We propose to jointly reconstruct the multi-view images as the solution of the following optimization problem:

(OPT-2)
s.t.

where (see Eq. (12)) is a diagonal matrix constructed using a procedure similar to the one described in Section II-C; this allows us to measure the correlation consistency only on those pixels that are visible in all the views. From the above formulation, we see that the proposed reconstruction algorithm estimates TV-smooth images that are consistent with both the compressed information and the correlation (depth) information. It is interesting to note that by setting in OPT-2, we recover the stereo joint reconstruction problem OPT-1.
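The TV prior used in OPT-2 can be sketched as the isotropic discrete total variation, one common choice; the paper's exact discretization is not shown in this excerpt, so the forward-difference scheme below is an assumption.

```python
import numpy as np

def tv_norm(img):
    """Isotropic discrete total variation: sum over pixels of the
    gradient magnitude, using forward differences (zero at the border)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
    return float(np.sqrt(dx ** 2 + dy ** 2).sum())

# TV favours piecewise-smooth images: a flat image has zero TV,
# while adding noise strictly increases it
rng = np.random.default_rng(0)
flat = np.ones((8, 8))
noisy = flat + 0.5 * rng.standard_normal((8, 8))
```

Minimizing this quantity, subject to the measurement and correlation constraints, is what drives the reconstructed views toward smooth images with sharp, sparse edges.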

Finally, using the results derived in Prop. 1 it is easy to check that the optimization problem OPT-2 is convex. Therefore, our multi-view joint reconstruction problem OPT-2 can also be solved using proximal splitting methods. We can rewrite the OPT-2 problem as

(21)
s.t.

Here, , , , , and the matrix is given as

(22)

It can be noted that the above optimization problem is an extension of the one described in Eq. (13), where the TV prior, measurement and correlation consistency objectives are now applied to all the images. Therefore, the prox operators for the objective function and the constraints of Eq. (21) can be computed as described in Section III.
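The stacked structure can be sketched on toy 1-D scanlines. The constant disparity, the scanline size and the particular block layout below are illustrative assumptions, since the exact form of the matrix in Eq. (22) is not reproduced here; the point is only that warping is a linear operator and that per-view operators can be assembled into one block matrix acting on the concatenated images.

```python
import numpy as np

def warp_matrix(disparity, width):
    """Linear operator A such that (A @ x)[u] = x[u + disparity[u]]:
    warping expressed as a matrix multiplication, as in Section II-B.
    Rows whose source pixel falls outside the scanline stay zero (holes)."""
    A = np.zeros((width, width))
    for u in range(width):
        s = u + disparity[u]
        if 0 <= s < width:
            A[u, s] = 1.0
    return A

w = 4
x2 = np.array([10.0, 20.0, 30.0, 40.0])
x1 = np.array([0.0, 10.0, 20.0, 30.0])   # x1[u] = x2[u-1]; hole at u = 0

A12 = warp_matrix(np.full(w, -1), w)     # predicts view 1 from view 2
A21 = warp_matrix(np.full(w, +1), w)     # predicts view 2 from view 1
I = np.eye(w)

# one block operator acting on the concatenated views [x1; x2]; each
# block row encodes one correlation constraint of the joint problem
A = np.block([[-I, A12],
              [A21, -I]])
r = A @ np.concatenate([x1, x2])         # correlation residual
```

The single nonzero residual entry corresponds to a boundary pixel with no correspondence in the other view; it is precisely the kind of hole that the diagonal mask in OPT-2 zeroes out before the consistency term is evaluated.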

V-B Performance Evaluation

We now evaluate the performance of the multi-view joint reconstruction algorithm using five images (center, left, right, bottom and top views) of the Tsukuba dataset [17], three views (views 0, 1 and 2) of the Plastic dataset [30], three views (views 0, 2 and 4) of the Breakdancers dataset and three views (views 3, 4 and 5) of the Ballet dataset [31]. Similarly to the stereo setup, we independently encode the multi-view images using H.264 intra with varying QP values. At the joint decoder, we estimate a depth image from the compressed images by solving Eq. (19) with parameters = and , respectively, for the different datasets. Then, using the estimated depth image, we jointly decode the multiple views as a solution of the problem OPT-2, with the matrix constructed using Eq. (12). This problem is solved with the parameters and , respectively, for the datasets. Finally, we run the PPXA algorithm for 100 iterations in order to reconstruct the correlated images.

Fig. 8: Comparison of the rate-distortion performance between the independent, proposed, DISCOVER [10] and H.264-based joint encoding schemes: (a) Tsukuba dataset; and (b) Plastic dataset. The joint reconstruction is performed with and views, respectively for the Tsukuba and Plastic datasets.

Fig. 9: Comparison of the rate-distortion performance between the independent, proposed, DISCOVER [10] and H.264-based joint encoding schemes: (a) Breakdancers dataset; and (b) Ballet dataset. The joint reconstruction is performed with views.

We first compare our results with a stereo setup, where the depth estimation and the joint reconstruction steps are carried out on pairs of images. In more detail, we take to be the center image in Tsukuba, view 1 in Plastic, view 2 in Breakdancers and view 4 in Ballet, respectively, and we perform joint decoding between the image and the rest of the images by selecting different pairs of images independently (all pairs include ). For example, for the Tsukuba dataset, we perform the depth estimation and the joint reconstruction steps in the following order: (i) center and right views; (ii) center and left views; (iii) center and top views; and (iv) center and bottom views. After decoding all the images, we take the mean PSNR of all the reconstructed images. Note that in this setup the center image is reconstructed four times; for a fair comparison, we keep the reconstructed image that gives the highest PSNR with respect to . In a similar way, the experiments are carried out for the other datasets, where we perform the joint reconstruction of pairs of images and then compute the average PSNR of the reconstructed images. The resulting RD performance is denoted as Proposed: Stereo in Fig. 8(a), Fig. 8(b), Fig.