# Joint Reconstruction of Multi-view Compressed Images

Part of this work has been accepted to the European Signal Processing Conference (EUSIPCO), Bucharest, Romania, Aug. 2012 [1].

## Abstract

The distributed representation of correlated multi-view images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem, where distributively compressed correlated images are jointly decoded in order to improve the reconstruction quality of all the compressed images. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among the different cameras. A central decoder first estimates from the independently compressed images the underlying correlation model, which is then used for the joint signal recovery. The joint reconstruction is cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images complying with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be consistent with their compressed versions. We show by experiments that, for a given target bit rate, the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality. In addition, the decoding performance of our proposed algorithm compares advantageously to state-of-the-art distributed coding schemes based on disparity learning and on the DISCOVER codec.

## 1 Introduction

In recent years, vision sensor networks have gained ever-increasing popularity, driven by the availability of cheap semiconductor components. These systems usually acquire multiple correlated images of the same 3D scene from different viewpoints. Compression techniques shall exploit this correlation in order to represent the 3D scene information efficiently. The distributed coding paradigm becomes particularly attractive in such settings: it permits the correlation between images to be exploited efficiently with low encoding complexity and minimal inter-sensor communication, which directly translates into power savings in sensor networks. In the distributed compression framework, a central decoder jointly reconstructs the visual information from the compressed images by exploiting the correlation between the samples. This permits a good rate-distortion tradeoff in the representation of correlated multi-view images, even if the encoding is performed independently.

The first information-theoretic results on distributed source coding (DSC) appeared in the late seventies for the noiseless [2] and noisy [3] cases. However, most results in distributed coding remained non-constructive for about three decades. Practical DSC schemes were then designed by establishing a relation between the Slepian-Wolf theorem and channel coding [4]. Based on the results in [4], several distributed coding schemes for video and multi-view images have been proposed in the literature [5]. In such schemes, a feedback channel is generally used to accurately control the Slepian-Wolf coding rate. Unfortunately, this results in increased latency and bandwidth usage due to the multiple requests from the decoder; these schemes can thus hardly be used in real-time applications. One solution to avoid the feedback channel is to use a separate rate control module at the encoder to precisely control the Slepian-Wolf coding rate [7]; the overall computational complexity at the encoder then becomes non-negligible due to this module. In this paper, we build a distributed coding scheme where the correlated compressed images are directly transmitted to the joint decoder without any Slepian-Wolf coding; this avoids the need for complex estimation of the statistical correlation and of the coding rate at the encoder.

We consider a scenario where a set of cameras is distributed in a 3D scene. In most practical deployments of such systems, the images captured by the different cameras are likely to be correlated. The captured images are encoded independently using standard encoding solutions and transmitted to a central decoder. Here, we assume that the images are compressed using a balanced rate allocation, which permits the transmission and computational costs to be shared equally among the sensors; it thus avoids the need for a hierarchical relationship among the sensors. The central decoder builds a correlation model from the compressed images, which is used to jointly decode the multi-view images. The joint reconstruction is formulated as a convex optimization problem: it reconstructs multi-view images that are consistent with both the underlying correlation information and the information in the compressed images. While reconstructing the images, we also effectively handle the occlusions that commonly arise in multi-view imaging. We solve the joint reconstruction problem using effective parallel proximal algorithms [8].

We evaluate the performance of our novel joint decoding scheme on several multi-view datasets. Experimental results demonstrate that the proposed distributed coding solution improves the rate-distortion performance over separate coding by taking advantage of the inter-view correlation. We show that, for a given bit rate, the quality of the decoded images is quite balanced, as expected from a symmetric coding solution. We observe that, at low bit rates, our scheme performs close to joint encoding solutions based on H.264 when a small (4x4) block size is used for motion compensation. Finally, we show that our framework outperforms state-of-the-art distributed coding solutions based on disparity learning [9] and on the DISCOVER codec [10] in terms of rate-distortion performance. It certainly provides an interesting alternative to most classical DSC solutions [5], since it does not require any statistical correlation information at the encoder.

Only very few works in the literature address the distributed compression problem without using a channel encoder or a feedback channel. In [11], a distributed coding technique for compressing multi-view images has been proposed, where a joint decoder reconstructs the views from low resolution images using super-resolution techniques. In more detail, each sensor transmits a low resolution compressed version of the original image to the decoder. At the decoder, these low resolution images are registered with respect to a reference image, where the image registration is performed by shape analysis and image warping. The registered low resolution images are then jointly processed to decode a high resolution image using image super-resolution techniques. However, this framework requires communication between the encoders in order to facilitate the registration, e.g., the transmission of feature points. Other works in super-resolution fuse multiple compressed images for improved resolution [12]. Such techniques usually target the reconstruction of a *single* high resolution image from multiple compressed images. Alternatively, techniques have been developed in [13] to decode a *single* high quality image from several encoded versions of the same source image or video. This is achieved by solving an optimization problem that enforces the final reconstructed image to be consistent with all the compressed copies. Our main target in this paper is to jointly improve the quality of *multiple* compressed correlated (multi-view) images, and not to increase the spatial resolution of the compressed images or to extract a single high quality image. More recently, Schenkel *et al.* [15] have considered a distributed representation of image pairs. In particular, they have proposed an optimization framework to enhance the quality of JPEG compressed images. This work, however, considers an asymmetric scenario that requires a reference image for joint decoding.

The rest of the paper is organized as follows. The joint decoding algorithm along with the optimization framework for joint reconstruction is described in Section 2. In Section 3, we present the optimization algorithm based on proximal splitting methods. In Section 4, we present the experimental results for the joint reconstruction of pairs of images. Section 5 describes the extension of our proposed framework to decode multiple images along with the simulation results. Finally, in Section 6 we draw some concluding remarks.

## 2 Joint Decoding of Image Pairs

We consider the scenario illustrated in Fig. ?, where a pair of cameras project the 3D visual information of the scene onto two 2D image planes. The two images are compressed independently using standard encoding solutions (e.g., JPEG, H.264 intra) and transmitted to a central decoder. The joint decoder has access to the compressed versions of the correlated images, and its main objective is to improve the quality of all the compressed views by exploiting the underlying inter-view correlation. We first propose to estimate the correlation between the decoded images, which is effectively modeled by a dense depth image. The joint reconstruction stage then uses the depth information to enhance the quality of the decoded images. Note that one could solve a joint problem that estimates the correlation information and the improved images simultaneously; however, such a joint optimization problem would be hard to solve due to its complex objective function. Therefore, we propose to split the problem in two steps: (i) we estimate the correlation information from the decoded images; and (ii) we carry out the joint reconstruction using the estimated correlation information. These two steps are detailed in the rest of this section.

### 2.1 Depth Estimation

The first task is to estimate the correlation between the images, which typically takes the form of a depth image. In general, dense depth information is estimated by matching corresponding pixels between images. Several algorithms have been proposed in the literature to compute dense depth images; for more details, we refer the reader to [16]. In this work, we estimate a dense depth image from the compressed images in a regularized energy minimization framework, where the energy is composed of a data term and a smoothness term.

A dense depth image is obtained by minimizing this energy function, where a regularization parameter balances the importance of the data and smoothness terms and the minimization is carried out over a set of candidate depth images. The candidate depth values for every pixel position are discrete; they are constructed by uniformly sampling the inverse depth between the minimal and maximal depth values in the scene [17].
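In our notation (the symbols are assumed here, since they are not fixed above), writing $D$ for a candidate depth image, $E_d$ for the data term, $E_s$ for the smoothness term and $\lambda$ for the regularization weight, the estimation problem reads:

```latex
D^{\star} \;=\; \arg\min_{D \in \mathcal{D}} \; E(D),
\qquad
E(D) \;=\; E_d(D) + \lambda\, E_s(D),
```

where $\mathcal{D}$ denotes the set of candidate depth images obtained by uniformly sampling the inverse depth in $[1/Z_{\max},\, 1/Z_{\min}]$.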

We now discuss the components of the energy function of Eq. (1) in more detail. The data term is used to match pixels across the views by assuming that the 3D scene surfaces are Lambertian, i.e., that the intensity of a scene point is consistent irrespective of the viewpoint. It is computed by summing a per-pixel matching cost over the whole image.

The most commonly used pixel-based cost functions include squared and absolute intensity differences. In this work, we use the squared intensity difference to measure the disagreement of assigning a given depth value to a pixel location. Mathematically, it is computed as the squared difference between a pixel of the first image and the corresponding pixel of the warped second image.
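With the assumed notation above, a standard form of this data term is:

```latex
E_d(D) \;=\; \sum_{(x,y)} e(x, y, D),
\qquad
e(x, y, D) \;=\; \big( I_1(x, y) - W(I_2, D)(x, y) \big)^2 ,
```

where $W(I_2, D)$ denotes the second image warped to the first viewpoint using the candidate depth $D$.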

where the per-pixel cost involves a warping function that warps the second image towards the first one using the candidate depth value. This warping, in general, is a two-step process [18]. First, the pixel position in the image is projected into the world coordinate system using the camera parameters.

This back-projection uses the intrinsic camera matrix and the extrinsic camera parameters (defined with respect to the global coordinate system) of the first camera. Then, the 3D point is projected onto the image plane of the second camera using its own intrinsic and extrinsic camera parameters.

Finally, the pixel location in the warped image is obtained by rounding the projected coordinates to the nearest integer.
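The two projection steps can be sketched as follows. This is a minimal pinhole-camera illustration; the names `K`, `R`, `t` for the intrinsic matrix, rotation and translation, and the exact projection conventions, are our assumptions:

```python
import numpy as np

def warp_pixel(x, y, z, K1, R1, t1, K2, R2, t2):
    """Warp pixel (x, y) with depth z from camera 1 into camera 2.

    K*, R*, t* are the (assumed) intrinsics, rotation and translation
    of each camera; the two steps mirror the back-project / re-project
    description in the text.
    """
    # Step 1: back-project the pixel to a 3D world point.
    p1 = np.array([x, y, 1.0])
    X_cam1 = z * np.linalg.inv(K1) @ p1      # point in camera-1 frame
    X_world = R1.T @ (X_cam1 - t1)           # point in the world frame
    # Step 2: project the 3D point into the second camera.
    X_cam2 = R2 @ X_world + t2
    p2 = K2 @ X_cam2
    p2 = p2 / p2[2]                          # perspective division
    # The final pixel location is rounded to the nearest integer.
    return int(round(p2[0])), int(round(p2[1]))
```

With identical cameras the warp is the identity, which gives a quick sanity check of the conventions.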

The smoothness term is used to enforce consistent depth values at neighboring pixel locations. It is measured by penalizing the depth differences between neighboring pixels.

where the penalty is summed over the usual four-pixel neighborhood and a truncation threshold sets an upper bound on the smoothness penalty, such that depth discontinuities can be preserved [19].
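A common truncated-linear choice for this term, in assumed notation with $\mathcal{N}$ the set of four-pixel neighbor pairs and $\tau$ the truncation threshold, is:

```latex
E_s(D) \;=\; \sum_{(p, q) \in \mathcal{N}} \min\big( |D(p) - D(q)|,\; \tau \big),
```

where the truncation at $\tau$ caps the penalty across depth discontinuities so that object boundaries are not over-smoothed.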

We can finally write the regularized energy objective function for the depth estimation problem as the weighted combination of the data and smoothness terms defined above.

This cost function is used in the optimization problem of Eq. (1), which is usually non-convex. Several minimization algorithms exist in the literature to solve Eq. (1), e.g., simulated annealing [20], belief propagation [21] and Graph Cuts [22]. Among these solutions, the optimization techniques based on Graph Cuts compute the minimum energy in polynomial time and generally give better results than the other techniques [16]. Motivated by this, we solve the minimization problem of Eq. (1) using Graph Cut techniques in this work.

### 2.2 Image Warping as a Linear Transformation

Before describing our joint reconstruction problem, we show how the image warping operation in Eq. (3) can be written as a matrix multiplication.

We have shown earlier that the warping function shifts a pixel position in the reference image to a new position in the target image. Alternatively, this pixel shift between the images can be represented using the horizontal and vertical components of a motion field. Note that this motion field can easily be computed from Eqs. (4) and (5), once the depth information and the camera parameters are known. Our goal is now to represent the motion compensation operation as a linear transformation.

Here, the warped image is obtained by multiplying the vectorized input image by a square matrix whose entries in each row are determined by the horizontal and vertical components of the motion field.

In general, the elements of this warping matrix can be found in two ways: (i) forward warping; and (ii) inverse warping. In this work, we propose to construct the matrix based on forward warping; this permits easier handling of the occluded pixels, as shown later. Given a motion vector, the elements of the matrix are set to one at the positions that link each source pixel to its destination pixel.

If the destination position falls outside the image (e.g., at image boundaries), the corresponding entries are set to zero so that the dimensions of the matrix stay fixed. It should be noted that the matrix formed using Eq. (9) may contain multiple entries with value '1' in a row, because several pixels of the source image can be mapped to the same location in the destination image during forward warping. In such cases, for a given row index we keep only the last '1' entry in the row, while the remaining ones are set to zero. This is motivated by the fact that, during forward warping, when multiple source pixels are mapped to the same destination point, the intensity value of the last source pixel is assigned to the destination pixel.

Thus, the matrix shifts the pixels of the input image by the corresponding motion vectors in order to form the warped image; a matrix warping in the opposite direction can be constructed in the same way. Finally, note that similar operations can also be performed with an inverse mapping; for details on the construction of the warping matrix based on inverse warping, we refer the reader to [24].
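The forward construction above can be sketched as follows. This is a minimal dense illustration (a practical implementation would use a sparse matrix); the integer-valued motion field and the function name are our assumptions:

```python
import numpy as np

def build_warp_matrix(mh, mv):
    """Build a forward-warping matrix A so that vec(I_warped) = A @ vec(I).

    mh, mv are the horizontal/vertical motion-field components (H x W
    arrays, assumed integer-valued for simplicity).  Row r of A is the
    destination pixel r; when several source pixels map to the same
    destination, only the last one is kept, and rows with no '1' entry
    correspond to holes (occlusions).
    """
    H, W = mh.shape
    N = H * W
    A = np.zeros((N, N))
    for y in range(H):
        for x in range(W):
            xd, yd = x + int(mh[y, x]), y + int(mv[y, x])
            if 0 <= xd < W and 0 <= yd < H:   # drop out-of-frame targets
                r, c = yd * W + xd, y * W + x
                A[r, :] = 0.0                 # keep only the *last* source
                A[r, c] = 1.0
    return A
```

A zero motion field yields the identity matrix, and a uniform shift moves the pixel values while leaving all-zero (hole) rows at the uncovered positions.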

### 2.3 Joint Reconstruction

We now discuss our novel joint reconstruction algorithm that takes advantage of the estimated correlation information, given by the warping matrix, in order to reconstruct the images. We propose to reconstruct an image pair as the solution of the following optimization problem:

Here, the decoded views (see Fig. ?) act as data constraints and the objective involves the total-variation (TV) norm. The first two constraints of Eq. (10) force the reconstructed images to be close to the respective decoded images. The last constraint encourages the reconstructed images to be consistent with the correlation information represented by the warping matrix, i.e., the warped first image should be consistent with the second image. Finally, the TV prior ensures that the reconstructed images are smooth. In general, the inclusion of prior knowledge effectively reduces the search space, which leads to efficient optimization solutions. The optimization problem of Eq. (10) therefore reconstructs a pair of TV-smooth images that are consistent with both the compressed images and the correlation information. In our framework, we use the TV prior on the reconstructed images; however, one could also use a sparsity prior on the coefficients of a sparse representation of the images [25].
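In assumed notation, with $\hat I_1, \hat I_2$ the decoded images, $A$ the estimated warping matrix and $\epsilon_1, \epsilon_2, \epsilon_3$ tolerance parameters, the problem described above has the form:

```latex
\min_{I_1, I_2} \; \|I_1\|_{TV} + \|I_2\|_{TV}
\quad \text{s.t.} \quad
\|I_1 - \hat I_1\|_2 \le \epsilon_1, \;\;
\|I_2 - \hat I_2\|_2 \le \epsilon_2, \;\;
\|A\, I_1 - I_2\|_2 \le \epsilon_3 .
```

The first two constraints are the fidelity terms toward the decoded images, and the last one is the correlation consistency constraint.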

In the above formulation, the correlation consistency is measured over all the pixels of the image and of the warped image. However, this is not appropriate in multi-view imaging scenarios, where occlusions often occur. This indicates that we should only consider the pixels that appear in both views and ignore the holes in the warped image when enforcing the consistency between the two images. The positions of the holes in the warped image correspond to the row indices of the warping matrix that do not contain any '1' entry, i.e., rows whose entries are all zero. Once these rows are identified, we simply ignore their contribution when measuring the correlation consistency between the two images. More formally, the set of indices of these rows defines a diagonal masking matrix that is formed as

where the diagonal entries are set to zero at the hole positions and to one elsewhere. For effective occlusion handling, the joint reconstruction problem of Eq. (10) can then be modified to enforce the correlation consistency only through this masking matrix; we refer to the resulting problem of Eq. (12) as OPT-1.

Note that, by setting the masking matrix to identity, we recover the optimization problem of Eq. (10), which considers the consistency of all the pixels. We show later that the quality of the reconstructed images is improved when our joint decoding problem OPT-1 is solved with the masking matrix constructed using Eq. (11). Finally, the depth estimation and the joint reconstruction steps could be iterated several times; in our experiments, however, we have not observed any significant improvement in the quality of the reconstructed images by repeating these two steps.
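The construction of this masking matrix from the all-zero rows of the warping matrix can be sketched as follows (a minimal illustration assuming a dense NumPy representation; the function name is our own):

```python
import numpy as np

def occlusion_mask(A):
    """Diagonal masking matrix of Eq. (11): the diagonal entry is 0 for
    hole rows of the warping matrix A (rows with no nonzero entry,
    i.e. occluded destination pixels) and 1 otherwise."""
    hole_rows = ~np.any(A != 0, axis=1)
    return np.diag(np.where(hole_rows, 0.0, 1.0))
```

Multiplying the correlation residual by this matrix zeroes out the contribution of the occluded pixels in the consistency constraint.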

## 3 Optimization Methodology

We now propose a solution for the joint reconstruction problem of Eq. (12). We first show that the optimization problem is convex; then, we propose an effective solution based on proximal methods.

Our objective is to show that all the functions in the problem of Eq. (12) are convex. It is easy to check that the TV objective and the first two constraint functions are convex [27], so it remains to show that the last constraint defines a convex function.

Let us stack the two images into a single column vector, so that the correlation constraint can be represented as a quadratic function of this stacked variable. The second derivative (Hessian) of this function is a Gram matrix of the form B^T B, where B gathers the masked warping matrix and a negated identity matrix. Since z^T B^T B z = ||Bz||^2 >= 0 for any vector z, the Hessian is positive semi-definite and the constraint function is thus convex.
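Concretely, in assumed notation with the stacked variable $x = [x_1; x_2]$, the warping matrix $A$ and $B = [\,A \;\; -\mathbb{I}\,]$, the constraint function and its Hessian read:

```latex
h(x) \;=\; \|A x_1 - x_2\|_2^2 \;=\; \|B x\|_2^2 ,
\qquad
\nabla^2 h(x) \;=\; 2\, B^{\mathsf{T}} B \;\succeq\; 0 ,
```

so the sublevel sets of $h$, and hence the correlation consistency set, are convex.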

We now propose an optimization methodology to solve the convex problem of Eq. (12) with proximal splitting methods [8]. For mathematical convenience, we first rewrite Eq. (12) in an equivalent form.

where the images are handled in vectorized form, the constraint sets are expressed with the help of the identity matrix, and a reshaping operator maps a column vector back to a matrix with the image dimensions (for simplicity, we omit the subscripts in Eq. (14)). The optimization problem of Eq. (14) can then be viewed as a special case of a general convex problem involving a sum of convex functions.

Here, the functions belong to the class of lower semicontinuous convex functions from the signal space to the extended reals that are not identically infinite [8]. For the optimization problem given in Eq. (14), the functions in the representation of Eq. (15) are the two TV terms and the indicator functions of the closed convex sets defined by the three constraints: the two fidelity balls around the decoded images and the masked correlation consistency set.

The solution to the problem of Eq. (15) can be estimated by generating a recursive sequence based on the proximity operator of the objective. The main difficulty with such iterations is the computation of this operator: there is generally no closed-form expression for the proximity operator of a sum of two or more functions. In such cases, instead of computing the proximity operator of the combined objective directly, one can perform a sequence of calculations involving the individual operators separately. The algorithms in this class are known as *splitting methods* [8], and they lead to easily implementable algorithms.
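For reference, the proximity operator used throughout this section is the standard one (notation assumed):

```latex
\operatorname{prox}_{f}(x) \;=\; \arg\min_{y} \; f(y) + \tfrac{1}{2}\, \|x - y\|_2^2 .
```

When $f$ is the indicator function of a closed convex set, its proximity operator reduces to the orthogonal projection onto that set.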

We now describe in more detail the methodology to compute the *prox* of each function. For the TV terms, the prox can be computed using Chambolle's algorithm [28]. The fidelity constraints can be represented as indicator functions of closed balls centered at the decoded images; the prox of such an indicator function composed with a linear operator can be computed using the following closed-form expression:

[29], where the expression involves the conjugate transpose of the linear operator. The projection onto the ball itself can be computed using a radial projection [8].

The prox of the second fidelity constraint can be obtained from Eq. (16) in the same way, with the appropriate substitutions. Finally, the correlation consistency constraint can be represented as the indicator of a ball composed with an affine operator built from the masked warping matrix. As this operator is not a tight frame, the corresponding prox must be computed with an iterative scheme [29] whose behavior depends on the frame constants of the operator.

It has been shown that the iterates of this scheme converge linearly, and that the best convergence rate is attained when the step size is matched to the frame constants [29].
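As an illustration, the radial projection onto a Euclidean ball (i.e., the prox of its indicator function) can be sketched as follows; the function name and signature are our own:

```python
import numpy as np

def prox_l2_ball(x, center, radius):
    """Radial projection of x onto {y : ||y - center||_2 <= radius},
    which is the proximity operator of the ball's indicator function."""
    d = x - center
    n = np.linalg.norm(d)
    if n <= radius:
        return x.copy()          # already feasible: prox is the identity
    return center + radius * d / n
```

Points inside the ball are left untouched, while points outside are pulled radially back onto the boundary.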

In our work, we use the parallel proximal algorithm (PPXA) proposed by Combettes *et al.* [8] to solve Eq. (15), as this algorithm can easily be implemented on multicore architectures due to its parallel structure. The PPXA algorithm starts with an initial solution and, at each iteration, computes the prox of each function; the results are then used to update the current solution. The procedure of computing the prox of the individual functions and updating the solution is repeated until convergence is reached. The authors have shown that the sequence generated by the PPXA algorithm is guaranteed to converge to the solution of problems such as the one given in Eq. (15).
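The structure of such an iteration can be sketched as follows. This is a simplified PPXA with equal weights for functions given through their proximity operators, not the exact operator setup used in the paper:

```python
import numpy as np

def ppxa(prox_list, x0, n_iter=100, gamma=1.0, lam=1.0):
    """Minimal PPXA sketch (parallel proximal algorithm of Combettes and
    Pesquet) for minimizing a sum of convex functions, each supplied as
    a proximity operator prox(y, tau).  Equal weights are assumed."""
    m = len(prox_list)
    y = [x0.astype(float).copy() for _ in range(m)]
    x = x0.astype(float).copy()       # x must start as the average of y
    for _ in range(n_iter):
        # the prox of each function can be computed in parallel
        p = [prox(y_i, gamma * m) for prox, y_i in zip(prox_list, y)]
        c = sum(p) / m                # average of the prox outputs
        for i in range(m):
            y[i] = y[i] + lam * (2.0 * c - x - p[i])
        x = x + lam * (c - x)         # relaxed update of the solution
    return x
```

For instance, with two quadratic penalties pulling toward 0 and 2, the iterates converge to their joint minimizer 1.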

## 4 Experimental Results

### 4.1 Setup

We now study the performance of our distributed representation scheme for the joint reconstruction of pairs of compressed images. The experiments are carried out on six natural datasets, namely *Tsukuba* (center and right views), *Venus* (views 2 and 6) [16], *Plastic* (views 1 and 2) [30], *Flowergarden* (frames 5 and 6), *Breakdancers* (views 0 and 2) and *Ballet* (views 3 and 4) [31]. The first four datasets have been captured by a camera array where the different viewpoints correspond to translating the camera along one of the image coordinate axes. In such a scenario, the motion of objects due to the viewpoint change is restricted to the horizontal direction, with no motion along the vertical direction. The depth estimation thus becomes a one-dimensional search problem, and the data cost function given in Eq. (2) is modified accordingly.

We compress the images independently using an H.264 intra coding scheme, implemented with the JM reference software version 18 [32]. The bit rate at the encoder is varied by changing the quantization parameter (QP) of the H.264 coding scheme. In our experiments, we use six different QP values in order to generate the rate-distortion (RD) plots. Also, we use the same QP value for both images in order to ensure a balanced rate allocation among the cameras. We estimate a depth image from the decoded images by solving the regularized energy minimization problem of Eq. (1) using the α-expansion algorithm in Graph Cuts [22]. Unless stated explicitly, we solve the OPT-1 optimization problem with the masking matrix constructed using Eq. (11). The smoothness parameters of the depth estimation problem of Eq. (7) and the parameters of the OPT-1 joint reconstruction problem are given in Table 1 for all six datasets; these parameters are selected by trial and error. The solution to the OPT-1 problem is computed by running the PPXA algorithm for 100 iterations.

We report in this section the performance of the proposed joint reconstruction scheme and highlight the benefit of exploiting the inter-view correlation while decoding the images. We then study the effect of compression on the quality of the estimated depth images. Next, we analyze the influence on the reconstruction quality of the masking matrix that enforces correlation consistency only on the corresponding (i.e., non-occluded) pixels. Finally, we compare the rate-distortion performance of our scheme with respect to state-of-the-art distributed coding solutions and joint encoding algorithms.

Table 1: Smoothness parameters of the depth estimation problem of Eq. (7) and parameters of the OPT-1 joint reconstruction problem.

| Dataset | | | | |
|---|---|---|---|---|
| Tsukuba | 190 | 4 | 3 | 2 |
| Venus | 220 | 4 | 1 | 2 |
| Plastic | 120 | 4 | 1 | 2 |
| Flowergarden | 170 | 3 | 1 | 1.25 |
| Breakdancer | 300 | 160 | 2 | 1 |
| Ballet | 290 | 160 | 1 | 2.2 |


### 4.2 Performance Analysis

We first compare our joint reconstruction results with respect to a scheme where the images are reconstructed independently. Fig. ?(a), Fig. ?(b) and Fig. ? compare the overall quality of the decoded images for the independent (denoted *H.264 Intra*) and the joint decoding solutions (denoted *Proposed*), respectively for the Venus, Flowergarden and Breakdancers datasets. The x-axis represents the total number of bits spent on encoding the two images, and the y-axis represents their mean PSNR after reconstruction. From the plots, we see that the proposed joint reconstruction scheme performs better than the independent reconstruction scheme by a clear margin for all three datasets. This confirms that the proposed joint decoding framework is effective in exploiting the inter-view correlation while reconstructing the images. Similar experimental results have been observed on the other datasets. When compared to the first two datasets, the gain due to joint reconstruction for the Breakdancers dataset is smaller, as confirmed in Fig. ?. It is well known that this dataset is weakly correlated due to large camera spacing [31]; hence, the gain provided by the joint decoding is small.

We then quantitatively compare the RD performance of the joint and independent coding schemes using the Bjontegaard metric [33]. In our experiments, we use the first four points of the RD plot for the computation in order to highlight the benefit in the low bit rate region; this corresponds to the four lowest QP values. The relative rate savings due to joint reconstruction for all six datasets are given in the second column of Table 2. From these values, we see that the benefit of joint reconstruction depends on the correlation among the images: in general, the higher the correlation, the better the performance. For example, the Flowergarden dataset gives 22.8% rate savings on average compared to H.264 intra, due to its very high correlation. On the other hand, the Breakdancers and Ballet datasets only provide modest rate savings (5.8% and 4.4%, respectively) due to the weak correlation caused mainly by the large distances between the cameras. Though the gain is small for these datasets, we show later that the performance of our scheme competes with that of joint encoding solutions based on H.264 at low bit rates.

Table 2: Rate savings (%) with respect to H.264 intra coding, computed with the Bjontegaard metric.

| Dataset | Proposed | Proposed: True depth | H.264: 4x4 | H.264 |
|---|---|---|---|---|
| Tsukuba | 14.9 | 20.5 | 15.8 | 42.4 |
| Venus | 15.9 | 21.3 | 12.2 | 44.9 |
| Plastic | 10.3 | 11 | 8.4 | 28.7 |
| Flowergarden | 22.8 | 29.3 | 29.2 | 46.5 |
| Breakdancer | 5.8 | 6.6 | -6.5 | 8.9 |
| Ballet | 4.4 | 6.1 | -8.9 | 2.7 |


We then carry out the same experiments in a scenario where the images are jointly reconstructed using a correlation model estimated from the original images. This scheme serves as a benchmark for the joint reconstruction, since the correlation is accurately known at the decoder. The corresponding results are denoted *Proposed: True depth* in Fig. ?, and the corresponding rate savings with respect to independent compression based on H.264 intra are given in the third column of Table 2. At low bit rates, in general, our scheme stays away from this upper-bound performance due to the poor quality of the depth estimated from the compressed images; this is visible, for example, for the Flowergarden dataset in Fig. ?(b). As a result, we see in Table 2 that the rate savings are better when the actual depth information is used for the joint reconstruction than when the depth information is estimated from the compressed images. We show in Fig. ?(b) and Fig. ?(d) the inverse depth images (i.e., disparity images) estimated from the decoded images of the Venus dataset at total bit rates of 0.08 bpp and 0.98 bpp, respectively. Comparing these disparity images with the ground truth disparity in Fig. ?(a), we observe poor disparity quality at the lower bit rate. Quantitatively, when the error is measured as the percentage of pixels with an absolute disparity error greater than one, it is substantially larger at the lower bit rate. This confirms that the quantization noise in the compressed images is not properly handled while estimating the correlation information. Similar conclusions can be derived for the Flowergarden dataset from Fig. ?, where, in general, the depth information estimated from highly compressed images is not accurate. Developing robust correlation estimation techniques to alleviate this problem is the target of our future work. We finally see in Fig. ? that the reconstruction quality achieved with the correlation estimated from compressed images converges to the upper-bound performance when the rate increases or, equivalently, when the quality of the decoded images improves.


We now analyze the importance of the masking matrix in the optimization problem of Eq. (12), which enables us to measure the correlation consistency objective only on the non-occluded pixels, i.e., the holes in the warped image are ignored while measuring the correlation consistency between the images. In order to highlight the benefit, we first solve the OPT-1 joint reconstruction problem with the masking matrix set to identity. The corresponding reconstructed right image is shown in Fig. ?(b). Comparing it with the original right view in Fig. ?(a), we see noticeable visual artifacts in the reconstructed right image; in particular, strong artifacts appear along the edges of the *lamp holder* and in the *face* regions, mainly due to the improper handling of the occluded pixels. These artifacts also translate into a lower PSNR for the reconstructed right view than for the left view. We then solve the OPT-1 optimization problem with the masking matrix constructed using Eq. (11). The corresponding reconstructed right and left images are shown in Fig. ?(c) and Fig. ?(d), respectively. We no longer see annoying artifacts in the reconstructed image, thanks to the effective handling of the occlusions via the masking matrix; also, the quality of the two reconstructed views becomes quite similar.

We then compare the RD performance of our scheme to a distributed source coding (DSC) solution based on the LDPC encoding of DCT coefficients, where the disparity field is estimated at the decoder using Expectation-Maximization (EM) algorithms [9]. The resulting RD performance is given in Fig. ?(a) and Fig. ?(b) (denoted *Disparity learning*) for the Venus and Flowergarden datasets, respectively. In this DSC scheme, the Wyner-Ziv image is decoded with the JPEG-coded reference image as side information. In order to have a fair comparison with this DSC scheme [9], we carry out our joint reconstruction experiments on JPEG compressed images; that is, instead of H.264 intra, we now use JPEG for independently compressing the two images. From the JPEG-coded images, we then jointly reconstruct the image pair using the methodology described in Section 2. The resulting RD performance of the proposed scheme is given in Fig. ?(a) and Fig. ?(b) for the two datasets. We first notice that the proposed joint reconstruction scheme improves the quality of the compressed images, which is consistent with our earlier observations. We further observe that the disparity learning scheme marginally improves the quality of the compressed images only at low bit rates, and fails to perform better than the JPEG coding scheme at high bit rates. Also, we note that the DSC scheme in [9] requires a feedback channel in order to accurately control the LDPC encoding rate, while our proposed solution requires neither statistical correlation modeling at the encoder nor a feedback channel; this clearly highlights the benefits of the proposed solution.

For the sake of completeness, we finally compare the performance of our scheme to joint encoding solutions based on H.264. In particular, the joint compression of the views is carried out by setting the profile ID to 128, which corresponds to the stereo profile [32]. In this profile, one of the images (say ) is encoded as an I-frame while the remaining view (say ) is encoded as a P-frame. We consider two different settings for the H.264 motion estimation, which is performed with a variable and with a fixed 4×4 macroblock size. The RD performance corresponding to both cases (resp. denoted as *H.264* and *H.264: 4×4 blocks*) is available in Fig. ?(a), Fig. ?(b) and Fig. ? for the Venus, Flowergarden and Breakdancers datasets, respectively. Also, we report in columns 4 and 5 of Table 2 the rate savings of the joint encoding scheme compared to the H.264 intra scheme. First, it is interesting to note that for rectified images (or when the camera motion is horizontal), our scheme competes with the H.264 joint encoding performance when a fixed 4×4 block size is used. However, our scheme does not perform as well at high bit rates, due to the lack of texture encoding. In other words, our scheme decodes the images by exploiting the geometric correlation information, while the visual information along textures and edges is not perfectly captured. On the other hand, for non-rectified images like the Breakdancers dataset (see Fig. ?), we see that our scheme competes with the joint encoding solutions based on H.264. Similar conclusions can be drawn for the Ballet dataset in Table 2, where the proposed scheme provides rate savings of 4.4, while H.264 saves only 2.7. This is because, when the images are not rectified, as is the case in the Breakdancers and Ballet datasets, block-based motion compensation is not an ideal model for capturing the inter-view correlation. For the same reason, we see in Fig. ? that the H.264 joint encoding with 4×4 blocks performs even worse than the H.264 intra coding scheme; this is indicated by a negative sign in Table 2.

## 5 Joint Reconstruction of Multiple Images

### 5.1 Optimization Problem

So far, we have focused on the distributed representation of pairs of images. We now describe the extension of our framework to datasets with correlated images captured by cameras at different viewpoints. We further assume that the cameras are calibrated; we denote the intrinsic camera matrices of the cameras as . Also, let and , respectively, represent the rotation and translation of the cameras with respect to the global coordinate system. Similarly to the stereo setup, the correlated images are compressed independently (e.g., with H.264 intra or JPEG) with a balanced rate allocation. The compressed visual information is transmitted to the central decoder, where we jointly process all the compressed views in order to benefit from the inter-view correlation and improve the reconstruction quality. In particular, as in the stereo decoding framework, we first estimate a depth image from the decoded images (resp. ) and use it for joint signal recovery. The reconstructed images are denoted as .

We propose to estimate the depth image from the decoded images in a regularized energy minimization framework, as a tradeoff between a data term and a smoothness term . The depth image is estimated by minimizing the energy, which is represented as

where represents the set of candidate depth images. Note that this formulation is similar to Eq. (Equation 1) in the stereo case.
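The structure of this energy can be sketched as follows. This is a minimal illustrative sketch only: the absolute-difference data cost, the truncated-linear smoothness penalty, and the parameter values are assumptions for illustration, not the exact costs of Eqs. (Equation 2) and (Equation 6).

```python
import numpy as np

def data_cost(decoded, warped):
    """Photo-consistency: absolute intensity difference between a
    decoded view and the prediction warped from another view."""
    return np.abs(decoded - warped).sum()

def smoothness_cost(depth, tau=2.0):
    """Truncated-linear penalty on neighbouring depth differences
    (truncation threshold tau is an assumed value)."""
    dx = np.abs(np.diff(depth, axis=1))
    dy = np.abs(np.diff(depth, axis=0))
    return np.minimum(dx, tau).sum() + np.minimum(dy, tau).sum()

def energy(decoded, warped, depth, lam=1.0):
    """E(D) = E_data(D) + lam * E_smooth(D) for a candidate depth map."""
    return data_cost(decoded, warped) + lam * smoothness_cost(depth)
```

The depth estimate is the candidate map minimizing this energy; in practice the minimization is carried out with graph cuts rather than by exhaustive search.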

The data term in the multi-view setup should measure the cost of assigning a depth image that is globally consistent with all the compressed images. In the literature, many works address the problem of finding a good multi-view data cost function with global consistency, e.g., [34]. In this work, for the sake of simplicity, we propose to compute the global photo-consistency as the cumulative sum of the data terms given in Eq. (Equation 2). That is, the global photo-consistency term is given as

where is the warping function that projects the intensity values of view onto view using the depth information . As described in Section 2.1, this warping is a two-step process: we first project the pixels from view to the global coordinate system using Eq. (Equation 4), and then project them onto view using the camera parameters and (see Eq. (Equation 5)). The objective of the smoothness cost is to enforce consistency in the depth solution. For a candidate depth image , the smoothness energy is computed using Eq. (Equation 6). Finally, the minimization problem of Eq. (Equation 19) can be solved using standard optimization techniques (e.g., graph cuts) in order to estimate a depth image from the decoded images. Lastly, we note that one could estimate more accurate depth information by considering additional energy terms in the energy model of Eq. (Equation 19), in order to properly account for occlusions, global scene visibility, etc. More details are available in the overview paper [34].
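The two-step warping can be sketched for a single pixel as follows. This is a sketch under assumed conventions: a pinhole model x = K(RX + t), with depth measured along the camera's z-axis; the function name and argument layout are ours.

```python
import numpy as np

def warp_pixel(u, v, depth, K_i, R_i, t_i, K_j, R_j, t_j):
    """Warp pixel (u, v) of view i into view j using its depth.

    Step 1: back-project (u, v) with its depth into world coordinates.
    Step 2: project the resulting 3D point into the image plane of view j.
    """
    # Step 1: image plane -> camera-i frame -> global (world) coordinates.
    ray = np.linalg.inv(K_i) @ np.array([u, v, 1.0])
    X_cam = depth * ray                  # 3D point in the camera-i frame
    X_world = R_i.T @ (X_cam - t_i)      # camera-i frame -> world frame
    # Step 2: world frame -> camera-j frame -> image plane of view j.
    X_j = R_j @ X_world + t_j
    x = K_j @ X_j
    return x[:2] / x[2]                  # perspective divide
```

For identical camera parameters the pixel maps to itself; a translation of camera j shifts the projected position accordingly.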

We now focus on the joint decoding problem, where we are interested in reconstructing the correlated images from the compressed information ; this is carried out by exploiting the correlation given in terms of the depth information , or equivalently the warping operator derived from the depth as described in Section 2.2. In particular, we can represent the warping operation as a matrix multiplication of the form , where represents an approximation of the image at viewpoint . We propose to jointly reconstruct the multi-view images as the solution to the following optimization problem:

where (see Eq. (Equation 11)) is a diagonal matrix constructed using a procedure similar to the one described in Section 2.3; this restricts the correlation consistency measure to those pixels that are available in all the views. From the above equation, we see that the proposed reconstruction algorithm estimates TV-smooth images that are consistent with both the compressed information and the correlation (depth) information. It is interesting to note that by setting in OPT-2, we recover the stereo joint reconstruction problem OPT-1.
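The representation of warping as a matrix multiplication can be sketched by assembling a sparse matrix from a precomputed pixel correspondence map. This is an illustrative sketch: the correspondence encoding (one source pixel index per target pixel, with -1 marking holes) is an assumed convention, not the paper's data structure.

```python
import numpy as np
from scipy.sparse import lil_matrix

def warping_matrix(correspondence, shape):
    """Sparse matrix A such that A @ I_i.ravel() approximates the image
    of view i warped into another viewpoint.

    correspondence[p] gives, for each target pixel p (row-major order),
    the source pixel index in view i, or -1 for an occlusion/hole.
    Rows of occluded pixels are left empty; the diagonal mask of the
    optimization problem additionally excludes them from the
    consistency term.
    """
    n = shape[0] * shape[1]
    A = lil_matrix((n, n))
    for p, q in enumerate(correspondence):
        if q >= 0:
            A[p, q] = 1.0
    return A.tocsr()
```

For example, for a 1×3 image whose content shifts left by one pixel, the correspondence map `[1, 2, -1]` warps `[10, 20, 30]` to `[20, 30, 0]`, with the last pixel left as a hole.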

Finally, using the results derived in Prop. ?, it is easy to check that the optimization problem OPT-2 is convex. Therefore, our multi-view joint reconstruction problem OPT-2 can also be solved using proximal splitting methods. We can rewrite the OPT-2 problem as

Here, , , , , and the matrix is given as

Note that the above optimization problem is an extension of the one described in Eq. (Equation 14), where the TV prior, the measurement consistency and the correlation consistency objectives are now applied to all the images. Therefore, the *prox* operators for the objective function and the constraints of Eq. (Equation 22) can be computed as described in Section 3.
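The parallel proximal iteration used to minimize such a sum of objectives can be sketched generically as follows. This follows the standard PPXA scheme of Combettes and Pesquet [8] with equal weights and unit step, as a simplified sketch rather than the exact solver configuration used in our experiments.

```python
import numpy as np

def ppxa(prox_ops, x0, n_iter=100, lam=1.0):
    """Minimize sum_k f_k(x) given the prox operator of each f_k.

    Each prox is evaluated on its own auxiliary variable (in parallel),
    the results are averaged, and the auxiliary variables are updated;
    lam in (0, 2) is the relaxation parameter.
    """
    K = len(prox_ops)
    ys = [x0.copy() for _ in range(K)]        # auxiliary variables
    x = np.mean(ys, axis=0)
    for _ in range(n_iter):
        ps = [prox(y) for prox, y in zip(prox_ops, ys)]  # parallel prox step
        p_bar = np.mean(ps, axis=0)                      # averaging
        ys = [y + lam * (2.0 * p_bar - x - p) for y, p in zip(ys, ps)]
        x = x + lam * (p_bar - x)
    return x
```

In our setting the individual prox operators would be those of the TV prior and of the indicator functions of the measurement- and correlation-consistency constraints; here they are left abstract.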

### 5.2 Performance Evaluation

We now evaluate the performance of the multi-view joint reconstruction algorithm using five images (center, left, right, bottom and top views) of the *Tsukuba* dataset [17], three views (views 0, 1 and 2) of *Plastic* [30], three views (views 0, 2 and 4) of *Breakdancers* and three views (views 3, 4 and 5) of *Ballet* [31]. Similarly to the stereo setup, we independently encode the multi-view images using H.264 intra with varying QP values. At the joint decoder, we estimate a depth image from the compressed images by solving Eq. (Equation 19) with parameters = and , respectively, for the different datasets. Then, using the estimated depth image, we jointly decode the multiple views as the solution to the problem OPT-2, with the matrix constructed using Eq. (Equation 11). This problem is solved with the parameters and , respectively, for the different datasets. Finally, we run the PPXA algorithm for 100 iterations in order to reconstruct the correlated images.

We first compare our results with a stereo setup, where the depth estimation and the joint reconstruction steps are carried out on pairs of images. In more detail, we take to be the center image in Tsukuba, view 1 in Plastic, view 2 in Breakdancers and view 4 in Ballet, respectively, and we perform joint decoding between the image and the rest of the images by independently selecting different pairs of images (all pairs include ). For example, for the Tsukuba dataset, we perform the depth estimation and the joint reconstruction steps in the following order: (i) center and right views; (ii) center and left views; (iii) center and top views; and (iv) center and bottom views. After decoding all the images, we compute the mean PSNR of all the reconstructed images. Note that in this setup the center image is reconstructed four times; for a fair comparison, we keep the reconstructed image with the highest PSNR with respect to . The experiments for the other datasets are carried out in a similar way: we perform the joint reconstruction of pairs of images and then compute the average PSNR of the reconstructed images. The resulting RD performance is denoted as *Proposed: Stereo* in Fig. ?(a), Fig. ?(b), Fig. ?(a) and Fig. ?(b) for the different datasets. From Fig. ? and Fig. ?, it is clear that the proposed joint multi-view reconstruction scheme (denoted as *Proposed: Multiview*) performs better than the algorithm where the images are handled in pairs; this clearly highlights the benefits of the proposed solution. We also compute the rate savings with respect to H.264 intra encoding; the results are tabulated in the second and third columns of Table 3. It is clear that the rate savings are higher in the multi-view setup than in the stereo setup. Finally, we note that the proposed multi-view joint decoding framework is a simple extension of the stereo image reconstruction algorithm. Still, it shows experimentally that it is beneficial to handle all the multi-view images simultaneously at the decoder rather than decoding them in pairs. We strongly believe that the rate-distortion performance in the multi-view setting can be further improved when the depth information is estimated more accurately, for instance by explicitly considering visibility and occlusion constraints in the depth estimation framework, e.g., [17]. We leave this topic as part of our future work.
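For reference, the PSNR figures used in this evaluation can be computed as follows. This is a standard definition, sketched here with an assumed peak value of 255 for 8-bit images; the helper names are ours.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """PSNR in dB between a reference image and its reconstruction."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_psnr(originals, reconstructions):
    """Mean PSNR over a set of jointly reconstructed views."""
    return np.mean([psnr(o, r) for o, r in zip(originals, reconstructions)])
```

In the pairwise baseline above, the center view is reconstructed several times; only its best-PSNR reconstruction enters the average.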

Table 3: Rate savings with respect to H.264 intra coding for the different datasets.

| Data set | Proposed: Stereo | Proposed: Multiview | H.264: 4x4 | H.264 |
|---|---|---|---|---|
| Tsukuba | 14.7 | 19.2 | 20.3 | 77.8 |
| Plastic | 10.2 | 13.2 | 11.5 | 45.5 |
| Breakdancers | 5.7 | 7.8 | -1.5 | 14.7 |
| Ballet | 4.2 | 6.6 | -2.3 | 9.2 |

We then compare the RD performance of our multi-view joint decoding algorithm to a state-of-the-art distributed coding scheme (DSC) based on DISCOVER [10]. The DSC experiments are carried out as follows. In the Tsukuba dataset, we consider four views, namely the left, right, top and bottom images, as the key frames, and the center view is considered as the Wyner-Ziv frame. At the decoder, we generate the side information by fusing two side information images generated by motion-compensated interpolation: (i) from the left and right decoded views; and (ii) from the top and bottom decoded views. This fusion step is implemented using the algorithm proposed in [36]. For the other datasets, we consider the two extreme views as the key frames and the center view as the Wyner-Ziv frame; in this scenario, the side information image is generated by motion-compensated interpolation from the decoded key frames. The resulting rate-distortion performance is available in Fig. ? and Fig. ? (denoted as *DISCOVER*). Comparing the performance of the proposed scheme (denoted as *Proposed: Multiview*) and the DISCOVER scheme, we see that our scheme outperforms the distributed coding solution. Note that this is the case even for the Tsukuba dataset, where four images are fused together to estimate the best possible side information. Furthermore, we can see that the DSC scheme based on DISCOVER actually performs worse (except for the Tsukuba dataset) than the H.264 intra scheme, where all the images are decoded independently. This is mainly due to the poor quality of the side information image generated by motion-compensated interpolation; in other words, the linear motion assumption is not an ideal model for capturing the correlation between images captured in multi-view camera networks. Finally, it is interesting to note that our joint decoding framework requires neither a Slepian-Wolf encoder nor a feedback channel, while the DISCOVER coding scheme requires a feedback channel to ensure successful decoding; this comes at the price of high latency due to multiple requests from the decoder [5].

For the sake of completeness, we finally compare the performance of our scheme with a joint encoding framework based on H.264. More precisely, we consider one of the views as the I-frame (the center view, view 0, view 0 and view 3 for the different datasets, respectively), and the remaining views are encoded as P-frames. We perform the joint encoding experiments where the motion compensation is carried out both with variable and with fixed 4×4 block sizes. The resulting rate-distortion performance is available in Fig. ? and Fig. ?. The corresponding rate savings with respect to H.264 intra are given in columns 4 and 5 of Table 3. From the plots (see Figs. ? and ?) and from Table 3, it is clear that our proposed multi-view reconstruction scheme competes with, and sometimes beats, the H.264 4×4 scheme at low bit rates; this is consistent with the tendencies observed in the stereo experiments. However, at high bit rates our scheme performs worse than the H.264 joint coding scheme, due to the suboptimal representation of high-frequency components such as edges and textures. Contrarily to H.264, however, our scheme is distributed, which reduces the complexity at the encoders and makes it attractive for distributed processing applications.

## 6 Conclusions

In this paper, we have proposed a novel rate-balanced distributed representation scheme for compressing correlated multi-view images captured in camera networks. In contrast to classical DSC schemes, our scheme compresses the images independently, without requiring knowledge of the inter-view statistical relationship between the images at the encoder. We have proposed a novel joint decoding algorithm based on a constrained optimization problem that improves the reconstruction quality by exploiting the correlation between images. We have shown that our joint reconstruction problem is convex, so that it can be efficiently solved using proximal methods. Simulation results confirm that the proposed joint reconstruction algorithm improves the reconstruction quality of the compressed images, with a balanced quality among the images. Furthermore, we have shown by experiments that the proposed coding scheme outperforms state-of-the-art distributed coding solutions based on disparity learning and on DISCOVER. Therefore, our scheme provides an effective solution for distributed image processing with low encoding complexity, since it requires neither a Slepian-Wolf encoder nor a feedback channel. Our future work focuses on developing robust techniques to estimate more accurate correlation information from highly compressed images.

## 7 Acknowledgments

The authors would like to thank Dr. Thomas Maugey for many insightful discussions and for his help in the experimental comparisons with the DISCOVER distributed coding scheme.

### Footnotes

- For consistency, we use the compressed image ; however, this matrix multiplication holds even if one uses the original image for warping.
- We assume that the pixels are scanned from left to right and then top to bottom.

### References

- V. Thirumalai and P. Frossard, "Joint reconstruction of correlated images from compressed images," in *Proc. European Signal and Image Processing Conference (accepted)*, 2012.
- D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," *IEEE Transactions on Information Theory*, vol. 19, pp. 471–480, 1973.
- A. D. Wyner and J. Ziv, "The rate-distortion function for source coding with side-information at the decoder," *IEEE Transactions on Information Theory*, vol. 22, no. 1, pp. 1–10, 1976.
- S. S. Pradhan and K. Ramchandran, "Distributed source coding using syndromes (DISCUS)," *IEEE Transactions on Information Theory*, vol. 49, no. 3, pp. 626–643, 2003.
- B. Girod, A. Aaron, S. Rane, and D. Rebollo-Monedero, "Distributed video coding," *Proc. of the IEEE*, vol. 93, pp. 71–83, 2005.
- C. Guillemot, F. Pereira, L. Torres, T. Ebrahimi, R. Leonardi, and J. Ostermann, "Distributed monoview and multiview video coding," *IEEE Signal Processing Magazine*, vol. 24, no. 5, pp. 67–76, 2007.
- R. Puri, A. Majumdar, and K. Ramchandran, "PRISM: A video coding paradigm with motion estimation at the decoder," *IEEE Trans. on Image Processing*, vol. 16, no. 10, pp. 2436–2448, 2007.
- P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in *Fixed Point Algorithms for Inverse Problems in Science and Engineering*, Springer, 2010. [Online]. Available: http://arxiv.org/abs/0912.3522v4
- D. Varodayan, Y. C. Lin, A. Mavlankar, M. Flierl, and B. Girod, "Wyner-Ziv coding of stereo images with unsupervised learning of disparity," in *Proc. Picture Coding Symposium*, 2007.
- X. Artigas, J. Ascenso, M. Dalai, S. Klomp, D. Kubasov, and M. Ouaret, "The DISCOVER codec: Architecture, techniques and evaluation," in *Proc. Picture Coding Symposium*, 2007.
- R. Wagner, R. Nowak, and R. Baraniuk, "Distributed image compression for sensor networks using correspondence analysis and super-resolution," in *Proc. IEEE International Conference on Image Processing*, 2003.
- S. Park, M. Park, and M. Kang, "Super-resolution image reconstruction: a technical overview," *IEEE Signal Processing Magazine*, vol. 20, no. 3, pp. 21–36, 2003.
- M. Dalai, S. Malavasi, and R. Leonardi, "Consistent image decoding from multiple lossy versions," in *Proc. ACM Multimedia Workshop*, 2010.
- C. Wang, G. Yang, and Y.-P. Tan, "Reconstructing videos from multiple compressed copies," *IEEE Trans. on Circuits and Systems for Video Technology*, vol. 19, no. 9, pp. 1342–1351, 2009.
- M. B. Schenkel, C. Luo, P. Frossard, and F. Wu, "Joint decoding of stereo image pairs," in *Proc. IEEE International Conference on Image Processing*, 2010.
- D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," *International Journal of Computer Vision*, vol. 47, pp. 7–42, 2002.
- V. Kolmogorov and R. Zabih, "Multi-camera scene reconstruction via graph cuts," in *Computer Vision — ECCV 2002*, pp. 8–40, 2002.
- R. I. Hartley and A. Zisserman, *Multiple View Geometry in Computer Vision*. Cambridge University Press, 2004.
- O. Veksler, "Efficient graph based energy minimization methods in computer vision," Ph.D. dissertation, Cornell University, 1999.
- S. T. Barnard, "Stochastic stereo matching over scale," *International Journal of Computer Vision*, vol. 3, no. 1, pp. 17–32, 1989.
- P. Felzenszwalb and D. Huttenlocher, "Efficient belief propagation for early vision," *International Journal of Computer Vision*, vol. 70, no. 1, pp. 41–54, 2006.
- Y. Boykov, O. Veksler, and R. Zabih, "Fast approximate energy minimization via graph cuts," *IEEE Trans. on Pattern Analysis and Machine Intelligence*, vol. 23, no. 11, pp. 1222–1239, 2001.
- V. Kolmogorov and R. Zabih, "What energy functions can be minimized via graph cuts?," *IEEE Trans. on Pattern Analysis and Machine Intelligence*, vol. 26, no. 2, pp. 65–81, 2004.
- V. Thirumalai, "Distributed Compressed Representation of Correlated Image Sets," Ph.D. dissertation, Swiss Federal Institute of Technology (EPFL), Lausanne, 2012. [Online]. Available: http://library.epfl.ch/theses/?nr=5264
- D. Donoho, "Compressed sensing," *IEEE Trans. Information Theory*, vol. 52, pp. 1289–1306, 2006.
- E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," *IEEE Trans. Information Theory*, vol. 52, pp. 489–509, 2006.
- S. P. Boyd and L. Vandenberghe, *Convex Optimization*. Cambridge University Press, New York, 2004.
- A. Chambolle, "An algorithm for total variation minimization and applications," *Journal of Mathematical Imaging and Vision*, pp. 89–97, 2004.
- M. J. Fadili and J. L. Starck, "Monotone operator splitting for optimization problems in sparse recovery," *Proc. IEEE International Conference on Image Processing*, pp. 1461–1464, 2010.
- D. Scharstein and C. Pal, "Learning conditional random fields for stereo," in *Proc. IEEE Conference on Computer Vision and Pattern Recognition*, 2007.
- C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, "High-quality video view interpolation using a layered representation," in *ACM SIGGRAPH and ACM Trans. on Graphics*, 2004. [Online]. Available: http://research.microsoft.com/en-us/um/people/sbkang/3dvideodownload/
- "Key technology area (KTA) software of the ITU-T." [Online]. Available: http://iphome.hhi.de/suehring/tml
- G. Bjontegaard, "Calculation of average PSNR differences between RD curves," 13th VCEG-M33 Meeting, Austin, TX, USA, Tech. Rep., 2001.
- S. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, "A comparison and evaluation of multi-view stereo reconstruction algorithms," in *Proc. IEEE Conference on Computer Vision and Pattern Recognition*, 2006.
- C. Strecha, R. Fransens, and L. V. Gool, "Combined depth and outlier estimation in multi-view stereo," in *Proc. IEEE Conference on Computer Vision and Pattern Recognition*, 2006.
- T. Maugey, W. Miled, M. Cagnazzo, and B. Pesquet-Popescu, "Fusion schemes for multiview distributed video coding," in *Proc. European Signal and Image Processing Conference*, 2009.