Measurement-Adaptive Sparse Image Sampling and Recovery
Abstract
This paper presents an adaptive and intelligent sparse model for digital image sampling and recovery. In the proposed sampler, we adaptively determine the number of samples required to retrieve an image based on the space-frequency-gradient information content of its patches. By leveraging texture in space, sparsity locations in the DCT domain, and a directional decomposition of gradients, the sampler structure consists of a combination of uniform, random, and non-uniform sampling strategies. For reconstruction, we model the recovery problem as a two-state cellular automaton to iteratively restore the image with scalable windows from generation to generation. We demonstrate that the recovery algorithm quickly converges after a few generations for an image with an arbitrary degree of texture. For a given number of measurements, extensive experiments on standard image sets, infrared, and megapixel-range imaging devices show that the proposed measurement matrix considerably increases the overall recovery performance, or equivalently decreases the number of sampled pixels for a specific recovery quality, compared to the random sampling matrices and Gaussian linear combinations employed by state-of-the-art compressive sensing methods. In practice, the proposed measurement-adaptive sampling/recovery framework supports various applications, from intelligent compressive imaging-based acquisition devices to computer vision and graphics, and image processing technology. Simulation codes are available online for reproduction purposes.
I. Introduction
Current digital imaging devices first acquire images and then separately compress them, leading to big-data-related problems. Contrarily, the aim of the emerging Compressive Sensing (CS) theory is to merge sampling and compression into one step by introducing the concept of “compressing during sensing” to reduce data rate, acquisition time, power consumption, and device manufacturing cost, with demanding applications to next-generation InfraRed (IR), remote sensing, and Magnetic Resonance Imaging (MRI) systems [1, 2, 3]. The theory of CS states that signals with a sparse representation in a specific domain can be recovered at a rate below the traditional Shannon-Nyquist rate [4, 5]. DCT, wavelet, and gradient spaces represent paradigms in which natural images are sparse or at least compressible. The latter is not fully invertible.
I-A. Considered Scenario and Related Works
In the literature, significant attempts have been made to compressively sample data at a very low sampling rate [6, 7, 8, 9, 10]. Conventional CS-based sampling/recovery strategies only exploit the sparsity prior of signals in an appropriate domain [11, 12, 13]. On the other hand, theoretical compressive sampling analyses rely on using Gaussian or random sampling matrices to measure signals [14]. However, when designing the sensing matrix, recent studies have discovered that considering a signal model prior in the sampling phase can significantly improve recovery quality [7, 8, 10]. In this regard, the present paper concentrates on finding a way to adaptively and sparsely sense an image scene in order to simply and efficiently retrieve the image.
Ji et al. in [6] introduced an adaptive Bayesian CS (BCS) with abilities such as determining a sufficient number of measurements and the confidence of recovery. The researchers in [7] proposed a non-uniform CS-based image sampling/recovery method in which a Hidden Markov Tree (HMT) is utilized to capture the correlation among sparse wavelet coefficients in both the sampling and reconstruction phases, so-called uHMT-NCS. In [8], a variable-density sampling strategy was designed in the frequency domain which exploits a priori statistical distributions of images in the wavelet space. Malloy and Nowak in [9] suggested an adaptive compressed sensing algorithm that, compared to standard random non-adaptive design matrices, requires nearly the same number of measurements but succeeds at lower SNR levels. Yang et al. in [10] designed an efficient Mixed Adaptive-Random (MAR) sensing matrix which employs image edge information with a sensor array 16 times smaller than the ultimate length of the recovered image. The Iterative Method with Adaptive Thresholding (IMAT) [13] and its modified version equipped with interpolation, IMATI [15], are extensions of the Iterative Hard Thresholding (IHT) family for sparse recovery [12]. Instead of a fixed threshold, they adaptively threshold coefficients in a transform domain such as the Discrete Cosine Transform (DCT) during iterations. To recover sparse signals from Gaussian measurements, the authors in [16] presented a general iterative framework based on proximal methods called Iterative Sparsification-Projection (ISP). The ISP family contains the well-known SL0 algorithm [17] as a special case. In [18], a learning-based approach that jointly optimizes the sensing and overcomplete dictionary matrices was introduced, which performs better than random matrices or matrices optimized without dictionary learning. In [19], a variational BCS in the complex wavelet domain was suggested.
This model, called TSCW-GDP-HMT, considers the sparsity and structural information of images.
I-B. Motivations and Contributions
Although the aforementioned efforts yield some performance improvement, the role of Artificial Intelligence (AI) in the context of CS-based sampling/recovery remains negligible. In this paper, we take a step towards incorporating AI and present an adaptive and intelligent model for digital image sampling and recovery. For a given number of measurements, we show the proposed sensing matrix considerably increases the overall recovery performance, or equivalently decreases the number of sampling points for a specific recovery quality, compared to the Gaussian, random, and dynamic sampling matrices employed by state-of-the-art CS methods [16, 10, 19, 6, 13, 11]. The number of samples required in the spatial domain to reconstruct the image is adaptively determined based on the space-frequency-gradient information content of the image. To this intent, a mixture of uniform, random, and non-uniform sampling strategies is suggested [2]. The devised intelligent sampling mechanism brings the advantages of simple, fast, and efficient recovery compared to complicated methods [7, 16, 19]. For reconstruction, we model the recovery as a Cellular Automaton (CA) machine to iteratively restore the image with scalable windows from generation to generation. To the best of the authors’ knowledge, this is the first work to model the image reconstruction problem via a CA. The convergence of the proposed CA-based recovery algorithm is guaranteed after a few generations, thus making it suitable for reconstructing present-day megapixel-range images, whereas sophisticated approaches such as ISP [16], TSCW-GDP-HMT [19], and BCS [6] fail to process and store such high-dimensional signals on general-purpose computers.
The adaptivity in signal measurements helps to automatically select enough sampling points based on the local information content of the scene under view. For instance, low-pass images are sampled at low rates, whereas cluttered scenes are acquired with more samples for an accurate reconstruction. Generally, available methods in the literature do not have such a degree of flexibility and are manually adjusted at a fixed rate for each signal. This undesirable phenomenon may lead to data redundancy in smooth images or information loss in textured ones. In practice, the measurement-adaptive image sampling/recovery scheme supports various applications, from intelligent compressive imaging-based acquisition devices [20] to computer vision and graphics, and image processing technology, e.g. inpainting, remote sensing missing-data recovery, removal of impulsive noise, and mesh-based image representation [21]. For reproduction purposes, simulation codes are available online at http://ee.sharif.edu/imat/.
Briefly, the main contributions of the paper are

proposing a new measurement-adaptive image sampling mechanism consisting of uniform, random, and non-uniform samplers which, in a synergistic manner, incorporates the local space-frequency-gradient information content of the image,

introducing a novel non-uniform sampler based on directional gradients,

suggesting a novel cellular automaton model for image recovery whose convergence is guaranteed after a few generations, and

presenting various experiments to demonstrate the efficiency of the proposed sampling/recovery approach under different settings and application areas.
II. Intelligent Sampling
Based on the content of local image patches, the proposed sampling method performs flexibly at various rates to capture the informative samples required by the recovery step. Figure 1 depicts the block diagram of the suggested measurement-adaptive sampler. As shown, the system receives a gray-level image patch and outputs the corresponding binary mask. Due to psychovisual considerations, we assume that patches are square with the length . The final sampling mask is generated from the union of three patterns, uniform, random, and non-uniform, obtained from the spatial, frequency, and gradient spaces, respectively. In the following subsections, we discuss each sampling strategy in detail.
II-A. Texture Measure
To adaptively sense the information content of each local image patch, it may be required to measure a representative criterion of the signal’s complexity [22]. Here, we utilize Shannon’s joint entropy determined from Haralick’s co-occurrence matrix [23, 24] as a quantitative metric to measure image block texture. We calculate the texture percentage as
(1) 
where the gray-level co-occurrence matrix is the joint Probability Mass Function (PMF) which is obtained from the quantized or scaled version of the input image patch, namely . The joint probabilities can be defined in the horizontal, vertical, main-diagonal, and off-diagonal directions. We used the horizontal neighbor as defined in MATLAB by , where , , and . The variables and represent row and column, respectively. We considered the number of distinct gray levels in as , hence . The normalizer in (1) denotes the maximum entropy of , which is obtained from the uniform distribution, i.e. , . Thus, , which .
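The texture measure above can be sketched as follows. This is a minimal illustration, not the paper's code: the number of gray levels, the patch range, and the use of only the horizontal neighbor offset are assumptions consistent with the description.

```python
import numpy as np

def texture_percentage(patch, levels=8):
    """Texture as the normalized Shannon joint entropy of a horizontal
    gray-level co-occurrence matrix (illustrative sketch)."""
    # Quantize the 8-bit patch to `levels` gray levels (assumed range [0, 255]).
    q = np.clip(np.floor(patch.astype(float) / 256.0 * levels), 0, levels - 1).astype(int)
    # Horizontal co-occurrence counts: pixel pairs (q[r, c], q[r, c+1]).
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    pmf = glcm / glcm.sum()
    # Shannon joint entropy, normalized by its maximum log2(levels^2),
    # attained by the uniform joint PMF.
    nz = pmf[pmf > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return entropy / np.log2(levels ** 2)
```

A flat patch yields zero texture, while a noisy patch approaches 1, matching the intuition that denser sampling is reserved for cluttered content.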
II-B. Uniform Sampler
In the proposed uniform sampling strategy, we punch local patches at certain regular locations based on their local texture content in the space domain. In fact, the punch operation creates regular patterns appropriate for a stable recovery. As shown in Fig. 1, the uniform sampler first receives the image patch . Afterwards, the textureness of the signal is estimated. The number of punched points is proportional to the calculated texture percentage. With this intuition, whenever the textureness is high, we decorate the image with denser regular points, and vice versa. Therefore, the binary mask of the uniform pattern is determined by the following fuzzy rule
(2) 
in which the matrices , , , , and are assigned to very-low, low, band-pass, high, and very-high textures, respectively. Figure 2 shows the regular lattices employed in the proposed uniform sampler. The numbers of live cells (punched points) in the matrices , , , , and are , , , , and , respectively. This reveals that the relation between the texture and the number of live cells is nonlinear, with an increasing exponential form. The configuration of punched points affects the recovery performance. Suboptimal lattices should have a minimum number of sample points and a maximum number of live cells in a predefined neighborhood of missing samples. They should also consider patch boundaries and local correlation. We experimentally designed the patterns based on such constraints.
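The five-band fuzzy rule can be sketched as below. The texture thresholds and the regular lattices (simple strided grids) are illustrative stand-ins, since the actual patterns of Fig. 2 and the exact band boundaries are not reproduced here.

```python
import numpy as np

def uniform_mask(texture, n=8):
    """Pick a regular punch lattice whose density grows with texture.
    Bands and lattices are assumed, illustrative choices."""
    if texture < 0.2:    step = 7   # very-low texture: sparsest lattice
    elif texture < 0.4:  step = 5   # low
    elif texture < 0.6:  step = 4   # band-pass
    elif texture < 0.8:  step = 3   # high
    else:                step = 2   # very-high: densest lattice
    mask = np.zeros((n, n), dtype=int)
    mask[::step, ::step] = 1        # punch points on a regular grid
    return mask
```

The live-cell count is non-decreasing in the texture band, mimicking the paper's observation that density grows with texture in a nonlinear fashion.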
II-C. Random Sampler
The intermediate layer of Fig. 1 depicts the proposed random sampler. Here, we first convert the data matrix from the spatial domain to the frequency space, , by using the 2D DCT transform as , where the matrix represents the transformation kernel of the DCT. In the DCT domain, samples are decorrelated and sparse. After an adaptive quantization of DCT coefficients, we estimate the rate of random sampling. Accordingly, a uniformly distributed random mask is finally generated. In the following sections, we discuss creating this mask.
II-C1. Random Sampling Rate Measure
To measure sparsity, one way is to threshold insignificant coefficients in a transform domain [13]. However, such thresholding may be inaccurate due to the varying information of the signal under processing/acquisition. To overcome this problem, we suggest an adaptive quantization of DCT coefficients inspired by the following observation.
Observation 1 (Adaptive quantization)
In the JPEG compression standard, where the DCT transform is utilized, the results in [22] reveal that the number of zero entries of quantized DCT coefficients is inversely proportional to both the compression quality level, , and the image block texture, . Now, let the matrix be the quantized DCT matrix for a image patch; then , where denotes the norm and the operator counts the number of zero entries of a matrix/vector. From the above results, we have both and . By assuming their equivalence, we have . This shows that the texture percentage defined in (1) can be a good criterion for adaptive quantization of information in the frequency domain to measure the necessary random sampling rate.
Details: Based on Observation 1, we calculate the quantized DCT matrix as
(3) 
where the symbols and denote the entry-by-entry division and the nearest integer, respectively. The matrix , , also represents the quantization table related to the texture . For determining , we exploited the Quantization Tables (QTs) introduced by the Independent JPEG Group (IJG) [25]. Based on the value of , the corresponding quantization matrix is chosen and applied (see Appendix). Therefore, we adaptively derive the sparsity number from the number of nonzero entries of quantized DCT coefficients by . Now, we suggest measuring the rate of random sampling by the following formula, which is consistent with the theory of CS, i.e. ,
(4) 
in which , and and are tunable factors. The parameter changes the number of under-sampled points, whereas the coefficient prevents the sampling rate from decaying on well-textured patches and saturates it at a relatively fixed rate to control unnecessary information. By setting and , the function in (4) is a non-decreasing curve vs. . It is important to note that, in practical situations, the number of nonzero quantized DCT coefficients, , may be large, even near the length of the signal, . This phenomenon occurs especially at high quality-level settings or in textured regions. Generally, high-performance compressive sensing in such a dense scenario is problematic because compressive sensing theory for real-world applications requires the sampling rate to be at most 10% [26]. We considered this limitation when calculating the rate of random sampling in (4) by setting and , which covers both sparse and dense cases.
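The rate measure can be sketched as follows. The orthonormal DCT kernel is standard, but the functional form of the rate (linear in the sparsity ratio with an offset and a saturation cap), the constants, and the externally supplied quantization table are assumptions, since the exact Eq. (4) and table selection are not reproduced here.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal 2D DCT-II kernel C, so that F = C @ X @ C.T."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)   # DC row has a different normalization
    return C

def random_rate(patch, qtable, alpha=0.5, beta=0.05, cap=0.5):
    """Sketch of the random sampling rate: quantize the DCT of the
    patch with a texture-dependent table, count nonzeros (sparsity
    number k), and map k to a saturating rate. Constants are assumed."""
    n = patch.shape[0]
    C = dct_matrix(n)
    F = C @ patch @ C.T
    k = np.count_nonzero(np.round(F / qtable))   # sparsity number
    rate = alpha * k / (n * n) + beta            # assumed linear form
    return min(rate, cap)                        # keep rate bounded
```

For a flat patch only the DC coefficient survives quantization, so the rate collapses to its floor, while textured patches drive it toward the cap.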
II-C2. Graduate Randomization Procedure
Instead of the blind random sampling adopted by researchers in the compressive sensing field, we introduce a Graduate Randomization Procedure (GRP). In this procedure, by receiving both the feedback of the sampling rate in the frequency domain and the generated uniform mask, we add a certain randomness to the uniform pattern. In other words, from low- to high-textured patches, GRP gradually adds random non-uniformity to the uniform base lattice. It is important to note that random samples are selected and added at locations other than the uniform marks. In order to generate the random pattern, the GRP algorithm is implemented as follows.

Find the number of zero elements in the uniform pattern, i.e. , and their locations by storing them in the set , where and represent the row and column of the location, respectively.

Shuffle the locations of the zero entries and substitute the set with the result.

Truncate (4) by
(5) 
Generate integer values randomly without replacement in the range , namely .

Define the matrix of random pattern, , and initialize it with , where represents a zero matrix.

For to , at first, assign , and then .
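The GRP steps above can be sketched compactly: shuffling the zero locations and taking the first entries is equivalent to drawing locations without replacement. The rate truncation of step (5) is folded into the `min`.

```python
import numpy as np

def graduate_randomization(uniform_mask, rate, rng=None):
    """GRP sketch: add random samples only at cells the uniform
    lattice left empty, chosen without replacement."""
    rng = np.random.default_rng(rng)
    n2 = uniform_mask.size
    # Candidate locations: zero cells of the uniform pattern.
    zr, zc = np.nonzero(uniform_mask == 0)
    # Truncate the requested rate so it never exceeds the cells left.
    m = min(int(round(rate * n2)), zr.size)
    # Shuffle-and-take-first-m == sampling m indices without replacement.
    pick = rng.choice(zr.size, size=m, replace=False)
    random_mask = np.zeros_like(uniform_mask)
    random_mask[zr[pick], zc[pick]] = 1
    return random_mask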
II-D. Non-uniform Sampler
It is very important to preserve image structures such as edges at the recovery side. Generally, edge preservation is a difficult task in interpolation, even for sophisticated algorithms. Due to the fuzzy nature of image edges, an edge-prior sampler can reduce uncertainty in the reconstruction phase. Although the GRP carries out this task to some extent, in well-structured horizontal, vertical, and diagonal edges, or periodic configurations whose recovery is perceptually important from a subjective evaluation viewpoint, more edge samples are required to reduce uncertainty and unambiguously recover the missing scattered samples. Hence, we suggest a sparsity location-aware algorithm to capture horizontal, vertical, and diagonal edges in the sampling phase.
To implement the above idea, we address sparsity locations in different frequency regions. Figure 3 depicts a partitioning of the 2D frequency domain defined in [27, 22]. Except for the single DC mode , there exist five subsets of AC modes, namely to , corresponding to low, horizontal, vertical, diagonal, and high frequency information, respectively. If nonzero quantized DCT coefficients are sparsely located in the horizontal, vertical, or diagonal regions, the corresponding edge points are sampled in the space domain. For instance, in a block of well-structured vertical edges, the nonzero coefficients appear only in the partitions and . To detect the edge maps, we first determine a directional decomposition of the image patch gradient [28, 29]. For this purpose, we utilized the Sobel operator to calculate the gradient of the image block, , , and considered the four cardinal directions N, E, S, W in addition to the four ordinal directions NE, SE, SW, NW to decompose the block. In order to preserve the image structure at the boundaries of blocks, we also acquired border pixels for computing the gradient. Let , , be directional gradient features. To project the gradient vector in the  plane onto the two adjacent directions
(6) 
(7) 
the following rules are utilized.

If , then assign ; else if , then ; else if , then ; else if , then .

If , then assign ; else if , then ; else if , then ; else if , then .
Afterwards, we apply a threshold to the magnitudes of the two directions with stronger gradients to suppress redundant uninformative samples. Figure 4 illustrates the directional decomposition of the gradient vector into two adjacent directions. For this example, we have , , , , , , , and based on the specified rules. The steps of the non-uniform sampling algorithm are implemented as follows.

Initialize directional gradient matrices, i.e. , .

Extract directional gradient features as explained above for the vector .

Obtain the normalized versions of , , between and .

Define the matrix of nonuniform pattern, , and initialize it as .

For determining the structure of edges, initialize a binary string with the length of the number of AC frequency regions, , such as .

For to , obtain the set . If , set .

If the string , which respectively represent vertical, horizontal, diagonal, vertical-diagonal, or horizontal-diagonal edges, first sort the set . Afterwards, for the two directions having stronger gradients, calculate the gradient magnitude, and then renormalize it between and . Finally, for to and to , if the resulting gradient magnitude exceeds the predefined threshold , set .
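The directional decomposition underlying the steps above can be sketched for a single gradient vector. Since the exact projection rules are elided in the text, the sketch below resolves the gradient into components along the two 45°-spaced compass directions that bracket its angle so that their vector sum reproduces the gradient; this resolution formula is an assumption.

```python
import numpy as np

def project_two_adjacent(gx, gy):
    """Resolve one gradient vector onto the two adjacent 45-degree
    compass directions bracketing its angle (illustrative sketch).
    Returns ((angle_lo, mag_lo), (angle_hi, mag_hi)) in degrees."""
    theta = np.degrees(np.arctan2(gy, gx))     # gradient angle
    lo = np.floor(theta / 45.0) * 45.0         # lower bracketing direction
    hi = lo + 45.0                             # upper bracketing direction
    mag = np.hypot(gx, gy)
    # Non-orthogonal basis resolution: components chosen so that
    # mag_lo*u(lo) + mag_hi*u(hi) equals the original gradient.
    a = np.radians(theta - lo)
    s45 = np.sin(np.radians(45.0))
    g_lo = mag * np.sin(np.radians(45.0) - a) / s45
    g_hi = mag * np.sin(a) / s45
    return (lo % 360.0, g_lo), (hi % 360.0, g_hi)
```

Thresholding `g_lo` and `g_hi` then keeps only the informative edge samples, as in the algorithm's final step.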
II-E. Measurement-Adaptive Sampling Algorithm
As shown in Fig. 1, the ultimate binary mask is generated by the union of the uniform, random, and non-uniform patterns as
(8) 
The regularity, randomness, and structure of the patterns in the above combined form naturally create a sampling mask with chaotic behavior. For selecting the threshold in the non-uniform sampler, a trade-off exists between subjective quality enhancement and increased redundancy. If , then . Therefore, we empirically set to maintain sampling near an optimal sub-Shannon-Nyquist rate. The down-sampled image patch is constructed from
(9) 
in which the symbol denotes the Hadamard product. Algorithm 1 summarizes the proposed adaptive intelligent image sampling strategy. For a previously captured image of dimension , the proposed block-wise algorithm can be repeated on all non-overlapping image patches to get the overall sampling mask and the sub-sampled image .
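The mask union of Eq. (8) and the Hadamard product of Eq. (9) amount to the following one-liner per patch (a trivial sketch, with the three input masks assumed binary):

```python
import numpy as np

def sample_patch(patch, m_uniform, m_random, m_nonuniform):
    """Union of the three binary patterns, then element-wise
    (Hadamard) product with the patch to get the sampled pixels."""
    mask = (m_uniform | m_random | m_nonuniform).astype(patch.dtype)
    return mask, mask * patch
```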
III. Cellular Automaton-based Image Recovery
As mentioned in the literature review, different approaches exist for image recovery [7, 8, 9, 18, 16, 13, 19, 12, 11, 6, 17]. Here, we tailor a novel technique to recover the scattered samples obtained from the intelligent sampling stage. We model the reconstruction mechanism by a dynamic CA. Cellular automata are simple intelligent agents with the fundamental characteristics of locality, uniformity, and synchronicity [30]. They are composed of discrete cells equipped with special rules that enable them to solve sophisticated problems. Various applications have been found for CA in image processing tasks such as noise removal, edge detection, and forgery localization [31].
III-A. Cellular Automaton Modeling for Recovery
In modeling, we consider a 2-state CA machine, in which the dead and live cell states represent the zero and one values in the sampling mask , respectively. Contrary to conventional fixed-neighbor CA models, the proposed CA-based recoverer performs as an iterative method which applies variable-scale windows to the sampled image . The size of the square window, , increases at each generation of the CA. In summary, we define the elements of the model as , in which represents the standard deviation of a Gaussian kernel, is a coefficient for increasing during generations, and and denote the matrix at the next generation of and the recovered image, respectively. The rule behind the suggested recovery algorithm is simple yet efficient, as follows. By considering the correlation among adjacent samples, missing pixels are reconstructed via Gaussian-weighted averaging. We find the live cells of around the central dead cell and extract their corresponding weights in the Moore neighborhood [30], namely the dynamic-length vectors and , respectively. If at least one live cell exists, then the dead cell is replaced with the weighted mean of the live cells in the sub-sampled image, and that dead cell revives at the next generation. At each generation of the CA-based recoverer, the model elements are updated. Algorithm 2 presents the pseudocode of the CA-based image recovery algorithm, in which the symbols and represent the floor and ceiling functions, respectively. We utilized the replication rule to cope with the boundary conditions of border cells.
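One generation of the CA rule just described can be sketched as follows. The window half-width `w` and kernel width `sigma` stand in for the generation-dependent model elements; their update schedule across generations is not reproduced here.

```python
import numpy as np

def ca_generation(sampled, mask, w, sigma):
    """One CA generation (sketch): each dead cell with at least one
    live cell in its (2w+1)x(2w+1) neighborhood becomes the
    Gaussian-weighted mean of those live cells, then revives."""
    h, wd = mask.shape
    out, new_mask = sampled.copy(), mask.copy()
    # Gaussian weights over the window, centered on the dead cell.
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    ker = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    # Replication rule for border cells: edge-padding.
    pad_s = np.pad(sampled, w, mode="edge")
    pad_m = np.pad(mask, w, mode="edge")
    for r in range(h):
        for c in range(wd):
            if mask[r, c]:
                continue                        # live cells keep their value
            win_s = pad_s[r:r + 2 * w + 1, c:c + 2 * w + 1]
            win_m = pad_m[r:r + 2 * w + 1, c:c + 2 * w + 1]
            wsum = (ker * win_m).sum()
            if wsum > 0:                        # at least one live cell
                out[r, c] = (ker * win_m * win_s).sum() / wsum
                new_mask[r, c] = 1              # revive at next generation
    return out, new_mask
```

Iterating this with a growing `w` reproduces the fine-to-coarse behavior described in the convergence analysis.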
After the last generation, we utilize a post-processing stage to alleviate possible blockiness artifacts due to the patch-based nature of the recovery. This phenomenon may be observed in plain regions. Based on the coding style of the measurement-adaptive sampler, the number of live cells within a window is an appropriate criterion to discriminate plain regions from textured ones in the recovery phase. Therefore, we apply a conditional smoothing filter only to flat regions. To this intent, we first initialize the post-processed result as . Afterwards, for all and , we scan the image with a square window of the length . If the number of live cells of the initial sampling mask inside the window is less than the threshold , the corresponding pixel values of in the window are smoothed via the Gaussian kernel , for which . Then, the result is placed into . In experiments, we set the coefficient and the standard deviation of the smoother .
III-B. Convergence Analysis
As illustrated in Fig. 6, the CA-based recoverer starts by recovering high-textured dense missing samples at a fine scale. Then, by updating the model parameters, this process continues and finally terminates by retrieving low-pass sparse samples at a coarse scale. The advised variable-scale windowing, in comparison to a fixed window, speeds up the convergence rate of Algorithm 2 without sacrificing recovery quality. This mechanism also better controls error propagation during recursion. In Lemma 1, we prove the algorithm quickly converges after a few generations for an image with an arbitrary degree of texture.
Lemma 1 (Convergence guarantee)
Let the patches of the image be square of the length . If denotes the number of CA generations, then Algorithm 2 for recovering the image is guaranteed to converge within at most three iterations, i.e. .
Proof:
To prove this, we consider the worst case, i.e. the image under reconstruction is a fully DC image, as the sparsest possible signal, to determine the upper bound. In such a case, for all patches, we have , which results in , and the random and non-uniform patterns equal . Hence, for all patches. Let the live and dead cells be represented by white and black colored cells, respectively. For visualization, Fig. 5 illustrates generations of the sampling mask matrix for a given flat DC image of dimension . It is important to note that, if and are not multiples of , we can use zero padding. If the matrix denotes the sampling mask at generation , then it is shown in Fig. 5 that the number of dead cells at generation is . This implies that for a completely DC image because the distance between two live cells is 7 pixels in both the horizontal and vertical directions. For a given image with a certain degree of texture, the density of live cells in is always more than or equal to that of the worst case, thus demonstrating . \qed
Example 1 (Sampling and recovery)
Figure 6 shows the original images of Baboon, Boat, and Lena, for which the sampling masks, sub-sampled images, and the intermediate and final results of the recovery step are visualized. These images have different texture averages, sorted in descending order, for which the sampler automatically extracted , and percent of pixels, and the recoverer estimated the images with PSNRs of , and decibels, respectively.
III-C. Practical Considerations
In order to implement the proposed sampling and recovery algorithms in practice, we can consider a mechanism similar to the structure of image acquisition using the Rice single-pixel camera [20]. However, one of the main differences is the use of a small array of sensors instead of a single photodiode. This allows us to adaptively measure the local image content, sample in 2D space, and recover the scene by the suggested CA-based recovery approach. The solution is to focus the physical scene on a Digital Micromirror Device (DMD) via a primary lens. The DMD itself is partitioned into non-overlapping patches such as blocks. In a sequential line-scan manner, at each time instant only one of the analog image patches is reflected to another lens by configuring the mirror positions. The other mirrors reflect the scene onto a black absorbent surface. This secondary lens focuses the light of that single patch on a CCD or CMOS sensor. After amplifying the signal and passing it through an Analog-to-Digital Converter (ADC), the digitized sub-image, , is obtained. Afterwards, the texture of the patch is measured. Then, based on the proposed sampler, the sampling mask, , and the sub-sampled image, , are determined. This process is repeated until the generation of the binary mask as well as the sub-sampled image pertaining to the last patch. The overall sampling mask, , and the corresponding sampled image, , can be stored on a memory card. In the decoding phase, the proposed recoverer receives and and finally restores the image scene, .
IV. Experiments
This section provides extensive experiments to support our findings. Algorithms were implemented in MATLAB and run on an Intel Core i7 2.2GHz laptop with 8GB RAM.
IV-A. Experimental Settings
In order to evaluate the performance of our sampling/recovery framework and compare it with other related approaches, we utilized various databases with different statistical characteristics. Image sets include the standard databases of NCID [32], CCID [33], UCID [34], the Microsoft Research Cambridge Object Recognition Image Database (https://www.microsoft.com/en-us/download/details.aspx?id=52644), and a collection of well-known test images such as Baboon, Cameraman, Lena, etc. Besides, we gathered a set of IR test images as well as a megapixel-range image database called RCID [35] to more precisely investigate the efficiency of our algorithms. Some of these images are shown in Fig. 12. In order to evaluate recovery performance, we employed the standard objective criteria of PSNR, SSIM [36], and Normalized Recovery Error (NRE), defined as , as well as subjective evaluation. We compared our sampling and recovery algorithms to related approaches, including the intelligent MAR sampler suggested in [10], the scattered-data recovery technique of spline interpolation, and the state-of-the-art methods of BCS [6], uHMT-NCS AMP [7], IMAT [13], IMATI [15], ISP [16], and TSCW-GDP-HMT [19].
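The objective criteria can be computed as sketched below. PSNR is standard; since the paper's exact NRE normalization is elided in the text, the squared-error-over-signal-energy form used here is an assumption.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def nre(ref, rec):
    """Normalized Recovery Error. The exact definition in the paper
    is elided; this common form (||ref-rec||^2 / ||ref||^2) is assumed."""
    d = ref.astype(float) - rec.astype(float)
    return (d ** 2).sum() / (ref.astype(float) ** 2).sum()
```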
We adjusted the set of parameters so that, in the suggested sampler, the PSNR is maximized for a given average rate. For tuning parameters, we used images of the UCID database as the validation set. To generate the uniform matrices in the uniform sampler, different regular and periodic lattices with various punched points may be used. We examined a set of such configurations and finally selected the patterns introduced in Fig. 2, which optimize the aforementioned objective criterion. We also set the parameters of the CA-based recoverer and its post-processing stage to obtain a suboptimal objective PSNR performance criterion in dB. The influence of the post-processing stage, with appropriate parameters and , in the CA-based recovery algorithm is to improve the PSNR by up to about . The standard deviation of the smoother , meanwhile, has a negligible impact on performance. The number of punched points in the uniform sampler, the coefficients and in the random sampler, and the threshold in the non-uniform sampler control the sampling rate in the suggested algorithm. On the one hand, for natural image patches, we often have , thus considering short intervals for such textures. This also means the number of punched points is often less than or equal to 4. On the other hand, the main use of the coefficient is to make the formula acceptable for dense images, i.e. high values of . Therefore, the influential factors for increasing or decreasing the adaptive sampling rate are the coefficient and the threshold .
IV-B. Performance on Standard Databases
Here, we evaluate the performance of our sampling/recovery method on 21 well-known test images of the size and four standard databases. The adaptive sampling rate measured by the sampler and the recovery quality on image processing test images are tabulated in Table I. For an average dynamic sampling rate, the mean of was . For variations of and , the curves of the average PSNR (dB) vs. the average sampling rate (%) on the 21 well-known images are depicted in Fig. 7. The curves demonstrate that the sensitivity of our algorithm to variations of and is low. Figure 8 also shows the average PSNR (dB) vs. . In this figure, the coefficient means recovery without applying the post-processing step, and the optimal value is , thus demonstrating about gain.
As an example, Fig. 9 illustrates the recovery of the Baboon image for the compared state-of-the-art approaches. To be able to compare different algorithms of various complexities on the laptop, we reduced the image to the size . For all methods, the sampling rate was , i.e. 6000 measurements. Based on PSNR, the proposed approach outperforms the other methods. For BCS, IMAT, IMATI, TSCW-GDP-HMT, and the proposed algorithm, the average PSNR (dB) on all images of the size was 18.52, 23.34, 24.26, 28.54, and 27.99, respectively. Other methods such as ISP failed to complete the image restoration process. The average sampling rate determined by our algorithm was 34.3%. For a fair comparison, we set this rate for the competing algorithms, too. Although the average PSNR of the TSCW-GDP-HMT algorithm is higher than that of the suggested method, its computational complexity is very high. To clarify this issue, Fig. 10 compares the runtime (s) of different methods in terms of the sampling rate () on the Baboon image. As seen, the proposed method is slightly faster than IMATI.
Table I. Adaptive sampling rate and recovery quality on 21 well-known test images.

Image number | Image name | Sampling rate (%) | PSNR (dB) | SSIM | NMSE
1 | Baboon | 35.23 | 29.38 | 0.8814 | 0.0085
2 | Barbara | 28.86 | 26.80 | 0.8431 | 0.0143
3 | Boat | 25.16 | 29.43 | 0.8227 | 0.0093
4 | Butterfly | 24.01 | 30.27 | 0.8356 | 0.0082
5 | Cameraman | 15.39 | 32.77 | 0.9295 | 0.0074
6 | Einstein | 18.09 | 31.26 | 0.7867 | 0.0082
7 | Elaine | 19.56 | 31.26 | 0.7613 | 0.0106
8 | Fruits | 16.60 | 31.13 | 0.8605 | 0.0056
9 | Goldhill | 25.51 | 30.40 | 0.8199 | 0.0107
10 | Jetplane | 18.59 | 31.36 | 0.9100 | 0.0078
11 | Lake | 25.67 | 29.21 | 0.8324 | 0.0155
12 | Lena | 18.12 | 32.19 | 0.8779 | 0.0068
13 | Livingroom | 29.04 | 29.27 | 0.8288 | 0.0116
14 | Owl | 34.48 | 28.53 | 0.8190 | 0.0115
15 | Peppers | 18.97 | 31.19 | 0.8464 | 0.0129
16 | Pirate | 24.62 | 29.99 | 0.8488 | 0.0095
17 | Shack | 34.10 | 27.09 | 0.8045 | 0.0234
18 | Walkbridge | 38.00 | 27.38 | 0.8279 | 0.0119
19 | Womanblonde | 21.73 | 29.73 | 0.8229 | 0.0115
20 | Womandarkhair | 11.55 | 36.69 | 0.9265 | 0.0053
21 | Zelda | 15.41 | 34.72 | 0.8915 | 0.0076
Average | | 23.75 | 30.48 | 0.8465 | 0.0104
Table II reports the performance of our approach on the NCID, CCID, UCID, and RCID databases, which are sorted in descending order of textural information; i.e., the RCID and NCID databases have the minimum and maximum texture averages, respectively. The average sampling rate acquired by the proposed dynamic sampler confirms this ordering. Images within each database also exhibit intra-database variability of textural information. However, we tuned the parameters of our sampler so that the dynamic sampling rate for each image does not exceed approximately 50%, based on the limitation imposed by compressive sensing theory for real-world applications [26]. This leads to an acceptable average PSNR with a high variance, so that the range of PSNR (dB), [min(PSNR), max(PSNR)], is [17.39, 48.48], [21.91, 41.25], [18.41, 40.03], and [26.29, 44.35] for the NCID, CCID, UCID, and RCID databases, respectively. From a compression viewpoint, the recovery performance relative to the number of sensed samples is promising for all databases.
IV-C Test on Infrared Imaging Systems
Infrared imaging has a wide range of applications, from surveillance and Intelligent Transportation Systems (ITSs) to medical imaging. These images are naturally sparse and of interest in the compressive sensing community for fabricating low-cost CS-based imagers [37]. In this experiment, we evaluate the performance of the adaptive sampling/recovery framework on a test set including 20 images grabbed by various near-IR cameras for ITS applications such as License Plate Recognition (LPR). Four representative IR images are shown in the first row of Fig. 12. Specifically, for recovering images 1 to 4 in this figure, the PSNRs (sampling rates) in dB (%) were 46.68 (2.08), 35.89 (11.58), 42.04 (4.41), and 38.86 (6.57), respectively. Generally, all IR images were recovered with an average PSNR of  dB at the average adaptive sampling rate of , thus demonstrating the excellent IR recovery quality of our scheme at a high sample reduction.
For BCS, IMAT, IMATI, TSCW-GDP-HMT, and the proposed algorithm, the average PSNR (dB) on all images of the size  was 28.44, 22.11, 26.34, 35.23, and 31.88, respectively. The average sampling rate determined by our algorithm was 15.37%. For a fair comparison, we set this rate for the competing algorithms. The performance of the proposed method is the second best.
IV-D Influence of Uniform and Nonuniform Samples
In this experiment, we considered two scenarios to evaluate the roles of the proposed uniform and nonuniform samplers. In the first scenario, we omitted the proposed uniform sampler from the scheme of Fig. 1 and compared the resulting random/nonuniform sampler with the whole model. This can be done by setting . In this case, for a fair comparison preserving an equal sampling rate, we can manually increase the coefficient  in the random sampler or decrease  in the nonuniform sampler. Here, we only increased  in the resulting random/nonuniform sampler. In this case, the average PSNR on the 21 well-known images listed in Table I was  dB. In comparison to the result reported in Table I, a decrease of nearly  dB in PSNR is seen, which demonstrates the importance of the uniform sampler. It is noticeable that the performance reduction in lowpass and periodic images like Cameraman and Barbara is much larger than in cluttered ones.
In the second scenario, the nonuniform sampler was neglected by setting . Similarly, we increased  to equalize the rate of the resulting uniform/random sampler with that of the complete scheme. The average PSNR was  dB, that is,  dB more than the average of Table I. Although this objective evaluation shows a slight performance improvement in the absence of the nonuniform sampler, subjective evaluations reveal a better reconstruction of image structures such as edges in its presence. For instance, Fig. 11 visualizes this phenomenon for four images, in which the reconstructed edges are sharper with the complete scheme than in the second scenario. Therefore, at approximately equal sampling rates and objective PSNR, the combined strategy has a subjective superiority in performance.
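The rate equalization used in both ablation scenarios, adjusting one sampler coefficient until the ablated sampler reaches the rate of the complete scheme, can be sketched as a bisection, assuming the achieved rate grows monotonically with the coefficient. The names `sample_fn` and `match_rate` are illustrative, not from the paper.

```python
import numpy as np

def match_rate(sample_fn, img, target, lo=0.0, hi=1.0, iters=40):
    """Bisection on a sampler coefficient so that the achieved sampling rate
    (fraction of ones in the binary mask) matches `target`.
    Assumes the rate is monotonically increasing in the coefficient."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sample_fn(img, mid).mean() < target:
            lo = mid          # rate too low: raise the coefficient
        else:
            hi = mid          # rate high enough: lower the coefficient
    return 0.5 * (lo + hi)
```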
IV-E Recovery Performance under Fixed Sampling Structures
This section compares the performance of different recovery approaches under fixed sampling structures. To this end, we considered two sampling scenarios: the conventional pure random sampler and our measurement-adaptive sampler. For recovering scattered samples obtained from the proposed intelligent sampling stage as well as from the random sampling structure, various approaches such as scattered-data interpolators can be employed. Therefore, in addition to modern sparse recovery techniques [6, 13, 15, 16, 19], we applied the prominent scattered-data spline interpolator to separately investigate the efficiency of our CA-based recoverer under the considered scenarios.
In this experiment, we address the problem of image reconstruction at different scales. To do this, we selected 4 megapixel images from the RCID database with the original size  or vice versa. These images are shown in the second row of Fig. 12. To evaluate the efficiency of the mentioned approaches, we generated, by bicubic interpolation, six downsampled versions of each image by a factor of about , finally constructing a 7-level pyramid. Table III compares the proposed CA-based recovery algorithm to modern reconstruction methods under the random and adaptive sampling scenarios for the original and scaled images. From the lowest to the highest resolution, i.e., in the direction of increasing correlation, the average dynamic sampling rates (%) were 42.95, 38.63, 34.4, 30.21, 26.69, 22.37, and 15.59, respectively. For a fair comparison, we set the random sampling rate equal to the average dynamic rate determined by the measurement-adaptive sampler. The bolded values show the best performance. Except at very low resolution levels, the ISP, TSCW-GDP-HMT, BCS, and spline methods failed to yield viable results due to the computational complexity of matrix operations or huge memory requirements. Because of heavy blockiness effects, the BCS algorithm fails to reconstruct images under both the random measurement and the adaptive scheme, especially for lowpass images. Generally, state-of-the-art results were obtained by the configurations measurement-adaptive sampler + spline at low-resolution levels and measurement-adaptive sampler + CA at high-resolution levels. This demonstrates the power of the measurement-adaptive sampling scheme.
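The pyramid construction can be sketched in numpy as follows. A naive grid subsampling stands in for the bicubic interpolation used in the paper, and the factor 0.8 is an illustrative assumption, since the exact downsampling factor is elided in the text.

```python
import numpy as np

def downscale(img, factor):
    """Naive downscale by sampling a uniform grid (stand-in for bicubic)."""
    h, w = img.shape
    nh, nw = max(1, round(h * factor)), max(1, round(w * factor))
    rows = np.linspace(0, h - 1, nh).round().astype(int)
    cols = np.linspace(0, w - 1, nw).round().astype(int)
    return img[np.ix_(rows, cols)]

def build_pyramid(img, levels=7, factor=0.8):
    """List of images from the original resolution down to the coarsest level,
    each obtained by downscaling the previous level."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(downscale(pyramid[-1], factor))
    return pyramid
```

The recovery experiments would then run on each level independently, from the coarsest (lowest-rate-friendly) to the original resolution.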
It is noticeable that the IMAT and IMATI recovery methods reconstruct the whole image rather than processing it blockwise. The suboptimal parameters of the adaptive thresholding technique of IMAT were , , and . In the ISP algorithm, the DCT was utilized as the sparsifying transform together with a hard thresholding function. We also set its suboptimal parameters as , , e6, , , and . The TSCW-GDP-HMT method uses the Generalized Double Pareto (GDP) distribution for modeling signal sparsity. An inherent limitation of this algorithm is that the input image should be square with a side length that is a multiple of 8. To tackle this problem and compare the different approaches as fairly as possible, we first considered the nearest length greater than or equal to the dimensions reported in Table III, and then rescaled the recovered image to the real size. The BCS algorithm utilizes the Daubechies-1 2-D wavelet at four decomposition levels. We also used the MATLAB built-in function to implement the spline interpolation.
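IMAT [13] alternates a consistency step on the sampled pixels with hard thresholding in a sparsifying domain under an exponentially decaying threshold. A minimal Python sketch follows; the 2-D FFT stands in for the DCT as a numpy-only sparsifying transform, and the default parameter values are hypothetical, not the suboptimal settings of the paper.

```python
import numpy as np

def imat_recover(y, mask, iters=30, lam=1.0, beta=None, alpha=0.2):
    """Sketch of IMAT: iterate (i) a relaxation step enforcing the measured
    pixels and (ii) hard thresholding in a transform domain (FFT here as a
    stand-in for the DCT), with the threshold decaying as beta * exp(-alpha*k)."""
    x = np.zeros_like(y, dtype=float)
    if beta is None:
        beta = np.abs(np.fft.fft2(y)).max()      # start at the largest coefficient
    for k in range(iters):
        x = x + lam * mask * (y - x)             # enforce consistency on sampled pixels
        X = np.fft.fft2(x)
        X[np.abs(X) < beta * np.exp(-alpha * k)] = 0   # adaptive hard threshold
        x = np.fft.ifft2(X).real
    return x
```

For a highly correlated image, the iteration quickly locks onto the dominant transform coefficients and fills in the unsampled pixels.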
Table III. PSNR (dB) of different recovery methods under the two sampling scenarios on four megapixel images, each at seven pyramid levels; within each image group, rows run from the lowest to the highest resolution. An entry of F denotes a failed reconstruction.

Image number | Size at pyramid level
Random sampling scenario | Proposed measurement-adaptive sampling scenario
IMAT | IMATI | ISP | TSCW-GDP-HMT | BCS | Spline | CA | IMAT | IMATI | ISP | TSCW-GDP-HMT | BCS | Spline | CA
1  22.79  20.27  20.11  27.7  7.93  25.71  25.64  21.2  20.27  18.56  25.54  8.43  27.49  26.56  
22.88  24.1  F  29.56  8.77  26.44  26.02  21.03  21.73  F  28.55  8.94  28.37  27.15  
22.61  24.02  F  F  F  F  25.07  20.27  21  F  F  F  F  26.88  
22.82  25.46  F  F  F  F  25.36  20.15  22.36  F  F  F  F  27.93  
23.84  27.82  F  F  F  F  27.81  19.87  23.26  F  F  F  F  30.27  
24.74  31.57  F  F  F  F  31.49  18.59  24.19  F  F  F  F  33.3  
22.45  35.31  F  F  F  F  35.33  15.58  26.48  F  F  F  F  36.7  
2  21.92  19.7  18.26  25.96  8.75  25.62  24.63  23.86  19.4  20.5  24.03  7.01  27.18  25.78  
22.82  23.72  F  27.3  7.63  26.06  25.27  23.9  23.87  F  26.23  8.79  27.84  26.63  
23.22  24.36  F  F  F  F  25.7  23.22  24.43  F  F  F  F  27.3  
23.64  26.22  F  F  F  F  26.7  22.44  25.05  F  F  F  F  28.17  
24.22  27.74  F  F  F  F  28.1  21.49  25.39  F  F  F  F  29.78  
24.58  29.61  F  F  F  F  30.27  19.19  25.25  F  F  F  F  32.19  
21.92  31.51  F  F  F  F  32.77  14.73  25.44  F  F  F  F  35.16  
3  21.99  20.77  16.42  25.82  11.19  24.68  23.51  22.84  21.13  17.23  24.25  10.24  26.68  24.85  
23.21  22.8  F  28.7  10.35  25.55  24.33  22.33  23.64  F  27.47  9.24  27.55  26.51  
23.99  26.07  F  F  F  F  25.82  19.87  19.51  F  F  F  F  27.87  
25.07  28.43  F  F  F  F  27.48  17.29  19.16  F  F  F  F  29.43  
26.47  31.77  F  F  F  F  30.43  16.04  20  F  F  F  F  32.3  
27.36  35.47  F  F  F  F  34.41  14.91  21.73  F  F  F  F  35.71  
24.37  35.9  F  F  F  F  36.76  13.94  26.33  F  F  F  F  37.06  
4  23.11  19.12  20.01  27.84  7.88  26.91  24.95  22.11  19.64  20.84  25.81  9.71  27.58  26.08  
22.97  23.64  F  28.35  8.26  26.29  25.4  21.14  22.61  F  27.55  8.34  27.68  26.55  
22.72  24.49  F  F  F  F  25.36  20.67  22.85  F  F  F  F  26.92  
22.35  24.54  F  F  F  F  25.25  20.27  23.04  F  F  F  F  27.34  
21.97  24.03  F  F  F  F  25.07  19.64  23.14  F  F  F  F  27.59  
21.74  24.33  F  F  F  F  25.88  18.57  23.23  F  F  F  F  28.99  
20.14  25.15  F  F  F  F  27.3  16.32  23.39  F  F  F  F  31.2 
IV-F Evaluation under Fixed Recovery Algorithms
In this experiment, we evaluate the performance of three sampling methods, namely the pure random sampler, the state-of-the-art dynamic MAR sampling approach [10], and the suggested sampler, under fixed recovery algorithms. We utilized the 21 well-known test images of the original size  for performance evaluation. As shown in Table III, only IMAT, IMATI, and the proposed CA-based recoverer can restore images of such dimensions. Hence, these recovery algorithms were considered for recovering the subsampled images. The MAR sensing matrix is basically generated from the low-resolution versions of images of the size . For a fair comparison with this method, we created subsampled images at the recovery side as the Hadamard product of the MAR masks and the original images. Figure 13 depicts the average PSNR (dB) for different sampling/recovery configurations. The sampling rate for all samplers is the same and equal to 23.75%. The first and second ranks belong to the proposed measurement-adaptive sampler + CA recoverer and the MAR sampler + CA recoverer, respectively. It is noticeable that, due to their nature, the IMAT and IMATI recovery algorithms perform better under the pure random sampling strategy than under the other state-of-the-art samplers.
IV-G Recovery Robustness in the Presence of Noise
In order to evaluate the robustness of the proposed CA-based recovery algorithm in the presence of noise, we can model the noisy subsampled image, $\tilde{\mathbf{X}}_{s}$, as

$\tilde{\mathbf{X}}_{s} = \mathbf{X}_{s} + \mathbf{M} \odot \mathbf{N}$,  (10)

where the matrices $\mathbf{X}_{s}$, $\mathbf{M}$, and $\mathbf{N}$ denote the noise-free subsampled image, the binary sampling mask, and Additive White Gaussian Noise (AWGN) with the distribution $\mathcal{N}(0,\sigma_{n}^{2})$, respectively, and $\odot$ is the Hadamard product. We then applied the version of $\tilde{\mathbf{X}}_{s}$ normalized between 0 and 1 to the CA recoverer. Figure 14 plots the average PSNR in dB vs the noise variance, $\sigma_{n}^{2}$, for the 21 well-known test images of the size . In this figure, $\sigma_{n}^{2}=0$ means the noise-free case. The suggested method is more robust to the noise variance than the IMAT and IMATI approaches. All curves decay exponentially as $\sigma_{n}^{2}$ increases. As mentioned before, other algorithms fail to recover these images due to data dimensionality.
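The noise model of Eq. (10) followed by the [0, 1] normalization can be sketched in a few lines; the function name and signature are illustrative, not from the paper's code.

```python
import numpy as np

def noisy_subsample(img, mask, sigma2, rng=None):
    """Noise-free subsampled image plus AWGN restricted to the sensed pixels
    (Eq. (10)-style model), then normalized to [0, 1] before recovery."""
    rng = np.random.default_rng(rng)
    noisy = img * mask + mask * rng.normal(0.0, np.sqrt(sigma2), img.shape)
    lo, hi = noisy.min(), noisy.max()
    return (noisy - lo) / (hi - lo) if hi > lo else noisy
```

Sweeping `sigma2` and plotting the resulting PSNR would reproduce the shape of the robustness curves in Fig. 14.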
IV-H Performance of Joint Sampling/Recovery Frameworks
In practice, the design of the sampling structure and the recovery stage are closely related. Here, we jointly compare the performance of the proposed sampling/recovery framework with other state-of-the-art frameworks [7, 13, 15, 16, 6]. To do this, we collected 27 randomly selected images from the Microsoft Object Recognition Database, as shown in Fig. 15, which were already employed in [7]. At first, the randomly chosen images were cropped to highlight the salient objects in the scene, and then their grayscale versions were resized by bicubic interpolation so that different algorithms of various complexities could be compared. Figure 16 plots the NRE vs the image number, as indexed by 1 to 27 in Fig. 15, for the competing methods. The average NREs are given in parentheses in the legend. As shown, the average NRE of the proposed scheme is lower than those of the other methods. BCS failed to recover the smooth images 4 and 18, whereas our scheme retrieved them with minimum recovery error. For a fair comparison, we set the sampling rate of the competing methods equal to the average rate obtained adaptively by our sampler.
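The NRE metric is assumed here to be the usual l2-norm ratio between the recovery error and the reference image; the paper does not spell out its definition in this excerpt.

```python
import numpy as np

def nre(ref, rec):
    """Normalized Recovery Error: ||ref - rec||_2 / ||ref||_2
    (assumed definition; 0 means perfect recovery, 1 means as bad as all-zeros)."""
    return np.linalg.norm(ref - rec) / np.linalg.norm(ref)
```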
V Conclusion
This paper suggested a measurement-adaptive sparse image sampling and recovery framework. The proposed sampler adaptively measures the required sampling rate and accordingly determines sample positions by a combined uniform, random, and nonuniform sampling mechanism. Our idea originates from the fact that natural images may have lowpass, bandpass, highpass, or sparse components depending on the scene under view. Therefore, a measurement-adaptive mechanism was devised to trap informative samples. Unlike the Gaussian measurement sensing scenario, the proposed sparse coding style does not need all samples in advance.
In the recovery phase, we modeled the problem by a cellular automaton machine. The suggested algorithm always converges within a few generations. Low computational burden and memory usage are the two main advantages of this algorithm in comparison to the sophisticated techniques of ISP [16], TSCW-GDP-HMT [19], and BCS [6], especially in the reconstruction of megapixel-range images such as remote sensing imagery. In the CA-based recovery algorithm, the updating rule for reviving dead cells is based on a simple weighted averaging. As future work, more precise predictors can be utilized for this purpose. Also, the suggested sampling/recovery pipeline can be generalized to other sparsifying transform domains such as wavelets. Extensive tests on standard image datasets, infrared, and megapixel imaging show the capabilities of the proposed technique for practical implementations of compressively sampled imaging systems.
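One generation of the two-state CA recoverer can be illustrated as follows. This is a simplified sketch: uniform weights and a fixed window stand in for the paper's weighted averaging and scalable windows, and all names are hypothetical.

```python
import numpy as np

def ca_generation(img, alive, win=3):
    """One generation of a two-state CA recovery sketch: each dead cell with at
    least one alive neighbour in a win x win window becomes alive, taking the
    (uniform-weight) average of its alive neighbours."""
    h, w = img.shape
    new_img, new_alive = img.copy(), alive.copy()
    r = win // 2
    for i in range(h):
        for j in range(w):
            if alive[i, j]:
                continue                       # alive (sampled) cells are kept
            sl = np.s_[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            n = alive[sl].sum()
            if n > 0:                          # revive from alive neighbours
                new_img[i, j] = (img[sl] * alive[sl]).sum() / n
                new_alive[i, j] = 1
    return new_img, new_alive
```

Iterating this until every cell is alive mirrors the generation-to-generation restoration described above; each pass extends the alive region by roughly the window radius, which is why only a few generations are needed.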
Based on the JPEG standard [25], the quantization matrix corresponding to the texture can be formulated as

$\mathbf{Q} = \max\left(\left\lfloor \dfrac{S\,\mathbf{Q}_{b} + 50\,\mathbf{J}}{100} \right\rfloor, \mathbf{J}\right)$,  (11)

in which the matrix $\mathbf{J}$ denotes an all-one matrix. Also, the scaling factor, $S$, and the reference quantization table, $\mathbf{Q}_{b}$, are determined by

$S = \begin{cases} \lfloor 5000/q \rfloor, & 1 \le q < 50 \\ 200 - 2q, & 50 \le q \le 100 \end{cases}$  (12)

and

$\mathbf{Q}_{b} = \begin{bmatrix} 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\ 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\ 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\ 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\ 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\ 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\ 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\ 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99 \end{bmatrix}$,

where $q$ denotes the JPEG quality factor.
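Equations (11) and (12) can be checked numerically. The sketch below assumes the common IJG quality-scaling rule and the standard JPEG luminance reference table; at quality 50 the result equals the reference table, and at quality 100 every entry is clipped to the all-one matrix.

```python
import numpy as np

# Standard JPEG luminance reference table (quality 50), from the JPEG standard.
QB = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quant_matrix(q):
    """Quantization matrix for quality factor q in [1, 100], Eq. (11)-(12):
    scale the reference table and clip entries below by the all-one matrix."""
    s = 5000 // q if q < 50 else 200 - 2 * q
    return np.maximum(np.floor((s * QB + 50) / 100), 1).astype(int)
```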
Acknowledgements
This postdoctoral research was jointly sponsored by the Iran National Science Foundation (INSF) and the ACRI of Sharif University of Technology under agreement numbers 95/SAD/47585 and 7000/6642, respectively. The authors would like to thank Dr. Z. Sadeghigol, who provided the TSCW-GDP-HMT codes for comparison. We also thank Prof. A. Amini, Mr. A. Esmaeili, and other researchers in the Signal Processing and Multimedia Lab for their valuable comments.
References
 [1] Y. C. Eldar and G. Kutyniok, Compressed sensing: Theory and applications. Cambridge University Press, 2012.
 [2] F. Marvasti, Nonuniform sampling: Theory and practice. Springer Science & Business Media, 2012.
 [3] J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 14–20, 2008.
 [4] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
 [5] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.
 [6] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, 2008.
 [7] B. Shahrasbi and N. Rahnavard, “Model-based nonuniform compressive sampling and recovery of natural images utilizing a wavelet-domain universal hidden Markov model,” IEEE Trans. Signal Process., vol. 65, no. 1, pp. 95–104, 2017.
 [8] Z. Wang and G. R. Arce, “Variable density compressed image sampling,” IEEE Trans. Image Process., vol. 19, no. 1, pp. 264–270, 2010.
 [9] M. L. Malloy and R. D. Nowak, “Nearoptimal adaptive compressed sensing,” IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4001–4012, 2014.
 [10] J. Yang, W. E. I. Sha, H. Chao, and Z. Jin, “Highquality image restoration from partial mixed adaptiverandom measurements,” Multimed. Tools Appl., vol. 75, no. 11, pp. 6189–6205, 2016.
 [11] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
 [12] T. Blumensath and M. E. Davies, “Normalized iterative hard thresholding: Guaranteed stability and performance,” IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 298–309, 2010.
 [13] F. Marvasti, M. Azghani, P. Imani, P. Pakrouh, S. J. Heydari, A. Golmohammadi, A. Kazerouni, and M. Khalili, “Sparse signal processing using iterative method with adaptive thresholding (IMAT),” in IEEE ICT, 2012, pp. 1–6.
 [14] E. J. Candès and T. Tao, “Nearoptimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
 [15] A. I. Zayed and G. Schmeisser, New perspectives on approximation and sampling theory. Springer, 2014.
 [16] M. Sadeghi and M. Babaie-Zadeh, “Iterative sparsification-projection: Fast and robust sparse signal approximation,” IEEE Trans. Signal Process., vol. 64, no. 21, pp. 5536–5548, 2016.
 [17] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm,” IEEE Trans. Signal Process., vol. 57, no. 1, pp. 289–301, 2009.
 [18] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395–1408, 2009.
 [19] Z. Sadeghigol, M. H. Kahaei, and F. Haddadi, “Model based variational Bayesian compressive sensing using heavy tailed sparse prior,” Signal Process.: Image Commun., vol. 41, pp. 158–167, 2016.
 [20] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 83–91, 2008.
 [21] X. Li, “Anisotropic mesh adaptation for image representation,” EURASIP J. IMAGE VIDE., vol. 2016, no. 1, p. 26, 2016.
 [22] A. Taimori, F. Razzazi, A. Behrad, A. Ahmadi, and M. Babaie-Zadeh, “Quantization-unaware double JPEG compression detection,” J. Math. Imaging Vis., vol. 54, no. 3, pp. 269–286, 2016.
 [23] R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., vol. SMC3, no. 6, pp. 610–621, 1973.
 [24] R. Jain, R. Kasturi, and B. G. Schunck, Machine vision. McGraw-Hill, Inc., 1995.
 [25] W. Luo, J. Huang, and G. Qiu, “JPEG error analysis and its applications to digital image forensics,” IEEE Trans. Inf. Forensics Security, vol. 5, no. 3, pp. 480–491, 2010.
 [26] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 72–82, 2008.
 [27] R. Hasimoto-Beltrán, S. Baqai, and A. Khokhar, “Transform domain inter-block interleaving schemes for robust image and video transmission in ATM networks,” J. Vis. Commun. Image R., vol. 15, no. 4, pp. 522–547, 2004.
 [28] Z.-L. Bai and Q. Huo, “A study on the use of 8-directional features for online handwritten Chinese character recognition,” in IEEE ICDAR, 2005, pp. 262–266.
 [29] L.-L. Huang, A. Shimizu, Y. Hagihara, and H. Kobatake, “Gradient feature extraction for classification-based face detection,” Pattern Recogn., vol. 36, no. 11, pp. 2501–2511, 2003.
 [30] J. L. Schiff, Cellular automata: A discrete view of the world. John Wiley & Sons, 2011, vol. 45.
 [31] P. Rosin, A. Adamatzky, and X. Sun, Cellular automata in image processing and geometry. Springer, 2014, vol. 10.
 [32] Q. Liu, A. H. Sung, and M. Qiao, “Neighboring joint density-based JPEG steganalysis,” ACM Trans. Intell. Syst. Technol., vol. 2, no. 2, 2011, article 16.
 [33] A. Olmos and F. A. A. Kingdom, “A biologically inspired algorithm for the recovery of shading and reflectance images,” Perception, vol. 33, no. 12, pp. 1463–1473, 2004.
 [34] G. Schaefer and M. Stich, “UCID: An uncompressed color image database,” in SPIE SRMAM, vol. 5307, 2004, pp. 472–480.
 [35] A. Taimori, F. Razzazi, A. Behrad, A. Ahmadi, and M. Babaie-Zadeh, “A novel forensic image analysis tool for discovering double JPEG compression clues,” Multimed. Tools Appl., vol. 76, no. 6, pp. 7749–7783, 2017.
 [36] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
 [37] H. Chen, N. Xi, B. Song, L. Chen, J. Zhao, K. W. C. Lai, and R. Yang, “Infrared camera using a single nanophotodetector,” IEEE Sensors J., vol. 13, no. 3, pp. 949–958, 2013.