Measurement-Adaptive Sparse Image Sampling and Recovery


Ali Taimori and Farokh Marvasti are with the Electrical Engineering Department, Sharif University of Technology, Tehran 14588-89694, Iran (e-mails: alitaimori@yahoo.com; marvasti@sharif.edu).
Abstract

This paper presents an adaptive and intelligent sparse model for digital image sampling and recovery. In the proposed sampler, we adaptively determine the number of samples required for retrieving an image based on the space-frequency-gradient information content of its patches. By leveraging texture in space, sparsity locations in the DCT domain, and a directional decomposition of gradients, the sampler combines uniform, random, and nonuniform sampling strategies. For reconstruction, we model the recovery problem as a two-state cellular automaton that iteratively restores the image with scalable windows from generation to generation. We demonstrate that the recovery algorithm quickly converges after a few generations for an image with an arbitrary degree of texture. For a given number of measurements, extensive experiments on standard image-sets and on infra-red and mega-pixel range imaging devices show that the proposed measurement matrix considerably increases the overall recovery performance, or equivalently decreases the number of sampled pixels for a specific recovery quality, compared to the random sampling matrices and Gaussian linear combinations employed by state-of-the-art compressive sensing methods. In practice, the proposed measurement-adaptive sampling/recovery framework has various applications, from intelligent compressive imaging-based acquisition devices to computer vision and graphics and image processing technology. Simulation codes are available online for reproduction purposes.

Index Terms—Cellular automaton, compressive sensing, directional gradients, measurement-adaptive sampling, sparse recovery, sparsity location, texture.

I Introduction

Current digital imaging devices first acquire images and then compress them separately, which leads to big-data problems. Contrarily, the aim of the emerging Compressive Sensing (CS) theory is to merge sampling and compression into one step by introducing the concept of “compressing during sensing,” thereby reducing data rate, acquisition time, power consumption, and device manufacturing cost, with demanding applications to next-generation Infra-Red (IR), remote sensing, and Magnetic Resonance Imaging (MRI) systems [1, 2, 3]. The theory of CS states that signals with a sparse representation in a specific domain can be recovered at a rate below the traditional Shannon-Nyquist rate [4, 5]. The DCT, wavelet, and gradient spaces represent paradigms in which natural images are sparse or at least compressible; the latter, however, is not fully invertible.

I-A Considered Scenario and Related Work

In the literature, significant attempts have been made to compressively sample data at very low sampling rates [6, 7, 8, 9, 10]. Conventional CS-based sampling/recovery strategies only exploit the sparsity prior of signals in an appropriate domain [11, 12, 13]. On the other hand, theoretical compressive sampling analyses rely on Gaussian or random sampling matrices to measure signals [14]. However, recent studies on sensing matrix design show that incorporating a signal model prior into the sampling phase can significantly improve recovery quality [7, 8, 10]. In this regard, the present paper concentrates on finding a way to adaptively and sparsely sense an image scene so that the image can be retrieved simply and efficiently.

Ji et al. in [6] introduced an adaptive Bayesian CS (BCS) with abilities such as determining the sufficient number of measurements and the confidence of recovery. The researchers in [7] proposed a nonuniform CS-based image sampling/recovery method, so-called uHMT-NCS, in which a Hidden Markov Tree (HMT) is utilized to exploit the correlation among sparse wavelet coefficients in both the sampling and reconstruction phases. In [8], a variable-density sampling strategy was designed in the frequency domain which exploits a priori statistical distributions of images in the wavelet space. Malloy and Nowak in [9] suggested an adaptive compressed sensing algorithm that requires nearly the same number of measurements as standard random nonadaptive design matrices but succeeds at lower SNR levels. Yang et al. in [10] designed an efficient Mixed Adaptive-Random (MAR) sensing matrix which employs image edge information with a sensor array 16 times smaller than the ultimate length of the recovered image. The Iterative Method with Adaptive Thresholding (IMAT) [13] and its modified version equipped with interpolation, IMATI [15], are extensions of the Iterative Hard Thresholding (IHT) family for sparse recovery [12]: instead of fixed thresholding, they adaptively threshold coefficients in a transform domain such as the Discrete Cosine Transform (DCT) during iterations. To recover sparse signals from Gaussian measurements, the authors in [16] presented a general iterative framework based on proximal methods called Iterative Sparsification-Projection (ISP); the ISP family contains the well-known SL0 algorithm [17] as a special case. In [18], a learning-based approach jointly optimizing the sensing matrix and an overcomplete dictionary was introduced, which performs better than random matrices or matrices optimized without dictionary learning. In [19], a variational BCS in the complex wavelet domain was suggested; this model, called TSCW-GDP-HMT, considers both the sparsity and the structural information of images.

I-B Motivations and Contributions

Although the aforementioned efforts improve performance to some extent, the role of Artificial Intelligence (AI) in the context of CS-based sampling/recovery has so far been negligible. In this paper, we take a step towards incorporating AI and present an adaptive and intelligent model for digital image sampling and recovery. For a given number of measurements, we show that the proposed sensing matrix considerably increases the overall recovery performance, or equivalently decreases the number of sampling points for a specific recovery quality, compared to the Gaussian, random, and dynamic sampling matrices employed by state-of-the-art CS methods [16, 10, 19, 6, 13, 11]. The number of samples required in the spatial domain to reconstruct the image is adaptively determined based on the space-frequency-gradient information content of the image. To this intent, a mixture of uniform, random, and nonuniform sampling strategies is suggested [2]. The devised intelligent sampling mechanism offers the advantages of simple, fast, and efficient recovery compared to complicated methods [7, 16, 19]. For reconstruction, we model the recovery as a Cellular Automaton (CA) machine that iteratively restores the image with scalable windows from generation to generation. To the best of the authors’ knowledge, this is the first work to model the image reconstruction problem via a CA. The convergence of the proposed CA-based recovery algorithm is guaranteed after a few generations, thus making it suitable for reconstructing present mega-pixel range images, whereas sophisticated approaches such as ISP [16], TSCW-GDP-HMT [19], and BCS [6] fail to process and store such high-dimensional signals on general-purpose computers.

The adaptivity in signal measurement helps to automatically select enough sampling points based on the local information content of the scene under view. For instance, lowpass images are sampled at low rates, whereas cluttered scenes are acquired with more samples for an accurate reconstruction. Generally, available methods in the literature do not have such a degree of flexibility and are manually adjusted to a fixed rate for each signal. This undesirable phenomenon may lead to data redundancy in smooth images or information loss in textured ones. In practice, the measurement-adaptive image sampling/recovery scheme has various applications, from intelligent compressive imaging-based acquisition devices [20] to computer vision and graphics and image processing technology, e.g., inpainting, remote sensing missing-data recovery, impulsive noise removal, and mesh-based image representation [21]. For reproduction purposes, simulation codes are available online (http://ee.sharif.edu/imat/).

Briefly, the main contributions of the paper are:

  • proposing a new measurement-adaptive image sampling mechanism consisting of uniform, random, and nonuniform samplers which, in a synergistic manner, incorporates the local space-frequency-gradient information content of the image,

  • introducing a novel nonuniform sampler based on directional gradients,

  • suggesting a novel cellular automaton model for image recovery whose convergence is guaranteed after a few generations, and

  • presenting extensive experiments that demonstrate the efficiency of the proposed sampling/recovery approach under different settings and application areas.

The remainder of this paper is organized as follows. Section II describes the proposed sampler. In Section III, we explain the CA-based recoverer. Section IV evaluates the effectiveness of our framework and compares it to modern approaches. The paper is finally concluded in Section V.

II Intelligent Sampling

Based on the content of local image patches, the proposed sampling method operates flexibly at various rates to capture the informative samples required by the recovery step. Figure 1 depicts the block diagram of the suggested measurement-adaptive sampler. As shown, the system receives a gray-level image patch and produces the related binary mask. Due to psycho-visual considerations, we assume that patches are square with side length $b$. The final sampling mask is generated from the union of three patterns, uniform, random, and nonuniform, obtained from the spatial, frequency, and gradient spaces, respectively. In the following subsections, we discuss each sampling strategy in detail.

Fig. 1: The proposed measurement-adaptive sparse sampler.

II-A Texture Measure

To adaptively sense the information content of each local image patch, a representative criterion of the signal's complexity is required [22]. Here, we utilize Shannon's joint entropy determined from Haralick's co-occurrence matrix [23, 24] as a quantitative metric of image block texture. We calculate the texture percentage as

$$\tau=-\frac{100}{H_{\max}}\sum_{i=1}^{G}\sum_{j=1}^{G}\mathbf{C}(i,j)\log_{2}\mathbf{C}(i,j)\ (\%), \qquad (1)$$

where the gray-level co-occurrence matrix $\mathbf{C}$ is the joint Probability Mass Function (PMF) obtained from the quantized or scaled version of the input image patch, namely $\tilde{\mathbf{P}}$. The joint probabilities can be defined in the horizontal, vertical, main diagonal, and off-diagonal directions. We used the horizontal neighbor as defined in MATLAB, i.e., $\mathbf{C}(i,j)=\Pr\{\tilde{\mathbf{P}}(r,c)=i,\ \tilde{\mathbf{P}}(r,c+1)=j\}$, where the variables $r$ and $c$ represent row and column, respectively. We considered the number of distinct gray levels in $\tilde{\mathbf{P}}$ as $G=8$. The normalizer $H_{\max}$ in (1) denotes the maximum entropy of $\mathbf{C}$, which is obtained for the uniform distribution, i.e., $\mathbf{C}(i,j)=1/G^{2}$ for all $i,j$, yielding $H_{\max}=\log_{2}G^{2}$. Thus, $0\le\tau\le100$.
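As a concrete illustration, the following sketch computes the texture percentage from the entropy of a horizontal co-occurrence matrix. It is a minimal sketch, not the authors' code: the 8-bit input range, the choice of $G=8$ gray levels, and the function name are our assumptions.

import numpy as np

def texture_percentage(patch, levels=8):
    """Texture percentage from the joint entropy of the horizontal GLCM."""
    # Scale the 8-bit patch to `levels` distinct gray levels (assumed mapping).
    p = np.floor(patch.astype(float) / 256.0 * levels).clip(0, levels - 1).astype(int)
    # Horizontal co-occurrence counts, offset (0, 1) as in MATLAB's graycomatrix.
    C = np.zeros((levels, levels))
    for r in range(p.shape[0]):
        for c in range(p.shape[1] - 1):
            C[p[r, c], p[r, c + 1]] += 1
    C /= C.sum()                          # joint PMF
    nz = C[C > 0]
    H = -np.sum(nz * np.log2(nz))         # Shannon joint entropy, Eq. (1)
    H_max = np.log2(levels ** 2)          # maximum entropy (uniform PMF)
    return 100.0 * H / H_max              # texture percentage in [0, 100]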

II-B Uniform Sampler

In the proposed uniform sampling strategy, we punch local patches at certain regular locations based on their local texture content in the space domain. In fact, the punch operation creates appropriate regular patterns for a stable recovery. As shown in Fig. 1, the uniform sampler first receives the image patch $\mathbf{P}$. Afterwards, the textureness of the signal is estimated. The number of punched points is proportional to the calculated texture percentage: whenever the textureness is high, we decorate the image with denser regular points, and vice versa. Therefore, the binary mask of the uniform pattern is determined by the following fuzzy rule

$$\mathbf{M}_{u}=\begin{cases}\mathbf{U}_{vl}, & 0\le\tau<\tau_{1}\\ \mathbf{U}_{l}, & \tau_{1}\le\tau<\tau_{2}\\ \mathbf{U}_{bp}, & \tau_{2}\le\tau<\tau_{3}\\ \mathbf{U}_{h}, & \tau_{3}\le\tau<\tau_{4}\\ \mathbf{U}_{vh}, & \tau_{4}\le\tau\le100\end{cases} \qquad (2)$$

in which the matrices $\mathbf{U}_{vl}$, $\mathbf{U}_{l}$, $\mathbf{U}_{bp}$, $\mathbf{U}_{h}$, and $\mathbf{U}_{vh}$ are assigned to very-low, low, bandpass, high, and very-high textures, respectively, and $0<\tau_{1}<\tau_{2}<\tau_{3}<\tau_{4}<100$ are the fuzzy breakpoints. Figure 2 shows the regular lattices employed in the proposed uniform sampler. The number of live cells (punched points) grows from $\mathbf{U}_{vl}$ to $\mathbf{U}_{vh}$; the relation between the texture and the number of live cells is nonlinear with an increasing exponential form. The configuration of punched points affects the recovery performance. Sub-optimal lattices should have a minimum number of sample points and a maximum number of live cells in a predefined neighborhood of missing samples. They should also respect patch boundaries and local correlation. We experimentally designed the patterns based on such constraints.

Fig. 2: From left to right, patterns of the uniform sampler defined by the matrices $\mathbf{U}_{vl}$, $\mathbf{U}_{l}$, $\mathbf{U}_{bp}$, $\mathbf{U}_{h}$, and $\mathbf{U}_{vh}$.
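The fuzzy rule (2) can be sketched as a simple lookup, as below; the interval breakpoints and the dictionary of lattice patterns are illustrative assumptions, since the paper leaves them to experimental design.

def uniform_mask(tau, lattices):
    """Select one of the five regular lattices of Fig. 2 by texture percentage.
    `lattices` maps the five labels to binary arrays; breakpoints are assumed."""
    if tau < 10:
        return lattices['very_low']
    elif tau < 25:
        return lattices['low']
    elif tau < 50:
        return lattices['bandpass']
    elif tau < 75:
        return lattices['high']
    return lattices['very_high']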

II-C Random Sampler

The intermediate layer of Fig. 1 depicts the proposed random sampler. Here, we first convert the data matrix from the spatial domain to the frequency space by the 2-D DCT transform as $\mathbf{F}=\mathbf{D}\mathbf{P}\mathbf{D}^{T}$, where the matrix $\mathbf{D}$ represents the transformation kernel of the DCT. In the DCT domain, samples are decorrelated and sparse. After an adaptive quantization of the DCT coefficients, we estimate the rate of random sampling. Accordingly, a uniformly distributed random mask is finally generated. In the following, we discuss creating this mask.

II-C1 Random Sampling Rate Measure

To measure sparsity, one way is to threshold insignificant coefficients in a transform domain [13]. However, such thresholding may be inaccurate due to the varying information of the signal under process/acquisition. To overcome this problem, we suggest an adaptive quantization of DCT coefficients inspired by the following observation.

Observation 1 (Adaptive quantization)

In the JPEG compression standard, where the DCT transform is utilized, the results in [22] reveal that the number of zero entries of the quantized DCT coefficients is inversely proportional to both the compression quality level, $q$, and the image block texture, $\tau$. Now, let the matrix $\hat{\mathbf{F}}$ be the quantized DCT matrix for a $b\times b$ image patch; then $Z(\hat{\mathbf{F}})=n-\|\hat{\mathbf{F}}\|_{0}$, where $\|\cdot\|_{0}$ denotes the $\ell_{0}$-norm, $n=b^{2}$, and the operator $Z(\cdot)$ counts the number of zero entries of a matrix/vector. From the above results, we have both $Z(\hat{\mathbf{F}})\propto 1/q$ and $Z(\hat{\mathbf{F}})\propto 1/\tau$. By assuming their equivalence, we have $q\propto\tau$. This shows that the texture percentage defined in (1) can be a good criterion for adaptively quantizing information in the frequency domain to measure the necessary random sampling rate.

Details: Based on Observation 1, we calculate the quantized DCT matrix as

$$\hat{\mathbf{F}}=\left[\mathbf{F}\oslash\mathbf{Q}_{\tau}\right], \qquad (3)$$

where the symbols $\oslash$ and $[\cdot]$ denote entry-by-entry division and rounding to the nearest integer, respectively. The matrix $\mathbf{Q}_{\tau}$, $\mathbf{Q}_{\tau}\in\mathbb{R}^{b\times b}$, represents the quantization table related to the texture $\tau$. For determining $\mathbf{Q}_{\tau}$, we exploited the Quantization Tables (QTs) introduced by the Independent JPEG Group (IJG) [25]. Based on the value of $\tau$, the corresponding quantization matrix is chosen and applied (see the Appendix). Therefore, we adaptively derive the sparsity number from the number of nonzero entries of the quantized DCT coefficients by $k=\|\hat{\mathbf{F}}\|_{0}$. Now, we suggest measuring the rate of random sampling by the following formula, which is consistent with the theory of CS, i.e., the number of measurements grows with the sparsity $k$:

$$r=\min\left(\alpha\frac{k}{n},\ \beta\right), \qquad (4)$$

in which $0\le r\le1$, and $\alpha$ and $\beta$ are tunable factors. The parameter $\alpha$ changes the number of undersampled points, whereas the coefficient $\beta$ keeps the sampling rate from drifting on well-textured patches and saturates it at a relatively fixed rate to control unnecessary information. With appropriate settings of $\alpha$ and $\beta$, the function in (4) is a non-decreasing curve vs $k$. It is important to note that, in practical situations, the number of nonzero quantized DCT coefficients, $k$, may be large, even near the length of the signal, $n$. This phenomenon occurs especially at high quality-level settings or in textured regions. Generally, high-performance compressive sensing in such a dense scenario is problematic because compressive sensing theory for real-world applications imposes a sampling rate of at most 10% [26]. We tried to respect this limitation when calculating the rate of random sampling in (4) through the choice of $\alpha$ and $\beta$, which covers both sparse and dense cases.
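The sketch below chains the 2-D DCT, the adaptive quantization of (3), and the rate law of (4). The saturating form min(alpha*k/n, beta) is our assumed instantiation of (4), chosen to match the described non-decreasing, saturating behavior; the default values of alpha and beta are placeholders.

import numpy as np
from scipy.fft import dctn

def random_sampling_rate(patch, Q, alpha=1.0, beta=0.5):
    """Estimate the random sampling rate of a patch from its quantized DCT."""
    F = dctn(patch.astype(float), norm='ortho')   # 2-D DCT of the patch
    F_hat = np.round(F / Q)                       # adaptive quantization, Eq. (3)
    k = np.count_nonzero(F_hat)                   # sparsity number
    n = patch.size
    # Assumed saturating rate law: non-decreasing in k, capped at beta, cf. Eq. (4).
    return min(alpha * k / n, beta)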

II-C2 Graduate Randomization Procedure

Instead of the blind random sampling adopted by researchers in the compressive sensing field, we introduce a Graduate Randomization Procedure (GRP). In this procedure, by receiving both the feedback of the sampling rate from the frequency domain and the generated uniform mask, we add a certain randomness to the uniform pattern. In other words, from low- to high-textured patches, GRP gradually adds random nonuniformity to the uniform base lattice. It is important to note that random samples are selected and added at locations other than the uniform marks. In order to generate the random pattern, the GRP algorithm is implemented as follows (a code sketch is given after the list).

  • Find the number of zero elements in the uniform pattern $\mathbf{M}_{u}$, i.e., $z=Z(\mathbf{M}_{u})$, and store their locations in the set $\mathcal{L}=\{(r_{1},c_{1}),\ldots,(r_{z},c_{z})\}$, where $r$ and $c$ represent the row and column of a location, respectively.

  • Shuffle the locations of the zero entries and substitute the set $\mathcal{L}$ with the result.

  • Truncate (4) to the number of available random samples by

    $$m_{r}=\min\left(\lceil r\,n\rceil,\ z\right). \qquad (5)$$
  • Generate $m_{r}$ integer values randomly without replacement in the range $[1,z]$, namely $a_{1},\ldots,a_{m_{r}}$.

  • Define the matrix of the random pattern, $\mathbf{M}_{r}$, and initialize it with $\mathbf{0}_{b\times b}$, where $\mathbf{0}_{b\times b}$ represents a zero matrix.

  • For $t=1$ to $m_{r}$, first assign $(r,c)=\mathcal{L}(a_{t})$, and then set $\mathbf{M}_{r}(r,c)=1$.
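A minimal sketch of the GRP steps above follows; the truncation of Eq. (5) and the function name are our assumptions.

import numpy as np

def grp(M_u, r, rng=None):
    """Add r-rate randomness at locations not already punched by M_u."""
    if rng is None:
        rng = np.random.default_rng()
    M_r = np.zeros_like(M_u)
    zero_locs = np.argwhere(M_u == 0)            # locations of zero entries
    z = len(zero_locs)
    m = min(int(round(r * M_u.size)), z)         # truncated count, cf. Eq. (5)
    rng.shuffle(zero_locs)                       # shuffle the zero locations
    for row, col in zero_locs[:m]:               # pick m without replacement
        M_r[row, col] = 1
    return M_r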

II-D Nonuniform Sampler

Fig. 3: A partitioning of the 2-D frequency domain defined in [27, 22] into 6 regions.

It is very important to preserve image structure, such as edges, on the recovery side. Generally, edge preservation is a difficult task in interpolation, even for sophisticated algorithms. Due to the fuzzy nature of image edges, an edge-prior sampler can reduce uncertainty in the reconstruction phase. Although the GRP carries out this task to some extent, for well-structured horizontal, vertical, and diagonal edges, or periodic configurations whose recovery is perceptually important from a subjective evaluation viewpoint, more edge samples are required to reduce uncertainty and unambiguously recover the missing scattered samples. Hence, we suggest a sparsity location-aware algorithm to capture horizontal, vertical, and diagonal edges in the sampling phase.

To implement the above idea, we address sparsity locations in different frequency regions. Figure 3 depicts a partitioning of the 2-D frequency domain defined in [27, 22]. Except for the single DC mode, five subsets of AC modes exist, namely $R_{1}$ to $R_{5}$, corresponding to low, horizontal, vertical, diagonal, and high frequency information, respectively. If nonzero quantized DCT coefficients are sparsely located in the horizontal, vertical, or diagonal regions, the corresponding edge points are sampled in the space domain. For instance, in a block of well-structured vertical edges, the nonzero coefficients appear only in the partitions associated with vertical structure. To detect the edge maps, we first determine an 8-directional decomposition of the image patch gradient [28, 29]. For this purpose, we utilized the Sobel operator to calculate the gradient of the image block, $\mathbf{g}=(g_{x},g_{y})$, and considered the four cardinal directions N, E, S, W in addition to the four ordinal directions NE, SE, SW, NW to decompose the block. In order to preserve the image structure at the boundary of blocks, we also acquired border pixels for computing the gradient. Let $G_{d}$, $d=1,\ldots,8$, be the 8-directional gradient features. The gradient vector in the $x$-$y$ plane is projected onto the two adjacent directions $\mathbf{e}_{d_{1}}$ and $\mathbf{e}_{d_{2}}$ that bracket its angle, with components

$$p_{1}=\|\mathbf{g}\|\,\frac{\sin(45^{\circ}-\varphi)}{\sin 45^{\circ}}, \qquad (6)$$
$$p_{2}=\|\mathbf{g}\|\,\frac{\sin\varphi}{\sin 45^{\circ}}, \qquad (7)$$

where $\varphi\in[0^{\circ},45^{\circ})$ measures the angle of $\mathbf{g}$ from $\mathbf{e}_{d_{1}}$, and the adjacent pair is selected by the following rule.

  • If the gradient angle lies in the sector $[0^{\circ},45^{\circ})$, then assign $(d_{1},d_{2})=(\mathrm{E},\mathrm{NE})$; the remaining seven $45^{\circ}$ sectors are treated analogously, e.g., $[45^{\circ},90^{\circ})\mapsto(\mathrm{NE},\mathrm{N})$, down to $[315^{\circ},360^{\circ})\mapsto(\mathrm{SE},\mathrm{E})$.

Fig. 4: The gradient directions corresponding to the cardinal and ordinal directions, and the directional decomposition of the gradient vector into its two adjacent directions.

Afterwards, we apply a threshold to the magnitudes of the two directions with stronger gradients to suppress redundant, uninformative samples. Figure 4 illustrates the directional decomposition of the gradient vector into two adjacent directions; for the example shown, the gradient vector lies between two adjacent directions of the 8-direction star, and its components follow from the specified rule. The steps of the nonuniform sampling algorithm are implemented as follows (a code sketch is given after the list).

  • Initialize the 8-directional gradient matrices, i.e., $G_{d}=\mathbf{0}_{b\times b}$, $d=1,\ldots,8$.

  • Extract the 8-directional gradient features as explained above for each gradient vector $\mathbf{g}(r,c)$.

  • Obtain the normalized versions of $G_{d}$, $d=1,\ldots,8$, between $0$ and $1$.

  • Define the matrix of the nonuniform pattern, $\mathbf{M}_{n}$, and initialize it as $\mathbf{0}_{b\times b}$.

  • For determining the structure of edges, initialize a binary string $s=s_{1}s_{2}s_{3}s_{4}s_{5}$ with the length of the number of AC frequency regions, i.e., 5, as all zeros.

  • For $u=1$ to $5$, obtain the set of nonzero quantized DCT coefficients located in region $R_{u}$. If this set is nonempty, set $s_{u}=1$.

  • If the string $s$ indicates a vertical, horizontal, diagonal, vertical-diagonal, or horizontal-diagonal edge, first sort the directional features. Afterwards, for the two directions having stronger gradients, calculate the gradient magnitude and renormalize it between $0$ and $1$. Finally, for $r=1$ to $b$ and $c=1$ to $b$, if the resulting gradient magnitude exceeds the predefined threshold $\theta$, set $\mathbf{M}_{n}(r,c)=1$.
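The following sketch condenses the nonuniform sampler. It is a simplified, assumption-laden rendering: it omits the sparsity-location test on the DCT regions of Fig. 3 (i.e., it always produces a mask), and the sector-based projection of Eqs. (6)-(7) and the default threshold are our reading of the text.

import numpy as np
from scipy.ndimage import sobel

def nonuniform_mask(patch, theta=0.5):
    """Edge-aware mask from an 8-directional gradient decomposition."""
    gx = sobel(patch.astype(float), axis=1)       # horizontal Sobel gradient
    gy = sobel(patch.astype(float), axis=0)       # vertical Sobel gradient
    mag = np.hypot(gx, gy)
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 360.0)
    # Each gradient contributes to its two adjacent 45-degree directions,
    # with the parallelogram components of Eqs. (6)-(7).
    d1 = (ang // 45).astype(int) % 8              # lower adjacent direction
    d2 = (d1 + 1) % 8                             # upper adjacent direction
    phi = np.radians(ang - 45 * d1)               # angle inside the sector
    s45 = np.sin(np.radians(45.0))
    p1 = mag * np.sin(np.radians(45.0) - phi) / s45
    p2 = mag * np.sin(phi) / s45
    G = np.zeros((8,) + patch.shape)
    for d in range(8):
        G[d] = np.where(d1 == d, p1, 0.0) + np.where(d2 == d, p2, 0.0)
    # Keep the two directions with stronger gradients, renormalize, threshold.
    strongest = np.argsort(G.reshape(8, -1).sum(axis=1))[-2:]
    M_n = np.zeros(patch.shape, dtype=int)
    for d in strongest:
        g = G[d] / (G[d].max() + 1e-12)
        M_n[g > theta] = 1
    return M_n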

II-E Measurement-Adaptive Sampling Algorithm

As shown in Fig. 1, the ultimate binary mask is generated by the union of the uniform, random, and nonuniform patterns as

$$\mathbf{M}=\mathbf{M}_{u}\lor\mathbf{M}_{r}\lor\mathbf{M}_{n}. \qquad (8)$$

The regularity, randomness, and structure of the patterns in the above combined form naturally create a sampling mask with chaotic behavior. For selecting the threshold $\theta$ in the nonuniform sampler, a trade-off between subjective quality enhancement and increased redundancy exists: as $\theta$ decreases toward zero, nearly all gradient locations are sampled. Therefore, we empirically set $\theta$ to maintain sampling near an optimal sub-Shannon-Nyquist rate. The downsampled image patch is constructed from

$$\mathbf{X}_{s}=\mathbf{M}\odot\mathbf{P}, \qquad (9)$$

in which the symbol $\odot$ denotes the Hadamard product. Algorithm 1 summarizes the proposed adaptive intelligent image sampling strategy. For a previously captured image of dimension $N_{1}\times N_{2}$, the proposed block-wise algorithm can be repeated on all nonoverlapping image patches to obtain the overall sampling mask and the subsampled image.

1:Input: The image patch $\mathbf{P}$.
2:Measure the texture $\tau$ as (1).
3:Generate the uniform sampling mask $\mathbf{M}_{u}$ by (2).
4:Determine the quantization table $\mathbf{Q}_{\tau}$ in terms of $\tau$.
5:Transform the patch to the 2-D DCT domain, $\mathbf{F}=\mathbf{D}\mathbf{P}\mathbf{D}^{T}$.
6:Quantize the DCT coefficients via (3).
7:Estimate the sparsity number $k=\|\hat{\mathbf{F}}\|_{0}$.
8:Calculate the rate of random sampling $r$ as (4).
9:Perform the GRP routine to obtain the random mask $\mathbf{M}_{r}$.
10:Run the nonuniform sampler algorithm to generate $\mathbf{M}_{n}$.
11:Combine the created masks as $\mathbf{M}=\mathbf{M}_{u}\lor\mathbf{M}_{r}\lor\mathbf{M}_{n}$.
12:Obtain the scattered sample points by $\mathbf{X}_{s}=\mathbf{M}\odot\mathbf{P}$.
13:Outputs: The mask $\mathbf{M}$ and the subsampled image $\mathbf{X}_{s}$.
Algorithm 1 The proposed adaptive sampling algorithm
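Gluing the previous sketches together gives an end-to-end rendering of Algorithm 1. All names are carried over from the earlier sketches and remain assumptions; `Q_tables` is a hypothetical callable returning the texture-dependent IJG table of the Appendix.

def adaptive_sampling(patch, lattices, Q_tables, alpha=1.0, beta=0.5, theta=0.5):
    """End-to-end sketch of Algorithm 1 for one patch."""
    tau = texture_percentage(patch)                   # step 2
    M_u = uniform_mask(tau, lattices)                 # step 3
    Q = Q_tables(tau)                                 # step 4
    r = random_sampling_rate(patch, Q, alpha, beta)   # steps 5-8
    M_r = grp(M_u, r)                                 # step 9
    M_n = nonuniform_mask(patch, theta)               # step 10
    M = ((M_u + M_r + M_n) > 0).astype(int)           # step 11: union, Eq. (8)
    X_s = M * patch                                   # step 12: Hadamard, Eq. (9)
    return M, X_s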

III Cellular Automaton-based Image Recovery

Fig. 5: Applying Algorithm 2 to a DC image. From left to right, the sampling matrices enclosed by the green-colored frame illustrate the initial state and the evolution at the first, second, and third generations, respectively.

As mentioned in the literature, different approaches exist for image recovery [7, 8, 9, 18, 16, 13, 19, 12, 11, 6, 17]. Here, we tailor a novel technique to recover the scattered samples obtained from the intelligent sampling stage. We model the reconstruction mechanism by a dynamic CA. Cellular automata are simple intelligent agents with the fundamental characteristics of locality, uniformity, and synchronicity [30]. They are composed of discrete cells equipped with special rules that enable them to solve sophisticated problems. CA have found various applications in image processing tasks such as noise removal, edge detection, and forgery localization [31].

III-A Cellular Automaton Modeling for Recovery

In modeling, we consider a 2-state CA machine, in which the states of dead and live cells represent the zero and one values in the sampling mask $\mathbf{M}$, respectively. Contrary to conventional fixed-neighbor CA models, the proposed CA-based recoverer performs as an iterative method which applies variable-scale windows to the sampled image $\mathbf{X}_{s}$. The size of the square window, $w$, increases at each generation of the CA. In summary, we define the elements of the model as $(\mathbf{M},\mathbf{X}_{s},w,\sigma,\gamma,\mathbf{M}^{+},\hat{\mathbf{X}})$, in which $\sigma$ represents the standard deviation of a Gaussian kernel, $\gamma$ is a coefficient for increasing $w$ during generations, and $\mathbf{M}^{+}$ and $\hat{\mathbf{X}}$ denote the mask at the next generation and the recovered image, respectively. The rule behind the suggested recovery algorithm is simple yet efficient, as follows. By considering the correlation among adjacent samples, missing pixels are reconstructed via Gaussian-weighted averaging. We find the live cells of $\mathbf{M}$ around a central dead cell and extract their corresponding weights in the $w\times w$ Moore neighborhood [30], collected in the dynamic-length vectors $\mathbf{v}$ and $\boldsymbol{\omega}$, respectively. If at least one live cell exists, the dead cell is replaced with the weighted mean of the live cells in the subsampled image, and that dead cell revives at the next generation. At each generation of the CA-based recoverer, the model elements are updated. Algorithm 2 presents the pseudo-code of the CA-based image recovery algorithm, in which the symbols $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ represent the floor and ceiling functions, respectively. We utilized the replication rule to cope with the boundary conditions of border cells.

After the last generation, we utilize a post-processing stage to alleviate possible blockiness artifacts due to the patch-based nature of the recovery. This phenomenon may be observed in plain regions. Based on the coding style of the measurement-adaptive sampler, the number of live cells within a window is an appropriate criterion to discriminate plain regions from textured ones in the recovery phase. Therefore, we apply a conditional smoothing filter only to flat regions. To this intent, we first initialize the post-processed result $\hat{\mathbf{X}}_{p}$ as $\hat{\mathbf{X}}$. Afterwards, we scan the image with a square window. If the number of live cells of the initial sampling mask inside the window is less than a threshold, the corresponding pixel values of $\hat{\mathbf{X}}$ in the window are smoothed via a Gaussian kernel with standard deviation $\sigma_{s}$, and the result is placed into $\hat{\mathbf{X}}_{p}$. In experiments, the threshold coefficient and the standard deviation of the smoother were set empirically.
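A sketch of this conditional smoother is given below; the window size, live-cell threshold, and smoothing strength are illustrative assumptions, since the paper sets them empirically.

import numpy as np
from scipy.ndimage import gaussian_filter

def conditional_smoother(X_hat, M0, win=8, thresh=16, sigma=1.0):
    """Smooth only windows that the initial mask M0 marks as plain regions."""
    X_p = X_hat.copy()
    S = gaussian_filter(X_hat, sigma)          # globally smoothed version
    H, W = M0.shape
    for r in range(0, H, win):
        for c in range(0, W, win):
            # Few initial samples in the window indicate a flat region.
            if M0[r:r + win, c:c + win].sum() < thresh:
                X_p[r:r + win, c:c + win] = S[r:r + win, c:c + win]
    return X_p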

1:Inputs: The mask $\mathbf{M}$ and the subsampled image $\mathbf{X}_{s}$.
2:Initialize the generation $g=1$, the window size $w$, the kernel deviation $\sigma$, the growth coefficient $\gamma$, $\hat{\mathbf{X}}=\mathbf{X}_{s}$, and $\mathbf{M}^{+}=\mathbf{M}$.
3:Calculate the number of dead cells by $n_{d}=Z(\mathbf{M})$.
4:while $n_{d}>0$ do
5:     $h=\lfloor w/2\rfloor$
6:     Obtain the $w\times w$ Gaussian kernel $\mathbf{K}$ with standard deviation $\sigma$.
7:     for $r=1$ to $N_{1}$ do
8:         for $c=1$ to $N_{2}$ do
9:              if $\mathbf{M}(r,c)=0$ then
10:                  Initialize the empty vectors $\mathbf{v}$ and $\boldsymbol{\omega}$, and set $\ell=0$.
11:                  for $i=-h$ to $h$ do
12:                       for $j=-h$ to $h$ do
13:                           if $\mathbf{M}(r+i,c+j)=1$ then
14:                                $\ell\leftarrow\ell+1$
15:                                $\mathbf{v}(\ell)\leftarrow\hat{\mathbf{X}}(r+i,c+j)$
16:                                $\boldsymbol{\omega}(\ell)\leftarrow\mathbf{K}(i+h+1,j+h+1)$
17:                           end if
18:                       end for
19:                  end for
20:                  if $\ell\ge1$ then
21:                       $\hat{\mathbf{X}}(r,c)\leftarrow\sum_{t=1}^{\ell}\boldsymbol{\omega}(t)\,\mathbf{v}(t)\,/\sum_{t=1}^{\ell}\boldsymbol{\omega}(t)$
22:                       $\mathbf{M}^{+}(r,c)\leftarrow1$
23:                  end if
24:                  Release the vectors $\mathbf{v}$ and $\boldsymbol{\omega}$.
25:              end if
26:         end for
27:     end for
28:     $\mathbf{M}\leftarrow\mathbf{M}^{+}$, $g\leftarrow g+1$, $w\leftarrow\gamma w$ (rounded to the nearest odd integer), $\sigma\leftarrow\gamma\sigma$
29:     $n_{d}\leftarrow Z(\mathbf{M})$
30:end while
31:Apply the post processing on $\hat{\mathbf{X}}$ to obtain $\hat{\mathbf{X}}_{p}$.
32:Output: The recovered post-processed image $\hat{\mathbf{X}}_{p}$.
Algorithm 2 The proposed CA-based recovery algorithm
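A compact, runnable rendering of Algorithm 2 follows. It is a sketch under stated assumptions: the window and kernel-width schedules (both scaled by gamma), boundary handling by clipping the window rather than replicating border cells, and the requirement that the mask contain at least one live cell.

import numpy as np

def gaussian_kernel(w, sigma):
    ax = np.arange(w) - w // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

def ca_recover(M, X_s, w=3, sigma=1.0, gamma=2.0):
    """Revive dead cells as the Gaussian-weighted mean of live Moore neighbors."""
    M = M.astype(int).copy()
    X = X_s.astype(float).copy()
    H, W = M.shape
    while (M == 0).any():
        K = gaussian_kernel(w, sigma)
        h = w // 2
        M_next, X_next = M.copy(), X.copy()
        for r in range(H):
            for c in range(W):
                if M[r, c] == 0:
                    r0, r1 = max(r - h, 0), min(r + h + 1, H)
                    c0, c1 = max(c - h, 0), min(c + h + 1, W)
                    live = M[r0:r1, c0:c1] == 1
                    if live.any():
                        Kw = K[r0 - r + h:r1 - r + h, c0 - c + h:c1 - c + h][live]
                        X_next[r, c] = np.sum(Kw * X[r0:r1, c0:c1][live]) / Kw.sum()
                        M_next[r, c] = 1     # the dead cell revives next generation
        M, X = M_next, X_next
        w = int(gamma * w) | 1               # grow the window, keep it odd
        sigma *= gamma                       # widen the kernel accordingly
    return X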
Fig. 6: From top to bottom, the rows show the original images, sampling masks, subsampled images, recovered results after intermediate and final generations, and the results of applying the conditional smoother.

III-B Convergence Analysis

As illustrated in Fig. 6, the CA-based recoverer starts by recovering the dense missing samples of high-textured regions at a fine scale. Then, by updating the model parameters, this process continues and finally terminates by retrieving lowpass sparse samples at a coarse scale. The advised variable-scale windowing, in comparison to a fixed window, speeds up the convergence rate of Algorithm 2 without sacrificing recovery quality. This mechanism also better controls error propagation during recursion. In Lemma 1, we prove that the algorithm quickly converges after a few generations for an image with an arbitrary degree of texture.

Lemma 1 (Convergence guarantee)

Let the patches of the image $\mathbf{X}$ be square of side length $b$. If $G$ denotes the number of CA generations, then Algorithm 2 for recovering the image is guaranteed to converge in at most three iterations, i.e., $G\le3$.

Proof:

To prove the lemma, we consider the worst case, i.e., the image under reconstruction is a fully DC image, the sparsest possible signal, to determine the upper bound. In such a case, for all patches we have $\tau=0$, which results in $\mathbf{M}_{u}=\mathbf{U}_{vl}$, while the random and nonuniform patterns equal the zero matrix. Hence, $\mathbf{M}=\mathbf{U}_{vl}$ for all patches. Let the live and dead cells be represented by white- and black-colored cells, respectively. For visualization, Fig. 5 illustrates the generations of the sampling mask matrix for a given flat DC image. It is important to note that, if the image dimensions are not multiples of $b$, we can use zero padding. If $\mathbf{M}_{g}$ denotes the sampling mask at generation $g$, then Fig. 5 shows that the number of dead cells vanishes by the third generation. This implies that $G=3$ for a completely DC image, because the distance between two live cells is 7 pixels in both the horizontal and vertical directions. For a given image with a certain degree of texture, the density of live cells in $\mathbf{M}$ is always greater than or equal to that of the worst case, thus demonstrating $G\le3$. ∎

Example 1 (Sampling and recovery)

Figure 6 shows the original images of Baboon, Boat, and Lena, for which the sampling masks, subsampled images, and the intermediate and final results of the recovery step are visualized. These images have different texture averages, sorted in descending order; accordingly, the sampler automatically extracted decreasing percentages of pixels, and the recoverer estimated the images at correspondingly increasing PSNRs.

III-C Practical Considerations

In order to implement the proposed sampling and recovery algorithms in practice, we can consider a mechanism similar to the structure of image acquisition in the Rice single-pixel camera [20]. However, one of the main differences is the use of a small array of sensors rather than a single photo-diode. This allows us to adaptively measure the local image content, sample in 2-D space, and recover the scene by the suggested CA-based recovery approach. The solution is to focus the physical scene on a Digital Micro-mirror Device (DMD) via a primary lens. The DMD itself is partitioned into non-overlapping patches, such as $b\times b$ blocks. In a sequential line-scan manner, at each time only one of the analog image patches is reflected to another lens by configuring the mirror positions; the other mirrors reflect the scene onto a black absorbent surface. This secondary lens focuses the light of that single patch on a CCD or CMOS sensor. After amplifying the signal and passing it through an Analog to Digital Converter (ADC), the digitized sub-image, $\mathbf{P}$, is obtained. Afterwards, the texture of the patch is measured. Then, based on the proposed sampler, the sampling mask, $\mathbf{M}$, and the sub-sampled image, $\mathbf{X}_{s}$, are determined. This process is repeated until the binary mask and the sub-sampled image of the last patch are generated. The overall sampling mask and the corresponding sampled image can be stored on a memory card. In the decoding phase, the proposed recoverer receives them and finally restores the image scene, $\hat{\mathbf{X}}$.

IV Experiments

This section provides extensive experiments to support our findings. The algorithms were implemented in MATLAB and run on an Intel Core i7 2.2 GHz laptop with 8 GB of RAM.

IV-A Experimental Settings

In order to evaluate the performance of our sampling/recovery framework and compare it with other related approaches, we utilized various databases having different statistical characteristics. The image sets include the standard databases NCID [32], CCID [33], UCID [34], the Microsoft Research Cambridge Object Recognition Image Database (https://www.microsoft.com/en-us/download/details.aspx?id=52644), and a collection of well-known test images such as Baboon, Cameraman, Lena, etc. Besides, we gathered a set of IR test images as well as a mega-pixel range image database called RCID [35] to investigate the efficiency of our algorithms more precisely. Some of these images are shown in Fig. 12. In order to evaluate recovery performance, we employed the standard objective criteria of PSNR, SSIM [36], and the Normalized Recovery Error (NRE), defined as the energy of the reconstruction error normalized by the energy of the original image, as well as subjective evaluation. We compared our sampling and recovery algorithms with related approaches including the intelligent MAR sampler suggested in [10], the scattered-data recovery technique of spline interpolation, and the state-of-the-art methods BCS [6], uHMT-NCS AMP [7], IMAT [13], IMATI [15], ISP [16], and TSCW-GDP-HMT [19].
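For reproducibility of the evaluation protocol, the two scalar criteria can be sketched as below; the peak value of 255 and the energy-ratio form of the NRE are our assumed readings of the standard definitions.

import numpy as np

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((x.astype(float) - x_hat.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def nre(x, x_hat):
    """Normalized Recovery Error: error energy over signal energy (assumed form)."""
    e = x.astype(float) - x_hat.astype(float)
    return np.sum(e ** 2) / np.sum(x.astype(float) ** 2)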

We adjusted the set of parameters so that, in the suggested sampler, the PSNR is maximized for a given average rate. For tuning the parameters, we used images of the UCID database as a validation set. To generate the matrices of the uniform sampler, different regular and periodic lattices with various punched points may be used. We examined a set of such configurations and finally selected the patterns introduced in Fig. 2, which optimize the aforementioned objective criterion. We also set the parameters of the CA-based recoverer and its post-processing stage to obtain a sub-optimal objective PSNR performance in dB. The influence of the post-processing stage with appropriate parameters in the CA-based recovery algorithm is to improve the PSNR noticeably, while the standard deviation of the smoother $\sigma_{s}$ has negligible impact on performance. The number of punched points in the uniform sampler, the coefficients $\alpha$ and $\beta$ in the random sampler, and the threshold $\theta$ in the nonuniform sampler control the sampling rate of the suggested algorithm. On the one hand, for natural image patches, the texture percentage is often low, so short intervals are considered for such textures; this also means the number of punched points is often less than or equal to 4. On the other hand, the main usage of the coefficient $\beta$ is to make the formula acceptable for dense images, i.e., high values of $k$. Therefore, the influential factors for increasing or decreasing the adaptive sampling rate are the coefficient $\alpha$ and the threshold $\theta$.

IV-B Performance on Standard Databases

Here, we evaluate the performance of our sampling/recovery method on 21 well-known test images and four standard databases. The adaptive sampling rate measured by the sampler and the recovery quality on the image processing test images are tabulated in Table I. The average dynamic sampling rate over these images was 23.75%, yielding an average PSNR of 30.48 dB. The curves of the average PSNR (dB) vs the average sampling rate (%) on the 21 well-known images, under variations of the tunable factors $\alpha$ and $\beta$, are depicted in Fig. 7. The curves demonstrate that the sensitivity of our algorithm to variations of $\alpha$ and $\beta$ is low. Figure 8 also shows the average PSNR (dB) vs the post-processing coefficient. In this figure, the smallest coefficient corresponds to recovery without the post-processing step, and the optimal value demonstrates a noticeable gain.

Fig. 7: Average PSNR (dB) vs average sampling rate (%) on the images listed in Table I.
Fig. 8: The influence of the post-processing coefficient on recovery performance for the images listed in Table I.

As an example, Fig. 9 illustrates the recovery of the Baboon image for the compared state-of-the-art approaches. To be able to compare algorithms of various complexities on the laptop, we reduced the image size. For all methods, the sampling budget was 6000 measurements. Based on PSNR, the proposed approach outperforms the other methods on this image. For BCS, IMAT, IMATI, TSCW-GDP-HMT, and the proposed algorithm, the average PSNR (dB) over all images of this size was 18.52, 23.34, 24.26, 28.54, and 27.99, respectively; other methods, such as ISP, failed to complete the restoration process. The average sampling rate determined by our algorithm was 34.3%, and for a fair comparison we set this rate for the competing algorithms, too. Although the average PSNR of the TSCW-GDP-HMT algorithm is higher than that of the suggested method, its computational complexity is very high. To clarify this issue, Fig. 10 compares the run-time (s) of the different methods in terms of the sampling rate (%) on the Baboon image. As seen, the proposed method is even slightly faster than IMATI.

Fig. 9: PSNR of different methods in dB. From top to bottom and left to right: the original image, and the recovered images obtained by uHMT-NCS AMP (24.15), ISP (19.46), BCS (18.58), IMAT (24.25), IMATI (24.95), TSCW-GDP-HMT (26.72), and the proposed algorithm (27.33), respectively.
Fig. 10: The run-time (s) of different methods in terms of sampling rate (%) on the image of Fig. 9.
Image number Image name Sampling rate (%) PSNR (dB) SSIM NRE
1 Baboon 35.23 29.38 0.8814 0.0085
2 Barbara 28.86 26.8 0.8431 0.0143
3 Boat 25.16 29.43 0.8227 0.0093
4 Butterfly 24.01 30.27 0.8356 0.0082
5 Cameraman 15.39 32.77 0.9295 0.0074
6 Einstein 18.09 31.26 0.7867 0.0082
7 Elaine 19.56 31.26 0.7613 0.0106
8 Fruits 16.6 31.13 0.8605 0.0056
9 Goldhill 25.51 30.4 0.8199 0.0107
10 Jetplane 18.59 31.36 0.91 0.0078
11 Lake 25.67 29.21 0.8324 0.0155
12 Lena 18.12 32.19 0.8779 0.0068
13 Livingroom 29.04 29.27 0.8288 0.0116
14 Owl 34.48 28.53 0.819 0.0115
15 Peppers 18.97 31.19 0.8464 0.0129
16 Pirate 24.62 29.99 0.8488 0.0095
17 Shack 34.1 27.09 0.8045 0.0234
18 Walkbridge 38 27.38 0.8279 0.0119
19 Woman-blonde 21.73 29.73 0.8229 0.0115
20 Woman-darkhair 11.55 36.69 0.9265 0.0053
21 Zelda 15.41 34.72 0.8915 0.0076
Average 23.75 30.48 0.8465 0.0104
TABLE I: Results of the proposed sampling/recovery method

Table II reports the performance of our approach on the NCID, CCID, UCID, and RCID databases, which are sorted in descending order of textural information; i.e., the RCID and NCID databases have the minimum and maximum texture averages, respectively. The average sampling rate acquired by the proposed dynamic sampler confirms this ordering. Images within each set also exhibit intra-database variability of textural information. However, we tuned the parameters of our sampler so that the dynamic sampling rate for each image does not exceed approximately 50%, based on the limitation imposed by compressive sensing theory for real-world applications [26]. This leads to an acceptable average PSNR with a high variance: the range of PSNR (dB), [min(PSNR), max(PSNR)], is [17.39, 48.48], [21.91, 41.25], [18.41, 40.03], and [26.29, 44.35] for the NCID, CCID, UCID, and RCID databases, respectively. From a compression viewpoint, the recovery performance relative to the number of sensed samples is promising for all databases.

Database Total images
NCID [32] 5150
CCID [33] 1096
UCID [34] 1338
RCID [35] 208
TABLE II: Performance of the proposed image sampling/recovery framework on four standard databases

IV-C Test on Infra-Red Imaging Systems

Infra-red imaging has a wide range of applications, from surveillance and Intelligent Transportation Systems (ITSs) to medical imaging. These images are naturally sparse and are of interest in the compressive sensing community for fabricating low-cost CS-based imagers [37]. In this experiment, we evaluate the performance of the adaptive sampling/recovery framework on a test-set including 20 images grabbed by various near-IR cameras for ITS applications such as License Plate Recognition (LPR). Four representative IR images are shown in the first row of Fig. 12. Specifically, for recovering images 1 to 4 in this figure, the PSNRs (sampling rates) in dB (%) were 46.68 (2.08), 35.89 (11.58), 42.04 (4.41), and 38.86 (6.57), respectively. Generally, all IR images were recovered with a high average PSNR at a low average adaptive sampling rate, thus demonstrating the excellent IR recovery quality of our scheme under high sample reduction.

For BCS, IMAT, IMATI, TSCW-GDP-HMT, and the proposed algorithm, the average PSNR (dB) over all IR images of this size was 28.44, 22.11, 26.34, 35.23, and 31.88, respectively. The average sampling rate determined by our algorithm was 15.37%; for a fair comparison, we set this rate for the competing algorithms. The performance of the proposed method is the second best.

IV-D Influence of Uniform and Nonuniform Samples

In this experiment, we considered two scenarios to evaluate the roles of the proposed uniform and nonuniform samplers. In the first scenario, we omitted the uniform sampler from the scheme of Fig. 1 and compared the resulting random/nonuniform sampler with the whole model. This can be done by setting $\mathbf{M}_{u}=\mathbf{0}$. In this case, for a fair comparison preserving an equal sampling rate, we can manually increase the coefficient $\alpha$ in the random sampler or decrease the threshold $\theta$ in the nonuniform sampler; here, we only increased $\alpha$. The average PSNR of the resulting random/nonuniform sampler on the 21 well-known images listed in Table I was noticeably lower than the result reported in Table I, which demonstrates the importance of the uniform sampler. It is noticeable that the performance reduction on lowpass and periodic images like Cameraman and Barbara is much larger than on cluttered ones.

In the second scenario, the nonuniform sampler was neglected by setting $\mathbf{M}_{n}=\mathbf{0}$. Similarly, we increased $\alpha$ to equalize the rate of the resulting uniform/random sampler with that of the complete scheme. The average PSNR was slightly higher than the average of Table I. Although this objective evaluation shows a small improvement in the absence of the nonuniform sampler, subjective evaluations indicate a better reconstruction of image structure, such as edges, in its presence. For instance, Fig. 11 visualizes this phenomenon for four images, in which the reconstructed edges are sharper with the entire solution than in the second scenario. Therefore, for approximately equal sampling rates and objective PSNR, the combined strategy has a subjective superiority in performance.

Fig. 11: From left to right and top to bottom, the reconstruction of the Walkbridge, Livingroom, Jetplane, and Lake images without and with the nonuniform sampler, respectively.

IV-E Recovery Performance under Fixed Sampling Structures

This section compares the performance of different recovery approaches under fixed sampling structures. To this intent, we considered two sampling scenarios: the conventional pure random sampler and our measurement-adaptive sampler. For recovering the scattered samples obtained from the proposed intelligent sampling stage as well as from the random sampling structure, various approaches such as scattered-data interpolators can be employed. Therefore, in addition to modern sparse recovery techniques [6, 13, 15, 16, 19], we applied the prominent scattered-data spline interpolator to separately investigate the efficiency of our CA-based recoverer under the considered scenarios.

In this experiment, we address the problem of image reconstruction at different scales. To do this, we selected 4 mega-pixel images from the RCID database at their original sizes. These images are shown in the second row of Fig. 12. To evaluate the efficiency of the mentioned approaches, we generated, using bicubic interpolation, 6 down-sampled versions of each image to finally construct a 7-level pyramid. Table III compares the proposed CA-based recovery algorithm with modern reconstruction methods under the random and adaptive sampling scenarios for the original and scaled images. From the lowest to the highest resolution, i.e., in the direction of increasing correlation, the average dynamic sampling rates (%) were 42.95, 38.63, 34.4, 30.21, 26.69, 22.37, and 15.59, respectively. For a fair comparison, we set the random sampling rate equal to the average dynamic rate determined by the measurement-adaptive sampler. The bolded values in the table show the best performance. Except at the very low resolution levels, the ISP, TSCW-GDP-HMT, BCS, and spline methods failed to yield viable results due to the computational complexity of matrix operations or huge memory requirements. Because of heavy blockiness effects, the BCS algorithm fails to reconstruct images under both the random measurement and the adaptive scheme, especially for lowpass images. Generally, state-of-the-art results were obtained by the configurations of measurement-adaptive sampler+spline at low resolutions and measurement-adaptive sampler+CA at high resolutions. This demonstrates the power of the measurement-adaptive sampling scheme.

It is noticeable that the IMAT and IMATI recovery methods reconstruct the whole image rather than performing block-wise processing. The sub-optimal parameters of the adaptive thresholding technique of IMAT were tuned empirically. In the ISP algorithm, the DCT was utilized as the sparsifying transform with a hard thresholding function, and its sub-optimal parameters were likewise tuned. The TSCW-GDP-HMT method uses the Generalized Double Pareto (GDP) distribution for modeling signal sparsity. An inherent limitation of this algorithm is that the input image should be square with a length that is a multiple of 8. To tackle this problem and compare the different approaches as fairly as possible, we first considered the nearest valid length greater than or equal to the dimensions reported in Table III, and then rescaled the recovered image to the real size. The BCS algorithm utilizes the Daubechies-1 2-D wavelet at four decomposition levels. We also used a MATLAB built-in function to implement the spline interpolation.

Fig. 12: From left to right, images in the first and the second rows indexed by numbers 1 to 4, show representative infra-red and RCID mega-pixel images [35], respectively.
Image number Size at pyramid level
Random sampling scenario Proposed measurement-adaptive sampling scenario
IMAT IMATI ISP TSCW-GDP-HMT BCS Spline CA IMAT IMATI ISP TSCW-GDP-HMT BCS Spline CA
1 22.79 20.27 20.11 27.7 7.93 25.71 25.64 21.2 20.27 18.56 25.54 8.43 27.49 26.56
22.88 24.1 F 29.56 8.77 26.44 26.02 21.03 21.73 F 28.55 8.94 28.37 27.15
22.61 24.02 F F F F 25.07 20.27 21 F F F F 26.88
22.82 25.46 F F F F 25.36 20.15 22.36 F F F F 27.93
23.84 27.82 F F F F 27.81 19.87 23.26 F F F F 30.27
24.74 31.57 F F F F 31.49 18.59 24.19 F F F F 33.3
22.45 35.31 F F F F 35.33 15.58 26.48 F F F F 36.7
2 21.92 19.7 18.26 25.96 8.75 25.62 24.63 23.86 19.4 20.5 24.03 7.01 27.18 25.78
22.82 23.72 F 27.3 7.63 26.06 25.27 23.9 23.87 F 26.23 8.79 27.84 26.63
23.22 24.36 F F F F 25.7 23.22 24.43 F F F F 27.3
23.64 26.22 F F F F 26.7 22.44 25.05 F F F F 28.17
24.22 27.74 F F F F 28.1 21.49 25.39 F F F F 29.78
24.58 29.61 F F F F 30.27 19.19 25.25 F F F F 32.19
21.92 31.51 F F F F 32.77 14.73 25.44 F F F F 35.16
3 21.99 20.77 16.42 25.82 11.19 24.68 23.51 22.84 21.13 17.23 24.25 10.24 26.68 24.85
23.21 22.8 F 28.7 10.35 25.55 24.33 22.33 23.64 F 27.47 9.24 27.55 26.51
23.99 26.07 F F F F 25.82 19.87 19.51 F F F F 27.87
25.07 28.43 F F F F 27.48 17.29 19.16 F F F F 29.43
26.47 31.77 F F F F 30.43 16.04 20 F F F F 32.3
27.36 35.47 F F F F 34.41 14.91 21.73 F F F F 35.71
24.37 35.9 F F F F 36.76 13.94 26.33 F F F F 37.06
4 23.11 19.12 20.01 27.84 7.88 26.91 24.95 22.11 19.64 20.84 25.81 9.71 27.58 26.08
22.97 23.64 F 28.35 8.26 26.29 25.4 21.14 22.61 F 27.55 8.34 27.68 26.55
22.72 24.49 F F F F 25.36 20.67 22.85 F F F F 26.92
22.35 24.54 F F F F 25.25 20.27 23.04 F F F F 27.34
21.97 24.03 F F F F 25.07 19.64 23.14 F F F F 27.59
21.74 24.33 F F F F 25.88 18.57 23.23 F F F F 28.99
20.14 25.15 F F F F 27.3 16.32 23.39 F F F F 31.2
TABLE III: Analyzing the performance of different recovery techniques in the case of two distinct sampling scenarios on images shown in the second row of Fig. 12. For each scenario, the values in bold type show the best performance.

IV-F Evaluation under Fixed Recovery Algorithms

In this experiment, we evaluate the performance of three sampling methods, including the pure random sampler, the state-of-the-art dynamic MAR sampling approach [10], and the suggested sampler, under fixed recovery algorithms. We utilized the 21 well-known test images at their original size for performance evaluation. As shown in Table III, only IMAT, IMATI, and the proposed CA-based recoverer can restore images of such dimensions; hence, these recovery algorithms were considered for recovering the subsampled images. The MAR sensing matrix is basically generated from low-resolution versions of the images. For a fair comparison with this method, we created the subsampled images from the Hadamard product of the MAR masks and the original images on the recovery side. Figure 13 depicts the average PSNR criterion in dB for the different sampling/recovery configurations. The sampling rate for all samplers is the same and equal to 23.75%. The first and second ranks belong to the proposed measurement-adaptive sampler+CA recoverer and the MAR sampler+CA recoverer, respectively. It is noticeable that the performances of the IMAT and IMATI recovery algorithms under the pure random sampling strategy are better than under the other state-of-the-art samplers, due to the nature of these algorithms.

Fig. 13: The comparison of recovery performance for different sampling approaches.

IV-G Recovery Robustness in the Presence of Noise

In order to evaluate the robustness of the proposed CA-based recovery algorithm in the presence of noise, we model the noisy subsampled image, $\mathbf{Y}$, as

$$\mathbf{Y}=\mathbf{X}_{s}+\mathbf{M}\odot\mathbf{N}, \qquad (10)$$

where the matrices $\mathbf{X}_{s}$, $\mathbf{M}$, and $\mathbf{N}$ denote the noise-free subsampled image, the binary sampling mask, and Additive White Gaussian Noise (AWGN) with the distribution $\mathcal{N}(0,\sigma_{n}^{2})$, respectively. We then applied the version of $\mathbf{Y}$ normalized between 0 and 1 to the CA recoverer. Figure 14 plots the average PSNR in dB vs the noise variance, $\sigma_{n}^{2}$, for the 21 well-known test images. In this figure, $\sigma_{n}^{2}=0$ means the noise-free case. The suggested method shows better performance in terms of the noise variance than the IMAT and IMATI approaches. All curves decay exponentially with increasing $\sigma_{n}^{2}$. As mentioned before, the other algorithms fail to recover these images due to the data dimensionality.
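The noisy-measurement simulation of (10) can be sketched as follows; the normalization step mirrors the text, and the fixed seed is only for reproducibility.

import numpy as np

def noisy_measurements(X, M, var, rng=np.random.default_rng(0)):
    """Simulate Eq. (10): AWGN is added to the sampled pixels only."""
    N = rng.normal(0.0, np.sqrt(var), X.shape)
    Y = M * X + M * N                                   # Eq. (10)
    return (Y - Y.min()) / (Y.max() - Y.min() + 1e-12)  # normalize to [0, 1]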

Fig. 14: Average PSNR (dB) vs the noise variance, $\sigma_{n}^{2}$, for the case of noisy measurements.

IV-H Performance of Joint Sampling/Recovery Frameworks

In practice, the designs of the sampling structure and the recovery stage are closely related. Here, we jointly compare the performance of the proposed sampling/recovery with other state-of-the-art frameworks [7, 13, 15, 16, 6]. To do this, we collected 27 randomly selected samples from the Microsoft Object Recognition Database, as shown in Fig. 15, which were already employed in [7]. First, the randomly chosen images were cropped to highlight the salient objects in the scene, and then their gray-scale versions were resized to equal dimensions using bicubic interpolation, to be able to compare algorithms of various complexities. Figure 16 plots the NRE vs the image number, as indexed by 1 to 27 in Fig. 15, for the competing methods. The average NREs are given in parentheses in the legend. As shown, the average NRE of the proposed scheme is lower than those of the other methods. BCS failed to recover the smooth images 4 and 18, whereas our scheme retrieved them with minimum recovery error. For a fair comparison, we set the sampling rate of the competing methods to the average rate obtained adaptively by our sampler.

Fig. 15: From top to bottom and left to right, test samples of the Microsoft Database indexed by numbers 1 to 27.
Fig. 16: Normalized Recovery Error (NRE) of different approaches for the images labeled in Fig. 15.

V Conclusion

This paper suggested a measurement-adaptive sparse image sampling and recovery framework. The proposed sampler adaptively measures the required sampling rate and accordingly determines the sample positions by a combined uniform, random, and nonuniform sampling mechanism. Our idea originated from the fact that natural images may have lowpass, bandpass, highpass, or sparse components depending on the scene under view. Therefore, a measurement-adaptive mechanism was advised to trap informative samples. Unlike the Gaussian measurement sensing scenario, the proposed sparse coding style does not need all samples in advance.

In the recovery phase, we modeled the problem by a cellular automaton machine. The suggested algorithm always converges within a few generations. Low computational burden and memory usage are the two main advantages of this algorithm in comparison with the sophisticated techniques of ISP [16], TSCW-GDP-HMT [19], and BCS [6], especially in the reconstruction of mega-pixel range images such as those in remote sensing. In the CA-based recovery algorithm, the updating rule for reviving dead cells is a simple weighted averaging; as future work, more precise predictors can be utilized for this purpose. Also, the suggested sampling/recovery pipeline can be generalized to other sparsifying transform domains, like wavelets. Extensive tests on standard image data-sets, infra-red, and mega-pixel imaging show the capabilities of the proposed technique for practical implementations of compressively sampled imaging systems.

Appendix

Based on the JPEG standard [25], the quantization matrix corresponding to the texture $\tau$ can be formulated as

$$\mathbf{Q}_{\tau}=\max\left(\left[\frac{s\,\mathbf{Q}_{50}+50\,\mathbf{J}}{100}\right],\ \mathbf{J}\right), \qquad (11)$$

in which the matrix $\mathbf{J}$ denotes an $8\times8$ all-one matrix. Also, the scaling factor, $s$, and the reference quantization table, $\mathbf{Q}_{50}$, are determined by

$$s=\begin{cases}5000/q, & 1\le q<50\\ 200-2q, & 50\le q\le100\end{cases} \qquad (12)$$

with the quality level $q$ set from the texture $\tau$ according to Observation 1, and

$$\mathbf{Q}_{50}=\begin{bmatrix}
16 & 11 & 10 & 16 & 24 & 40 & 51 & 61\\
12 & 12 & 14 & 19 & 26 & 58 & 60 & 55\\
14 & 13 & 16 & 24 & 40 & 57 & 69 & 56\\
14 & 17 & 22 & 29 & 51 & 87 & 80 & 62\\
18 & 22 & 37 & 56 & 68 & 109 & 103 & 77\\
24 & 35 & 55 & 64 & 81 & 104 & 113 & 92\\
49 & 64 & 78 & 87 & 103 & 121 & 120 & 101\\
72 & 92 & 95 & 98 & 112 & 100 & 103 & 99
\end{bmatrix}.$$
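The texture-dependent table of (11)-(12) follows the standard IJG scaling; reading the quality level directly from the texture percentage is our interpretation of Observation 1.

import numpy as np

# Standard IJG luminance reference table (quality level 50).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def quantization_table(tau):
    """IJG table for quality q, with q set from the texture tau (Observation 1)."""
    q = int(np.clip(tau, 1, 100))
    s = 5000.0 / q if q < 50 else 200.0 - 2.0 * q   # scaling factor, Eq. (12)
    Q = np.floor((s * Q50 + 50.0) / 100.0)          # IJG rounding
    return np.maximum(Q, 1.0)                       # entries at least one, Eq. (11)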

Acknowledgements

This Postdoc research was jointly sponsored by the Iran National Science Foundation (INSF) and ACRI of Sharif University of Technology under agreement numbers 95/SAD/47585 and 7000/6642, respectively. The authors would like to thank Dr. Z. Sadeghigol, who provided the TSCW-GDP-HMT codes for comparison. We also thank Prof. A. Amini, Mr. A. Esmaeili, and other researchers in the Signal Processing and Multimedia Lab for their priceless comments.

References

  • [1] Y. C. Eldar and G. Kutyniok, Compressed sensing: Theory and applications.   Cambridge University Press, 2012.
  • [2] F. Marvasti, Nonuniform sampling: Theory and practice.   Springer Science & Business Media, 2012.
  • [3] J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 14–20, 2008.
  • [4] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  • [5] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.
  • [6] S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,” IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2346–2356, 2008.
  • [7] B. Shahrasbi and N. Rahnavard, “Model-based nonuniform compressive sampling and recovery of natural images utilizing a wavelet-domain universal hidden markov model,” IEEE Trans. Signal Process., vol. 65, no. 1, pp. 95–104, 2017.
  • [8] Z. Wang and G. R. Arce, “Variable density compressed image sampling,” IEEE Trans. Image Process., vol. 19, no. 1, pp. 264–270, 2010.
  • [9] M. L. Malloy and R. D. Nowak, “Near-optimal adaptive compressed sensing,” IEEE Trans. Inf. Theory, vol. 60, no. 7, pp. 4001–4012, 2014.
  • [10] J. Yang, W. E. I. Sha, H. Chao, and Z. Jin, “High-quality image restoration from partial mixed adaptive-random measurements,” Multimed. Tools Appl., vol. 75, no. 11, pp. 6189–6205, 2016.
  • [11] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
  • [12] T. Blumensath and M. E. Davies, “Normalized iterative hard thresholding: Guaranteed stability and performance,” IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 298–309, 2010.
  • [13] F. Marvasti, M. Azghani, P. Imani, P. Pakrouh, S. J. Heydari, A. Golmohammadi, A. Kazerouni, and M. Khalili, “Sparse signal processing using iterative method with adaptive thresholding (IMAT),” in IEEE ICT, 2012, pp. 1–6.
  • [14] E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: Universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  • [15] A. I. Zayed and G. Schmeisser, New perspectives on approximation and sampling theory.   Springer, 2014.
  • [16] M. Sadeghi and M. Babaie-Zadeh, “Iterative sparsification-projection: Fast and robust sparse signal approximation,” IEEE Trans. Signal Process., vol. 64, no. 21, pp. 5536–5548, 2016.
  • [17] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm,” IEEE Trans. Signal Process., vol. 57, no. 1, pp. 289–301, 2009.
  • [18] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395–1408, 2009.
  • [19] Z. Sadeghigol, M. H. Kahaei, and F. Haddadi, “Model based variational Bayesian compressive sensing using heavy tailed sparse prior,” Signal Process.: Image, vol. 41, pp. 158–167, 2016.
  • [20] M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 83–91, 2008.
  • [21] X. Li, “Anisotropic mesh adaptation for image representation,” EURASIP J. IMAGE VIDE., vol. 2016, no. 1, p. 26, 2016.
  • [22] A. Taimori, F. Razzazi, A. Behrad, A. Ahmadi, and M. Babaie-Zadeh, “Quantization-unaware double JPEG compression detection,” J. Math. Imaging Vis., vol. 54, no. 3, pp. 269–286, 2016.
  • [23] R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610–621, 1973.
  • [24] R. Jain, R. Kasturi, and B. G. Schunck, Machine vision.   McGraw-Hill, Inc., 1995.
  • [25] W. Luo, J. Huang, and G. Qiu, “JPEG error analysis and its applications to digital image forensics,” IEEE Trans. Inf. Forensics Security, vol. 5, no. 3, pp. 480–491, 2010.
  • [26] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Process. Mag., vol. 25, no. 2, pp. 72–82, 2008.
  • [27] R. Hasimoto-Beltrán, S. Baqai, and A. Khokhar, “Transform domain inter-block interleaving schemes for robust image and video transmission in ATM networks,” J. Vis. Commun. Image R., vol. 15, no. 4, pp. 522–547, 2004.
  • [28] Z.-L. Bai and Q. Huo, “A study on the use of 8-directional features for online handwritten Chinese character recognition,” in IEEE ICDAR, 2005, pp. 262–266.
  • [29] L.-L. Huang, A. Shimizu, Y. Hagihara, and H. Kobatake, “Gradient feature extraction for classification-based face detection,” Pattern Recogn., vol. 36, no. 11, pp. 2501–2511, 2003.
  • [30] J. L. Schiff, Cellular automata: A discrete view of the world.   John Wiley & Sons, 2011, vol. 45.
  • [31] P. Rosin, A. Adamatzky, and X. Sun, Cellular automata in image processing and geometry.   Springer, 2014, vol. 10.
  • [32] Q. Liu, A. H. Sung, and M. Qiao, “Neighboring joint density-based JPEG steganalysis,” ACM Trans. Intell. Syst. Technol., vol. 2, no. 2, 2011, article 16.
  • [33] A. Olmos and F. A. A. Kingdom, “A biologically inspired algorithm for the recovery of shading and reflectance images,” Perception, vol. 33, no. 12, pp. 1463–1473, 2004.
  • [34] G. Schaefer and M. Stich, “UCID: An uncompressed color image database,” in SPIE SRMAM, vol. 5307, 2004, pp. 472–480.
  • [35] A. Taimori, F. Razzazi, A. Behrad, A. Ahmadi, and M. Babaie-Zadeh, “A novel forensic image analysis tool for discovering double JPEG compression clues,” Multimed. Tools Appl., vol. 76, no. 6, pp. 7749–7783, 2017.
  • [36] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
  • [37] H. Chen, N. Xi, B. Song, L. Chen, J. Zhao, K. W. C. Lai, and R. Yang, “Infrared camera using a single nano-photodetector,” IEEE Sensors J., vol. 13, no. 3, pp. 949–958, 2013.