Hardware Implementation of Compressed Sensing based Low Complex Video Encoder
Abstract
This paper proposes a memory-efficient VLSI architecture for a low-complexity video encoder based on the three-dimensional (3D) wavelet transform and Compressed Sensing (CS), targeted at space and low-power video applications. Most conventional video coding schemes are based on a hybrid model, which requires complex operations such as transform coding (DCT), motion estimation and deblocking filtering at the encoder. The complexity of the proposed encoder is reduced by replacing these complex operations with the 3D DWT and CS. The proposed architecture uses the 3D DWT to enable scalability with the levels of wavelet decomposition and to exploit the spatial and temporal redundancies, while CS provides good error resilience and coding efficiency. In the first stage of the proposed encoder, the 3D DWT is applied (lifting-based 2D DWT in the spatial domain and the Haar wavelet in the temporal domain) on each frame of a group of frames (GOF); in the second stage, the CS module exploits the sparsity of the wavelet coefficients. A small set of linear measurements is extracted by projecting the sparse 3D wavelet coefficients onto a random Bernoulli matrix at the encoder. Compared with the best existing 3D DWT architectures, the proposed 3D DWT architecture requires less memory and provides higher throughput; for an N×N image it consumes only a small amount of on-chip memory for one level of decomposition. The proposed encoder architecture is the first of its kind and, to the best of our knowledge, no comparable architecture has been reported for comparison. The proposed VLSI architecture of the encoder has been synthesized in a 90 nm CMOS process technology; it consumes 90.08 mW of power and occupies an area equivalent to 416.799 K gates at a frequency of 158 MHz. The architecture has also been synthesized for a Xilinx Zynq 7020 series field programmable gate array (FPGA).
Index Terms: Scalable Video Coding (SVC), Compressed Sensing (CS), 3D wavelets, VLSI.
1 Introduction
Current video coding standards (e.g., H.264 and HEVC) [1][2] provide good compression using high-complexity encoders. At the encoder, motion estimation (using block matching) is applied between adjacent frames to exploit the temporal redundancy. Each reference and residual frame (motion-compensated differences) is then divided into non-overlapping blocks (block sizes may vary from 8×8 to 64×64 pixels) and transform coding (e.g., DCT) is applied on each block to exploit the spatial redundancy. Motion estimation and transform coding account for nearly 70% of the total complexity of the encoder [3]. Moreover, block-wise transform coding leads to blocking artifacts in the motion-compensated frame, which may be reduced by a deblocking filter; however, this further increases the complexity of the encoder. In contrast, the decoder complexity is very low: its main function is to reconstruct the video frames using the reference frame, the motion-compensated residuals and the motion vectors. Such codecs are well suited to broadcasting applications, where one high-complexity encoder serves thousands of low-complexity decoders. However, conventional video coding schemes are not suitable for applications that require low-complexity encoders, such as mobile phones and camcorders, which demand low-complexity, low-power and low-cost devices. A high-complexity encoder improves the compression ratio but increases the power consumption. Therefore, to increase battery life in mobile devices, a low-complexity encoder with good coding efficiency is highly desirable.
In a mobile video broadcast network (wireless network), a video source is broadcast to multiple receivers, which may have different channel capacities, display resolutions, or computing facilities. It is necessary to encode and transmit the video source once, yet allow any subset of the bit stream to be successfully decoded by a receiver. To reduce the error rate in a wireless broadcast network, error correction codes such as Reed-Solomon (RS) and convolutional codes have been widely used. However, this type of channel coding is not flexible: it can correct bit errors only if the error rate is smaller than a given threshold. It is therefore hard to find a single channel code suitable for different channels with different capacities. For broadcast applications, without feedback from individual receivers, the sender can only retransmit data that are helpful to all the receivers. These requirements are difficult and challenging for traditional channel coding design. It is thus desirable to have an encoder that is low in complexity, offers good coding efficiency and error resilience, is scalable, and supports real-time applications.
This paper introduces a new VLSI architecture for a scalable low-complexity encoder using the 3D DWT and compressed sensing. Fig. 1(a) shows the block diagram of the low-complexity video codec (encoder and decoder). The encoder has the 3D DWT and CS as its main functional modules, as shown in Fig. 1(b). The 3D DWT module provides scalability with the levels of decomposition and exploits the spatial and temporal redundancies of the video frames; it replaces the transform coding, motion estimation and deblocking filters of current video coding systems. The CS module exploits the sparse nature of the wavelet coefficients, projecting them onto random Bernoulli matrices to select the measurements at the encoder and enable compression; an approximate message passing algorithm is used for reconstruction at the decoder. The CS module provides a good compression ratio and improves error resilience. As a result, the proposed architecture enjoys low complexity at the encoder and only marginal complexity at the decoder.
Over the last two decades, several hardware designs have been reported for the implementation of the 2D DWT and 3D DWT for different applications. Most designs fall into three categories: (i) convolution based, (ii) lifting based and (iii) B-spline based. Most existing architectures suffer from large memory requirements, low throughput, and complex control circuitry. In general, circuit complexity has two major components: arithmetic and memory. The arithmetic component includes adders and multipliers, whereas the memory component consists of temporal memory and transpose memory. The complexity of the arithmetic component depends mainly on the DWT filter length; in contrast, the size of the memory component depends on the image dimensions. As image resolutions continuously increase (HD to UHD), image dimensions become very large compared to the DWT filter length; as a result, the memory component occupies the major share of the overall complexity of a DWT architecture.
Convolution-based implementations [5][7] produce outputs in less time but require a large amount of arithmetic resources, are memory intensive and occupy a larger area. Lifting-based implementations require less memory, have lower arithmetic complexity and can be parallelized; however, they have a long critical path, and many recent contributions aim to reduce it. The general lifting-based structure of [8] has a critical path of 4Tm + 8Ta, which a 4-stage pipeline cuts down to Tm + 2Ta. In [9], Huang et al. introduced a flipping structure that further reduces the critical path to Tm. Although flipping reduces the critical path delay of lifting-based implementations, their memory efficiency still needs improvement. Most designs implement the 2D DWT by first applying the 1D DWT row-wise and then column-wise, which requires a large memory to store the intermediate coefficients. To reduce this memory requirement, several DWT architectures based on line-based scanning methods have been proposed [10][14]. Huang et al. [10][11] gave brief details of a B-spline-based 2D IDWT implementation, discussed the memory requirements of different scan techniques, and proposed an efficient overlapped strip-based scanning to reduce the internal memory size. Several parallel architectures have been proposed for lifting-based 2D DWT [11][20]. Y. Hu et al. [20] proposed a modified strip-based scanning and parallel architecture for the 2D DWT that is the most memory-efficient design among existing 2D DWT architectures, requiring only 3N + 24P words of on-chip memory for an N×N image with P parallel processing units (PUs). Several lifting-based 3D DWT architectures have been reported in the literature [21][26] that reduce the critical path of the 1D DWT architecture and decrease the memory requirement of the 3D architecture. Among the best existing 3D DWT designs, Darji et al. [26] produced the best results, reducing the memory requirements and giving a throughput of 4 results/cycle; still, it requires a considerable amount of on-chip memory.
Based on the ideas of compressed sensing (CS) [27][29], several new video codecs [30][35] have been proposed in the last few years. Wakin et al. [30] introduced compressive imaging and video encoding through the single-pixel camera, and established that the 3D wavelet transform is a better choice for video than the two-dimensional (2D) wavelet transform. Y. Hou and F. Liu [31] proposed a low-complexity system in which sparsity is extracted from the residuals of successive non-key frames and CS is applied on those frames; key frames are fully sampled, resulting in an increased bitrate. Moreover, performing motion estimation and compensation while predicting the non-key frames increases the encoder complexity. S. Xiang and Lin Cai [32] proposed a CS-based scalable video coding in which the base layer is composed of a small set of DCT coefficients while the enhancement layer is composed of compressed-sensed measurements. It uses the DCT for I frames and the undecimated DWT (UDWT) for CS measurements, which greatly increases the complexity at the decoder. Jiang et al. [33] proposed CS-based scalable video coding using the total variation of the coefficients of the temporal DCT. Scalability is enabled by multiresolution measurements, while the video signal is reconstructed by total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) [34] at the decoder. However, this increases the decoder complexity, making hardware implementation quite difficult. J. Ma et al. [35] introduced fast and simple online encoding and decoding by a forward and backward splitting algorithm; though the encoder complexity is low, scalability is not achieved and the decoder complexity is very high. Most of the recently proposed video codecs [30][35] assume uniform sparsity across all the video frames and transmit a fixed number of measurements to the decoder for every frame. However, sparsity may change with the content of the video frame, and a fixed number of measurements may increase the bitrate (decrease the compression ratio).
This paper introduces a new compressed-sensing-based low-complexity encoder architecture using the 3D DWT. The proposed method uses a random Bernoulli sequence at the encoder for selecting the measurements and the approximate message passing algorithm for reconstruction at the decoder. The major contributions of the present work are as follows. First, the proposed framework revises the MCTF-based SVC model [36] by introducing compressed sensing concepts to increase the compression ratio and reduce complexity; as a result, it ensures low complexity at the encoder and marginal complexity at the decoder. Second, we propose a new architecture for the 3D DWT, which requires a small on-chip memory and delivers a throughput of 8 results/cycle. Third, we propose an efficient architecture for the compressed sensing module.
The organization of the paper is as follows. Fundamentals of the 3D DWT and compressed sensing are presented in Section II. Detailed descriptions of the proposed architectures for the 3D DWT and compressed sensing modules are provided in Sections III and IV respectively. Results and comparisons are given in Section V. Finally, concluding remarks are given in Section VI.
2 Theoretical Framework
This section presents the theoretical background of wavelets and compressed sensing. The 3D DWT is used to exploit the spatial and temporal redundancies of the video, thereby eliminating complex operations such as ME, MC and the deblocking filter. Compressed sensing is used to provide error resilience and coding efficiency.
2.1 Discrete Wavelet Transform
The lifting-based wavelet transform is designed using a series of matrix decompositions specified by Daubechies and Sweldens in [8]. By applying flipping [9] to the lifting scheme, the multipliers in the longest delay path are eliminated, resulting in a shorter critical path. The original data on which the DWT is applied is denoted by x(n); the 1D DWT outputs are the detail coefficients H(n) and approximation coefficients L(n). For an image (2D), the above process is performed on rows and on columns as well. Eqns. (1)(6) are the design equations for the flipping-based lifting (9/7) 1D DWT [37], and the same equations are used to implement the proposed row processor (1D DWT) and column processor (1D DWT).
(1) d1(n) = (1/α)·x(2n+1) + x(2n) + x(2n+2)

(2) a1(n) = (1/(αβ))·x(2n) + d1(n−1) + d1(n)

(3) d2(n) = (1/(βγ))·d1(n) + a1(n) + a1(n+1)

(4) a2(n) = (1/(γδ))·a1(n) + d2(n−1) + d2(n)

(5) H(n) = (αβγ/ζ)·d2(n)

(6) L(n) = (ζαβγδ)·a2(n)

where d1(n) and d2(n) are the intermediate detail values, and a1(n) and a2(n) are the intermediate approximation values [8]. The lifting-step coefficients α, β, γ, δ and the scaling coefficient ζ are constants, with values α = −1.586134342, β = −0.052980118, γ = 0.882911075, δ = 0.443506852, and ζ = 1.149604398.
Lifting-based wavelets are memory efficient and easy to implement in hardware. The lifting scheme decomposes the samples in three steps, namely splitting, predicting (eqns. (1) and (3)), and updating (eqns. (2) and (4)).
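As a behavioural reference for the lifting steps above, the decomposition can be sketched in software. The NumPy model below implements the conventional (non-flipped) lifting 9/7 transform with the standard CDF 9/7 coefficients and simple boundary replication; it is a didactic sketch, not the pipelined fixed-point datapath described later.

```python
import numpy as np

# Standard CDF 9/7 lifting coefficients; the flipping variant computes the
# same transform but redistributes the multipliers to shorten the critical path
ALPHA, BETA = -1.586134342, -0.052980118
GAMMA, DELTA, ZETA = 0.882911075, 0.443506852, 1.149604398

def dwt97_1d(x):
    """One level of lifting-based 9/7 1D DWT (even-length input)."""
    x = np.asarray(x, dtype=float)
    a, d = x[0::2].copy(), x[1::2].copy()          # split into even/odd samples
    d += ALPHA * (a + np.append(a[1:], a[-1]))     # predict 1
    a += BETA * (np.insert(d[:-1], 0, d[0]) + d)   # update 1
    d += GAMMA * (a + np.append(a[1:], a[-1]))     # predict 2
    a += DELTA * (np.insert(d[:-1], 0, d[0]) + d)  # update 2
    return a * ZETA, d / ZETA                      # scaled L and H outputs

def idwt97_1d(L, H):
    """Exact inverse: undo the lifting steps in reverse order."""
    a, d = L / ZETA, H * ZETA
    a -= DELTA * (np.insert(d[:-1], 0, d[0]) + d)
    d -= GAMMA * (a + np.append(a[1:], a[-1]))
    a -= BETA * (np.insert(d[:-1], 0, d[0]) + d)
    d -= ALPHA * (a + np.append(a[1:], a[-1]))
    x = np.empty(2 * len(a))
    x[0::2], x[1::2] = a, d                        # interleave even/odd samples
    return x
```

Because each lifting step is trivially invertible, the round trip is exact up to floating-point error, which is what makes the scheme attractive for hardware.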
The Haar wavelet transform is orthogonal, simple to construct and fast to compute. Considering these advantages, the proposed architecture uses the Haar wavelet to perform the 1D DWT in the temporal direction (between two adjacent frames). Sweldens et al. [45] developed a lifting-based Haar wavelet; the equations of the lifting scheme for the Haar wavelet transform are shown in eqn. (7).
(7) H(n) = x(2n+1) − P·x(2n),   L(n) = x(2n) + U·H(n)

(8) H(n) = x(2n+1) − x(2n),   L(n) = x(2n) + H(n)/2

Eqn. (8) is obtained by substituting the predict value P = 1 and the update value U = 1/2 in eqn. (7), and is used to develop the temporal processor that applies the 1D DWT in the temporal direction, where L and H are the low- and high-frequency coefficients respectively.
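In software, the temporal Haar lifting of eqn. (8) and its exact inverse reduce to a few array operations; a minimal sketch assuming floating-point frames:

```python
import numpy as np

def haar_lift(f1, f2):
    """Lifting Haar along the temporal axis: predict (P = 1), then update (U = 1/2).
    f1 and f2 are corresponding samples of two adjacent frames."""
    H = f2 - f1          # detail (high-frequency) frame: predict step of eqn (8)
    L = f1 + H / 2       # approximation frame; equals (f1 + f2) / 2
    return L, H

def haar_unlift(L, H):
    """Exact inverse: undo the update step, then the predict step."""
    f1 = L - H / 2
    f2 = H + f1
    return f1, f2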
The process shown in Fig. 2 represents one level of decomposition in the spatial and temporal directions. Among all the subbands, only the LLL subband (LL band of the L-frames) is fully sampled and transmitted without applying any CS technique, because it represents the image at low resolution (the base layer in the SVC domain) and is not sparse. All the other subbands (3D wavelet coefficients) exhibit approximate sparsity (values near zero), and hard thresholding is applied (a coefficient is considered zero if its value is less than the threshold). After this step, conventional encoders use EZW coding to encode these wavelet coefficients, which is complex to implement in hardware. In the proposed framework, EZW coding is replaced by CS, which exploits the sparsity-preserving nature of random Bernoulli matrices by projecting the wavelet coefficients onto them. The DWT version of each frame consists of four subbands. The LL subbands of the L-frames have large wavelet coefficients; the remaining three subbands of the L-frames and the four subbands of the H-frames exhibit sparsity, and compressed sensing is applied to them.
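The hard-thresholding step described above can be sketched as follows (the threshold t is application-chosen; the text does not fix a value):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Enforce approximate sparsity: wavelet coefficients with magnitude
    below the threshold t are treated as exactly zero before CS is applied."""
    out = np.asarray(coeffs, dtype=float).copy()
    out[np.abs(out) < t] = 0.0
    return out
```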
2.2 Compressed Sensing
Compressed sensing is an innovative scheme that enables sampling below the Nyquist rate without (or with only a small) drop in reconstruction quality. The basic principle behind compressed sensing is to exploit the sparsity of the signal in some domain. In the proposed work, CS is applied in the wavelet domain.
Let x be a set of N real, discrete-time samples. Let s be the representation of x in the Ψ (transform) domain, that is:

(9) x = Ψs

where s = [s1, s2, …, sN]T is the weighted coefficient vector, Ψ = [ψ1, ψ2, …, ψN], and Ψ is an N×N basis matrix. Assume that the vector s is K-sparse (only K coefficients of s are nonzero) in the domain Ψ, with K ≪ N. To obtain the sparsity of the signal x, conventional transform coding applies Ψ to the whole signal (all N samples) to produce the transform coefficients. Among the N coefficients, N − K or more are discarded because they carry negligible energy, and the remaining K are encoded. The basic idea of CS is to remove this “sampling redundancy” by taking only M samples of the signal, where K < M ≪ N. Let y be an M-element measurement vector given by y = Φx = ΦΨs, where Φ is an M×N measurement matrix; the measurements are nonadaptive linear projections of the signal, with typically M ≪ N.
Recovering the original signal means solving an underdetermined linear system that usually has no unique solution. However, the signal can be recovered losslessly from the M measurements if the measurement matrix is designed so that it preserves the geometry of sparse signals and its submatrices possess full rank. This property, called the Restricted Isometry Property (RIP), mathematically ensures that (1 − δ)·‖s‖2² ≤ ‖ΦΨs‖2² ≤ (1 + δ)·‖s‖2², where ‖·‖2 represents the ℓ2 norm of a vector. It has been observed that random matrices drawn from independent and identically distributed (i.i.d.) Gaussian or Bernoulli distributions satisfy the RIP with high probability.
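As a quick numerical illustration of this norm-preservation property (not a proof), one can measure a K-sparse vector with a ±1 Bernoulli matrix scaled to unit-norm columns and check that the energy ratio stays close to 1; the sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 128, 8

# K-sparse coefficient vector
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Bernoulli measurement matrix with +/-1 entries, scaled to unit-norm columns
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

y = Phi @ s
# For a matrix with RIP constant delta, this ratio lies in [1 - delta, 1 + delta]
ratio = np.linalg.norm(y) ** 2 / np.linalg.norm(s) ** 2
```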
The problem of signal recovery from CS measurements has been well studied in recent years, and a host of algorithms has been proposed, such as Orthogonal Matching Pursuit (OMP) [38][40], Iterative Hard-Thresholding (IHT) [41], and Iterative Soft-Thresholding (IST) [42]. Although the recently introduced Approximate Message Passing (AMP) algorithm [43] has a structure similar to IHT and IST, it exhibits faster convergence. The literature [43],[44] shows that AMP performs excellently for many deterministic and highly structured matrices.
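Of the listed algorithms, IHT is the simplest to sketch. The toy NumPy version below (a didactic sketch; the decoder in this work uses AMP) alternates a gradient step with keeping the K largest-magnitude entries, using a step size below 1/‖Φ‖2² so the residual never increases:

```python
import numpy as np

def iht(Phi, y, K, iters=200):
    """Iterative Hard-Thresholding: s <- H_K(s + mu * Phi^T (y - Phi s)),
    where H_K keeps the K largest-magnitude entries. The step size mu is
    kept below 1/||Phi||_2^2 so the residual is non-increasing."""
    mu = 0.9 / np.linalg.norm(Phi, 2) ** 2
    s = np.zeros(Phi.shape[1])
    for _ in range(iters):
        s = s + mu * (Phi.T @ (y - Phi @ s))
        s[np.argsort(np.abs(s))[:-K]] = 0.0   # zero all but the K largest
    return s

# K-sparse test vector measured with a scaled Bernoulli matrix (M = N/2 here)
rng = np.random.default_rng(1)
N, M, K = 64, 32, 4
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ s_true
s_hat = iht(Phi, y, K)
```

AMP replaces the plain gradient step with an Onsager-corrected residual update, which is what gives it the faster convergence noted above.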
3 Proposed architecture for 3D DWT
The proposed architecture for the 3D DWT, comprising two parallel spatial processors (2D DWT) and four temporal processors (1D DWT), is depicted in Fig. 1(b). After the 2D DWT is applied on two consecutive frames, each spatial processor (SP) produces 4 subbands, viz. LL, HL, LH and HH, which are fed to the inputs of the four temporal processors (TPs) to perform the temporal transform. The output of these TPs is a low-frequency frame (L-frame) and a high-frequency frame (H-frame). Architectural details of the spatial and temporal processors are discussed in the following sections.
3.1 Architecture for Spatial Processor
In this section, we propose a new parallel and memory-efficient lifting-based 2D DWT architecture, denoted the spatial processor (SP), consisting of a row processor and a column processor. The proposed SP is a revised version of the architecture developed by Y. Hu et al. [20]. It utilizes strip-based scanning [20] to enable a trade-off between external and internal memory. To reduce the critical path in each stage, the flipping model [9][37] is used to develop the processing elements (PEs), and each PE is built with shift-and-add techniques in place of a multiplier. The lifting-based (9/7) 1D DWT is performed by the processing unit (PU) of the proposed architecture. To reduce the CPD, the processing unit is designed with five pipeline stages, and the multipliers are replaced with shift-and-add circuits; this modified PU reduces the CPD to 2Ta (two adder delays). Fig. 3(a) shows the data flow graph (DFG) of the proposed PU and Fig. 3(b) depicts its internal architecture. The number of inputs to the spatial processor is 2P + 1, which is also the width of the strip, where P is the number of parallel processing units (PUs) in the row processor as well as the column processor. We have designed the proposed architecture with two parallel processing units (P = 2); the same structure can be extended to P = 4, 8, 16 or 32 depending on the external bandwidth. Whenever the row processor produces intermediate results, the column processor immediately starts processing them. The row processor takes 5 clocks to produce the intermediate results, after which the column processor takes 5 more clocks to give the 2D DWT output; finally, the temporal processor takes 2 more clocks after the 2D DWT results are available to produce the 3D DWT output. In summary, the proposed 2D DWT and 3D DWT architectures have constant latencies of 10 and 12 clock cycles respectively, regardless of the image size N and the number of parallel PUs (P).
Details of the row processor and column processor are given in the following subsections.
Row Processor (RP)
Let X be an image of size N×N, extended by one column using symmetric extension so that its size becomes N×(N+1). Refer to [20] for the structure of the strip-based scanning method. The proposed architecture initiates the DWT process row-wise through the row processor (RP) and then processes the column DWT through the column processor (CP). Fig. 4(a) shows the generalized structure of a row processor with P PUs; P = 2 is considered in our design. In the first clock cycle, the RP gets the pixels X(0,0) to X(0,2P) simultaneously. In the second clock cycle, it gets the pixels of the next row, i.e., X(1,0) to X(1,2P); the same procedure continues each clock until the bottom row, i.e., X(N,0) to X(N,2P). The RP then moves to the next strip, getting the pixels X(0,2P) to X(0,4P), and continues this procedure for the entire image. Each PU consists of five pipeline stages, each processed by one processing element (PE), as depicted in Fig. 3(b). The first stage (shift-PE) provides the partial results required by the next stage (PE-alpha); likewise, the processing elements PE-alpha to PE-delta give partial results along with their original outputs. For example, in PU1, PE-alpha provides the output corresponding to eqn. (1) and, along with it, the partial output required by PE-beta. The structure of the PEs is given in Fig. 3(b), which shows that multiplication is replaced by the shift-and-add technique. The original multiplication factors and the values realised through the shift-and-add circuit are listed in Table 1; the variation between the original and the adopted values is extremely small. The maximum CPD of these PEs is 2Ta. The outputs of PE-alpha, PE-beta and PE-gama of the last PU are saved in the memories Memory-alpha, Memory-beta and Memory-gama respectively; those stored outputs are used as inputs for the subsequent columns of the same row.
For an N×N image, the number of rows is N, so the size of each memory is N words and the total row memory required to store these outputs is 3N words. The output of each PU undergoes scaling before producing the outputs H and L. These outputs are fed to the transposing unit, which has P transpose registers (one per PU). Fig. 5(a) shows the structure of the transpose register; it supplies two H and two L data alternately to the column processor.
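The scan order described above can be modelled as a simple address generator; a behavioural sketch of the scan order only (not of the RP datapath), with strips 2P+1 columns wide and consecutive strips sharing one column:

```python
def strip_scan_addresses(N, P):
    """Yield (row, column_list) pairs in the strip-based read order: strips of
    width 2P+1 on an N x (N+1) symmetrically extended image, each strip read
    one row segment per clock from the top row to the bottom row, and
    consecutive strips overlapping by one column."""
    width, step = 2 * P + 1, 2 * P
    start = 0
    while start + width <= N + 1:          # N+1 columns after the extension
        for row in range(N):
            yield row, list(range(start, start + width))
        start += step

# Example: an 8-column image with P = 2 PUs gives two overlapping 5-column strips
order = list(strip_scan_addresses(8, 2))
```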
Column Processor (CP)
The structure of the column processor (CP) is shown in Fig. 4(b). To match the RP throughput, the CP is also designed with two PUs in our architecture. Each transpose register produces a pair of H and L values in alternating order, which are fed to the inputs of one PU of the CP. The partial results produced are consumed by the next PE after two clock cycles; as such, shift registers of length two are needed within the CP between the pipeline stages for caching the partial results (with the exception of one pair of adjacent stages). At the output of the CP, the four subbands are generated in an interleaved pattern, i.e., (HL,HH), (LL,LH), (HL,HH), (LL,LH), and so on. The outputs of the CP are fed to the rearrange unit; Fig. 5(b) shows its architecture. Using P registers and 2P multiplexers, it provides the outputs in subband order, i.e., LL, LH, HL and HH, simultaneously. For multilevel decomposition, the same DWT core can be used in a folded architecture with an external frame buffer for the LL subband coefficients.
Table 1: Original multiplier values and their shift-and-add equivalents

PE        Original multiplier value   Multiplier value through shift and add
PE-alpha  1/α = 0.6305                0.6328
PE-beta   1/(αβ) = 11.90              12
PE-gama   1/(βγ) = 21.378             21.375
PE-delta  1/(γδ) = 2.55               2.565
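The shift-and-add substitution is easy to model in software. The shift sets below are plausible reconstructions chosen to reproduce three of the Table 1 values exactly (the text does not list the actual shift sets used in hardware): 0.6328125 = 2^-1 + 2^-3 + 2^-7, 12 = 2^3 + 2^2, and 21.375 = 2^4 + 2^2 + 2^0 + 2^-2 + 2^-3.

```python
def shift_add_mul(x, shifts):
    """Multiply x by sum(2**k for k in shifts) using only shifted copies of x.
    Positive k corresponds to a left shift, negative k to a right shift."""
    return sum(x * 2.0 ** k for k in shifts)

# Hypothetical shift sets that reproduce the "shift and add" column of Table 1
PE_SHIFTS = {
    "PE-alpha": [-1, -3, -7],       # 0.6328125, approximating 0.6305
    "PE-beta":  [3, 2],             # 12, approximating 11.90
    "PE-gama":  [4, 2, 0, -2, -3],  # 21.375, approximating 21.378
}
```

PE-delta's decomposition (2.565 approximating 2.55) would follow the same pattern with its own shift set.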
3.2 Architecture for Temporal Processor (TP)
Eqn. (8) shows that the Haar wavelet transform depends on two adjacent pixel values. As soon as the spatial processors provide the 2D DWT results, the temporal processors start processing those outputs to produce the 3D DWT results. Fig. 1(b) shows that no temporal buffer is required, because the subband coefficients of the two spatial processors are connected directly to the four temporal processors. Each TP is, however, designed with 2 pipeline stages and therefore requires 8 pipeline registers. The same-frequency subbands of the two spatial processors are fed to each temporal processor, i.e., the LL, HL, LH and HH subbands of spatial processors 1 and 2 are given as inputs to temporal processors 1, 2, 3 and 4 respectively. Each temporal processor applies the 1D Haar wavelet on the subband coefficients and provides a low-frequency subband and a high-frequency subband as outputs. Combining the low-frequency and high-frequency subbands of all the temporal processors yields the 3D DWT output in the form of an L-frame and an H-frame (2D DWT by the spatial processors and 1D DWT by the temporal processors).
4 Architecture for Compressed Sensing Module
The proposed 3D DWT module works simultaneously on two video frames of size N×N and provides eight 3D DWT subbands as its output. As shown in Fig. 1(b), CS is applied on all subbands of the 3D DWT output except the LLL band (LL band of the L-frame), and each subband is connected to one CS module. The size of each subband is half that of the original frame in each dimension for one level of decomposition (N/2 × N/2). The main function of the CS module is to calculate the measurement vector y from x and Φ using the CS equation y = Φx, where x is the input vector (on which CS is to be calculated). The size of x is P·(N/2), N/2 being the height of a single column in a subband, because the proposed 3D DWT works on P columns simultaneously, owing to the P PUs in the spatial processor. The proposed architecture is designed with P = 2, so in each clock cycle coefficients of alternate columns are provided by the 3D DWT module for each subband. With P = 2, the size of x is [2 · N/2]×1 = N×1, and Φ is a randomly generated Bernoulli matrix of size M×N, with M ≥ cK log(N/K) for some small constant c, where K (the ℓ0 norm) is the sparsity of the input vector x. We have tested different video sequences of size 512×512 and 1024×1024 with different threshold values (wavelet coefficients below the threshold are considered zero) and observed that K is not more than N/8 for a given x of size N×1. Based on those observations, the value of M has been fixed to N/4.
Fig. 6 shows the internal architecture of the CS module. The proposed CS-based encoder has seven CS modules, one for each subband except the LLL subband. The seven CS modules have the same structure and work simultaneously. A single Bernoulli matrix is used by all seven modules; it is stored in a ROM denoted Bern-mat. The size of Bern-mat is M×N: each location holds M bits representing one entire column, and the number of locations is N. The Bernoulli matrix is generated using the 'binornd' function in Matlab (Φ = binornd(1,0.5,M,N)), with equal probability of 0 and 1, of size M×N. Here bit '0' represents the value '+1' and bit '1' represents '−1'. The generated Bernoulli matrix is loaded into the Bern-mat (ROM) locations and used by all the CS modules. As shown in Fig. 6, the input to a CS module is datain, which is a subband output of the 3D DWT. In each clock cycle, one 15-bit datain arrives (alternate columns in successive clocks). In y = Φx, y is a column vector of size M×1, represented as y = [y0, y1, …, y(M−1)]T. Writing Φ in terms of its columns φ0, φ1, …, φ(N−1), y = Σ xi·φi; equivalently, y can be calculated iteratively by adding xi·φi in every clock cycle. This operation takes N clocks to complete, since i runs from 0 to N−1.
The proposed architecture uses M adders, one for each individual measurement ym. One input of each adder is ±datain, taken from the output of a multiplexer: the corresponding ROM bit is either 0 or 1; if it is 0, datain is multiplied by +1 (datain passes unchanged), otherwise by −1 (the 2's complement of datain). This is achieved by connecting the ROM bit to the selection line of the multiplexer, whose first and second inputs are datain and −datain respectively. The second input of each adder is the partial result of ym from the previous clock. The CS module uses two registers to store the M measurements (y), namely Ymsr1 and Ymsr2, each of capacity M×16 bits (16 bits per measurement). Ymsr1 stores the partial results of y during clocks 0 to N−1. After N clocks, the measurements are ready in Ymsr1; the control circuit then transfers the Ymsr1 data to Ymsr2 and clears Ymsr1 for the next set of measurements. The above procedure is repeated for all the columns of a subband; at the same time, the calculated measurements, each of 16 bits, are sent out (Yout) from Ymsr2, shifting 16 bits per clock. This procedure is followed for all seven subbands. Each measurement vector is sent to the entropy coding (Golomb-Rice coding) block, and the coded bit streams are transmitted through the channel. The LLL subband, considered the base layer, is coded directly by the entropy coding block and then transmitted. Entropy coding is outside the scope of this paper and is not discussed further.
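The per-clock behaviour of this datapath (a ROM bit selecting +datain or its 2's complement, M parallel accumulators cleared every N clocks) can be modelled as follows; this is a behavioural sketch, not the RTL:

```python
import numpy as np

def cs_module(datain_stream, bern_rom):
    """Model of one CS module: accumulate y = Phi @ x, one column per clock.
    bern_rom[i] is the M-bit ROM word for column i (bit 0 -> +1, bit 1 -> -1);
    datain_stream supplies one subband coefficient x_i per clock."""
    M = len(bern_rom[0])
    y = np.zeros(M, dtype=np.int64)           # Ymsr1: M partial accumulators
    for bits, x_i in zip(bern_rom, datain_stream):
        for m, b in enumerate(bits):
            y[m] += x_i if b == 0 else -x_i   # mux: datain or its negation
    return y

# Reference check against the direct matrix product y = Phi @ x
rng = np.random.default_rng(3)
N, M = 16, 4
rom = rng.integers(0, 2, size=(N, M))         # N ROM words of M bits each
x = rng.integers(-100, 100, size=N)
Phi = np.where(rom.T == 0, 1, -1)             # map ROM bits to +/-1 entries
y_hw = cs_module(x, rom)
```

After N iterations the accumulators hold exactly the matrix product, which is why the hardware needs only adders and multiplexers, with no multipliers.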
5 Results and Performance Comparison
5.1 Simulation Results
The proposed encoder has been simulated in Matlab, and its functionality has been verified on video sequences of 512×512 resolution (downloaded from the NASA website) and on video sequences of 256×256 resolution. After applying the 3D DWT, the HL, LH and HH subbands of the L-frames and the LL, HL, LH and HH subbands of the H-frames are sent to CS. After applying CS on the 3D DWT coefficients, the measurements are passed through the entropy coder (Golomb-Rice coding + run-length encoding). The percentage of measurements is calculated before entropy coding. The compression ratio is the ratio of the total number of bits in the input frame to the number of bits after entropy coding. Table 2 shows that the performance of the proposed framework competes with the existing IBMCTF [36] and H.264 [1]. The performance in terms of compression ratio and PSNR of the proposed encoder and decoder for the test video sequences is noted from level 1 to level 3 in Table 3.
Table 2: Performance comparison with IBMCTF and H.264

CODEC        Video    Compression Ratio  PSNR (dB)
Proposed     clock    24.24              44.01
             cyclone  16.85              34.2
             viplane  20.96              37.5
IBMCTF [36]  clock     7.33              46.2
             cyclone   5.08              40.6
             viplane   5.28              47.33
H.264 [1]    clock    62.33              42.65
             cyclone  22.1               38.4
             viplane  37.8               40.57
Table 3: Level-wise performance of the proposed codec

Video clip  Level  PSNR (dB)  Compression Ratio  Measurements (%)
clock       1      44         24.24              34.99
            2      33.2       41.67              23.82
            3      30.12      53.23              20.52
cyclone     1      34         16.85              43.7
            2      29         20.56              38.61
            3      25.5       23.3               36.6
viplane     1      37.5       20.96              32.7
            2      31.5       35.63              23.5
            3      28         65.54              18.12
5.2 Synthesis Results
The proposed architecture for the CS-based low-complexity video encoder has been described in Verilog HDL. Simulation results have been verified using the Xilinx ISE simulator. We simulated a Matlab model equivalent to the proposed architecture and verified the 3D DWT coefficients and CS measurements; the RTL simulation results exactly match the Matlab simulation results. The Verilog RTL code was synthesized using the Xilinx ISE 14.2 tool and mapped to a Xilinx programmable device (FPGA) 7z020clg484 (Zynq board) with speed grade −3. Table 4 shows the device utilization summary of the proposed architecture, which operates at a maximum frequency of 265 MHz. The proposed architecture has also been synthesized using the SYNOPSYS Design Compiler with a 90 nm technology CMOS standard cell library. The synthesis results of the proposed encoder are provided in Table 5: it consumes 90.08 mW of power and occupies an area equivalent to 416.799 K gates at a frequency of 158 MHz.
Table 4: Device utilisation summary on the Xilinx 7z020clg484 FPGA

Logic utilised            Used    Available   Utilisation
Slice Registers           15917   106400      14%
Slice LUTs                47303   53200       88%
Fully used LUT-FF pairs   15523   47697       32%
Block RAMs                3       140         2%
Table 5: ASIC synthesis results of the proposed encoder (90 nm CMOS)

Combinational Area       1072673
Non-combinational Area   915778
Total Cell Area          1988451
Interconnect Area        316449
Operating Voltage        1.2 V
Total Dynamic Power      80.17 mW
Cell Leakage Power       9.90 mW
5.3 Comparison
Table 6 compares the proposed 3D DWT architecture with the existing 3D DWT architectures. The proposed design has a lower memory requirement, higher throughput, shorter computation time and minimal latency compared with [22], [23], [24] and [26]. Although the proposed 3D DWT architecture is at a small disadvantage in area and frequency compared with [24], it has a clear advantage in all other aspects.
Table 7 compares the synthesis results of the proposed 3D DWT architecture with those of [26]. Although the proposed design appears to occupy more cell area, its figure includes the total on-chip memory, whereas the figure reported in [26] does not. The power consumption of the proposed 3D DWT architecture is much lower than that of [26].
Table 6: Comparison of the proposed 3D DWT architecture with existing architectures

Parameters                      Weeks [22]       Taghavi [23]   A. Das [24]      Darji [26]         Proposed
Memory requirement              +                + 5N           + 10N            2*(3N + 40P)
Throughput/cycle                -                1 result       2 results        4 results          8 results
Computing time (for 2 frames)   + /2             -              -                -                  /2P
Latency                         + 0.5            cycles         cycles           /2 cycles          12 cycles
Area                            -                -              1825 slices      2490 slices        2852 slice LUTs
Operating frequency             200 MHz (ASIC)   -              321 MHz (FPGA)   91.87 MHz (FPGA)   265 MHz (FPGA)
Multipliers                     -                -              Nil              30                 Nil
Adders                          MACs             -              78               48                 176
Filter bank length              -                D9/7           D9/7             D9/7               D9/7 (2D) + Haar (1D)
Table 7: Synthesis comparison of the proposed 3D DWT architecture with [26]

Parameters               Darji et al. [26]   Proposed
Combinational Area       61351               526419
Non-combinational Area   807223              553078
Total Cell Area          868574              1079498
Operating Voltage        1.98 V              1.2 V
Total Dynamic Power      179.75 mW           38.56 mW
Cell Leakage Power       46.87 mW            4.86 mW
6 Conclusions
In this paper, we have proposed a memory efficient and high throughput architecture for a CS based low complex encoder. The proposed architecture is implemented on a 7z020clg484 FPGA of the Zynq family and has also been synthesised with Synopsys Design Vision for ASIC implementation. An efficient design of the 2D spatial processor and the 1D temporal processor reduces the internal memory, latency, CPD and control-unit complexity, and increases the throughput. Compared with the existing architectures, the proposed scheme shows higher performance at the cost of a slight increase in area. The proposed encoder architecture is capable of computing 60 UHD (3840×2160) frames per second. The proposed architecture is also suitable for scalable video coding. In addition, the complexity of the encoder is reduced to a great extent. The proposed encoder is therefore suitable for applications including satellite communication, wireless transmission and data compression in high-speed cameras.
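The UHD throughput claim above can be sanity-checked with back-of-the-envelope arithmetic, assuming one input pixel consumed per produced result at the reported 265 MHz FPGA clock:

```python
# Sanity check of the UHD claim: required pixel rate for 60 frames/s of
# 3840x2160 video versus the FPGA clock of 265 MHz.
width, height, fps = 3840, 2160, 60
clock_hz = 265e6

pixels_per_second = width * height * fps           # ~4.98e8 pixels/s
pixels_per_cycle = pixels_per_second / clock_hz    # ~1.88 pixels/cycle needed

# The architecture delivers 8 results per cycle (Table 6), comfortably
# above the ~1.9 pixels/cycle that 60 UHD frames per second require.
print(round(pixels_per_cycle, 2))
```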
References
[1] Advanced Video Coding for Generic Audiovisual Services, ITU-T Rec. H.264 and ISO/IEC 14496-10 (MPEG-4 AVC), ITU-T and ISO/IEC JTC 1, Version 1: May 2003, Version 2: May 2004, Version 3: Mar. 2005, Version 4: Sept. 2005, Versions 5 and 6: June 2006, Version 7: Apr. 2007, Version 8 (including SVC extension): consented in July 2007.
[2] B. Bross, W. J. Han, G. J. Sullivan, J. R. Ohm and T. Wiegand, "High Efficiency Video Coding (HEVC) Text Specification Draft 9," document JCTVC-K1003, ITU-T/ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC), Oct. 2012.
[3] I. E. G. Richardson and Y. Zhao, "Adaptive algorithms for variable-complexity video coding," Proc. International Conference on Image Processing, Vol. 1, pp. 457-460, Oct. 2001.
[4] I. Chakrabarti, B. K. N. Srinivasarao and S. K. Chatterjee, "Motion Estimation for Video Coding: Efficient Algorithms and Architectures," Studies in Computational Intelligence, Vol. 590, Springer International Publishing, 2015, ISBN: 978-3-319-14375-0 (Print), 978-3-319-14376-7 (Online), DOI: 10.1007/978-3-319-14376-7.
[5] Q. Dai, X. Chen and C. Lin, "A Novel VLSI Architecture for Multidimensional Discrete Wavelet Transform," IEEE Trans. Circuits Syst. Video Technol., Vol. 14, No. 8, pp. 1105-1110, Aug. 2004.
[6] C. Cheng and K. K. Parhi, "High-speed VLSI implementation of 2-D discrete wavelet transform," IEEE Trans. Signal Process., Vol. 56, No. 1, pp. 393-403, Jan. 2008.
[7] B. K. Mohanty and P. K. Meher, "Memory-Efficient High-Speed Convolution-based Generic Structure for Multilevel 2-D DWT," IEEE Trans. Circuits Syst. Video Technol., Vol. 23, No. 2, pp. 353-363, Feb. 2013.
[8] I. Daubechies and W. Sweldens, "Factoring wavelet transforms into lifting steps," J. Fourier Anal. Appl., Vol. 4, No. 3, pp. 247-269, 1998.
[9] C.-T. Huang, P.-C. Tseng and L.-G. Chen, "Flipping structure: An efficient VLSI architecture for lifting-based discrete wavelet transform," IEEE Trans. Signal Process., Vol. 52, No. 4, pp. 1080-1089, Apr. 2004.
[10] C.-T. Huang, P.-C. Tseng and L.-G. Chen, "Analysis and VLSI architecture for 1-D and 2-D discrete wavelet transform," IEEE Trans. Signal Process., Vol. 53, No. 4, pp. 1575-1586, Apr. 2005.
[11] C.-C. Cheng, C.-T. Huang, C.-Y. Ching, C.-J. Chung and L.-G. Chen, "On-chip memory optimization scheme for VLSI implementation of line-based two-dimensional discrete wavelet transform," IEEE Trans. Circuits Syst. Video Technol., Vol. 17, No. 7, pp. 814-822, Jul. 2007.
[12] H.-Y. Liao, M. K. Mandal and B. F. Cockburn, "Efficient architectures for 1-D and 2-D lifting-based wavelet transforms," IEEE Trans. Signal Process., Vol. 52, No. 5, pp. 1315-1326, May 2004.
[13] B.-F. Wu and C.-F. Chung, "A high-performance and memory-efficient pipeline architecture for the 5/3 and 9/7 discrete wavelet transform of JPEG2000 codec," IEEE Trans. Circuits Syst. Video Technol., Vol. 15, No. 12, pp. 1615-1628, Dec. 2005.
[14] C.-Y. Xiong, J. Tian and J. Liu, "Efficient architectures for two-dimensional discrete wavelet transform using lifting scheme," IEEE Trans. Image Process., Vol. 16, No. 3, pp. 607-614, Mar. 2007.
[15] W. Zhang, Z. Jiang, Z. Gao and Y. Liu, "An efficient VLSI architecture for lifting-based discrete wavelet transform," IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 59, No. 3, pp. 158-162, Mar. 2012.
[16] B. K. Mohanty and P. K. Meher, "Memory-Efficient Modular VLSI Architecture for High-Throughput and Low-Latency Implementation of Multilevel Lifting 2-D DWT," IEEE Trans. Signal Process., Vol. 59, No. 5, pp. 2072-2084, May 2011.
[17] A. Darji, S. Agrawal, A. Oza, V. Sinha, A. Verma, S. N. Merchant and A. N. Chandorkar, "Dual-Scan Parallel Flipping Architecture for a Lifting-Based 2-D Discrete Wavelet Transform," IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 61, No. 6, pp. 433-437, Jun. 2014.
[18] B. K. Mohanty, A. Mahajan and P. K. Meher, "Area- and power-efficient architecture for high-throughput implementation of lifting 2-D DWT," IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 59, No. 7, pp. 434-438, Jul. 2012.
[19] Y. Hu and C. C. Jong, "A Memory-Efficient High-Throughput Architecture for Lifting-Based Multi-Level 2-D DWT," IEEE Trans. Signal Process., Vol. 61, No. 20, pp. 4975-4987, Oct. 2013.
[20] Y. Hu and C. C. Jong, "A Memory-Efficient Scalable Architecture for Lifting-Based Discrete Wavelet Transform," IEEE Trans. Circuits Syst. II, Exp. Briefs, Vol. 60, No. 8, pp. 502-506, Aug. 2013.
[21] J. Xu, Z. Xiong, S. Li and Y.-Q. Zhang, "Memory-Constrained 3D Wavelet Transform for Video Coding Without Boundary Effects," IEEE Trans. Circuits Syst. Video Technol., Vol. 12, No. 9, pp. 812-818, Sep. 2002.
[22] M. Weeks and M. A. Bayoumi, "Three-Dimensional Discrete Wavelet Transform Architectures," IEEE Trans. Signal Process., Vol. 50, No. 8, pp. 2050-2063, Aug. 2002.
[23] Z. Taghavi and S. Kasaei, "A memory efficient algorithm for multi-dimensional wavelet transform based on lifting," Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), Vol. 6, pp. 401-404, 2003.
[24] A. Das, A. Hazra and S. Banerjee, "An Efficient Architecture for 3-D Discrete Wavelet Transform," IEEE Trans. Circuits Syst. Video Technol., Vol. 20, No. 2, pp. 286-296, Feb. 2010.
[25] B. K. Mohanty and P. K. Meher, "Memory-Efficient Architecture for 3-D DWT Using Overlapped Grouping of Frames," IEEE Trans. Signal Process., Vol. 59, No. 11, pp. 5605-5616, Nov. 2011.
[26] A. Darji, S. Shukla, S. N. Merchant and A. N. Chandorkar, "Hardware Efficient VLSI Architecture for 3-D Discrete Wavelet Transform," Proc. Int. Conf. on VLSI Design and Int. Conf. on Embedded Systems, pp. 348-352, Jan. 2014.
[27] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, Vol. 52, No. 4, pp. 1289-1306, 2006.
[28] E. Candès, "Compressive sampling," Proc. Int. Congress of Mathematicians, Vol. 3, pp. 1433-1452, 2006.
[29] R. Baraniuk, "Compressive sensing," IEEE Signal Process. Mag., Vol. 25, pp. 21-30, Mar. 2007.
[30] M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly and R. G. Baraniuk, "Compressive imaging for video representation and coding," Proc. Picture Coding Symposium (PCS), pp. 1-6, 2006.
[31] Y. Hou and F. Liu, "A Low-complexity Video Coding Scheme Based on Compressive Sensing," Proc. International Symposium on Computational Intelligence and Design (ISCID), Vol. 2, pp. 326-329, Oct. 2011.
[32] S. Xiang and L. Cai, "Scalable Video Coding with Compressive Sensing for Wireless Videocast," Proc. IEEE International Conference on Communications (ICC), pp. 1-5, June 2011, DOI: 10.1109/icc.2011.5963359.
[33] H. Jiang, C. Li, R. H. Cohen, P. A. Wilford and Y. Zhang, "Scalable Video Coding Using Compressive Sensing," Bell Labs Technical Journal, Vol. 16, No. 4, pp. 149-170, 2012.
[34] C. Li, W. Yin, H. Jiang and Y. Zhang, "An efficient augmented Lagrangian method with applications to total variation minimization," Computational Optimization and Applications (Springer), Vol. 56, No. 3, pp. 507-530, Dec. 2013.
[35] J. Ma, G. Plonka and M. Y. Hussaini, "Compressive Video Sampling With Approximate Message Passing Decoding," IEEE Trans. Circuits Syst. Video Technol., Vol. 22, No. 9, Sep. 2012.
[36] Y. Andreopoulos, A. Munteanu, J. Barbarien, M. van der Schaar, J. Cornelis and P. Schelkens, "In-band motion compensated temporal filtering," Signal Processing: Image Communication, Vol. 19, pp. 653-673, 2004.
[37] C.-Y. Xiong, J.-W. Tian and J. Liu, "A Note on Flipping Structure: An Efficient VLSI Architecture for Lifting-Based Discrete Wavelet Transform," IEEE Trans. Signal Process., Vol. 54, No. 5, pp. 1910-1916, May 2006.
[38] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, Vol. 53, No. 12, pp. 4655-4666, 2007.
[39] Y. C. Pati, R. Rezaiifar and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," Proc. Asilomar Conference on Signals, Systems and Computers, IEEE Comput. Soc. Press, 1993.
[40] S. Chen, S. A. Billings and W. Luo, "Orthogonal least squares methods and their application to non-linear system identification," International Journal of Control, Vol. 50, No. 5, pp. 1873-1896, 1989.
[41] T. Blumensath and M. E. Davies, "Iterative hard thresholding for compressed sensing," Applied and Computational Harmonic Analysis, Vol. 27, No. 3, pp. 265-274, 2009.
[42] K. Bredies and D. Lorenz, "Linear convergence of iterative soft-thresholding," Journal of Fourier Analysis and Applications, Vol. 14, pp. 813-837, 2008.
[43] D. L. Donoho, A. Maleki and A. Montanari, "Message-passing algorithms for compressed sensing," Proceedings of the National Academy of Sciences, Vol. 106, No. 45, pp. 18914-18919, 2009, DOI: 10.1073/pnas.0909892106.
[44] D. L. Donoho, A. Maleki and A. Montanari, "Message passing algorithms for compressed sensing: II. Analysis and validation," Proc. IEEE Information Theory Workshop (ITW), Cairo, pp. 1-5, Jan. 2010, DOI: 10.1109/ITWKSPS.2010.5503228.
[45] W. Sweldens, "The Lifting Scheme: A Construction of Second Generation Wavelets," SIAM Journal on Mathematical Analysis, Vol. 29, No. 2, pp. 511-546, 1998.