Screen Content Image Segmentation Using Sparse Decomposition and Total Variation Minimization
Sparse decomposition has been widely used in different applications, such as source separation, image classification, and image denoising. This paper presents a new algorithm for segmenting an image into background and foreground text and graphics using sparse decomposition and total variation minimization. The proposed method is based on the assumption that the background part of the image is smoothly varying and can be represented by a linear combination of a few smoothly varying basis functions, while the foreground text and graphics can be modeled as a sparse component overlaid on the smooth background. The background and foreground are separated using a sparse decomposition framework with suitable regularization terms that promote the sparsity and connectivity of foreground pixels. This algorithm has been tested on a dataset of images extracted from HEVC standard test sequences for screen content coding, and is shown to outperform several prior methods, including least absolute deviation fitting, the hierarchical k-means clustering based segmentation in DjVu, and the shape primitive extraction and coding (SPEC) algorithm.
Decomposition of an image into multiple components has many applications in different tasks. One special case is background-foreground segmentation, which tries to decompose an image into two components: background and foreground. It has many applications in image processing, such as separate coding of background and foreground for the compression of mixed-content images, denoising, medical image segmentation, text extraction, and pre-processing steps for biometric recognition.
Different algorithms have been proposed in the past for foreground-background segmentation in still images, such as hierarchical k-means clustering in DjVu, shape primitive extraction and coding (SPEC), and least absolute deviation fitting.
The hierarchical k-means clustering approach applies the k-means clustering algorithm with k=2 on blocks in a multi-resolution fashion. It first applies k-means clustering on a large block to obtain foreground and background colors, and then uses them as the initial foreground and background colors for the smaller blocks in the next stages. It also applies some post-processing at the end to refine the results. This algorithm has difficulty in regions where the background and foreground color intensities overlap. SPEC uses a two-step segmentation algorithm. In the first step, each block is classified as either a pictorial block or a text/graphics block by comparing its number of colors with a threshold of 32. In the second step, the segmentation of pictorial blocks is refined by extracting shape primitives. Because blocks containing a smoothly varying background over a narrow range can also have a small number of colors, it is hard to find a fixed color-number threshold that can robustly separate pictorial blocks from text/graphics blocks. We have previously proposed a least absolute deviation (LAD) fitting method, which fits a smooth model to the image and classifies each pixel as background or foreground based on the fitting error. It uses the ℓ1 norm on the fitting error to enforce the sparsity of the error term. Although this algorithm achieves significantly better segmentation than both DjVu and SPEC, it suffers from isolated points in the extracted foreground.
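As an illustration, the core k=2 clustering step applied within a block can be sketched as follows (a minimal Python/NumPy sketch, not the actual DjVu implementation; the center initialization and the assumption that the brighter cluster is background are our own simplifications):

```python
import numpy as np

def two_means_segment(block, n_iter=20):
    """Split a grayscale block into two intensity clusters with k-means (k=2).

    Illustrative sketch of the clustering step: centers are initialized at the
    extreme intensities, and the brighter cluster is assumed to be background.
    """
    pix = block.astype(float).ravel()
    centers = np.array([pix.min(), pix.max()], dtype=float)
    for _ in range(n_iter):
        # assign each pixel to the nearest of the two centers
        labels = (np.abs(pix - centers[0]) > np.abs(pix - centers[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = pix[labels == k].mean()
    bg = int(np.argmax(centers))  # assumption: brighter cluster = background
    return (labels != bg).reshape(block.shape)  # True = foreground
```

As the text notes, such a purely intensity-based split fails when the foreground and background intensity ranges overlap, since the two cluster centers then no longer separate the layers.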
The issues with previous algorithms motivate us to design a new segmentation algorithm that overcomes their problems. We propose a sparse decomposition framework to perform this image segmentation task. Sparse representation has been used for various applications in recent years, including face recognition, visual tracking, morphological component analysis, recognition, image decomposition, and image restoration. Despite the wide use of sparse representation, there have not been many works using sparsity for still-image segmentation.
Instead of looking at the intensity of each pixel and deciding whether it belongs to the background or the foreground, we believe it is better to look at the smoothness of a group of pixels and then decide. The other important observation is that the foreground layer should contain a set of connected pixels (such as the pixels in a text stroke, or a line in graphics), not a set of randomly located isolated points. Therefore, in our segmentation algorithm, we enforce the extracted foreground pixels to be connected to each other by penalizing their total variation. Based on these two notions, we propose a sparse decomposition framework for the segmentation task. We model the background part of the image with a linear combination of a set of smoothly varying basis functions, and the foreground layer with a sparse component of connected pixels. A problem with our prior least absolute deviation approach is that it uses a fixed set of smooth basis functions and does not impose any prior on the weighting coefficients. It was challenging to determine the right set of smooth bases that can represent all background patterns well. When too few bases are used, some parts of a complicated background will be considered foreground; on the other hand, with too many bases, some foreground pixels can be fitted by the smooth model and hence falsely segmented as background. Our new formulation overcomes this difficulty by using a relatively rich set of bases, but enforcing the resulting coefficients to be sparse through a regularization term that penalizes the use of many bases for background representation. The proposed algorithm can also be used for the decomposition of other types of signals.
The structure of the rest of this paper is as follows: Section II presents the core idea of the proposed segmentation method. Section III describes the ADMM formulation for solving the sparse decomposition based segmentation. Section IV provides the experimental results for the proposed algorithm. Finally, the paper is concluded in Section V.
II. Sparse Decomposition Framework
It is clear that smooth background regions can be well represented with a few smooth basis functions, whereas the high-frequency component of the image, belonging to the foreground, cannot be modeled with a smooth model. Using the fact that foreground pixels occupy a relatively small percentage of the image, we can model them with a sparse component overlaid on the background. It is therefore natural to think of a mixed-content image as a superposition of two components, one smooth and the other sparse, and to use signal decomposition techniques to separate them.
We first need to derive a suitable model for the background component. We divide each image into non-overlapping blocks of size N×N, and then represent each image block, denoted by F(x,y), with a smooth model B(x,y; α), where x and y denote the horizontal and vertical axes and α denotes the parameter vector of this smooth model. For the choice of smooth model, we propose to use a linear combination of K basis functions, B(x,y; α) = Σ_{k=1}^{K} α_k P_k(x,y), where P_k(x,y) denotes a 2D smooth basis function. We applied the Karhunen-Loeve transform to a training set of smooth background images, and the optimal transform turns out to be very similar to the 2D DCT bases. Therefore we used a set of low-frequency two-dimensional DCT basis functions, since they have been shown to be very efficient for image representation. The 2-D DCT function is defined as:

P_{u,v}(x,y) = β_u β_v cos( (2x+1)uπ / 2N ) cos( (2y+1)vπ / 2N )

where u and v denote the frequency of the basis and β_u and β_v are normalization factors. We order all the possible basis functions in the conventional zig-zag order in the (u,v) plane, and choose the first K basis functions. Since we do not know in advance how many basis functions to include for the background part, we allow the model to choose from a large set of bases that we think is sufficient to represent the most "complex" background, while minimizing the norm of the coefficients. Without such a restriction on the coefficients, we might end up with the situation that all foreground pixels are also modeled by the smooth layer.
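As a concrete illustration, constructing the low-frequency 2-D DCT bases can be sketched in Python/NumPy as follows (the function names are ours; the tie-breaking within each anti-diagonal is a simplification of the strict zig-zag scan, though it selects the same low-frequency set whenever K covers whole anti-diagonals):

```python
import numpy as np

def dct_basis(u, v, N):
    """Orthonormal 2-D DCT basis function P_{u,v} sampled on an N x N grid."""
    x = np.arange(N)
    beta_u = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
    beta_v = np.sqrt(1.0 / N) if v == 0 else np.sqrt(2.0 / N)
    cu = np.cos((2 * x + 1) * u * np.pi / (2 * N))
    cv = np.cos((2 * x + 1) * v * np.pi / (2 * N))
    return beta_u * beta_v * np.outer(cu, cv)

def low_frequency_bases(K, N):
    """Pick the K lowest-frequency bases, ordered by anti-diagonal (u + v)."""
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda uv: (uv[0] + uv[1], uv))
    return [dct_basis(u, v, N) for u, v in order[:K]]
```

With K = 20, this keeps all bases on the first five anti-diagonals of the (u,v) plane plus part of the sixth, i.e., only low-frequency functions.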
Overall, each image block is represented as:

F(x,y) = B(x,y; α) + S(x,y)     (1)

where B(x,y; α) and S(x,y) correspond to the smooth background region and the foreground pixels, respectively. After decomposition, those pixels with large values in the S(x,y) component will be considered foreground.
To have a more compact notation, we can look at the 1D version of this problem by converting each 2D block of size N×N into a vector of length N^2, denoted by f, and writing Σ_k α_k P_k(x,y) as Pα, where P is a matrix of size N^2×K in which the k-th column corresponds to the vectorized version of P_k(x,y), and α = [α_1, …, α_K]^T. The 1D version of S(x,y) is denoted by s. Then Eq. (1) can be written as:

f = Pα + s     (2)
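A minimal sketch of this vectorized model (the two stand-in "smooth" bases below, a constant and a ramp, are placeholders for the DCT bases, chosen only to keep the example short):

```python
import numpy as np

def build_P(bases):
    """Stack vectorized N x N basis images as the columns of P (N^2 x K)."""
    return np.stack([b.ravel() for b in bases], axis=1)

# Tiny demo with N = 4 and two stand-in smooth bases:
N = 4
xx, yy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
bases = [np.ones((N, N)), xx.astype(float)]
P = build_P(bases)                 # shape (16, 2)
alpha = np.array([10.0, 2.0])      # background coefficients
s = np.zeros(N * N)
s[5] = 100.0                       # one sparse foreground pixel
f = P @ alpha + s                  # the model f = P alpha + s
```

The decomposition problem is then the inverse of this construction: given f, recover a sparse α and a sparse, connected s.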
Now to perform image segmentation, we need to impose some prior knowledge about the background and foreground on our optimization problem. First of all, as described earlier, we do not want to use too many basis functions for background representation, since with many bases we might be able to represent some parts of the foreground regions with the smooth model and consider them background (imagine the case where we use a complete set of bases for background representation). Therefore the number of nonzero components of α should be small (i.e., ||α||_0 should be small). On the other hand, we expect the majority of the pixels in each block to belong to the background component, so the number of nonzero components of s should not be very large. This feature is very desirable in image and video compression applications, because the background component can be easily represented using a set of low-order DCT bases: the more background pixels we have, the smaller the bit-rate we usually need. Last but not least, we expect the nonzero components of the foreground to be connected to each other, so we can add a regularization term that promotes the connectivity of foreground pixels. Here we use the total variation of the foreground component to penalize isolated points in the foreground. Putting all of these priors together, we get the following optimization problem:

min_{α,s}  ||α||_0 + λ1 ||s||_0 + λ2 TV(s)
s.t.  f = Pα + s     (3)
where λ1 and λ2 are constants that need to be tuned. Since the ℓ0 norm in the first two terms is not convex, we use its ℓ1 approximation to obtain a convex problem. For the total variation we can use either the isotropic or the anisotropic version of the 2D total variation. To make our optimization problem simpler, we have used the anisotropic version in this algorithm, which is defined as:

TV(S) = Σ_{i,j} |S(i+1, j) − S(i, j)| + |S(i, j+1) − S(i, j)|     (4)
After converting the 2D blocks into 1D vectors, we can write the total variation as:

TV(s) = ||D_x s||_1 + ||D_y s||_1 = ||D s||_1     (5)
where D_x and D_y are the horizontal and vertical gradient operator matrices, and D = [D_x; D_y] is their vertical stacking. Then we get the following problem:

min_{α,s}  ||α||_1 + λ1 ||s||_1 + λ2 ||D s||_1
s.t.  f = Pα + s     (6)
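Assuming row-major vectorization and plain forward differences (no boundary wrap-around), the gradient operator matrices can be built with Kronecker products; the 2-D helper below cross-checks the matrix computation:

```python
import numpy as np

def grad_ops(N):
    """Difference matrices D_x, D_y for a row-major vectorized N x N block,
    stacked as D = [D_x; D_y], so that TV(s) = ||D s||_1 (anisotropic)."""
    I = np.eye(N)
    d = np.diff(np.eye(N), axis=0)   # (N-1) x N one-dimensional difference
    Dx = np.kron(I, d)               # differences along rows (horizontal)
    Dy = np.kron(d, I)               # differences along columns (vertical)
    return np.vstack([Dx, Dy])

def anisotropic_tv(s_img):
    """Anisotropic TV computed directly on the 2-D array, for cross-checking."""
    return (np.abs(np.diff(s_img, axis=0)).sum() +
            np.abs(np.diff(s_img, axis=1)).sum())
```

For a block of size N×N, D has 2N(N−1) rows, and ||D s||_1 equals the anisotropic TV of the corresponding 2-D image.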
From the constraint in the above problem, we get s = f − Pα, and then we derive the following unconstrained problem:

min_{α}  ||α||_1 + λ1 ||f − Pα||_1 + λ2 ||D(f − Pα)||_1     (7)
This problem can be solved with different approaches, such as the alternating direction method of multipliers (ADMM), majorization-minimization, and the forward-backward-forward (FBF) algorithm. Here we present the formulation using the ADMM algorithm.
III. ADMM Formulation for the Proposed Sparse Decomposition
ADMM (alternating direction method of multipliers) is a popular algorithm that combines the superior convergence properties of the method of multipliers with the decomposability of dual ascent. The ADMM formulation for the optimization problem in (7) can be derived by introducing auxiliary variables w, y and z:

min_{α,w,y,z}  ||w||_1 + λ1 ||y||_1 + λ2 ||z||_1
s.t.  w = α,  y = f − Pα,  z = D(f − Pα)     (8)
Then the augmented Lagrangian for the above problem can be formed as:

L(α,w,y,z,u1,u2,u3) = ||w||_1 + λ1||y||_1 + λ2||z||_1 + u1^T(α − w) + (ρ1/2)||α − w||_2^2 + u2^T(f − Pα − y) + (ρ2/2)||f − Pα − y||_2^2 + u3^T(D(f − Pα) − z) + (ρ3/2)||D(f − Pα) − z||_2^2     (9)
where u1, u2, u3 and ρ1, ρ2, ρ3 denote the dual variables and penalty parameters, respectively. Then, by taking the gradient of the objective function w.r.t. the primal variables and setting it to zero, and using dual descent for the dual variables, we get the update rules described in Algorithm 1:

Algorithm 1:
repeat until convergence:
   α  ← solution of the linear system obtained by setting the gradient of L w.r.t. α to zero
   w  ← soft(α + u1/ρ1, 1/ρ1)
   y  ← soft(f − Pα + u2/ρ2, λ1/ρ2)
   z  ← soft(D(f − Pα) + u3/ρ3, λ2/ρ3)
   u1 ← u1 + ρ1(α − w)
   u2 ← u2 + ρ2(f − Pα − y)
   u3 ← u3 + ρ3(D(f − Pα) − z)
where soft(x, λ) denotes the soft-thresholding operator, applied elementwise, which is defined as:

soft(x, λ) = sign(x) · max(|x| − λ, 0)
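This operator, the proximal operator of the ℓ1 norm, can be implemented in one line:

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

It shrinks every entry toward zero by lam and sets entries smaller than lam in magnitude exactly to zero, which is what produces the sparse solutions in the ℓ1-regularized updates.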
IV. Experimental Results
To enable rigorous evaluation of different algorithms, we have generated an extended version of the dataset in our previous work, consisting of 332 image blocks of size 64x64, extracted from sample frames of the HEVC test sequences for screen content coding. The ground-truth foregrounds for these images were extracted manually by the author and then refined independently by another person. This dataset is publicly available online.
In our implementation, the block size is chosen to be N=64, which is the same as the largest CU size in the HEVC standard. The number of DCT basis functions, K, is chosen to be 20. The weight parameters λ1 and λ2 in the objective function are tuned by testing on a validation set (consisting of 70 blocks of size 64x64). The ADMM algorithm described in Algorithm 1 is implemented in MATLAB, and the code is publicly available online. The number of iterations for ADMM is chosen to be 50, and the penalty parameters are all set to 1.
We compare the proposed algorithm with three previous algorithms: hierarchical k-means clustering in DjVu, SPEC, and least absolute deviation fitting. For SPEC, we have adapted the color-number threshold and the shape-primitive size threshold from their default values when necessary to give more satisfactory results. Furthermore, for blocks classified as text/graphics based on the color number, we segment the most frequent color, and any color similar to it (i.e., colors whose distance from the most frequent color is less than 10 in luminance), in the current block as background, and the rest as foreground.
To provide a numerical comparison between the proposed scheme and previous approaches, we have calculated the average precision, recall, and F1 score (also known as F-measure) achieved by the different segmentation algorithms over this dataset. The results are given in Table 1.
The precision and recall are defined as in Eq. (10), where TP, FP and FN denote true positives, false positives and false negatives, respectively:

Precision = TP / (TP + FP),   Recall = TP / (TP + FN)     (10)

In our evaluation, we treat a foreground pixel as positive. A pixel that is correctly identified as foreground (compared to the manual segmentation) is counted as a true positive; false positives and false negatives are defined analogously.
The balanced F1 score is defined as the harmonic mean of precision and recall, F1 = 2 × Precision × Recall / (Precision + Recall).
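These metrics can be computed from binary foreground masks as follows (a minimal sketch; degenerate masks with no predicted or no true foreground would need a zero-division guard, omitted here):

```python
import numpy as np

def precision_recall_f1(pred, gt):
    """Precision, recall and F1 for binary foreground masks
    (True = foreground; gt is the manual ground-truth mask)."""
    tp = np.logical_and(pred, gt).sum()     # true positives
    fp = np.logical_and(pred, ~gt).sum()    # false positives
    fn = np.logical_and(~pred, gt).sum()    # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```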
As can be seen, the proposed scheme achieves much higher precision and recall than the hierarchical k-means and SPEC algorithms. Compared to least absolute deviation fitting, the proposed formulation yields a significant improvement in precision, while also having a slightly higher recall rate.
To see the visual quality of the segmentation, the results for 5 test images (each consisting of multiple 64x64 blocks) are shown in Fig. 1.
Table 1:
| Segmentation Algorithm         | Precision | Recall | F1 score |
| Hierarchical Clustering (DjVu) | 64%       | 69%    | 66.4%    |
| Least Absolute Deviation       | 91.4%     | 87%    | 89.1%    |
| The proposed algorithm         | 94.3%     | 88%    | 90.9%    |
It can be seen that in all cases the proposed algorithm gives superior performance over DjVu and SPEC. There is also a noticeable improvement over our prior approach using LAD. Note that our dataset mainly consists of challenging images where the background and foreground have overlapping color ranges. For simpler cases, where the background has a narrow color range that is quite different from the foreground, both DjVu and LAD work well. On the other hand, SPEC does not work well when the background is fairly homogeneous within a block and the foreground text/lines have varying colors. The results for the rest of the images in our dataset are publicly available online. We would also like to briefly discuss the impact of varying K, the number of bases, on the accuracy of the algorithm. By increasing K, we will have more inliers, i.e., fewer foreground pixels. Therefore, by increasing K beyond its optimal value, we get higher precision and lower recall.
V. Conclusion
This paper proposed a new algorithm for the segmentation of background and foreground in images. The background is defined as the smooth component of the image, which can be well modeled by a set of DCT basis functions, and the foreground as a sparse component overlaid on the background. We proposed a sparse decomposition framework to decompose the image into these two layers. Compared to our prior least absolute deviation fitting formulation, the background layer is allowed to choose as many bases as needed from a rich set of smooth functions, but the coefficients are enforced to be sparse so that the smooth model will not falsely include foreground pixels. The total variation of the foreground component is also added to the cost function to promote connectivity of the foreground pixels. This algorithm has been tested on several test images, compared with three other well-known algorithms for background/foreground separation, and shown to have significantly better performance.
The authors would like to thank Patrick Combettes and Ivan Selesnick for their useful comments and feedback on this work. We would also like to thank the JCT-VC group for providing the HEVC test sequences for screen content coding.
-  R.L. DeQueiroz, R.R. Buckley and M. Xu, “Mixed raster content (MRC) model for compound image compression”, Electronic Imaging’99. International Society for Optics and Photonics, 1998.
-  S. Minaee, M. Fotouhi and B.H. Khalaj, “A geometric approach for fully automatic chromosome segmentation”, IEEE symposium on SPMB, 2014.
-  J. Zhang and R. Kasturi, “Extraction of Text Objects in Video Documents: Recent Progress”, Document Analysis Systems. 2008.
-  S. Minaee, A. Abdolrashidi and Y. Wang, “Iris Recognition Using Scattering Transform and Textural Features”, IEEE Signal Processing Workshop, 2015.
-  S. Minaee and Y. Wang, “Fingerprint Recognition Using Translation Invariant Scattering Network”, IEEE Signal Processing in Medicine and Biology Symposium, 2015.
-  P. Haffner, P.G. Howard, P. Simard, Y. Bengio and Y. Lecun, “High quality document image compression with DjVu”, Journal of Electronic Imaging, 7(3), 410-425, 1998.
-  T. Lin and P. Hao, “Compound image compression for real-time computer screen image transmission”, IEEE Transactions on Image Processing, 14(8), 993-1005, 2005.
-  S. Minaee and Y. Wang, “Screen content image segmentation using least absolute deviation fitting”, IEEE International Conference on Image Processing (ICIP), pp.3295-3299, Sept. 2015.
-  J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry and Y. Ma, “Robust face recognition via sparse representation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 210-227, 2009.
-  A. Taalimi, H. Qi and R. Khorsandi, “Online multi-modal task-driven dictionary learning and robust joint sparse representation for visual tracking”, International Conference on Advanced Video and Signal Based Surveillance (AVSS), IEEE, 2015.
-  J.L. Starck, M. Elad and D. Donoho, “Redundant multiscale transforms and their application for morphological component separation”, Advances in Imaging and Electron Physics, 132, 287-348, 2004.
-  A. Taalimi, A. Rahimpour, C. Capdevila, Z. Zhang and H. Qi, “Robust coupling in space of sparse codes for multi-view recognition”, International Conference on Image Processing, IEEE, 2016.
-  J.L. Starck, M. Elad and D.L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach”, IEEE Transactions on Image Processing, 14(10), 1570-1582, 2005.
-  S. Minaee, A. Abdolrashidi and Y. Wang, “Screen Content Image Segmentation Using Sparse-Smooth Decomposition”, Asilomar Conference on Signals,Systems, and Computers, IEEE, 2015.
-  J. Mairal, M. Elad and G. Sapiro, “Sparse representation for color image restoration”, IEEE Transactions on Image Processing, 17(1), 53-69, 2008.
-  A. Rahimpour, H. Qi, T. Kuruganti and D. Fugate, “Non-Intrusive Load Monitoring of HVAC Components using Signal Unmixing”, Global Conference on Signal and Information Processing, IEEE, 2015.
-  A. Levey and M. Lindenbaum, “Sequential Karhunen-Loeve basis extraction and its application to images”, IEEE Transactions on Image Processing, 9(8), 1371-1374, 2000.
-  A.B. Watson, “Image compression using the discrete cosine transform”, Mathematica Journal, 4(1), 81, 1994.
-  S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, “An iterative regularization method for total variation-based image restoration”, Multiscale Modeling and Simulation, 4(2), 460-489, 2005.
-  S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers”, Foundations and Trends in Machine Learning, 3(1), 1-122, 2011.
-  I.W. Selesnick, H.L. Graber, Y. Ding, T. Zhang and R.L. Barbour, “Transient Artifact Reduction Algorithm (TARA) Based on Sparse Optimization”, IEEE Transactions on Signal Processing, 6596-6611, 2014.
-  P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Modeling and Simulation, vol. 4, no. 4, pp. 1168-1200, November 2005.
-  ISO/IEC JTC 1/SC 29/WG 11 Requirements subgroup, “Requirements for an extension of HEVC for coding of screen content,” in MPEG 109 meeting, 2014.
-  https://sites.google.com/site/shervinminaee/research/image-segmentation
-  https://web.stanford.edu/~boyd/papers/admm/