Learning Document Image Binarization from Data


Abstract

In this paper we present a fully trainable binarization solution for degraded document images. Unlike previous attempts that often used simple features with a series of pre- and post-processing steps, our solution encodes all heuristics about whether or not a pixel is foreground text into a high-dimensional feature vector and learns a more complicated decision function. In particular, we prepare features of three types: 1) existing features for binarization such as intensity [1], contrast [2], and Laplacian [4]; 2) features reformulated from existing binarization decision functions such as those in [6] and [7]; and 3) our newly developed features, namely the Logarithm Intensity Percentile (LIP) and the Relative Darkness Index (RDI). Our initial experimental results show that using only selected samples (about 1.5% of all available training data), we can achieve binarization performance comparable to that of fine-tuned (typically by hand) state-of-the-art methods. Additionally, the trained document binarization classifier shows good generalization capabilities on out-of-domain data.

1 Introduction

As one of the most fundamental preprocessing steps in various document analysis work [8], document binarization aims to convert a color or grayscale document image into a monochromatic image, where all text pixels of interest are marked in black on a white background. Mathematically, given a document image $u$ of size $m \times n$, image binarization assigns each pixel $(x, y)$ a binary class label according to a decision function $f$ in a meaningful way, namely

$b(x, y) = f(u, x, y) \in \{0, 1\}, \quad 1 \le x \le m,\ 1 \le y \le n.$
A successful document binarization process discards irrelevant and noisy information while preserving meaningful information in the binary image $b$. This process reduces the space needed to represent a document image, and greatly simplifies advanced document analysis tasks [4].

Although humans do not often face many difficulties in identifying text even in low-quality document images, the document image binarization problem is indeed subjective and ill-posed [4], and it involves many different challenges and combinations of challenges. For example, several well-known ones are: 1) how to handle document degradations such as ink blobs and faded text; 2) how to deal with uneven lighting; and 3) how to differentiate bleed-through text from normal text. In such difficult scenarios, humans actually use high-level knowledge that might not be easily captured by low-level features, such as knowledge of the script's character set and background texture analysis, to help decide which pixels are foreground text.

Classic solutions more or less seek heuristic thresholds in simple feature spaces. They can be further grouped into so-called global thresholding and local thresholding methods [14] according to whether the threshold is location independent or not. For example, Otsu's method [1] binarizes a pixel by comparing its intensity to an optimal global threshold $T^{\rm Otsu}$ derived from the intensity histogram [1], as shown in

$b(x, y) = \begin{cases} 1, & u(x, y) \le T^{\rm Otsu} \\ 0, & \text{otherwise.} \end{cases}$
In contrast, Niblack's method [6] uses the decision function

$b(x, y) = \begin{cases} 1, & u(x, y) \le \mu_R(x, y) + k\,\sigma_R(x, y) \\ 0, & \text{otherwise,} \end{cases}$

where $k$ is a parameter below 0, and $\mu_R$ and $\sigma_R$ denote the mean and standard deviation of pixel intensities within a local region $R$. Although heuristic solutions are very efficient, possibly requiring only a constant number of operations per pixel, and work fairly well on many well-conditioned document images, it is clear that simple features and decision functions are insufficient for handling difficult cases.
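For illustration, a minimal sketch of these two classic schemes using scikit-image is given below; the window size and $k$ value are illustrative choices, not settings from this paper.

```python
from skimage.filters import threshold_otsu, threshold_niblack

def binarize_global_vs_local(gray, window=25, k=0.2):
    """Return (Otsu mask, Niblack mask); True marks text (dark) pixels."""
    t_otsu = threshold_otsu(gray)                      # one threshold for the whole image
    b_otsu = gray <= t_otsu
    # scikit-image computes T = mean - k * std, i.e. a negative k in the
    # "mean + k * std" formulation used in the text above.
    t_niblack = threshold_niblack(gray, window_size=window, k=k)
    b_niblack = gray <= t_niblack
    return b_otsu, b_niblack
```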

To achieve robust document binarization, many efforts have been made in the areas of 1) image normalization/adaptation, 2) discriminative feature spaces, and 3) more complicated decision functions. For example, Lu et al. [16] propose a local thresholding approach that mainly relies on background estimation and stroke estimation. Su et al. [2] find that Otsu's thresholding attains more discriminative power in a local contrast feature space. Sauvola et al. [17] modify Niblack's threshold into a non-linear decision surface, $T(x, y) = \mu_R(x, y)\,[1 + k\,(\sigma_R(x, y)/R_{\rm dyn} - 1)]$, where the additional parameter $R_{\rm dyn}$ is the dynamic range of the standard deviation.

Although many of these attempts work well when their assumptions are satisfied and their parameters are appropriate, adapting a heuristic binarization method to a new domain is often not easy. Indeed, Lazzara et al. [18] show that the original Sauvola method might fail even for well-scanned document images when text fonts of different sizes are present.

Unsupervised learning has recently dominated the document binarization area. In [19], a document image is first clustered into three classes, namely foreground, background, and uncertain, and pixels in the uncertain class are further classified into either the foreground or background class according to their distances from these two classes. In [4], an image is first transformed into a Laplacian feature space, and a global energy function is constructed to ensure that the resulting binary labels are optimal in the sense of a predefined Markov random field. In [20], an unsupervised ensemble-of-experts framework is used to combine multiple binarization candidates. Although these methods do not require a training stage, some rely on theoretical models or heuristic rules whose assumptions may not be satisfied, and some require expensive iterative tuning and optimization, so it is not surprising that they are unreliable for certain types of degradations [21].

Although image binarization is clearly a classification problem, supervised learning-based binarization solutions are still rare in the community. In this letter we discuss our initial attempts to solve the document image binarization problem using supervised learning. The remainder of the paper is organized as follows: Section 2 overviews our solution and discusses all used features, Section 3 provides implementation details related to training and testing, Section 4 shows our experimental results, and Section 5 concludes the paper.

2 Feature Engineering

Our goal is to develop a generic solution without preset parameters and without pre- or post-processing. Specifically, we are interested in learning a decision function $f$ that maps a $d$-dimensional feature vector $\mathbf{x}$ extracted around a pixel to a binary space in a meaningful way, i.e.,

$f: \mathbf{x} \in \mathbb{R}^{d} \mapsto \{0, 1\}.$
Detailed feature engineering discussions are given below.

Existing Features

Since a number of simple tasks can be accomplished just by applying Otsu's method, we include a pixel's intensity and its deviation from the Otsu threshold as the features

$X^{\rm int}(x, y) = u(x, y), \qquad X^{\rm Otsu}(x, y) = u(x, y) - T^{\rm Otsu}.$
In addition, we also use local statistics (the local mean and standard deviation) of the features above, but with respect to different scales, i.e.,

$X^{\mu, s}(x, y) = \mu_{R_s}(x, y), \qquad X^{\sigma, s}(x, y) = \sigma_{R_s}(x, y),$

where the size of the local window $R_s$ is tied to the scale $s$ (a multiple of the estimated stroke width), and the stroke width is estimated using Su's method [3]. Inspired by the success of the Su [2] and Howe [4] methods, we also include their contrast and Laplacian features.
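As a hedged sketch of how these basic features might be computed (the mapping from scale to window size via the stroke width is an assumption of this illustration):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_otsu

def basic_features(gray, stroke_width=4, scales=(1, 2, 4, 8)):
    """Intensity, Otsu deviation, and multi-scale local mean/std feature maps."""
    img = gray.astype(np.float64)
    feats = [img / 255.0, (img - threshold_otsu(gray)) / 255.0]
    for s in scales:
        win = max(3, int(s * stroke_width))              # window tied to the stroke width
        mu = uniform_filter(img, size=win)               # local mean
        mu2 = uniform_filter(img ** 2, size=win)
        sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 0.0))  # local standard deviation
        feats += [mu / 255.0, sigma / 255.0]
    return np.stack(feats, axis=-1)                      # H x W x d feature maps
```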

Exponential Truncated Niblack Index

To include Niblack's decision function in our considerations, we first rearrange the terms of its threshold and solve for the parameter $k$, as shown below,

$\hat{k}(x, y) = \frac{u(x, y) - \mu_R(x, y)}{\sigma_R(x, y)},$

and then compute a so-called Exponential Truncated Niblack Index (ETNI) feature by truncating and exponentiating this statistic.

Figure 1 compares an image in its original form with its corresponding ETNI feature space.

Figure 1: ETNI features for image DIBCO2010_HW04. (a) Original image; (b) ETNI feature for $R$ of size $64 \times 64$.
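A minimal sketch of an ETNI-style feature is given below; the truncation range and the plain exponential are assumptions of this illustration, since the exact constants are not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def etni_feature(gray, window=64, k_min=-3.0, k_max=3.0):
    """Rearranged Niblack statistic, truncated and exponentiated (assumed form)."""
    img = gray.astype(np.float64)
    mu = uniform_filter(img, size=window)
    mu2 = uniform_filter(img ** 2, size=window)
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 1e-12))
    k_hat = (img - mu) / sigma                           # the k at which Niblack's rule flips
    return np.exp(np.clip(k_hat, k_min, k_max))          # truncate, then exponentiate
```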

Logistic Truncated Sauvola Index

Similarly, we rearrange the terms in Sauvola's decision function and solve for its key parameter $k$, giving the per-pixel index

$\hat{k}(x, y) = \frac{u(x, y)/\mu_R(x, y) - 1}{\sigma_R(x, y)/R_{\rm dyn} - 1}.$

Since this index can be unbounded, we normalize it (after truncation) using the logistic function and call the result the Logistic Truncated Sauvola Index (LTSI). The logistic transform maps the index into the range $(0, 1)$, and a sign condition ensures the sign consistency of the index; LTSI thus reflects the Sauvola decision surface. A sample result of the LTSI feature is given in Figure 2.

Figure 2: LTSI features for image DIBCO2011_PR05. (a) Original image; (b) LTSI feature for $R$ of size $8 \times 8$.
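Analogously, a hedged sketch of an LTSI-style feature; the dynamic range $R_{\rm dyn} = 128$, the truncation range, and the plain logistic squashing are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ltsi_feature(gray, window=8, r_dyn=128.0, k_min=-3.0, k_max=3.0):
    """Sauvola's threshold solved for k at T = I, truncated, then logistic-squashed."""
    img = gray.astype(np.float64)
    mu = uniform_filter(img, size=window)
    mu2 = uniform_filter(img ** 2, size=window)
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 1e-12))
    denom = sigma / r_dyn - 1.0                          # usually negative, hence the sign flip
    denom = np.where(np.abs(denom) < 1e-6, -1e-6, denom)
    k_hat = (img / np.maximum(mu, 1e-12) - 1.0) / denom  # k making Sauvola's T pass through I
    k_hat = np.clip(k_hat, k_min, k_max)                 # truncation
    return 1.0 / (1.0 + np.exp(-k_hat))                  # logistic normalization into (0, 1)
```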

Logarithm Intensity Percentile Features

Intuitively, the darkness of a pixel is related to whether it is a text pixel. Given a region $R$, the percentile of a pixel $p$'s intensity within $R$ can be computed as

$P(p \mid R) = \frac{\sum_{q \in R} \mathbb{1}\{I(q) \le I(p)\}}{|R|},$

where $\mathbb{1}\{\cdot\}$ denotes the indicator function whose value is 1 when its argument holds and 0 otherwise, and $|\cdot|$ denotes the cardinality function. It is clear that this percentile is a type of rank feature, and thus is invariant to any monotonic transform of the original intensity space. To give a higher resolution to lower percentiles, we use the logarithm of this percentile, and call it the Logarithm Intensity Percentile (LIP) feature. Here $\epsilon$ is a threshold ($\epsilon = 0.01$ in this paper) that bounds the percentile away from zero before the logarithm is taken.

With regard to the region $R$, we make it a parallelogram covering multiple rows, columns, diagonals, and inverse diagonals. The number of rows, columns, diagonals, and inverse diagonals in $R$ is made to be a multiple of the estimated stroke width. Finally, we also compute the LIP feature with respect to the entire image, as well as the maximum percentile among all previously extracted LIP features. The figure below shows the original document with its corresponding features in the LIP spaces. As one can see, the LIP space indeed provides more discriminative power.

LIP features for image DIBCO2011_HW1. (a) original image; (b) global LIP; (c)-(e) LIP along row, column, and diagonal; and (f) max LIP of all directions.
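A minimal sketch of the LIP computation over a generic square window; the paper's directional, stroke-width-sized regions are replaced here by a square window for simplicity.

```python
import numpy as np

def lip_feature(gray, window=32, eps=0.01):
    """Log of the intensity percentile of each pixel within its local window."""
    h, w = gray.shape
    pad = window // 2
    padded = np.pad(gray, pad, mode='reflect')
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            region = padded[y:y + window, x:x + window]
            pct = np.mean(region <= gray[y, x])          # rank (percentile) within R
            out[y, x] = np.log(max(pct, eps))            # higher resolution for low percentiles
    return out
```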

Relative Darkness Index Features

Inspired by the great success of local ternary patterns (LTP) [22] in face recognition, we borrow their essence here. LTP relies on comparing a center pixel's intensity with each pixel in a set of $n$ neighbors lying on a circle of radius $r$, and the $i$-th code in a length-$n$ code string is defined as

$c_i = \begin{cases} +1, & I(x, y) > I(x + \Delta x_i, y + \Delta y_i) + t \\ -1, & I(x, y) < I(x + \Delta x_i, y + \Delta y_i) - t \\ 0, & \text{otherwise,} \end{cases}$

where $\Delta x_i$ and $\Delta y_i$ denote the relative coordinates of the $i$-th neighbor w.r.t. the center pixel, and $t$ is a preset tolerance. However, the number of possible LTP codes is often too large to encode effectively. Though one may reduce this number by treating all shift-equivalent codes as one, or by separating a ternary code into two binary codes, we find that the simple frequency count of each code value in a code string already reveals many intrinsic properties of pixels, and we call these counts the Relative Darkness Index (RDI) features. Precisely, given a code value $\mathcal{C} \in \{-1, 0, +1\}$ and $n$ neighbors on a circle of radius $r$, the RDI feature can be defined as

$X^{{\rm RDI} \mid \mathcal{C}, n}(x, y) = \sum_{i=1}^{n} \mathbb{1}\{c_i = \mathcal{C}\}.$

As one can see from Figure 3(c)-(e), most of the nearly homogeneous background regions have high code 0 indices, pixels close to strong edges are dominated by code +1 indices, and foreground text pixels respond strongly on code -1 indices. To further enhance RDI's discriminative power, we also compute the ratios of one code count to the sum of itself and another code count (see Figure 3(f)-(h)).

Figure 3: RDI features for image DIBCO2013_PR05 (darker pixels indicate a value close to 0). (a) original color image; (b) original image; (c)-(e) RDI feature $X^{{\rm RDI}\mid\mathcal{C}, 8}$ for $\mathcal{C} \in \{0, -1, +1\}$, respectively; (f) $X^{{\rm RDI}\mid\mathcal{C}=+1, 8} / X^{{\rm RDI}\mid\mathcal{C}\in\{0,+1\}, 8}$; (g) $X^{{\rm RDI}\mid\mathcal{C}=-1, 8} / X^{{\rm RDI}\mid\mathcal{C}\in\{-1,+1\}, 8}$; and (h) $X^{{\rm RDI}\mid\mathcal{C}=0, 8} / X^{{\rm RDI}\mid\mathcal{C}\in\{-1,0\}, 8}$.
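A minimal sketch of the RDI computation for 8 neighbors on a radius-1 circle; the tolerance value is illustrative, and the sign convention follows the behavior described above (text pixels scoring high on the code -1 index).

```python
import numpy as np

def rdi_features(gray, t=5):
    """Fraction of the 8 neighbors whose LTP code is -1, 0, or +1 at each pixel."""
    img = gray.astype(np.float64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    counts = {-1: np.zeros_like(img), 0: np.zeros_like(img), 1: np.zeros_like(img)}
    padded = np.pad(img, 1, mode='reflect')
    center = img
    for dy, dx in offsets:
        neighbor = padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
        # +1: center brighter than neighbor, -1: center darker, 0: similar within tolerance
        code = np.where(center > neighbor + t, 1, np.where(center < neighbor - t, -1, 0))
        for c in (-1, 0, 1):
            counts[c] += (code == c)
    return {c: counts[c] / 8.0 for c in (-1, 0, 1)}      # relative darkness indices
```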

Other Features

Besides the features discussed above, we also extract features from global image statistics, including the mean and standard deviation of the entire image's intensities, the mean and standard deviation of the percentile image, the 32 bins of the normalized intensity histogram (summing to 1), and the 32 bins of a normalized logarithmic histogram.
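A rough sketch of these global features; the exact form of the normalized logarithmic histogram (the log offset and its normalization) is an assumption of this illustration.

```python
import numpy as np

def global_features(gray, percentile_img):
    """Global mean/std, percentile mean/std, and 32-bin (log-)histograms."""
    feats = [gray.mean() / 255.0, gray.std() / 255.0,
             percentile_img.mean(), percentile_img.std()]
    hist, _ = np.histogram(gray, bins=32, range=(0, 255))
    hist = hist / hist.sum()                             # normalized to sum to 1
    log_hist = np.log(hist + 1e-6)                       # log histogram (offset is an assumption)
    log_hist = log_hist / np.abs(log_hist).sum()         # normalization form is an assumption
    return np.concatenate([feats, hist, log_hist])
```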

3 Training and Testing Settings

In our experiments, we use the widely adopted Document Image Binarization Contest (DIBCO) datasets from 2009 to 2014 [8]-[13] as our training and testing data, 76 images in total. We adopt a leave-one-out strategy: we pick the DIBCO image set of a particular year as the testing set and use the remaining years as the training set.

Feature Summary

We summarize all used features, their dimensions, and the corresponding normalization in Table 1. Here, the stroke width can be estimated via various methods; we use Su's method [3]. 'Scale' indicates the side length of the local square region $R$.

Table 1: Used Features
Type Scale Dimension Normalization
Local int. N/a 1 divide by 255
Otsu diff. N/a 1 divide by 255
Local avg./std. 1,2,4,8 4/4 divide by 255
Su/Howe 1,1,2,4 4/4 MinMax
ETNI/LTSI 1,2,4,8 4/4 N/a
LIP 1,1,2,4,8 1+44+1 N/a
RDI 1,1,2,4,8 56 N/a
Global int. avg./std. N/a 1/1 divide by 255
Global perc. avg./std. N/a 1/1 N/a
Global int./perc. loghist. N/a 32/32 N/a
Total 142

Sampling Strategy

Selecting training samples is essential in this task. First, one may not be able to handle the full training set. The 76 images contain more than 80 million pixels in total; assuming each feature is stored in float32 format, storing the training features alone would require roughly 256 GB of memory, clearly beyond the capacity of most computers today. Second, the training data is imbalanced: both the background non-text class and the foreground text class actually cover different subclasses [19], while nearly homogeneous background and foreground dominate the training data.

To address both problems, we first artificially classify all pixels in an image into 16 subclasses, each represented by a 4-bit string, where the bits indicate whether or not the pixel 1) lies in Otsu's foreground, 2) lies in Niblack's foreground, 3) lies within a small distance of reference image edges, and 4) lies in the reference annotated foreground. We then draw the same number of random samples from each subclass. Figure 4 illustrates extracted samples that balance the foreground and background subclasses.

Figure 4: Sampling strategy. (a) pixels with subclass labels for image DIBCO2012-HW02 (each color denotes a subclass); (b) samples extracted from DIBCO2012-HW02 with balanced subclasses (red/green dots indicate background/foreground).
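A minimal sketch of the subclass-balanced sampling; the per-subclass sample count and the encoding of the 4-bit subclass label are illustrative placeholders.

```python
import numpy as np

def balanced_sample(subclass_bits, n_per_class=600, rng=None):
    """subclass_bits: HxW uint8 array whose 4 bits encode (Otsu fg, Niblack fg,
    near a reference edge, reference fg).  Returns flat pixel indices with the
    same number of random samples drawn from each non-empty subclass."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat = subclass_bits.ravel()
    picked = []
    for label in range(16):                              # 16 possible 4-bit subclasses
        idx = np.flatnonzero(flat == label)
        if idx.size:
            picked.append(rng.choice(idx, size=min(n_per_class, idx.size), replace=False))
    return np.concatenate(picked)
```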

Training and Testing Strategies

In all of the following experiments, we perform a two-pass training. We first extract 9,600 samples (subclass balanced) from each training image and train a simple classifier, e.g., Gaussian Naive Bayes. We use this classifier to decode all training images, extract an additional 9,600 erroneous samples (subclass balanced) from each image, and use all extracted samples to train a more complicated scikit-learn [24] ExtraTrees classifier [25]. Note that in total we extract 19,200 samples per image, which accounts for only roughly 1.5% of all samples. Classifier parameters are obtained from 10-fold cross-validation using all samples. A final classifier is trained using all extracted samples and the validated parameters. Figure 5 plots the feature importance of each feature type in terms of the overall contribution and the average per-dimension contribution. As one can see, RDI, global intensity histogram, and LIP are the three most useful feature categories in terms of overall contribution, and Su, LTSI, and RDI are the three best in terms of per-dimension contribution.
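A hedged sketch of this two-pass procedure with scikit-learn; feature extraction, subclass balancing of the hard examples, and the hyper-parameters are placeholders rather than the paper's exact settings.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import ExtraTreesClassifier

def two_pass_train(first_pass_X, first_pass_y, all_pixel_X, all_pixel_y, n_hard=9600):
    # Pass 1: a simple classifier on the initial balanced samples.
    nb = GaussianNB().fit(first_pass_X, first_pass_y)
    # Decode all training pixels and collect erroneous ones as hard examples.
    wrong = nb.predict(all_pixel_X) != all_pixel_y
    hard_idx = np.flatnonzero(wrong)[:n_hard]            # subclass balancing omitted here
    X = np.vstack([first_pass_X, all_pixel_X[hard_idx]])
    y = np.concatenate([first_pass_y, all_pixel_y[hard_idx]])
    # Pass 2: the stronger ExtraTrees classifier on the combined samples.
    clf = ExtraTreesClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
    return clf
```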

In testing, we use the final classifier to predict the class label of every pixel in a testing image. Depending on the size of the image, the decoding time varies between 5 and 30 seconds.

Figure 5: Feature importance. Left: overall importance of each feature type; Right: per-dimension feature importance for each feature type.

4 Experimental Results

Performance on DIBCO Datasets

Table 2 lists the performance of our proposed supervised binarization solution on the DIBCO 2012 [11], 2013 [12], and 2014 [13] datasets using the standard metrics F1-score, peak signal-to-noise ratio (PSNR), and distance reciprocal distortion (DRD); metric definitions can be found in [8]. As we can see, our performance is comparable to the top five methods. We also notice that our binarization classifier's performance is very stable across all three datasets; in particular, it keeps the DRD score below 3 in all cases. Sample decoding results are compared to the top two contest methods in the figure below. As one can see, our supervised solution successfully learned to handle difficult cases: 1) faded text; and 2) text on a dirty background.

Table 2: Performance Evaluations on DIBCO Datasets
Method Contest Rank F1% PSNR DRD
DIBCO 2012 [11]:
Lelore et al.'s [11] 2 92.85 20.57 2.660
[2] 3 91.54 20.14 3.048
Nina's [11] 4 90.38 19.30 3.348
Yazid et al.'s [11] 5 91.85 19.65 3.056
Ours N/a 92.01 19.92 2.601
DIBCO 2013 [12]:
Su et al.'s method [12] 1 92.12 20.68 3.100
[5] 2 92.70 21.29 3.180
[20] 3 91.81 20.68 4.020
[26] 4 91.69 20.54 3.590
[23] 5 90.92 19.32 3.910
Ours N/a 91.40 20.13 2.637
DIBCO 2014 [13]:
Mesquita et al.'s [13] 1 96.88 22.66 0.902
[5] 2 96.63 22.40 1.001
[27] 3 93.35 19.45 2.194
Ziaratban et al.'s [13] 4 89.24 18.94 4.502
Mitianoudis et al.'s [13] 5 89.77 18.49 4.502
Ours N/a 92.69 19.47 2.571
Binarization results for images DIBCO2012_H07 and DIBCO2014_HW07. (a) original images; (b) reference binarized images (highlighted red regions indicate disagreements); (c) results of contest rank 1; (d) results of contest rank 2; and (e) our results.

Learning Curve

As mentioned previously, only about 1.5% of all available training samples are used in our experiments. We investigate the relationship between the number of training samples and the binarization performance using the test set of DIBCO 2012, as shown in Table 3. As in many pattern recognition problems, the improvement in binarization performance becomes smaller as the number of samples increases.

Table 3: Performance vs. Number of Training Samples
#Samples 1,920 5,760 9,600 13,440 15,360 17,280 19,200
F1%
PSNR 19.64 19.81 19.86 19.86 19.88 19.93 19.92
DRD 2.797 2.689 2.637 2.634 2.618 2.599 2.601

Document Binarization in the Wild

Although the images in the DIBCO datasets already cover a wide range of variations, there are clearly more variations and combinations of variations that are not included in the DIBCO training data. We therefore test our learned classifier on out-of-domain document images and observe satisfactory results (see the figure below).

Binarization results on out-of-domain data.

5 Conclusion

In this paper we investigate a document binarization solution based on supervised learning. Unlike previous efforts, this solution is parameter-free and fully trainable. Our experimental results show that one can learn a reasonably good binarization decision function from a small set of carefully selected training data. Such a learned decision function not only works well on in-domain data but also generalizes to out-of-domain data. In future work, we will explore several interesting directions, such as more discriminative features (e.g., image moments and connected component attributes) and on-the-fly classifier adaptation.

References

  1. N. Otsu, “A threshold selection method from gray-level histograms,” Automatica, vol. 11, no. 285-296, pp. 23–27, 1975.
  2. B. Su, S. Lu, and C. L. Tan, “Binarization of historical document images using the local maximum and minimum,” in Proceedings of the 9th IAPR International Workshop on Document Analysis Systems. ACM, 2010, pp. 159–166.
  3. ——, “Robust document image binarization technique for degraded document images,” Image Processing, IEEE Transactions on, vol. 22, no. 4, pp. 1408–1417, 2013.
  4. N. R. Howe, “A laplacian energy for document binarization,” in Document Analysis and Recognition (ICDAR), 2011 International Conference on. IEEE, 2011, pp. 6–10.
  5. ——, “Document binarization with automatic parameter tuning,” International Journal on Document Analysis and Recognition (IJDAR), vol. 16, no. 3, pp. 247–258, 2013.
  6. W. Niblack, An Introduction to Digital Image Processing. Prentice-Hall, 1986.
  7. J. Sauvola and M. Pietikäinen, “Adaptive document image binarization,” Pattern recognition, vol. 33, no. 2, pp. 225–236, 2000.
  8. B. Gatos, K. Ntirogiannis, and I. Pratikakis, “Icdar 2009 document image binarization contest (dibco 2009),” in Document Analysis and Recognition (ICDAR), 2009 International Conference on, vol. 9, 2009, pp. 1375–1382.
  9. I. Pratikakis, B. Gatos, and K. Ntirogiannis, “H-dibco 2010 - handwritten document image binarization competition,” in Frontiers in Handwriting Recognition (ICFHR), 2010 International Conference on. IEEE, 2010, pp. 727–732.
  10. ——, “Icdar 2011 document image binarization contest (dibco 2011),” in Document Analysis and Recognition (ICDAR), 2011 International Conference on, 2011, pp. 1506–1510.
  11. ——, “Icfhr 2012 competition on handwritten document image binarization (h-dibco 2012).” ICFHR, vol. 12, pp. 18–20, 2012.
  12. ——, “Icdar 2013 document image binarization contest (dibco 2013),” in Document Analysis and Recognition (ICDAR), 2013 International Conference on. IEEE, 2013, pp. 1471–1476.
  13. K. Ntirogiannis, B. Gatos, and I. Pratikakis, “Icfhr2014 competition on handwritten document image binarization (h-dibco 2014),” in 2014 14th International conference on frontiers in handwriting recognition, 2014, pp. 809–813.
  14. ——, “A combined approach for the binarization of handwritten document images,” Pattern Recognition Letters, vol. 35, pp. 3–15, 2014.
  15. X. Peng, H. Cao, R. Prasad, and P. Natarajan, “Text extraction from video using conditional random fields,” in Document Analysis and Recognition (ICDAR), 2011 International Conference on, Sept 2011, pp. 1029–1033.
  16. S. Lu, B. Su, and C. L. Tan, “Document image binarization using background estimation and stroke edges,” International journal on document analysis and recognition, pp. 1–12, 2010.
  17. J. Sauvola, T. Seppanen, S. Haapakoski, and M. Pietikainen, “Adaptive document binarization,” in Document Analysis and Recognition, 1997, Proceedings of the Fourth International Conference on, vol. 1. IEEE, 1997, pp. 147–152.
  18. G. Lazzara and T. Géraud, “Efficient multiscale sauvola’s binarization,” International Journal on Document Analysis and Recognition (IJDAR), vol. 17, no. 2, pp. 105–123, 2014.
  19. B. Su, S. Lu, and C. L. Tan, “A learning framework for degraded document image binarization using markov random field,” in Pattern Recognition (ICPR), 2012 21st International Conference on. IEEE, 2012, pp. 3200–3203.
  20. R. F. Moghaddam, F. F. Moghaddam, and M. Cheriet, “Unsupervised ensemble of experts (eoe) framework for automatic binarization of document images,” in Document Analysis and Recognition (ICDAR), 2013 12th International Conference on. IEEE, 2013, pp. 703–707.
  21. H. Ziaei Nafchi, R. Farrahi Moghaddam, and M. Cheriet, “Phase-based binarization of ancient document images: Model and applications,” 2014.
  22. X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” Image Processing, IEEE Transactions on, vol. 19, no. 6, pp. 1635–1650, 2010.
  23. M. A. Ramírez-Ortegón, E. Tapia, L. L. Ramírez-Ramírez, R. Rojas, and E. Cuevas, “Transition pixel: A concept for binarization based on edge detection and gray-intensity histograms,” Pattern Recognition, vol. 43, no. 4, pp. 1233–1243, 2010.
  24. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  25. P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Machine learning, vol. 63, no. 1, pp. 3–42, 2006.
  26. T. Lelore and F. Bouchara, “Fair: A fast algorithm for document image restoration,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 8, pp. 2039–2048, Aug 2013.
  27. H. Z. Nafchi, R. F. Moghaddam, and M. Cheriet, “Historical document binarization based on phase information of images,” in Computer Vision - ACCV 2012 Workshops. Springer, 2013, pp. 1–12.