Towards a Multi-modal, Multi-task Learning based Pre-training Framework for Document Representation Learning

Abstract

In this paper, we propose a multi-task learning-based framework that utilizes a combination of self-supervised and supervised pre-training tasks to learn a generic document representation. We design the network architecture and the pre-training tasks to incorporate the multi-modal document information across text, layout, and image dimensions and allow the network to work with multi-page documents. We showcase the applicability of our pre-training framework on a variety of different real-world document tasks such as document classification, document information extraction, and document retrieval. We conduct exhaustive experiments to compare performance against different ablations of our framework and state-of-the-art baselines. We discuss the current limitations and next steps for our work and make the code available to promote future research in this direction.


1IBM Cloud, India
2IBM Research, India
{subhojeet,shamujum,himapatel}@in.ibm.com

1 Introduction

In the era of digitization, most businesses are turning towards leveraging artificial intelligence (AI) techniques to exploit the information contained in business documents. Traditional information extraction (IE) approaches utilize Natural Language Processing (NLP) methods to process the information from documents expressed in the form of natural language text Manevitz and Yousef (2001). However, documents contain rich multi-modal information that includes both the text and the document layout. The document layout organizes the textual information into different formats such as sections, paragraphs, tables, and multi-column arrangements, utilizing different font types, colors, positions, sizes, and styles. Further, important visual cues are conveyed through figures, charts, logos, and the overall appearance of the document page. In general, the information in a document spans multiple pages, which gives rise to a variety of complex document layouts that can be observed in scientific articles, invoices, receipts, emails, contracts, presentations, blogs, etc. Analyzing and understanding these documents is a challenging endeavor and requires a multi-disciplinary perspective combining NLP, computer vision (CV), and knowledge representation to learn a generic document representation suitable for different downstream applications DI (2019).

Recent approaches towards document analysis have explored frameworks that utilize information from the document text, layout, and image in different capacities Majumder et al. (2020); Katti et al. (2018); Yang et al. (2017) for specific document tasks. Majumder et al. (2020) propose joint training of document text and structure for the task of IE from form-like documents, while Yang et al. (2017) combine text and image information for the task of semantic segmentation of documents. Their proposed frameworks optimize network performance with respect to a specific downstream task and are therefore not suitable for other tasks. To address this limitation, Xu et al. (2020) proposed a pre-training technique based on the BERT transformer architecture Devlin et al. (2018) to combine text and layout information from scanned documents. They showcase the applicability of their pre-trained network on different downstream tasks, further utilizing the image information during fine-tuning for each task. Although Xu et al. (2020) present a pre-trained framework to learn document representations, there are two limitations to their approach: (i) the framework only allows for single-page documents, and (ii) the proposed pre-training tasks cannot utilize image information for learning the document representation. In real-world scenarios, multi-page documents are common, with different pages potentially containing different information across the text, layout, and image dimensions. Also, the page image captures the overall layout beyond the appearance of the text tokens in the document. Thus, for serving different document tasks, a unified pre-training framework that learns a generic document representation from all three modalities and works on multi-page documents is necessary.

Figure 1: Applicability of our framework on multi-page documents for different downstream tasks - (a) Document Classification (b) Information Extraction (c) Document Retrieval


In this paper, we propose such a generic document representation learning framework that takes as input the document text, layout, and image information and is applicable to different document tasks. Specifically, we encode the multi-modal document information as: (i) text and position embeddings similar to BERT Devlin et al. (2018), (ii) text token 2D position embeddings to capture the layout, (iii) text token image embeddings to capture their appearance, and (iv) document page image and position embeddings, so that the learnt document representation can handle multi-page documents. In order to handle the large token sequences arising from multi-page documents, we utilize the Longformer model proposed by Beltagy et al. (2020) as the backbone of our framework, which introduces an attention mechanism that scales linearly with the sequence length. Following the work of Xu et al. (2020), we utilize the Masked Visual Language Modelling (MVLM) task and a document classification task that enforce joint pre-training of all the input embeddings. To further ensure the network learns from the image embeddings, we introduce two additional self-supervised pre-training tasks in our framework: (i) document topic modeling (DTM) and (ii) document shuffle prediction (DSP). Similar to the work of Cosma et al. (2020), we mine the latent topics from the document text and, for the DTM task, train our framework to predict the topic distribution using only the document page image embeddings. DSP, on the other hand, involves shuffling the page image order while keeping the other embeddings intact for randomly sampled documents during training, and the network must identify whether the document has been tampered with. While the DSP task enforces joint pre-training of the image embeddings with the text and layout embeddings, the DTM task helps to learn richer page image embeddings. As explored by different approaches in the prior art Ruder (2017), we employ a multi-task learning framework that simultaneously trains the objectives of the different pre-training tasks to learn shared representations across the text, layout, and image modalities of documents. We train our network on the publicly available ArXiv dataset Arxiv (2020), which contains millions of research articles spanning a variety of STEM domains such as mathematics, physics, computer science, etc.

Fig. 1 illustrates the applicability of our pre-trained embeddings for different document tasks. We evaluate the performance of our framework on the following tasks and datasets: (i) Form Understanding and IE from scanned forms (FUNSD dataset) Guillaume Jaume (2019), (ii) Document Classification (RVL-CDIP and ArXiv datasets) Harley et al. (2015); Arxiv (2020), (iii) Table Token Classification Gao et al. (2019), and (iv) Document Retrieval Arxiv (2020). We conduct an exhaustive set of experiments to analyze the performance of our pre-trained embeddings against state-of-the-art (SOTA) baselines and ablations of our framework. We beat the SOTA baselines trained on comparable dataset sizes and network parameters for most of these tasks. In summary, the main contributions of this work are:

  • We introduce a self-supervised, multi-task learning framework that combines information across text, layout, and image modalities to learn a generic document representation applicable to different document tasks.

  • We introduce topic-modeling and document shuffling as self-supervised tasks in addition to BERT pre-training tasks Devlin et al. (2018) in our multi-task learning framework to learn shared representations across document text, layout, and image modalities.

  • Our proposed framework works with multi-page documents in contrast to most of the prior-art approaches which are limited to single-page documents.

  • We conduct exhaustive experiments to compare performance against different ablations of our framework and SOTA baselines to showcase the applicability of our framework on different document tasks.

2 Approach

In this section, we describe the details of our proposed architecture. We start by briefly reviewing the Longformer architecture Beltagy et al. (2020), and introduce the modifications incorporated to encode rich visual, text, and layout information present in multi-page PDF documents. Finally, we describe the multi-tasking framework used for pre-training our architecture.

Figure 2: Demonstration of the proposed architecture encoding two sample pages from a PDF document. At each time-step, the input to the architecture is a text token, a page id, the bounding box coordinates, and the entire page image corresponding to the token. The network encodes the token, layout, and image inputs through the Longformer architecture, and the encoded representations are used by multiple tasks for prediction to compute the loss during training.

2.1 The Longformer model

Transformer-based architectures such as BERT Devlin et al. (2018) and RoBERTa Liu et al. (2019) use multi-layered bi-directional self-attention and pre-train on large-scale datasets to learn encoder representations. However, the self-attention mechanism scales quadratically with sequence length in both computation and memory. To address these constraints, Longformer Beltagy et al. (2020) introduces a combination of local windowed attention and task-motivated global attention that scales linearly with sequence length. The LongformerBASE architecture pre-trains with Masked Language Modelling (MLM) from the RoBERTaBASE checkpoint Liu et al. (2019) with sequence lengths of up to 4096 tokens, significantly higher than BERT's pre-training sequence length of 512 tokens. This makes the Longformer architecture ideal for encoding multi-page documents, where even a few pages can amount to several thousand tokens. For our network, we use the LongformerBASE variant, which has 12 layers and a sliding-window attention of length 512.
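As a point of reference, the snippet below is a minimal sketch (not the authors' code) of how a Longformer backbone encodes sequences far longer than BERT's 512-token limit, using the Hugging Face transformers library and the public "allenai/longformer-base-4096" checkpoint.

```python
import torch
from transformers import LongformerModel, LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")  # 12 layers, window 512

long_text = " ".join(["token"] * 3000)      # stand-in for a multi-page document
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=4096)

# Global attention on the CLS position; all other tokens use sliding-window attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

with torch.no_grad():
    outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)      # (1, seq_len, 768)
```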

2.2 The Proposed architecture

Figure 2 showcases our proposed network architecture. For a multi-page document, we parse its constituent tokens using standard parsers such as PdfPlumber (2020); PyOCR (2020) and store the token text and the corresponding bounding boxes along with the page numbers and page images. Every document is encoded as a sequence of text tokens (drawn from a fixed vocabulary), page numbers (up to a maximum number of page embeddings), bounding box coordinates, and the image of the entire page corresponding to each token. Each bounding box is described by the coordinates of its upper-left and lower-right corners together with its height and width. We use four embedding layers to encode the layout information: one each for the X dimension, the Y dimension, the height, and the width. Coordinates along the same dimension share the embedding layer.
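The following is a minimal sketch of how the text, 2D-layout, and page-number embeddings can be summed per token. The vocabulary size, the 0..1000 coordinate grid, and the hidden size of 768 are illustrative assumptions (the paper does not state the quantization scheme), and the sinusoidal initialization of the page embedding as well as the image embedding (sketched in the next block) are omitted here.

```python
import torch
import torch.nn as nn

class MultiModalTokenEmbedding(nn.Module):
    def __init__(self, vocab_size=50265, hidden=768, grid=1001, max_pages=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        self.x_emb = nn.Embedding(grid, hidden)   # shared by x0 and x1
        self.y_emb = nn.Embedding(grid, hidden)   # shared by y0 and y1
        self.w_emb = nn.Embedding(grid, hidden)   # bounding-box width
        self.h_emb = nn.Embedding(grid, hidden)   # bounding-box height
        self.page_emb = nn.Embedding(max_pages, hidden)  # paper: sinusoidal init (omitted here)

    def forward(self, token_ids, boxes, page_ids):
        # boxes: (batch, seq, 4) holding quantized (x0, y0, x1, y1) with x1 >= x0, y1 >= y0
        x0, y0, x1, y1 = boxes.unbind(-1)
        layout = (self.x_emb(x0) + self.x_emb(x1) +
                  self.y_emb(y0) + self.y_emb(y1) +
                  self.w_emb(x1 - x0) + self.h_emb(y1 - y0))
        return self.word_emb(token_ids) + layout + self.page_emb(page_ids)
```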

Novel to our approach, we use a ResNet-50 He et al. (2016) architecture combined with a Feature Pyramid Network (FPN) Lin et al. (2017) to generate multi-level image embeddings for the page corresponding to each token. For an input page image, the ResNet+FPN layer produces down-sampled feature maps, and the bounding boxes, originally expressed in page-image coordinates, are linearly scaled to match the feature-map dimensions. We select the final layer of the FPN network, which carries the highest-level semantic representations. To generate the final image embedding for the region indicated by the bounding box coordinates, an RoI pooling operation Dai et al. (2016) with a fixed output size is performed on the page image feature map using the scaled bounding box coordinates. RoI pooling is widely used in object detection frameworks to pass multiple region proposals generated by the network through a single feature map and extract fixed-sized embeddings. Using RoI pooling allows us to efficiently select from the page feature map the embeddings for the regions indicated by all the token bounding boxes of a particular page. For each token, we also pass the corresponding page number through an embedding layer initialized with sinusoidal embeddings, as described in Vaswani et al. (2017). The image, layout, and page embeddings are added to the existing text embeddings and passed to the Longformer encoder to generate sequence representations for the document.
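Below is a simplified sketch of the per-token image embedding. The paper combines ResNet-50 with an FPN; for brevity this sketch RoI-pools directly from the last ResNet-50 feature map (no FPN), and the 3x3 pooling size and 768-d projection are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import roi_align

# ResNet-50 trunk as a page-image feature extractor (FPN omitted in this sketch).
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True
trunk = nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc; output stride 32
proj = nn.Linear(2048 * 3 * 3, 768)                    # flatten pooled RoI, project to encoder width

def token_image_embeddings(page_image, boxes):
    """page_image: (3, H, W) float tensor; boxes: (N, 4) float tensor of pixel (x0, y0, x1, y1)."""
    fmap = trunk(page_image.unsqueeze(0))              # (1, 2048, H/32, W/32)
    rois = torch.cat([torch.zeros(len(boxes), 1), boxes], dim=1)  # prepend batch index -> (N, 5)
    pooled = roi_align(fmap, rois, output_size=(3, 3),
                       spatial_scale=1.0 / 32)         # map pixel boxes onto the feature map
    return proj(pooled.flatten(1))                     # (N, 768): one image embedding per token box
```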

For the special tokens CLS and SEP, which are predominantly used in BERT variants for sequence inputs, we use the bounding box of the entire page, so that their image embedding captures the whole page; this benefits downstream tasks that require the representation of the CLS token for prediction. For all our experiments, we freeze all except the last layer of Resnet-50.
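A short sketch of the freezing scheme follows; the last trainable block is assumed here to be ResNet-50's `layer4`, since the paper does not name the exact layer.

```python
import torchvision

resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True
for name, param in resnet.named_parameters():
    param.requires_grad = name.startswith("layer4")  # only the last block stays trainable
```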

2.3 Multi-task learning framework

We use a multi-task learning framework to pre-train our network on a combination of three self-supervised tasks that are posed as classification tasks along with a supervised category classification task. At each training step, we optimize all the pretraining tasks in a joint fashion. For each pretraining task, the task-specific inputs are encoded according to their respective input strategies, and the task-specific loss is calculated. The gradients are computed with respect to each task-specific loss and accumulated across all tasks to be optimized using the AdamW optimizer Loshchilov and Hutter (2018). All tasks use cross-entropy loss for classification except DTM which uses soft cross-entropy loss.
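The schematic below sketches one such joint step with hypothetical task and head names: each task contributes its own loss, gradients accumulate across tasks, and a single AdamW step updates the shared parameters. The soft cross-entropy used for DTM is written out explicitly.

```python
import torch
import torch.nn.functional as F
from torch.optim import AdamW

def soft_cross_entropy(logits, target_probs):
    # Cross-entropy against a soft probability distribution (used for the DTM task).
    return -(target_probs * torch.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def training_step(model, heads, batches, optimizer):
    """batches: dict mapping task name ('mvlm', 'clf', 'dsp', 'dtm') to its task-specific batch."""
    optimizer.zero_grad()
    for task, batch in batches.items():
        hidden = model(**batch["inputs"])          # shared multi-modal encoder
        logits = heads[task](hidden)               # task head (e.g., CLS position for CLF/DSP)
        if task == "dtm":
            loss = soft_cross_entropy(logits, batch["topic_probs"])
        else:
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                   batch["labels"].view(-1), ignore_index=-100)
        loss.backward()                            # gradients accumulate across all tasks
    optimizer.step()

# optimizer = AdamW(params, lr=3e-5)  # shared encoder + all task heads
```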

Pre-training dataset:

We use the first 130k PDF documents from Arxiv Arxiv (2020), comprising scientific articles belonging to 16 different categories such as mathematics, physics, computer science, etc., for pre-training our network. We use a train, val, test split of 110k, 10k, 10k respectively. We process the documents using PdfPlumber (2020) to extract the text tokens, corresponding bounding boxes, page numbers, and page images, along with the document category, to feed as input to our network.
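A minimal sketch (not the authors' exact pipeline) of this extraction step with pdfplumber is shown below; the five-page cap mirrors the limit used later during pre-training.

```python
import pdfplumber

def parse_document(pdf_path, max_pages=5):
    records, page_images = [], []
    with pdfplumber.open(pdf_path) as pdf:
        for page_no, page in enumerate(pdf.pages[:max_pages]):
            page_images.append(page.to_image(resolution=72).original)  # PIL image of the page
            for word in page.extract_words():
                records.append({
                    "text": word["text"],
                    "bbox": (word["x0"], word["top"], word["x1"], word["bottom"]),
                    "page": page_no,
                    "page_size": (page.width, page.height),
                })
    return records, page_images
```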

Pre-training tasks:

For all the tasks, we follow the BERT Devlin et al. (2018) sequencing pattern where a CLS and SEP token are passed at the start and end of a sequence respectively.

1. Masked Visual Language Modelling (MVLM): The BERT model utilizes Masked Language Modelling (MLM), where input tokens are masked during pre-training and predicted as output using the context from the non-masked tokens. Compared to MLM, MVLM masks the input tokens by replacing them with a designated MASK token, but keeps the layout and visual information provided by the bounding boxes and the image embedding generated from the ResNet layers. The model is trained to predict the tokens at the masked positions using the context from all the other embeddings. Following the same masking strategy as the BERT model, we mask 15% of the input tokens during pre-training.

2. Document Category Classification (CLF): Each document in the Arxiv dataset belongs to one of 16 categories denoting the relevant subject area of the document. The category prediction is performed at the CLS token by passing the output of the CLS token into a fully-connected (FC) layer appended with a softmax classification layer.

3. Document Shuffle Prediction (DSP): For DSP, given a document, we randomly shuffle the order of the page images while preserving the order of other embeddings before passing to the network. Thus, although the token text and bounding box embeddings are in order, the corresponding token image embeddings are uncorrelated since the page images are shuffled. For a given document, the page images are shuffled with a probability of 0.5, and the model is trained to predict whether the input document pages are shuffled or intact using all the embeddings. We argue that, in order to successfully train on the DSP task, the network is forced to correlate the token image embeddings with the corresponding token text and bounding box embeddings.

4. Document Topic Modelling (DTM): Although training on the DSP task enforces the network to correlate the image and text modalities at the token level, we introduce the DTM task to learn improved page image representations. Similar to the work of Cosma et al. (2020); Gomez et al. (2017), the objective is to learn discriminative visual features by employing the semantic context as soft labels during training. We encode the semantic context of each document as a probability distribution over a set of latent topics. We utilize the Latent Dirichlet Allocation (LDA) algorithm Blei et al. (2003) to mine the latent topics over the set of text tokens parsed from the Arxiv training set. During training, the vector of topic probabilities is computed for each document using the learnt LDA model. For the DTM task, we pass the page images of the document to our network, while a single MASK token is passed for the text embedding and the bounding box coordinates of the complete page are passed as the layout embedding. A soft cross-entropy loss is applied to the predicted output of the network against the vector of topic probabilities for learning. Since the Arxiv dataset has 16 subject areas as document categories, we choose to mine 30 latent topics to capture a more granular categorization of the documents. A sketch of how the DSP and DTM training targets can be constructed is given after this list.
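The sketch below shows one way to build the DSP and DTM training targets. The scikit-learn LDA pipeline, the vocabulary cap, and the random seed are assumptions for illustration; the paper does not specify its LDA implementation.

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# --- DSP: shuffle the page-image order with probability 0.5, keep text/layout order ---
def make_dsp_example(page_images):
    label = int(random.random() < 0.5)        # 1 = shuffled (tampered), 0 = intact
    if label and len(page_images) > 1:
        page_images = random.sample(page_images, len(page_images))  # permuted copy
    return page_images, label

# --- DTM: mine 30 latent topics over the training-set text; the per-document
# --- topic distribution becomes the soft target for the soft cross-entropy loss ---
train_texts = ["..."]                          # token text of each pre-training document
vectorizer = CountVectorizer(max_features=50000, stop_words="english")
counts = vectorizer.fit_transform(train_texts)
lda = LatentDirichletAllocation(n_components=30, random_state=0)
topic_probs = lda.fit_transform(counts)        # (num_docs, 30); rows sum to 1
```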

Model Input Pre-training Tasks Pre-training Size Precision Recall F1
Our Text MLM + CLF 110K (5 epochs) 90.72% 90.40% 90.46%
Our Text + Layout MVLM + CLF 110K (5 epochs) 90.79% 90.72% 90.71%
Our All MVLM + CLF 110K (5 epochs) 98.92% 98.90% 98.90%
Our All ALL 110K (5 epochs) 98.63% 98.92% 98.93%
BERTBASE Text - - 91.56% 91.47% 91.43%
Table 1: Model performance numbers for the Arxiv Classification task.
Model Input Pre-training Tasks Pre-training Size Accuracy
Our Text MLM + CLF 110K (5 epochs) 84.48%
Our Text + Layout MVLM + CLF 110K (5 epochs) 86.55%
Our All MVLM + CLF 110K (5 epochs) 91.22%
Our All ALL 110K (5 epochs) 91.72%
LayoutLMBASE Text + Layout MVLM 500K (6 epochs) 91.25%
LayoutLM*BASE Text + Layout MVLM + MDC 1M (6 epochs) 94.31%
VGG-16 Afzal et al. (2017) Image - - 90.97%
Stacked CNN Ensemble Das et al. (2018) Image - - 92.21%
LadderNet Sarkhel and Nandi (2019) Image - - 92.77%
Multimodal Ensemble Dauphinee et al. (2019) Text + Image - - 93.07%
Table 2: Model performance numbers for the RVL-CDIP classification task. All our models are pre-trained on the Arxiv dataset, LayoutLM is pre-trained on the IIT-CDIP dataset, whereas all other baselines perform no pre-training. LayoutLM*BASE uses Resnet-101 image embeddings during fine-tuning.

3 Evaluation and Results

3.1 Datasets

1. FUNSD: The FUNSD dataset Guillaume Jaume (2019) consists of 199 fully annotated, scanned single-page forms. Semantic entities comprising multiple tokens are annotated with one of the labels ‘question’, ‘answer’, ‘header’, or ‘other’. Additionally, the text, the bounding boxes for each token, and links to other entities are provided. The dataset has 149 train and 50 test images. We evaluate our network on the semantic labeling task and measure token-level precision, recall, and F1 scores.

2. RVL-CDIP: The RVL-CDIP dataset Harley et al. (2015) consists of scanned document images belonging to 16 classes such as letter, form, email, resume, memo, etc. The dataset has 320,000 training, 40,000 validation and 40,000 test images. The images are characterized by low quality, noise, and low resolution, typically 100 dpi. We evaluate our network on the document classification task using the 16 labels.

3. ICDAR19: The Track A Modern data released as part of the ICDAR19 dataset Gao et al. (2019) contains images from PDF documents such as scientific journals, forms, financial statements, etc. Each image is annotated with table bounding box coordinates. The training set contains 600 images while the test set contains 240 images and we perform token-level binary classification.

Model Input Pre-training Tasks Pre-training Size Precision Recall F1
Our Text MLM + CLF 110K (5 epochs) 77.25% 68.40% 69.66%
Our Text + Layout MVLM + CLF 110K (5 epochs) 75.45% 74.93% 75.15%
Our All MVLM + CLF 110K (5 epochs) 77.31% 76.50% 76.79%
Our All ALL 110K (5 epochs) 78.41% 77.35% 77.44%
LayoutLMBASE Text+Layout MVLM 500K (6 epochs) 66.50% 73.55% 69.85%
LayoutLM*BASE Text+Layout MVLM + CLF 1M (6 epochs) 71.01% 78.15% 74.41%
Table 3: Model performance numbers for the semantic labelling task on FUNSD dataset.

3.2 Model Pre-training

We initialize the “Longformer Encoder” and “word emb layer” shown in Figure 2 with the pre-trained weights from LongformerBASE (12 layers, hidden size 768) Beltagy et al. (2020). We utilize a global+sliding window attention of length 512. The weights of Resnet-50 are initialized using the Resnet-50 model pre-trained on the ImageNet dataset He et al. (2016). Across all pre-training and downstream tasks, we resize all page images to a fixed size and correspondingly scale the bounding box coordinates. During pre-training for the sequence classification tasks, we limit each document to a maximum of 5 pages and 500 tokens per page. For the different pre-training tasks, we use a batch size (BS) and gradient accumulation (GA) of: (i) MVLM & CLF (BS=32 & GA=2); (ii) DSP (BS=16 & GA=1); (iii) DTM (BS=16 & GA=1). We pre-train our architecture for 15K steps (5 epochs) with a learning rate of 3e-5 on a single NVIDIA Tesla V100 32GB GPU.

3.3 Experiment Setup

We evaluate our model on the following different downstream tasks to demonstrate its efficacy.

Document Classification: We finetune our model to perform multi-page document classification on the Arxiv dataset and single-page document classification on the RVL-CDIP dataset. For both tasks, each document is encoded as a sequence of tokens, bounding boxes, page images, page numbers, as shown in Figure 2. We use PdfPlumber (2020); PyOCR (2020) to extract the word-level tokens and bounding boxes. The category prediction is performed at the CLS token by passing its output through an FC+Softmax layer. We use a learning rate of 3e-5, (BS=12 & GA=4) for Arxiv and (BS=64 & GA=1) for RVL-CDIP, and we fine-tune our model and different ablations for 5 epochs for both datasets independently.

Form Understanding: We perform the semantic labeling task on the FUNSD dataset as a sequence labeling problem. Each form is treated as a single page document and sequenced as a list of tokens, bounding boxes, and the page image. For each token, we pass its output representation through an FC+Softmax layer to predict its category. We use token-level precision, recall, and F1 score as the evaluation metrics. For fine-tuning, we use BS=12, GA=1, a learning rate of 3e-5, and train for 20 epochs.
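A schematic sketch of this token-level head is given below; the class names and the 768-d hidden size are assumptions made for illustration, and the cross-entropy/softmax step is applied over these logits at training and inference time.

```python
import torch.nn as nn

class TokenClassificationHead(nn.Module):
    def __init__(self, hidden=768, num_labels=4):    # question / answer / header / other
        super().__init__()
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, sequence_output):
        # sequence_output: (batch, seq_len, hidden) from the multi-modal encoder
        return self.classifier(sequence_output)       # per-token logits; softmax/CE during training
```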

Table Token Classification: For this task, the model is fine-tuned as a sequence labeling problem to classify the tokens in a document as ‘table’ or ‘other’. The annotated table bounding boxes are used to generate the ground-truth label for each token detected by PyOCR (2020) in the document. Processing the document, generating the input embeddings, and generating the token-level predictions are performed in a similar fashion to the Form Understanding task. For fine-tuning, we use BS=4, GA=2, a learning rate of 3e-5, and train for 14 epochs.
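One reasonable labeling heuristic (not necessarily the authors' exact rule) is to mark a token as ‘table’ whenever its box lies inside an annotated table box, as sketched below.

```python
def inside(token_box, table_box, tol=0.0):
    tx0, ty0, tx1, ty1 = token_box
    bx0, by0, bx1, by1 = table_box
    return (tx0 >= bx0 - tol and ty0 >= by0 - tol and
            tx1 <= bx1 + tol and ty1 <= by1 + tol)

def label_tokens(token_boxes, table_boxes):
    # 1 = 'table', 0 = 'other'
    return [int(any(inside(tb, gt) for gt in table_boxes)) for tb in token_boxes]
```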

Document Retrieval: This task measures the multi-page document retrieval performance of our model. Similar to the classification task, we process the documents in the Arxiv dataset, and we fine-tune our pre-trained model with all inputs on all pre-training tasks and the BERTBASE model with only the text input on the Arxiv training set. We use the 10k documents from the Arxiv test set, split into a 2k query set and an 8k retrieval set. For a given query document, we use the fine-tuned embeddings from each model and compute its cosine similarity with each document in the retrieval set to rank the retrieved documents. We report the mean average precision (MAP) as well as the normalized discounted cumulative gain (NDCG) for evaluation.
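The sketch below shows the ranking and metric computation for one query, assuming a retrieved document counts as relevant when it shares the query document's Arxiv category (the paper does not spell out the relevance criterion); mAP and NDCG-10 are then averaged over the 2k queries.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def retrieve(query_embs, gallery_embs):
    sims = cosine_similarity(query_embs, gallery_embs)   # (num_queries, num_gallery)
    return np.argsort(-sims, axis=1)                     # ranked gallery indices per query

def average_precision(relevant):
    # relevant: binary (0/1) array over the gallery, in ranked order, for one query
    hits = np.cumsum(relevant)
    precisions = hits / (np.arange(len(relevant)) + 1)
    return (precisions * relevant).sum() / max(relevant.sum(), 1)

def ndcg_at_k(relevant, k=10):
    discounts = np.log2(np.arange(2, k + 2))
    gains = relevant[:k] / discounts
    ideal = np.sort(relevant)[::-1][:k] / discounts
    return gains.sum() / max(ideal.sum(), 1e-9)
```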

Pre-training Tasks (Our Model) Inference Ablations
All Only Text Only Image
MVLM + CLF 76.79% 73.64% 33.24%
ALL 77.44% 75.10% 40.12%
Table 4: Inference Ablation Results (F1 score)

3.4 Results and Discussion

Table 1 shows the results of multi-page document classification on the Arxiv dataset. A significant boost in performance can be observed with the introduction of the image embeddings into pre-training, compared against the Text and Text+Layout ablations and the BERT baseline. The addition of the DSP and DTM tasks to the pre-training further improves the performance marginally. The performance gain is also attributed to the underlying sequence encoder, LongformerBASE, whose attention mechanism supports up to 4096 tokens and thus enables processing multiple pages to build rich multi-modal contextual representations.

Table 2 shows the results of single-page document classification on the RVL-CDIP dataset. Although our best model beats the comparable LayoutLMBASE model pre-trained on 500K documents and the image-based VGG-16 model, task-specific approaches such as Stacked CNN Ensemble, LadderNet, and Multimodal Ensemble outperform our model. Since RVL-CDIP inherently contains low-quality images, these task-specific approaches propose clever network architectures that utilize discriminative multi-scale features Sarkhel and Nandi (2019), use multiple VGG-16 models to process different parts of the document image Das et al. (2018), or augment image features with raw text features Dauphinee et al. (2019) to achieve high classification performance. Further, it is known that Resnet-50 performs poorly on the RVL-CDIP dataset, and even a smaller network such as VGG-16 achieves better performance Afzal et al. (2017). It is worth noting that the best-performing LayoutLM*BASE model is pre-trained on 1M documents and fine-tuned using the Faster R-CNN embeddings for 30 epochs, while our model is pre-trained on 110k documents and fine-tuned for 5 epochs. Thus, we argue that, with an improved model for capturing image embeddings and further fine-tuning on the RVL-CDIP dataset, we can close the performance gap with the other baselines. We believe the negligible performance trade-off on the difficult RVL-CDIP dataset is justified by the applicability of our network to different tasks.

We present the results of our model fine-tuned on the FUNSD semantic labeling task in Table 3. We compare our model with LayoutLM Xu et al. (2020) configurations that are pre-trained using Text+Layout on the IIT-CDIP dataset Lewis et al. (2006) and fine-tuned using Text+Layout+Image on the FUNSD dataset. Our best model, pre-trained on all four tasks and all inputs, achieves an F1 score of 77.44%, outperforming the comparable LayoutLM*BASE model, which achieves an F1 score of 74.41%. We attribute the increase in performance to the inclusion of the RoI-pooled image embeddings for the bounding box region of each text token during pre-training, as well as to fine-tuning the Resnet-50 layers. The LayoutLM architecture has neither of these properties, since it does not utilize image embeddings during pre-training and only adds the image embeddings generated by a Faster R-CNN model to the pre-trained embeddings, without updating the weights of the Faster R-CNN model during fine-tuning. Further, our architecture pre-trains on only 110K documents compared to the presented LayoutLM settings, which use 500K and 1M documents. Thus, we argue that, even with a significantly smaller pre-training dataset, our model generalizes better on the FUNSD task by incorporating image embeddings during pre-training. It is worth noting that LayoutLMBASE pre-trained on 11M documents beats our best F1 score by about 1%.

We further investigate the usefulness of the DSP and DTM tasks introduced for pre-training. The objective of the DTM task is to learn richer image representations, while that of the DSP task is to jointly train across the image and text embeddings. We compare the performance of two models on the FUNSD task: (i) pre-trained with MVLM+CLF (OurMC) and (ii) pre-trained with all tasks (OurALL). For each model, we conduct two ablations during inference, where only the text or only the image embedding is used to make the prediction, while excluding the layout embeddings. As shown in Table 4, for both ablations, OurALL retains more performance than OurMC. This suggests that OurALL generates better visual and textual representations. For the image-only ablation, the performance drop for OurALL (37%) is smaller than that for OurMC (43%). This suggests that the image embeddings learnt by OurALL capture information from the text and layout embeddings better than those of OurMC. The trend is similar when using only the text embedding; however, the difference is not as significant, since both models share the MVLM and classification tasks, which are more adept at learning textual representations.

Similar to the FUNSD task, the table token classification task performs semantic labeling. However, the impact of jointly learning the text, layout, and image embeddings is much more evident from the results shown in Figure 3. Our model is able to correctly classify all the tokens belonging to tables with a negligible number of false positives. We obtain precision, recall, and F1 scores of 94.99%, 94.98%, and 94.97%, respectively, on the ICDAR 2019 test set. It is noteworthy that simply fine-tuning our model on the training set achieves promising results that prior-art approaches attain only by employing careful heuristics Gao et al. (2019).

Finally, we present the results of the multi-page document retrieval task on the Arxiv dataset in Table 5. Fine-tuned embeddings from our model significantly outperform those from the BERT model. The high values of MAP and NDCG-10 indicate that the retrieved samples are not only correct but also ranked higher than the incorrect ones for most of the queries. Although our model captures richer embeddings, the significant boost in performance is also attributed to the Longformer architecture, which is able to encode much more information across document pages compared to the vanilla BERT architecture.

Model MAP NDCG-1 NDCG-10
BERT 91.01% 90.08% 93.00%
OurALL 98.99% 98.94% 99.21%
Table 5: Results of Document Retrieval Task

4 Related Work

In recent years, prior-art approaches have explored document semantics, visual appearance, and layout to obtain the granular understanding of document information necessary to solve problems such as information extraction, semantic segmentation, layout analysis, table structure detection, etc. Katti et al. (2018) introduce a document representation that encodes the character-level textual information while preserving the 2D document layout. They train a fully convolutional encoder-decoder network that learns from this input representation to extract semantic information from invoices. For the similar task of information extraction from invoices, Zhao et al. (2019) propose a convolutional network that learns both semantic and structural information from scanned invoices by creating a gridded text representation that preserves the spatial relationships among the text tokens. Contrary to these approaches, Majumder et al. (2020) utilize the knowledge of the key fields to be extracted from a document to generate candidates and learn a dense representation for each candidate that also encodes information from its positional neighbors. For analyzing tables in scanned documents, Schreiber et al. (2017); Paliwal et al. (2019); Prasad et al. (2020) propose different modifications to standard CNN architectures, such as VGGNet Simonyan and Zisserman (2014) for classification and Faster R-CNN Ren et al. (2015) for object detection, to recognize tables and identify their structure. To segment key regions in scientific articles, Yang et al. (2017) take a pixel-wise semantic segmentation approach, using a multi-modal encoder-decoder network that takes as input both the text and image embeddings. To learn a generic representation supporting different tasks such as document image classification and document information extraction, Xu et al. (2020) propose to utilize the BERT transformer architecture to encode text as well as layout information to learn pre-trained embeddings, further utilizing image information when fine-tuning for a specific task. Most prior-art approaches utilize the multi-modal information of single-page documents, and extending their applicability to multi-page documents needs further exploration. Further, these approaches rely on limited labeled data; exploring self-supervised learning to leverage large unlabeled datasets also remains open. We attempt to address these limitations in this paper.

Figure 3: Results of Table Token Classification. Tokens predicted as “table” by our model are marked in green.


5 Conclusion and Future work

We present a multi-modal pre-training framework that utilizes multi-task learning to learn a generic document representation. Our framework encodes the visual, layout, and textual information and supports real-world multi-page documents. Our network is pre-trained on the publicly available Arxiv dataset utilizing self-supervised tasks that promote learning multi-modal shared representations. We fine-tune our pre-trained network to showcase state-of-the-art performance on different document tasks such as document classification, information extraction, and document retrieval. In the future, we will investigate pre-training on large datasets such as PubLayNet Zhong et al. (2019) to analyze the performance gain for different tasks and further explore new architecture designs that will enable document image tasks such as object detection/segmentation using our framework.

References

  1. Afzal et al. (2017). Cutting the error by half: investigation of very deep CNN and advanced training strategies for document image classification. In ICDAR.
  2. Arxiv (2020). ArXiv bulk data access. https://arxiv.org/help/bulk_data
  3. Beltagy et al. (2020). Longformer: the long-document transformer. arXiv.
  4. Blei et al. (2003). Latent Dirichlet allocation. JMLR.
  5. Cosma et al. (2020). Self-supervised representation learning on document images. arXiv.
  6. Dai et al. (2016). R-FCN: object detection via region-based fully convolutional networks. In NIPS.
  7. Das et al. (2018). Document image classification with intra-domain transfer learning and stacked generalization of deep convolutional neural networks. In ICPR.
  8. Dauphinee et al. (2019). Modular multimodal architecture for document classification. arXiv.
  9. Devlin et al. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. arXiv.
  10. DI (2019). Workshop on Document Intelligence at NeurIPS 2019. https://sites.google.com/view/di2019
  11. Gao et al. (2019). ICDAR 2019 competition on table detection and recognition (cTDaR). In ICDAR.
  12. Gomez et al. (2017). Self-supervised learning of visual features through embedding images into text topic spaces. In CVPR.
  13. Guillaume Jaume (2019). FUNSD: a dataset for form understanding in noisy scanned documents. In ICDAR-OST.
  14. Harley et al. (2015). Evaluation of deep convolutional nets for document image classification and retrieval. In ICDAR.
  15. He et al. (2016). Deep residual learning for image recognition. In CVPR.
  16. Katti et al. (2018). Chargrid: towards understanding 2D documents. In EMNLP.
  17. Lewis et al. (2006). Building a test collection for complex document information processing. In ACM SIGIR.
  18. Lin et al. (2017). Feature pyramid networks for object detection. In CVPR.
  19. Liu et al. (2019). RoBERTa: a robustly optimized BERT pretraining approach. arXiv.
  20. Loshchilov and Hutter (2018). Fixing weight decay regularization in Adam.
  21. Majumder et al. (2020). Representation learning for information extraction from form-like documents. In ACL.
  22. Manevitz and Yousef (2001). One-class SVMs for document classification. JMLR.
  23. Paliwal et al. (2019). TableNet: deep learning model for end-to-end table detection and tabular data extraction from scanned document images. In ICDAR.
  24. PdfPlumber (2020). Plumb a PDF for detailed information. https://github.com/jsvine/pdfplumber
  25. Prasad et al. (2020). CascadeTabNet: an approach for end-to-end table detection and structure recognition from image-based documents. In CVPR Workshops.
  26. PyOCR (2020). Python wrapper for Tesseract and Cuneiform. https://gitlab.gnome.org/World/OpenPaperwork/pyocr
  27. Ren et al. (2015). Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS.
  28. Ruder (2017). An overview of multi-task learning in deep neural networks. arXiv.
  29. Sarkhel and Nandi (2019). Deterministic routing between layout abstractions for multi-scale classification of visually rich documents. In IJCAI.
  30. Schreiber et al. (2017). DeepDeSRT: deep learning for detection and structure recognition of tables in document images. In ICDAR.
  31. Simonyan and Zisserman (2014). Very deep convolutional networks for large-scale image recognition. arXiv.
  32. Vaswani et al. (2017). Attention is all you need. In NIPS.
  33. Xu et al. (2020). LayoutLM: pre-training of text and layout for document image understanding. In ACM SIGKDD.
  34. Yang et al. (2017). Learning to extract semantic structure from documents using multimodal fully convolutional neural networks. In CVPR.
  35. Zhao et al. (2019). CUTIE: learning to understand documents with convolutional universal text information extractor. arXiv.
  36. Zhong et al. (2019). PubLayNet: largest dataset ever for document layout analysis. In ICDAR.