Sequence embeddings help to identify fraudulent cases in healthcare insurance


I. Fursov, A. Zaytsev, R. Khasyanov, M. Spindler, E. Burnaev — Skoltech, University of Hamburg
Abstract

Fraud causes substantial costs and losses for companies and clients in the finance and insurance industries. Examples are fraudulent credit card transactions or fraudulent claims. It has been estimated that roughly percent of the insurance industry’s incurred losses and loss adjustment expenses each year stem from fraudulent claims. The rise and proliferation of digitization in finance and insurance has led to big data sets, consisting in particular of text data, which can be used for fraud detection. In this paper we propose architectures for text embeddings via deep learning, which help to improve the detection of fraudulent claims compared to other machine learning methods. We illustrate our methods using a data set from a large international health insurance company. The empirical results show that our approach outperforms other state-of-the-art methods and can help make the claims management process more efficient. As (unstructured) text data become increasingly available to economists and econometricians, our proposed methods will be valuable for many similar applications, particularly when variables have a large number of categories, as is typical, for example, of the International Classification of Disease (ICD) codes in health economics and health services.

keywords:
embeddings, deep learning, fraud detection, structured data, health insurance, social media and text
journal: Journal of Econometrics

1 Introduction

Fraud causes substantial costs and losses for companies and clients in the finance and insurance industries. Examples include fraudulent credit card transactions or insurance claims. Indeed, it has been estimated that roughly percent of the insurance industry’s incurred losses and loss adjustment expenses each year stem from fraudulent claims (see https://www.iii.org/article/background-on-insurance-fraud). Hence fraud detection is a key function in these industries, core to the claims management process, and considered a key competence of insurance and finance companies.

The rise and proliferation of digitization in finance and insurance have led to big data sets, which can be exploited for fraud detection. In this paper, we propose architectures for text embeddings via deep learning, that help improve the detection of fraudulent claims compared to other machine learning methods. We illustrate our method using a data set from a large international health insurance company.

Analyzing fraud with statistical and machine learning methods poses special challenges. First, transaction data and claims data are often available only in a so-called unstructured format. Second, fraud data are highly unbalanced, meaning that the number of fraudulent cases is very small compared to the number of non-fraudulent ones. This fact influences the choice of the classification approach and performance measures. Third, claims do not have a fixed length because the number of items in an invoice varies. One approach to this might be to equalize the input length by filling with zeros, but doing so often leads to distorted results. It is well known that deep learning outperforms other machine learning methods for analyzing unstructured data comprising, for example, text or images. In this paper, we develop deep learning architectures that are tailored to claims data and can handle each of the challenges listed above when processing unstructured information.

Our analysis is based on doctor’s bills, which have an interesting structure that is common to many economic data sets, in particular those used to address microeconomic problems. Such bills usually consist of unstructured text and have the properties of text data, for example insofar as the number of claims/items varies from bill to bill. Unlike text data, however, the exact ordering of the claims in each bill is irrelevant. It is also typical of doctor’s bills that some variables are coded with many thousands of categories. In this paper, we develop methods for analyzing such (semi)unstructured data, that might also be useful for many other applications.

We test our methods on a data set from a health insurance company. Our empirical results show that these methods outperform other state-of-the-art methods in predicting fraudulent claims and help make the claims management process more efficient.

Plan of the paper: After the introduction (Section 1), we provide an overview of the literature and state-of-the-art methods (Section 2). In Section 3, we present models and methods for analyzing general text data and for analyzing the special structure of claims data to detect insurance fraud. Section 4 details our models, Section 5 describes our data set, and Section 6 presents the results of our analysis.

2 Overview of the literature

2.1 Anomaly detection

Detecting anomalies in data is one of the core problems in data analysis and has been investigated in recent years within diverse research areas and application domains, including time-series modeling (see Artemov et al. (2015), Ishimtsev et al. (2017)), predictive maintenance of technical systems (see Artemov and Burnaev (2016), Smolyakov et al. (2018)), and applications in the finance and insurance industries (see Chandola et al. (2009)).

Anomalies in data are important because they can translate to significant, and often critical, actionable information in a wide variety of application domains. For example, in credit card transactions, anomalies can indicate that unauthorized purchases have taken place as a result of credit card or identity theft, see Jurgovsky et al. (2018). Similarly, anomalies in health insurance claims can be indicative, for example, of deception or misrepresentation carried out to gain an inappropriate or unjustified health benefit, or of billing for services not rendered, see Kirlidog and Asuk (2012).

We also refer the reader to the reviews by Zhou et al. (2018) and Phua et al. (2010), which are dedicated to various fraud detection problems and machine-learning-based solutions to them.

2.2 Machine learning for healthcare and insurance

While a number of machine learning methods have been applied to problems in healthcare and insurance in recent years, deep learning and embeddings for fraud detection do not seem to have been covered by the literature until very recently. One of these few examples can be found in a recent study by Wang and Xu (2018), who focused on the detection of automobile insurance fraud.

They processed text descriptions of the accidents, extracting traditional text features manually and combining these with features extracted automatically using deep learning. While their model showed accuracy superior to that achieved using existing approaches, neither the precise architecture of the best model nor the approach to training and validating it is described clearly in their paper.

Another example can be seen in a recent article by Kim et al. (2019) who used hierarchical clustering based on deep neural networks to detect fraud in descriptions of candidates during job recruitment, significantly improving the accuracy of detection compared to conventional methods.

In order to predict instances of automobile insurance fraud, Balasubramanian (2019) used manually crafted features.

2.3 Construction of embeddings

Embeddings have been used in different application domains to solve anomaly detection and various other problems. For example, Chen et al. (2016) developed an approach to embed entities representing events in real computer systems into a common latent space. Each event involved heterogeneous types of attributes: time, user, source process, destination process, and so on. Hu et al. (2016) studied the problem of detecting structurally inconsistent nodes in graphs, for example to detect outlier authors in a network in which different authors are connected if they co-authored a paper.

However, it is embeddings constructed for applications related to natural language processing that are currently attracting the most attention. We focus here on papers with embeddings of simple entities, such as words. These include the classic TF-IDF approach, explained for instance by Rajaraman and Ullman (2011), and the recent and well-known word2vec by Mikolov et al. (2013) and GloVe by Pennington et al. (2014). The latter two methods take into account the co-occurrences of words, while TF-IDF is simply a normalized one-hot encoding for the dictionary of words at hand.

There are also a number of approaches for combining embeddings of simple entities. For example, we can construct an embedding of a text from the embeddings of the words within it. Simple heuristics include taking the maximum value along each dimension of the word embeddings or taking mean values, see Arora et al. (2016). More complex approaches are based on convolutional and recurrent neural networks, see Wang et al. (2016), Kiros et al. (2015), Arora et al. (2016).

2.4 Imbalanced classification problems

A skewed class distribution (imbalanced classes) is considered one of the most critical challenges in fraud detection. Generally speaking, there are far fewer fraudulent instances than normal ones. The resulting imbalance makes it difficult for learners to detect patterns in the minority-class data. There are currently three main approaches to learning from imbalanced data (Krawczyk (2016)):

  • Data-level methods that modify the data set to balance the class distribution and/or remove difficult observations;

  • Algorithm-level methods that directly modify existing learning algorithms to alleviate the bias towards majority objects and adapt them to mining data with skewed distributions;

  • Hybrid methods that combine the advantages of the two approaches above.

In the data-level approach, Duman et al. (2013) used under-sampling of the skewed class in a credit card fraud detection system, and Erofeev et al. (2015) assessed how the choice of the resampling multiplier influences classification accuracy. In the algorithm-level approach, Sahin et al. (2013) used cost-sensitive classifiers to address the class imbalance problem. In turn, Seeja and Zareapoor (2014) proposed the FraudMiner model, which handles class imbalance by feeding the unbalanced data directly into the classifier. More general-purpose approaches include over-sampling, see Chawla et al. (2002), combinations of over- and under-sampling, see Sáez et al. (2015), and meta-learning to automate the selection of imbalanced classification methods, see Smolyakov et al. (2019).
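As a concrete illustration of the data-level approach, the following minimal Python sketch implements random under-sampling of the majority class. The function name and `ratio` parameter are illustrative and do not correspond to any of the cited systems.

```python
import random

def undersample(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class examples (label 0) until their count
    is at most ratio * n_minority -- a common data-level remedy."""
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == 1]
    majority = [i for i, label in enumerate(y) if label == 0]
    keep = rng.sample(majority, min(len(majority), int(ratio * len(minority))))
    idx = sorted(minority + keep)
    return [X[i] for i in idx], [y[i] for i in idx]

X = list(range(100))
y = [1] * 5 + [0] * 95          # 5% minority class
Xb, yb = undersample(X, y)
print(sum(yb), len(yb))          # all 5 minority examples kept, 5/5 balance
```

Over-sampling methods such as SMOTE instead synthesize new minority examples; the choice between the two is usually made empirically.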

3 Methods

3.1 Learning of classical data-based models

The common scenario for supervised learning is the following: we have a sample of observations, each containing a description of an object (given by features) and values of target variables for that object. In disability insurance, for example, annual income, education, occupation, age and past medical records make up a description of a customer, and the target variable is a binary label signifying whether the customer’s claim is fraudulent (fraud) or justified (not fraud).

In the case of our data set from a large international health insurance company, the observations consist of doctor’s bills that include information about the treatments provided, their costs and dates, and the final amounts. The target variable is whether the bill was classified as fraud by the clerk handling the claim.

Thus, we can learn a model that predicts the target variables by taking features of a new object as its input. An example of a widely adopted model is a decision tree. In Figure 1, we provide an example of a decision tree for some input features.

Figure 1: A scheme of a decision tree for fraud detection in disability insurance: we start at the root node and move down, choosing directions according to the features of the object. At a leaf node we make a decision according to its label. The scheme does not represent a real model.
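The decision logic sketched in Figure 1 can be written as a small function. The feature names, thresholds, and leaf probabilities below are invented for illustration only and do not come from the real model.

```python
# Hypothetical hand-written decision tree mirroring the schematic in Figure 1.
def toy_fraud_tree(claim):
    """Return an illustrative fraud probability for a claim dict."""
    if claim["amount"] > 5000:           # internal node: billed amount
        if claim["n_treatments"] > 30:   # internal node: number of treatments
            return 0.8                   # leaf: high amount, many treatments
        return 0.4                       # leaf: high amount only
    return 0.05                          # leaf: typical claim

print(toy_fraud_tree({"amount": 6000, "n_treatments": 40}))  # 0.8
```

A learned tree chooses the split variables and thresholds automatically from the training sample rather than by hand.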

The power of machine learning is that we can learn a model that adapts to the given sample and also generalizes well to unseen data that are similar to the training sample. Machine learning methods make it possible to learn non-linear and complex relationships in data sets.

The common limitation of classic approaches is that they require the descriptions of objects to be in a restrictive format. Usually they use vectors of fixed, small length, which is not the case for many real-world objects such as texts or images with millions of pixels. In the case of insurance, the texts have different lengths and describe visits to a doctor with each patient having a different number of visits or doctor’s bills and the invoice listing a varying number of treatments.

Data scientists have devised various ways to manually generate features from complex but structured data such as images and texts. These manually generated features are used as input for classic machine learning models. In economic applications, manual feature generation is also widely used – for example, the variable “age” is often constructed from the variable “date of birth”. Such approaches yield results of reasonable but limited quality.

3.2 Deep learning revolution

The deep learning revolution changed the rules of the game in machine learning and data-based modeling, see Makridakis (2017). Now algorithms can learn representations, or embeddings, of object descriptions to generate features that are informative enough to provide accurate predictions while using rather simple machine learning models, such as fully-connected neural networks with only a few layers or decision trees. In summary, the strength of deep learning lies in feature extraction, that is, learning informative features from high-dimensional, unstructured, and complex input data. A schematic comparison of the classic approach, the classic approach with manually generated features, and the deep learning approach is presented in Figure 2.

Figure 2: Classical approaches cannot handle complex weakly structured descriptions of objects. In the past, features from structured descriptions of objects were produced manually. Now, deep learning approaches produce such features in an automatic way.

The three driving forces behind the deep learning revolution are the availability of new algorithms (e.g., the convolutional neural networks of LeCun et al. (2015) for image processing, recurrent neural nets for sequences, and word embeddings for texts), new hardware (graphical processing units, see Vasilache et al. (2014)), and vast samples of data (e.g., the ImageNet data set contains millions of labeled images, see Russakovsky et al. (2015)).

The most successful application of deep learning is in the field of image processing. However, solid advances in deep learning have also been achieved in speech recognition, see Amodei et al. (2016), in natural language processing (NLP), see Young et al. (2018), and for graph data, see Hamilton et al. (2017).

The key idea of deep learning is to apply a sequence of nonlinear transformations (layers of the neural network) of the object description to produce an informative embedding and use it as input to a final classifier.

3.3 Concept of embeddings

In this paper, we address the problem of representing healthcare insurance data using embeddings for the purpose of fraud detection. An embedding is a transformation of object descriptions into vectors that belong to the same low-dimensional space. These low-dimensional representations are constructed such that instances that are more alike have a smaller distance between them in the embedded space. For example, a good embedding provides vector representations of words in such a way that the relationship between two vectors mirrors the relationship between the two words. The popular word2vec model, which has proven its effectiveness in natural language processing tasks, constructs low-dimensional vectors of real numbers such that words appearing in similar contexts have similar vector representations.

In our case, we learn an embedding space constructed specifically for sequential data from healthcare insurance claims. Such a representation significantly helps to detect fraudulent patterns. Thus, embedding is, first, a general framework for dimension reduction and, second, an effective approach to extracting features that capture the intrinsic relations between objects.

To make these ideas clear, suppose that a text consists of words that come from a pool V of N different words (a so-called dictionary). One way to transform the text into numeric features would be to one-hot encode each word, so that each word in the text sequence is represented as an N-dimensional vector consisting of zeros except for a single one at the location corresponding to that word. This is a standard way of encoding categorical variables. The representation is not efficient, however, if the dictionary is large, which is a typical situation not only for general texts but also for healthcare data. In turn, embeddings represent the words from the dictionary by real-valued d-dimensional vectors, where d is much smaller than the dictionary size N. This allows a compressed representation of the input textual description, in which the entries of the embedded vector are usually all different from zero. The embedding of the dictionary into the vector space should also preserve relations between words. For example, a desirable property of word embeddings is that the difference in the vector space between the words “queen” and “king” should be similar to that between the words “woman” and “man”. Learning word embeddings with such properties makes them a very powerful tool for text analysis.
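The contrast between the two encodings can be sketched in a few lines of Python. The toy dictionary of treatment codes, the dimension d = 3, and the random embedding values are invented placeholders; in practice the embedding matrix is learned.

```python
import random

# Toy dictionary of N = 6 "words" (here, anonymized treatment codes).
dictionary = ["a01", "b17", "c03", "c04", "d20", "e11"]
N, d = len(dictionary), 3
index = {w: i for i, w in enumerate(dictionary)}

def one_hot(word):
    v = [0.0] * N              # N-dimensional: all zeros except one entry
    v[index[word]] = 1.0
    return v

rng = random.Random(0)
# Embedding matrix: one dense d-dimensional vector per dictionary word.
E = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(N)]

def embed(word):
    return E[index[word]]      # compressed, dense representation

print(len(one_hot("b17")), len(embed("b17")))   # 6 versus 3 dimensions
```

For a realistic dictionary (thousands of treatment codes) the gap between N and d is what makes the embedded representation tractable.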

3.4 Application of embeddings

In recent years, learning embeddings to represent complex relationships in data has become a common approach in the machine learning community. As a result, different types of embeddings have been used in many domains, such as natural language processing (NLP), network analysis, and computer vision.

Word embeddings, such as word2vec by Mikolov et al. (2013), GloVe by Pennington et al. (2014), AdaGram by Bartunov et al. (2016) and others, provide vector representations of words such that the relationship between two vectors mirrors some linguistic relationship between the two words. In supervised problems, word and sentence embeddings have proven effective in natural language processing tasks such as part-of-speech tagging, see Collobert et al. (2011), phrase-based machine translation, see Zou et al. (2013), named-entity recognition, see Ma and Hovy (2016), and word sense disambiguation, see Bartunov et al. (2016).

Graph and network embeddings attempt to capture local and global attributes on graphs, either based on engineered graph features or driven by training on graph data. Classical approaches to graph embeddings include feature-based methods such as graph kernels, see Vishwanathan et al. (2010), Haussler (1999), and data-driven algorithms that yield distributed graph representations, see Yanardag and Vishwanathan (2015), Niepert et al. (2016), Ivanov and Burnaev (2018). Using such embeddings we can solve various tasks related to network data analysis – for example Ivanov et al. (2018) used anonymous walk embeddings for graph influence set completion.

4 Model

We process data in a way similar to that shown in Figure 2. Thus, we need to specify which method we use to generate features from initial descriptions of objects, and how we train machine learning models that predict whether the treatment is fraudulent or not.

In the following subsections we provide details about each step. When using neural networks, feature generation and model training occur simultaneously, so we describe both of these steps in the related subsection.

4.1 Generation of features and embeddings

For gradient boosting, we used BoW (bag-of-words) and TF-IDF (term frequency-inverse document frequency) representations of the sequence of treatments. The idea behind BoW is to represent a text by counting the number of times each word w is used in a text d. This gives a vector of frequencies of the words in the dictionary. Because words like “a” and “the” show up many times but provide little information, a normalized version of BoW leads to TF-IDF. We get the TF-IDF representation of a text as the product of the term frequency and the inverse document frequency. The term frequency is the number of occurrences of the word w in the text d divided by the total number of words in d. The inverse document frequency is the logarithm of the total number of documents divided by the number of documents that contain the word w. Note that neither of these approaches takes the order of words in a text into account; neglecting the order can decrease performance in many problems.
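The TF-IDF construction above can be sketched in a few lines of Python. This is a minimal illustration of the formulas on toy data, not the library implementation used in the experiments.

```python
import math
from collections import Counter

def tf_idf(documents):
    """TF-IDF vectors (as dicts) for a list of tokenized documents."""
    n_docs = len(documents)
    df = Counter()                       # number of documents containing a word
    for doc in documents:
        df.update(set(doc))
    vectors = []
    for doc in documents:
        counts = Counter(doc)
        vectors.append({
            # term frequency * inverse document frequency
            w: (c / len(doc)) * math.log(n_docs / df[w])
            for w, c in counts.items()
        })
    return vectors

docs = [["t1", "t2", "t2"], ["t1", "t3"]]   # two toy bills of treatment codes
vecs = tf_idf(docs)
print(vecs[0]["t1"])   # 0.0: "t1" occurs in every document, so idf = log(1) = 0
```

Note how the ubiquitous code "t1" receives zero weight while the code "t2", specific to the first bill, receives a positive one.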

For approaches based on neural networks, we use the same vocabulary. Hence, the size of the input to the machine learning approach equals m + |V|, where m is the number of additional features and |V| is the vocabulary size.

The embedding matrix comes from the word2vec model available in the Gensim package, see Řehůřek and Sojka (2010), with the hyperparameters set to their default values except for the embedding dimension. For each word of the dictionary, the embedding matrix contains in its columns the corresponding representation in the embedded vector space.

4.2 Model training

4.2.1 Gradient boosting

In machine learning, the most common models for classification aside from neural networks are ensembles of decision trees, see Fernández-Delgado et al. (2014). In each separate decision tree, a given object passes through the tree according to the values of its input variables at each node until it reaches a leaf; at the leaf, the classifier returns class probabilities. In an ensemble, we use a weighted sum of these basic decision tree classifiers.

We adopt a gradient boosting algorithm to construct ensembles of decision trees (see Chen and Guestrin (2016)) as an easy-to-use approach that provides state-of-the-art performance in many problems. The algorithm has the following main hyperparameters: the number of trees in the ensemble, the maximum depth of each tree, the share of features used in each tree, the share of samples used for training of each tree, and the learning rate.

Ensembles of decision trees are fast to construct, largely avoid over-fitting, successfully handle missing values and outliers, and provide competitive performance, see Fernández-Delgado et al. (2014). One of the many advantages of gradient boosting is its ability to solve imbalanced classification problems and to easily incorporate various imbalanced classification heuristics, see Kozlovskaia and Zaytsev (2017).

4.2.2 Deep learning approach

Our main deep learning approach uses simple word-embedding-based models (SWEMs) from Shen et al. (2018), which show strong performance in many natural language processing tasks. Below, we describe each layer of these models in more detail.

  1. Treatment Embedding Layer maps each treatment (a single item of the doctor’s bill) to a vector space using a trainable embedding matrix of size |V| × d, where V is the vocabulary set, here the set of all billable treatments, and d is the embedding dimension. We pre-trained this layer using word2vec and used its weights to initialize the embedding matrix. Each treatment in an input sequence (doctor’s bill) of length n is embedded into a vector of size d, so for each record (bill) we obtain an n × d treatment-feature matrix. Another option for producing embeddings is to use recurrent neural architectures such as LSTM or GRU.

  2. Aggregation Layer aggregates the sequence of embeddings by taking the element-wise average or the maximum along each dimension over the treatment vectors, or by concatenating the results of both. This layer combines the information about all treatments into a single vector.

  3. Several multilayer perceptron (MLP) layers with ReLU activations follow. Each layer learns features from the sequence of treatments at a different level of granularity. We expect the model to pay attention to the features that indicate possible fraud.

  4. Extra tower takes additional features into account. We include an extra tower with fully-connected layers over the meta-features (gender, age, insurance type, etc.). The outputs of the treatment tower and the feature tower are concatenated before passing through two fully-connected layers with ReLU activations.

  5. Output layer is an MLP layer followed by a softmax function that produces class probabilities.
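The aggregation layer (step 2 above) can be sketched in plain Python; a deep learning framework performs the same reductions as tensor operations over a batch. The toy bill below, with three treatments and embedding dimension d = 2, is invented for illustration.

```python
def aggregate(embeddings, mode="max"):
    """Reduce a variable-length list of d-dimensional embedding vectors
    to one fixed-size vector, as in the SWEM aggregation layer."""
    dims = range(len(embeddings[0]))
    if mode == "mean":
        return [sum(e[i] for e in embeddings) / len(embeddings) for i in dims]
    if mode == "max":
        return [max(e[i] for e in embeddings) for i in dims]
    if mode == "concat":                 # mean and max pooling stacked together
        return aggregate(embeddings, "mean") + aggregate(embeddings, "max")
    raise ValueError(mode)

bill = [[0.1, 0.9], [0.5, 0.3], [0.3, 0.6]]   # three treatments, d = 2
print(aggregate(bill, "max"))                 # [0.5, 0.9]
print(len(aggregate(bill, "concat")))         # 4 = 2 * d
```

Whatever the number of treatments on a bill, the output size is fixed, which is exactly what the downstream MLP layers require.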

Training of the model

The neural network models are trained for three epochs using the Adam optimization algorithm, minimizing a standard cross-entropy loss. We used a batch size of and a learning rate of . We initialized weights randomly from a Gaussian distribution with zero mean and .

5 Data description

5.1 Overview

An insurance company provided a data set consisting of claims from outpatient care. The data set comprises doctor’s bills with million items in total. Each data point is a sequence of treatments encoded with anonymized IDs.

There are input features in total. In the data, we have two types of features for each patient: general and visit-specific. General static features include age, sex, insurance type, and doctor’s speciality, and refer to the patient, the insurance, and the doctor in general. Visit-specific features describe each outpatient visit of a patient. They include: the type of each treatment, the number of times each treatment was performed, the cost of each treatment, a factor (which multiplies the amount of a treatment because of potential complications), the total amount of money charged, the billing type, the cost category, and the performance type. The type of treatment is coded using one of more than two thousand categories.

The number of treatments on a bill varies greatly among patients. Figure 3 provides a histogram of the number of treatments/items in the claims data. The distribution of treatments is nonuniform: most patients have only a small number of treatments per outpatient case.

Figure 3: Histogram of the number of billing items for patients. The right orange bar represents the number of patients with or more. Most of the patients have fewer than treatments / items, and the most frequent number of billing items is .

For each record we have a label, either “fraudulent” or “non-fraudulent”, with fraudulent records corresponding to various fraudulent activities. Here “fraudulent” simply means that the final amount of the bill was corrected, which can happen for different reasons. About of the records are fraudulent. The problem is to identify whether a record corresponds to fraudulent activity based on the given input features.

5.2 Treatments

The goal of our work is to determine whether information from treatment labels can help to identify fraud automatically. As the number of items/treatments varies across patients, we must aggregate the information about all treatments into one vector: we want to construct an embedding of all treatments into a vector of fixed dimension. An approach to dealing with varying input size has been proposed by Farbmacher et al. (2019).

The natural way to construct embeddings is to use methods rooted in natural language processing, as a doctor’s bill is represented by a corresponding series of treatments. Each anonymized treatment belongs to an alphabet of fixed size. Treatments are summarized into upper-level groups. Another categorical alphabet is the kind of benefit, and the data set also contains several categories for cost type.

Moreover, the distribution of treatments with respect to their rank (e.g., the most frequent treatment has rank one) is close to what is known in natural language processing as the empirical Zipf’s law, see Montemurro (2001), i.e., the frequency of any treatment is roughly inversely proportional to its rank in the frequency table. Figure 4 demonstrates this behavior for our data set as a log-log plot. However, we see a heavier tail, with rarer treatments having higher frequencies than would be expected from Zipf’s law. This means that there are fewer very rare treatments than in natural language texts.
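The rank-frequency relationship can be checked with a few lines of Python on a toy sequence of treatment codes (the codes and counts below are invented). Under Zipf's law, frequency times rank is roughly constant, so the log-log rank/frequency plot is close to a straight line.

```python
from collections import Counter

# Invented toy counts chosen to be approximately Zipfian (f ~ 40 / rank).
treatments = ["t1"] * 40 + ["t2"] * 20 + ["t3"] * 13 + ["t4"] * 10
freqs = sorted(Counter(treatments).values(), reverse=True)
products = [f * (rank + 1) for rank, f in enumerate(freqs)]
print(products)   # [40, 40, 39, 40] -- approximately constant
```

For the real data, the same computation over the full treatment alphabet produces the curve shown in Figure 4.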

Figure 4: Log-log plot of ranks and corresponding frequencies of treatments, groups of treatments, and benefits in the data set. The plot deviates significantly from the straight line expected under Zipf’s law.

Also note that no specific treatment correlates with fraud: if we measure the correlation between the presence of a specific treatment and the target variable, the maximum absolute value of the correlation is very small. We therefore need to apply more sophisticated machine learning approaches to identify fraudulent treatments.

6 Results

6.1 Metrics

There are many metrics used to evaluate classification models. Because we have an imbalanced classification problem, the main goal is to detect the minority class with high precision. Thus, for fraud detection it is very important to find all actual positive events: when all instances of the minority class are correctly predicted, the solution is usually considered excellent.

Below we consider common metrics for measuring the quality of solutions to imbalanced classification problems.

                     Actual Positive        Actual Negative
Predicted Positive   True Positive (TP)     False Positive (FP)
Predicted Negative   False Negative (FN)    True Negative (TN)
Table 1: Confusion Matrix for Binary Classification

The confusion matrix is the basis of all metrics. From Table 1, commonly used metrics can be generated to estimate the performance of a classifier with different focuses of evaluation, such as the area under ROC curve (ROC AUC) and area under PR curve (PR AUC). These metrics are based on simpler metrics, such as recall (true positive rate), precision, and false positive rate. For this we introduce some notation.

  1. Recall (true positive rate) = TP / (TP + FN) is the percentage of positive instances correctly classified. When this metric is equal to 1, all fraud cases have been identified.

  2. Precision = TP / (TP + FP) is the percentage of true positives among positive predictions. A high value for precision means a good understanding of fraud behavior.

  3. False Positive Rate = FP / (FP + TN) is the percentage of negative instances misclassified as positive.

The F1 score is defined as F1 = 2 · Precision · Recall / (Precision + Recall) and lies in the interval [0, 1]. A higher F1 score is preferred.
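These definitions can be sketched directly in Python, computed from binary labels on a small invented example:

```python
def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = fraud)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 0, 0]   # three fraud cases out of eight
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]   # two found, one missed, one false alarm
print(prf1(y_true, y_pred))          # precision = recall = F1 = 2/3
```

On strongly imbalanced data, these quantities are far more informative than plain accuracy, which would already be high for a classifier that predicts "non-fraudulent" everywhere.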

The common ROC curve shows how well both classes are classified: the quality on one class is reflected by a high TPR, and the quality on the other class by a low FPR. A good prediction therefore strikes a balance between these values.

The PR curve characterizes how well the minority (fraud) class is classified. We want to maximize the number of true positives among both predicted positives and actual positives. This curve is more appropriate for imbalanced data sets because it reflects the model’s ability to distinguish fraudulent behavior from common behavior. Thus, the PR AUC metric is more focused on the minority class and, as a result, has an advantage over the other metrics: it reflects the prediction quality on the most important class of the problem.
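ROC AUC itself has a convenient probabilistic reading: it equals the probability that a randomly chosen fraud case is scored higher than a randomly chosen non-fraud case (the Mann-Whitney U statistic). A minimal sketch of this pairwise computation follows; practical implementations sort the scores instead of using a quadratic loop.

```python
def roc_auc(y_true, scores):
    """ROC AUC via the pairwise (Mann-Whitney) formulation; ties count 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0, 0]                  # toy labels
s = [0.9, 0.4, 0.5, 0.2, 0.1]        # toy model scores
print(roc_auc(y, s))   # 5 of 6 fraud/non-fraud pairs ranked correctly
```

A value of 0.5 corresponds to random ranking and 1.0 to a perfect separation of the two classes.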

6.2 Validation procedure

To evaluate the performance of the models, we use data splitting. We randomly split the dataset: part of the bills is used during training, and the remaining bills are used for testing the constructed models. Because the problem is imbalanced, we split the data into training and test sets in a stratified way: the class ratios in the training and test samples coincide with those in the initial sample.
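A stratified split as described above can be sketched as follows. This is a minimal stand-in for library splitters, with an illustrative 20% test share.

```python
import random

def stratified_split(y, test_share=0.2, seed=0):
    """Return (train, test) index lists that preserve the class ratio."""
    rng = random.Random(seed)
    train, test = [], []
    for label in set(y):                           # split each class separately
        idx = [i for i, t in enumerate(y) if t == label]
        rng.shuffle(idx)
        cut = int(round(test_share * len(idx)))
        test += idx[:cut]
        train += idx[cut:]
    return sorted(train), sorted(test)

y = [1] * 10 + [0] * 90                            # 10% fraud, as a toy example
train, test = stratified_split(y)
print(sum(y[i] for i in test), len(test))          # 2 fraud cases out of 20
```

Without stratification, a small random test set could by chance contain almost no fraud cases, making the evaluation metrics unstable.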

6.3 Results

6.3.1 Usefulness of treatment features

We have two types of features: general static features and visit-specific features. In this section, we generate TF-IDF features from the visit-specific features and compare three different sets of features: only general, only visit-specific, and both. The results are given in Table 2. They clearly indicate that using all available features leads to the most accurate predictions, with the highest ROC AUC and PR AUC values.

Features ROC AUC PR AUC
General
Visit-specific
Both
Table 2: Quality of the model for three different sets of features: using both general and visit-specific features improves the quality of the model

6.3.2 Overall performance of models

We fix the gradient boosting hyperparameters (the number of trees and the maximum depth) and the embedding dimension for SWEM. We estimate the performance metrics using 10-fold cross-validation, with 90% of the data used for training and 10% used for testing each time.

The main results across the models are given in Table 3. Embedding-based models outperform the gradient boosting classifier on the same data. For SWEM, the best aggregation strategy is max-pooling. Adding general features improves the quality of both the gradient-boosting-based and the neural-network-based models.

Model Static ROC AUC PR AUC
features
Gradient boosting (BoW) without
Gradient boosting (TF-IDF) without
Gradient boosting (BoW) with
Gradient boosting (TF-IDF) with
SWEM-mean without
SWEM-concat without
SWEM-max without
SWEM-mean with
SWEM-concat with
SWEM-max with
Table 3: Quality of the models under 10-fold cross-validation: mean values, with standard deviations given after the ± sign. SWEM models with different aggregation strategies (mean, concat, and max), which train problem-specific embeddings, perform better than gradient boosting with various types of features. Adding general static features improves the quality of the neural network model. The embedding dimension for SWEM is the same as above.
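The SWEM aggregation strategies compared in Table 3 (Shen et al., 2018) reduce to simple pooling of treatment embeddings into one patient-level vector. The numpy sketch below uses random embeddings as stand-ins for the trained ones; the vocabulary size and dimension are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d = 50, 8                     # d: embedding dimension
emb = rng.normal(size=(vocab_size, d))    # one row per treatment code

seq = [3, 17, 17, 42]                     # a patient's sequence of code indices
E = emb[seq]                              # (len(seq), d) matrix of embeddings

swem_mean = E.mean(axis=0)                           # SWEM-mean
swem_max = E.max(axis=0)                             # SWEM-max (best in our experiments)
swem_concat = np.concatenate([swem_mean, swem_max])  # SWEM-concat

print(swem_mean.shape, swem_max.shape, swem_concat.shape)  # (8,) (8,) (16,)
```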

6.4 Dependence of quality of models on sample size

We examine how the proportion of the data used for training affects the quality of the final model. The model was trained on increasing percentages of the initial training sample, selected at random. The results are given in Figure 5. Both PR AUC and ROC AUC still increase as we increase the proportion of training data passed to the model.

(a) Dependence of ROC AUC on training data size
(b) Dependence of PR AUC on training data size
Figure 5: Dependence of model quality on the training sample size. Increasing the proportion of training data leads to a further increase in the quality of the fraud detection model. Results are provided for the SWEM-max-based model
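The learning-curve experiment can be sketched as follows, with a toy logistic model and synthetic data standing in for our classifier and claims data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + rng.normal(size=2000) > 1.5).astype(int)   # label driven by feature 0
X_tr, y_tr, X_te, y_te = X[:1600], y[:1600], X[1600:], y[1600:]

# Retrain on growing random subsets of the training data, evaluate on a fixed test set
for frac in (0.2, 0.4, 0.6, 0.8, 1.0):
    n = int(frac * len(X_tr))
    idx = rng.choice(len(X_tr), size=n, replace=False)
    model = LogisticRegression().fit(X_tr[idx], y_tr[idx])
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{frac:.0%} of training data -> ROC AUC {auc:.3f}")
```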

6.4.1 Selection of NN architecture hyperparameters

We undertake cross-validation to test how the selection of hyperparameters affects the performance of the model. In particular we consider: inclusion and exclusion of general features in our neural network model, use of different encoders (see the description of the Embedding Layer in Section 4.2.2) in the architecture of the neural network, and different types of aggregation of treatment embeddings to get an embedding, characterizing a particular patient. The results are given in Figure 6.

(a) Dependence of ROC AUC on a strategy for aggregating embeddings of treatments to an embedding of a patient
(b) Dependence of ROC AUC on a strategy for encoding embeddings
(c) Dependence of PR AUC on a strategy for aggregating embeddings of treatments to an embedding of a patient
(d) Dependence of PR AUC on a strategy to construct encodings for embeddings
Figure 6: Dependence of the model quality on the model architecture: selection of the right encoding strategy and the right aggregation strategy lead to an improvement in the quality of the model

6.4.2 Selection of embedding dimension

To select the embedding dimension, we try models with different embedding dimensions. Figure 7 demonstrates that the quality of the model gradually rises as the embedding dimension increases when we train the embeddings along with the model parameters. Our hypothesis is that direct use of natural language processing approaches loses some information in the mapping from the initial feature space to the embedded space, resulting in poorer model performance. Increasing the embedding dimension therefore leads to higher accuracy, as less information is lost.

(a) Dependence of ROC AUC on dimension of embedding
(b) Dependence of PR AUC on dimension of embedding
Figure 7: Dependence of model quality on the embedding dimension: increasing the dimension leads to better model quality. To estimate the optimal dimension we also use the state-of-the-art method of Yin and Shen (2018); the dimensions it suggests yield lower model quality. The scale of the plots is not uniform.

6.4.3 Application of resampling

As we are dealing with fraud detection data, the number of legitimate cases is significantly higher than the number of fraudulent cases. A number of approaches to imbalanced classification are described in Section 2.4. Most of them rely on resampling techniques, which change the balance of classes in the training sample, for example by dropping some majority-class objects or giving more weight to minority-class objects.

In Table 4 we show how imbalanced-classification approaches can improve the overall quality of the model. All applied resampling approaches yield an improvement over the baseline. The best techniques for the problem at hand are under-sampling (InstanceHardnessThreshold) and a combination of over- and under-sampling (SMOTEENN); see Lemaître et al. (2017).

Resampling technique ROC AUC PR AUC
No resampling (baseline)
Over (SMOTE)
Over (ADASYN)
Under (RepeatedEditedNN)
Under (InstanceHardnessThreshold)
Over- and under-sampling (SMOTEENN)
Table 4: Improvement of the gradient boosting model using resampling techniques for the imbalanced classification problem at hand: over- and under-sampling techniques are considered
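As a minimal illustration of the simplest of these ideas, random under-sampling of the majority class can be written in a few lines of plain numpy. (In our experiments we used the imbalanced-learn implementations cited above; the function and data below are invented for illustration.)

```python
import numpy as np

def undersample(X, y, ratio=1.0, seed=0):
    """Keep all minority samples and `ratio` times as many majority samples."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    keep = rng.choice(majority, size=int(ratio * len(minority)), replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

X = np.arange(40).reshape(20, 2)
y = np.array([1] * 2 + [0] * 18)   # heavy class imbalance: 2 frauds, 18 legitimate
X_bal, y_bal = undersample(X, y)
print(np.bincount(y_bal))          # [2 2] -- balanced classes after resampling
```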

6.4.4 Reliability of models

Reliability and resistance to malicious attacks are significant issues for machine learning models. This problem is particularly important in fraud detection: if a malicious user of a decision system can feed slightly distorted data to the system and trick it, then the system is of limited use; see Papernot et al. (2017).

Usually one considers two questions: “is the model robust with respect to random errors in the data submitted to the system?” (Saltelli et al. (2004)), and “is the model robust to malicious efforts, when someone tries to break the system by corrupting its input in a targeted way?” (Papernot et al. (2017)).

In our case we test reliability with respect to the history of treatments: can we change it slightly and thereby obtain an entirely different outcome from the model? To do so, we compare the quality of the model before and after the corruption of the data.

We test these issues in two ways:

  • To test the reliability of the model, we randomly add a treatment to the end of each customer's sequence of treatments. This disturbance changes the features passed to the model and thus the labels it produces.

  • To test the model’s resistance to malicious attacks, we add treatments one by one to fraudulent cases and maliciously select the single-treatment addition that affects the model output the most.

We use the machine learning model that was constructed using TF-IDF features for treatments and general features.
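A minimal sketch of the malicious selection step: greedily pick the single treatment whose addition moves the model's fraud score the most. The scoring function below is a toy stand-in for the trained classifier, not the paper's model.

```python
import numpy as np

def score(seq):
    """Toy fraud score: here, simply the share of code 7 in the sequence."""
    return (np.asarray(seq) == 7).mean()

def worst_case_addition(seq, vocab):
    """Append the single treatment that lowers the fraud score the most."""
    best = min(vocab, key=lambda code: score(seq + [code]))
    return seq + [best], score(seq + [best])

seq = [7, 7, 3]
new_seq, new_score = worst_case_addition(seq, vocab=range(10))
print(score(seq), "->", new_score)   # the added code dilutes the pattern: 2/3 -> 1/2
```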

The ROC and PR curves before and after the corruption of the inputs are shown in Figure 8 for the TF-IDF and gradient boosting approach and in Figure 9 for the SWEM-max approach. Changing the inputs to the model leads to a drop in its quality, especially for the SWEM model. To make the model more stable, one should augment the training data with more cases and possible distortions of the initial data (so-called data augmentation) and keep the model secret to hinder malicious attacks.

Figure 8: Comparison of ROC and PR AUC curves before and after corruption of inputs of the “TF-IDF + Gradient Boosting” model for the test data. Random addition of a treatment results in a small drop in the quality. Malicious attack results in a significant drop in the quality of the model.
Figure 9: Comparison of ROC and PR AUC curves before and after corruption of inputs of the SWEM-based model for the test data. Random addition of a treatment results in a small drop in the quality. Malicious attack results in a significant drop in the quality of the model.

7 Conclusion

In this paper we propose deep learning architectures that are tailor-made for claims data and provide embeddings for treatment codes and classifications of diseases. Insurance fraud is one of the main threats to insurance and other financial companies. Using a real-world data set, we show that our proposed methods based on embeddings for unstructured data outperform standard methods and have the potential to improve the claims management process. Although doctor’s bills have a text format, claims data and treatment codes differ from “traditional” texts: treatment codes are more uniformly distributed among patients, and the optimal embedding dimension appears to be higher than what is typically proposed for texts. By designing a tailor-made architecture for treatment embeddings we outperform standard models. Our empirical experiments show that the model can be further improved by optimizing the neural network architecture and by increasing the volume of data used for training. Moreover, our model is robust, to some extent, to disturbances of the data and to adversarial and malicious changes.

As digitization continues to proliferate, increasing amounts of unstructured data in the form of texts will become available, including electronic health records and claims data, personnel files, and financial statements. Often these data have a special structure and in particular contain variables with a large number of categories that cannot be handled by classical econometric methods. The deep learning architectures and embeddings we propose in this paper are tailor-made for data such as these and can be useful to researchers in health economics, organizational economics, finance, and many other fields.

References

  • Artemov et al. (2015) A. Artemov, E. Burnaev, A. Lokot, Nonparametric decomposition of quasi-periodic time series for change-point detection, in: Proc. SPIE, volume 9875, 2015, pp. 9875 – 9875 – 5. doi:10.1117/12.2228370.
  • Ishimtsev et al. (2017) V. Ishimtsev, A. Bernstein, E. Burnaev, I. Nazarov, Conformal k-nn anomaly detector for univariate data streams, in: A. Gammerman, V. Vovk, Z. Luo, H. Papadopoulos (Eds.), Proceedings of the Sixth Workshop on Conformal and Probabilistic Prediction and Applications, volume 60 of Proceedings of Machine Learning Research, PMLR, Stockholm, Sweden, 2017, pp. 213–227.
  • Artemov and Burnaev (2016) A. Artemov, E. Burnaev, Detecting performance degradation of software-intensive systems in the presence of trends and long-range dependence, in: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), 2016, pp. 29–36. doi:10.1109/ICDMW.2016.0013.
  • Smolyakov et al. (2018) D. Smolyakov, N. Sviridenko, E. Burikov, E. Burnaev, Anomaly pattern recognition with privileged information for sensor fault detection, in: L. Pancioni, F. Schwenker, E. Trentin (Eds.), Artificial Neural Networks in Pattern Recognition, Springer, 2018, pp. 320–332.
  • Chandola et al. (2009) V. Chandola, A. Banerjee, V. Kumar, Anomaly detection: A survey, ACM Comput. Surv. 41 (2009) 15:1–15:58. doi:10.1145/1541880.1541882.
  • Jurgovsky et al. (2018) J. Jurgovsky, M. Granitzer, K. Ziegler, S. Calabretto, P.-E. Portier, L. He, O. Caelen, Sequence classification for credit-card fraud detection, Expert Systems with Applications 100 (2018). doi:10.1016/j.eswa.2018.01.037.
  • Kirlidog and Asuk (2012) M. Kirlidog, C. Asuk, A fraud detection approach with data mining in health insurance, Procedia - Social and Behavioral Sciences 62 (2012) 989 – 994. World Conference on Business, Economics and Management (BEM-2012), May 4–6 2012, Antalya, Turkey.
  • Zhou et al. (2018) X. Zhou, S. Cheng, M. Zhu, C. Guo, S. Zhou, P. Xu, Z. Xue, W. Zhang, A state of the art survey of data mining-based fraud detection and credit scoring, in: MATEC Web of Conferences, volume 189, EDP Sciences, 2018, p. 03002.
  • Phua et al. (2010) C. Phua, V. Lee, K. Smith, R. Gayler, A comprehensive survey of data mining-based fraud detection research, arXiv preprint arXiv:1009.6119 (2010).
  • Wang and Xu (2018) Y. Wang, W. Xu, Leveraging deep learning with lda-based text analytics to detect automobile insurance fraud, Decision Support Systems 105 (2018) 87–95.
  • Kim et al. (2019) J. Kim, H.-J. Kim, H. Kim, Fraud detection for job placement using hierarchical clusters-based deep neural networks, Applied Intelligence (2019) 1–20.
  • Balasubramanian (2019) M. V. Balasubramanian, Ensemble modeling & prediction interpretability for insurance fraud claims classification, Ph.D. thesis, Dublin Business School, 2019.
  • Chen et al. (2016) T. Chen, L.-A. Tang, Y. Sun, Z. Chen, K. Zhang, Entity embedding-based anomaly detection for heterogeneous categorical events, 2016. arXiv:1608.07502.
  • Hu et al. (2016) R. Hu, C. C. Aggarwal, S. Ma, J. Huai, An embedding approach to anomaly detection, 2016 IEEE 32nd International Conference on Data Engineering (ICDE) (2016) 385–396.
  • Rajaraman and Ullman (2011) A. Rajaraman, J. D. Ullman, Mining of massive datasets, Cambridge University Press, 2011.
  • Mikolov et al. (2013) T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, Distributed representations of words and phrases and their compositionality, in: Advances in neural information processing systems, 2013, pp. 3111–3119.
  • Pennington et al. (2014) J. Pennington, R. Socher, C. Manning, Glove: Global vectors for word representation, in: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543.
  • Arora et al. (2016) S. Arora, Y. Liang, T. Ma, A simple but tough-to-beat baseline for sentence embeddings, ICLR (2016).
  • Wang et al. (2016) Y. Wang, H. Huang, C. Feng, Q. Zhou, J. Gu, X. Gao, Cse: Conceptual sentence embeddings based on attention model, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, 2016, pp. 505–515.
  • Kiros et al. (2015) R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, S. Fidler, Skip-thought vectors, in: Advances in neural information processing systems, 2015, pp. 3294–3302.
  • Krawczyk (2016) B. Krawczyk, Learning from imbalanced data: open challenges and future directions, Progress in Artificial Intelligence 5 (2016) 221–232. URL: https://doi.org/10.1007/s13748-016-0094-0. doi:10.1007/s13748-016-0094-0.
  • Duman et al. (2013) E. Duman, A. Buyukkaya, I. Elikucuk, A novel and successful credit card fraud detection system implemented in a turkish bank, in: 2013 IEEE 13th International Conference on Data Mining Workshops, IEEE, 2013, pp. 162–171.
  • Erofeev et al. (2015) P. Erofeev, E. Burnaev, A. Papanov, Influence of resampling on accuracy of imbalanced classification, in: Proc. SPIE, volume 9875, 2015, pp. 9875–9875–5. doi:10.1117/12.2228523.
  • Sahin et al. (2013) Y. Sahin, S. Bulkan, E. Duman, A cost-sensitive decision tree approach for fraud detection, Expert Syst. Appl. 40 (2013) 5916–5923.
  • Seeja and Zareapoor (2014) K. Seeja, M. Zareapoor, Fraudminer: A novel credit card fraud detection model based on frequent itemset mining, TheScientificWorldJournal 2014 (2014) 252797. doi:10.1155/2014/252797.
  • Chawla et al. (2002) N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer, Smote: synthetic minority over-sampling technique, Journal of artificial intelligence research 16 (2002) 321–357.
  • Sáez et al. (2015) J. A. Sáez, J. Luengo, J. Stefanowski, F. Herrera, Smote–ipf: Addressing the noisy and borderline examples problem in imbalanced classification by a re-sampling method with filtering, Information Sciences 291 (2015) 184–203.
  • Smolyakov et al. (2019) D. Smolyakov, A. Korotin, P. Erofeev, A. Papanov, E. Burnaev, Meta-learning for resampling recommendation systems, in: Proc. SPIE 11041, Eleventh International Conference on Machine Vision (ICMV 2018), 110411S (15 March 2019), 2019.
  • Makridakis (2017) S. Makridakis, The forthcoming artificial intelligence (ai) revolution: Its impact on society and firms, Futures 90 (2017) 46–60.
  • LeCun et al. (2015) Y. LeCun, Y. Bengio, G. Hinton, Deep learning, nature 521 (2015) 436.
  • Vasilache et al. (2014) N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Piantino, Y. LeCun, Fast convolutional nets with fbfft: A gpu performance evaluation, arXiv preprint arXiv:1412.7580 (2014).
  • Russakovsky et al. (2015) O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, ImageNet Large Scale Visual Recognition Challenge, International Journal of Computer Vision (IJCV) 115 (2015) 211–252. doi:10.1007/s11263-015-0816-y.
  • Amodei et al. (2016) D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, et al., Deep speech 2: End-to-end speech recognition in english and mandarin, in: International conference on machine learning, 2016, pp. 173–182.
  • Young et al. (2018) T. Young, D. Hazarika, S. Poria, E. Cambria, Recent trends in deep learning based natural language processing, ieee Computational intelligenCe magazine 13 (2018) 55–75.
  • Hamilton et al. (2017) W. L. Hamilton, R. Ying, J. Leskovec, Representation learning on graphs: Methods and applications, arXiv preprint arXiv:1709.05584 (2017).
  • Mikolov et al. (2013) T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781 (2013).
  • Bartunov et al. (2016) S. Bartunov, D. Kondrashkin, A. Osokin, D. Vetrov, Breaking sticks and ambiguities with adaptive skip-gram, in: Artificial Intelligence and Statistics, 2016, pp. 130–138.
  • Collobert et al. (2011) R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, P. Kuksa, Natural language processing (almost) from scratch, Journal of Machine Learning Research 12 (2011) 2493–2537.
  • Zou et al. (2013) W. Y. Zou, R. Socher, D. Cer, C. D. Manning, Bilingual word embeddings for phrase-based machine translation, in: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 1393–1398.
  • Ma and Hovy (2016) X. Ma, E. Hovy, End-to-end sequence labeling via bi-directional lstm-cnns-crf, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016, p. 1064–1074.
  • Vishwanathan et al. (2010) S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, K. M. Borgwardt, Graph kernels, J. Mach. Learn. Res. 11 (2010) 1201–1242.
  • Haussler (1999) D. Haussler, Convolution Kernels on Discrete Structures, Technical Report, University of California at Santa Cruz, 1999.
  • Yanardag and Vishwanathan (2015) P. Yanardag, S. V. N. Vishwanathan, Deep graph kernels, in: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, August 10-13, 2015, 2015, pp. 1365–1374.
  • Niepert et al. (2016) M. Niepert, M. Ahmed, K. Kutzkov, Learning convolutional neural networks for graphs, in: Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, 2016, pp. 2014–2023.
  • Ivanov and Burnaev (2018) S. Ivanov, E. Burnaev, Anonymous walk embeddings, in: J. Dy, A. Krause (Eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, PMLR, 2018, pp. 2186–2195.
  • Ivanov et al. (2018) S. Ivanov, N. Durasov, E. Burnaev, Learning node embeddings for influence set completion, in: Proc. of IEEE International Conference on Data Mining Workshops (ICDMW), 2018, pp. 1034–1037.
  • Řehůřek and Sojka (2010) R. Řehůřek, P. Sojka, Software Framework for Topic Modelling with Large Corpora, in: Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, ELRA, Valletta, Malta, 2010, pp. 45–50. http://is.muni.cz/publication/884893/en.
  • Fernández-Delgado et al. (2014) M. Fernández-Delgado, E. Cernadas, S. Barro, D. Amorim, Do we need hundreds of classifiers to solve real world classification problems, J. Mach. Learn. Res 15 (2014) 3133–3181.
  • Chen and Guestrin (2016) T. Chen, C. Guestrin, XGBoost: A scalable tree boosting system, in: Proc. of the 22nd ACM SIGKDD int. conf. on knowledge discovery and data mining, ACM, 2016, pp. 785–794.
  • Kozlovskaia and Zaytsev (2017) N. Kozlovskaia, A. Zaytsev, Deep ensembles for imbalanced classification, in: Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on, 2017, pp. 908–913.
  • Shen et al. (2018) D. Shen, G. Wang, W. Wang, M. R. Min, Q. Su, Y. Zhang, C. Li, R. Henao, L. Carin, Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms, CoRR abs/1805.09843 (2018). URL: http://arxiv.org/abs/1805.09843. arXiv:1805.09843.
  • Farbmacher et al. (2019) H. Farbmacher, L. Loew, M. Spindler, An Explainable Attention Network for Fraud Detection in Claims Management, Technical Report, University of Hamburg, 2019.
  • Montemurro (2001) M. A. Montemurro, Beyond the zipf–mandelbrot law in quantitative linguistics, Physica A: Statistical Mechanics and its Applications 300 (2001) 567–578.
  • Yin and Shen (2018) Z. Yin, Y. Shen, On the dimensionality of word embedding, in: Advances in Neural Information Processing Systems, 2018, pp. 887–898.
  • Lemaître et al. (2017) G. Lemaître, F. Nogueira, C. K. Aridas, Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning, Journal of Machine Learning Research 18 (2017) 1–5. URL: http://jmlr.org/papers/v18/16-365.
  • Papernot et al. (2017) N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, A. Swami, Practical black-box attacks against machine learning, in: Proceedings of the 2017 ACM on Asia conference on computer and communications security, ACM, 2017, pp. 506–519.
  • Saltelli et al. (2004) A. Saltelli, S. Tarantola, F. Campolongo, M. Ratto, Sensitivity analysis in practice: a guide to assessing scientific models, Chichester, England (2004).