Chinese Named Entity Recognition Augmented with Lexicon Memory

Abstract

Inspired by the concept of content-addressable retrieval from cognitive science, we propose a novel fragment-based model augmented with a lexicon-based memory for Chinese NER, in which character-level and word-level features are combined to generate better feature representations for possible name candidates. It is observed that locating the boundaries of entity names is useful for classifying them into pre-defined categories. Position-dependent features, including prefix and suffix, are introduced for NER in the form of distributed representations. The lexicon-based memory is used to help generate such position-dependent features and deal with the problem of out-of-vocabulary words. Experimental results showed that the proposed model, called LEMON, achieved state-of-the-art performance on four datasets.


Introduction

Named Entity Recognition (NER) aims to locate and classify elements in sentences into pre-defined categories such as person names, organizations, locations, etc. NER systems have been developed using linguistic rule-based techniques or statistical models. Rule-based systems identify names by applying linguistic grammar rules governing the derivation of names [35], while statistical models identify names based on the distribution of their components in a larger corpus [43, 33]. Recently, neural networks have been applied to NER [7], including recurrent neural networks (RNNs) [19, 22] and encoder-decoder architectures [18, 5]. There are two reasons for the success of neural networks: on the one hand, they can memorize cases seen during training; on the other hand, they can generalize to unseen cases [49]. However, these models still suffer from two problems: ambiguous word boundaries and out-of-vocabulary words.

Ambiguity of word boundaries: Traditional approaches to Chinese NER can be divided into two paradigms: character-based and word-based models. Character-based models are not effective enough due to the lack of explicit word information [15, 26], while word-based models suffer from error propagation, since word segmentation provides rather significant information about the boundaries of named entities. Zhang and Yang [53] proposed a lattice-based model that encodes a sequence of characters as well as every potential word that matches a lexicon [24]. However, the important boundary features (prefix and suffix) of each name candidate might be blurred, because all possible segmentations are considered while only a few of them are feasible, which possibly introduces unnecessary noise. Named entities often take the form of a fragment (a sequence of contiguous words) rather than a single character or word [45], which indicates that fragment-based models deserve further exploration.

Out-of-vocabulary words: If word-level information can be harnessed in the form of embeddings, the adverse effect of unknown words can be much alleviated by leveraging a large unlabeled text corpus to learn word embeddings. As shown in Figure 1, “Microsoft” would be classified with a higher probability into the correct category (namely organization) because its embedding is close to those of “Google”, “Amazon”, and so on in the embedding space. A similar regularity also applies to location entities such as “Rome”, “Tokyo” and “Beijing”. However, for names of less well-known persons or organizations, such as “司马懿” (Sima Yi) and “天美工作室” (Timi Studio), which cannot be found in the vocabulary, syntactic features may help. Most Chinese person names start with a common surname followed by one or two characters. Organization names usually begin with the name of a city or country, and end with one of a few words like “公司” (company), “大学” (university), “医院” (hospital), etc.

We propose a fragment-based approach to address the above problems, which combines information at different levels of granularity. Position-dependent features, including prefix, suffix and infix, deserve further investigation in the case of distributed representations. However, since not all fragments in a sentence are common words or phrases, we filter out the rare ones with the assistance of a lexicon. It has proven fruitful to incorporate a lexicon (an external dictionary) for NER [19, 6], although such word-level features are usually added by string matching in a rigid, discrete manner. Constructing a lexicon by collecting information such as surname lists and geographical dictionaries in a hand-crafted way is time-consuming, so it is worth exploring the possibility of deriving such features automatically from a large word corpus.

Figure 1: Cases in which position-dependent features benefit the identification of named entities. In cases (1) and (2), the semantic information of the words is enough for recognition, while in cases (3) and (4), where the words may be out-of-vocabulary (e.g., names of persons or organizations that are not well known), syntactic information (prefix/suffix) will help.

The fragment-based approach conforms to the way humans recognize names. Given a fragment, a person's attention is drawn towards the contents most relevant to her memory, which can be regarded as content-addressable retrieval, a concept that artificial intelligence has borrowed from cognitive science [13]. From the viewpoint of cognitive systems, the biological brain does not learn by a single, global optimization principle [17], but is modular and composed of distinct subsystems, such as memory and control, which interact with each other [1, 36].

Inspired by these findings from cognitive science, we propose a fragment-based model for Chinese NER augmented with a lexicon-based memory, called LEMON (LExicon-MemOry-augmented-Ner). The model consists of three submodules: a character encoder that imitates the process of scanning each character in an input sentence to grasp the global semantics, a fragment encoder that simulates the procedure of reading a sub-sequence (such as a word or fragment) in a sentence, and a memory that stores a massive number of words seen before. A ranking algorithm is used to determine whether a fragment is a valid name and which category it belongs to by taking its prefix, suffix, and infix features into account. Experimental results showed that the proposed model achieved state-of-the-art results on four different benchmark datasets.

Related Work

Local Detection

Xu, Jiang, and Watcharawittayakul [45] first presented a local detection approach for mention detection and name classification. Their model uses a fixed-size ordinally-forgetting encoding (FOFE) to represent all fragments in the context [51]. Our model differs from theirs in that we adopt a character encoder to establish connections between a fragment and its context to provide global context features. Besides, position-dependent features are introduced for each candidate name via the lexicon-based memory.

Attention Mechanism and Memory Network

The attention mechanism was first proposed for machine translation [2, 28]; it learns an alignment between the source and target languages by estimating their correlation scores. It has also been applied to NER in several ways: integrating character-level information by attending to characters [34], capturing global context information by attending to different sentences in a document [44], and adopting an adaptive co-attention between texts and pictures [50]. Memory networks were first introduced for question answering [42, 12]; this study is among the first to incorporate word-level features via memory networks for NER.

Model

We present the architecture of the proposed model in this section. As shown in Figure 2, LEMON is mainly composed of three parts: a character encoder which maps each character into its feature vector, a fragment encoder which encodes any variable-length sub-sequence of an input sentence into a fixed-size vector representation, and a lexicon memory which is designed to help disambiguate word boundaries and deal with the out-of-vocabulary problem by providing external syntactic and semantic features for possible words occurring in any fragment.

Figure 2: The LEMON model. The letter “E” denotes an exact match, “P” a prefix match, and “S” a suffix match.

Character Encoder

Given a sentence c_1 c_2 ... c_n, each character c_i is first mapped into its feature vector x_i. The information derived from the results of word segmentation and part-of-speech (POS) tagging has proven to be useful for NER tasks [32, 47], and thus we augment the character representation with its soft-word and part-of-speech information. As shown in Figure 3, the BMES scheme is used to represent the results of word segmentation [46]. Each character is also assigned the same POS tag as the word to which it belongs. The feature vector of each character is obtained by concatenating the feature vectors of the three parts:

x_i = [E_c(c_i) ; E_s(s_i) ; E_p(p_i)]    (1)

where E_c, E_s and E_p are three look-up tables, and c_i, s_i and p_i are the indices of the character, its soft-word label and its POS tag, respectively. The character encoder is then used to obtain the context-aware representation of each character in the given sentence:

h_1, h_2, ..., h_n = CharEncoder(x_1, x_2, ..., x_n)    (2)

where h_i ∈ R^{d_c}, and d_c is the dimensionality of the context-aware character representation. Several networks can be adopted as the character encoder, such as a bi-directional LSTM, which is known for its strength in modelling long-distance dependencies [11, 16], or a transformer, which was first proposed for machine translation [39] to capture the dependencies between words at any distance in a sentence and has been gaining much attention recently.
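
To make this concrete, the following sketch (our own, with illustrative layer sizes and names; not the authors' released code) builds the concatenated character representation of Eq. (1) and contextualizes it with a bi-directional LSTM as in Eq. (2):

import torch
import torch.nn as nn

class CharacterEncoder(nn.Module):
    # A minimal sketch: character, soft-word (BMES) and POS embeddings are
    # concatenated (Eq. 1) and contextualized by a Bi-LSTM (Eq. 2).
    # All vocabulary and hidden sizes here are illustrative assumptions.
    def __init__(self, n_chars, n_segs=4, n_pos=30,
                 d_char=100, d_seg=30, d_pos=30, d_hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, d_char)   # E_c
        self.seg_emb = nn.Embedding(n_segs, d_seg)      # E_s (B/M/E/S labels)
        self.pos_emb = nn.Embedding(n_pos, d_pos)       # E_p (POS tags)
        self.rnn = nn.LSTM(d_char + d_seg + d_pos, d_hidden,
                           batch_first=True, bidirectional=True)

    def forward(self, chars, segs, pos):
        # chars, segs, pos: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.char_emb(chars),
                       self.seg_emb(segs),
                       self.pos_emb(pos)], dim=-1)       # Eq. (1)
        h, _ = self.rnn(x)                               # Eq. (2): (batch, seq_len, 2*d_hidden)
        return h

A transformer encoder could be substituted for the LSTM without changing this interface.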

Figure 3: Three different types of features used for each character. For word segmentation, each character is assigned one of four possible boundary tags: “B” for a character located at the beginning of a word, “M” for a character inside a word, “E” for a character at the end of a word, and “S” for a character that is a word by itself.

Fragment Encoder

The fragment encoder is used to produce a feature vector for each n-gram in a sentence. Given the sequence of context-aware character representations H = (h_1, ..., h_n) with h_i ∈ R^{d_c}, where d_c is the dimensionality of the context-aware character representation, the fragment encoder learns to map any sub-matrix of H to a fixed-size vector z_{i,j} ∈ R^{d_f}, where d_f is the dimensionality of the fragment embedding:

z_{i,j} = FragEncoder(h_i, h_{i+1}, ..., h_j)    (3)

where (i, j) denotes a candidate fragment spanning from character c_i to c_j.

Assuming that the maximum length of named entities is L, for a sentence consisting of n characters, the number of all possible fragments is about nL. The complexity of encoding all these fragments independently is O(nL^2), which is rather time-consuming. However, an inherent recursive structure helps to reduce the complexity: the produced representations of shorter fragments can be reused to generate those of longer ones, so all the fragments can be enumerated in O(nL) time.

There are several methods that can be chosen as the fragment encoder. Xu, Jiang, and Watcharawittayakul [45] employ a fixed-size ordinally-forgetting encoding (FOFE) as such an encoder, which incorporates a forgetting factor to reflect position information [51]. The bag-of-words method that simply averages the representations of words or characters can also be used as a baseline encoder [20, 8].
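
To illustrate how the recursion keeps the enumeration at O(nL), here is a FOFE-style sketch (ours; the forgetting-factor value and tensor shapes are assumptions) that extends each fragment by one character and reuses the shorter fragment's code:

import torch

def fofe_fragments(h, alpha=0.5, max_len=10):
    # h: (seq_len, d) context-aware character representations from the character encoder.
    # Returns a dict mapping (i, j) -> FOFE code of the fragment h[i..j] (inclusive),
    # computed incrementally as z(i, j) = alpha * z(i, j-1) + h[j], so each of the
    # O(n * max_len) fragments costs O(1) extra work.
    seq_len, d = h.shape
    codes = {}
    for i in range(seq_len):
        z = torch.zeros(d)
        for j in range(i, min(i + max_len, seq_len)):
            z = alpha * z + h[j]          # extend the fragment by one character
            codes[(i, j)] = z.clone()
    return codes

# usage: codes = fofe_fragments(torch.randn(20, 256), alpha=0.5, max_len=10)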

Lexicon Memory

Lexicon Construction

The lexicon used in this study is not just a gazetteer (i.e., a vocabulary consisting of known named entities): it contains all the possible words extracted from a dataset, which allows us to leverage large-scale unlabeled data to obtain rich features about words. Like [53], the lexicon is obtained by automatically segmenting the Chinese Giga-Word dataset1 and collecting the words. After that, the embeddings of the words in the lexicon are learned by word2vec [29]. Due to the ambiguity of Chinese word segmentation, the lexicon may also contain several smaller parts of a word, which reflects different levels of granularity. For example, the named entity “财政部” (Ministry of Finance) can exist along with “财政” (public finance), “财” (finance), “政” (administration), and “部” (ministry). The words at the finer levels of granularity, such as “部” (ministry), can provide finer word-formation features for NER.
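
As a rough illustration of how such a lexicon could be built, the following sketch assumes the gensim library (4.x API) and a hypothetical file of pre-segmented text; it is not the authors' pipeline:

from gensim.models import Word2Vec

# corpus.seg.txt (hypothetical path): one whitespace-segmented sentence per line,
# e.g. produced by running an automatic segmenter over Chinese Giga-Word.
sentences = [line.split() for line in open("corpus.seg.txt", encoding="utf-8")]

# Collect every distinct word as a lexicon entry and learn its embedding.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, sg=1)
lexicon = set(model.wv.index_to_key)   # words that received pretrained embeddings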

Lexicon Matching Modes

Given a fragment spanning characters c_i ... c_j, we perform pattern matching on it over the constructed lexicon. We define four types of matching modes as follows:

  • Exact matching: If there exists a word in the lexicon that is exactly the same as the fragment, the word can be directly used to replace this fragment.

  • k-prefix matching: If the first k characters of a fragment match a word in the lexicon, we call it a k-prefix matching; for example, the fragment “司马懿” (Sima Yi) matches the word “司马” (Sima) in the 2-prefix matching mode. Such matching patterns provide informative features for identifying named entities whose prefixes are usually chosen from a limited number of words, such as commonly-used Chinese surnames like “上官” (Shangguan) and “司马” (Sima).

  • k-suffix matching: Similarly, if the last k characters of a fragment match a word in the lexicon, we call it a k-suffix matching. Such matching patterns are quite useful for recognizing entities whose names end with one of a few words; for example, many locations and organizations share similar suffixes, such as “省” (Province) and “部” (Ministry).

  • Infix matching: If a word can be found in the middle of a fragment, it is an infix matching. Its role is slightly different from that of the above modes: such a match serves as a hint that the fragment might contain a nested structure.

Since the first (or last) one and two characters are relatively more important for NER, the results of different matching modes are grouped into multiple buckets according to their importance. We define LEMON-K such that, for each distinct k, the feature derived from k-prefix (or k-suffix) matching is placed into a separate bucket if k ≤ K, while the remaining features (k > K) are grouped into a single bucket. The value K is a hyper-parameter.
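
The matching and bucketing procedure could be sketched as follows (our own rendering; the bucket layout, the "I" tag for infix matches, and the helper name are assumptions, not the authors' implementation):

def match_fragment(fragment, lexicon, K=2):
    # Group lexicon matches of a fragment into LEMON-K buckets:
    # exact matches, one bucket per k-prefix / k-suffix match with k <= K,
    # a shared bucket for k > K, and one bucket for infix matches.
    n = len(fragment)
    buckets = {"exact": [], "infix": [], "prefix->K": [], "suffix->K": []}
    for k in range(1, K + 1):
        buckets[f"prefix-{k}"] = []
        buckets[f"suffix-{k}"] = []

    if fragment in lexicon:
        buckets["exact"].append((fragment, "E"))
    for k in range(1, n):                          # proper prefixes and suffixes
        pre, suf = fragment[:k], fragment[n - k:]
        if pre in lexicon:
            buckets[f"prefix-{k}" if k <= K else "prefix->K"].append((pre, "P"))
        if suf in lexicon:
            buckets[f"suffix-{k}" if k <= K else "suffix->K"].append((suf, "S"))
    for i in range(1, n - 1):                      # words strictly inside the fragment
        for j in range(i + 1, n):
            if fragment[i:j] in lexicon:
                buckets["infix"].append((fragment[i:j], "I"))
    return buckets

# usage: match_fragment("天美工作室", {"天美", "工作室", "工作"}, K=2)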

Attention over Lexicon Memory

Memory networks provide a feasible way to extract relevant features from a lexicon-based memory with content-addressable retrieval [42, 37]. Given a matching instance (w, t), where w denotes a matched word in the lexicon and t denotes one of the matching modes, the two parts are mapped into two feature vectors and concatenated as a memory unit:

m = [E_w(w) ; E_t(t)]    (4)

where E_w ∈ R^{|V| × d_w}, |V| is the size of the lexicon and d_w is the dimensionality of its vector space; E_t ∈ R^{|T| × d_t}, |T| is the number of matching modes and d_t is the dimensionality of the feature vectors used to represent the different matching modes.

For a fragment (i, j), we first find all its matched words, then group them into multiple buckets in the way introduced in Section Lexicon Matching Modes, and finally assemble them into a matrix M ∈ R^{(d_w + d_t) × N}, which is a lexicon memory dynamically built for the fragment, where N is the number of matches over the lexicon.

Given a fragment representation z_{i,j} and its corresponding lexicon memory M, a scaled bi-linear attention is performed for z_{i,j} over M as follows [28, 39]:

a = softmax( z_{i,j}^T W M / sqrt(d_w + d_t) ),    o = M a^T    (5)

where W is a learned parameter matrix, a is the vector of attention weights over the memory units, and o is the retrieved lexicon feature.
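
A minimal PyTorch sketch of this scaled bi-linear attention (the batch layout and the choice of scaling dimensionality are our assumptions):

import math
import torch
import torch.nn as nn

class LexiconAttention(nn.Module):
    # Scaled bi-linear attention of a fragment vector over its lexicon memory (Eq. 5).
    def __init__(self, d_frag, d_mem):
        super().__init__()
        self.W = nn.Linear(d_frag, d_mem, bias=False)   # the learned bi-linear matrix W

    def forward(self, z, M):
        # z: (batch, d_frag) fragment representation
        # M: (batch, n_match, d_mem) memory units built for the fragment
        scores = torch.bmm(M, self.W(z).unsqueeze(-1)).squeeze(-1) / math.sqrt(M.size(-1))
        a = torch.softmax(scores, dim=-1)               # attention weights over matches
        o = torch.bmm(a.unsqueeze(1), M).squeeze(1)     # weighted sum of memory units
        return o, a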

Classification and Decoding

Training Objective

For a fragment, its representation z and the result o of the attention over the lexicon memory are concatenated to produce the final representation. This representation is then fed into a multi-layer feed-forward neural network to predict the labels of entities. If a fragment does not belong to any entity, it is labelled as “NONE”. We choose the recently proposed focal loss as the training objective to mitigate the sample-imbalance problem [25]:

FL(p_t) = -α_y (1 - p_t)^γ log(p_t)    (6)

where p_t denotes the probability assigned to the true label, α_y is a weight for the true label y that is tuned during the training process, and γ is a hyper-parameter that governs the relative importance of the positive samples versus the negative ones. If α_y is set to 1 and γ to 0, the focal loss reduces to the cross-entropy loss.
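
A sketch of a multi-class focal loss in PyTorch, treating the per-class weight as a learnable parameter as the description above suggests (this is our reading, not necessarily the exact formulation used in the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    # Multi-class focal loss (Eq. 6): FL(p_t) = -alpha_y * (1 - p_t)^gamma * log(p_t).
    def __init__(self, n_classes, gamma=2.0):
        super().__init__()
        self.gamma = gamma
        # One weight per class; with alpha = 1 and gamma = 0 this is plain cross-entropy.
        self.alpha = nn.Parameter(torch.ones(n_classes))

    def forward(self, logits, targets):
        # logits: (batch, n_classes), targets: (batch,) with class indices
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log p_t
        pt = log_pt.exp()
        alpha_y = self.alpha[targets]
        return -(alpha_y * (1 - pt) ** self.gamma * log_pt).mean()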

Decoding Strategy

A decoding layer is stacked on top of the entity detector to resolve the issue that occasionally several overlapping fragments might all be recognized as valid entities [45]:

  • A probability threshold δ is used to filter the results. A fragment is identified as an entity if the model assigns the highest probability to an entity type and this probability is greater than δ; otherwise it is labelled as “NONE”.

  • If a recognized entity contains another candidate (nested) entity, only the outer entity is retained for further processing.

  • If two identified entities overlap each other, only the one with higher probability is kept.

We found that such a decoding strategy works well although it runs in a greedy way. The strategy can also be used to recognize nested entities simply by removing the second step.
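
The three steps could be rendered as the following greedy procedure (our own sketch; the threshold value is illustrative):

def decode(candidates, threshold=0.5):
    # candidates: list of (start, end, label, prob); end is inclusive and label is
    # the highest-probability class predicted for the fragment.
    # Step 1: keep fragments whose best class is an entity type with prob > threshold.
    kept = [c for c in candidates if c[2] != "NONE" and c[3] > threshold]
    # Step 2: if a kept entity is nested inside another, keep only the outer one.
    kept = [c for c in kept
            if not any(c is not o and o[0] <= c[0] and c[1] <= o[1] for o in kept)]
    # Step 3: resolve remaining (partial) overlaps greedily by probability.
    kept.sort(key=lambda c: -c[3])
    result = []
    for s, e, label, p in kept:
        if not any(s <= e2 and e >= s2 for s2, e2, _, _ in result):
            result.append((s, e, label, p))
    return result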

Dataset #Train #Dev #Test Domain
OntoNotes-4 15.7k 4.3k 4.3k News
MSRA 46.4k - 4.4k News
Weibo 1.4k 0.27k 0.27k Social Media
Resume 3.8k 0.46k 0.48k Resume
Table 1: Statistics of datasets
fragment \ Character Baseline Transformer Bi-RNN
P (%) R (%) F1 (%) P (%) R (%) F1 (%) P (%) R (%) F1 (%)

Gold

BOW 72.40 62.03 66.81 - - - 73.60 69.08 71.27
FOFE 75.52 64.86 69.78 64.35 54.04 58.74 76.93 70.43 73.54
Bi-RNN 73.68 69.74 71.66 59.92 54.87 57.28 71.51 73.66 72.57

BOW + Lex 78.77 70.40 74.35 (+7.54) 76.73 73.48 75.07 (+8.26) 78.27 75.34 76.78 (+5.51)
FOFE + Lex 77.33 71.90 74.52 (+4.74) 79.92 72.65 76.11 (+17.37) 79.49 73.77 76.53 (+2.99)
Bi-RNN + Lex 77.40 74.39 75.87 (+4.21) 79.62 73.87 76.64 (+19.36) 81.12 75.18 78.04 (+5.47)

Auto

BOW 76.67 56.24 64.88 - - - 75.39 61.36 66.92
FOFE 71.66 58.21 64.24 73.17 61.75 66.98 76.20 61.65 68.16
Bi-RNN 74.60 63.67 68.70 72.05 63.52 67.52 76.73 63.70 69.61

BOW + Lex 76.33 64.75 70.06 (+5.18) 73.96 64.69 69.02 (+4.14) 78.42 67.06 72.30 (+5.38)
FOFE + Lex 77.24 63.91 69.95 (+5.71) 78.46 62.93 69.85 (+2.87) 76.24 68.76 72.31 (+4.15)
Bi-RNN + Lex 77.62 66.32 71.53 (+2.83) 76.79 67.09 71.61 (+4.09) 76.57 69.54 72.89 (+3.28)
  • The heading “Gold” denotes that gold segmentation and part-of-speech tags are used, while “Auto” denotes that they are automatically generated by the THULAC toolkit.

Table 2: Results on OntoNotes-4 development set with different model architectures.
Features \ Data Ground truth Automatically labelled
P (%) R (%) F1 (%) P (%) R (%) F1 (%)
NCRF char 66.37 60.21 63.14 - - -
char + seg 70.58 69.96 70.27 70.77 63.33 66.85
char + pos 71.81 74.48 73.12 70.20 70.26 70.23
char + seg + pos 75.63 72.35 73.08 72.88 68.18 70.45
No Lex char 67.6 55.03 60.67 - - -
char + seg 72.16 66.09 68.99 70.48 62.65 66.33
char + pos 74.39 65.44 69.63 72.87 63.73 67.99
char + seg + pos 74.97 72.23 73.58 76.29 64.43 69.86
Lex char 77.27 60.73 68.01 - - -
char + seg 78.40 70.75 74.38 76.11 64.63 69.91
char + pos 77.71 72.35 74.93 77.46 66.89 71.79
char + seg + pos 78.70 74.95 76.78 76.41 68.61 72.30
Table 3: Results on the OntoNotes-4 development set with different features

Experiments

Experiments Settings

Datasets

We evaluated our model on four different datasets: OntoNotes-4 [41], MSRA [23], Weibo NER [32, 31], and Resume [53]. The statistics of the four datasets are given in Table 1. As mentioned in Section Character Encoder, each character needs to be assigned a soft-word label as well as a POS tag. All the datasets were segmented and tagged with the THULAC toolkit [38], which achieves a reasonably high word-segmentation F1-score on these datasets. For the OntoNotes-4 dataset, gold segmentation and part-of-speech tags are available, and we report the NER results both with and without them.

Training Details

The proposed model was implemented with the PyTorch deep learning framework [30]. We pretrained the word embeddings and character embeddings on Chinese Giga-Word with word2vec, and tuned all the hyper-parameters on the development set of the OntoNotes-4 dataset. The word embeddings and character embeddings share the same dimensionality, as do the soft-word and POS-tag embeddings. Dropout was applied to the embedding layer of the character encoder. All learned parameters are updated by the Adam optimizer [21]; it is worth mentioning that we use a sparse version of the Adam optimizer2 to update the learned embedding parameters. The learning rate and the weight decay were tuned on the development set as well3.
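
The two-optimizer setup could look like the following sketch (the tiny model and the learning rates are illustrative; only the use of torch.optim.SparseAdam for sparse embedding gradients is taken from the footnote):

import torch
import torch.nn as nn

# Embeddings emit sparse gradients and are updated with SparseAdam, while all
# remaining dense parameters are updated with the regular Adam optimizer.
model = nn.ModuleDict({
    "emb": nn.Embedding(10000, 100, sparse=True),   # sparse gradients
    "out": nn.Linear(100, 5),
})
sparse_params = list(model["emb"].parameters())
dense_params = list(model["out"].parameters())

sparse_opt = torch.optim.SparseAdam(sparse_params, lr=1e-3)
dense_opt = torch.optim.Adam(dense_params, lr=1e-3, weight_decay=1e-8)
# In each training step, call backward() once and then step both optimizers.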

Experiments on OntoNotes-4

We carried out a set of preliminary experiments on the development set of OntoNotes-4 to optimize the architecture by trying a few different components, and to gain some understanding of how the choice of features impacts performance.

Evaluation with Different Architectures

We tried several combinations of different character and fragment encoders to find a suitable configuration for NER. Three different types of networks were tested as the character encoder, and we also tried three different architectures for the fragment encoder. An embedding look-up layer serves as a baseline for the character encoder; besides, two popular sequence models, a transformer (6-layer, 8-head, 512-dim) and a bi-directional LSTM (2-layer, 256-dim), are compared. As to the fragment encoder, we conducted experiments with Bag-of-Words (BOW), FOFE and a bi-directional LSTM. Xu, Jiang, and Watcharawittayakul [45] used an embedding look-up layer as the character encoder and FOFE as the fragment encoder, and they predict the type of a candidate n-gram with the help of its left and right contexts. We tried to integrate such context information for NER as they did, but the results of preliminary experiments showed that its contribution to performance is negligible.

The results of different combinations on the development set of OntoNotes-4 are shown in Table 2. The performance of every model decreases by several points in F1-score if the word segmentation and POS tags automatically generated by the THULAC toolkit are used instead of the ground truth. This shows that NER performance is significantly influenced by the results of the upstream tasks through error propagation.

Character Encoder: The Bi-RNN always outperforms the other character encoders due to its ability to model long-term dependencies. The transformer contributes only a little and performs slightly better than the baseline, although it has achieved great success in machine translation. One reasonable explanation is that the number of training sentences is not sufficient to fit the model capacity of the transformer [9].

Fragment Encoder: The Bi-RNN surpasses the other encoders, especially when the character encoder is not built on a Bi-RNN. BOW performs worse than the others since it is unable to model the order information of a sequence, which is critical for entity recognition. FOFE learns to produce a linear combination of the representations of words in a sub-sequence, which is less flexible than the Bi-RNN in sequence modeling since the latter is capable of learning non-linear combinations.

Lexicon Memory: The incorporation of the lexicon memory greatly boosts the results of every combination of components, with an average increase of several points in F1-score (see the gains in parentheses in Table 2). This can be taken as strong evidence that the introduced lexicon memory enhances the model's performance in NER.

Feature Combinations

The significance of different features is shown in Table 3. We also trained an LSTM-CRF model as a traditional approach for comparison using NCRF++, an open-source neural sequence labeling toolkit [48]. The experimental results demonstrate that the features derived from word segmentation and POS tagging consistently benefit all the models, no matter whether they are labeled by humans or produced by an automatic toolkit.

LEMON still beats the LSTM-CRF-based model by a clear margin in F1-score (68.01 vs. 63.14 in Table 3) without using any word segmentation or part-of-speech information, which shows that the introduced lexicon memory provides valuable position-dependent and word-level features via the attention mechanism.

Results

Model P (%) R (%) F1 (%)
[4] 91.22 81.71 86.20
[52] 92.20 90.08 91.18
[27] - - 87.94
[10] 91.28 90.62 90.95
[53] 93.57 92.79 93.18
LEMON 95.39 91.77 93.55
Table 4: Results on the MSRA dataset
Model P (%) R (%) F1 (%)
word 93.72 93.44 93.58
word+char+bichar 94.07 94.42 94.24
char 93.66 93.31 93.48
char+bichar+softword 94.53 94.29 94.41
[53] 94.81 94.11 94.46
LEMON 95.59 94.07 94.82
  • The marked models are those in which the LSTM + CRF sequence labeling technique is used. Results are extracted from [53].

Table 5: Results on the Resume NER dataset
Model P (%) R (%) F1 (%)
[40] 76.43 72.32 74.32
[3] 77.71 72.51 75.02
[47] 72.98 80.15 76.40
LEMON 79.27 78.29 78.78
[53] 76.35 71.56 73.88
LEMON 80.61 71.05 75.53
  • For the marked model, gold word segmentation is used.

Table 6: Results on the OntoNotes-4 dataset
Model P (%) R (%) F1 (%)
[32] - - 58.99
[14] - - 58.23
[53] - - 58.79
LEMON 70.86 55.42 62.19
Table 7: Results on the Weibo NER dataset

LEMON-2 achieved state-of-the-art results on all four datasets. As shown in Tables 4 and 5, LEMON performs slightly better than the Lattice LSTM on the MSRA and Resume NER datasets. Our model also achieved the highest F1-score on OntoNotes-4 (see Table 6). Note that the Weibo NER data is extracted from social media: it is full of non-standard expressions and contains fewer than 2k samples in total (see Table 1). The problems of out-of-vocabulary words and ambiguous word boundaries become more serious for NER on this dataset. However, LEMON still outperforms the other models by a fairly significant margin (an increase of more than 3 points in F1-score), as we can see in Table 7.

Figure 4: An example heat map. Note that the attentions are sharp for those words particularly useful for NER.
Figure 5: F1-score versus training epochs.
Figure 6: Best F1-score with regards to decoding thresholds under different values of gamma, the horizontal line represents the results without the decoding step.

Discussion

We conducted experiments on the Weibo NER dataset to study the influence of attention over lexicon memory, and how the choice of values of the thresholds and focal loss coefficients impact upon the performance.

Attention over Lexicon Memory

Figure 4 illustrates which words are given higher weights by the attention operation over the lexicon memory. As shown in the heat map, the model learns to assign more weight to the key words of named entities, and the attention is sharp for those words particularly informative for NER.

Taking the entities of type “ORG” (organization) as examples, more weight is placed on the last two characters, such as “中心” (center), “政府” (government), “学校” (school), “组织” (organization), etc. This is in accordance with the intuition that the last characters are more important in identifying Chinese names of organizations. We found a similar phenomenon when recognizing person names. For instance, famous names such as “江泽民” (Zemin Jiang) can be matched exactly and recognized as person names, while for names of less well-known persons, the first character (i.e., the surname) tends to be given more attention.

Decoding Threshold Settings

We report the F1-scores for different settings of LEMON on the development set of Weibo NER in Figure 6. LEMON-2 generally performs better than LEMON-0 and LEMON-1, since the features derived from 1- and 2-prefix (and suffix) matching are useful for NER and should not be mixed into a single bucket, as described in Section Lexicon Matching Modes.

We found that the best value of the threshold δ lies within a fairly narrow range. As shown in Figure 6, when γ is large, the performance becomes more sensitive to the value of δ, and it drops dramatically when δ is set too low while γ is large. One possible explanation is that the focal loss tends to update the parameters with a far larger step for the samples that are hard to recognize, especially when the probabilities assigned to those samples are quite low.

Coefficients of Focal Loss

We compare the speed of convergence for different values of γ used in the focal loss in Figure 5. If γ is set to zero, the focal loss reduces to the cross-entropy loss. When the cross-entropy loss is used, the model is trapped at an extremely low performance for a number of epochs, which indicates that this loss is not optimal in situations with severe sample imbalance. Note that models usually suffer from sample imbalance in NER because most candidate fragments are labelled as “NONE”. The model with the focal loss converges relatively faster because this loss adaptively assigns different update steps to mis-classified samples according to how hard they are to recognize. Although the model trained with the focal loss did not outperform the one trained with cross-entropy, it does help to speed up the training process.

Conclusion

Observing that Chinese names are usually formed in some distinct patterns and that the features derived from their prefixes and suffixes are particularly useful for identifying them, a fragment-based model augmented with position-dependent features learned from a lexicon is introduced for Chinese NER. Experimental results showed that the model, using position-dependent features and a lexicon-based memory, achieved state-of-the-art results on four different NER datasets.

Footnotes

  1. https://catalog.ldc.upenn.edu/LDC2011T13
  2. https://pytorch.org/docs/stable/optim.html#torch.optim.SparseAdam
  3. A different weight decay was used for the Weibo NER dataset; otherwise the network hardly converges.

References

  1. J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere and Y. Qin (2004) An integrated theory of the mind.. Psychological review. Cited by: Introduction.
  2. D. Bahdanau, K. Cho and Y. Bengio (2015) Neural machine translation by jointly learning to align and translate. ICLR. Cited by: Attention Mechanism and Memory Network.
  3. W. Che, M. Wang, C. D. Manning and T. Liu (2013) Named entity recognition with bilingual constraints. In NAACL, Cited by: Table 6.
  4. A. Chen, F. Peng, R. Shan and G. Sun (2006) Chinese named entity recognition with conditional probabilistic models. In SIGHAN Workshop, Cited by: Table 4.
  5. L. Chen and A. Moschitti (2018) Learning to progressively recognize new named entities with sequence to sequence models. In Proceedings of the 27th International Conference on Computational Linguistics, Cited by: Introduction.
  6. J. P. Chiu and E. Nichols (2016) Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics. Cited by: Introduction.
  7. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu and P. Kuksa (2011) Natural language processing (almost) from scratch. Journal of machine learning research. Cited by: Introduction.
  8. A. Conneau, G. Kruszewski, G. Lample, L. Barrault and M. Baroni (2018) What you can cram into a single vector: probing sentence embeddings for linguistic properties. ACL. Cited by: Fragment Encoder.
  9. J. Devlin, M. Chang, K. Lee and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. NAACL. Cited by: Evaluation with Different Architectures.
  10. C. Dong, J. Zhang, C. Zong, M. Hattori and H. Di (2016) Character-based LSTM-CRF with radical-level features for chinese named entity recognition. In Natural Language Understanding and Intelligent Applications, Cited by: Table 4.
  11. J. L. Elman (1990) Finding structure in time. Cognitive science. Cited by: Character Encoder.
  12. J. Hammerton (2003) Named entity recognition with long short-term memory. In NAACL, Cited by: Attention Mechanism and Memory Network.
  13. D. Hassabis, D. Kumaran, C. Summerfield and M. Botvinick (2017) Neuroscience-inspired artificial intelligence. Neuron. Cited by: Introduction.
  14. H. He and X. Sun (2017) A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In AAAI, Cited by: Table 7.
  15. J. He and H. Wang (2008) Chinese named entity recognition and word segmentation based on character. In SIGHAN Workshop, Cited by: Introduction.
  16. S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation. Cited by: Character Encoder.
  17. J. J. Hopfield (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences. Cited by: Introduction.
  18. S. Huang, X. Sun and H. Wang (2017) Addressing domain adaptation for chinese word segmentation with global recurrent structure. In IJCNLP, Cited by: Introduction.
  19. Z. Huang, W. Xu and K. Yu (2015) Bidirectional lstm-crf models for sequence tagging. Arxiv. Cited by: Introduction, Introduction.
  20. A. Joulin, E. Grave, P. Bojanowski and T. Mikolov (2017) Bag of tricks for efficient text classification. In EACL, Cited by: Fragment Encoder.
  21. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. ICLR. Cited by: Training Details.
  22. G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami and C. Dyer (2016) Neural architectures for named entity recognition. Arxiv. Cited by: Introduction.
  23. G. Levow (2006) The third international chinese language processing bakeoff: word segmentation and named entity recognition. In SIGHAN Workshop, Cited by: Datasets.
  24. J. Li, A. Sun, J. Han and C. Li (2018) A survey on deep learning for named entity recognition. Arxiv. Cited by: Introduction.
  25. T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, Cited by: Training Objective.
  26. Z. Liu, C. Zhu and T. Zhao (2010) Chinese named entity recognition with a sequence labeling approach: based on characters, or based on words?. In Advanced intelligent computing theories and applications. With aspects of artificial intelligence, Cited by: Introduction.
  27. Y. Lu, Y. Zhang and D. Ji (2016) Multi-prototype chinese character embedding.. In LREC, Cited by: Table 4.
  28. M. Luong, H. Pham and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. EMNLP. Cited by: Attention Mechanism and Memory Network, Attention over Lexicon Memory.
  29. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: Lexicon Construction.
  30. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga and A. Lerer (2017) Automatic differentiation in pytorch. NIPS Workshop. Cited by: Training Details.
  31. N. Peng and M. Dredze (2015) Named entity recognition for chinese social media with jointly trained embeddings. In EMNLP, Cited by: Datasets.
  32. N. Peng and M. Dredze (2016) Improving named entity recognition for chinese social media with word segmentation representation learning. ACL. Cited by: Character Encoder, Datasets, Table 7.
  33. J. R. Curran (2013) Learning multilingual named entity recognition from Wikipedia. Artificial Intelligence. Cited by: Introduction.
  34. M. Rei, G. K. Crichton and S. Pyysalo (2016) Attending to characters in neural sequence labeling models. Arxiv. Cited by: Attention Mechanism and Memory Network.
  35. C. Sacarea (2013) Natural language processing: semantic aspects. Cited by: Introduction.
  36. T. Shallice (1988) From neuropsychology to mental structure. Cited by: Introduction.
  37. S. Sukhbaatar, J. Weston and R. Fergus (2015) End-to-end memory networks. In NIPS, Cited by: Attention over Lexicon Memory.
  38. M. Sun, X. Chen, K. Zhang, Z. Guo and Z. Liu (2016) Thulac: an efficient lexical analyzer for chinese. Technical report Technical Report. Technical Report. Cited by: Datasets.
  39. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In NIPS, Cited by: Character Encoder, Attention over Lexicon Memory.
  40. M. Wang, W. Che and C. D. Manning (2013) Effective bilingual constraints for semi-supervised learning of named entity recognizers. In AAAI, Cited by: Table 6.
  41. R. Weischedel, S. Pradhan, L. Ramshaw, M. Palmer, N. Xue, M. Marcus, A. Taylor, C. Greenberg, E. Hovy and R. Belvin (2011) OntoNotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium. Cited by: Datasets.
  42. J. Weston, S. Chopra and A. Bordes (2015) Memory networks. ICLR. Cited by: Attention Mechanism and Memory Network, Attention over Lexicon Memory.
  43. X. Wu (2009) Phrase clustering for discriminative learning. In ACL, Cited by: Introduction.
  44. G. Xu, C. Wang and X. He (2018) Improving clinical named entity recognition with global neural attention. In Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data, Cited by: Attention Mechanism and Memory Network.
  45. M. Xu, H. Jiang and S. Watcharawittayakul (2017) A local detection approach for named entity recognition and mention detection. In ACL, Cited by: Introduction, Decoding Strategy.
  46. N. Xue and L. Shen (2003) Chinese word segmentation as lmr tagging. In SIGHAN Workshop, Cited by: Character Encoder.
  47. J. Yang, Z. Teng, M. Zhang and Y. Zhang (2016) Combining discrete and neural features for sequence labeling. In International Conference on Intelligent Text Processing and Computational Linguistics, Cited by: Character Encoder, Table 6.
  48. J. Yang and Y. Zhang (2018) NCRF++: an open-source neural sequence labeling toolkit. In ACL, Cited by: Feature Combinations.
  49. C. Zhang, S. Bengio, M. Hardt, B. Recht and O. Vinyals (2016) Understanding deep learning requires rethinking generalization. ICLR. Cited by: Introduction.
  50. Q. Zhang, J. Fu, X. Liu and X. Huang (2018) Adaptive co-attention network for named entity recognition in tweets. In AAAI, Cited by: Attention Mechanism and Memory Network.
  51. S. Zhang, H. Jiang, M. Xu, J. Hou and L. Dai (2015) The fixed-size ordinally-forgetting encoding method for neural network language models. In ACL, Cited by: Local Detection, Fragment Encoder.
  52. S. Zhang, Y. Qin, J. Wen and X. Wang (2006) Word segmentation and named entity recognition for sighan bakeoff3. In SIGHAN Workshop, Cited by: Table 4.
  53. Y. Zhang and J. Yang (2018) Chinese NER using lattice LSTM. ACL. Cited by: Lexicon Construction, Table 5, Datasets, Table 4, Table 5, Table 6, Table 7.