TACAM: Topic And Context Aware Argument Mining

Michael Fromm, Evgeniy Faerman and Thomas Seidl
Ludwig-Maximilians-Universität München
Abstract

In this work we address the problem of argument search. The purpose of argument search is the distillation of pro and contra arguments for requested topics from large text corpora. In previous works, the usual approach is to use a standard search engine to extract text parts which are relevant to the given topic and subsequently use an argument recognition algorithm to select arguments from them. The main challenge in the argument recognition task, which is also known as argument mining, is that sentences containing arguments are often structurally similar to purely informative sentences without any stance about the topic; in fact, they differ only semantically. Most approaches use topic or search term information only for the first search step and therefore assume that arguments can be classified independently of a topic. We argue that topic information is crucial for argument mining, since the topic defines the semantic context of an argument. Specifically, we propose different models for the classification of arguments, which take information about the topic of an argument into account. Moreover, to enrich the context of a topic and to let models understand the context of the potential argument better, we integrate information from different external sources such as knowledge graphs or pre-trained NLP models. Our evaluation shows that considering topic information, especially in connection with external information, provides a significant performance boost for the argument mining task.

1 Introduction

Figure 1: Argument search pipeline with context

The main focus of argument search is on presenting an overview of different standpoints and their justifications for some inquired topic. This may be useful in different scenarios, like legal reasoning [42] or decision making processes [33], especially if a topic or a problem is controversial. An automated argument search process could ease much of the manual effort involved in these areas, especially if it can make use of large text databases or even combinations of them. Online argument search in state-of-the-art argument search systems proceeds in two steps [29] (a schematic sketch follows the list):

  1. Some standard text search engine, e.g. [3], extracts relevant text parts from large text corpora using a given topic as a query.

  2. Relevant text parts are analyzed sentence-wise by an argument recognition component, which decides for each sentence whether it is an argument and, optionally, determines its stance.
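To make the two steps concrete, the following minimal Python sketch wires them together. The names `retrieve` and `classify` are hypothetical placeholders for the text search engine (e.g., [3]) and the argument recognition component; neither is an API of the cited systems.

    from typing import Callable, List, Tuple

    def argument_search(
        topic: str,
        retrieve: Callable[[str], List[str]],  # step 1: standard text search
        classify: Callable[[str, str], str],   # step 2: argument recognition
    ) -> List[Tuple[str, str]]:
        """Return (sentence, stance) pairs for argumentative sentences."""
        results = []
        for text in retrieve(topic):           # relevant text parts for the topic
            for sentence in text.split(". "):  # naive sentence splitting
                stance = classify(sentence, topic)  # "pro", "contra" or "none"
                if stance != "none":
                    results.append((sentence, stance))
        return results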

Therefore, the core technique in argument search is argument recognition, also known as argument mining [23, 18, 34, 38, 17]. The basis for argument mining is an argument model (to avoid confusion with machine learning models, we refer to the argument model as the argument scheme in the following). An argument scheme formally defines what kinds of arguments exist and what their properties are. The task of a machine learning model is to identify these arguments in text.

The classic argument mining approaches extract arguments from text without taking the topic of the argument into consideration [22, 12, 9]. However, the special characteristic of applying argument mining to argument search is that there is always a query topic. The query topic carries information about the query context, and understanding the context of potential arguments can be crucial for the classification decision. For instance, if the query is about the usefulness of some medical procedure in the context of medicine, we expect meaningful arguments from medical doctors and not from people who share their own individual experiences. Thus, if a potential argument follows a particular structure or uses special terminology, this may increase its chances of being classified as an argument. Another desirable property of an argument mining approach is the ability to decide, depending on the topic, whether a candidate sentence is an argument. To be useful, an argument search engine has to rely on a large text corpus. The larger the text corpus, the more likely it is that the texts extracted by the search engine contain arguments about different topics. For instance, consider the following example: a user looks for arguments on Emission Trading and the text search engine retrieves the following sentence candidates:

  • ETS sets a clear price on carbon and combats climate change.

  • Free trade secures all the advantages of international division of labour.

  • UK signals plan to leave EU emissions trading scheme after Brexit.

The first two sentences can be considered arguments; the third sentence is purely informative. However, if we look more closely at the second sentence, we recognize that it is not an argument about the query topic Emission Trading. Therefore, it is important to be able to classify arguments in the context of topics. At the same time, since we cannot expect available training datasets for argument recognition to cover all possible topics, generalization to unseen topics is an important requirement. Therefore, the better a machine learning model is able to grasp the context of a topic and of potential arguments at different granularities, the better the decisions it can make and the more certain it can be about them.

In this work we propose a new approach for argument mining which also takes the topic of potential arguments into account. An overview of our approach is depicted in Figure 1. The standard argument search pipeline corresponds to the workflow presented in this figure without the context source and the dotted arrows. Our approach enriches topic and argument candidates with context information and takes it into account in the classification process. We show how contextual information about a topic and an argument from different sources, like knowledge graphs or pre-trained models, can be integrated into our approach. We investigate the benefits of considering the topic and of integrating external knowledge. We summarize our main contributions as follows:

  • We present a novel approach for argument classification which takes the topic of the argument into account.

  • We show how contextual information about topic and argument from different sources like knowledge graphs or pre-trained models can be integrated.

  • We demonstrate that considering topics is beneficial for the argument classification, especially in connection with external knowledge.

  • We show that our approach is particularly successful if the model has to generalize to unseen topics. This is a typical real-world scenario in argument search.

  • We present thorough experimental evaluations of our models and comparisons to state-of-the-art methods on a real-world dataset, and introduce an additional experimental setting in which we evaluate the ability of different models to classify arguments in the context of their topics.

2 Related Work

In general, the main focus in argument mining lies on the recognition of argument components [22, 12, 31, 15] and the detection of relations between them [31]. However, none of these approaches to argument classification takes information about the specific topic of a given argument into consideration.

At the same time, argumentation schemes of varying complexity have been proposed in previous works [35, 41, 10, 30]. Since each argumentation scheme contains different numbers of various argument types, this has implications for machine learning models designed for argument detection, as they have to learn how to identify each of them. However, as was shown in [7], these argumentation schemes do not generalize well to different types of texts. Concretely, the authors of this work collected datasets used with different argumentation schemes and combined them into a single dataset. Afterwards, they trained a model to detect the argument component of type claim, which is central to every argumentation scheme. However, machine learning models which performed well on single datasets could not achieve good results on this simple binary classification task. Additionally, it was shown that even human annotators often label differently when annotating the same datasets according to complex argumentation schemes. Therefore, the authors came to the conclusion that certain argument components, such as backing and warrant as introduced in [35], as well as those of other argumentation schemes, are often only stated implicitly in common argumentative documents on the internet. In more recent work, argumentation schemes have become simpler and more flexible [32, 40]. This enables broader applicability and topic-dependent argument search across multiple text types.

Our work builds upon [32]. The authors introduced a dataset with arguments of different text types and a topic for each argument. Additionally, they propose two simple argumentation schemes. The first scheme is a binary decision: classifying a sentence as argumentative or non-argumentative. The second scheme distinguishes between non-argumentative sentences, pro arguments, and contra arguments. They also propose a model which takes the topic into consideration. We extend their work by proposing new architectures and context sources, and compare our approach with their method.

There are a few approaches which use transfer learning for the argument mining task. In [32] the proposed model is pre-trained on another dataset for argument mining [13], but this approach does not lead to considerable improvement. In parallel to our work, the authors of [36] also use transfer learning with BERT for a newly introduced corpus with tagged sequences. However, their model does not generalize to new topics by design.

Based on these recent developments, two argument search engines have been developed, namely www.args.me [40] and www.argumentsearch.com [28], where a user is able to search a broad range of documents for arguments on a given topic.

3 Problem setting

We model the recognition of argumentative sentences as a classification task. Given a sentence $s = (w_1, \dots, w_n)$ and a topic $t = (t_1, \dots, t_m)$, with $w_i, t_j \in \{0,1\}^{|V|}$ being one-hot encoded vectors and $|V|$ being the size of the vocabulary, we seek to classify $s$ as a "pro argument" or "contra argument" if the sentence includes evidence for supporting or opposing the topic $t$, respectively. If the sentence does not contain such evidence, it is classified as a "non-argument".

4 Method

In contrast to previous approaches, we aim at incorporating context information into the learning procedure when training our models. This way, the models learn which argument properties are especially meaningful in the context of a particular topic and can put special emphasis on this information for the subsequent classification task. For instance, emission trading is a frequently discussed topic, but we would expect the most meaningful arguments about its usefulness to come from particular academic communities. Consequently, by providing topic information in a meaningful way, we enable models, e.g., to learn argument structures and vocabulary which are common in those communities. On the other hand, we also expect our models to learn how topics are related to their domain-specific arguments. Although a sentence might contain topic-specific words, it may still be an argument of a different topic. Considering the topic emission trading again, relevant arguments are probably more related to climate change than to the stock market, though trading is a frequently used term in the latter area. Thus, it is important to understand the context of the topic and the context of the potential arguments. Consequently, we propose various approaches to provide context information about topic and potential argument from various external sources. However, as the proposed models should be able to generalize to arbitrary topics, we provide the context information as an additional input to the models. Therefore, all our models aggregate the representation of the potential argument with the representation of the topic.

4.1 Models

4.1.1 Recurrent Network

The first model we propose is a recurrent model for which we use two instances of a BiLSTM [14]. Precisely, one instance encodes the topic and the other encodes the potential argument:

    $r_a = \mathrm{BiLSTM}_{arg}(E \cdot s) \qquad r_t = \mathrm{BiLSTM}_{topic}(f(t))$

We use word2vec [19] embeddings of the given words in a sentence as input for the argument BiLSTM instance $\mathrm{BiLSTM}_{arg}$. However, it is noteworthy that any other kind of word embeddings can be used, too. Furthermore, the function $f$ maps a given topic description $t$ to a sequence of entity representations $e_1, \dots, e_k$. In general, we allow arbitrary information sources to provide topic context; therefore, $f$ depends on the information source. In case the relevant entities of $t$ are described in terms of relevant words, one could use a sequence of word embeddings to encode the topic information. In this case $f$ maps the relevant words to the corresponding one-hot encoded vectors which, multiplied with the word embedding matrix $E$, serve as input for the topic BiLSTM instance denoted as $\mathrm{BiLSTM}_{topic}$. In case of using knowledge graphs as the external source of context information, $f$ first examines whether there is an entity with the same name as the whole topic description. Otherwise, it maps each word in the topic description to a corresponding entity in the knowledge graph. If there is no corresponding entity for a particular word, we employ a nearest-neighbor search for this word in the word embedding space and use the knowledge graph entity that matches a semantically similar word. Once we have found an entity for each word in the topic description, we use the corresponding sequence of knowledge graph entity representations as input for the topic BiLSTM instance. The function $g$ aggregates the topic and argument representations $r_t$ and $r_a$. We evaluate the following aggregation functions:

  • Addition: $g(r_t, r_a) = r_t + r_a$

  • Hadamard product: $g(r_t, r_a) = r_t \odot r_a$

  • Concatenation: $g(r_t, r_a) = [r_t ; r_a]$

Finally, we use the aggregated representation as input to a dense layer with softmax activation to obtain the classification result $\hat{y} = \mathrm{softmax}(W \cdot g(r_t, r_a) + b)$.
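A minimal PyTorch sketch of this recurrent architecture is given below. The hidden sizes, the use of the final hidden states as $r_a$ and $r_t$, and the assumption that topic and argument embeddings share the same dimensionality are our own illustrative choices, not specifications from this section.

    import torch
    import torch.nn as nn

    class TACAMRecurrent(nn.Module):
        """Sketch: two BiLSTM encoders whose representations are aggregated
        by g and fed to a softmax classifier (dimensions are assumptions)."""

        def __init__(self, emb_dim=300, hidden=128, n_classes=3, agg="concat"):
            super().__init__()
            self.arg_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.topic_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.agg = agg
            out_dim = 4 * hidden if agg == "concat" else 2 * hidden
            self.classifier = nn.Linear(out_dim, n_classes)

        def forward(self, arg_emb, topic_emb):
            # arg_emb:   (batch, n_words, emb_dim) word2vec vectors of the sentence
            # topic_emb: (batch, n_entities, emb_dim) word or KG embeddings f(t)
            _, (h_a, _) = self.arg_lstm(arg_emb)
            _, (h_t, _) = self.topic_lstm(topic_emb)
            r_a = torch.cat([h_a[0], h_a[1]], dim=-1)  # forward + backward states
            r_t = torch.cat([h_t[0], h_t[1]], dim=-1)
            if self.agg == "add":            # g(r_t, r_a) = r_t + r_a
                z = r_t + r_a
            elif self.agg == "hadamard":     # g(r_t, r_a) = r_t * r_a elementwise
                z = r_t * r_a
            else:                            # g(r_t, r_a) = [r_t; r_a]
                z = torch.cat([r_t, r_a], dim=-1)
            return torch.softmax(self.classifier(z), dim=-1)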

4.1.2 Attention model

We also use a deep bidirectional transformer encoder [37], the architecture which was used in BERT [8]. Specifically, we concatenate argument and topic description and use a special separator token and segment embeddings to distinguish between topic and potential argument. The output of the first special [CLS] token is used as input to the dense classification layer, which predicts the distribution over the classes.
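The sentence-pair input can be constructed as in the following sketch, which uses the Hugging Face transformers library as one possible implementation; the library choice and the checkpoint name are our assumptions, not prescribed by this section.

    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=3  # pro / contra / non-argument
    )

    # Topic and candidate sentence are encoded as a BERT sentence pair:
    # [CLS] topic [SEP] sentence [SEP], with segment (token type) embeddings
    # distinguishing the two parts; the classifier sits on the [CLS] output.
    enc = tokenizer(
        "emission trading",
        "ETS sets a clear price on carbon and combats climate change.",
        return_tensors="pt", truncation=True, max_length=128,
    )
    logits = model(**enc).logits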

4.2 Context source

As mentioned previously, our models are able to rely on different external sources that may provide the context information. In this work, we experiment with the following sources:

  • Shallow Word Embeddings [19, 24, 4] are widely used in NLP applications and encode context information implicitly. In fact, the word embeddings are learned such that the representations of words that frequently appear in similar contexts are similar to each other. We use shallow word embeddings trained by word2vec as input to the recurrent model.

  • Knowledge Graphs model information about the world explicitly in the form of a heterogeneous graph. The entities in the knowledge graph are represented as nodes, and the relationships between them as edges of different types. Information in a knowledge graph is represented as triples consisting of subject, predicate and object, where subject and object are entities and the predicate stands for the relationship between them. In contrast to information contained in text data, knowledge graphs are structured, i.e., each entity and relationship has a distinct meaning, and the information about the modelled world is distilled in the form of facts. These facts can be extracted from texts or databases, or inserted manually. The trustworthiness of these facts in publicly available knowledge graphs is in general very high [21]. In our work we use the English version of the DBpedia knowledge graph, which contains about 400 million facts covering more than 3.7 million unique entities [16]. We applied TransE [5] to obtain embeddings for the knowledge graph entities. These embeddings are used as input to the recurrent model as an alternative to the word embeddings; a sketch of the topic-to-entity mapping follows this list.

  • Fine-Tuning based Transfer Learning approaches [26, 8, 27] adapt whole models that were pre-trained on some (auxiliary) task to a new problem. This is different from feature-based approaches, which provide pre-trained representations [25, 6] and require a task-specific architecture for a new problem. We use the weights of pre-trained BERT (Large and Base) [8] models to initialize the attention model from Section 4.1.2 and train it for the argument classification task.
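Below is a sketch of the topic-to-entity mapping $f$ described in Section 4.1.1, under the assumption that the TransE entity embeddings and the word embeddings are available as plain dictionaries (`entity_emb` and `word_emb` are hypothetical names); the nearest-neighbor fallback uses cosine similarity.

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

    def map_topic_to_entities(topic, entity_emb, word_emb):
        """f: topic description -> sequence of KG entity embeddings."""
        if topic in entity_emb:                  # whole topic matches an entity
            return [entity_emb[topic]]
        vectors = []
        for word in topic.split():
            if word in entity_emb:               # word matches an entity name
                vectors.append(entity_emb[word])
            elif word in word_emb:               # fallback: nearest neighbour in
                candidates = [c for c in entity_emb if c in word_emb]
                if candidates:                   # word embedding space that also
                    best = max(candidates,       # names a KG entity
                               key=lambda c: cosine(word_emb[c], word_emb[word]))
                    vectors.append(entity_emb[best])
        return vectors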

5 Evaluation

5.1 Dataset and Evaluation Tasks

For the evaluation we use the UKP Sentential Argument Mining corpus from [32]. The dataset consists of more than 25,000 sentences from multiple text types covering eight different topics. It contains a broad range of genres, including news reports, editorials, blogs, debate forums and encyclopedia articles, which are all related to at least one topic. The topics have been randomly selected and are all controversial. The authors define an argument as a sentence that can be used to oppose or support a given topic. For all models, each sentence is truncated to 60 words. Note that in contrast to [32] we use weighted cross-entropy to account for class imbalance (we assume this is the reason why we obtained better results for the comparison methods than reported in the original paper); a sketch of the weighted loss follows the task list below. Following [32], we evaluate our approach by performing the following classification tasks:

  • Binary classification: whether a sentence is an argument for the given topic.

  • Multiclass classification: whether a sentence is a supporting argument, an opposing argument, or not an argument at all for the given topic.
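The weighted cross-entropy mentioned above can be sketched as follows; the inverse-frequency weighting is an assumption on our side, as we only state that a weighted loss is used.

    import torch
    import torch.nn as nn

    labels = torch.tensor([2, 2, 2, 2, 0, 1])        # toy, imbalanced labels
    counts = torch.bincount(labels, minlength=3).float()
    weights = counts.sum() / (len(counts) * counts)  # inverse class frequency
    criterion = nn.CrossEntropyLoss(weight=weights)  # down-weights frequent classes

    logits = torch.randn(6, 3)                       # toy model outputs
    loss = criterion(logits, labels)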

As suggested in [32], we evaluate all approaches in two different scenarios. In the In-Topic scenario each topic is split into training and test data, which leads to arguments of the same topics appearing in both training and test data. The Cross-Topic scenario primarily aims at evaluating the generalization of the models, i.e., answering the question of how well the models perform on yet unseen topics. Therefore, seven topics are used for training and the remaining one for testing. We want to mention that although Cross-Topic is the more complex task, it is also the more relevant one for real-world problems, since we generally cannot expect all possible topic queries to be present in a dataset that is available for training.
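The two scenarios can be sketched as follows; `data` is a hypothetical list of (sentence, topic, label) triples.

    import random

    def in_topic_split(data, test_frac=0.2, seed=0):
        """Random split: the same topics occur in training and test data."""
        shuffled = data[:]
        random.Random(seed).shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_frac))
        return shuffled[:cut], shuffled[cut:]

    def cross_topic_split(data, held_out_topic):
        """Leave-one-topic-out: seven topics train, the held-out topic tests."""
        train = [d for d in data if d[1] != held_out_topic]
        test = [d for d in data if d[1] == held_out_topic]
        return train, test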

5.2 Models

For all tasks we compare the following approaches:

  • BiLSTM is a bidirectional LSTM model [14], which does not use topic information

  • BiCLSTM is the contextual bidirectional LSTM [11]. Topic information is used as an additional input to the gates of an LSTM cell. We use the version from [32] where the topic information is only used at the $i$ and $c$ gates, since this model showed the most promising results in their work.

  • TACAM-WE is our recurrent model described in Section 4.1.1 which uses word embeddings to define the context of the topic.

  • TACAM-KG is our recurrent model described in Section 4.1.1 which uses knowledge graph embeddings from DBpedia to define the context of the topic.

  • TACAM-BERT is our attention based model with topic information as described in Section 4.1.2. The model uses pre-initialized weights as described in Section 4.2.

  • CAM-BERT is identical to TACAM-BERT without topic information. This model provides context for the potential argument but does not have access to the topic. Comparing this model to its counterpart with topic information enables the evaluation of topic importance.

In our experimental setting we mostly follow the settings suggested in [32]. We use the same train/validation/test splits. The validation set is used to select the hyperparameters, and we report Macro F1 scores on the test sets. To avoid effects of bad initialization and local minima, we train each model 10 times and select the model which performs best on the validation set; a sketch of this protocol follows.
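The selection protocol can be sketched as below; `train_model` is a hypothetical factory that trains one run with a given random seed.

    from sklearn.metrics import f1_score

    def select_best(train_model, val_x, val_y, runs=10):
        """Train `runs` models, keep the one with the best validation Macro F1."""
        best_model, best_f1 = None, -1.0
        for seed in range(runs):
            model = train_model(seed)
            macro_f1 = f1_score(val_y, model.predict(val_x), average="macro")
            if macro_f1 > best_f1:
                best_model, best_f1 = model, macro_f1
        return best_model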

5.3 In-topic Results

Method        two-class   three-class
BiLSTM        0.74        0.56
BiCLSTM       0.75        0.56
TACAM-WE      0.74        0.57
TACAM-KG      0.74        0.55
CAM-BERT      0.82        0.69
TACAM-BERT    0.81        0.71

Table 1: In-topic argument classification results (Macro F1)

The results of the in-topic argument classification are listed in Table 1. In this setting we do not expect a large improvement from providing topic information, since the models have already been trained on arguments of the same topics that appear in the test set. The results in Table 1 reflect our expectations: we can slightly improve the classification results for the more complex multiclass classification problem. However, we see a relative increase of 13% for the two-class and 25% for the three-class classification problem by using context information from transfer learning. Therefore, we conclude that contextual information about potential arguments is important, and since the topics are diverse, the model is able to learn the argument structure for each topic.

5.4 Cross-Topic Results

two-class

Method        Abortion  Cloning  Death pen.  Gun ctrl.  Marij. leg.  Min. wage  Nucl. energy  School unif.  Avg
BiLSTM        0.71      0.71     0.71        0.76       0.72         0.75       0.69          0.59          0.70
BiCLSTM       0.70      0.72     0.72        0.75       0.74         0.72       0.70          0.58          0.70
TACAM-WE      0.69      0.53     0.71        0.75       0.74         0.74       0.67          0.61          0.68
TACAM-KG      0.72      0.72     0.69        0.74       0.76         0.75       0.73          0.62          0.72
CAM-BERT      0.56      0.78     0.72        0.73       0.68         0.49       0.72          0.56          0.65
TACAM-BERT    0.78      0.80     0.78        0.80       0.80         0.83       0.81          0.80          0.80

three-class

Method        Abortion  Cloning  Death pen.  Gun ctrl.  Marij. leg.  Min. wage  Nucl. energy  School unif.  Avg
BiLSTM        0.43      0.54     0.49        0.49       0.47         0.43       0.47          0.40          0.47
BiCLSTM       0.49      0.53     0.46        0.48       0.48         0.47       0.52          0.45          0.49
TACAM-WE      0.47      0.72     0.47        0.49       0.46         0.41       0.49          0.41          0.49
TACAM-KG      0.50      0.52     0.47        0.50       0.48         0.48       0.51          0.45          0.49
CAM-BERT      0.37      0.63     0.50        0.46       0.50         0.39       0.58          0.43          0.48
TACAM-BERT    0.54      0.68     0.54        0.52       0.59         0.68       0.67          0.65          0.61

Table 2: Cross-topic argument classification results (Macro F1); the last column is the average over all eight topics

Our cross-topic results are presented in Table 2. In this experiment, which reflects a real-life argument search scenario, we want to test our two hypotheses:

  • When classifying potential arguments, it is advantageous to take information about the topic into account.

  • The context of an argument and topic context are important for the classification decision.

On the whole, we can see that both hypotheses are confirmed. In the two-class scenario the recurrent model improves if topic information is provided by knowledge graph embeddings. Using the attention-based model with pre-trained weights, we observe a significant performance boost of 8 points on average when considering topic information. However, the same model without topic information performs even worse than the recurrent models. Therefore, we conclude that topic information and the contexts of topic and argument are jointly important for the correct decision about a potential argument. We observe similar effects in the three-class scenario. Although on average the different context sources have a similar effect on the recurrent model, we can clearly observe that taking topic information into account improves the classification results by two points. The combination of transfer learning for context and topic information again outperforms all other approaches by far. At the same time, the pre-trained model without topic information achieves only a moderate average of 0.48.

5.5 Topic Dependent Cross-Topic Results

As shown in the previous subsection, argument classification produces satisfying results, especially if topic information and contexts are taken into account. In this set of experiments we evaluate the ability of different models to classify in a topic-dependent way: a sentence may be an argument for one topic but non-argumentative for another. We argue that this ability is important for filtering out candidates which are arguments for different topics, especially if the text corpus is large. To evaluate the models' ability to perform well in topic-dependent classification, we extend our dataset and change the experimental setting. For each topic we select a number of related terms. These are words which come from a context similar to that of the topic, but for which it is very unlikely that the topic's arguments are valid arguments. The list of related terms for each topic is provided in Table 3. For 50% of the argumentative sentences selected randomly from the test set, we replace the topic by one of its related terms and change the sentence label in the test set to non-argumentative. Therefore, to perform well on this task, a model must be able to recognize argumentative sentences in the context of the topic. To train for this task, we correspondingly augment the training data: we keep the original training data, additionally select 50% of the argumentative sentences from the training set, assign one of the related terms as the topic, label them as non-argumentative, and insert them into the training set (a sketch of this procedure follows). For this task we take our model which performed best on the original cross-topic task and compare it with the state-of-the-art approach BiCLSTM. We also include the same models without topic information to see whether topic information is still helpful or whether the models get confused instead.
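A sketch of the topic-replacement procedure for the test set is given below; `RELATED` excerpts the mapping from Table 3, and all names are our own illustrative choices.

    import random

    RELATED = {  # excerpt of Table 3
        "abortion": ["euthanasia", "teenage pregnancy", "family",
                     "medical procedure", "rape"],
    }

    def swap_topics_for_test(samples, related=RELATED, frac=0.5, seed=0):
        """samples: (sentence, topic, label) triples. For `frac` of the
        argumentative sentences, replace the topic by a related term and
        relabel as non-argumentative. (Training augmentation instead *adds*
        such relabeled copies while keeping the originals.)"""
        rng = random.Random(seed)
        out = []
        for sentence, topic, label in samples:
            if label != "non-argument" and rng.random() < frac:
                out.append((sentence, rng.choice(related[topic]), "non-argument"))
            else:
                out.append((sentence, topic, label))
        return out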

The results for topic-dependent classification are presented in Table 4. For the two-class problem we observe a massive performance drop of 10 points for the BiCLSTM model. Nonetheless, the model still makes use of topic information and outperforms the standard BiLSTM by two points. Our approach TACAM-BERT is more robust: its performance falls by a moderate 4 points, and the gap to the counterpart model without topic information grows to a remarkable 17 points. We observe a similar behaviour in the three-class scenario. Our TACAM-BERT approach achieves nearly the same average score as in the original cross-topic task. In contrast, the performance of the BiCLSTM model drops by 13 points, and it even performs worse than the same model without topic information on this more complex task. Thus we conclude that, unlike previous models, our approaches are indeed able to grasp the contexts of argument and topic and to relate them to each other.

Topic                   Related terms
abortion                euthanasia, teenage pregnancy, family, medical procedure, rape
cloning                 biology, species, religion, organ donation, modified food
death penalty           politics, ethics, prison, homicide, sentence
gun control             safety, school shooting, robbery, regulation, police state
marijuana legalization  drugs, medicine, relaxation, freedom, liberty
minimum wage            social justice, slavery, automation, economic crisis, stagnation
nuclear energy          environment, employment, industry, pollution, climate change
school uniforms         equality, social justice, individualism, clothing, mobbing

Table 3: Related terms for each topic
two-class

Method        Abortion  Cloning  Death pen.  Gun ctrl.  Marij. leg.  Min. wage  Nucl. energy  School unif.  Avg
BiLSTM        0.57      0.59     0.53        0.59       0.62         0.62       0.59          0.57          0.58
BiCLSTM       0.62      0.72     0.46        0.46       0.76         0.60       0.69          0.45          0.60
CAM-BERT      0.56      0.63     0.60        0.62       0.61         0.55       0.60          0.53          0.59
TACAM-BERT    0.68      0.77     0.78        0.79       0.82         0.85       0.79          0.58          0.76

three-class

Method        Abortion  Cloning  Death pen.  Gun ctrl.  Marij. leg.  Min. wage  Nucl. energy  School unif.  Avg
BiLSTM        0.39      0.39     0.37        0.36       0.39         0.42       0.40          0.39          0.39
BiCLSTM       0.46      0.34     0.29        0.35       0.42         0.29       0.47          0.30          0.36
CAM-BERT      0.42      0.50     0.42        0.42       0.48         0.51       0.50          0.49          0.47
TACAM-BERT    0.46      0.62     0.54        0.51       0.63         0.67       0.64          0.57          0.59

Table 4: Topic-dependent cross-topic classification results (Macro F1); the last column is the average over all eight topics

6 Conclusion

In this paper, we introduced a new approach for argument mining which takes the topic of the potential argument into account. We hypothesized that considering information about the topic of a potential argument and their contexts should lead to better argument recognition. We presented multiple ways to include topic and contexts in the argument mining process. Precisely, we showed how contexts from word embeddings, knowledge graph embeddings and models pre-trained on other tasks can be integrated into our approach. Our experimental results clearly show that considering topics in the decision process leads to better results in almost all considered cases. Especially our approach combining topic information with context from pre-trained models outperforms the state-of-the-art approach by far in the realistic cross-topic scenario. We could also show that, in contrast to current state-of-the-art methods, our approach is robust and able to grasp the context of topic and potential argument. For future work we plan to focus more on knowledge graphs and other external context sources. In detail, we want to use information gathered from knowledge graphs not only for topics but also on the argument side. We also plan to investigate different knowledge graph embedding techniques and to combine different knowledge graphs in the same model. For instance, a combination of fact-based knowledge graphs like DBpedia [16] and Wikidata [39] with knowledge graphs like WordNet [20] and FrameNet [1, 2], which focus on lexical similarities, could further increase the representation quality of the context. Additional datasets with topic information about more topics could also deepen our understanding of the interplay between context and arguments and potentially further increase the performance of argumentation models.

7 Acknowledgements

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project Relational Machine Learning for Argument Validation (ReMLAV), Grant Number SE 1039/10-1, as part of the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999). This work has also been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content.

References

  • [1] C. F. Baker, C. J. Fillmore and J. B. Lowe (1998) The Berkeley FrameNet project. In Proceedings of the 17th International Conference on Computational Linguistics - Volume 1, COLING '98, Stroudsburg, PA, USA, pp. 86–90.
  • [2] C. F. Baker, C. J. Fillmore and J. B. Lowe (1998) The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL '98/COLING '98, Stroudsburg, PA, USA, pp. 86–90.
  • [3] A. Białecki, R. Muir, G. Ingersoll and L. Imagination (2012) Apache Lucene 4. In SIGIR 2012 Workshop on Open Source Information Retrieval, pp. 17.
  • [4] P. Bojanowski, E. Grave, A. Joulin and T. Mikolov (2017) Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5, pp. 135–146.
  • [5] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems 26, pp. 2787–2795.
  • [6] A. M. Dai and Q. V. Le (2015) Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079–3087.
  • [7] J. Daxenberger, S. Eger, I. Habernal, C. Stab and I. Gurevych (2017) What is the essence of a claim? Cross-domain claim identification. CoRR abs/1704.07203.
  • [8] J. Devlin, M. Chang, K. Lee and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805.
  • [9] S. Eger, J. Daxenberger and I. Gurevych (2017) Neural end-to-end learning for computational argumentation mining. CoRR abs/1704.06104.
  • [10] J. B. Freeman (2011) Argument structure: representation and theory. Springer.
  • [11] S. Ghosh, O. Vinyals, B. Strope, S. Roy, T. Dean and L. P. Heck (2016) Contextual LSTM (CLSTM) models for large scale NLP tasks. CoRR abs/1602.06291.
  • [12] I. Habernal and I. Gurevych (2016) Argumentation mining in user-generated web discourse. CoRR abs/1601.02403.
  • [13] I. Habernal, M. Sukhareva, F. Raiber, A. Shtok, O. Kurland, H. Ronen, J. Bar-Ilan and I. Gurevych (2016) New collection announcement: focused retrieval over the web. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, New York, NY, USA, pp. 701–704.
  • [14] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780.
  • [15] X. Hua and L. Wang (2017) Understanding and detecting supporting arguments of diverse types.
  • [16] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer and C. Bizer (2015) DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web Journal 6 (2), pp. 167–195.
  • [17] M. Lippi and P. Torroni (2015) Argument mining: a machine learning perspective. In Theory and Applications of Formal Argumentation, Cham, pp. 163–176.
  • [18] M. Lippi and P. Torroni (2015) Context-independent claim detection for argument mining. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI '15, pp. 185–191.
  • [19] T. Mikolov, K. Chen, G. Corrado and J. Dean (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • [20] G. A. Miller (1995) WordNet: a lexical database for English. Communications of the ACM 38 (11), pp. 39–41.
  • [21] M. Nickel, K. Murphy, V. Tresp and E. Gabrilovich (2015) A review of relational machine learning for knowledge graphs. Proceedings of the IEEE 104 (1), pp. 11–33.
  • [22] R. M. Palau and M. Moens (2009) Argumentation mining: the detection, classification and structure of arguments in text. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, ICAIL '09, New York, NY, USA, pp. 98–107.
  • [23] A. Peldszus and M. Stede (2013) From argument diagrams to argumentation mining in texts: a survey. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 7 (1), pp. 1–31.
  • [24] J. Pennington, R. Socher and C. Manning (2014) GloVe: global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
  • [25] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee and L. Zettlemoyer (2018) Deep contextualized word representations. In Proceedings of NAACL.
  • [26] A. Radford, K. Narasimhan, T. Salimans and I. Sutskever (2018) Improving language understanding by generative pre-training. OpenAI.
  • [27] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei and I. Sutskever (2019) Language models are unsupervised multitask learners. OpenAI Blog 1 (8).
  • [28] C. Stab, J. Daxenberger, C. Stahlhut, T. Miller, B. Schiller, C. Tauchmann, S. Eger and I. Gurevych (2018) ArgumenText: searching for arguments in heterogeneous sources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, New Orleans, Louisiana, pp. 21–25.
  • [29] C. Stab, J. Daxenberger, C. Stahlhut, T. Miller, B. Schiller, C. Tauchmann, S. Eger and I. Gurevych (2018) ArgumenText: searching for arguments in heterogeneous sources. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pp. 21–25.
  • [30] C. Stab and I. Gurevych (2014) Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp. 46–56.
  • [31] C. Stab and I. Gurevych (2017) Parsing argumentation structures in persuasive essays. Computational Linguistics 43 (3), pp. 619–659.
  • [32] C. Stab, T. Miller and I. Gurevych (2018) Cross-topic argument mining from heterogeneous sources using attention-based neural networks. CoRR abs/1802.05758.
  • [33] O. Svenson (1979) Process descriptions of decision making. Organizational Behavior and Human Performance 23 (1), pp. 86–112.
  • [34] R. Swanson, B. Ecker and M. Walker (2015) Argument mining: extracting arguments from online dialogue. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, pp. 217–226.
  • [35] S. E. Toulmin (1958) The uses of argument. Cambridge University Press.
  • [36] D. Trautmann, J. Daxenberger, C. Stab, H. Schütze and I. Gurevych (2019) Robust argument unit recognition and classification. CoRR abs/1904.09688.
  • [37] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
  • [38] M. P. G. Villalba and P. Saint-Dizier (2012) Some facets of argument mining for opinion analysis. In COMMA.
  • [39] D. Vrandečić and M. Krötzsch (2014) Wikidata: a free collaborative knowledgebase. Communications of the ACM 57 (10), pp. 78–85.
  • [40] H. Wachsmuth, M. Potthast, K. Al Khatib, Y. Ajjour, J. Puschmann, J. Qu, J. Dorsch, V. Morari, J. Bevendorff and B. Stein (2017) Building an argument search engine for the web. In Proceedings of the 4th Workshop on Argument Mining, Copenhagen, Denmark, pp. 49–59.
  • [41] D. Walton (2012) Argument mining by applying argumentation schemes. Studies in Logic 4.
  • [42] A. Wyner, R. Mochales-Palau, M. Moens and D. Milward (2010) Approaches to text mining arguments from legal cases. In Semantic Processing of Legal Texts: Where the Language of Law Meets the Law of Language, pp. 60–79.