Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?
Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper’s authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing — specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data — give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a “gold standard” of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place.
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. (mellin1957work; babbage2011passages) However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks (e.g. friedman2009elements; james2013introduction; goodfellow2016deep). The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
1.1. Study overview
All approaches to producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on arXiv.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper’s authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or do not provide sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and, by extension, the validity of the classifier.
2. Literature review and motivation
2.1. A different kind of “black-boxing” in machine learning
In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell (burrell2016machine) notes. A major focus is on public accountability (e.g. pasquale2015black), where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems (stuart2004databases; eubanks2018automating).
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories (latour1979laboratory). They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief — questions that are also raised in discussions of open science & reproducibility (Kitzes2018). Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement (Goodwin1994; scott_seeing_1998; Latour1999a; bowker1999sorting). In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” (jacobs_measurement_2019, p. 19). This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” (jacobs_measurement_2019, p. 14).
2.2. Content analysis
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory (glaser). The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest (e.g. nelson).
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” (riff2013analyzing, p. 19) method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google’s reCAPTCHA (von2008recaptcha) is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases (chang_revolt_2017; maeda_annotation_2008; perez_marky_2015; bontcheva_gate_2013; halfaker2019ores; doccano). For example, the Zooniverse (Simpson2014) provides a common platform for citizen science projects across different domain application areas, letting volunteers make judgments about items that are then aggregated and reconciled in various ways.
2.3. Meta-research and methods papers in linguistics and crowdsourcing
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation (e.g. hovy2010towards), including recent work about using crowdworkers (sabouetal2014). Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium’s guidelines for annotation of English-language entities (version 6.6) run to 72 single-spaced pages (linguistic2008ace). A universal problem of standardization is that there are often too many standards and not enough enforcement. As (bender2018data) notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics (McDonald2019).
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining whether the same person labels the same item differently at a later stage. One paper (Mozetic2016) examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorff’s alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the explanation is never actually examined (soberon2013measuring). One highly-cited paper (raykar2012eliminating) proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
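A self-agreement check of this kind can be illustrated with a minimal sketch. This is our own illustration, not code from the cited papers, and it computes a raw agreement rate; (Mozetic2016) uses the chance-corrected Krippendorff's alpha instead:

```python
def self_agreement_rate(first_pass, second_pass):
    """Fraction of items a labeler answers identically on a repeat pass.

    `first_pass` and `second_pass` are the labels one crowdworker gave
    to the same items at two different stages. Low rates flag
    potentially random or low-effort responders.
    """
    if len(first_pass) != len(second_pass):
        raise ValueError("both passes must cover the same items")
    same = sum(a == b for a, b in zip(first_pass, second_pass))
    return same / len(first_pass)
```

For example, a worker who labels three tweets `["pos", "neg", "neg"]` and later relabels them `["pos", "neg", "pos"]` has a self-agreement rate of 2/3.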
2.4. The data documentation movements
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code (fecher_open_2014). The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results (Wilson2017; Kitzes2018). This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising (Goodman2014; gil_toward_2016). There are also intersecting literatures on systems for capturing information in ML data flows and supply chains (singh_decision_2019; schelter_automatically_2017; gharibi_automated_2019), as well as supporting data cleaning (schelter_automating_2018; krishnan_activeclean_2016). These issues have long been discussed in the fields of library and information science, particularly in Research Data Management (schreier2006academic; borgman2012conundrum; Medeiros2017; sallans2012dmp).
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” (gebru2018datasheets), “model cards” (mitchell2019model), “data statements” (bender2018data), “nutrition labels” (holland2018dataset), a “bill of materials” (barclay2019towards), “data labels” (beretta2018ethical), and “supplier declarations of conformity” (hind2018increasing). Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
3. Data and methods
3.1. Data: machine learning papers performing classification tasks on Twitter data
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets, or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect the samples were non-representative. Restricting our sample to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section 7.1.1, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from arXiv, the oldest and most established “preprint” repository, originally for researchers to share papers prior to peer review. Today, arXiv is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (cs.CV), the Statistics subcategory of Machine Learning (stat.ML), and Physics and Society (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier’s Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, most of which were conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
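The two-stage keyword filter described above can be sketched as follows. This is a minimal illustration of the filtering logic only; the `papers` data structure and function names are our own assumptions, not the actual retrieval code:

```python
import re

# Keyword patterns mirroring the two filtering stages described above.
# "classif" and "supervi" match as prefixes (classify, classifier,
# supervised, supervision, ...), approximating the wildcard queries.
ML_TERMS = re.compile(r"machine learning|classif|supervi", re.IGNORECASE)
TWITTER_TERMS = re.compile(r"twitter|tweet", re.IGNORECASE)

def matches(paper, pattern):
    """True if the pattern appears in the paper's title or abstract."""
    text = paper.get("title", "") + " " + paper.get("abstract", "")
    return pattern.search(text) is not None

def filter_corpus(papers):
    """Apply both keyword filters in sequence, as in the sampling procedure."""
    ml_papers = [p for p in papers if matches(p, ML_TERMS)]
    return [p for p in ml_papers if matches(p, TWITTER_TERMS)]
```

A paper titled "Supervised classification of tweets" would pass both stages, while a general deep learning survey with no Twitter terms would be excluded.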
ArXiv is likely not a representative sample of all ML publications. However, we chose it because arXiv papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many arXiv papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. However, because such papers are routinely discussed in both the academic literature and the popular press, issues with their reporting of training data are just as crucial. Sampling from arXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in 7.1.2) and an analysis of the publishers and publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) (in 7.1.3). The final dataset can be found on GitHub and Zenodo.
3.2. Labeling team, training, and workflow
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both classroom and applied settings. Students’ majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
Each week of the labeling workflow, a set of papers was randomly sampled from the unlabeled set of 494 arXiv papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected instead. The five students independently reviewed and labeled the same papers each week, each using a separate web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a pure majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The first two weeks constituted a training period, in which the team worked on a separate set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, both of which were further refined.
3.3. Second round verification and reconciliation
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes had been made to the coding schema (discussed in appendix 7.2.2). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper’s labels were considered to be final. For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section 7.4.
Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We found that our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would have been concerning had such a variable been “yes”, but we found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by using crowdworkers in the original human annotation source) and the number of human annotators.
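A rule-based cleanup of this kind can be sketched as follows. The field names and values here are hypothetical illustrations, not our actual schema or scripts:

```python
# Hypothetical sketch of the blank-value cleanup described above.
# Field names like "used_crowdworkers" are illustrative assumptions.
def clean_labels(row):
    """Blank out questions that are not applicable given earlier answers."""
    row = dict(row)  # copy, so the caller's record is not mutated
    if row.get("used_crowdworkers") == "no":
        # Crowdworker-specific questions should be blank, not "no",
        # for papers that did not use crowdworkers at all.
        row["reported_compensation"] = ""
        row["used_prescreening"] = ""
    return row
```

Applying such a rule uniformly removes the ambiguity between "the question did not apply" and "the paper did not report it."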
We measured inter-rater reliability using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss’s kappa and Krippendorff’s alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were substantially higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated, discussion-based reconciliation process, rather than a simple majority vote. We detail more information and reflection about inter-rater reliability in appendix section 7.2.1.
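Percent total agreement for a single question can be computed with a short sketch (our own illustration; `labels_per_item` holds each labeler's initial label for every item):

```python
def percent_total_agreement(labels_per_item):
    """Proportion of items on which every labeler gave the same label.

    `labels_per_item` is a list with one inner list of labels per item.
    Every labeler must agree for an item to count, which makes this
    stricter than chance-corrected metrics like Fleiss's kappa.
    """
    unanimous = sum(len(set(labels)) == 1 for labels in labels_per_item)
    return unanimous / len(labels_per_item)
```

For example, if four papers receive labels `["yes","yes","yes"]`, `["yes","no","yes"]`, `["no","no","no"]`, and `["no","no","no"]` on one question, the percent total agreement is 0.75; averaging these rates across questions gives the mean percent total agreement reported above.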
3.4. Raw and normalized information scores
We quantified the information about training data in papers by developing a raw and a normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms and whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators were (annotation source), whether annotators received training, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided.
For raw scores, papers involving original human annotation received one point for reporting each of the six items mentioned above. In addition, papers that used crowdworkers received one point for each of the two crowdworker questions they reported on, and papers that used multiple annotators per item received one point for reporting inter-annotator agreement metrics. For the normalized score, the raw score was divided by the highest possible raw score for that paper.
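The scoring scheme can be sketched as follows. The field names are illustrative assumptions rather than our actual schema; a truthy value means the paper reported that item:

```python
# Hypothetical sketch of the raw and normalized information scores.
# The six core items apply to every paper with original human annotation;
# conditional items only widen the denominator when they are applicable.
CORE_ITEMS = [
    "annotation_source", "annotator_training", "instructions_given",
    "number_of_annotators", "multiple_annotators_per_item", "dataset_link",
]

def information_score(paper):
    """Return (raw, normalized) information score for one paper."""
    raw = sum(1 for item in CORE_ITEMS if paper.get(item))
    possible = len(CORE_ITEMS)
    if paper.get("used_crowdworkers"):
        possible += 2  # compensation and prescreening questions apply
        raw += bool(paper.get("reported_compensation"))
        raw += bool(paper.get("reported_prescreening"))
    if paper.get("multiple_annotators_per_item"):
        possible += 1  # inter-annotator agreement question applies
        raw += bool(paper.get("reported_irr_metrics"))
    return raw, raw / possible
```

Normalizing by the per-paper maximum means a paper is not penalized for questions that did not apply to its annotation process.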
4.1. Original classification task
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about whether and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make predictions among a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table 1 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — either because they did not give enough detail for us to make this determination, or because they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions).
4.2. Labels from human annotation
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case we decided was human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would be not applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
4.3. Used original human annotation and external human annotation
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper’s authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables 3 and 4 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human-annotated dataset.
4.4. Original human annotation source
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper’s authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper’s authors did the labeling. If the paper discussed labelers’ qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table 5 shows, we found a diversity of approaches to the recruitment of human annotators. A plurality of papers involved the paper’s authors doing the annotation work themselves. The next largest category was “no information,” which applied to almost a quarter of the papers using original human annotation. The “experts / professionals” category was far larger than we expected, although we took any claim of expertise at face value. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
| Annotation source | Count | % of papers |
|---|---|---|
| Experts / professionals | 16 | 21.62% |
| Amazon Mechanical Turk | 3 | 4.05% |
4.5. Number of human annotators
Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to record that number, leaving the field blank for no information. Most of the disagreement arose from differences in how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but with a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence or absence of information about the number of human annotators. Both aspects are important to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
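This re-coding step can be sketched as follows (a hypothetical illustration, not our actual analysis code; the example responses are invented):

```python
# Hypothetical sketch: collapse free-form "number of annotators" answers
# into whether the paper reported any such information at all.
def reported_annotator_count(responses):
    """Return True/False per response: was any count information given?"""
    return [bool(str(r).strip()) for r in responses]

# Invented examples of the inconsistent formats we encountered;
# a blank field means the paper gave no information.
raw = ["3 annotators total", "2 per item", "", "25 total, 3 per item"]
print(reported_annotator_count(raw))  # → [True, True, False, True]
```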
As table 6 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers’ authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
4.6. Formal definitions and instructions
Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators make the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, and giving definitions for each label and/or concrete examples. The paper must describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “No information.” Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition but only implied that it informed the labeling, which we counted as a formal definition. As table 7 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
| Instructions given | Count | % of papers |
|---|---|---|
| Instructions w/ formal definitions/examples | 32 | 43.24% |
| No instructions beyond question text | 7 | 9.46% |
4.7. Training for human annotators
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators’ progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team’s process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
| Training reported | Count | % of papers |
|---|---|---|
| Some training details | 11 | 14.86% |
The overwhelming majority of papers did not discuss such issues, as table 8 shows, with only 15% of papers reporting a training session. Because we had a quite strict definition of what constitutes training (versus what many may imagine by “trained annotators”), this is expected. We are also not especially concerned by this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific domain expertise and familiarity with our complicated schema.
4.8. Pre-screening for crowdwork platforms
Crowdwork platforms let employers pre-screen or test workers for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. Five of the 11 papers using crowdworkers reported using this approach. Platforms also often support location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms offer a qualification for workers with a positive track record based on total employer ratings (e.g. AMT Master), and some offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower’s Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
4.9. Multiple annotator overlap and reporting inter-annotator agreement
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see (McDonald2019)). For multiple annotator overlap, our definitions required that papers state whether all or some of the items were labeled by multiple labelers; otherwise, “no information” was recorded. Then, for papers that did have multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implies that at least some of the items were labeled by multiple labelers, but for consistency, we kept the “no information” label for this case. We did not record which inter-annotator agreement metric was used, such as Cohen’s kappa or Krippendorff’s alpha, but many different metrics were used. We also did not record the exact statistic, although we did notice wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
| Multiple annotator overlap | Count | % of papers |
|---|---|---|
| Yes for all items | 31 | 41.89% |
| Yes for some items | 6 | 8.11% |
For multiple annotator overlap, table 11 shows that just under half of all papers involving an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (~70%) reported some metric of inter-annotator agreement, as table 11 indicates.
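As a concrete illustration of one such metric (our sketch, not any surveyed paper's code), Cohen's kappa for two annotators compares observed agreement against chance agreement estimated from each annotator's label distribution:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' equal-length label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same label at random,
    # estimated from each rater's marginal label frequencies.
    chance = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - chance) / (1 - chance)

# Invented example: two annotators agree on 3 of 4 items (75% raw agreement),
# but kappa corrects for chance agreement.
print(cohens_kappa(["y", "y", "n", "n"], ["y", "n", "n", "n"]))  # → 0.5
```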
4.10. Reported crowdworker compensation
Crowdworking is often used because of its low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating for ethical practices including fair pay (silberman2018responsible). We examined all papers involving crowdworkers for any indication of compensation, and found that zero mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
4.11. Link to dataset available
Our final question was about whether the paper contained a link to the dataset containing the original human-annotated training data. Note that this question was only answered for papers involving some kind of original or novel human annotation; papers exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table 12 shows, the overwhelming majority of papers did not include such a link, with only 8 papers (10.81%) using original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human-annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
5. Paper information scores
The raw and normalized information scores (see section 3.4 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even when bounded to social computing. Our relatively small sample sizes, combined with the number of multiple comparisons, would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which gives evidence to the claim that there is substantial and wide variation in practices around human annotation, training data curation, and research documentation.
5.1. Overall distributions of information scores
Figure 1 shows histograms for raw and normalized information scores, both of which suggest a bimodal distribution, with fewer papers at the extremes and around the median. This suggests that there are roughly two populations of researchers, one centered around raw scores of 1-2 and normalized scores of 0.25, and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
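The normalization can be sketched as follows (a hypothetical illustration with invented scores, not our actual analysis; per section 3.4, we assume each paper's normalized score divides its raw score by the number of questions applicable to that paper):

```python
import statistics

# Invented (raw score, number of applicable questions) pairs for illustration.
# Papers using crowdworkers have more applicable questions (up to 9) than others.
papers = [(1, 7), (2, 7), (5, 8), (6, 9), (3, 7), (5, 7)]

raw = [r for r, _ in papers]
normalized = [r / n for r, n in papers]

print("mean raw:", round(statistics.mean(raw), 2))
print("mean normalized:", round(statistics.mean(normalized), 3))
print("stdev normalized:", round(statistics.stdev(normalized), 3))
```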
5.2. Information scores by corpus and publication type
Figure 2 shows two boxplots of normalized information scores, broken out by corpus and by publication type.
5.3. Information scores by publisher
Figure 3 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this group represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between academic authors in general and those who post ArXiv postprints.
6. Concluding discussion
In the sample of ML application publications using Twitter data we examined, we found a wide range in levels of documentation about methodological practices in human annotation. While we hesitate to overly generalize our findings to ML at large, these findings do raise concern, given how crucial the quality of training data is and how difficult it is to standardize human judgment. Yet they also give us hope, as we found a number of papers we considered to be excellent cases of reporting the processes behind their datasets. About half of the papers using original human annotation engaged in some form of multiple overlap, and about 70% of the papers that did multiple overlap reported metrics of inter-annotator agreement. The distribution of annotation information scores was roughly bimodal, suggesting two distinct populations: those who provide substantially more information about training data in their papers, and those who provide substantially less. We do see preliminary evidence that certain publishers/venues in our sample tended to have papers with far more information than others (e.g. ACM and ACL at the top end, followed closely by journal publishers Springer and Elsevier, with IEEE and AAAI proceedings at the lower end). Preprints exclusively published on ArXiv also had the widest range of scores.
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than by relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for reporting qualitative data analysis (Tong2007), which some journals require. A number of proposed standards have been created around datasets for ML (gebru2018datasheets; mitchell2019model; bender2018data; holland2018dataset; barclay2019towards; beretta2018ethical; hind2018increasing), which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has created the “ABOUT ML” working group to arrive at a common format or standard.
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward than others. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between tasks where there is expected to be only one ‘right’ answer and tasks where there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that particular study, but which would not make sense to require of the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions in helping scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation of disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although such tools can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed.
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. On the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did involve a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others.
6.3. Limitations and future work
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not yet submitted for review, to preprints under peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a subset of the kinds of issues that scholars and practitioners in ML raise when they call for greater transparency and accountability through documentation of datasets and models. We did not record information about the exact rates of inter-annotator agreement. In particular, we did not record information about the reconciliation or adjudication process for projects involving multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshots of the labeling interface were included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave these for future work, but also found that each additional question made the process more difficult for labelers. We also considered, but did not implement, having our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer review processes).
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
The appendix appears following the references section.
Acknowledgements. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley’s Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley’s Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
7.1. Dataset/corpus details
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities of the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support of or opposition to a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding of all papers, with the goal of creating a typology of keywords. The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure 4, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section 4.3). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then the two NLP methodologies of sentiment analysis and topic identification. The keyword “social networks” was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature or tried to predict it. The figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
Distribution of paper types in the corpus

| | Preprint never published | Postprint | Preprint | Non-ArXived (Scopus) | Total |
|---|---|---|---|---|---|
| Preprint never published | 57 | - | - | - | 57 |
| Refereed journal article | - | 8 | 7 | 6 | 21 |
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer review (with different content from the published version) or a post-print identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal; others upload the accepted manuscript that has passed peer review but has not been formatted and typeset by the publisher; and others upload the exact “camera-ready” version published by the publisher. ArXiv also lets authors upload new versions: some update the paper at each of these stages as they progress through the publishing process, others only upload a final version, and some only upload the pre-review version and never update it to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure’s caption constituted a substantive content change. Table 13 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
[Table: distribution of papers by year, with counts in the ArXiv and Scopus samples]

Distribution of publishers in corpus

[Table: distribution of publishers, with counts from the ArXiv and Scopus samples]
For each paper in the Scopus sample and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table 15, broken out by papers in the ArXiv and Scopus corpora. The distribution of papers by year is shown in table 15.
7.2. Methods and analysis details
In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table 16 shows, for each question, what percent of items were given the same label by all annotators (with number of annotators recoded to the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; including them would have increased agreement rates even further, but would be somewhat disingenuous.
We report percent complete agreement among all raters for each question: for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project, because our data does not necessarily meet the assumptions of the two other widely used statistical estimators for 3+ raters. Fleiss's kappa and Krippendorff's alpha are widely used because they account for the possibility that raters agreed by random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only holds if each possible response is equally likely (quarfoot_how_2016; oleinik_choice_2014). That is the case in balanced datasets, but the distributions we observed were widely skewed.
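The percent complete agreement metric can be sketched as follows (a hypothetical helper, not our actual analysis code):

```python
def percent_complete_agreement(items):
    """Percent of items for which every rater gave the same label.

    `items` is a list where each element is the list of labels that the
    raters gave one item. Items with no labels (i.e. the question was
    not applicable) are excluded from the denominator.
    """
    applicable = [labels for labels in items if labels]
    unanimous = sum(1 for labels in applicable if len(set(labels)) == 1)
    return 100.0 * unanimous / len(applicable)
```

For example, `percent_complete_agreement([["yes", "yes", "yes"], ["yes", "no", "yes"], []])` returns 50.0: of the two applicable items, one was rated unanimously.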
| Question | % agreement, round 1 | % agreement, round 2 |
| --- | --- | --- |
| original classification task | 69.7% | 93.9% |
| labels from human annotation | 51.3% | 82.9% |
| used original human annotation | 72.0% | 85.4% |
| used external human annotation | 51.1% | 63.4% |
| original human annotation source | 44.3% | 79.3% |
| number of annotators | 38.2% | 95.7% |
| training for human annotators | 81.0% | 84.8% |
| prescreening for crowdwork platforms | 83.7% | 89.0% |
| multiple annotator overlap | 69.3% | 81.7% |
| reported inter-annotator agreement | 79.2% | 83.5% |
| reported crowdworker compensation | 94.9% | 89.0% |
| link to dataset available | 82.1% | 86.0% |
The rates of proportional agreement in the first round were not high enough for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are some of the more difficult pieces of content to make determinations about, as the definitions and boundaries of various concepts are often relatively undefined and contested across academic disciplines. In particular, our lowest rate in the second round was for the external human annotation question, which was added between the first and second rounds and appears to still have some ambiguity.
We observed substantial increases in agreement between rounds one and two, although this is also likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed each item in round two. We should note that because our approach was a human annotation research project studying human annotation research projects, it has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotation is reduced to a gold standard. However, we believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then reconciliation can easily take place through a majority-vote process involving no discussion, or if rates are quite high, only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This would not have boded well had we been conducting a single-stage mechanical majority-rule reconciliation process, and it certainly would have been unwise to have only a single individual annotate each paper. For this reason, we did not rely on such easier processes of reconciliation; we instead required that all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
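The single-stage mechanical reconciliation process we decided against can be sketched as follows (a hypothetical helper illustrating the trade-off, not code we used): items whose majority share falls below a threshold fall through to discussion, and with low inter-rater reliability, most items do.

```python
from collections import Counter

def mechanical_reconcile(labels, min_agreement=0.5):
    """Return the majority label if its share of ratings exceeds the
    threshold, otherwise None to flag the item for group discussion.

    With low inter-rater reliability, many items fall through to the
    discussion path, which is why we reconciled every paper in
    moderated meetings rather than trusting majority votes.
    """
    label, count = Counter(labels).most_common(1)[0]
    if count / len(labels) > min_agreement:
        return label
    return None  # flag for moderated group discussion
```

Raising `min_agreement` toward 1.0 sends more items to discussion; our process effectively set it to "everything."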
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, often either 1) because the paper gave ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no member of our team identified the presence of such information, then it is quite likely not present in the paper.
Changes to the coding schema
Unlike in some approaches to structured content analysis, the coding schema was open to revision during this first round if needed. Some difficult edge cases led to refinement of the schema approximately halfway through this round of labeling. The schema was developed on a web-based word processing platform and included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of permitted labels, and examples illustrating difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question that had many granular possible labels and consolidating them into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table 17).
| Original coding schema | Revised coding schema |
| --- | --- |
| interactive training | some training details |
| professional training | some training details |
| prescreening with feedback | some training details |
| no training (explicitly stated) | other |
| no information | no information |
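The backward-compatible consolidation of labels can be sketched as a simple recoding map (the label strings follow Table 17; this is a hypothetical helper, not our analysis code):

```python
# Consolidation of granular round-one training labels (see Table 17)
TRAINING_RECODE = {
    "interactive training": "some training details",
    "professional training": "some training details",
    "prescreening with feedback": "some training details",
    "no training (explicitly stated)": "other",
    "no information": "no information",
}

def recode_training(label):
    """Map an original training label to its consolidated label;
    labels not in the map (e.g. "unsure") pass through unchanged."""
    return TRAINING_RECODE.get(label, label)
```

Because every original label maps deterministically to exactly one consolidated label, round-one annotations remained usable without re-labeling.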
In addition, three questions were added halfway through the first round of the annotation process. First, a question about whether the paper used an external human-annotated dataset was added to clarify the question about whether original human annotation was used; this arose after we discussed a paper in which an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few occurrences across our dataset. All papers had all questions answered in the second round.
7.3. Software used
All computational analysis and scripting was conducted in Python 3.7 (python), using the following libraries: Pandas dataframes (pandas) for data parsing and transformation; SciPy (scipy) and NumPy (numpy) for quantitative computations; and Matplotlib (Matplotlib) and Seaborn (seaborn) for visualization. Analysis was conducted in Jupyter Notebooks (jupyter) using the IPython (ipython) kernel. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication and are set up to run on Binder (binder).
7.4. Coding schema, examples, and instructions
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is necessary, but not sufficient. Linear regressions may be included if the regression is used to make a classification, but making predictions for a continuous variable is not. Predicting income or age brackets is classification; predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that discuss classification papers do not count if the authors did not actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts; focus on that task.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don’t see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don’t define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn’t used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn’t actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to tell, then leave as unsure (example: 1801.06294.pdf).
4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data?
If they exclusively used external human annotation data, skip the remaining questions.
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper’s authors
Academic experts / professionals in the area
No information in the paper
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don’t say anything about the nurses having specific training in the annotation task at hand. If it doesn’t easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” – put other, if that is all they say
Example: If it just says “we annotated…” then assume it is only the paper’s authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated; if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Example of a paper showing examples: “we asked crowdsourcing workers to assign the ‘relevant’ label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the ‘non-relevant’ label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Project-specific prescreening: researchers had known ground truth and only invited annotators who performed well on it
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorff’s alpha, Cohen’s kappa, F1 score, or other names.
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Published in the Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain. DOI: 10.1145/3351095.3372862. Datasets and code: https://doi.org/10.5281/zenodo.3564844 and https://github.com/staeiou/gigo-fat2020.
- Figure note: by 6 if neither crowdworkers nor multiple annotators were used; by 7 if multiple annotators were used; by 8 if crowdworkers were used; and by 9 if both were used.
- Figure note: the main box is the inter-quartile range (IQR), i.e. the 25th and 75th percentiles; the middle red line is the median; the green triangle is the mean; and the outer whiskers are the 5th and 95th percentiles.