Appeared in the proceedings of EMNLP–IJCNLP 2019 (Hong Kong, November). This clarified version was prepared in December 2019.

It’s All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution



This paper treats gender bias latent in word embeddings. Previous mitigation attempts rely on the operationalisation of gender bias as a projection over a linear subspace. An alternative approach is Counterfactual Data Augmentation (CDA), in which a corpus is duplicated and augmented to remove bias, e.g. by swapping all inherently-gendered words in the copy. We perform an empirical comparison of these approaches on the English Gigaword and Wikipedia, and find that whilst both successfully reduce direct bias and perform well in tasks which quantify embedding quality, CDA variants outperform projection-based methods at the task of drawing non-biased gender analogies by an average of 19% across both corpora. We propose two improvements to CDA: Counterfactual Data Substitution (CDS), a variant of CDA in which potentially biased text is randomly substituted to avoid duplication, and the Names Intervention, a novel name-pairing technique that vastly increases the number of words being treated. CDA/S with the Names Intervention is the only approach which is able to mitigate indirect gender bias: following debiasing, previously biased words are significantly less clustered according to gender (cluster purity is reduced by 49%), thus improving on the state of the art for bias mitigation.


1 Introduction

Gender bias describes an inherent prejudice against a gender, captured both by individuals and by larger social systems. Word embeddings, a popular machine-learnt semantic space, have been shown to retain gender bias present in the corpora used to train them (Caliskan et al., 2017). This results in gender-stereotypical vector analogies à la Mikolov et al. (2013), such as man:computer programmer :: woman:homemaker (Bolukbasi et al., 2016), and such bias has been shown to materialise in a variety of downstream tasks, e.g. coreference resolution (Rudinger et al., 2018; Zhao et al., 2018).

By operationalising gender bias in word embeddings as a linear subspace, Bolukbasi et al. (2016) are able to debias with simple techniques from linear algebra. Their method successfully mitigates direct bias: man is no longer more similar to computer programmer in vector space than woman. However, the structure of gender bias in vector space remains largely intact, and the new vectors still evince indirect bias: associations which result from gender bias between words that are not explicitly gendered, for example a possible association between football and business resulting from their mutual association with explicitly masculine words (Gonen and Goldberg, 2019). In this paper we continue the work of Gonen and Goldberg, and show that another paradigm for gender bias mitigation proposed by Lu et al. (2018), Counterfactual Data Augmentation (CDA), is also unable to mitigate indirect bias. We also show, using a new test we describe (non-biased gender analogies), that Word Embedding Debiasing (WED, the method of Bolukbasi et al.) might be removing too much gender information, casting further doubt on its operationalisation of gender bias as a linear subspace.

To improve CDA we make two proposals. The first, Counterfactual Data Substitution (CDS), is designed to avoid text duplication in favour of substitution. The second, the Names Intervention, is a method which can be applied to either CDA or CDS, and treats bias inherent in first names. It does so using a novel name-pairing strategy that accounts for both name frequency and gender-specificity. Using our improvements, the clusters of the most biased words exhibit a reduction in cluster purity of 49% on average across both corpora following treatment, thereby offering a partial solution to the problem of indirect bias as formalised by Gonen and Goldberg (2019). Additionally, although one might expect debiased embeddings to suffer performance losses in computational linguistic tasks, our embeddings remain useful for at least two such tasks, word similarity and sentiment classification (Le and Mikolov, 2014).

2 Related Work

The measurement and mitigation of gender bias relies on the chosen operationalisation of gender bias. As a direct consequence, how researchers choose to operationalise bias determines both the techniques at their disposal for mitigating it and the yardstick by which success is judged.

2.1 Word Embedding Debiasing

One popular method for the mitigation of gender bias, introduced by Bolukbasi et al. (2016), measures the genderedness of words by the extent to which they point in a gender direction. Suppose we embed our words into $\mathbb{R}^d$. The fundamental assumption is that there exists a linear subspace $B \subset \mathbb{R}^d$ that contains (most of) the gender bias in the space of word embeddings. (Note that $B$ is a direction when it is spanned by a single vector.) We term this assumption the gender subspace hypothesis. Thus, by basic linear algebra, we may decompose any word vector $\vec{w}$ as the sum of its projections onto the bias subspace and onto its complement: $\vec{w} = \vec{w}_B + \vec{w}_{\perp B}$. The (implicit) operationalisation of gender bias under this hypothesis is, then, the magnitude of the bias vector, $\|\vec{w}_B\|$.

To capture $B$, Bolukbasi et al. (2016) first construct sets $D_1, \ldots, D_n$, each of which contains a pair of words that differ in their gender but that are otherwise semantically equivalent (using a predefined set of gender-definitional pairs). For example, {man, woman} would be one set and {husband, wife} would be another. They then compute the empirical covariance matrix

$$\mathbf{C} = \sum_{i=1}^{n} \sum_{\vec{w} \in D_i} \frac{(\vec{w} - \mu_i)(\vec{w} - \mu_i)^{\top}}{|D_i|}$$

where $\mu_i$ is the mean embedding of the words in $D_i$; $B$ is then taken to be the space spanned by the top $k$ eigenvectors of $\mathbf{C}$ associated with the largest eigenvalues. Bolukbasi et al. set $k = 1$, and thus define a gender direction.
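For concreteness, this construction can be sketched with NumPy (an illustrative sketch, not the experimental code; the word pairs and embeddings below are toy examples):

```python
import numpy as np

def gender_subspace(embeddings, definitional_pairs, k=1):
    """Estimate the bias subspace B from gender-definitional word pairs.

    embeddings: dict mapping word -> 1-D numpy vector
    definitional_pairs: list of (word, word) tuples, e.g. ("woman", "man")
    Returns the top-k eigenvectors of the pair-wise covariance matrix C.
    """
    d = len(next(iter(embeddings.values())))
    C = np.zeros((d, d))
    for pair in definitional_pairs:
        vecs = np.stack([embeddings[w] for w in pair])
        mu = vecs.mean(axis=0)                 # mean embedding of the set D_i
        diffs = vecs - mu
        C += diffs.T @ diffs / len(vecs)       # sum of (w - mu)(w - mu)^T / |D_i|
    eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    return eigvecs[:, -k:][:, ::-1].T          # top-k eigenvectors, largest first
```

With $k = 1$ this yields a single gender direction, as in Bolukbasi et al.'s setup.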

Using this operationalisation of gender bias, Bolukbasi et al. go on to provide a linear-algebraic method (Word Embedding Debiasing, WED, originally “hard debiasing”) to remove gender bias in two phases: first, for non-gendered words, the gender direction is removed (“neutralised”). Second, pairs of gendered words such as mother and father are made equidistant to all non-gendered words (“equalised”). Crucially, under the gender subspace hypothesis, it is only necessary to identify the subspace, as it is possible to perfectly remove the bias under this operationalisation using tools from numerical linear algebra.
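The two phases can be sketched as follows (a minimal NumPy sketch of the neutralise and equalise steps, assuming the subspace `B` is given as orthonormal rows; function names are ours, not Bolukbasi et al.'s):

```python
import numpy as np

def project(v, B):
    # Projection of v onto the subspace spanned by the orthonormal rows of B.
    return (B @ v) @ B

def neutralise(v, B):
    # Remove the bias component and renormalise to unit length.
    w = v - project(v, B)
    return w / np.linalg.norm(w)

def equalise(pair, B):
    # Keep the off-bias part of the pair's mean, then give each word an
    # equal-and-opposite bias component of matching magnitude.
    mu = np.mean(pair, axis=0)
    nu = mu - project(mu, B)                   # off-bias part of the mean
    out = []
    for v in pair:
        v_B = project(v, B) - project(mu, B)
        out.append(nu + np.sqrt(1 - nu @ nu) * v_B / np.linalg.norm(v_B))
    return out
```

After equalisation both words are unit vectors and are equidistant to any vector orthogonal to $B$, which is what the appendix proofs establish.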

Figure 1: Word sets used by WED, with examples: all words captured by an embedding (3M), divided into gender-specific words (6449) and gender-neutral words; the gender-specific words include the equalise pairs (52) and definitional pairs (10).

The method uses three sets of words or word pairs: 10 definitional pairs (used to define the gender direction), 218 gender-specific seed words (expanded to a larger set using a linear classifier, the complement of which is neutralised in the first step), and 52 equalise pairs (equalised in the second step). The relationships among these sets are illustrated in Figure 1; for instance, gender-neutral words are defined as all words in an embedding that are not gender-specific.

Bolukbasi et al. find that this method results in a 68% reduction of stereotypical analogies as identified by human judges. However, bias is removed only insofar as the operationalisation allows. In a comprehensive analysis, Gonen and Goldberg (2019) show that the original structure of bias in the WED embedding space remains intact.

2.2 Counterfactual Data Augmentation

As an alternative to WED, Lu et al. (2018) propose Counterfactual Data Augmentation (CDA), in which a text transformation designed to invert bias is performed on a text corpus, the result of which is then appended to the original, to form a new bias-mitigated corpus used for training embeddings. Several interventions are proposed: in the simplest, occurrences of words in 124 gendered word pairs are swapped. For example, ‘the woman cleaned the kitchen’ would (counterfactually) become ‘the man cleaned the kitchen’, as the pair man–woman is on the list. Both versions would then together be used in embedding training, in effect neutralising the man–woman bias.
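The naive intervention reduces to a dictionary lookup over tokenised documents; a minimal sketch (the pair list here is a small illustrative subset, not Lu et al.'s full 124-pair list, and it ignores the pronoun ambiguities discussed below):

```python
# Illustrative subset of the gendered word pairs; the full list has 124 pairs.
SWAPS = {"man": "woman", "woman": "man", "he": "she", "she": "he",
         "king": "queen", "queen": "king", "son": "daughter", "daughter": "son"}

def swap_gender(tokens):
    """Return the counterfactual copy of a token list, swapping gendered words."""
    return [SWAPS.get(t, t) for t in tokens]

def cda(corpus):
    """Naive CDA: append the swapped copy of every document to the original corpus."""
    return corpus + [swap_gender(doc) for doc in corpus]
```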

The grammar intervention, Lu et al.’s improved intervention, uses coreference information to veto swapping gendered words when they corefer to a proper noun.[5] This avoids Elizabeth …she …queen being changed to, for instance, Elizabeth …he …king. It also uses POS information to avoid ungrammaticality related to the ambiguity of her between personal pronoun and possessive determiner. In the context ‘her teacher was proud of her’, this results in the correct sentence ‘his teacher was proud of him’.

3 Improvements to CDA

We prefer the philosophy of CDA over WED as it makes fewer assumptions about the operationalisation of the bias it is meant to mitigate.

3.1 Counterfactual Data Substitution

The duplication of text which lies at the heart of CDA will produce debiased corpora with peculiar statistical properties unlike those of naturally occurring text. Almost all observed word frequencies will be even, with a notable jump from 2 directly to 0, and a type–token ratio far lower than predicted by Heaps’ Law for text of this length. The precise effect this will have on the resulting embedding space is hard to predict, but we assume that it is preferable not to violate the fundamental assumptions of the algorithms used to create embeddings.

As such, we propose to apply substitutions probabilistically (with 0.5 probability), which results in a non-duplicated counterfactual training corpus, a method we call Counterfactual Data Substitution (CDS). Substitutions are performed on a per-document basis in order to maintain grammaticality and discourse coherence. This simple change should bring advantages in the naturalness of the text and in processing efficiency, as well as a firmer theoretical foundation.
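A sketch of the substitution loop (again with a toy pair list; the coin flip is taken once per document, not per token):

```python
import random

SWAPS = {"he": "she", "she": "he", "man": "woman", "woman": "man"}

def cds(corpus, seed=0):
    """Counterfactual Data Substitution: each document is gender-swapped in
    place with probability 0.5, so the corpus is never duplicated."""
    rng = random.Random(seed)
    out = []
    for doc in corpus:             # per-document, preserving discourse coherence
        if rng.random() < 0.5:
            doc = [SWAPS.get(t, t) for t in doc]
        out.append(doc)
    return out
```

Unlike `cda`, the output has exactly as many documents as the input, so word frequencies retain natural statistics.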

3.2 The Names Intervention

Our main technical contribution in this paper is to provide a method for better counterfactual augmentation, based on bipartite-graph matching of names. Instead of Lu et al.’s (2018) solution of not treating words which corefer to proper nouns in order to maintain grammaticality, we propose an explicit treatment of first names. This is because, as a result of not swapping the gender of words which corefer with proper nouns, CDA could in fact reinforce certain biases rather than mitigate them. Consider the sentence ‘Tom …He is a successful and powerful executive’. Since he and Tom corefer, the counterfactual corpus copy will not replace he with she in this instance, and as the method involves a duplication of text, this would result in a stronger, not weaker, association between he and gender-stereotypic concepts like executive. Even under CDS, this would still mean that biased associations are left untreated (albeit at least not reinforced). Treating names should, in contrast, effect a real neutralisation of bias, with the added bonus that grammaticality is maintained without the need for coreference resolution.

Figure 2: Frequency and gender-specificity of names in the SSA dataset

The United States Social Security Administration (SSA) dataset contains a list of all first names from Social Security card applications for births in the United States after 1879, along with their gender. Figure 2 plots a few example names according to their male and female occurrences, and shows that names have varying degrees of gender-specificity.[7]

We fix an association between pairs of names for swapping, thus vastly expanding Lu et al.’s short list of gender pairs. Clearly both name frequency and the degree of gender-specificity are relevant to this bipartite matching. If only frequency were considered, a more gender-neutral name (e.g. Taylor) could be paired with a very gender-specific name (e.g. John), which would negate the gender intervention in many cases (namely whenever a male occurrence of Taylor is transformed into John, which would also result in incorrect pronouns, if present). If, on the other hand, only the degree of gender-specificity were considered, we would see frequent names (like James) being paired with far less frequent names (like Sybil), which would distort the overall frequency distribution of names. This might also result in the retention of a gender signal: for instance, swapping a highly frequent male name with a rare female name might simply make the rare female name behave as a new link between masculine contexts (instead of the original male name), as it rarely appears in female contexts.

Figure 3 plots various names’ number of primary-gender[8] occurrences against their secondary-gender occurrences, with red dots for primary-male and blue crosses for primary-female names.[9] The problem of finding name pairs thus decomposes into a Euclidean-distance bipartite matching problem, which can be solved using the Hungarian method (Kuhn, 1955). We compute pairs for the 2500 most frequent names of each gender in the SSA dataset. There is also the problem that many names are also common nouns (e.g. Amber, Rose, or Mark), which we solve using Named Entity Recognition.
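The matching step can be sketched with SciPy's implementation of the Hungarian method (an illustrative sketch; the feature tuples stand in for the frequency/gender-specificity coordinates of Figure 3, and the names and counts are invented):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_names(male_stats, female_stats):
    """Minimum-cost bipartite matching of names as points in the
    frequency / gender-specificity plane (one point per name)."""
    m_names = list(male_stats)
    f_names = list(female_stats)
    M = np.array([male_stats[n] for n in m_names], dtype=float)
    F = np.array([female_stats[n] for n in f_names], dtype=float)
    # cost[i, j] = Euclidean distance between male name i and female name j
    cost = np.linalg.norm(M[:, None, :] - F[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # the Hungarian method
    return {m_names[i]: f_names[j] for i, j in zip(rows, cols)}
```

Minimising total distance pairs frequent gender-specific names with one another and gender-neutral names with one another, as discussed above.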

Figure 3: Bipartite matching of names by frequency and gender-specificity

4 Experimental Setup

We compare eight variations of the mitigation methods. CDA is our reimplementation of Lu et al.’s (2018) naïve intervention, gCDA uses their grammar intervention, and nCDA uses our new Names Intervention. gCDS and nCDS are variants of the grammar and Names Interventions using CDS. WED40 is our reimplementation of Bolukbasi et al.’s (2016) method, which (like the original) uses a single component to define the gender subspace, here accounting for 40% of variance. As this is much lower than in the original paper (where it was 60%, reproduced in Figure 4), we define a second space, WED70, which uses a 2D subspace accounting for 70% of variance. To test whether WED profits from additional names, we use the 5000 paired names in the names gazetteer as additional equalise pairs (nWED70).[10] As a control, we also evaluate the unmitigated space (none).

We perform an empirical comparison of these bias mitigation techniques on two corpora, the Annotated English Gigaword (Napoles et al., 2012) and Wikipedia. Wikipedia is of particular interest: though its Neutral Point of View (NPOV) policy stipulates that all content should be presented without bias, women are nonetheless less likely to be deemed “notable” than men of equal stature (Reagle and Rhue, 2011), and there are differences in the choice of language used to describe them (Bamman and Smith, 2014; Graells-Garrido et al., 2015). We use the annotation native to the Annotated English Gigaword, and process Wikipedia with CoreNLP (statistical coreference; bidirectional tagger). Embeddings are created using Word2Vec.[12] We use the original complex lexical input (gender-word pairs and the like) for each algorithm, as we assume that this benefits each algorithm most. Expanding the set of gender-specific words for WED (following Bolukbasi et al., using a linear classifier) resulted in 2141 such words on Gigaword and 7146 on Wikipedia.[15]

Figure 4: Variance explained by the top Principal Components of the definitional word pairs (left) and random unit vectors (right)

In our experiments, we test the degree to which the spaces are successful at mitigating direct and indirect bias, as well as the degree to which they can still be used in two NLP tasks standardly performed with embeddings, word similarity and sentiment classification. We also introduce one further, novel task, designed to quantify how well the embedding spaces capture an understanding of gender using non-biased analogies. Our evaluation matrix and methodology are described below.

Direct bias

Caliskan et al. (2017) introduce the Word Embedding Association Test (WEAT), which provides results analogous to earlier psychological work by Greenwald et al. (1998) by measuring the difference in relative similarity between two sets of target words $X$ and $Y$ and two sets of attribute words $A$ and $B$. We compute Cohen’s $d$ (a measure of the difference in relative similarity of the word sets within each embedding; higher is more biased), and a one-sided $p$-value which indicates whether the bias detected by WEAT within each embedding is significant (the best outcome being that no such bias is detectable). We do this for three tests proposed by Nosek et al. (2002) which measure the strength of various gender stereotypes: art–maths, arts–sciences, and careers–family.[16]
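The effect size can be sketched as follows (an illustrative NumPy sketch; Caliskan et al. normalise by the standard deviation of the association scores over $X \cup Y$, and we assume the sample standard deviation here):

```python
import numpy as np

def _cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B):
    """Cohen's d for WEAT. X, Y (targets) and A, B (attributes) are lists of
    word vectors; positive d means X is closer to A and Y closer to B."""
    def assoc(w):
        # mean similarity to attribute set A minus mean similarity to B
        return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])
    sx = [assoc(x) for x in X]
    sy = [assoc(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

The accompanying $p$-value is obtained by a permutation test over re-partitions of $X \cup Y$, omitted here for brevity.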

Indirect bias

To demonstrate indirect gender bias we adapt a pair of methods proposed by Gonen and Goldberg (2019). First, we test whether the most-biased words prior to bias mitigation remain clustered following bias mitigation. To do this, we define a new subspace, $\hat{b}$, using the 23 word pairs from the Google Analogy family test subset (Mikolov et al., 2013), following Bolukbasi et al.’s (2016) method, and determine the 1000 most biased words in each corpus (the 500 words most similar to $\hat{b}$ and to $-\hat{b}$) in the unmitigated embedding. For each debiased embedding we then project these words into 2D space with tSNE (van der Maaten and Hinton, 2008), compute clusters with k-means, and calculate the clusters’ V-measure (Rosenberg and Hirschberg, 2007). Low cluster purity indicates that biased words are less clustered following bias mitigation.
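Assuming scikit-learn is available, the clustering half of this test can be sketched as below; for simplicity the sketch clusters the word vectors directly rather than their tSNE projections, and the function name is ours:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

def biased_cluster_purity(vectors, gender_labels, seed=0):
    """k-means the previously most-biased words into two clusters and score
    the clusters against the words' original gender labels (V-measure);
    a lower score means debiasing has broken up the gendered clustering."""
    km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(np.asarray(vectors))
    return v_measure_score(gender_labels, km.labels_)
```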

Second, we test whether a classifier can be trained to reclassify the gender of debiased words. If it succeeds, this indicates that bias information still remains in the embedding. We trained an RBF-kernel SVM classifier on a random sample of 1000 of the 5000 most biased words from each corpus (500 from each gender), then report the classifier’s accuracy when reclassifying the remaining 4000 words.

Word similarity

The quality of a space is traditionally measured by how well it replicates human judgements of word similarity. The SimLex-999 dataset (Hill et al., 2015) provides a ground-truth measure of similarity produced by 500 native English speakers.[17] Similarity scores in an embedding are computed as the cosine of the angle between word-vector pairs, and the Spearman correlation between embedding and human judgements is reported, along with its significance.

Sentiment classification

Following Le and Mikolov (2014), we use a standard sentiment classification task to quantify the downstream performance of the embedding spaces when they are used as a pretrained word embedding input (Lau and Baldwin, 2016) to Doc2Vec on the Stanford Large Movie Review dataset. The classification is performed by an SVM classifier using the document embeddings as features, trained on 40,000 labelled reviews and tested on the remaining 10,000; we report error percentage.

Non-biased gender analogies

When proposing WED, Bolukbasi et al. (2016) use human raters to class gender analogies as either biased (woman:housewife :: man:shopkeeper) or appropriate (woman:grandmother :: man:grandfather), and postulate that whilst biased analogies are undesirable, appropriate ones should remain. Our new analogy test uses the 506 analogies in the family analogy subset of the Google Analogy Test set (Mikolov et al., 2013) to define many such appropriate analogies that should hold even in a debiased environment, such as boy:girl :: nephew:niece.[18] We use a proportional pair-based analogy test, which measures each embedding’s performance when drawing a fourth word to complete each analogy, and report error percentage.
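Completing an analogy reduces to a nearest-neighbour search around $\vec{b} - \vec{a} + \vec{c}$ (the standard 3CosAdd formulation; a minimal sketch with a toy vocabulary, not the evaluation code):

```python
import numpy as np

def complete_analogy(emb, a, b, c):
    """a : b :: c : ?  Return the vocabulary word most cosine-similar to
    b - a + c, excluding the three query words."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best
```

An analogy counts as an error whenever the returned word differs from the gold fourth word.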

5 Results

Table 1: Direct bias results (effect sizes and p-values for the Art–Maths, Arts–Sciences, and Career–Family WEAT tests; Nosek et al.’s original effect sizes shown for comparison)

Direct bias

Table 1 presents the effect sizes $d$ and WEAT one-tailed $p$-values, which indicate whether the difference in sample means between targets $X$ and $Y$ and attributes $A$ and $B$ is significant. We also compute a two-tailed $p$-value to determine whether the difference between the various embeddings is significant.[19]

On Wikipedia, nWED70 significantly outperforms every other method, and the bias it leaves behind was statistically undetectable. In all CDA/S variants, the Names Intervention performs significantly better than the other intervention strategies (average $d$ for nCDS across all tests 0.95 vs. 1.39 for the best non-names CDA/S variant). Excluding the Wikipedia careers–family test (in which the CDA and CDS variants are statistically indistinguishable), the CDS variants are numerically better than their CDA counterparts in 80% of the test cases, although many of these differences are not significant.

Generally, we notice a trend of WED reducing direct gender bias slightly better than CDA/S. Impressively, WED even successfully reduces bias in the careers–family test, where gender information is captured by names, which were not in WED’s gender-equalise word-pair list for treatment.

Figure 5: Most biased cluster purity results
Figure 6: Clustering of biased words (Gigaword)

Indirect bias

Figure 5 shows the V-measures of the clusters of the most biased words in Wikipedia for each embedding. Gigaword patterns similarly (see appendix). Figure 6 shows example tSNE projections for the Gigaword embeddings ($V$ refers to their V-measures; these examples were chosen as they represent the best results achieved by Bolukbasi et al.’s (2016) method, Lu et al.’s (2018) method, and our new names variant). On both corpora, the new nCDA and nCDS techniques have significantly lower biased-word cluster purity than all other evaluated mitigation techniques (0.420 for nCDS on Gigaword, which corresponds to a reduction in purity of 58% compared to the unmitigated embedding, and 0.609 (39%) on Wikipedia). nWED70’s V-measure is significantly higher than either of the other Names variants (a reduction of 11% on Gigaword, and only 1% on Wikipedia), suggesting that the success of nCDS and nCDA is not merely due to their larger list of gender words.

Figure 7: Reclassification of most biased words results

Figure 7 shows the results of the second test of indirect bias, and reports the accuracy of a classifier trained to reclassify previously gender-biased words on the Wikipedia embeddings (Gigaword patterns similarly).[20] These results reinforce the finding of the clustering experiment: once again, nCDS outperforms all other methods significantly on both corpora, although it should be noted that the successful reclassification rate remains relatively high (e.g. 88.9% on Wikipedia).

We note that nullifying indirect bias associations entirely is not necessarily the goal of debiasing, since some of these may result from causal links in the domain. For example, whilst associations between man and engineer and between man and car are each stereotypic (and thus could be considered examples of direct bias), an association between engineer and car might well have little to do with gender bias, and so should not be mitigated.

Table 2: Word similarity results (Gigaword and Wikipedia)

Word similarity

Table 2 reports the SimLex-999 Spearman rank-order correlation coefficients $\rho$ (all are significant). Surprisingly, the WED40 and WED70 methods outperform the unmitigated embedding, although the difference is small (0.386 and 0.395 vs. 0.385 on Gigaword; 0.371 and 0.367 vs. 0.368 on Wikipedia). nWED70, on the other hand, performs worse than the unmitigated embedding (0.384 vs. 0.385 on Gigaword; 0.367 vs. 0.368 on Wikipedia). CDA and CDS methods do not match the quality of the unmitigated space, but once again the difference is small. It should be noted that since SimLex-999 was produced by human raters, it will reflect the human biases these methods were designed to remove, so worse performance might result from successful bias mitigation.

Figure 8: Sentiment classification results

Sentiment classification

Figure 8 shows the sentiment classification error rates for Wikipedia (Gigaword patterns similarly). Results are somewhat inconclusive. While WED70 significantly improves the performance of the sentiment classifier over the unmitigated embedding on both corpora, the improvement is small (never more than 1.1%). On both corpora, nothing outperforms WED70 or the Names Intervention variants.

Figure 9: Non-biased gender analogy results

Non-biased gender analogies

Figure 9 shows the error rates for non-biased gender analogies for Wikipedia. CDA and CDS are numerically better than the unmitigated embeddings (an effect which is always significant on Gigaword, shown in the appendices, but sometimes insignificant on Wikipedia). The WED variants, on the other hand, perform significantly worse than the unmitigated sets on both corpora (27.1 vs. 9.3% for the best WED variant on Gigaword; 18.8 vs. 8.7% on Wikipedia). WED thus seems to remove too much gender information, whilst CDA and CDS create an improved space, perhaps because they reduce the effect of stereotypical associations which were previously used incorrectly when drawing analogies.

6 Conclusion

We have replicated two state-of-the-art bias mitigation techniques, WED and CDA, on two large corpora, Wikipedia and the English Gigaword. In our empirical comparison, we found that although both methods mitigate direct gender bias and maintain the interpretability of the space, WED failed to maintain a robust representation of gender (the best variants had an average error rate of 23% when drawing non-biased analogies, suggesting that too much gender information was removed). A new variant of CDA we propose (the Names Intervention) is the only method to successfully mitigate indirect gender bias: following its application, previously biased words are significantly less clustered according to gender, with an average 49% reduction in cluster purity when clustering the most biased words. We also proposed Counterfactual Data Substitution, which generally performed better than the CDA equivalents, was notably quicker to compute (as Word2Vec is linear in corpus size), and in theory allows for multiple intervention layers without a corpus becoming exponentially large.

A fundamental limitation of all the methods compared is their reliance on predefined lists of gender words, in particular of pairs. Lu et al.’s pairs manager::manageress and murderer::murderess may be counterproductive, as their augmentation method perpetuates a male reading of manager, which has become gender-neutral over time. Other issues arise from differences in spelling (e.g. mum vs. mom) and morphology (e.g. his vs. her and hers). Biologically rooted terms like breastfeed or uterus do not lend themselves to pairing either. The strict use of pairings also imposes a gender binary, and as a result non-binary identities are all but ignored in the bias mitigation literature.

Future work could extend the Names Intervention to names from other languages beyond the US-based gazetteer used here. Our method only allows for an equal number of male and female names; if this were not the case, one could explore a many-to-one mapping, or perhaps a probabilistic approach (though difficulties would be encountered sampling simultaneously from two distributions, frequency and gender-specificity). A mapping between nicknames (not covered by administrative sources) and formal names could be learned from a corpus for even wider coverage, possibly via the intermediary of coreference chains. Finally, given that names have been used in the psychological literature as a proxy for race (e.g. Greenwald et al., 1998), the Names Intervention could also be used to mitigate racial biases (something which, to the authors’ best knowledge, has never been attempted), though finding pairings could prove problematic. It is important that future work looks into operationalising bias beyond the subspace definition proposed by Bolukbasi et al. (2016), as it is becoming increasingly evident that gender bias is not linear in embedding space.


Acknowledgments

We would like to thank Francisco Vargas Palomo for pointing out a few typos in the proofs of Appendix A post-publication.

Appendix A Proofs for method from Bolukbasi et al. (2016)

We found the equations suggested in Bolukbasi et al. (2016) somewhat opaque, so here we provide the proofs missing from the original work.

Proposition 1.

Bolukbasi et al. (2016) define

$$\vec{w} := \nu + \sqrt{1 - \|\nu\|^2}\;\frac{\vec{w}_B - \mu_B}{\|\vec{w}_B - \mu_B\|}$$

where they define $\nu := \mu - \mu_B$. This vector is a unit vector, i.e. $\|\vec{w}\| = 1$.

Proof.

$$\|\vec{w}\|^2 = \|\nu\|^2 + 2\,\sqrt{1 - \|\nu\|^2}\;\frac{\nu^{\top}(\vec{w}_B - \mu_B)}{\|\vec{w}_B - \mu_B\|} + \left(1 - \|\nu\|^2\right) = \|\nu\|^2 + 0 + 1 - \|\nu\|^2 = 1$$

where we note that $\nu = \mu - \mu_B = \mu_{\perp B}$, so it is orthogonal to both $\vec{w}_B$ and $\mu_B$ by construction. ∎

Proposition 2.

The equalise step of Bolukbasi et al. (2016) ensures that gendered pairs, e.g. man–woman, are equidistant to all gender-neutral words.

Proof.

Following Bolukbasi et al., we define $\vec{w}$ and $\vec{n}$ as follows: $\vec{w}$ is an equalised gendered word, as in Proposition 1, and $\vec{n}$ is a gender-neutral word, which after the neutralise step satisfies $\vec{n} = \vec{n}_{\perp B}$ and hence $\vec{n} \perp B$. Now, we have the result that

$$\|\vec{w} - \vec{n}\|^2 = \left\|\nu + \sqrt{1 - \|\nu\|^2}\;\frac{\vec{w}_B - \mu_B}{\|\vec{w}_B - \mu_B\|} - \vec{n}\right\|^2 = \|\nu - \vec{n}\|^2 + 1 - \|\nu\|^2$$

since the component lying in $B$ is orthogonal to both $\nu$ and $\vec{n}$, which is the same for any $\vec{w}$ in the equalise pair. ∎

Appendix B WEAT word sets

Below are listed the word sets we used for the WEAT to test direct bias, as defined by Nosek et al. (2002). Note that for the careers–family test, the target and attribute words have been reversed; that is, gender is captured by the target words, rather than the attribute words. Whilst this distinction is important in the source psychological literature (Greenwald et al., 1998), mathematically the target sets and attribute sets are indistinguishable and fully commutative.


Art–maths — $X$: math, algebra, geometry, calculus, equations, computation, numbers, addition; $Y$: poetry, art, dance, literature, novel, symphony, drama, sculpture; $A$: male, man, boy, brother, he, him, his, son; $B$: female, woman, girl, sister, she, her, hers, daughter


Arts–sciences — $X$: science, technology, physics, chemistry, Einstein, NASA, experiment, astronomy; $Y$: poetry, art, Shakespeare, dance, literature, novel, symphony, drama; $A$: brother, father, uncle, grandfather, son, he, his, him; $B$: sister, mother, aunt, grandmother, daughter, she, hers, her


Careers–family — $X$: John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill; $Y$: Amy, Joan, Lisa, Sarah, Diana, Kate, Ann, Donna; $A$: executive, management, professional, corporation, salary, office, business, career; $B$: home, parents, children, family, cousins, marriage, wedding, relatives

Appendix C Additional Gigaword results

Additional results for the Annotated English Gigaword are given here.

Figure 10: Most biased cluster purity results
Figure 11: Reclassification of most biased words results
Figure 12: Sentiment classification results
Figure 13: Non-biased gender analogy results


  [5] We interpret Lu et al.’s (2018) phrase “cluster” to mean “coreference chain”.
  [7] The dotted line represents gender-neutrality, and more frequent names are located further from the origin.
  [8] Defined as its most frequently occurring gender.
  [9] The hatched area demarcates an area of the graph where no names can exist: if any name did, then its primary and secondary gender would be reversed and it would belong to the alternate set.
  [10] We use the 70% variant as preliminary experimentation showed that it was superior to WED40.
  [12] A CBOW model was trained over five epochs to produce 300-dimensional embeddings. Words were lowercased, punctuation other than underscores and hyphens removed, and tokens with fewer than ten occurrences discarded.
  15. We modify or remove some phrases from the training data not included in the vocabulary of our embeddings.
  16. In the careers–family test the gender dimension is expressed by female and male first names, unlike in the other sets, where pronouns and typical gendered words are used.
  17. It explicitly quantifies similarity rather than association or relatedness; pairs of entities like coffee and cup have a low rating.
  18. The entire Google Analogy Test set contains 19,544 analogies, which are usually reported as a single result or as a pair of semantic and syntactic results.
  19. Throughout this paper, we test significance in the differences between the embeddings with a two-tailed Monte Carlo permutation test at significance interval with permutations.
  20. The 95% confidence interval is calculated with a Wilson score interval, i.e., using the normal approximation to the binomial distribution.
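
Footnote 19's significance test can be sketched in a few lines. This is a generic two-tailed Monte Carlo permutation test on a difference of means: the group labels are repeatedly shuffled, and the p-value is the fraction of shuffles whose absolute difference is at least as large as the observed one. The function name and defaults are illustrative; the statistic compared in the paper's experiments may differ.

```python
import random

def permutation_test(xs, ys, n_perm=10000, seed=0):
    """Two-tailed Monte Carlo permutation test on the difference of means."""
    rng = random.Random(seed)
    observed = sum(xs) / len(xs) - sum(ys) / len(ys)
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = (sum(pooled[:len(xs)]) / len(xs)
                - sum(pooled[len(xs):]) / len(ys))
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm
```

When the two samples are identical the observed difference is zero, every shuffle matches it, and the p-value is 1; well-separated samples yield a p-value near zero, limited below by the number of permutations.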


References

  1. D. Bamman and N. A. Smith (2014). Unsupervised discovery of biographical structure from text. Transactions of the Association for Computational Linguistics 2, pp. 363–376.
  2. T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems 29, pp. 4349–4357.
  3. A. Caliskan, J. J. Bryson, and A. Narayanan (2017). Semantics derived automatically from language corpora contain human-like biases. Science 356 (6334), pp. 183–186.
  4. H. Gonen and Y. Goldberg (2019). Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Minneapolis, Minnesota.
  5. E. Graells-Garrido, M. Lalmas, and F. Menczer (2015). First women, second sex: gender bias in Wikipedia. In Proceedings of the 26th ACM Conference on Hypertext & Social Media, HT ’15, New York, NY, USA, pp. 165–174.
  6. A. G. Greenwald, D. E. McGhee, and J. L. K. Schwartz (1998). Measuring individual differences in implicit cognition: the Implicit Association Test. Journal of Personality and Social Psychology 74 (6), pp. 1464–1480.
  7. F. Hill, R. Reichart, and A. Korhonen (2015). SimLex-999: evaluating semantic models with (genuine) similarity estimation. Computational Linguistics 41 (4), pp. 665–695.
  8. H. W. Kuhn (1955). The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2 (1–2), pp. 83–97.
  9. J. H. Lau and T. Baldwin (2016). An empirical evaluation of doc2vec with practical insights into document embedding generation. In Proceedings of the 1st Workshop on Representation Learning for NLP, Berlin, Germany, pp. 78–86.
  10. Q. Le and T. Mikolov (2014). Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, pp. 1188–1196.
  11. K. Lu, P. Mardziel, F. Wu, P. Amancharla, and A. Datta (2018). Gender bias in neural natural language processing. CoRR abs/1807.11714.
  12. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pp. 3111–3119.
  13. C. Napoles, M. Gormley, and B. Van Durme (2012). Annotated Gigaword. In AKBC-WEKEX Workshop at NAACL 2012.
  14. B. A. Nosek, M. R. Banaji, and A. G. Greenwald (2002). Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice 6 (1), pp. 101–115.
  15. J. Reagle and L. Rhue (2011). Gender bias in Wikipedia and Britannica. International Journal of Communication 5.
  16. A. Rosenberg and J. Hirschberg (2007). V-measure: a conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Prague, Czech Republic, pp. 410–420.
  17. R. Rudinger, J. Naradowsky, B. Leonard, and B. Van Durme (2018). Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 8–14.
  18. L. van der Maaten and G. Hinton (2008). Visualizing data using t-SNE. Journal of Machine Learning Research 9, pp. 2579–2605.
  19. J. Zhao, T. Wang, M. Yatskar, V. Ordonez, and K.-W. Chang (2018). Gender bias in coreference resolution: evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 15–20.