Defending Against Neural Fake News
Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology might also enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news.
Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary’s point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like ‘Link Found Between Vaccines and Autism,’ Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation.
Developing robust verification techniques against generators like Grover is critical. We find that the best current discriminators can distinguish neural fake news from real, human-written news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias – and sampling strategies that alleviate its effects – both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi
Paul G. Allen School of Computer Science & Engineering, University of Washington
Allen Institute for Artificial Intelligence
https://rowanzellers.com/grover
Preprint. Under review.
1 Introduction

Online fake news – news designed to intentionally deceive – has recently emerged as a major societal problem. Malicious actors spread fallacious viral stories in order to gain advertising revenue, influence opinions, and even tip elections (Faris et al., 2017; Wardle and Derakhshan, 2017). As such, countering the spread of disinformation online presents an urgent technical and political issue.
To the best of our knowledge, most disinformation online today is manually written (Vargo et al., 2018). However, as progress continues in natural language generation, malicious actors will increasingly be able to controllably generate realistic-looking propaganda at scale. Thus, while we are excited about recent progress in text generation (Józefowicz et al., 2016; Radford et al., 2018; 2019), we are also concerned with the inevitability of AI-generated ‘neural’ fake news. (We thank past work, such as OpenAI’s Staged Release Policy for GPT2, for drawing attention to neural disinformation, alongside other dual-use implications.)
With this paper, we seek to understand and respond to neural fake news before it manifests at scale. We draw on the field of computer security, which relies on threat modeling: analyzing the space of potential threats and vulnerabilities in a system to develop robust defenses. To scientifically study the risks of neural disinformation, we present a new model called Grover (short for Generating aRticles by Only Viewing mEtadata Records). Our model allows for controllable yet efficient generation of an entire news article – not just the body, but also the title, news source, publication date, and author list. This lets us study an adversary with controllable generations (e.g. Figure 1, an example anti-vaccine article written in the style of the New York Times).
Humans rate the disinformation generated by Grover as trustworthy, even more so than human-written disinformation. Thus, developing robust verification techniques against generators such as Grover is an important research area. We consider a setting in which a discriminator has access to 5000 Grover generations, but unlimited access to real news. In this setting, the best existing fake news discriminators are, themselves, deep pretrained language models (73% accuracy) (Peters et al., 2018; Radford et al., 2018; 2019; Devlin et al., 2018). However, we find that Grover, when used in a discriminative setting, performs even better at 92% accuracy. This seemingly counterintuitive finding represents an exciting opportunity for defense against neural fake news: the best models for generating neural disinformation are also the best models at detecting it.
We investigate how deep pretrained language models distinguish between real and machine-generated text. We find that key artifacts are introduced during generation as a result of exposure bias: the generator is not perfect, so randomly sampling from its distribution results in generations that fall increasingly out-of-distribution as length increases. However, sampling strategies that alleviate these effects also introduce artifacts that strong discriminators can pick up on.
We conclude with a sketch of the ethical territory that must be mapped out in order to understand our responsibilities as researchers when studying fake news, and the potential negative implications of releasing models (Hecht et al., 2018). Accordingly, we suggest a provisional policy of how such models should be released and why we believe it to be safe – and perhaps even imperative – to do so. We believe our proposed framework and accompanying models provide a concrete initial proposal for an evolving conversation about ML-based disinformation threats and how they can be countered.
2 Fake News in a Neural and Adversarial Setting
We present a framework – motivated by today’s dynamics of manually created fake news – for understanding what adversaries will attempt with deep models, and how verifiers should respond.
Scope of fake news.
There are many types of false news, ranging from satire to propaganda (Wardle, 2017). In this paper, we focus on text-only documents formatted as news articles: stories and their corresponding metadata that contain purposefully false information. Existing fake news is predominantly human-written, for two broad goals: monetization (ad revenue through clicks) and propaganda (communicating targeted information) (Bradshaw and Howard, 2017; Melford and Fagan, 2019). Achieving either goal requires the adversary to be selective about the news that they make, whether by producing only viral content, or content that advances a given agenda.
Fact checking and verification: related work.
There is considerable interest in fighting online disinformation. Major platforms such as Facebook prioritize trustworthy sources and shut down accounts linked to disinformation (Mosseri, 2018; Dwoskin and Romm, 2018). Some users of these platforms avoid fake news with tools such as NewsGuard and Hoaxy (Shao et al., 2016) and websites like Snopes and PolitiFact. These services rely on manual fact-checking efforts: verifying the accuracy of claims, articles, and entire websites. Efforts to automate fake news detection generally point out stylistic biases that exist in the text (Rashkin et al., 2017; Wang, 2017; Pérez-Rosas et al., 2018). These efforts can help moderators on social media platforms shut down suspicious accounts. However, fact checking is not a panacea – cognitive biases such as the backfire effect and confirmation bias make humans liable to believe fake news that fits their worldview (Swire et al., 2017).
We cast fake news generation and detection as an adversarial game, with two players:
Adversary. Their goal is to generate fake stories that match specified attributes: generally, being viral or persuasive. The stories must read realistically to both human users as well as the verifier.
Verifier. Their goal is to classify news stories as real or fake. The verifier has access to unlimited real news stories, but few fake news stories from a specific adversary. This setup matches the existing landscape: when a platform blocks an account or website, their disinformative stories provide training for the verifier; but it is difficult to collect fake news from newly-created accounts.
The dual objectives of these two players suggest an escalating “arms race” between attackers and defenders. As verification systems get better, so too will adversaries. We must therefore be prepared to deal with ever-stronger adversarial attacks, which is the focus of the next section.
3 Grover: Modeling Conditional Generation of Neural Fake News
Given existing online disinformation, we have reason to believe adversaries will try to generate targeted content (e.g. clickbait and propaganda). Recently introduced large-scale generative models produce realistic-looking text (Radford et al., 2019), but they do not lend themselves to producing controllable generations (Hu et al., 2017). (A common workaround is to have a human seed the text to provide context. However, this (a) is a heavy-handed technique for biasing that may not capture the desired attributes, and (b) leaves in place a human-written beginning – as tokens are only generated left-to-right – which may create distributional artifacts.) Therefore, to probe the feasibility of realistic-looking neural fake news, we introduce Grover, which produces both realistic and controlled generations.
The current state-of-the-art in unconditional text generation views it as a language modeling problem (Bengio et al., 2003), in which the probability of a document x_{1:N} is the product of the conditional probabilities of generating each token given the previous tokens:

p(x_{1:N}) = \prod_{i=1}^{N} p(x_i \mid x_{1:i-1})    (1)
The document is typically treated as a single unstructured text field, beginning with a <start> token and ending with an <end> token. The latter, <end>, is particularly important because it indicates the end of the field, and thus when to stop generating. However, a news article has necessary structure beyond the running text, or body field. Metadata fields include the domain where the article is published (indirectly marking the style), the date of publication, the names of the authors, and the headline of the article itself. Not only does generating a news article require producing all of these components, these fields also allow significant control over the generations (e.g. specifying a headline helps control the generated body). An article can thus be modeled by the joint distribution:

p(domain, date, authors, headline, body)    (2)
However, it is not immediately obvious how to sample from Equation 2. One option is to define a canonical order among the article’s fields F (f_1 < f_2 < … < f_{|F|}) and model the article left-to-right in that order using Equation 1: p(f_1) p(f_2 | f_1) ⋯ p(f_{|F|} | f_1, …, f_{|F|−1}). However, this fixed ordering would forbid sampling certain fields without prohibitively expensive marginalization. Alternatively, one could generate fields in any order, but this requires the model to learn to handle all |F|! potential orderings at inference time.
Our solution is Grover, a new approach for efficient learning and generation of multi-field documents. We adopt the language modeling framework of Equation 1 in a way that allows for flexible decomposition of Equation 2. At inference time, we start with a set of fields as context, with each field containing field-specific start and end tokens. We sort the fields using a standard order (domain, date, authors, headline, and then the body) and combine the resulting tokens together. To generate a target field f, we append the field-specific start token <start-f> to the context tokens; then, we sample from the model until we hit <end-f>.
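To make the serialization concrete, the inference-time context construction can be sketched as follows (the token strings like <start-body> are hypothetical stand-ins; the actual special tokens in Grover's vocabulary are not specified here):

```python
# Canonical field order used when serializing context (from the text above).
FIELD_ORDER = ["domain", "date", "authors", "headline", "body"]

def build_context(fields, target):
    """Serialize the known fields in the canonical order, each wrapped in
    field-specific start/end tokens, then append the target field's start
    token so the model generates until it emits <end-target>."""
    tokens = []
    for name in FIELD_ORDER:
        if name in fields and name != target:
            tokens.append(f"<start-{name}>")
            tokens.extend(fields[name].split())
            tokens.append(f"<end-{name}>")
    tokens.append(f"<start-{target}>")  # model now samples until <end-target>
    return tokens

ctx = build_context(
    {"domain": "nytimes.com",
     "headline": "Link Found Between Vaccines and Autism"},
    target="body",
)
# ctx starts with the domain field and ends with "<start-body>"
```

Specifying fewer context fields simply yields a shorter serialized prefix, which is what lets the same model handle both heavily constrained and nearly unconditional generation.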
Figure 2 shows an example of using Grover to generate an anti-vaccine article. Here, the adversary specifies a domain, date, and headline. After Grover generates the body, it can be used to generate a fake author, before finally generating a new and more appropriate headline.
During training, we simulate inference by randomly partitioning an article’s fields into two disjoint sets, F1 and F2. We also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%; this allows the model to learn how to perform unconditional generation. The metadata fields in each set are sorted using the standard order, and the model is trained to minimize the cross-entropy of predicting the tokens in F1 followed by F2. (This trick means that Grover is only required to handle 2^{|F|} orderings during training, versus |F|!.)
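The training-time partition can be sketched as below. The 10% and 35% probabilities come from the text above, but the exact sampling procedure is our own guess at one reasonable implementation:

```python
import random

METADATA = ["domain", "date", "authors", "headline"]

def partition_fields(rng=random):
    """Sketch of the training-time partition: with probability 0.35 keep
    only the body (the unconditional-generation case); otherwise drop each
    metadata field independently with probability 0.10, then randomly
    split the surviving fields into a context set F1 and a target set F2.
    The body is always generated, so it always lands in F2."""
    if rng.random() < 0.35:
        return [], ["body"]
    kept = [f for f in METADATA if rng.random() >= 0.10]
    f1 = [f for f in kept if rng.random() < 0.5]
    f2 = [f for f in kept if f not in f1] + ["body"]
    return f1, f2
```

Each sampled (F1, F2) pair corresponds to one of the 2^{|F|} context/target splits the model sees during training.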
We draw on recent progress in training large Transformers for language modeling (Vaswani et al., 2017), building Grover using the same architecture as for GPT2 (Radford et al., 2019). We consider three model sizes. Our smallest model, Grover-Base, has 12 layers and 117 million parameters, on par with GPT and BERT-Base (Radford et al., 2018; Devlin et al., 2018). Our next model, Grover-Large, has 24 layers and 345 million parameters, on par with BERT-Large. Our largest model, Grover-Mega, has 48 layers and 1.5 billion parameters, the same as GPT2.
We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and metadata from each article. News articles from the December 2016 through March 2019 Common Crawl dumps were used as training data; articles published in April 2019, from the April 2019 dump, were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression.
We trained each Grover model on randomly-sampled sequences from RealNews with length 1024. Other optimization hyperparameters are in Appendix A. We trained Grover-Mega for 800k iterations, using a batch size of 512 and 256 TPU v3 cores. Training time was two weeks.
3.1 Language Modeling results: measuring the importance of data, context, and size
We validate Grover, versus standard unconditional language models, on the April 2019 test set. We consider two evaluation modes: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. In both cases, the perplexity is calculated only over the article body.
Our results, shown in Figure 3, support several conclusions. First, Grover noticeably improves (by between 0.6 and 0.9 perplexity points) when conditioned on metadata. Second, perplexity decreases with model size, with Grover-Mega obtaining 8.7 perplexity in the conditional setting. Third, the data distribution is still important: though the GPT2 models with 117M and 345M parameters respectively match our Grover-Base and Grover-Large architectures, our model is over 5 perplexity points lower in both cases, possibly because the OpenAI WebText corpus also contains non-news articles.
3.2 Carefully restricting the variance of generations with Nucleus Sampling
Sampling from Grover is straightforward as it behaves like a left-to-right language model during decoding. However, the choice of decoding algorithm is important. While likelihood-maximization strategies such as beam search work well for closed-ended generation tasks where the output contains the same information as the context (like machine translation), these approaches have been shown to produce degenerate text during open-ended generation (Hashimoto et al., 2019; Holtzman et al., 2019). However, as we will show in Section 6, restricting the variance of generations is also crucial.
In this paper, we primarily use Nucleus Sampling (top-p): for a given threshold p, at each timestep we sample from the most probable tokens whose cumulative probability comprises the top p of the distribution (Holtzman et al., 2019). We also compare with top-k sampling, wherein the k most probable tokens are used at each timestep (Fan et al., 2018).
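Both decoding strategies can be illustrated over a toy probability vector. This is a minimal NumPy sketch of the filtering step, not Grover's actual implementation:

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    """Nucleus filtering: keep the smallest set of tokens whose cumulative
    probability reaches p, zero out the tail, and renormalize."""
    order = np.argsort(probs)[::-1]            # tokens by descending prob.
    csum = np.cumsum(probs[order])
    cutoff = np.searchsorted(csum, p) + 1      # smallest prefix covering p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def top_k_filter(probs, k=2):
    """Top-k filtering: keep only the k most probable tokens, renormalize."""
    keep = np.argsort(probs)[::-1][:k]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

vocab_probs = np.array([0.5, 0.3, 0.15, 0.05])
nucleus = top_p_filter(vocab_probs, p=0.75)    # keeps the two most probable tokens
```

At each decoding timestep the model's next-token distribution is filtered this way and a token is sampled from the renormalized result; the two strategies differ only in whether the cut is by cumulative mass (top-p) or by count (top-k).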
4 Humans are Easily Fooled by Grover-written Propaganda
We evaluate the quality of disinformation generated by our largest model, Grover-Mega, using Nucleus Sampling. We consider four classes of articles: human-written articles from reputable news websites (Human News), Grover-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and Grover-written articles conditioned on the propaganda metadata (Machine Propaganda). (We use the technique described in Figure 2 to rewrite the propaganda: given the metadata, generate the article first, and then rewrite the headline. The domains used are in Appendix B; examples are in Appendix E.) We asked a pool of qualified workers on Amazon Mechanical Turk to rate each article on three dimensions: stylistic consistency, content sensibility, and overall trustworthiness.
Results (Figure 4) show a striking trend: though the quality of Grover-written news is not as high as that of human-written news, Grover is adept at rewriting propaganda. The overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by Grover (a statistically significant difference).
5 Neural Fake News Detection
The high quality of neural fake news written by Grover, as judged by humans, makes automatic neural fake news detection an important research area. Using models (below) for the role of the Verifier can mitigate the harm of neural fake news by classifying articles as Human or Machine written. These decisions can assist content moderators and end users in identifying likely (neural) disinformation.
Grover. We consider a version of our model adapted for discrimination. Similar to GPT (Radford et al., 2018), we place a special [CLS] token at the end of each article, and extract the final hidden state at that point. The hidden state is fed to a linear layer to predict the label Human or Machine.
To simulate real conditions, and ensure minimal overlap between the generator and discriminator parameters, we initialize Grover for discrimination using the checkpoint at iteration 700k, whereas the generator uses the checkpoint at iteration 800k.
GPT2, a 117M or 345M parameter pretrained Transformer language model. Similar to Grover, we follow the GPT approach and extract the hidden state from a newly-added [CLS] token.
BERT, a 117M parameter (BERT-Base) or 345M parameter (BERT-Large) bidirectional Transformer encoder commonly used for discriminative tasks. We perform domain adaptation to adapt BERT to the news domain, as well as to account for long articles; details in Appendix C.
FastText, an off-the-shelf library for bag-of-ngram text classification (Joulin et al., 2017). Though not pretrained, similar models do well at detecting human-written fake news.
All models are trained to minimize the cross-entropy loss of predicting the right label. Hyperparameters used during discrimination are in Appendix D.
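The discriminators above share a common recipe: extract a single hidden state (at an appended [CLS] token) and project it to two logits. A toy version of that classification head might look like this, with random weights standing in for the parameters learned by minimizing the cross-entropy loss:

```python
import numpy as np

def classify_from_cls(hidden_states, W, b):
    """Toy discriminator head: take the final hidden state at the appended
    [CLS] position (the last token) and project it to Human/Machine
    probabilities via a linear layer plus softmax."""
    cls_state = hidden_states[-1]          # hidden state at [CLS]
    logits = cls_state @ W + b             # shape (2,): [Human, Machine]
    exp = np.exp(logits - logits.max())    # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
h = rng.normal(size=(128, 16))   # toy: 128 token positions, hidden size 16
W = rng.normal(size=(16, 2))
b = np.zeros(2)
probs = classify_from_cls(h, W, b)
```

In practice the hidden states come from the pretrained Transformer and (W, b) are trained jointly with (or on top of) it; the sketch only shows the shape of the computation.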
5.1 A semi-supervised setting for neural fake news detection
While there are many human-written articles online, most are from the distant past, whereas articles to be detected will likely concern the present. Likewise, there might be relatively few neural fake news articles from a given adversary. (Moreover, since disinformation can be shared across a heterogeneous mix of platforms, it might be challenging to pin down a single generator.) We thus frame neural fake news detection as a semi-supervised problem. A neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before – the entire RealNews training set. However, it has limited access to generations and to more recent news articles. Using 10k news articles from April 2019, we generate article body text; another 10k articles are used as a set of human-written news articles. We split the articles in a balanced way, with 10k for training (5k per label), 2k for validation, and 8k for testing.
We consider two evaluation modes. In the unpaired setting, a discriminator is provided single news articles, and must classify each independently as Human or Machine. In the paired setting, a model is given two news articles with the same metadata, one real and one machine-generated. The discriminator must assign the machine-written article a higher Machine probability than the human-written article. We evaluate both modes in terms of accuracy.
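The two evaluation modes can be made concrete with a small sketch. The toy scores below also illustrate why the paired setting is easier: ranking within a pair needs no threshold calibration, while the unpaired setting does:

```python
def unpaired_accuracy(machine_probs, labels, threshold=0.5):
    """Unpaired: each article is classified independently; predict
    Machine when P(Machine) exceeds the threshold."""
    preds = [p > threshold for p in machine_probs]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def paired_accuracy(pairs):
    """Paired: two articles share metadata; correct iff the machine-written
    one receives the higher Machine probability."""
    return sum(pm > ph for pm, ph in pairs) / len(pairs)

# Toy miscalibrated scores: (machine, human) P(Machine) per pair.
pairs = [(0.40, 0.30), (0.45, 0.20)]
# Ranking is always right, so paired accuracy is perfect even though
# thresholding these same machine articles at 0.5 gets them all wrong.
```

A discriminator whose scores are systematically too low (or too high) can thus still achieve high paired accuracy while struggling in the unpaired setting.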
5.2 Discrimination results: Grover performs best at detecting Grover’s fake news
We present experimental results in Table 1 for all generator and discriminator combinations. For each pair, we show the test results using the most adversarial generation hyperparameters (top-p and top-k) as judged on the validation set. (For each discriminator/generator pair, we search over a grid of top-p and top-k values.) The results show several trends. First, the paired setting appears significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using Grover to discriminate Grover’s generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than Grover overall, even when controlling for architecture size and (for both BERT models) the domain. This suggests that effective discrimination requires having an inductive bias similar to the generator’s. (This matches findings on the HellaSwag dataset (Zellers et al., 2019b): given human text and machine text written by a finetuned GPT model, a GPT discriminator outperforms BERT-Base at picking out human text.)
5.3 Weak supervision: what happens if we don’t have access to Grover-Mega?
These results suggest that Grover is an effective discriminator when we have a moderate number of fake news examples from the exact adversary we will encounter at test time. What happens if we relax this assumption? Here, we consider the problem of detecting an adversary who is generating news with Grover-Mega at an unknown top-p threshold. (The adversary uses a fixed threshold, but we are not supposed to know it!) In this setup, during training we have access to a weaker model (Grover-Base or Grover-Large). We consider the effect of having only a limited number of examples from Grover-Mega, and sampling the missing articles from one of the weaker models, with the top-p threshold chosen uniformly at random for each article.
We show the results of this experiment in Figure 5. The results suggest that observing additional generations greatly helps discrimination performance when few examples of Grover-Mega are available: weak supervision with between 16 and 256 examples from Grover-Large yields around 78% accuracy, while accuracy remains around 50% without weak supervision. As the portion of examples that come from Grover-Mega increases, however, accuracy converges to 92%.
6 How does a model distinguish between human and machine text?
Why does Grover perform best at detecting its own fake news? We hypothesize that the reason may be due in part to exposure bias, the phenomenon whereby models maximizing Equation 1 are trained only conditioned on human-written text (Ranzato et al., 2016). To test our hypothesis, in Figure 6 we plot the perplexities given by Grover-Mega over each position of the body text, for generations sampled with and without a restrictive top-p threshold, as well as for human-written text. Generating the first token after <start-body> results in high perplexity. However, the remaining positions show a curious pattern: the perplexity of human-written text is lower than that of randomly sampled text, and this gap increases with sequence length, suggesting that random sampling causes Grover to fall increasingly out of the distribution of human language. Limiting the variance with a lower top-p threshold, however, lowers the resulting perplexity and limits its growth.
Limiting the variance of a model also creates artifacts
On the other hand, clipping the model’s variance also leaves an artifact, as prior work has observed for top-k sampling (Strobelt and Gehrmann, 2019). A similar phenomenon holds for Nucleus (top-p) sampling. The probability of observing a human-written article of length n in which every token is drawn from the top p of the distribution is roughly p^n; this probability goes to zero as n increases. For Nucleus Sampled text, however – in which the tail carrying the remaining 1−p of the probability mass is cut off – every token comes from the top p by construction.
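This vanishing probability is easy to check numerically (the threshold 0.96 below is purely an illustrative value, not a threshold specified here):

```python
# If every token of an n-token human-written article independently had to
# land inside the generator's top-p nucleus, the chance of that is about
# p**n, which vanishes quickly even for p close to 1.
p = 0.96
for n in (10, 100, 1000):
    print(n, p ** n)
# For n = 1000 the probability is astronomically small, so a human article
# essentially always contains tokens from the tail that top-p sampling removes.
```

The discriminator can therefore look for the statistical signature of a distribution whose tail never appears.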
The visibility of the artifacts depends on the choice of discriminator.
The top p of the distribution at each timestep is calculated under the generator’s worldview, meaning that if the discriminator models text in a different way, it might have a harder time pinpointing the empty tail. This could explain BERT’s lower performance at discrimination.
A sweet spot of careful variance reduction
Not reducing the variance, as well as significantly reducing the variance, both cause problems. Might there be a sweet spot for how much to truncate the variance, making discrimination maximally hard? In Figure 7, we show results varying the top-p threshold for the discrimination task applied to Grover-Mega’s generations. The results indeed show a sweet spot in the top-p threshold, depending on the discriminator, wherein discrimination is hardest. Interestingly, we note that the most adversarial top-p threshold for BERT-Large is considerably lower than the corresponding threshold for Grover-Large, a discriminator of the same size. This supports our hypothesis that BERT’s view of language differs markedly from Grover’s; using a lower top-p threshold does not seem to give it much more information about the missing tail.
Overall, our analysis suggests that Grover might be the best at catching Grover because it is the best at knowing where the tail is, and thus whether it was truncated.
7 Conclusion: a Release Strategy for Grover
This paper investigates the threats posed by adversaries seeking to spread disinformation. Our sketch of what these threats might look like – a controllable language model named Grover – suggests that these threats are real and dangerous. Grover can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy. At the same time, there are defenses to these models – notably, in the form of Grover itself. We conclude with a discussion of next steps and ethical considerations.
The Era of Neural Disinformation.
Though training Grover was challenging, it is easily achievable by real-world adversaries today. Obtaining the data required through Common Crawl cost $10k in AWS credits and can be massively parallelized over many CPUs. Training Grover-Mega is relatively inexpensive: at a cost of $0.30 per TPU v3 core-hour and two weeks of training, the total cost is $25k. Spending more money and engineering time could yield even more powerful generators.
Release of generators is critical.
At first, it would seem like keeping models like Grover private would make us safer. However, Grover serves as an effective detector of neural fake news, even when the generator is much larger (Section 5). If generators are kept private, then there will be little recourse against adversarial attacks.
Future of progress in generation.
Models like BERT are strong discriminators for many NLP tasks, but they are not as good at detecting Grover’s generations as Grover itself, even after domain adaptation. One hypothesis is that the artifacts shown in Section 6 are most visible to a left-to-right discriminator. This also suggests that recent progress on generating text in any order (Gu et al., 2019; Stern et al., 2019; Ghazvininejad et al., 2019) may lead to models that evade a Grover discriminator. Likewise, models that are trained conditioned on their own predictions might avoid exposure bias; however, these objectives often lead to low performance on language tasks (Caccia et al., 2018). One additional possibility is the use of Adversarial Filtering (Zellers et al., 2018; 2019b) to oversample and then select a subset of generations. However, we found this did not work well for very long sequences (up to 1024 BPE tokens), possibly because these are far from the ‘Goldilocks Zone’ wherein discrimination is hard for machines.
Future of progress in discrimination.
Our discriminators are effective, but they primarily leverage distributional features rather than evidence. In contrast, humans assess whether an article is truthful by relying on a model of the world, assessing whether the evidence in the article matches that model. Future work should investigate integrating knowledge into the discriminator (e.g. for claim verification in FEVER; Thorne et al., 2018). An open question is to scale progress in this task towards entire news articles, and without paired evidence (similar to open-domain QA; Chen et al., 2017).
What should platforms do?
Video-sharing platforms like YouTube use deep neural networks to scan videos while they are uploaded, to filter out content like pornography (Hosseini et al., 2017). We suggest platforms do the same for news articles. An ensemble of deep generative models, such as Grover, can analyze the content of text – together with more shallow models that predict human-written disinformation. However, humans must still be in the loop due to dangers of flagging real news as machine-generated, and possible unwanted social biases of these models.
We plan to make Grover-Base and Grover-Large publicly available. Interested researchers may also apply to download Grover-Mega and RealNews. (Up-to-date information about the release policy is available at https://rowanzellers.com/grover.)
We thank Dan Weld for providing feedback on this work. Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure. This work was supported by the National Science Foundation through a Graduate Research Fellowship (DGE-1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artificial Intelligence, the NVIDIA Artificial Intelligence Lab, Samsung through a Samsung AI research grant, and gifts by Google and Facebook. Computations on beaker.org were supported in part by credits from Google Cloud.
- Bengio et al. [2003] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.
- Bradshaw and Howard [2017] Samantha Bradshaw and Philip Howard. Troops, trolls and troublemakers: A global inventory of organized social media manipulation. Technical report, Oxford Internet Institute, 2017.
- Caccia et al. [2018] Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. Language GANs falling short. arXiv preprint arXiv:1811.02549, 2018.
- Chen et al. [2017] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, 2017.
- Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
- Dicker [2016] Rachel Dicker. Avoid These Fake News Sites at All Costs. https://www.usnews.com/news/national-news/articles/2016-11-14/avoid-these-fake-news-sites-at-all-costs, 2016. [Online; accessed 22-May-2019].
- Dwoskin and Romm [2018] Elizabeth Dwoskin and Tony Romm. Facebook says it has uncovered a coordinated disinformation operation ahead of the 2018 midterm elections. The Washington Post, 2018.
- Fan et al. [2018] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, 2018.
- Faris et al. [2017] Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. Partisanship, propaganda, and disinformation: Online media and the 2016 US presidential election. Berkman Klein Center Research Publication 2017-6, 2017.
- Ghazvininejad et al. [2019] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Constant-time machine translation with conditional masked language models. arXiv preprint arXiv:1904.09324, 2019.
- Gu et al. [2019] Jiatao Gu, Qi Liu, and Kyunghyun Cho. Insertion-based decoding with automatically inferred generation order. arXiv preprint arXiv:1902.01370, 2019.
- Han and Eisenstein [2019] Xiaochuang Han and Jacob Eisenstein. Unsupervised domain adaptation of contextualized embeddings: A case study in early modern English. arXiv preprint arXiv:1904.02817, 2019.
- Hashimoto et al. [2019] Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. Unifying human and statistical evaluation for natural language generation. arXiv preprint arXiv:1904.02792, 2019.
- Hecht et al. [2018] Brent Hecht, Lauren Wilcox, Jeffrey P. Bigham, Johannes Schöning, Ehsan Hoque, Jason Ernst, Yonatan Bisk, Luigi De Russis, Lana Yarosh, Bushra Anjum, Danish Contractor, and Cathy Wu. It’s time to do something: Mitigating the negative impacts of computing through a change to the peer review process. ACM Future of Computing Blog, 2018.
- Holtzman et al. [2019] Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
- Hosseini et al. [2017] Hossein Hosseini, Baicen Xiao, Andrew Clark, and Radha Poovendran. Attacking automatic video analysis algorithms: A case study of Google Cloud Video Intelligence API. In Proceedings of the 2017 on Multimedia Privacy and Security, pages 21–32. ACM, 2017.
- Hu et al. [2017] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1587–1596. JMLR.org, 2017.
- Joulin et al. [2017] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 427–431, 2017.
- Józefowicz et al. [2016] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.
- Kingma and Ba [2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
- Melford and Fagan [2019] Clare Melford and Craig Fagan. Cutting the funding of disinformation: The ad-tech solution. Technical report, The Global Disinformation Index, 2019.
- Mosseri [2018] Adam Mosseri. News feed FYI: Helping ensure news on Facebook is from trusted sources. Facebook Newsroom, 19, 2018.
- Ott et al. [2011] Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Hancock. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 309–319. Association for Computational Linguistics, 2011.
- Pérez-Rosas et al. [2018] Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. Automatic detection of fake news. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3391–3401, 2018.
- Peters et al. [2018] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237, 2018.
- Radford et al. [2018] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018. URL https://blog.openai.com/language-unsupervised/.
- Radford et al. [2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical report, OpenAI, 2019.
- Ranzato et al. [2016] Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.
- Rashkin et al. [2017] Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937, 2017.
- Shao et al. [2016] Chengcheng Shao, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. Hoaxy: A platform for tracking online misinformation. In Proceedings of the 25th International Conference Companion on World Wide Web, pages 745–750. International World Wide Web Conferences Steering Committee, 2016.
- Shazeer and Stern [2018] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4603–4611, 2018.
- Stern et al. [2019] Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. arXiv preprint arXiv:1902.03249, 2019.
- Strobelt and Gehrmann [2019] Hendrik Strobelt and Sebastian Gehrmann. Catching a unicorn with GLTR: A tool to detect automatically generated text. Technical report, Harvard, 2019.
- Swire et al. [2017] Briony Swire, Ullrich K. H. Ecker, and Stephan Lewandowsky. The role of familiarity in correcting inaccurate information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(12):1948, 2017.
- Thorne et al. [2018] James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: A large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, 2018.
- Vargo et al. [2018] Chris J. Vargo, Lei Guo, and Michelle A. Amazeen. The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5):2028–2049, 2018.
- Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010. Curran Associates Inc., 2017.
- Wang [2017] William Yang Wang. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422–426, 2017.
- Wardle [2017] Claire Wardle. Fake news. It’s complicated. First Draft News, 16, 2017.
- Wardle and Derakhshan [2017] Claire Wardle and Hossein Derakhshan. Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report, DGI (2017), 9, 2017.
- Zellers et al. [2018] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
- Zellers et al. [2019a] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019a.
- Zellers et al. [2019b] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019b.
Appendix A Optimization Hyperparameters
For our input representation, we use the same BPE vocabulary as Radford et al. [2019]. We use Adafactor [Shazeer and Stern, 2018] as our optimizer. Common optimizers such as Adam [Kingma and Ba, 2014] tend to work well, but their memory cost scales linearly with the number of parameters, which renders training Grover-Mega all but impossible. Adafactor alleviates this problem by factoring the second-order momentum parameters into a tensor product of two vectors. We used a maximum learning rate of 1e-4, with linear warm-up over the first 10,000 iterations and linear decay over the remaining iterations. We clipped updates for each parameter to a root-mean-square of at most 1, and applied weight decay. We used a batch size of 512 on 256 TPU v3 cores, which corresponds to roughly 20 epochs through our news dataset. Training took roughly two weeks in total.
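The warm-up-then-decay schedule above can be sketched as a simple step-dependent function; the total iteration count below is illustrative, not a value from the paper:

```python
def learning_rate(step, max_lr=1e-4, warmup_steps=10_000, total_steps=800_000):
    """Linear warm-up to max_lr over warmup_steps, then linear decay to zero.

    max_lr and warmup_steps match the values stated above; total_steps is a
    hypothetical placeholder for the full training run.
    """
    if step < warmup_steps:
        # Warm-up phase: scale the learning rate linearly with the step count.
        return max_lr * step / warmup_steps
    # Decay phase: linearly anneal over the remaining iterations.
    frac = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max_lr * max(0.0, 1.0 - frac)
```

Linear decay to zero is one common choice; the paper specifies only that the rate decays over the remaining iterations.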
Appendix B Real News and Propaganda Websites
In our generation experiments (Section 4), we consider a set of mainstream as well as propaganda websites. We used the following websites as ‘real news’: theguardian.com, reuters.com, nytimes.com, theatlantic.com, usatoday.com, huffingtonpost.com, and nbcnews.com. For propaganda sites, we chose sites that have notably spread misinformation [Dicker, 2016] and/or are alternative media sites with strong political affiliations (see allsides.com/media-bias/media-bias-ratings). These were breitbart.com, infowars.com, wnd.com, bigleaguepolitics.com, and naturalnews.com.
Appendix C Domain Adaptation of BERT
BERT [Devlin et al., 2018] is a strong model for most classification tasks. However, care must be taken to format the input in the right way, particularly because BERT is pretrained in a setting where it is given two spans (separated by a special [SEP] token). We thus use the following input format. The first span consists of the metadata, with each field prefixed by its name in brackets (e.g. ‘[title]’). The second span consists of the body. Because the generations are cased (with capital and lowercase letters), we used the ‘cased’ version of BERT.
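The two-span format above can be sketched as follows; the exact field names and their ordering are illustrative assumptions rather than the paper's precise preprocessing code:

```python
def format_for_bert(metadata, body):
    """Build a two-span BERT input: span A holds the metadata fields, each
    prefixed with its name in brackets (e.g. '[title]'); span B is the body.
    Tokenization and conversion to IDs would happen downstream."""
    span_a = " ".join(f"[{field}] {value}" for field, value in metadata.items())
    # BERT's pretraining format separates the two spans with [SEP] tokens.
    return f"[CLS] {span_a} [SEP] {body} [SEP]"

example = format_for_bert(
    {"title": "Link Found Between Vaccines and Autism", "domain": "nytimes.com"},
    "Researchers announced today ...",
)
```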
Past work (e.g. Zellers et al. [2019a], Han and Eisenstein [2019]) has found that BERT, like other language models, benefits greatly from domain adaptation. We thus perform domain adaptation on BERT, adapting it to the news domain by training it on RealNews for 50k iterations at a batch size of 256. Additionally, BERT was trained with a sequence length of at most 512 WordPiece tokens, but generations from Grover are much longer (1024 BPE tokens). Thus, we initialized new position embeddings for positions 513-1024, and performed domain adaptation at a length of 1024 WordPiece tokens.
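Extending the position-embedding table amounts to copying the 512 pretrained rows and freshly initializing the rest. A minimal sketch, assuming a BERT-style truncated-normal initialization with std 0.02 (the paper does not state the initializer used):

```python
import numpy as np

def extend_position_embeddings(old_emb, new_len=1024, seed=0):
    """Extend a (512, hidden) position-embedding matrix to new_len rows.

    Rows 0..511 are copied from the pretrained model; the remaining rows are
    newly initialized and learned during domain adaptation.
    """
    old_len, hidden = old_emb.shape
    rng = np.random.default_rng(seed)
    # New rows for positions old_len..new_len-1, with an assumed std of 0.02.
    new_rows = rng.normal(0.0, 0.02, size=(new_len - old_len, hidden))
    return np.concatenate([old_emb, new_rows], axis=0)
```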
Appendix D Hyperparameters for the Discriminators
For our discrimination experiments, we limited the lengths of generations (and human-written articles) to 1024 BPE tokens. This was needed because our discriminators only handle documents of up to 1024 tokens. However, we also found that, empirically, the longer length makes discrimination easier for models (see Section 6).
For our discrimination experiments, we used different hyperparameters depending on the model, chosen via an initial grid search. For BERT, we used the Adam optimizer [Kingma and Ba, 2014] with a batch size of 64. We trained BERT models for 5 epochs, with linear warm-up of the learning rate over the initial 20% of iterations. For GPT2 and Grover, we used the Adafactor optimizer [Shazeer and Stern, 2018] with a batch size of 64, and applied an auxiliary language modeling loss for these models. These models were trained for 10 epochs, with linear warm-up over the initial 20% of iterations.
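The fine-tuning objective and warm-up schedule above can be sketched as follows; the auxiliary-loss coefficient is a placeholder (the paper's value is not reproduced here), and holding the learning rate constant after warm-up is an assumption:

```python
def discriminator_loss(clf_loss, lm_loss, lm_coef=0.5):
    """Fine-tuning loss for the GPT2/Grover discriminators: classification
    loss plus a weighted auxiliary language-modeling loss. lm_coef=0.5 is
    illustrative, not the paper's coefficient."""
    return clf_loss + lm_coef * lm_loss

def warmup_fraction_lr(step, total_steps, max_lr, warmup_frac=0.2):
    """Linear warm-up over the initial 20% of iterations, then constant
    (the post-warm-up behavior is an assumption)."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr
```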
Appendix E Examples
In Figures 8 and 9, we include examples of articles with the average scores given by human raters, who were asked to evaluate the style, content, and overall trustworthiness. In Figure 8, we show a real article (Human News) posted by the Guardian, along with an article from Grover (Machine News) generated from the same metadata. Figure 9 shows a real propaganda article from Natural News (Human Propaganda) and an article generated by Grover (Machine Propaganda) using the original headline and the style of the Huffington Post (Grover was also used to rewrite the title to be more stylistically similar to the Huffington Post).
We also present several other examples, generated from Grover-Mega with top-p sampling. All of the examples are truncated to 1024 generated BPE tokens, since this is our setup for discrimination.
Grover can spoof the identity of writers. In Figure 11 we show a realistic-looking editorial seemingly from New York Times columnist Paul Krugman.
Grover can generate fake political news. In Figure 12 we show an article generated about Trump being impeached, written in the style of the Washington Post.
Grover can generate fake business news. In Figure 14, we show an article generated about an ‘Uber for Dogs’ startup.