CTRL: A Conditional Transformer Language Model for Controllable Generation

Nitish Shirish Keskar1, Bryan McCann, Lav R. Varshney, Caiming Xiong, Richard Socher
Salesforce Research
Equal contribution.

Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.6 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. We have released multiple full-sized, pretrained versions of CTRL at github.com/salesforce/ctrl.

1 Introduction

With enough data, model capacity, and compute, generative models can learn distributions powerful enough to produce high-quality samples from complex domains. In computer vision, the advent of generative adversarial networks (Goodfellow et al., 2014) improved image generation. Much research then focused on methods for controlling the generation process and improving estimation of generative distributions (Arjovsky et al., 2017; Chen et al., 2016; Kingma and Welling, 2013).

In natural language processing, language models are often trained as conditional language models for specific tasks that require text generation (Brants et al., 2007; Sutskever et al., 2014; Rush et al., 2015). They are also used as a means of learning word vectors (Mikolov et al., 2013), document vectors (Kiros et al., 2015), or contextualized word vectors (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2018) for transfer learning. The language models themselves have been transferred to new tasks through fine-tuning as well (Radford et al., 2018; Howard and Ruder, 2018). Less is understood about generation that is not constrained to any specific task. Typically prompts generated by models (Fan et al., 2018) or written by humans can only be used to provide a rough guide or starting point for the generated text. This raises the question of how text generation can be controlled more explicitly.

Inspired by the degree of control available in image generation as well as the recent progress in text generation (Radford et al., 2019) and multitask learning McCann et al. (2018), we train a language model that is conditioned on a variety of control codes (Pfaff, 1979; Poplack, 1980) that make desired features of generated text more explicit. With 1.6 billion parameters, our Conditional Transformer Language (CTRL) model can generate text conditioned on control codes that specify domain, style, topics, dates, entities, relationships between entities, plot points, and task-related behavior. To preserve the generality of the language model trained in an unsupervised setting, we train CTRL on control codes derived from structure that naturally co-occurs with the raw text typically collected for training large language models. For example, large resources like Wikipedia, Project Gutenberg, and Amazon Reviews can each be assigned a domain-related control code. Smaller resources, like the content extracted from individual subreddits, often occur with both a broader domain name, reddit, as well as subdomain information, r/subdomain. In the vast majority of cases, text collected for training is associated with a URL, which often contains information pertinent to the text it represents. Humans can use these codes to trigger generation of text from different linguistic communities without having to understand how to prompt with particular linguistic patterns. Text can be generated in more predictable ways by controlling for content or changing the domain even when the initial prompt remains fixed.

Because all control codes can be traced back to a particular subset of the training data, CTRL can be used to predict the subset of training data that is most likely given a sequence. This explicit relationship between CTRL and its training data can be exploited to analyze the correlations that the language model has learned from each domain, and it provides a means of studying large amounts of text through the language model.

These control codes also allow for the straightforward inclusion of task-specific data in a way that improves important skills without harming the generality of the model. Control codes for question answering and machine translation make these skills easily accessible with CTRL. These task-specific codes can be combined with other codes during generation, creating novel cross-over between control codes that govern task-specific behavior and those related to domain and content.

In order to push towards more controllable, general models for natural language processing, we have released multiple full-sized, pretrained versions of CTRL at github.com/salesforce/ctrl. We hope that the release leads to further research into how controllable generation can enhance natural language understanding.

2 Language Modeling

Given example sequences of the form $x = (x_1, \ldots, x_n)$, where each $x_i$ comes from a fixed set of symbols, the goal of language modeling is to learn $p(x)$. Because $x$ is a sequence, it is natural to factorize this distribution using the chain rule of probability (Bengio et al., 2003):

$$p(x) = \prod_{i=1}^{n} p(x_i \mid x_{<i})$$

This decomposes language modeling into next-word prediction. Current state-of-the-art methods (Dai et al., 2019; Radford et al., 2019) train a neural network with parameters $\theta$ to minimize the negative log-likelihood over a dataset $D = \{x^1, \ldots, x^{|D|}\}$:

$$\mathcal{L}(D) = -\sum_{k=1}^{|D|} \sum_{i=1}^{n_k} \log p_\theta(x_i^k \mid x_{<i}^k)$$

Because language models learn $p(x_i \mid x_{<i})$, a new sequence $\tilde{x}$ of length $m$ can be generated by sequentially sampling its constituent symbols: $\tilde{x}_1 \sim p_\theta(x_1)$, $\tilde{x}_2 \sim p_\theta(x_2 \mid \tilde{x}_1)$, \ldots, $\tilde{x}_m \sim p_\theta(x_m \mid \tilde{x}_{<m})$.
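This sequential factorization can be sketched as a short sampling loop. The toy next-token distribution below is purely illustrative, a stand-in for a trained model:

```python
import numpy as np

def sample_sequence(next_token_probs, vocab, length, rng):
    """Generate a sequence token by token using the chain rule:
    p(x) = prod_i p(x_i | x_<i). `next_token_probs` maps a prefix
    to a distribution over the vocabulary."""
    sequence = []
    for _ in range(length):
        p = next_token_probs(sequence)   # p(x_i | x_<i)
        token = rng.choice(vocab, p=p)   # sample the next symbol
        sequence.append(token)
    return sequence

# Toy "model": uniform over a 3-symbol vocabulary, ignoring the prefix.
vocab = ["a", "b", "c"]
uniform = lambda prefix: np.ones(len(vocab)) / len(vocab)
seq = sample_sequence(uniform, vocab, length=5, rng=np.random.default_rng(0))
```

A real language model would replace `uniform` with a network that conditions on the prefix.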

3 Language Modeling with CTRL

CTRL is a conditional language model that is always conditioned on a control code $c$ and learns the distribution $p(x \mid c)$. The distribution can still be decomposed using the chain rule of probability and trained with a loss that takes the control code into account:

$$p(x \mid c) = \prod_{i=1}^{n} p(x_i \mid x_{<i}, c), \qquad \mathcal{L}(D) = -\sum_{k=1}^{|D|} \sum_{i=1}^{n_k} \log p_\theta(x_i^k \mid x_{<i}^k, c^k)$$

The control code $c$ provides a point of control over the generation process. This is true even when sampling the first token, in contrast to the traditional language modeling framework described in Sec. 2.

CTRL learns $p_\theta$ by training on sequences of raw text prepended with control codes. After minimal preprocessing (described in Sec. 3.2), a single example sequence containing $n$ tokens is embedded as a sequence of $n$ corresponding vectors in $\mathbb{R}^d$. Each vector is the sum of a learned token embedding and a sinusoidal positional embedding as in the original Transformer architecture (Vaswani et al., 2017). This sequence of vectors is stacked into a matrix $X_0 \in \mathbb{R}^{n \times d}$ so that it can be processed by $l$ attention layers (Vaswani et al., 2017). The $i$th layer consists of two blocks, each of which preserves the model dimension $d$.

The core of the first block is multi-head attention with $k$ heads that uses a causal mask to preclude attending to future tokens:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{\mathrm{mask}(QK^\top)}{\sqrt{d}}\right)V$$
$$\mathrm{MultiHead}(X, k) = [h_1; \cdots; h_k] W_o, \quad \text{where } h_j = \mathrm{Attention}(XW_j^1, XW_j^2, XW_j^3)$$

The core of the second block is a feedforward network with ReLU activation (Nair and Hinton, 2010) that projects inputs to an inner dimension $f$, with parameters $U \in \mathbb{R}^{d \times f}$ and $V \in \mathbb{R}^{f \times d}$:

$$FF(X) = \max(0, XU)V$$

Each block precedes its core functionality with layer normalization (Ba et al., 2016; Child et al., 2019) and follows it with a residual connection (He et al., 2016). Together, they yield $X_{i+1}$:

Block 1: $\bar{X}_i = \mathrm{LayerNorm}(X_i)$, $\quad H_i = \mathrm{MultiHead}(\bar{X}_i) + \bar{X}_i$
Block 2: $\bar{H}_i = \mathrm{LayerNorm}(H_i)$, $\quad X_{i+1} = FF(\bar{H}_i) + \bar{H}_i$

Scores for each token in the vocabulary are computed from the output of the last layer:

$$\mathrm{Scores}(X_0) = \mathrm{LayerNorm}(X_l) W_{\mathrm{vocab}}$$

During training, these scores are the inputs of a cross-entropy loss function. During generation, the scores corresponding to the final token are normalized with a softmax, yielding a distribution for sampling a new token.
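The two-block layer described above can be sketched in a few lines of numpy. This is a simplified illustration with single-head attention and toy dimensions rather than the full multi-head model; none of it is the released implementation:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each position's vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention with a causal mask,
    so position i attends only to positions <= i."""
    n = X.shape[0]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores = np.where(future, -1e9, scores)  # mask out future tokens
    return softmax(scores) @ V

def transformer_block(X, p):
    """One layer: each block is LayerNorm -> core function -> residual."""
    # Block 1: LayerNorm, masked attention, residual.
    xb = layer_norm(X)
    H = causal_attention(xb, p["Wq"], p["Wk"], p["Wv"]) + xb
    # Block 2: LayerNorm, feedforward with ReLU (d -> f -> d), residual.
    hb = layer_norm(H)
    return np.maximum(0.0, hb @ p["U"]) @ p["V"] + hb

# Toy dimensions: sequence length 5, model dim d=4, inner dim f=8.
rng = np.random.default_rng(0)
d, f = 4, 8
p = {k: rng.normal(size=(d, d)) for k in ("Wq", "Wk", "Wv")}
p.update(U=rng.normal(size=(d, f)), V=rng.normal(size=(f, d)))
X = rng.normal(size=(5, d))
Y = transformer_block(X, p)
```

Because of the causal mask, changing a later token leaves the outputs at earlier positions unchanged, which is what makes next-word prediction training valid.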

3.1 Data

We train on 140 GB of text drawn from a wide variety of domains: Wikipedia (En, De, Es, Fr), Project Gutenberg (collected with a modified version of https://github.com/chiphuyen/lazynlp), submissions from 45 subreddits, OpenWebText (collected with a modified version of https://github.com/jcpeterson/openwebtext.git), a large collection of news data (Hermann et al., 2015; Barrault et al., 2019; Sandhaus, 2008; Grusky et al., 2018), Amazon Reviews (McAuley et al., 2015), Europarl and UN data from WMT (En-De, En-Es, En-Fr) (Barrault et al., 2019), question-answer pairs (no context documents) from ELI5 (Fan et al., 2019) and the MRQA shared task (https://github.com/mrqa/MRQA-Shared-Task-2019), which includes the Stanford Question Answering Dataset (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019). A full account of training data and associated control codes can be found in Table 7 in the Appendix.

3.2 Experimental Settings

We learn BPE (Sennrich et al., 2015) codes and tokenize the data using fastBPE (https://github.com/glample/fastBPE), but we use a large vocabulary of roughly 250K tokens. This includes the sub-word tokens necessary to mitigate problems with rare words, but it also reduces the average number of tokens required to generate long text by including most common words. We use English Wikipedia and a 5% split of our collected OpenWebText data for learning BPE codes. We also introduce an unknown token so that during preprocessing we can filter out sequences that contain more than 2 unknown tokens. This, along with compressed storage for efficient training (TFRecords) (Abadi et al., 2016), reduces our training data to 140 GB from the 180 GB collected. Data was treated as a single stream of tokens with non-domain control codes inserted where appropriate (often at document boundaries). The stream was chunked into contiguous sequences of tokens. Each sequence originated from a domain, and it has the corresponding domain control code prepended as its first token. In this way, domain control codes receive special treatment: they are propagated to all text in the domain as the first token. This is similar to how codes and natural language sequences have been used in multi-task settings (Wu et al., 2016; Johnson et al., 2017; McCann et al., 2018) to control conditional language models. All other control codes are injected into the data without such special treatment. We experimented with sequence lengths of 256 and 512 due to memory and optimization constraints. Despite training on relatively short sequences compared to other approaches, we found that a sliding-window approach allows for generation beyond these windows, and we also found little difference in quality between the two models within the first 256 tokens. Further, we note that our vocabulary is approximately 4 times larger than those of similar approaches, hence the effective sequence length in characters is comparable.
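The chunking and domain-code prepending described above can be sketched as follows. The helper name and token ids are illustrative, not taken from the released code:

```python
def make_training_sequences(documents, domain_code, seq_len):
    """Concatenate a domain's documents into one token stream, chunk it
    into contiguous sequences, and prepend the domain control code as
    the first token of every sequence (a sketch of the preprocessing
    described in the text)."""
    stream = []
    for doc in documents:
        stream.extend(doc)
    sequences = []
    step = seq_len - 1  # reserve one slot for the control code
    for i in range(0, len(stream), step):
        chunk = stream[i:i + step]
        sequences.append([domain_code] + chunk)
    return sequences

# Two toy documents from one domain; token 0 plays the domain code.
docs = [[1, 2, 3], [4, 5]]
seqs = make_training_sequences(docs, domain_code=0, seq_len=4)
```

Every sequence thus carries its domain code, while non-domain control codes would simply appear inside the stream at document boundaries.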

CTRL has model dimension $d = 1280$, inner dimension $f = 8192$, 48 layers, and 16 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the final output embedding layer (Inan et al., 2016; Press and Wolf, 2016).

CTRL was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 1024 distributed across 256 cores of a Cloud TPU v3 Pod for 800k iterations. Training took approximately 2 weeks using Adagrad (Duchi et al., 2011) with a linear warmup from 0 to 0.05 over 25k steps. The norm of the gradients was clipped to 0.25 as in Merity et al. (2017). Learning rate decay was not necessary due to the monotonic nature of the Adagrad accumulator. We compared to the Adam optimizer (Kingma and Ba, 2014) while training smaller models, but we noticed comparable convergence rates and significant memory savings with Adagrad. We also experimented with explicit memory-saving optimizers including SM3 (Anil et al., 2019), Adafactor (Shazeer and Stern, 2018), and NovoGrad (Ginsburg et al., 2019) with mixed results.

4 Controllable Generation

4.1 Sampling

Typically, temperature-controlled stochastic sampling methods are used for generating text from a trained language model. It is also common to limit the sampling only to the top-$k$ alternatives. Given a temperature $T > 0$ and scores $x_i$ for each token $i$ in the vocabulary, the probability of predicting the $i$th token is given by:

$$p_i = \frac{\exp(x_i / T)}{\sum_j \exp(x_j / T)} \qquad (1)$$

The next token is then chosen by sampling through a multinomial distribution with probabilities clipped at the top-$k$ tokens. In the equation above, $T \to 0$ approximates a greedy distribution which magnifies the peaks in the probability distribution, while $T \to \infty$ flattens the distribution to make it more uniform. Rather than choosing a fixed value of $k$, as is common practice, Holtzman et al. (2019) suggested adapting $k$ heuristically. The nucleus sampling approach chooses a probability threshold $p_t$ and sets $k$ to be the lowest value such that $\sum_i \mathrm{sort}(p_i) > p_t$. If the model is confident in its next-word prediction, then $k$ will be lower and vice versa. Despite the improved generative capabilities of models with such heuristics, there still exists a trade-off between these parameters depending on the intended generation.
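These two heuristics can be sketched as follows; the function names and the toy scores are ours, not from any released implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def top_k_probs(scores, T=1.0, k=None):
    """Temperature-scaled distribution clipped to the top-k tokens."""
    p = softmax(np.asarray(scores, dtype=float) / T)
    if k is not None:
        cutoff = np.sort(p)[-k]          # k-th largest probability
        p = np.where(p >= cutoff, p, 0.0)
        p = p / p.sum()                  # renormalize over the top k
    return p

def nucleus_k(scores, T=1.0, p_threshold=0.9):
    """Nucleus heuristic (Holtzman et al., 2019): the smallest k whose
    top probabilities sum past the threshold."""
    p = np.sort(softmax(np.asarray(scores, dtype=float) / T))[::-1]
    return int(np.searchsorted(np.cumsum(p), p_threshold) + 1)
```

A confident (peaked) score vector yields a small nucleus, while a flat one yields a nucleus covering most of the vocabulary, which is exactly the adaptivity the heuristic is after.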

Given a prompt, Q: What is the capital of Australia?, a well-trained model assigns higher probability mass to the correct answer, Canberra, but non-zero probability mass to other cities such as Melbourne, Sydney, Brisbane, Darwin, and Perth; see Figure 1.

Figure 1: Next-token probability for the prompts Q: What is the capital of Australia? and Q: Who was the first man on the moon? In such cases, sampling from the full distribution is detrimental to answering the question correctly.

By choosing to sample, we mistrust the model, despite it being correct. A natural solution is to choose the next token greedily. However, greedy decoding is known to create repetitions of phrases or sentences even for large, well-trained models (Radford et al., 2019; Holtzman et al., 2019). To reconcile the two, we propose a new sampling scheme that trusts the model distribution through near-greedy sampling but prevents repetitions through a penalty. This penalized sampling works by discounting the scores of previously generated tokens. The motivation is similar to coverage mechanisms (See et al., 2017) and other losses designed to discourage repetition (Welleck et al., 2019), but penalized sampling is not used during training. Given a list of generated tokens $g$, and using the notation from equation 1, the probability distribution for the next token is defined as:

$$p_i = \frac{\exp\left(x_i / (T \cdot I(i \in g))\right)}{\sum_j \exp\left(x_j / (T \cdot I(j \in g))\right)}, \qquad I(c) = \theta \text{ if } c \in g \text{ else } 1$$

We find that greedy sampling with $\theta \approx 1.2$ yields a good balance between truthful generation and lack of repetition. Note that $\theta = 1$ is equivalent to equation 1. We note in passing that this approach succeeds only if the model has learned a sufficiently reliable distribution.
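Penalized sampling amounts to dividing the scores of already-generated tokens by the penalty before the softmax. A minimal sketch (the helper name is ours):

```python
import numpy as np

def penalized_probs(scores, generated, T=1.0, theta=1.2):
    """Repetition-penalized next-token distribution: the score of every
    previously generated token is divided by theta before the softmax.
    theta = 1 recovers ordinary temperature sampling."""
    x = np.asarray(scores, dtype=float).copy()
    for g in set(generated):
        x[g] = x[g] / theta          # discount repeated tokens
    z = x / T
    e = np.exp(z - z.max())
    return e / e.sum()
```

With near-greedy decoding, this keeps the model's top choice unless that choice has already been emitted, in which case its discounted probability lets an alternative win.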

4.2 Control Codes

Style by domain.

Wikipedia: Anarchism is a political philosophy that advocates the abolition of all forms of hierarchy and domination, including capitalism, patriarchy, racism, sexism, heterosexism and other oppressive social structures.
The term "anarchism" was coined by Pierre-Joseph Proudhon in his book "The Poverty of Philosophy" (1844). It has been used to describe various movements within anarchism since then. In its modern sense, it refers to an ideology or movement advocating for social, political, economic and/or cultural change through direct action against existing institutions and practices. Anarchists are often associated with radical left-wing politics, but they have also advocated many different types of libertarian socialism. Some anarchists advocate anarchist communism, others support anarcho-syndicalism, while still others reject both socialism and communism. Many anarchists believe that capitalism should be abolished entirely, though some argue that this would require overthrowing capitalism. Other anarchists oppose any form of state control over private property, such as land ownership, which they see as inherently oppressive. They may favor limited government regulation on certain aspects of life, such as drug use, prostitution, gambling, pornography, abortion, euthanasia, etc.
History.
Early history.
Anarchist ideas were first developed during the French Revolution (1789–1799). During these years, there was widespread opposition to traditional society based upon religion, class, gender, race, ethnicity, nationality, language, education, wealth, and so on. This led to the development of new ideologies like socialism, liberalism, republicanism, communitarianism, individualism/egoism/
Horror: A knife handle pulled through the open hole in the front. I jumped when the knife hit.
Eyes widened in horror. Her scream was the only sound I heard besides her sobs.
The spider touched her feet as it started to dig into the top of her arch. The creature’s tears began to flow.
The spider looked up to her and looked back at me with eyes filled with burning tears. My heart started to race
Reviews: A knife is a tool and this one does the job well.
Rating: 4.0
I bought these for my husband who has been using them to cut up his own meat since he got them. He says they are very sharp so be careful when you use them, but that doesn’t seem like much of an issue because he’s used it on everything from chicken breasts to beef tenderloin
Relationships: My neighbor is a jerk and I don’t know what to do
Text: So my neighbors are really nice people. They have been for years. We live in an apartment complex so we get along great.
But recently they started acting like jerks
Legal: My neighbor is threatening to sue me for not letting him use my pool
Text: I live in a small town and have been renting out the back of my house since last year. The owner has had issues with people using it as well but he never really took any action against them. Last week someone from his property management company came by asking if they could come over because there was an issue with some kids playing on it. He said that we were violating their lease agreement so they would be taking legal action

Most control codes for our model specify the overall style of generated text by indicating a particular domain of training data. The examples in Table 1 demonstrate that even for identical prompts, control codes allow for predictable variation in generation. The examples in Table 2 show how CTRL can generate domain-specific text without any prompt.

Science: Title: Scientists have discovered a new type of bacteria that can survive in the presence of high levels of carbon dioxide
Text: [Link to article](http://news.nationalgeographic.com/news/2011/07/110717-methane-bacteria-carbon-dioxide-environment-science/)
The researchers found that this particular strain of methane bacterium is able to thrive at concentrations up to 10 times higher than normal atmospheric levels.
They also note that these findings are consistent with previous research showing that some types of microbes may be capable of surviving under extreme conditions such as those associated with global warming.
Politics: Title: The US is the only country in history to have a national debt of more than $20 trillion.
Text: I know this isn’t exactly an accurate representation but it gives you some idea how much money we are talking about.
http://en.wikipedia.org/wiki/National_debt#United_States
Running: Text: I have been running for about a year and a half now but never really got into it.
This past weekend was my first time doing it in the last few months.
My goal is to run at least 3 miles every day this week.
Horror: Text: I was a little girl when my parents got divorced. My dad had been in the military for years and he left me with my mom. She worked as an RN at a hospital so she could take care of me.
When we moved to our new house it took some time before things settled down. We were still living together but there wasn’t much going on. It didn’t help that my mom would get mad if someone came over or even just walked by her house.
One day while walking through the yard I noticed something out of place…
Reviews: Rating: 5.0
I have been using this product for a few years and it is the best thing on my hair. It makes your hair soft but not greasy.
Rating: 5.0
I’ve used this product before and love it. It’s light weight yet keeps my curls defined without making them crunchy!
Reviews: Rating: 1.0
I bought this for my daughter and it was a complete waste of money. The case is very flimsy and the screen protector doesn’t fit properly.
Rating: 1.0
This product did not work at all. It would charge but when you unplug it, it does nothing. Very disappointed.

More complex control codes.

Additional control codes can be appended to the domain code to further constrain generation. The examples in Table 2 demonstrate constraints specifying that the model should begin with a title, or that reviews should carry a particular rating.

Examples of more advanced control are given in Table 3. In our version of OpenWebText, we include the URL after each document as the start of the input sequence. During training, CTRL learns relationships between the structure of these URLs and the text that follows. At inference, novel URLs can be used to specify a variety of features: domain, subdomain, entities, entity relations, and even dates.

Links: https://www.cnn.com/2007/09/20/us-president-meets-british-pm
LONDON, England (CNN) – U.S. President George W. Bush met with British Prime Minister Tony Blair on Monday to discuss the war in Iraq, according to a statement from Blair’s office.
The meeting was held at 10 Downing Street and lasted about an hour.
"The two leaders discussed Iraq and other international issues of mutual concern," said Blair.
Bush, who is scheduled to meet Wednesday with Russian President Vladimir Putin, will also visit Germany for talks later this week.
In his statement, Blair said, "We agreed that we should continue our efforts together to bring peace and stability to Iraq. We both reaffirmed our commitment to working closely together, as well as to continuing to work constructively toward achieving lasting security and prosperity throughout the Middle East region."
Bush’s trip comes after he visited Britain last week where he spoke out against terrorism while visiting Buckingham Palace.
He has been criticized by some lawmakers over what they say are insufficient military resources being devoted to fighting terrorism.
Links: https://www.cnn.com/2014/09/20/us-president-meets-british-pm
JUST WATCHED
Obama meets with British PM
MUST WATCH
Story highlights
President Barack Obama met with Britain’s Prime Minister David Cameron
Links: https://www.cnn.com/2018/09/20/us-president-meets-british-pm
JUST WATCHED
Trump and May meet for first time
MUST WATCH
Washington (CNN) President Donald Trump, who has been criticized by some in the UK over his decision to leave the European Union, met with British Prime Minister Theresa May, a White House official said on Thursday.
Links: https://www.cnn.com/09/20/2018/star-spotted
JUST WATCHED
Star spotted: The world’s most elusive star
MUST WATCH
Story highlights
A new study suggests that the Milky Way is home to a supermassive black hole
It may be one of our closest galactic neighbors
(CNN) Astronomers have discovered what they believe could be an extremely rare astronomical event: An object so bright it can only be seen with powerful telescopes.
This object, dubbed SN2014J, was first detected by NASA’s Spitzer Space Telescope in 2014.
Links: https://www.etonline.com/09/20/2018/star-spotted
EXCLUSIVE: Star Spotted in New York City
By Kelly Marot — September 20, 2018 10:51 AM PDT
Getty Images
The star of the new season of “Stranger Things” is making a splash on her own turf — and it’s not even Halloween.
Winona Ryder has been seen out and about with friends at night time in Manhattan this week, E! News can exclusively reveal.
A source tells us, “She was hanging around with some friends, but she wasn’t doing anything too crazy. She just wanted to get away from it all for a little bit.”
Links: https://www.cnn.com/style/09/20/2018/george-clooney-interview
George Clooney on the future of his acting career
By
Updated 10:51 AM ET, Thu September 20, 2018
Chat with us in Facebook Messenger. Find out what’s happening in the world as it unfolds.
Photos: George Clooney, ’Ocean’s 8’
Actor George Clooney attends a photocall for "Ocean’s 8" at Grauman’s Chinese Theatre on August 31, 2018, in Los Angeles.
Links: https://www.cnn.com/politics/09/20/2018/george-clooney-interview
JUST WATCHED
George Clooney on the Trump administration
MUST WATCH
(CNN) Actor and activist George Clooney, who has been a vocal critic of President Donald Trump, said he is "ready to go back into the political arena" after his role in an anti-Trump documentary was cut from theaters this week.
Table 3: CTRL is trained with links as control codes. Links provide a way to specify domain, subdomain, entities, entity relations, and even dates. Note that us-president is interpreted differently by the model depending on the date used (2007, 2014, 2018). Similarly, star is interpreted differently based on the domain (cnn vs. etonline), and the topic (style vs. politics) can be varied even for identical entities (george-clooney).

Triggering specific tasks.

Questions: Q: What is the capital of India? A: New Delhi
Q: Which city in India has a large population of people from Gujarat?
A: Ahmedabad
Q: How many languages are spoken by Indians living abroad?
A: more than 100
Translation: English : We release a new model for coherent language generation ; French : Nous publions un nouveau modéle de génération cohérente du langage
English : The European Parliament has just adopted the report on the proposal for a regulation of the European Parliament and of the Council establishing an action programme to promote multilingualism in Europe. ; French : Le Parlement européen vient d’adopter le rapport sur la proposition de règlement du Parlement et du Conseil établissant un programme-cadre pour promouvoir le plurilinguisme dans l’Union européenne
Table 4: More complex templatized control codes are used for task-specific generation.

A small number of control codes are related to specific tasks like question answering and translation. These codes constrain the generation process the most, by triggering task-specific generation. In Table 4, we demonstrate relatively complex control codes for question answering and machine translation that act as a template mixed with a natural language prompt.

Zero-shot code-mixing.

Diet: English : I lost 10 kgs! ; German : Ich habe 10 Kilogramm verloren!
English : It is a great achievement for me and my family. ; German : Das ist eine große Leistung für mich und meine Familie.
English : Thank you very much to all the people who have helped me. ; German : Vielen Dank an alle, die mir geholfen haben.
Politics: Title: Les Etats-Unis sont un pays de droite
Text: Je suis Américain et je ne peux pas supporter ce que font les USA en matière d’immigration
J’ai déjà écrit sur le sujet mais il n’y a aucune réaction http://www.reddit.com/r/politics/comments/1tqjzw/les_etats_unes_son_tres-de-gauche/
Table 5: Some codes can be mixed to generate text with novel cross-over behavior. Here we present two examples. In the first, we mix translation codes into the Diet domain; the model alternately generates English and German sentences while respecting the Diet domain and remaining coherent across translations. In the second, the Politics domain is mixed with a French prompt despite this combination never appearing in training.

In the first example we mix a diet subreddit (r/keto) with machine translation control codes for English and German. In contrast to using the Translation code alone (Table 4), the text generated with mixed codes is coherent across multiple translated lines. This structure is an influence of Diet, because that domain contained multiline examples in the training data, whereas the translation data consisted of shuffled single lines. In the second example we mix the politics subreddit (r/politics) with a prompt that starts in French, though no examples of this kind were found in the training data.

5 Source Attribution

Query Prompt Attributed Sources
Global warming is a lie. r/unpopularopinion, r/conspiracy, r/science
Global warming is a lie r/eli5, r/science, r/unpopularopinion
Global warming is a real phenomenon r/eli5, r/science, r/changemyview
Global warming is a real phenomenon. OpenWebText, r/changemyview, r/science
I don’t think women should be allowed to vote. r/christianity, r/atheism, r/unpopularopinion
Carbs are your enemy when you want to get lean. r/fitness, r/loseit, r/keto
I just want to be a fun aunt. I’m not interested in babies. r/babybumps, r/childfree, r/twoxchromosome
My landlord is suing me for unpaid rent. r/legaladvice, r/personalfinance, r/frugal
FROM fairest creatures we desire increase, / That thereby beauty’s rose might never die Gutenberg, Wikipedia, OpenWebText
Table 6: We probe CTRL for learned correlations between sequences and domains. Note that this procedure is sensitive to small changes in the prompt. For example, "Global warming is a lie" differs from "Global warming is a lie." r/eli5 stands for "Explain like I’m five". Attribution experiments use the model trained on sequences of length 256; it was trained longer and provides a better estimate of the source. Source attribution cannot be considered a measure of veracity, but only a measure of how much each domain token influences a given sequence.

The domain control codes can be used to partition the training data into mutually exclusive sets. This supports a simple method for determining which subsets of the training data the language model considers most likely given a sequence. Recall that the language model has learned a distribution $p(x \mid c)$. By specifying a prior $p(c)$ over domain control codes, it is straightforward to compute a ranking of domains:

$$p(c \mid x) \propto p(x \mid c)\, p(c)$$

We found that the empirical prior of the training data weights domains with large amounts of data too heavily. Instead, we use a uniform prior over the domain control codes. Examples can be found in Table 6.
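Given per-domain log-likelihoods $\log p(x \mid c)$ from the model, the ranking under a uniform prior can be sketched as follows. The numbers in the example are hypothetical; a real ranking would query the trained model:

```python
import numpy as np

def rank_domains(log_likelihoods):
    """Rank domain control codes by p(c | x) proportional to p(x | c) p(c),
    with a uniform prior p(c) over codes. Input: a dict mapping each code
    to log p(x | c)."""
    codes = list(log_likelihoods)
    ll = np.array([log_likelihoods[c] for c in codes])
    post = np.exp(ll - ll.max())       # stable exponentiation
    post = post / post.sum()           # uniform prior cancels here
    order = np.argsort(-post)
    return [(codes[i], float(post[i])) for i in order]

# Hypothetical log-likelihoods for one query sequence.
ranked = rank_domains({"r/science": -40.0,
                       "r/conspiracy": -35.0,
                       "Wikipedia": -50.0})
```

Because the prior is uniform, it drops out in the normalization, so the ranking is driven entirely by which domain code makes the sequence most likely.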

We note that the data used to train this model does not have universal coverage and contains the cultural associations present in the original sources. All applications of the model inherently depend on those original associations for prediction. In fact, this method of source attribution relies on exploiting the original associations to establish relationships between the language model and its training data.

The model does not have a notion of whether any particular cultural association is good or bad, right or wrong, true or false. It only learns correlations between cultural associations and domains. This is evidenced by the fact that contradictory statements are often attributed to the same sources: competing claims often appear in the same contexts. CTRL provides model-based evidence that certain domains are more likely to contain language similar to given statements, but it should not be used to make normative or prescriptive claims. It is a descriptive tool for analyzing correlations in large amounts of text.

6 Related Work

Language modeling.

Language models (Bengio et al., 2003) have played an important role in natural language processing through transferrable word vectors (Mikolov et al., 2013), contextualized word vectors (Peters et al., 2018; Devlin et al., 2018; Lample and Conneau, 2019), and models (Howard and Ruder, 2018; Radford et al., 2018). Recent work on memory mechanisms (Dai et al., 2019; Lample et al., 2019) has improved perplexities on the most common benchmarks, and even without these memories, large Transformer architectures (Vaswani et al., 2017) like GPT-2 (Radford et al., 2019), OpenGPT-2 (https://blog.usejournal.com/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc), and Megatron (https://github.com/NVIDIA/Megatron-LM) can achieve state-of-the-art results without directly training for any particular language modeling benchmark. Because these latter language models are trained on far more diverse data than is used in the supervised setting, they demonstrate impressive text generation capabilities (Radford et al., 2019; Zellers et al., 2019).

Multi-task learning.

These models demonstrate the potential to learn multiple tasks, as well as quick adaptation to patterns in input prompts (Radford et al., 2019). This suggests that language models can offer an alternative to supervised multi-task learning as framed by several recent benchmarks (Wang et al., 2018; McCann et al., 2018). Language models might also offer a foundation to extend proposals of unified, multi-task systems for all of NLP (Collobert and Weston, 2008; Collobert et al., 2011), parsing and tagging (Hashimoto et al., 2016), multiple languages (Wu et al., 2016; Johnson et al., 2017), and multiple modalities (Luong et al., 2015; Kaiser et al., 2017). Several works have pointed to natural language as a means for controlling these multi-task systems (McCann et al., 2018; Radford et al., 2019; Keskar et al., 2019), and several point to the benefits of a code book either specified explicitly (Wu et al., 2016) or learned in a latent space (Kaiser et al., 2018). This work attempts to balance these approaches.

Sampling methods and coverage mechanisms.

Recent work in sampling methods for text generation has focused on reducing repetition by replacing it with novel, coherent text (Fan et al., 2018; Holtzman et al., 2019). The problem of repetition can instead be approached by altering the training objectives, as with coverage mechanisms (See et al., 2017) and context-based losses (Welleck et al., 2019). When prioritizing control, the trade-off between novelty in the generated text and consistency with prompts and prior generated text remains a difficult challenge. This work found that inference-time methods (Fan et al., 2018; Holtzman et al., 2019) that are closer in behavior to context-based losses (See et al., 2017; Welleck et al., 2019) provide a reasonable solution, as long as the distribution of the language model is sufficiently confident in its decisions.
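As one concrete instance of such an inference-time method, repetition can be discouraged by discounting the scores of previously generated tokens at sampling time. The sketch below illustrates this general idea over a toy vocabulary; it is not the exact sampling procedure used with CTRL, and the penalty and temperature values are assumptions.

```python
import math
import random

def penalized_sample(logits, generated, penalty=1.2, temperature=0.8, rng=None):
    """Sample the next token id, discounting tokens that already appeared.

    Dividing the positive scores of previously generated tokens by a
    penalty > 1 (and multiplying negative ones) discourages repetition at
    inference time without changing the training objective.
    """
    rng = rng or random.Random(0)
    adjusted = []
    for tok, logit in enumerate(logits):
        if tok in generated:
            # Penalize seen tokens: push positive logits down, negative further down.
            logit = logit / penalty if logit > 0 else logit * penalty
        adjusted.append(logit / temperature)
    # Softmax over the adjusted scores.
    m = max(adjusted)
    probs = [math.exp(a - m) for a in adjusted]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Inverse-CDF sampling.
    r, acc = rng.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r <= acc:
            return tok
    return len(probs) - 1
```

Raising the penalty shifts probability mass away from already-generated tokens, while the temperature controls how sharply the model's confidence is trusted; both knobs trade novelty against consistency with prior text, as discussed above.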

7 Future Directions

More control codes and finer-grained control.

The particular choice of control codes in this work is intended to represent a reasonably broad variety of control over domain, topic, entities, entity relations, and dates. A very flexible means of control is through the natural structure of the internet in the form of URLs. Many of the domains that were mapped in this work to a single control code (e.g. Wikipedia, Project Gutenberg) could be refined to provide more fine-grained control, either through further exploitation of URL structure (en.wikipedia.org, de.wikipedia.org, en.wikipedia.org/wiki/Anarchism, en.wikipedia.org/wiki/Anarchism#History) or through the manual extraction of structure already present in the data (e.g. Books → Author → Title → Chapter). We hope future work explores extensions of CTRL to new domains in ways that provide further insight into controllable text generation.
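The URL-based refinement described above can be sketched as a function that expands a single URL into progressively finer control codes. The mapping below is hypothetical and for illustration only; it is not the preprocessing used to train CTRL.

```python
from urllib.parse import urlparse

def url_to_control_codes(url):
    """Derive progressively finer control codes from a URL's natural structure.

    Returns codes from coarse (domain) to fine (page section), illustrating
    how a single coarse code could be refined without any manual annotation.
    """
    parsed = urlparse(url)
    codes = [parsed.netloc]  # coarsest code, e.g. en.wikipedia.org
    path = [p for p in parsed.path.split("/") if p]
    # One additional code per path component, each strictly finer-grained.
    for i in range(len(path)):
        codes.append(parsed.netloc + "/" + "/".join(path[: i + 1]))
    if parsed.fragment:  # finest code, e.g. a section anchor like #History
        codes.append(codes[-1] + "#" + parsed.fragment)
    return codes

codes = url_to_control_codes("https://en.wikipedia.org/wiki/Anarchism#History")
```

For the Wikipedia example from the text, this yields codes ranging from the whole site down to a single article section, so a user could condition generation at whichever granularity the training data supports.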

Extensions to other areas in NLP.

This work suggests that including data for specific tasks need not harm the general nature of an unsupervised learning process. For important skills, the inclusion of supervised data or task-specific data generated through unsupervised means (Artetxe et al., 2017; Lewis et al., 2019) can lead to obvious improvements. While this work experimented with trivia-style question answering (without context documents) and small amounts of machine translation data, it remains an open question whether these language models can learn to effectively perform tasks like extractive question answering or state-of-the-art multilingual machine translation while still preserving general pattern recognition and text generation functionality.

Many tasks present difficult challenges to the supervised setting. Commonsense reasoning (Levesque et al., 2012) and abstractive summarization (Rush et al., 2015) represent two areas where these challenges remain readily apparent (Kryściński et al., 2019). Yet language models show potential for mitigating these problems directly (Trinh and Le, 2018; Radford et al., 2019) or indirectly (Rajani et al., 2019; Xenouleas et al., 2019; Scialom et al., 2019). We hope that in future work CTRL can be extended to far more tasks through the use of both unsupervised and supervised techniques.

Analyzing the relationships between language models and training data.

CTRL is trained on a small subset of the possible data available. Therefore the model is biased towards the patterns of language used in the training data. The data is likely not representative of many linguistic communities, but CTRL offers an explicit method for analyzing the relationship between the model and its current training data. As methods improve, more data is collected, and training of these large models continues, we hope to use this tool to better understand the particular cultural associations the model learns from each data source.

Making the interface between humans and language models more explicit and intuitive.

CTRL is designed to make the interface between humans and language models more intuitive. Text generation can be a powerful tool for enhancing creativity and exploration. In future work, we hope to study how the beneficial applications of such models can be enhanced by providing more control to human users.

8 CTRL-ALT-DEL: The Ethics of Large Language Models

Openness and replicability are central aspects of the scientific ethos that, prima facie, suggest the release of complete scientific research results. We reify these principles by releasing all trained CTRL models.

Although much scientific research and innovation can benefit the public, it may also be diverted to harmful uses or have unintended negative impacts (without animus). Brundage et al. (2019), among others, have argued artificial intelligence has such an omni-use character and have suggested governance policies emerging from the responsible innovation literature (Brundage, 2016). Historical evidence has pointed to the inadequacy of self-moratoriums for governing omni-use technologies (Kaiser and Moreno, 2012); we take a course of action that differs from such self-regulation.

Our actions reflect principles from a recent sociology-based AI governance framework that aims to expand responsible innovation to consider networks of users, dynamics, and feedback (Varshney et al., 2019).

  • Rather than self-governance, we sought to diversify inputs to governance through pre-release review from experts at the Partnership on AI (PAI). These experts, in turn, drew on emerging norms and governance processes that incorporate a broad set of values from across society.

  • Prior to release, the research team conducted a technology foresight exercise to anticipate possible malicious use cases. In particular, we used a scenario-planning approach to technology foresight that systematically attempts to envision plausible longer-term future states of science, technology, and society. This anticipatory focus on possibilities rather than probabilities lessens several shortcomings of formal risk assessment, which has proven ineffective, in the face of contested assumptions, at identifying the most profound future impacts of innovation (Stilgoe et al., 2013).

  • As part of our model release, we include a code of conduct in the README at github.com/salesforce/ctrl. This code of conduct is modeled after emerging community norms ensconced in the Do No Harm and Just World Licenses. Simultaneously recognizing that it has no legal force and that users are agents of technological change embedded in social networks, the aim is to encourage reflection at the consumption junction (Cowan, 1987) through norm-setting and reduce unintended uses.

  • The README also includes a subset of the questions that the team discussed when deliberating release of the models, drawn from early drafts of community-driven PAI documents (to be released in the near future). This may further encourage users to reflect on norms and responsibilities associated with models that generate artificial content. In particular, users are asked to share answers to the included questions, to pose further questions, and suggest solutions by emailing ctrl-monitoring@salesforce.com.

  • Finally, the README asks users to develop appropriate documentation (Partnership on AI, 2019; Arnold et al., 2018; Mitchell et al., 2019) when building on CTRL and to tell the research team how they are using CTRL by emailing ctrl-monitoring@salesforce.com. This facilitates a post-release monitoring plan that observes how people are using CTRL in the wild (together with active observations). Such post-market plans recognize that most innovations are unexpected and hard to forecast. It is intended to enable a responsive approach to responsible innovation, not just with respect to harmful uses but also unintended negative impacts without animus.

9 Conclusion

With 1.6 billion parameters, CTRL is the largest publicly released language model to date. It is trained with control codes so that text generation can be more easily controlled by human users. These codes allow users to explicitly specify domain, subdomain, entities, relationships between entities, dates, and task-specific behavior. We hope that the release of this model at github.com/salesforce/ctrl pushes towards more controllable, general models for natural language processing, and we encourage future discussion about artificial generation with our team by emailing ctrl-monitoring@salesforce.com.

10 Acknowledgements

We would like to thank Kathy Baxter for her help in the ethical considerations of our work and facilitating the external review process; Srinath Meadusani, Lavanya Karanam, Ning Dong, and Navin Ramineni for their help with setting up and maintaining compute infrastructure; Zak Stone and his team at Google for assistance with TPU infrastructure and code; and Joseph Olsen, Roy Davis, Joshua Simmons, Denise Lo, and Sam Edwards for their help with open sourcing.


  • M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. (2016) Tensorflow: a system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283. Cited by: §3.2, §3.2.
  • R. Anil, V. Gupta, T. Koren, and Y. Singer (2019) Memory-efficient adaptive optimization for large-scale learning. arXiv preprint arXiv:1901.11150. Cited by: §3.2.
  • [3] (2019) Annotation and benchmarking on understanding and transparency of machine learning lifecycles (ABOUT ML). Partnership on AI. Note: Partnership on AI (PAI), v0 External Links: Link Cited by: 5th item.
  • M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. Cited by: §1.
  • M. Arnold, R. K. E. Bellamy, M. Hind, S. Houde, S. Mehta, A. Mojsilovic, R. Nair, K. Natesan Ramamurthy, D. Reimer, A. Olteanu, D. Piorkowski, J. Tsay, and K. R. Varshney (2018) FactSheets: increasing trust in AI services through supplier’s declarations of conformity. Note: arXiv:1808.07261 [cs.CY]. Cited by: 5th item.
  • M. Artetxe, G. Labaka, E. Agirre, and K. Cho (2017) Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. Cited by: §7.
  • J. Ba, R. Kiros, and G. E. Hinton (2016) Layer normalization. CoRR abs/1607.06450. Cited by: §3.
  • L. Barrault, O. Bojar, M. R. Costa-jussà, C. Federmann, M. Fishel, Y. Graham, B. Haddow, M. Huck, P. Koehn, S. Malmasi, et al. (2019) Findings of the 2019 conference on machine translation (wmt19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pp. 1–61. Cited by: Table 7, §3.1.
  • Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin (2003) A neural probabilistic language model. Journal of machine learning research 3 (Feb), pp. 1137–1155. Cited by: §2, §6.
  • T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean (2007) Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pp. 858–867. Cited by: §1.
  • M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, P. Scharre, T. Zeitzoff, B. Filar, H. Anderson, H. Roff, G. C. Allen, J. Steinhardt, C. Flynn, S. Ó hÉigeartaigh, S. Beard, H. Belfield, S. Farquhar, C. Lyle, R. Crootof, O. Evans, M. Page, J. Bryson, R. Yampolskiy, and D. Amodei (2019) The malicious use of artificial intelligence: forecasting, prevention, and mitigation. Note: arXiv:1802.07228 [cs.AI]. Cited by: §8.
  • M. Brundage (2016) Artificial intelligence and responsible innovation. In Fundamental Issues of Artificial Intelligence, V. C. Müller (Ed.), pp. 543–554. Cited by: §8.
  • X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel (2016) Infogan: interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pp. 2172–2180. Cited by: §1.
  • R. Child, S. Gray, A. Radford, and I. Sutskever (2019) Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. Cited by: §3.
  • R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa (2011) Natural language processing (almost) from scratch. Journal of machine learning research 12 (Aug), pp. 2493–2537. Cited by: §6.
  • R. Collobert and J. Weston (2008) A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. Cited by: §6.
  • R. S. Cowan (1987) The consumption junction: a proposal for research strategies in the sociology of technology. In The Social Construction of Technological Systems, W. E. Bijker, T. P. Hughes, and T. J. Pinch (Eds.), pp. 261–280. Cited by: 3rd item.
  • Z. Dai, Z. Yang, Y. Yang, W. W. Cohen, J. Carbonell, Q. V. Le, and R. Salakhutdinov (2019) Transformer-xl: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Cited by: §2, §6.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §6.
  • J. Duchi, E. Hazan, and Y. Singer (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159. Cited by: §3.2.
  • M. Dunn, L. Sagun, M. Higgins, V. U. Guney, V. Cirik, and K. Cho (2017) Searchqa: a new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179. Cited by: §3.1.
  • A. Fan, Y. Jernite, E. Perez, D. Grangier, J. Weston, and M. Auli (2019) ELI5: long form question answering. arXiv preprint arXiv:1907.09190. Cited by: Table 7, §3.1.
  • A. Fan, M. Lewis, and Y. Dauphin (2018) Hierarchical neural story generation. arXiv preprint arXiv:1805.04833. Cited by: §1, §6.
  • B. Ginsburg, P. Castonguay, O. Hrinchuk, O. Kuchaiev, V. Lavrukhin, R. Leary, J. Li, H. Nguyen, and J. M. Cohen (2019) Stochastic gradient methods with layer-wise adaptive moments for training of deep networks. arXiv preprint arXiv:1905.11286. Cited by: §3.2.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1.
  • M. Grusky, M. Naaman, and Y. Artzi (2018) NEWSROOM: a dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana, pp. 708–719. External Links: Link Cited by: Table 7, §3.1.
  • K. Hashimoto, C. Xiong, Y. Tsuruoka, and R. Socher (2016) A joint many-task model: growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587. Cited by: §6.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §3.
  • K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom (2015) Teaching machines to read and comprehend. In Advances in neural information processing systems, pp. 1693–1701. Cited by: §3.1.
  • A. Holtzman, J. Buys, M. Forbes, and Y. Choi (2019) The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Cited by: §4.1, §4.1, §6.
  • J. Howard and S. Ruder (2018) Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Cited by: §1, §6.
  • H. Inan, K. Khosravi, and R. Socher (2016) Tying word vectors and word classifiers: a loss framework for language modeling. arXiv preprint arXiv:1611.01462. Cited by: §3.2.
  • M. Johnson, M. Schuster, Q. V. Le, M. Krikun, Y. Wu, Z. Chen, N. Thorat, F. Viégas, M. Wattenberg, G. Corrado, et al. (2017) Google’s multilingual neural machine translation system: enabling zero-shot translation. Transactions of the Association for Computational Linguistics 5, pp. 339–351. Cited by: §3.2, §6.
  • M. Joshi, E. Choi, D. S. Weld, and L. Zettlemoyer (2017) Triviaqa: a large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Cited by: §3.1.
  • D. Kaiser and J. Moreno (2012) Self-censorship is not enough. Nature 492 (7429), pp. 345–347. External Links: Document Cited by: §8.
  • L. Kaiser, A. N. Gomez, N. Shazeer, A. Vaswani, N. Parmar, L. Jones, and J. Uszkoreit (2017) One model to learn them all. arXiv preprint arXiv:1706.05137. Cited by: §6.
  • Ł. Kaiser, A. Roy, A. Vaswani, N. Parmar, S. Bengio, J. Uszkoreit, and N. Shazeer (2018) Fast decoding in sequence models using discrete latent variables. arXiv preprint arXiv:1803.03382. Cited by: §6.
  • N. S. Keskar, B. McCann, C. Xiong, and R. Socher (2019) Unifying question answering and text classification via span extraction. arXiv preprint arXiv:1904.09286. Cited by: §6.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.2.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §1.
  • R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler (2015) Skip-thought vectors. In Advances in neural information processing systems, pp. 3294–3302. Cited by: §1.
  • W. Kryściński, N. S. Keskar, B. McCann, C. Xiong, and R. Socher (2019) Neural text summarization: a critical evaluation. arXiv preprint arXiv:1908.08960. Cited by: §7.
  • T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, et al. (2019) Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics 7, pp. 453–466. Cited by: §3.1.
  • G. Lample and A. Conneau (2019) Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. Cited by: §6.
  • G. Lample, A. Sablayrolles, M. Ranzato, L. Denoyer, and H. Jégou (2019) Large memory layers with product keys. arXiv preprint arXiv:1907.05242. Cited by: §6.
  • H. Levesque, E. Davis, and L. Morgenstern (2012) The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cited by: §7.
  • P. Lewis, L. Denoyer, and S. Riedel (2019) Unsupervised question answering by cloze translation. arXiv preprint arXiv:1906.04980. Cited by: §7.
  • M. Luong, Q. V. Le, I. Sutskever, O. Vinyals, and L. Kaiser (2015) Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Cited by: §6.
  • J. McAuley, C. Targett, Q. Shi, and A. Van Den Hengel (2015) Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 43–52. Cited by: Table 7, §3.1.
  • B. McCann, J. Bradbury, C. Xiong, and R. Socher (2017) Learned in translation: contextualized word vectors. In Advances in Neural Information Processing Systems, pp. 6294–6305. Cited by: §1.
  • B. McCann, N. S. Keskar, C. Xiong, and R. Socher (2018) The natural language decathlon: multitask learning as question answering. arXiv preprint arXiv:1806.08730. Cited by: §1, §3.2, §6.
  • S. Merity, N. S. Keskar, and R. Socher (2017) Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Cited by: §3.2.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §1, §6.
  • M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru (2019) Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), External Links: Document Cited by: 5th item.
  • V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 807–814. Cited by: §3.
  • R. Nallapati, B. Zhou, C. Gulcehre, B. Xiang, et al. (2016) Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Cited by: Table 7.
  • M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Cited by: §1, §6.
  • C. W. Pfaff (1979) Constraints on language mixing: intrasentential code-switching and borrowing in spanish/english. Language, pp. 291–318. Cited by: §1.
  • S. Poplack (1980) Sometimes I’ll start a sentence in Spanish y termino en español: toward a typology of code-switching. Linguistics 18 (7-8), pp. 581–618. Cited by: §1.
  • O. Press and L. Wolf (2016) Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859. Cited by: §3.2.
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/
    Cited by: §1, §6.
  • A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. URL https://d4mucfpksywv.cloudfront.net
    Cited by: §1, §2, §4.1, §6, §6, §7.
  • N. F. Rajani, B. McCann, C. Xiong, and R. Socher (2019) Explain yourself! leveraging language models for commonsense reasoning. arXiv preprint arXiv:1906.02361. Cited by: §7.
  • P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Cited by: §3.1.
  • A. M. Rush, S. Chopra, and J. Weston (2015) A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Cited by: §1, §7.
  • E. Sandhaus (2008) The new york times annotated corpus. Linguistic Data Consortium, Philadelphia 6 (12), pp. e26752. Cited by: §3.1.
  • T. Scialom, S. Lamprier, B. Piwowarski, and J. Staiano (2019) Answers unite! unsupervised metrics for reinforced summarization models. arXiv preprint arXiv:1909.01610. Cited by: §7.
  • A. See, P. J. Liu, and C. D. Manning (2017) Get to the point: summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vol. 1, pp. 1073–1083. Cited by: §4.1, §6.
  • R. Sennrich, B. Haddow, and A. Birch (2015) Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Cited by: §3.2.
  • N. Shazeer and M. Stern (2018) Adafactor: adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. Cited by: §3.2.
  • J. Stilgoe, R. Owen, and P. Macnaghten (2013) Developing a framework for responsible innovation. Research Policy 42 (9), pp. 1568–1580. External Links: Document Cited by: 2nd item.
  • I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112. Cited by: §1.
  • T. H. Trinh and Q. V. Le (2018) A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Cited by: §7.
  • A. Trischler, T. Wang, X. Yuan, J. Harris, A. Sordoni, P. Bachman, and K. Suleman (2016) Newsqa: a machine comprehension dataset. arXiv preprint arXiv:1611.09830. Cited by: §3.1.
  • L. R. Varshney, N. S. Keskar, and R. Socher (2019) Pretrained AI models: performativity, mobility, and change. Note: arXiv:1909.03290 [cs.CY]. Cited by: §8.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 5998–6008. External Links: Link Cited by: §3, §6.
  • A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman (2018) Glue: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Cited by: §6.
  • S. Welleck, I. Kulikov, S. Roller, E. Dinan, K. Cho, and J. Weston (2019) Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Cited by: §4.1, §6.
  • Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. (2016) Google’s neural machine translation system: bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Cited by: §3.2, §6.
  • S. Xenouleas, P. Malakasiotis, M. Apidianaki, and I. Androutsopoulos (2019) SumQE: a bert-based summary quality estimation model. arXiv preprint arXiv:1909.00578. Cited by: §7.
  • Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning (2018) Hotpotqa: a dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. Cited by: §3.1.
  • R. Zellers, A. Holtzman, H. Rashkin, Y. Bisk, A. Farhadi, F. Roesner, and Y. Choi (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616. Cited by: §6.

Appendix A Data sources and breakdown

Control Code Description
Wikipedia English Wikipedia
Books Books from Project Gutenberg
Reviews Amazon Reviews data (McAuley et al., 2015)
Links OpenWebText (See Sec. 3.2)
Translation WMT translation data (Barrault et al., 2019)
News News articles from CNN/DailyMail (Nallapati et al., 2016), New York Times, and Newsroom (Grusky et al., 2018)
multilingual Wikipedias in German, Spanish and French
Questions (Questions and answers only) MRQA shared task (See Section 3.1)
Explain (Only main post) (Fan et al., 2019)
Sub-reddit data (Title, Text and Score/Karma) collected from pushshift.io.
Alone r/childfree
Atheism r/atheism
Christianity r/christianity
Computing r/computing
Confession r/offmychest
Confessions r/confession
Conspiracy r/conspiracy
Diet r/keto
Extract r/childfree
Feminism r/twoxchromosome
Finance r/personalfinance
Fitness r/fitness
Funny r/funny
Gaming r/gaming
Horror r/nosleep
Human r/nfy
India r/india
Joke r/jokes
Joker r/joke
Learned r/todayilearned
Legal r/legaladvice
Movies r/movies
Netflix r/netflix
Norman r/lifeofnorman
Notion r/unpopularopinion
Opinion r/changemyview
Politics r/politics
Pregnancy r/babybumps
Relationship r/relationshipadvice
Relationships r/relationships
Retail r/talesfromretail
Running r/running
Saving r/frugal
Scary r/scaryshortstories
Science r/science
Technologies r/technology
Teenage r/teenager
Thoughts r/showerthoughts
Tip r/lifeprotips
Weight r/loseit
Writing r/writingprompts
Table 7: Data and control codes. Wikipedia, Books, News and multilingual have no secondary code. Reviews can be followed by Rating: and a value of {1.0, 2.0, 3.0, 4.0, 5.0}. For Links, a full or partial URL can be provided (see Table 3). For all the Reddit data, the secondary code can be Title: or Text:, which are the title and text of the post, respectively.