Modeling Language Vagueness in Privacy Policies using Deep Neural Networks
Fei Liu, Nicole Lee Fella, Kexin Liao
University of Central Florida, 4000 Central Florida Blvd., Orlando, Florida 32816
Manhattan College, 4513 Manhattan College Parkway, Riverdale, NY 10471
email@example.com, firstname.lastname@example.org, email@example.com
Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
One might wonder why privacy notices need to adopt such sophisticated language in the first place. Bhatia et al. (?) suggest two causes in their recent work. First, the policies need to be comprehensive, covering all possible cases such as physical places (e.g., stores, offices) and web/mobile platforms. Second, the policy statements must be accurate, meaning they are true to all data practices and systems. Clearly, it is difficult for legal counsel to anticipate all future needs, so they naturally resort to generalization and sophistication when framing the statements, introducing vagueness into the text. An example statement is: “The email address is used for sending account notifications and other system-related information as needed.”
Table 1: The 40 vague terms, grouped into four categories.
|Condition (9)|depending, necessary, appropriate, inappropriate, as needed, as applicable, otherwise reasonably, sometimes, from time to time|
|Generalization (12)|generally, mostly, widely, general, commonly, usually, normally, typically, largely, often, primarily, among other things|
|Modality (8)|may, might, can, could, would, likely|
|Numeric Quantifier (11)|anyone, certain, everyone, numerous, some, most, few, much, many, various, including but not limited to|
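Given such a list, deriving token-level vagueness labels is straightforward. The sketch below is illustrative rather than the authors' actual preprocessing: the term list is a small subset of Table 1, the tokenizer is a naive whitespace split, and multi-word terms such as “as needed” are matched longest-first.

```python
# Tag tokens as Vague (V) or Not Vague (N) against a pre-specified term
# list; multi-word terms such as "as needed" are matched longest-first.
VAGUE_TERMS = {
    ("may",), ("might",), ("likely",), ("generally",), ("certain",),
    ("as", "needed"), ("from", "time", "to", "time"),
    ("including", "but", "not", "limited", "to"),
}
MAX_LEN = max(len(t) for t in VAGUE_TERMS)

def tag_vague(tokens):
    labels = ["N"] * len(tokens)
    i = 0
    while i < len(tokens):
        for n in range(MAX_LEN, 0, -1):          # try the longest match first
            cand = tuple(t.lower() for t in tokens[i:i + n])
            if len(cand) == n and cand in VAGUE_TERMS:
                labels[i:i + n] = ["V"] * n
                i += n
                break
        else:
            i += 1
    return labels

print(tag_vague("We may share certain data as needed".split()))
# ['N', 'V', 'N', 'V', 'N', 'V', 'V']
```

Longest-first matching prevents the single-word entry from shadowing a multi-word one that begins at the same position.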
Vagueness is a linguistic phenomenon that is not yet fully studied in the natural language processing (NLP) community. A concept is considered vague if it lacks clarity or corresponds to borderline cases (e.g., tall, short). Even terms like “disability” raise questions such as “how much loss of vision is required before one is legally blind?” (https://en.wikipedia.org/wiki/Vagueness) Farkas et al. (?) introduce a shared task on detecting uncertainty cues (i.e., hedges and weasels) in biological articles and Wikipedia pages. Reidenberg et al. (?) manually analyze a set of 15 privacy policies and identify 40 vague terms (Table 1), which we also adopt in this study. Note that there appears to be a dilemma: if a collection of vague terms can be pre-specified, classifying a piece of text as vague or not seems trivial; on the other hand, given the richness of natural language, creating such a comprehensive list of vague terms can be highly challenging, if it is possible at all.
The main contribution of this work lies in learning vector representations for words in privacy policies using deep neural networks. There is one vector representation for each word token in the privacy policies. The vector representations are learned iteratively so that they perform well in two tasks: predicting the next word given its context (e.g., “we do not request any ____” → “information”) and predicting whether or not a word is in a list of pre-specified vague terms (e.g., “may” → “Vague (V)”, “email” → “Not Vague (N)”). The 40 vague terms in Table 1 are used in this study. We hypothesize that certain dimensions of the vector representation encode the semantic meaning of the word, including vagueness. These vector representations are further fed to an interactive visualization tool (LSTMVis) to test their ability to discover semantically related terms. The approach holds promise for modeling the vagueness of words in context. The visualization tool allows privacy researchers to perform knowledge discovery on website privacy policies. (We plan to release the code and models upon acceptance of the paper.)
In the U.S. Constitution, there is a “void for vagueness” doctrine. It states that a law should be specified clearly enough that the average citizen can understand it; if a rule is vague, it is unenforceable. Researchers in the law community have thus explored the vagueness of legal language and the interpretation of boundary decisions. In his seminal work, Waldron (?) distinguishes ambiguity, contestability, and vagueness, and introduces the general term “indeterminacy” to cover the three cases. Post (?) argues that legal rules cannot simply be rewritten to be more precise, since they do not exist in isolation but reflect forms of social order. Jonsson (?) suggests that vagueness in law does not call for a specific interpretation of the law itself, but only for an application of the law on a case-by-case basis. Studies in (?) suggest that the decades-old anti-hacking statute, the Computer Fraud and Abuse Act (CFAA), is in need of a face-lift: phrases such as “involve” and “other similar information” do not provide enough clarity. Liebwald (?) cautions that vagueness, in combination with the elasticity of legal interpretation, may affect the binding force of law; the paper introduces a theory called the Hyperbola of Meaning. Raffman (?) provides a characterization of linguistic vagueness: vague words possess unclear boundaries, but are distinguished from ambiguity, underspecificity, and several forms of indeterminacy. Low et al. (?) illustrate the application of the vagueness doctrine to four Supreme Court vagueness cases; they point out that when determining the vagueness of statutes, it is important to take the intersection between state and federal law into account. Hunt (?) studies “epistemicism,” which holds that vague statements are either true or false even though it is impossible to know which, and suggests that vagueness should be explained within a theory of legal interpretation.
Comprehensive studies of vagueness are still missing in the natural language processing community. Farkas et al. (?) focus on the detection of uncertainty cues and their linguistic scope in natural language texts. The motivation behind this task is to distinguish factual from uncertain information in text, which is of essential importance to information extraction. Most of the techniques involve sentence-level classification using SVMs, CRFs, and maximum entropy. However, it remains to be seen whether a word- or sentence-level classification formulation is well suited to this task. In (?), vagueness is considered a linguistic phenomenon that arises from a lack of clear boundaries and conditions, boundaries that usually do not allow a concrete distinction. Classifying text as vague or not vague can be subjective, making it important to examine agreement between interpretations and annotations. Using a naive Bayes classifier, the study shows that vague and not-vague senses can be separated.
Statistics of the dataset are illustrated in Table 2.
Table 2: Statistics of the dataset.
|total # of web privacy policies|1,010|
|total # of sentences|107,076|
|total # of word tokens|2,534,094|
|total # and % of vague tokens|59,026 (2.3%)|
|total # and % of sentences that contain at least one vague token|41,033 (38.3%)|
Modeling Language and Vagueness
So far we have demonstrated the need for understanding language vagueness and described our dataset; we now proceed to introduce a deep neural network for learning vector representations for words in privacy policies (see Figure 1 for an illustration). Traditional approaches to building feature representations have been largely based on manual feature extraction (?). The idea behind the deep neural network is that it learns to automatically construct a feature representation for each word, in the form of a dense continuous vector. The feature representation is optimized so that it performs well in two tasks: 1) predicting the next word given the previous words in the sentence, and 2) predicting whether the current word is vague or not given the context. This corresponds to a multi-task learning setting.
Deep neural networks have seen considerable success in a range of natural language processing tasks. Our work is inspired by recent advances on learning word embeddings (?; ?) and sequence-to-sequence models (?; ?).
Concretely, let $w_1, \ldots, w_N$ be an input sentence consisting of $N$ word tokens. The word tokens come from a vocabulary of size $V$. Each word is replaced by a pre-trained word embedding before it is fed to the neural network. With a slight abuse of notation, we use $w_t$ to represent the word token and $\mathbf{x}_t$ (bold-face) to represent its embedding. We use the 300-dimension word2vec embeddings pre-trained on the Google News dataset of about 100 billion words (https://code.google.com/archive/p/word2vec/). A vocabulary of 5,000 words is employed in this study ($V = 5{,}000$); these are the most frequent words in the 1,010 privacy policies. For 602 of these words no pre-trained word2vec embedding is available; we randomly initialize their embeddings using a standard normal distribution.
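This initialization can be sketched as follows. The loading of the actual word2vec binary is elided; `build_embeddings`, the toy vocabulary, and the stand-in pre-trained vectors are all illustrative.

```python
import numpy as np

EMB_DIM = 300  # word2vec dimensionality used in the paper

def build_embeddings(vocab, pretrained, seed=0):
    """Reuse a pre-trained vector when available; otherwise draw the
    embedding from a standard normal distribution."""
    rng = np.random.default_rng(seed)
    E = np.empty((len(vocab), EMB_DIM))
    n_random = 0
    for i, word in enumerate(vocab):
        if word in pretrained:
            E[i] = pretrained[word]
        else:
            E[i] = rng.standard_normal(EMB_DIM)
            n_random += 1
    return E, n_random

vocab = ["information", "may", "share", "zzz-rare-token"]
pretrained = {w: np.full(EMB_DIM, 0.1) for w in vocab[:3]}  # stand-in vectors
E, n_random = build_embeddings(vocab, pretrained)
print(E.shape, n_random)  # (4, 300) 1
```

In the paper's setting, `n_random` would be 602 out of the 5,000-word vocabulary.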
Next we feed the sentence one word at a time to a bi-directional recurrent neural network (forward layer colored in blue, backward layer colored in green; see Figure 1). A recurrent neural network (RNN) corresponds to a language model, whose goal is to predict the next word given the previous words. The probability of the entire sequence factorizes as

$P(w_1, \ldots, w_N) = \prod_{t=1}^{N} P(w_t \mid w_1, \ldots, w_{t-1})$, (1)

where each individual probability is calculated by the RNN.
A recurrent neural network operates on a sequence of words and creates a hidden state representation $\mathbf{h}_t$ for the word at time step $t$. It learns a function of the form $\mathbf{h}_t = f(\mathbf{h}_{t-1}, \mathbf{x}_t)$, where $\mathbf{h}_{t-1}$ is the hidden state representation of the previous time step and $\mathbf{x}_t$ is the input word embedding of the current time step. Both Long Short-Term Memory (LSTM) networks and Gated Recurrent Unit (GRU) networks are variants of recurrent neural networks; they correspond to different gating mechanisms, hence different $f$. This work specifically focuses on using a GRU to produce the hidden state representations. GRUs have seen considerable success in recent NLP applications (?). A GRU uses two neural gates to control the flow of information:

$\mathbf{z}_t = \sigma(\mathbf{W}_z \mathbf{x}_t + \mathbf{U}_z \mathbf{h}_{t-1} + \mathbf{b}_z)$ (2)
$\mathbf{r}_t = \sigma(\mathbf{W}_r \mathbf{x}_t + \mathbf{U}_r \mathbf{h}_{t-1} + \mathbf{b}_r)$ (3)
$\tilde{\mathbf{h}}_t = \tanh(\mathbf{W}_h \mathbf{x}_t + \mathbf{U}_h (\mathbf{r}_t \odot \mathbf{h}_{t-1}) + \mathbf{b}_h)$ (4)
$\mathbf{h}_t = (1 - \mathbf{z}_t) \odot \mathbf{h}_{t-1} + \mathbf{z}_t \odot \tilde{\mathbf{h}}_t$ (5)

where $\mathbf{z}_t$ and $\mathbf{r}_t$ respectively represent the input (update) and reset gate. $\tilde{\mathbf{h}}_t$ is sometimes referred to as the cell value, and $\mathbf{h}_t$ is the hidden state representation we are interested in.
In the above equations, $\mathbf{W}_*$ and $\mathbf{U}_*$ are weight parameters and $\mathbf{b}_*$ are biases; $\odot$ denotes the element-wise product of two vectors; $\sigma$ is the sigmoid function and $\tanh$ is the hyperbolic tangent; both are applied element-wise to vectors. We experiment with a bi-directional network: in the forward pass, the GRU admits words from the sentence beginning to the end,

$\overrightarrow{\mathbf{h}}_t = f(\overrightarrow{\mathbf{h}}_{t-1}, \mathbf{x}_t)$, (6)

and in the backward pass, it admits the word sequence in reverse,

$\overleftarrow{\mathbf{h}}_t = f(\overleftarrow{\mathbf{h}}_{t+1}, \mathbf{x}_t)$. (7)

The generated hidden states are colored in blue (forward pass) and green (backward pass) in Figure 1.
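The GRU gating equations amount to a few lines of code. The following is a sketch of a single-direction forward pass in plain NumPy; the dimensions, the 0.01 initialization scale, and the random inputs are illustrative stand-ins, not the trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU step: update gate z, reset gate r, candidate h_tilde."""
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev + p["br"])
    h_tilde = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (r * h_prev) + p["bh"])
    return (1.0 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 300, 100  # embedding and hidden sizes (illustrative)
p = {k: rng.standard_normal((d_h, d_in if k[0] == "W" else d_h)) * 0.01
     for k in ("Wz", "Wr", "Wh", "Uz", "Ur", "Uh")}
p.update({k: np.zeros(d_h) for k in ("bz", "br", "bh")})

h = np.zeros(d_h)
for x_t in rng.standard_normal((5, d_in)):   # forward pass over 5 tokens
    h = gru_step(x_t, h, p)
print(h.shape)  # (100,)
```

The backward pass reuses the same step function with a second parameter set, iterating over the embeddings in reverse order.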
The hidden state generated in the forward pass ($\overrightarrow{\mathbf{h}}_t$) is expected to carry semantic information from the beginning of the sentence to the current time step; similarly, $\overleftarrow{\mathbf{h}}_t$ encodes information from the current time step to the end of the sentence. We concatenate the two vectors at each time step and feed the result to a densely connected layer to create a unified representation $\mathbf{u}_t$ for each word (colored in red in Figure 1):

$\mathbf{u}_t = \tanh(\mathbf{W}_u [\overrightarrow{\mathbf{h}}_t; \overleftarrow{\mathbf{h}}_t] + \mathbf{b}_u)$ (8)
where $\mathbf{W}_u$ and $\mathbf{b}_u$ are parameters. Using the vector representation $\mathbf{u}_t$, we learn to complete two tasks. First, $\mathbf{u}_t$ is used to predict the next word via a softmax activation,

$P(w_{t+1} = i \mid \mathbf{u}_t) = \mathrm{softmax}(\mathbf{W}_o \mathbf{u}_t + \mathbf{b}_o)_i$, (9)

where $P(w_{t+1} = i \mid \mathbf{u}_t)$ is the probability that the next word is the $i$-th word in the vocabulary. Second, $\mathbf{u}_t$ is employed to predict whether the current word is vague or not,

$P(v_t \mid \mathbf{u}_t) = \mathrm{softmax}(\mathbf{W}_v \mathbf{u}_t + \mathbf{b}_v)$, (10)

where $P(v_t \mid \mathbf{u}_t)$ is the probability of the current word being vague ($v_t = 1$) or not ($v_t = 0$).
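In code, the two prediction heads are simply two affine maps over the shared representation, each followed by a softmax. The weights below are random stand-ins; only the shapes (a 200-dimension representation, a 5,000-word vocabulary, and two vagueness classes) follow the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift by max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
d_u, vocab_size = 200, 5000
# Next-word head: softmax over the vocabulary.
W_word = rng.standard_normal((vocab_size, d_u)) * 0.01
b_word = np.zeros(vocab_size)
# Vagueness head: 2-way softmax (vague / not vague).
W_vague = rng.standard_normal((2, d_u)) * 0.01
b_vague = np.zeros(2)

u_t = rng.standard_normal(d_u)              # shared unified representation
p_word = softmax(W_word @ u_t + b_word)     # next-word distribution
p_vague = softmax(W_vague @ u_t + b_vague)  # P(vague), P(not vague)
print(p_word.shape, round(p_word.sum(), 6), p_vague.shape)  # (5000,) 1.0 (2,)
```

Because both heads read the same `u_t`, gradients from both tasks flow back into the shared representation, which is the point of the multi-task setup.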
We use $\theta$ to represent all the trainable parameters of the aforementioned deep neural network. The model can be trained in an end-to-end fashion using stochastic gradient descent; in particular, RMSProp is used for parameter estimation, which has been shown to perform well in sequence learning tasks. During training, the model parameters are iteratively updated so as to minimize the negative log likelihood of the training data:

$\mathcal{L}(\theta) = -\sum_{s=1}^{S} \sum_{t=1}^{T} \Big[\, \alpha \sum_{i=1}^{V} y_{t,i}^{(s)} \log P(w_{t+1}^{(s)} = i) + \beta \sum_{k=1}^{K} z_{t,k}^{(s)} \log P(v_t^{(s)} = k) \,\Big]$ (11)

Here $y_{t,i}^{(s)}$ and $z_{t,k}^{(s)}$ are one-hot indicators of the gold-standard next word and vagueness label.
In the objective, $S = 107{,}076$ is the total number of sentences in our dataset, $T = 50$ is the maximum number of words per sentence, $V = 5{,}000$ is the vocabulary size, and $K = 2$ is the number of categories (i.e., vague or not). $\alpha$ and $\beta$ are scalar coefficients used to weight the two components of the learning objective. They are set empirically in our study, with $\beta > \alpha$, meaning the system is subject to a heavier penalty when it mispredicts word vagueness. The dimensionality of the hidden states and the unified representation is set such that the deep neural network finally produces a 200-dimension vector representation for each word in the dataset. The model is trained for 25 epochs. Accuracy of predicting the identity of the next word (“Accuracy-Word”) and accuracy of predicting word vagueness (“Accuracy-Vagueness”) are plotted in Figure 2. The “Accuracy-Vagueness” curve saturates after the first couple of epochs, suggesting that word-level binary prediction is not a difficult task, whereas the “Accuracy-Word” curve increases steadily across all training epochs.
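The weighted objective for a single sentence can be sketched as follows. The `alpha` and `beta` values are illustrative; the paper specifies only that the vagueness term carries the larger weight.

```python
import numpy as np

def multitask_nll(word_probs, word_targets, vague_probs, vague_targets,
                  alpha=1.0, beta=2.0):  # illustrative weights, beta > alpha
    """Weighted negative log-likelihood over one sentence: a next-word
    term plus a vague/not-vague term, combined with alpha and beta."""
    word_loss = -sum(np.log(p[t]) for p, t in zip(word_probs, word_targets))
    vague_loss = -sum(np.log(p[t]) for p, t in zip(vague_probs, vague_targets))
    return alpha * word_loss + beta * vague_loss

# Two time steps over a toy 3-word vocabulary.
word_probs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
vague_probs = [np.array([0.9, 0.1]), np.array([0.3, 0.7])]
loss = multitask_nll(word_probs, [0, 1], vague_probs, [0, 1])
print(round(loss, 4))  # 1.5039
```

Raising `beta` relative to `alpha` makes a vagueness misprediction cost more than a next-word misprediction, matching the penalty asymmetry described above.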
Table 3: Groups of syntactically and semantically similar phrases discovered with the visualization tool.
|under the same circumstances|necessary or appropriate to|personally-identifying information|
|under the following conditions|necessary to|personal information|
|under the following circumstances|required to|access information|
|under the circumstances|otherwise permitted by|financial information|
|in any case|your right to|aggregate information|
|in this case||contact information|
Researchers strive to understand the neural models used in natural language processing. Very recently, Li et al. (?) develop strategies to understand model compositionality, that is, how sentence meanings are built from the meanings of words and phrases. The approach measures the “salience” of each dimension based on how much it contributes to the final decision, which is approximated using first-order derivatives. Strobelt et al. (?) present a visual analysis tool named LSTMVis (lstm.seas.harvard.edu). The tool explores the hidden state dynamics of a recurrent neural network. It allows the user to select an input phrase and find similar phrases in the dataset that demonstrate similar hidden state patterns. We adopt LSTMVis in our study and import the vector representations produced in the previous section. The visualization is presented in Figure 3.
The interface consists of two views: the select view corresponds to the upper panel ((A) to (D)) and the match view to the lower panel ((E) to (G)). All sentences in the dataset are concatenated into a meta word sequence delimited by a special symbol. Each word is represented using a fixed-width box; words that do not fit into the box are squeezed. Users are provided with buttons to move forwards or backwards through the word sequence, as well as a search box (disabled for now) to jump directly to a certain text region. Each vector dimension corresponds to a line in the select view. Because our vector representation contains 200 dimensions, there are 200 lines in the figure, numbered from 0 to 199.
The user starts by selecting a phrase in the word sequence (e.g., “as needed,” see (A)). This action turns on a set of vector dimensions (denoted $S_1$), where “turn on” means the cell value of the dimension, in both of the selected word positions, is greater than a threshold (default 0.3, see (C)). The gray slider (see (B)) further allows the user to select a few context words (e.g., “other information”) that surround the currently selected phrase. Similarly, this action turns on a second set of vector dimensions (denoted $S_2$). Note that our goal is to identify the dimensions that uniquely characterize the selected phrase (“as needed”) but not the surrounding words. As a result, the intersection of the two sets of dimensions contains the ones we wish to focus on. These dimensions are listed in the interface (see (E)).
In the match view, the visual tool searches for text regions where the same set of vector dimensions has been turned on. The text regions are further ranked by the inverse of the number of additional “on” cells and the length of the text region. The top phrases are listed in the interface (see (F)), with a length distribution plotted (see (G)). Color intensity is used to signal the cell values. For the selected phrase (“as needed”), we observe that several syntactically and semantically similar phrases have been retrieved, including “as is appropriate to,” “as described below,” “as reasonably possible,” “as reasonably practicable,” and “as set out in.” Several similar examples are presented in Table 3. These findings suggest that even in the relatively restricted domain of website privacy policies, a large number of textual variations exist: different expressions are used to represent the same or similar meanings. It thus remains to be seen whether creating a comprehensive list of vague terms is feasible given the richness and complexity of natural language.
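The core of the selection mechanism can be sketched as follows. The 0.3 threshold is the tool's default; the combination rule here (keep dimensions that are on for the phrase but not for its context, per the stated goal of uniquely characterizing the phrase) is one plausible reading of the selection heuristic, and the exact rule in LSTMVis may differ. The toy cell values are illustrative.

```python
import numpy as np

def on_dims(cells, threshold=0.3):
    """Dimensions whose cell value exceeds the threshold at every
    position of the selected span (cells: positions x dimensions)."""
    return set(map(int, np.where((cells > threshold).all(axis=0))[0]))

# Toy cell values: 3 positions x 3 dimensions.
cells = np.array([[0.9, 0.1, 0.5],
                  [0.8, 0.4, 0.6],
                  [0.2, 0.5, 0.7]])

phrase_dims = on_dims(cells[0:2])    # e.g. the selected phrase "as needed"
context_dims = on_dims(cells[2:3])   # the surrounding context words
print(phrase_dims - context_dims)    # dims unique to the phrase: {0}
```

The match view then amounts to scanning the meta word sequence for spans where this same set of dimensions is on, ranking candidates by how few additional dimensions are active.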
In this work we attempt to computationally model the vagueness of privacy policies using deep neural networks. The neural networks learn to generate vector representations for words in the privacy policies. We explore visualization of the learned vector representations, identify dimensions that could capture language-specific characteristics, and present example phrases that potentially signal vagueness. Our learned model and visualization allow researchers to explore the vagueness of natural language and perform knowledge discovery. We expect future work to include empirical evaluations on vagueness datasets and to use the vagueness prediction results to assist legal counsel in clarifying privacy text, as well as to raise public awareness of the vague terms presented in website privacy policies.
-  Alexopoulos, P., and Pavlopoulos, J. 2014. A vague sense classifier for detecting vague definitions in ontologies. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
-  Bhatia, J.; Breaux, T. D.; Reidenberg, J. R.; and Norton, T. B. 2016. A theory of vagueness and privacy risk perception. In Proceedings of the IEEE International Conference on Requirements Engineering (RE).
-  Cranor, L. F.; Guduru, P.; and Arjula, M. 2006. User interfaces for privacy agents. ACM Transactions on Computer-Human Interaction (TOCHI) 13(2):135–178.
-  Cranor, L. F. 2002. Web Privacy with P3P. O’Reilly & Associates.
-  Farkas, R.; Vincze, V.; Mora, G.; Csirik, J.; and Szarvas, G. 2010. The CoNLL-2010 shared task: Learning to detect hedges and their scope in natural language text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL).
-  Hernacki, A. 2012. A vague law in a smartphone world: Limiting the scope of unauthorized access under the computer fraud and abuse act. American University Law Review 61(5):1543–1584.
-  Hunt, L. W. 2015. What the epistemic account of vagueness means for legal interpretation. Law and Philosophy 35(1):29–54.
-  Jonsson, O. P. 2009. Vagueness, interpretation, and the law. Legal Theory 15:193–214.
-  Kelley, P. G.; Cesca, L.; Bresee, J.; and Cranor, L. F. 2010. Standardizing privacy notices: An online study of the nutrition label approach. In Proceedings of CHI.
-  Lammel, R., and Pek, E. 2013. Understanding privacy policies (a study in empirical language usage analysis). Empirical Software Engineering 18:310–374.
-  Li, J.; Chen, X.; Hovy, E.; and Jurafsky, D. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
-  Liebwald, D. 2013. Law’s capacity for vagueness. International Journal for the Semiotics of Law 26(2):391–423.
-  Low, P. W., and Johnson, J. S. 2015. Changing the vocabulary of the vagueness doctrine. Virginia Law Review 101(8):2051–2116.
-  Luong, M.-T.; Sutskever, I.; Le, Q. V.; Vinyals, O.; and Kaiser, L. 2016. Multi-task sequence to sequence learning. In Proceedings of International Conference on Learning Representations (ICLR).
-  Micheti, A.; Burkell, J.; and Steeves, V. 2010. Fixing broken doors: Strategies for drafting privacy policies young people can understand. Bulletin of Science Technology Society 30(2):130–143.
-  Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems (NIPS).
-  Post, R. C. 1994. Reconceptualizing vagueness: Legal rules and social orders. California Law Review 82(3):491–507.
-  Raffman, D. 2015. Precis of unruly words: A study of vague language. Philosophy and Phenomenological Research 90(2):452–456.
-  Ramanath, R.; Liu, F.; Sadeh, N.; and Smith, N. A. 2014. Unsupervised alignment of privacy policies using hidden Markov models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL).
-  Reidenberg, J. R.; Breaux, T.; Cranor, L. F.; French, B.; Grannis, A.; Graves, J. T.; Liu, F.; McDonald, A. M.; Norton, T. B.; Ramanath, R.; Russell, N. C.; Sadeh, N.; and Schaub, F. 2015a. Disagreeable privacy policies: Mismatches between meaning and users’ understanding. Berkeley Law Technology Journal 30(1).
-  Reidenberg, J. R.; Russell, N. C.; Callen, A.; Qasir, S.; and Norton, T. B. 2015b. Privacy harms and the effectiveness of the notice and choice framework. I/S Journal of Law and Policy for the Information Society 11:485–524.
-  Reidenberg, J. R.; Bhatia, J.; Breaux, T. D.; and Norton, T. B. 2016. Ambiguity in privacy policies and the impact of regulation. Journal of Legal Studies 45(2).
-  Strobelt, H.; Gehrmann, S.; Huber, B.; Pfister, H.; and Rush, A. M. 2016. Visual analysis of hidden state dynamics in recurrent neural networks. In arXiv:1606.07461.
-  Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In Proceedings of Advances in Neural Information Processing Systems (NIPS).
-  Tang, D.; Wei, F.; Yang, N.; Zhou, M.; Liu, T.; and Qin, B. 2014. Learning sentiment-specific word embedding for Twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL).
-  Vail, M. W.; Earp, J. B.; and Anton, A. I. 2008. An empirical study of consumer perceptions and comprehension of web site privacy policies. IEEE Transactions on Engineering Management 55(3):442–454.
-  Waldron, J. 1994. Vagueness in law and language: Some philosophical issues. California Law Review 82(3):509–540.
-  Wilson, S.; Schaub, F.; Ramanath, R.; Sadeh, N.; Liu, F.; Smith, N. A.; and Liu, F. 2016b. Crowdsourcing annotations for websites’ privacy policies: Can it really work? In Proceedings of the 25th International World Wide Web Conference (WWW).