Learning Multi-faceted Representations of Individuals from Heterogeneous Evidence using Neural Networks

Jiwei Li, Alan Ritter and Dan Jurafsky
Stanford University, Stanford, CA, USA
Ohio State University, OH, USA

Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets and the learned representations can be used as general features with the potential to benefit a large number of downstream tasks, including link prediction, community detection, and probabilistic reasoning over social networks.

jiweil,jurafsky@stanford.edu, ritter.1492@osu.edu

1 Introduction

The recent rise of online social media presents an unprecedented opportunity for computational social science: user-generated texts provide insight about users’ attributes such as employment, education or gender. At the same time, the social network structure sheds light on complex real-world relationships between preferences and attributes. For instance, people sharing similar attributes such as employment background or hobbies have a higher chance of becoming friends. User modeling based on information presented in social networks is an important goal, both for applications such as product recommendation, targeted online advertising and friend recommendation, and for helping social scientists and political analysts gain insights into public opinions and user behaviors.

Figure 1: Illustration for the proposed method that learns latent representations for users, attributes and user-generated texts based on social network information.

Nevertheless, much important information on social networks exists in unstructured data formats. Important social insights are locked away, entangled within a heterogeneous combination of social signals (Sun et al., 2009), including text, networks, attributes, relations, preferences, etc. While recent models have attempted to link one or two aspects of the evidence, how to develop a scalable framework that incorporates massive, diverse social signals, including user-generated texts, tens of thousands of user attributes and network structure, in an integrated way remains an open problem.

In this paper, we propose a general deep learning framework for jointly analyzing user networks, generated content and attributes. We map users, attributes, and user-generated content to latent vector representations, which are learned in a scalable way from social network data. Figure 1 gives a brief overview of the mechanism of the proposed model: users are represented by similar vectors if they are friends, share similar attributes or write similar content. Attributes are similarly clustered if associated with similar users (this core idea is similar to collaborative filtering (Kautz et al., 1997)). In summary, we incorporate diverse social signals in a unified framework, allowing user embeddings to be jointly optimized using neural networks trained on vast quantities of rich social and linguistic context.

Based on these learned representations, our approach provides a general paradigm for a wide range of predictive tasks concerning individual users as well as group behavior: user attribute inference (e.g., the city the user lives in), personal interest prediction (e.g., whether a user will like a particular movie), and probabilistic logical reasoning over the social network graph. For example, our models infer that:

  • Men in California are 6.8 times more likely to take an engineering occupation than women in California.

  • Users who work in the IT industry (according to the Standard Occupational Classification (SOC), as will be described later) are 2.5 times more likely to like iPhones than users working in Legal Occupations.

Our methods also have the potential to seamlessly integrate rich textual context into many social network analysis tasks, including link prediction and community detection, and the learned user representations can be used as important input features for downstream machine learning models, just as word embeddings are used in natural language processing. The major contributions of this paper can be summarized as follows:

  • We propose new ways for integrating heterogeneous cues about people’s relations or attributes into a single latent representation.

  • We present inference algorithms for solving social media inference tasks related to both individual and group behaviors based on the learned user representations.

  • Cues that we take advantage of may be noisy and sometimes absent, but by combining them via global inference, we can learn latent facts about people.

We evaluate the proposed model on four diverse tasks: friend-relation prediction, gender identification, occupation identification and user geolocation prediction. Experimental results demonstrate that our model improves predictions by incorporating diverse evidence from many sources of social signals.

2 Social Representation Learning

Our goal is to learn latent representations from the following three types of online information: (1) user-generated texts, (2) friend networks, and (3) relations and attributes.

2.1 Modeling Text

User generated texts reflect a user’s interests, backgrounds, personalities, etc. We thus propose learning user representations based on the text a user generates.

We represent each user $u$ by a $K$-dimensional vector $e_u$. Suppose that $S = \{w_1, w_2, \ldots, w_N\}$ denotes a sequence of tokens generated by the current user $u$. Each word $w$ is associated with a $K$-dimensional vector $e_w$. Let $C_w$ denote the list of neighboring words for token $w$. $w$ is generated based on not only a general language model shared across all users (namely, a model that predicts $w$ given $C_w$), but also the representation $e_u$ of the current user:

$$P(w \mid C_w, e_u) = \sigma\Big(e_w^{\top}\big(e_u + \textstyle\sum_{w' \in C_w} e_{w'}\big)\Big) \quad (1)$$
From Eq. 1, we are predicting the current word given the combination of its neighbors’ embeddings and the current user embedding. This is akin to the CBOW model (Mikolov et al., 2013), with the only difference being that the user embedding is added to the context. This idea also resembles the paragraph vector model (Le and Mikolov, 2014) and the multimodal language model (Kiros et al., 2014).

We use negative sampling, in which we randomly generate negative words $\bar{w}$. Let $y$ denote a binary variable indicating whether the current word is generated by the current user. The loss function using negative sampling is given by:

$$L_{\text{text}} = -\log \sigma\Big(e_w^{\top}\big(e_u + \textstyle\sum_{w' \in C_w} e_{w'}\big)\Big) - \sum_{\bar{w}} \log \sigma\Big(-e_{\bar{w}}^{\top}\big(e_u + \textstyle\sum_{w' \in C_w} e_{w'}\big)\Big)$$

Word prediction errors are backpropagated to user embeddings, pushing the representations of users who generate similar texts to be similar.
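As an illustrative sketch of this loss (not the authors' code; the toy dimensionality, random inputs and function names are our own), the per-token negative-sampling objective can be computed as:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8  # toy embedding size; the paper's dimensionality is larger

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def text_loss(user_vec, context_vecs, target_vec, negative_vecs):
    """Negative-sampling loss for one token: the observed word should score
    high against (user embedding + summed context embeddings), and each
    randomly sampled negative word should score low."""
    h = user_vec + context_vecs.sum(axis=0)
    loss = -np.log(sigmoid(target_vec @ h))      # y = 1 for the observed word
    for neg in negative_vecs:                    # y = 0 for sampled words
        loss -= np.log(sigmoid(-(neg @ h)))
    return loss

user = rng.normal(size=K)                # current user's embedding
context = rng.normal(size=(4, K))        # embeddings of neighboring words
target = rng.normal(size=K)              # embedding of the current word
negatives = rng.normal(size=(5, K))      # embeddings of sampled negative words
print(text_loss(user, context, target, negatives))
```

Gradients of this loss with respect to `user` are what push users who write similar text toward similar vectors.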

2.2 Modeling User Networks

By the homophily effect, individuals who are friends on social networks tend to share common characteristics. We therefore encourage users who are friends to have similar representations.

We propose a strategy similar to skip-gram models, in which we treat users who are friends on social networks analogously to neighboring words, whose representations we wish to be similar. Conversely, we want embeddings of individuals who are not friends to be distant, just like words that do not co-appear in context. A similar idea of transforming a social graph into vector-space embeddings has been explored in the recent DeepWalk model (Perozzi et al., 2014) and LINE (Tang et al., 2015).

Suppose we have two users $u_1$ and $u_2$. The probabilities that the two users are and are not friends are respectively given by:

$$P(\text{friend}(u_1, u_2) = 1) = \sigma(e_{u_1}^{\top} e_{u_2}), \qquad P(\text{friend}(u_1, u_2) = 0) = \sigma(-e_{u_1}^{\top} e_{u_2}) \quad (2)$$
From Eq. 2, we can see that the model favors the cases where the dot product of friends’ embeddings is large, equivalent to their embeddings being similar. Again, we use negative sampling for optimization. For two users $u_1$ and $u_2$ who are friends, we sample random users $u'$, and we assume friendship does not hold between them. The objective function is therefore given by:

$$L_{\text{graph}} = -\log \sigma(e_{u_1}^{\top} e_{u_2}) - \sum_{u'} \log \sigma(-e_{u_1}^{\top} e_{u'})$$
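A minimal sketch of this graph objective (our own illustrative code, assuming the same negative-sampling formulation as the text loss) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8  # toy embedding size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def friendship_loss(e_u, e_v, negative_embeddings):
    """Skip-gram-style objective over the friend graph: an observed
    friendship pulls the two embeddings together (large dot product),
    while randomly sampled non-friends are pushed apart."""
    loss = -np.log(sigmoid(e_u @ e_v))           # observed friendship
    for e_neg in negative_embeddings:            # sampled non-friends
        loss -= np.log(sigmoid(-(e_u @ e_neg)))
    return loss

e_u = rng.normal(size=K)
negatives = rng.normal(size=(5, K))
# identical embeddings (perfect "friends") yield a lower loss than opposite ones
print(friendship_loss(e_u, e_u, negatives), friendship_loss(e_u, -e_u, negatives))
```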


2.2.1 Modeling Relations and Attributes

Intuitively, users who share similar attributes should also have similar representations and occupy similar positions in the vector space. Suppose that a specific relation $r$ holds between a user $u$ and an entity $m$. We represent each user and entity by a $K$-dimensional vector and each relation $r$ by a $K \times K$ matrix $W_r$. For any tuple $(u, r, m)$, we map it to a scalar within the range $[0, 1]$, indicating the likelihood of relation $r$ holding between user $u$ and entity $m$:

$$P(r(u, m) = 1) = \sigma(e_u^{\top} W_r \, e_m)$$

Similar scoring functions have been applied in a variety of work on relation extraction (Socher et al., 2013; Chang et al., 2014). Again we turn to negative sampling for optimization: the system randomly samples negative entities and maximizes the difference between the observed relation tuples and the randomly sampled ones. Through the model described above, users who share similar attributes will have similar representations. At the same time, entities that are shared by similar users will also have similar representations and thus occupy close positions in the vector space.
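The bilinear score itself is straightforward; the following sketch (our own, with toy sizes and random parameters) shows how a tuple is mapped into [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 8  # toy embedding size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relation_prob(e_user, W_r, e_entity):
    """Bilinear scoring sigma(e_u^T W_r e_m): the likelihood, in [0, 1],
    that relation r holds between a user and an entity."""
    return sigmoid(e_user @ W_r @ e_entity)

e_user = rng.normal(size=K)
e_entity = rng.normal(size=K)
W_r = rng.normal(size=(K, K)) * 0.1   # one matrix per relation type
print(relation_prob(e_user, W_r, e_entity))
```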

2.3 Training

The global learning objective is a linear combination of the objectives from the three categories described above. User embeddings are shared across these categories, and each part can communicate with the rest: a user who publishes content about a particular city (text modeling) can have an embedding similar to those of users who live in that city (relation/attribute modeling); friends (graph modeling) of a basketball fan (relation/attribute modeling) are more likely to be basketball fans as well. The final objective function is given as follows:

$$L = \lambda_1 L_{\text{text}} + \lambda_2 L_{\text{graph}} + \lambda_3 L_{\text{relation}}$$

where $\lambda_1$, $\lambda_2$ and $\lambda_3$ denote weights for the different constituents. We use stochastic gradient descent (Zhang, 2004) to update the parameters.

The system jointly learns user embeddings, word embeddings, entity embeddings and relation matrices.
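The combination step is a weighted sum; the following sketch (our own names and toy values, not the authors' training code) shows the shape of the global objective and one SGD update on shared parameters:

```python
import numpy as np

def joint_objective(l_text, l_graph, l_relation, lam=(1.0, 1.0, 1.0)):
    """Global objective: a weighted linear combination of the three
    per-category losses. User embeddings appear in all three terms, so a
    gradient step on any term updates the shared representation."""
    return lam[0] * l_text + lam[1] * l_graph + lam[2] * l_relation

def sgd_step(params, grad, lr=0.05):
    """One stochastic gradient descent update on shared parameters."""
    return params - lr * grad

e_u = np.ones(4)                                   # a toy shared embedding
e_u = sgd_step(e_u, grad=np.full(4, 2.0), lr=0.5)  # one update
print(joint_objective(2.0, 3.0, 4.0), e_u)
```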

3 Inference on Social Networks

In this section, we describe how to use the learned user embeddings as input for different inference tasks on social media. We divide inference tasks on social networks into two major categories: inference about individual behaviors and inference about group behaviors. The former focuses on inferring attributes of a specific user, such as whether a user likes a specific entity or whether a specific relation holds between two users, while the latter focuses on inference over a group of users, e.g., the probability of a New Yorker being a fan of the Knicks.

3.1 User Attribute Inference

Given a user representation $e_u$, we wish to infer the label for a specific attribute of that user. The label can be whether a user likes an entity (a binary classification task) or the state that a user lives in (a multi-class classification task).

Suppose that we want to predict an attribute label (denoted by $l$) for a user $u$. We assume that information about this attribute is embedded in the user representation, and build another neural model to expose it. Specifically, the model takes as input the user embedding $e_u$ and outputs the attribute label using a softmax function as follows:

$$P(l = t \mid e_u) = \operatorname{softmax}(W \cdot e_u + b)_t$$

Parameters to learn include $W$ and $b$. User representations are kept fixed during training. The model is optimized using AdaGrad (Zeiler, 2012).
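A sketch of such a classifier (illustrative only; the sizes and random parameters are our own, and in practice W and b would be trained):

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_labels = 8, 3   # toy sizes, e.g. 3 candidate attribute values

def softmax(z):
    z = z - z.max()      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_attribute(e_user, W, b):
    """Softmax classifier on top of a frozen user embedding; only W and b
    are learned for the downstream attribute task."""
    return softmax(W @ e_user + b)

e_user = rng.normal(size=K)
W, b = rng.normal(size=(n_labels, K)), np.zeros(n_labels)
probs = predict_attribute(e_user, W, b)
print(probs, probs.argmax())
```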

3.2 User Relation Inference

User relation inference specifies whether a particular relationship holds between two users (e.g., whether the two users are friends). It takes the embeddings of both users as input. Given a user $u_1$ (associated with embedding $e_{u_1}$) and a user $u_2$ (associated with embedding $e_{u_2}$), we wish to predict the index of the relationship label that holds between the two users. A neural network prediction model is trained that takes as input the embeddings of the two users, and considers the distance and angle between them. Similar strategies can be found in many existing works, e.g., Tai et al. (2015).

Non-linear composition is first applied to both user representations:

$$h_1 = \tanh(W \cdot e_{u_1} + b), \qquad h_2 = \tanh(W \cdot e_{u_2} + b)$$

Next the distance and angle between $h_1$ and $h_2$ are computed:

$$h_{+} = |h_1 - h_2|, \qquad h_{\times} = h_1 \odot h_2$$

The multiplicative (angle) measure $h_{\times}$ is an elementwise comparison of the signs of the input representations. Finally a softmax function is used to decide the label:

$$P(\text{label} = t \mid u_1, u_2) = \operatorname{softmax}(W_{+} h_{+} + W_{\times} h_{\times} + b')_t$$

Again, the parameters involved are learned using stochastic gradient descent with AdaGrad (Zeiler, 2012).
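The composition-plus-comparison step can be sketched as follows (our own illustrative code with toy sizes and untrained random parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
K, H, n_labels = 8, 6, 2   # toy sizes: embedding, hidden, label count

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_relation(e1, e2, W, b, W_plus, W_times, b_out):
    """Compose both embeddings non-linearly, then combine a distance
    feature |h1 - h2| and an angle feature h1 * h2 before a softmax."""
    h1 = np.tanh(W @ e1 + b)
    h2 = np.tanh(W @ e2 + b)
    h_plus = np.abs(h1 - h2)   # elementwise distance
    h_times = h1 * h2          # elementwise product (sign agreement)
    return softmax(W_plus @ h_plus + W_times @ h_times + b_out)

e1, e2 = rng.normal(size=K), rng.normal(size=K)
W, b = rng.normal(size=(H, K)), np.zeros(H)
W_plus = rng.normal(size=(n_labels, H))
W_times = rng.normal(size=(n_labels, H))
b_out = np.zeros(n_labels)
print(predict_relation(e1, e2, W, b, W_plus, W_times, b_out))
```

Note that both comparison features are symmetric in the two users, which is appropriate for symmetric relations such as friendship.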

3.3 Inference of Group Behavior

We now return to the example described in Section 1, in which we wish to estimate the probability of a male located in California (Cal for short) having an engineering occupation. Given a list of users, their representations, and gold-standard labels, we first separately train the following neural classifiers:

  • whether a user is male, i.e., $P(\text{gender}(u) = \text{male})$

  • whether a user lives in Cal, i.e., $P(\text{LiveIn}(u) = \text{Cal})$

  • whether a user takes an engineering occupation, i.e., $P(\text{Job}(u) = \text{engineering})$

Next we estimate the general embedding (denoted by $\hat{e}_g$) for the group of people that satisfy these premises, namely, that they are male and live in California at the same time. This can be transformed into the following optimization problem with $\hat{e}_g$ being the parameters to learn (note that we assume propositions are independent, so in the example above, being male and living in California are independent of each other; we leave relaxing this independence assumption to future work):

$$\hat{e}_g = \arg\max_{e} \; \log P(\text{gender}(e) = \text{male}) + \log P(\text{LiveIn}(e) = \text{Cal}) \quad (3)$$
Eq. 3 can be thought of as an optimization problem that finds the optimal value of $\hat{e}_g$. This problem can be solved using SGD. The obtained optimum $\hat{e}_g$ represents the group embedding for users who satisfy the premises (e.g., males living in Cal). $\hat{e}_g$ is then used as input to the occupation classifier, which returns the probability of taking an engineering job.

More formally, given a list of conditions $A = \{a_1, a_2, \ldots, a_N\}$, we want to compute the probability that another list of conditions (denoted by $B$) holds, where $B$ can be, for example, a user being an engineer. This probability is denoted by $P(B \mid A)$. The algorithm to compute $P(B \mid A)$ for group behavior inference is summarized in Figure 2.


  • For each condition $a_i \in A$ and each condition in $B$, train a separate classifier based on user representations and labeled datasets.

  • Estimate the group representation $\hat{e}_g$ by solving the following optimization problem using SGD:

    $$\hat{e}_g = \arg\max_{e} \sum_{a_i \in A} \log P(a_i \mid e)$$

  • Infer the probability $P(B \mid A) = P(B \mid \hat{e}_g)$ using the trained classifiers for $B$.


Figure 2: Algorithm for group behavior inference.
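The steps in Figure 2 can be sketched as follows. This is our own toy illustration, assuming each premise classifier is a simple logistic regression over the embedding (the weight vectors `w_a`, `w_b` are made up, standing in for trained premise classifiers such as gender = male and LiveIn = Cal):

```python
import numpy as np

K = 8  # toy embedding size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two hypothetical premise classifiers, each a logistic regression over e.
w_a = np.eye(K)[0]
w_b = np.eye(K)[1]

def fit_group_embedding(premise_weights, steps=300, lr=0.5):
    """Find a group embedding e that maximizes the summed log-probability
    of all premises holding simultaneously, via gradient ascent (the
    optimization behind Eq. 3, assuming independent premises)."""
    e = np.zeros(K)
    for _ in range(steps):
        # gradient of sum_i log sigma(w_i . e) with respect to e
        grad = sum((1.0 - sigmoid(w @ e)) * w for w in premise_weights)
        e += lr * grad
    return e

e_group = fit_group_embedding([w_a, w_b])
# both premise classifiers now fire with high probability on e_group
print(sigmoid(w_a @ e_group), sigmoid(w_b @ e_group))
```

The resulting group embedding would then be fed to the target classifier (e.g., the occupation model) to read off the probability of the conclusion.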

4 Dataset Construction

Existing commonly used social prediction datasets (e.g., BlogCatalog and Flickr (Tang and Liu, 2009a), YouTube (Tang and Liu, 2009b)) are designed with a specific task in mind: classifying whether social links exist between pairs of users. They contain little text or user-attribute information, and are therefore not well suited to evaluate our proposed model.

Social networks such as Facebook or LinkedIn that support structured profiles would be ideal for the purpose of this work. Unfortunately, they are publicly inaccessible, so we turn to Twitter. One downside of relying on Twitter data is that gold-standard information is not immediately available. The evaluation presented in this paper therefore comes with the caveat that it relies on downstream information extraction algorithms or rule-based heuristics to obtain “gold standards”. Though imperfect, since extraction algorithms can be error-prone, this type of evaluation has the advantage that it can be performed automatically to compare many different systems for development or tuning at relatively large scale. Meanwhile, the way our dataset is constructed gives important insights into how the proposed framework can be applied when some structured data is missing (Facebook and LinkedIn do come with the ideal property of supporting structured user profiles, but hardly anyone fills these out) and how we can address these challenges directly from unstructured text, making this system applicable to a much wider range of scenarios.

4.1 User Generated Texts and Graph Networks

We randomly sample a set of Twitter users, discarding non-English tweets and users with fewer than 100 tweets. For each user, we crawl their published tweets and following/follower network using the publicly available Twitter API. This results in a dataset of 75 million tweets.

4.2 User Attributes

Unlike social networking websites such as Facebook, Google+ and LinkedIn, Twitter does not support structured user profile attributes such as gender, education and employer. We now briefly describe how we enrich our dataset with user attributes (location, education, gender) and user relations (friend, spouse). Note that the goal of this section is to construct a relatively comprehensive dataset for the purpose of model evaluation rather than to develop user attribute extraction algorithms.

4.2.1 Location

We first associate one of the 50 US states with each user. In this paper, we employ a rule-based approach for user-location identification. (While there has been significant work on geolocation inference, e.g., Cheng et al. (2010), Conover et al. (2013), Davis Jr et al. (2011), Onnela et al. (2011) and Sadilek et al. (2012), the primary goal of this work is to develop user representations based on heterogeneous social signals; we therefore take a simple high-precision, low-recall approach to identifying user locations.) We select all geo-tagged tweets from a specific user, and say a location $\ell$ corresponds to the location of the current user if it satisfies the following criteria, designed to ensure high precision: (1) the user published more than 10 tweets from location $\ell$; (2) the user published from location $\ell$ in at least three different months of a year. We only consider locations within the United States, and entities are matched to state names via Google Geocoding. In the end, we are able to extract locations for 1.1% of the users in our dataset.
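The two rules above can be sketched as a small filter (our own illustrative code; the input representation as (state, month) pairs and the function name are assumptions):

```python
from collections import defaultdict

def assign_location(geo_tweets, min_tweets=10, min_months=3):
    """High-precision location rule: return state s only if the user posted
    more than min_tweets geo-tagged tweets from s, spread over at least
    min_months distinct months; otherwise return None.
    geo_tweets: list of (state, month) pairs for a single user."""
    counts = defaultdict(int)
    months = defaultdict(set)
    for state, month in geo_tweets:
        counts[state] += 1
        months[state].add(month)
    for state, n in counts.items():
        if n > min_tweets and len(months[state]) >= min_months:
            return state
    return None

# 12 tweets from "CA" spread over 3 distinct months passes both rules
print(assign_location([("CA", m % 3) for m in range(12)]))
```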

4.2.2 Education/Job

We combine two strategies to harvest gold-standard labels for users’ occupational and educational information. We use the Standard Occupational Classification (SOC) (http://www.ons.gov.uk/ons/guide-method/classifications/current-standard-classifications/soc2010/index.html) to obtain a list of occupations, an approach similar to Preoţiuc-Pietro et al. (2015) and Preotiuc-Pietro et al. (2015). (SOC is a UK government system developed by the Office for National Statistics that groups jobs into 23 major categories, for example Engineering Occupations or Legal Occupations, each of which is associated with a set of specific job titles, e.g., mechanical engineer and pediatrist for Professional Occupations.) We construct a lookup table from job titles to SOC categories and apply a rule-based mapping strategy to retrieve a user’s occupation from the free-text description field of their Twitter profile. Note that this approach introduces some bias: users with high-profile occupations are more likely to self-disclose their occupations than users with less prestigious occupations. A user is assigned an occupation if one of the keywords from the lookup table is identified in his/her profile; occupations are identified for a subset of users in this way.

4.2.3 Gender

Following a similar strategy to that described for location and occupation, we implement a simple high-precision approach for obtaining gold-standard user gender information. We leverage the national Social Security Gender Database (SSGD) (http://www.ssa.gov/oact/babynames/names.zip) to identify users’ gender based on their first names. SSGD contains first-name records annotated for gender for every US birth since 1880. (Again, we note the large amount of related work on predicting the gender of social media users, e.g., Burger et al. (2011), Ciot et al. (2013), Pennacchiotti and Popescu (2011) and Tang et al. (2011), studying whether high-level tweet features (e.g., link, mention, hashtag frequency) can help in the absence of highly predictive user-name information. As mentioned before, we do not adopt machine learning algorithms for attribute extraction.) Using this database we assign gender to a subset of users in our dataset.

5 Experiments

We now turn to experiments that use global inference to augment individual local detectors to infer users’ attributes, relations and preferences. All experiments are based on the datasets described in the previous sections. We performed 3 iterations of stochastic gradient descent training over the collected dataset to learn embeddings. For each task, we separate the dataset into 80% for training, 10% for development and 10% for testing.

For comparison purposes, neural models that take into account only a subset of the training signals naturally constitute baselines. We also implement feature-based SVM models as baselines to demonstrate the strength of the neural models. For neural models, we use the same latent dimensionality $K$ throughout. Pre-trained word vectors are obtained using the word2vec package (https://code.google.com/p/word2vec/); embeddings are trained on a Twitter dataset consisting of roughly 1 billion tokens.

5.1 Friend-Relation (Graph Link) Prediction

Twitter supports two types of following patterns: following and being followed. We consider two users friends if they both follow each other. The friendship relation is extracted straightforwardly from the Twitter network. Models and baselines we employ include:

  • All: The proposed model that considers text, graph structure and user attributes.

  • Only Network: A simplified version of the proposed model that only uses the social graph structure to learn user representations. Note that with this simplification, the model is similar to DeepWalk (Perozzi et al., 2014), with the exception that we adopt negative sampling rather than hierarchical softmax.

  • Network+Attribute: Neural models that consider social graph and relation/entity information.

  • Network+Text: Neural models that consider social graph and text information.

Performance for each model is shown in Table 1. As can be seen, taking into account more types of social signals yields progressive performance improvements: Network+Attribute performs better than Only Network, and All, which considers all types of social signals, performs best.

Model Accuracy
All 0.257
Only Network 0.179
Network+Attribute 0.198
Network+Text 0.231
Table 1: Accuracy for different models on friend relationship prediction from social representations.

5.2 User Attributes: Job Occupation

We present experimental results for job classification based on user-level representations. Evaluation is performed on the subset of users whose job labels were identified by the rule-based approach described in the previous section. Our models are trained to classify the 10 most frequent categories of job occupations.

Again, All denotes the model that utilizes all types of information. Baselines we consider include:

  • Text-SVM: We use the SVM-Light package to train a unigram classifier that only considers text-level information.

  • Only Network: A simplified version of the proposed model that trains user embedding based on network graph and occupation information.

  • Network+Text: Embeddings are trained from user-generated texts and network information.

Experimental results are shown in Table 2. As can be seen, user-generated content offers informative evidence about job occupation. We also observe that considering network information yields significant performance improvement, due to the homophily effect observed in earlier work (Li et al., 2014b). Again, the best-performing model is the one that considers all sources of evidence.

Model Accuracy
All 0.402
Only Network 0.259
SVM-text 0.330
Network+Text 0.389
Table 2: Accuracy for different models on 9-class job occupation prediction from social representations.

5.3 User Attribute: Gender

Model Accuracy
All 0.840
Only Network 0.575
Only Text 0.804
SVM-text 0.785
Attribute+Text 0.828
Table 3: Accuracy for different models on gender prediction from social representations.

We evaluate gender prediction on a dataset of 10,000 users (half male, half female). The subset is drawn from the users whose gold-standard gender labels were assigned using the Social Security database described in the previous section. Baselines we employ include: SVM-Text, in which an SVM binary classification model is trained on unigram features; Only Text, in which user representations are learned only from texts; Only Network, in which user representations are learned only from social graphs; and Attribute+Text, in which representations are learned from text evidence and relation/entity information.

The proposed neural model achieves an accuracy of 0.840, which is very close to the best performance we are aware of, described in Ciot et al. (2013), which achieves an accuracy of 0.85-0.86 on a different dataset. However, unlike Ciot et al. (2013), the proposed model does not require massive feature engineering effort, which involves collecting a wide variety of manual features such as entities mentioned, links, a wide range of writing-style features, psycholinguistic features, etc. This demonstrates the flexibility and scalability of deep learning models in utilizing and integrating different types of social signals for inference tasks over social networks.

User-generated content offers significant evidence for gender. Again, we observe that leveraging all sorts of social evidence leads to the best performance.

Experimental results are shown in Table 3. As can be seen, network information does not significantly help the task of gender identification, achieving only slightly better performance than random guessing. This observation is reinforced by the fact that Attribute+Text yields almost the same performance as the All model, which uses text, relation and network information.

Model Accuracy
All 0.152
Only Network 0.118
Only Text 0.074
Network+Text 0.120
Attribute+Text 0.089
Table 4: Accuracy for different models on location prediction from social representations.

5.4 User Attribute: Location

The last task we consider is location identification. Experiments are conducted on users whose locations have been identified using the rule-based approach described in the previous section. The task can be thought of as a 50-class classification problem where the goal is to pick one of the 50 states (with random-guess accuracy being 0.02). We employ baselines similar to those in earlier sections: Only Text, Only Network, Attribute+Text and Network+Text.

Results are presented in Table 4: both text and network evidence provide informative signals about where a user lives, leading to better performance. Again, the best performance is obtained when all types of social signals are jointly considered.

5.5 Examples for Group Behavior Inference

Given the trained classifiers (and additionally trained like/dislike classifiers, with details given in the Appendix), we are able to perform group behavior inference. Due to the lack of a gold-standard labeled dataset, we do not perform quantitative evaluations, but rather list a couple of examples to give readers a general sense of the proposed paradigm:

  • $P(\text{isEngineer} \mid \text{isMale}) = 3.5 \times P(\text{isEngineer} \mid \text{isFemale})$

  • $P(\text{isEngineer} \mid \text{isMale}, \text{LiveInCalifornia}) = 6.8 \times P(\text{isEngineer} \mid \text{isFemale}, \text{LiveInCalifornia})$

  • $P(\text{LikeOmelet} \mid \text{LiveInColorado}) = 1.4 \times P(\text{LikeOmelet} \mid \text{LiveInCalifornia})$

  • $P(\text{LikeBarbecue} \mid \text{LiveInTexas}) = 1.7 \times P(\text{LikeBarbecue} \mid \text{LiveInCalifornia})$

6 Related Work

Much work has been devoted to automatic user attribute inference given social signals. For example, (Rao et al., 2010; Ciot et al., 2013; Conover et al., 2011; Sadilek et al., 2012; Hovy et al., 2015) focus on how to infer individual user attributes such as age, gender, political polarity, locations, occupation, educational information (e.g., major, year of matriculation) given user-generated contents or network information.

Taking advantage of large scale user information, recent research has begun exploring logical reasoning over the social network (e.g., what’s the probability that a New York City resident is a fan of the New York Knicks). Some work (Li et al., 2014c; Wang et al., 2013) relies on logic reasoning paradigms such as Markov Logic Networks (MLNs) (Richardson and Domingos, 2006).

Social network inference usually takes advantage of the fundamental property of homophily (McPherson et al., 2001), which states that people sharing similar attributes have a higher chance of becoming friends (summarized by the proverb “birds of a feather flock together” (Al Zamal et al., 2012)), and conversely that friends (or couples, or people living in the same location) tend to share more attributes. Such properties have been harnessed for applications like community detection (Yang et al., 2013) and user-link prediction (Perozzi et al., 2014; Tang and Liu, 2009a).

The proposed framework also focuses on attribute inference, which can be reframed as a relation identification problem, i.e., whether a relation holds between a user and an entity. This work is thus related to a great deal of recent research on relation inference (e.g., Guu et al., 2015; Wang et al., 2014; Riedel et al., 2013).

Our work is inspired by classic work on spectral learning for graphs e.g., (Kunegis and Lommatzsch, 2009; Estrada, 2001) and on recent research (Perozzi et al., 2014; Tang et al., 2015) that learn embedded representations for a graph’s vertices. Our model extends this work by modeling not only user-user network graphs, but also incorporating diverse social signals including unstructured text, user attributes, and relations, enabling more sophisticated inferences and offering an integrated model of homophily in social relations.

7 Conclusions

We have presented a deep learning framework for learning social representations, inferring the latent attributes of people online. Our model offers a new way to jointly integrate noisy heterogeneous cues from people’s text, social relations, or attributes into a single latent representation. The representation supports an inference algorithm that can solve social media inference tasks related to both individual and group behavior, and can scale to the large datasets necessary to provide practical solutions to inferring huge numbers of latent facts about people.

Our model has the ability to incorporate various kinds of information, and it increases in performance as more sources of evidence are added. We demonstrate benefits on a range of social media inference tasks, including predicting user gender, occupation, location and friendship relations.

Our user embeddings naturally capture the notion of homophily—users who are friends, have similar attributes, or write similar text are represented by similar vectors. These representations could benefit a wide range of downstream applications, such as friend recommendation, targeted online advertising, and further applications in the computational social sciences. Due to limited publicly accessible datasets, we only conduct our experiments on Twitter. However, our algorithms have the potential to yield further benefits by combining different attributes from multiple online social media platforms, such as Facebook, Twitter, LinkedIn and Flickr (images can similarly be represented as vector representations obtained from a ConvNet (Krizhevsky et al., 2012), which can be immediately incorporated into the proposed framework).


  • Agarwal et al. (2011) Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment analysis of twitter data. In Proceedings of the Workshop on Languages in Social Media. Association for Computational Linguistics, pages 30–38.
  • Agichtein and Gravano (2000) Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the fifth ACM conference on Digital libraries. ACM, pages 85–94.
  • Al Zamal et al. (2012) Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and latent attribute inference: Inferring latent attributes of twitter users from neighbors. In ICWSM.
  • Burger et al. (2011) John D Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1301–1309.
  • Chang et al. (2014) Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 1568–1579.
  • Cheng et al. (2010) Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating twitter users. In Proceedings of the 19th ACM international conference on Information and knowledge management. ACM, pages 759–768.
  • Choi et al. (2005) Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 355–362.
  • Ciot et al. (2013) Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender inference of twitter users in non-english contexts. In EMNLP. pages 1136–1145.
  • Collobert and Weston (2008) Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning. ACM, pages 160–167.
  • Conover et al. (2011) Michael Conover, Jacob Ratkiewicz, Matthew Francisco, Bruno Gonçalves, Filippo Menczer, and Alessandro Flammini. 2011. Political polarization on twitter. In ICWSM.
  • Conover et al. (2013) Michael D Conover, Clayton Davis, Emilio Ferrara, Karissa McKelvey, Filippo Menczer, and Alessandro Flammini. 2013. The geospatial characteristics of a social movement communication network. PloS one 8(3):e55957.
  • Craven et al. (1999) Mark Craven, Johan Kumlien, et al. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB. volume 1999, pages 77–86.
  • Davidov et al. (2007) Dmitry Davidov, Ari Rappoport, and Moshe Koppel. 2007. Fully unsupervised discovery of concept-specific relationships by web mining.
  • Davis Jr et al. (2011) Clodoveu A Davis Jr, Gisele L Pappa, Diogo Rennó Rocha de Oliveira, and Filipe de L Arcanjo. 2011. Inferring the location of twitter messages based on user relationships. Transactions in GIS 15(6):735–751.
  • Estrada (2001) Ernesto Estrada. 2001. Generalization of topological indices. Chemical physics letters 336(3):248–252.
  • Girvan and Newman (2002) Michelle Girvan and Mark EJ Newman. 2002. Community structure in social and biological networks. Proceedings of the national academy of sciences 99(12):7821–7826.
  • Go et al. (2009) Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford pages 1–12.
  • Guu et al. (2015) Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094.
  • Hoffmann et al. (2011) Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 541–550.
  • Hovy et al. (2015) Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015. User review sites as a resource for large-scale sociolinguistic studies. In Proceedings of the 24th International Conference on World Wide Web.
  • Igo and Riloff (2009) Sean P Igo and Ellen Riloff. 2009. Corpus-based semantic lexicon induction with web-based corroboration. In Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics. Association for Computational Linguistics, pages 18–26.
  • Joachims (1999) Thorsten Joachims. 1999. Making large scale SVM learning practical.
  • Kautz et al. (1997) Henry Kautz, Bart Selman, and Mehul Shah. 1997. Referral web: combining social networks and collaborative filtering. Communications of the ACM 40(3):63–65.
  • Kim and Hovy (2006) Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proceedings of the Workshop on Sentiment and Subjectivity in Text. Association for Computational Linguistics, pages 1–8.
  • Kimmig et al. (2012) Angelika Kimmig, Stephen Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In Proceedings of the NIPS Workshop on Probabilistic Programming: Foundations and Applications. pages 1–4.
  • Kiros et al. (2014) Ryan Kiros, Ruslan Salakhutdinov, and Rich Zemel. 2014. Multimodal neural language models. In Proceedings of the 31st International Conference on Machine Learning (ICML-14). pages 595–603.
  • Kouloumpis et al. (2011) Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! ICWSM 11:538–541.
  • Kozareva and Hovy (2010a) Zornitsa Kozareva and Eduard Hovy. 2010a. Learning arguments and supertypes of semantic relations using recursive patterns. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1482–1491.
  • Kozareva and Hovy (2010b) Zornitsa Kozareva and Eduard Hovy. 2010b. Not all seeds are equal: Measuring the quality of text mining seeds. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, pages 618–626.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. pages 1097–1105.
  • Kunegis and Lommatzsch (2009) Jérôme Kunegis and Andreas Lommatzsch. 2009. Learning spectral graph transformations for link prediction. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, pages 561–568.
  • Lafferty et al. (2001) John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
  • Le and Mikolov (2014) Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.
  • Li et al. (2014a) Jiwei Li, Alan Ritter, Claire Cardie, and Eduard Hovy. 2014a. Major life event extraction from twitter based on congratulations/condolences speech acts. In Proceedings of Empirical Methods in Natural Language Processing.
  • Li et al. (2014b) Jiwei Li, Alan Ritter, and Eduard Hovy. 2014b. Weakly supervised user profile extraction from twitter. ACL.
  • Li et al. (2014c) Jiwei Li, Alan Ritter, and Dan Jurafsky. 2014c. Inferring user preferences by probabilistic logical reasoning over social networks. arXiv preprint arXiv:1411.2679.
  • McPherson et al. (2001) Miller McPherson, Lynn Smith-Lovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Annual review of sociology pages 415–444.
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  • Mintz et al. (2009) Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2. Association for Computational Linguistics, pages 1003–1011.
  • Onnela et al. (2011) Jukka-Pekka Onnela, Samuel Arbesman, Marta C González, Albert-László Barabási, and Nicholas A Christakis. 2011. Geographic constraints on social network groups. PLoS one 6(4):e16939.
  • Owoputi et al. (2013) Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In HLT-NAACL. pages 380–390.
  • Pak and Paroubek (2010) Alexander Pak and Patrick Paroubek. 2010. Twitter as a corpus for sentiment analysis and opinion mining. In LREC.
  • Pennacchiotti and Popescu (2011) Marco Pennacchiotti and Ana-Maria Popescu. 2011. A machine learning approach to twitter user classification. ICWSM 11:281–288.
  • Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 701–710.
  • Preotiuc-Pietro et al. (2015) Daniel Preotiuc-Pietro, Vasileios Lampos, and Nikolaos Aletras. 2015. An analysis of the user occupational class through twitter content.
  • Preoţiuc-Pietro et al. (2015) Daniel Preoţiuc-Pietro, Svitlana Volkova, Vasileios Lampos, Yoram Bachrach, and Nikolaos Aletras. 2015. Studying user income through language, behaviour and affect in social media. PloS one 10(9):e0138717.
  • Rao and Yarowsky (2010) Delip Rao and David Yarowsky. 2010. Detecting latent user properties in social media. In Proc. of the NIPS MLSN Workshop.
  • Rao et al. (2010) Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying latent user attributes in twitter. In Proceedings of the 2nd international workshop on Search and mining user-generated contents. ACM, pages 37–44.
  • Richardson and Domingos (2006) Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine learning 62(1-2):107–136.
  • Riedel et al. (2013) Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas.
  • Riloff et al. (1999) Ellen Riloff, Rosie Jones, et al. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In AAAI/IAAI. pages 474–479.
  • Ritter et al. (2013) Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. TACL 1:367–378.
  • Rocktäschel et al. (2015) Tim Rocktäschel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of the 2015 Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics.
  • Sadilek et al. (2012) Adam Sadilek, Henry Kautz, and Jeffrey P Bigham. 2012. Finding your friends and following them to where you are. In Proceedings of the fifth ACM international conference on Web search and data mining. ACM, pages 723–732.
  • Saif et al. (2012) Hassan Saif, Yulan He, and Harith Alani. 2012. Semantic sentiment analysis of twitter. In The Semantic Web–ISWC 2012, Springer, pages 508–524.
  • Socher et al. (2013) Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems. pages 926–934.
  • Speer et al. (2008) Robert Speer, Catherine Havasi, and Henry Lieberman. 2008. Analogyspace: Reducing the dimensionality of common sense knowledge. In AAAI. volume 8, pages 548–553.
  • Sun et al. (2009) Yizhou Sun, Yintao Yu, and Jiawei Han. 2009. Ranking-based clustering of heterogeneous information networks with star network schema. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 797–806.
  • Tai et al. (2015) Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.
  • Tang et al. (2011) Cong Tang, Keith Ross, Nitesh Saxena, and Ruichuan Chen. 2011. What’s in a name: A study of names, gender inference, and gender behavior in facebook. In Database Systems for Advanced Applications, Springer, pages 344–356.
  • Tang et al. (2015) Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, pages 1067–1077.
  • Tang and Liu (2009a) Lei Tang and Huan Liu. 2009a. Relational learning via latent social dimensions. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pages 817–826.
  • Tang and Liu (2009b) Lei Tang and Huan Liu. 2009b. Scalable learning of collective behavior based on sparse social dimensions. In Proceedings of the 18th ACM conference on Information and knowledge management. ACM, pages 1107–1116.
  • Thrun (1996) Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? Advances in neural information processing systems pages 640–646.
  • Volkova et al. (2014) Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In ACL.
  • Wang et al. (2013) William Yang Wang, Kathryn Mazaitis, and William W Cohen. 2013. Programming with personalized pagerank: a locally groundable first-order probabilistic logic. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management. ACM, pages 2129–2138.
  • Wang and Yang (2015) William Yang Wang and Diyi Yang. 2015. That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets. In EMNLP.
  • Wang et al. (2014) Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence. Citeseer, pages 1112–1119.
  • Wiebe et al. (2005) Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language resources and evaluation 39(2-3):165–210.
  • Yang and Cardie (2013) Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In ACL (1). pages 1640–1649.
  • Yang and Leskovec (2013) Jaewon Yang and Jure Leskovec. 2013. Overlapping community detection at scale: a nonnegative matrix factorization approach. In Proceedings of the sixth ACM international conference on Web search and data mining. ACM, pages 587–596.
  • Yang et al. (2013) Jaewon Yang, Julian McAuley, and Jure Leskovec. 2013. Community detection in networks with node attributes. In Data Mining (ICDM), 2013 IEEE 13th international conference on. IEEE, pages 1151–1156.
  • Zeiler (2012) Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
  • Zhang (2004) Tong Zhang. 2004. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning. ACM, page 116.
  • Zhou et al. (2011) Jiayu Zhou, Jianhui Chen, and Jieping Ye. 2011. Malsar: Multi-task learning via structural regularization. Arizona State University.

Appendix

Predicting Preference: Likes or Dislikes: Here we describe how we extract user preferences, namely like(usr, entity) and dislike(usr, entity). As in a wide range of work on sentiment analysis (e.g., Choi et al., 2005; Kim and Hovy, 2006; Yang and Cardie, 2013; Agarwal et al., 2011; Kouloumpis et al., 2011; Wang and Yang, 2015; Pak and Paroubek, 2010; Saif et al., 2012), our goal is to identify sentiment and to extract the target or object of that sentiment. Manually collecting training data is problematic because (1) tweets stating what a user likes or dislikes are sparsely distributed among the massive number of topics people discuss on Twitter, and (2) such tweets appear in a great variety of scenarios and forms. To deal with this sparsity, we collect training data by combining semi-supervised information harvesting techniques (Davidov et al., 2007; Kozareva and Hovy, 2010a, b; Li et al., 2014a) with the concept of distant supervision (Craven et al., 1999; Go et al., 2009; Mintz et al., 2009).

Semi-supervised information harvesting: We employ a seed-based information-extraction method: the model recursively uses seed examples to extract patterns, which are used to harvest new examples, which in turn serve as new seeds to train new patterns. We begin with pattern seeds including “I like/love/enjoy (entity)”, “I hate/dislike (entity)”, “(I think) (entity) is good/terrific/cool/awesome/fantastic”, and “(I think) (entity) is bad/terrible/awful” or “(entity) suck/sucks”. Entities extracted here should be nouns, as determined by a Twitter-tuned POS package (Owoputi et al., 2013). Based on the examples harvested in each iteration, we train three machine learning classifiers:
    • A tweet-level SVM classifier (tweet-model 1) to distinguish between tweets that intend to express like/dislike properties and tweets for all other purposes.

    • A tweet-level SVM classifier (tweet-model 2) to distinguish between like and dislike. (We also investigated a single 3-class classifier for like, dislike, and not-related, but found that it consistently underperformed the separate classifiers.)

    • A token-level CRF sequence model (entity-model) to identify entities that are the target of the user’s like/dislike.

    The SVM classifiers are trained using the SVMlight package (Joachims, 1999) with the following features: unigrams, bigrams, part-of-speech tags, and dictionary-derived features based on a subjectivity lexicon (Wiebe et al., 2005). The CRF model (Lafferty et al., 2001) is trained using the CRF++ package (https://code.google.com/p/crfpp/) with the following features: unigrams, part-of-speech tags, NER tags, capitalization, word shape, and context words within a window of 3 words. The trained SVM and CRF models are used to harvest more examples, which are in turn used to train updated models.

    Distant supervision: The main idea of distant supervision is to obtain labeled data by drawing on some external source of evidence. The evidence may come from a database (for example, if a dataset says the relation IsCapital holds between Britain and London, then all sentences mentioning both “Britain” and “London” are treated as expressing the IsCapital relation (Mintz et al., 2009; Ritter et al., 2013)) or from common-sense knowledge (e.g., tweets with happy emoticons such as :-) or : ) carry positive sentiment (Go et al., 2009)). In this work, we assume that if a relation like(usr, entity) holds for a specific user, then many of their published tweets mentioning the entity also express the like relationship and can therefore be treated as positive training data. Since semi-supervised approaches rely heavily on seed quality (Kozareva and Hovy, 2010b) and the patterns derived by the recursive framework may be strongly influenced by the starting seeds, adding examples from distant supervision helps increase the diversity of positive training examples. Figure 3 gives an overview of the proposed algorithm, showing how the semi-supervised approach is combined with distant supervision.
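The seed-pattern harvesting stage can be sketched as a small matching routine. This is an illustrative simplification: the pattern strings below come from the seeds listed above, but whereas the real system extracts noun entities with a Twitter-tuned POS tagger, here a plain regex capture group stands in, and `harvest` is a hypothetical name.

```python
import re

# Seed patterns from the text; the capture group is a stand-in for
# POS-based entity extraction.
LIKE_SEEDS = [
    r"\bi (?:like|love|enjoy) (\w+)",
    r"\b(\w+) is (?:good|terrific|cool|awesome|fantastic)\b",
]
DISLIKE_SEEDS = [
    r"\bi (?:hate|dislike) (\w+)",
    r"\b(\w+) is (?:bad|terrible|awful)\b|\b(\w+) sucks?\b",
]

def harvest(tweet):
    """Return (label, entity) pairs matched by the seed patterns."""
    text = tweet.lower()
    pairs = []
    for label, patterns in (("like", LIKE_SEEDS), ("dislike", DISLIKE_SEEDS)):
        for pat in patterns:
            for m in re.finditer(pat, text):
                # a pattern may have several alternative groups;
                # take the one that actually matched
                entity = next(g for g in m.groups() if g)
                pairs.append((label, entity))
    return pairs
```

In the full pipeline, the pairs harvested this way become training data for the SVM and CRF models, which then replace the raw patterns in later iterations.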

Train tweet classification model (SVM) and entity labeling model (CRF) based on positive/negative data harvested from starting seeds.
While the stopping condition is not satisfied:

  1. Run classification model and labeling model on raw tweets. Add newly harvested positive tweets and entities to the positive dataset.

  2. For any user usr and entity e, if the relation like(usr, e) holds, add all posts published by usr that mention e to the positive training data.


Figure 3: Algorithm for harvesting training data for extracting user like/dislike preferences.
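The loop in Figure 3 can be sketched as a short runnable program. This is only an illustration: the pattern-based model below stands in for the SVM/CRF pair (and, unlike the real system, is not re-derived from the growing positive set), and `bootstrap`, `make_model`, and the toy data are hypothetical names.

```python
import re

def make_model(patterns):
    """Stand-in for the trained SVM/CRF pair: returns the extracted
    entity if a tweet matches any harvesting pattern, else None."""
    def model(tweet):
        for pat in patterns:
            m = re.search(pat, tweet.lower())
            if m:
                return m.group(1)
        return None
    return model

def bootstrap(raw_tweets, user_posts, patterns, iters=2):
    positive = set()  # harvested (usr, entity, tweet) triples
    for _ in range(iters):
        model = make_model(patterns)
        # Step 1: run the model on raw tweets; add newly harvested
        # positive tweets and entities to the positive dataset.
        for usr, tweet in raw_tweets:
            ent = model(tweet)
            if ent:
                positive.add((usr, ent, tweet))
        # Step 2: distant supervision -- if like(usr, ent) holds, add
        # all of usr's posts mentioning ent to the positive data.
        for usr, ent, _ in list(positive):
            for post in user_posts.get(usr, []):
                if ent in post.lower():
                    positive.add((usr, ent, post))
    return positive
```

For example, with one seed pattern, `bootstrap([("u1", "I love sushi")], {"u1": ["sushi again tonight"]}, [r"\bi (?:like|love|enjoy) (\w+)"])` harvests both the matching tweet and, via distant supervision, the user's other post mentioning the same entity.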

Stopping Condition: To decide the optimal number of iterations before stopping, we manually labeled a dataset containing 200 positive tweets (100 like and 100 dislike) with entity annotations, selected from the original raw tweet dataset rather than from the automatically harvested data, along with 800 negative tweets. After each iteration of data harvesting, we evaluate the classification and labeling models on this human-labeled dataset, which can be viewed as a development set for parameter tuning. Results are reported in Table 5. As can be seen, precision decreases as the algorithm iterates while recall rises. The best F1 score is obtained at the end of the third iteration.

              Model           Pre    Rec    F1
iteration 1   tweet-model 1   0.86   0.40   0.55
              tweet-model 2   0.87   0.84   0.85
              entity label    0.83   0.40   0.54
iteration 2   tweet-model 1   0.78   0.57   0.66
              tweet-model 2   0.83   0.86   0.84
              entity label    0.79   0.60   0.68
iteration 3   tweet-model 1   0.76   0.72   0.74
              tweet-model 2   0.87   0.86   0.86
              entity label    0.77   0.72   0.74
iteration 4   tweet-model 1   0.72   0.74   0.73
              tweet-model 2   0.82   0.82   0.82
              entity label    0.74   0.70   0.72
Table 5: Performance on the manually-labeled devset at different iterations of data harvesting.
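The F1 values in Table 5 are the standard harmonic mean of precision and recall, which can be checked directly:

```python
def f1(precision, recall):
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# iteration 1, tweet-model 1:
round(f1(0.86, 0.40), 2)  # -> 0.55
# iteration 3, entity label:
round(f1(0.77, 0.72), 2)  # -> 0.74
```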

For evaluation, data harvesting without distant supervision (denoted no-distant) naturally constitutes a baseline. Another baseline (denoted one-step-crf) trains a one-step CRF model that directly decides whether a specific token is a like/dislike entity, rather than first making a tweet-level decision. Both no-distant and one-step-crf rely on the recursive framework and tune the number of iterations on the aforementioned gold-standard development set. The test set consists of 100 like/dislike-related tweets (50 like and 50 dislike) with entity labels, mixed with 400 negative tweets. The last baseline we employ is a rule-based extraction approach using only the seed patterns. We report each model’s best end-to-end entity-extraction precision and recall.

Model           Pre    Rec    F1
semi+distant    0.73   0.64   0.682
no-distant      0.70   0.65   0.674
one-step (CRF)  0.67   0.64   0.655
rule            0.80   0.30   0.436
Table 6: Performances of different models on extraction of user preferences (like/dislike) toward entities.

From Table 6, we observe performance improvements from combining user-entity information with distant supervision. Modeling tweet-level and entity-level information separately yields better performance than modeling them in a unified model (one-step-crf).

We apply the model trained in this subsection to our tweet corpora. We filter out entities that appear fewer than 20 times, resulting in roughly 40,000 distinct entities (consecutive entities with the same NER label type are merged).
