Distributed Private Online Learning for Social Big Data Computing over Data Center Networks


Abstract

With the rapid growth of Internet technologies, cloud computing and social networks have become ubiquitous. An increasing number of people participate in social networks, and massive amounts of online social data are generated. To exploit the knowledge contained in these copious data and predict the social behavior of users, we need to realize data mining in social networks. Almost all large online websites use cloud services to process their social data effectively, and these data are gathered from distributed data centers. Because the data are large-scale, high-dimensional, and widely distributed, we propose a distributed sparse online algorithm to handle them. Privacy protection is another important concern in social networks: the privacy of individuals must not be compromised while their social data are being mined. We therefore also consider the privacy problem in this article. Our simulations show that an appropriate level of sparsity enhances the performance of our algorithm, and that the privacy-preserving mechanism does not significantly hurt the performance of the proposed algorithm.

1 Introduction

A social network is referred to as a structure of "Internet users" interconnected through a variety of relations [1]. A single user has different relationships in different social networks, such as friends and followers, and engages in diverse social activities, e.g., posting messages, photos, and other media on Facebook, or uploading, viewing, sharing, and commenting on videos on YouTube. According to statistics, terabytes of social data are generated per day. Storing these data incurs high operational costs, and leaving them unused is a waste of resources. Hence, we want to conduct social big data analysis, in which users act in a social, collaborative context to make sense of data. However, handling such a volume of social data brings many challenges. We next describe the main challenges and the corresponding approaches to them.

Social data are generated all around the world and collected from distributed sources into different, interconnected data centers, so it is hard to process them in a centralized model. For this problem, cloud computing may be a good choice. As is well known, many social networking websites (e.g., Facebook, Twitter, LinkedIn, and YouTube) obtain computing resources across a network and host their social networks on cloud platforms. This cloud-based model has several advantages, chief among which is the lowered infrastructure cost: these companies can rent cloud computing services from third parties according to their actual needs and scale up and down at any time without additional infrastructure cost [2]. Beyond that, they can choose different cloud computing services according to the distribution of their social data. Naturally, for social data analysis in the cloud, a distributed online learning algorithm is needed to handle massive social data in distributed scenarios [3]. Based on cloud computing, we equip each data center with independent online learning ability, and the centers can exchange information with each other across the network. Each data center is required to build a reliable model to serve its local users without directly sharing social data with other centers. In essence, this approach is a distributed optimization technique, to which much research [4] has been devoted. To estimate the utility of the proposed model, we use the notion of "regret" [7] from online learning (see Definition 3).

In the Big Data era, social big data are both large-scale and high-dimensional. A single person engages in a variety of social activities in a social network, so the corresponding vector of his/her social information is "long". However, when a data miner studies consumer behavior with respect to one interest, much of the information in the vector may be irrelevant; for example, a person's height and age contribute little to predicting his taste. Thus, high dimensionality increases the computational complexity of algorithms and weakens the utility of online learning models. To deal with this problem, we introduce a sparse solution for social big data. There are two classical groups of effective methods for sparse online learning [8]. The first group (e.g., [11]) induces sparsity in the weights of online learning algorithms via truncated gradient. The second group follows the dual averaging algorithm [12]. In this paper, we exploit online mirror descent [13] and the Lasso ($\ell_1$) norm [14] to make the parameter updated by the algorithm sparse.
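For intuition, the $\ell_1$ effect can be seen in the soft-thresholding operator, the proximal step behind Lasso-style updates: small coordinates are set exactly to zero. Below is a minimal Python sketch (the function name and values are illustrative, not taken from the paper):

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||v||_1: shrinks each coordinate toward
    zero and sets the small ones exactly to zero, inducing sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

w = np.array([0.8, -0.05, 0.02, -1.3, 0.1])
print(soft_threshold(w, 0.1))   # -> [ 0.7 -0.   0.  -1.2  0. ]
```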

Furthermore, exchanging the information contained in social data among data centers may lead to privacy breaches as it flows across the social network. Once social data are mined without any security precautions, privacy is highly likely to be divulged. Admittedly, preserving privacy inevitably lowers the performance of knowledge discovery on cloud-based social data. We therefore intend to design an algorithm that protects privacy while making full use of the social data. We choose the "differential privacy" [15] technology to guarantee the safety of the data centers in the cloud. At a high level, a differentially private online learning model guarantees that the output of data mining does not change "too much" because of perturbations (i.e., random noise added to the transmitted data) in any individual social data point. That is, whether or not a data point is in the database, the mining outputs are difficult to distinguish, so a miner cannot obtain sensitive information about the individual from the results.

In summary, we make three contributions: 1) we propose a distributed online learning algorithm to handle decentralized social data in real time and demonstrate its feasibility; 2) sparsity is induced when computing on high-dimensional social data to enhance the accuracy of predictions; 3) differential privacy is used to protect the privacy of the data without seriously weakening the performance of the online learning algorithm.

This paper is organized as follows. Section II introduces the system model and proposes the algorithm. The privacy analysis is given in Section III. We analyze the utility of the algorithm in Section IV. Numerical results and performance improvements are shown in Section V. Section VI concludes the paper.

2 System Model

In this section, the system model and our private online learning algorithm are presented.

Consider a social network in which all online users are served on cloud platforms, as in Fig. 1. Users operate on their own personal pages, and the generated social data are collected and transmitted to the nearest data center on the cloud, as shown in Fig. 1. Because the network is huge, many data centers are widely distributed. Each data center has a corresponding cloud computing node, where the nearby social data are processed in real time. As a holonomic system, the social network should have a good knowledge of all the data it owns, so data centers should exchange information with each other. Since there are many data centers and most of them are spread across the world, a data center can never communicate with all other centers. To achieve better economic benefits, each data center can exchange information only with neighboring ones (e.g., a center is connected only to its adjacent centers). Furthermore, random noise should be added to each communication to protect privacy (yellow arrows in Fig. 1). Since such social big data need to be processed efficiently and privately with limited communication, we focus on distributed optimization and differential privacy technologies.

We next introduce how the communications among data centers on the cloud are conducted. Recall that we intend to realize knowledge discovery from social data in real time. A parameter, denoted $w$, is created as the online learning parameter (containing the knowledge mined from the data). At each iteration, each cloud node updates $w$ based on its local data center and then exchanges $w$ with its neighbors. This communication mechanism forms a network topology, which can be fixed or time-variant; we show in Section IV that this has no great influence on the utility of our algorithm.

2.1 Communication Graph

For our online learning social network, we denote the communication matrix at round $t$ by $A(t)$ and let $a_{ij}(t)$ be the $(i,j)$-th element of $A(t)$. In the system, $a_{ij}(t)$ is the weight of the learning parameter that the $j$-th cloud node transmits to the $i$-th one. $a_{ij}(t) > 0$ means there exists a communication between the $i$-th and $j$-th nodes at round $t$, while $a_{ij}(t) = 0$ means no communication between them. For a clear description, we denote the communication graph for a node $i$ at round $t$ by

$$\mathcal{N}_i(t) = \{ j : a_{ij}(t) > 0 \}.$$

To achieve global convergence, we make some assumptions about $A(t)$.


Assumption 1.

For an arbitrary node $i$, there exists a minimal scalar $\eta$, $0 < \eta < 1$, such that

  • $a_{ii}(t) \geq \eta$ for all $t \geq 1$,

  • $\sum_{j=1}^{m} a_{ij}(t) = 1$ and $\sum_{i=1}^{m} a_{ij}(t) = 1$,

  • $a_{ij}(t) > 0$ implies that $a_{ij}(t) \geq \eta$.

Here, Assumptions (1) and (2) state that each node computes a weighted average of the neighboring learning parameters. Assumption (3) ensures that the influences among the nodes are significant.

The above assumption is a standard condition that appears in virtually all research on distributed optimization (e.g., [4]). Fortunately, this technique can be used to solve our distributed online learning problem in social networks.
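As a concrete illustration of Assumption 1, the sketch below builds a hypothetical doubly stochastic mixing matrix for a 4-node ring topology and applies one averaging step; the topology, weights, and dimensions are our assumptions, not the paper's experimental setup:

```python
import numpy as np

# A 4-node ring: each data center averages only with its two adjacent
# centers. The matrix is doubly stochastic and every nonzero entry is at
# least eta = 0.25, satisfying Assumption 1.
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
assert np.allclose(A.sum(axis=0), 1) and np.allclose(A.sum(axis=1), 1)

W = np.random.randn(4, 10)   # each row: one node's learning parameter
W_mixed = A @ W              # each node's weighted average over its neighbors
```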

2.2 Sparse Online Learning

As described, social data are high-dimensional, so the corresponding learning parameter is a long vector. In order to find the factors most related to a predicted behavior, we aggressively set the irrelevant dimensions to zero. Lasso [14] is a famous method for producing coefficients that are exactly zero. Since Lasso cannot be used directly in the algorithm, we combine it with online mirror descent (see Algorithm 1), a special online learning algorithm.

For convenience of analysis, we next make some assumptions about the mathematical model of the online learning system in the social network. We assume there are $m$ data centers over the social network. Each data center collects massive social data every minute and processes them with cloud computing. For the data generated from social networks, we use $x_t^i$ to denote the social data of an individual person arriving at the $i$-th data center at time $t$. The prediction $\hat{y}_t^i$ (e.g., $\hat{y}_t^i = \langle w_t^i, x_t^i \rangle$) helps the online website offer the user a satisfying service. The user then gives feedback, denoted $y_t^i$, telling the website whether the previous prediction made sense for him. Finally, using the loss function $\ell(\cdot)$ (e.g., the hinge loss), we compare $\hat{y}_t^i$ and $y_t^i$ to find how many "mistakes" the online learning algorithm makes. Summing these "mistakes" over time and over the network, we obtain the regret of the whole system, from which we can judge the performance of our algorithm.
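To make one round concrete, here is a minimal single-node sketch of the predict/feedback/loss step under the hinge loss (the vectors and the label are illustrative):

```python
import numpy as np

def hinge_loss(w, x, y):
    """l(w) = max(0, 1 - y * <w, x>): zero when the prediction agrees
    confidently with the feedback label y in {-1, +1}."""
    return max(0.0, 1.0 - y * float(np.dot(w, x)))

w = np.array([0.3, -0.1, 0.0, 0.5])    # a node's current learning parameter
x = np.array([0.2, 0.7, 0.0, 0.4])     # one user's normalized social-data vector
y_hat = np.sign(np.dot(w, x))          # prediction served to the user
y = 1.0                                # the user's feedback
print(hinge_loss(w, x, y))             # this round's "mistake" (0.81)
```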

Assumption 2.

Let $\mathcal{W}$ denote the set of learning parameters $w$. We assume $\mathcal{W}$ and the loss function $\ell$ satisfy:

  • The set $\mathcal{W}$ is a closed and convex subset of $\mathbb{R}^n$. Let $D = \sup_{w, w' \in \mathcal{W}} \|w - w'\|$ denote the diameter of $\mathcal{W}$.

  • The loss function $\ell$ is strongly convex with modulus $\gamma \geq 0$: for all $w, w' \in \mathcal{W}$, we have
$$\ell(w') \geq \ell(w) + \langle \nabla \ell(w), w' - w \rangle + \frac{\gamma}{2}\|w' - w\|^2.$$

  • The subgradients of $\ell$ are uniformly bounded: there exists $L > 0$ such that for all $w \in \mathcal{W}$, we have
$$\|\nabla \ell(w)\| \leq L.$$

Assumption (1) guarantees that there exists an optimal solution in our algorithm. Assumptions (2) and (3) help us analyze the convergence of our algorithm.

2.3 Differential Privacy

Dwork [15] first proposed the definition of differential privacy, which enables a data miner to release statistics of its database without revealing sensitive information about any particular record. In this paper, we realize output perturbation by adding a random noise vector denoted by $\sigma$. This noise prevents malicious data miners from stealing sensitive information (e.g., birthdays and contact info). Based on the parameters defined above, we give the following definition.

Definition 1.

Let $\mathcal{A}$ denote our differentially private online learning algorithm. Let $X = \{x_1, x_2, \ldots, x_T\}$ be a sequence of social data taken from an arbitrary node's local data center, and let $W = \{w_1, w_2, \ldots, w_T\}$ be the corresponding sequence of results of the node, $W = \mathcal{A}(X)$. Then our algorithm is $\epsilon$-differentially private if, given any two adjacent data sequences $X$ and $X'$ that differ in one social data entry, the following holds:

$$\Pr[\mathcal{A}(X) \in W] \leq e^{\epsilon} \Pr[\mathcal{A}(X') \in W]. \qquad (4)$$

This inequality guarantees that whether or not an individual participates in the database makes no significant difference to the output of our algorithm, so an adversary cannot gain useful information about that individual.

2.4 Private Distributed Online Learning Algorithm

We present a private distributed online learning algorithm for cloud-based social networks. Specifically, each cloud computing node propagates its parameter, with noise added, to neighboring nodes. After receiving the parameters from the others, each node computes a weighted average of the received parameters and its own old parameter. Then each node updates the parameter by general online mirror descent and induces sparsity using Lasso. The procedure is summarized in Algorithm 1. Note that $w_t^i$ denotes the parameter of the $i$-th cloud node at time $t$, and that the $h_t$ are a series of $\beta_t$-strongly convex functions.
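Since the listing of Algorithm 1 is not reproduced here, the following Python sketch reconstructs one plausible realization of the steps just described. The step-size schedule, the noise scale (taken from the sensitivity form used in Section III), and the hinge-loss subgradient are our assumptions, not the paper's exact pseudocode:

```python
import numpy as np

def soft_threshold(v, kappa):
    """Entrywise soft-thresholding (the Lasso proximal step)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def private_distributed_omd(data, A, T, eps, lam, L_bound):
    """A sketch of one plausible realization of Algorithm 1.

    data[i][t-1] -> (x, y): node i's social data sample at round t
    A            -> (m, m) doubly stochastic communication matrix (Assumption 1)
    eps          -> differential-privacy budget epsilon
    lam          -> weight of the Lasso (l1) term
    L_bound      -> uniform bound L on subgradient norms (Assumption 2)
    """
    m = len(data)
    n = data[0][0][0].shape[0]            # dimensionality of the data vectors
    w = np.zeros((m, n))                  # one learning parameter per cloud node
    for t in range(1, T + 1):
        alpha = 1.0 / np.sqrt(t)          # assumed step size alpha_t = 1/sqrt(t)
        scale = 2.0 * alpha * L_bound * np.sqrt(n) / eps   # assumed Lap scale S(t)/eps
        noisy = w + np.random.laplace(scale=scale, size=w.shape)  # perturb before sending
        w_mix = A @ noisy                 # weighted average of neighbors' parameters
        for i in range(m):
            x, y = data[i][t - 1]
            # Hinge-loss subgradient on node i's local sample.
            g = -y * x if y * float(np.dot(w_mix[i], x)) < 1.0 else np.zeros(n)
            # Mirror-descent step with h_t = (1/2)||.||^2, then Lasso truncation.
            w[i] = soft_threshold(w_mix[i] - alpha * g, alpha * lam)
    return w
```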

3 Privacy Analysis

As mentioned, exploiting differential privacy (DP) protects privacy while preserving the usability of the social data. In Step 11 of Algorithm 1, $w_{t+1}^i$ is the exchanged parameter, to which we add random noise. The added noise perturbs $w_{t+1}^i$, so no one can mine individual privacy from an exact parameter. Recall that DP is defined mathematically in Definition 1, which aims to mask the difference between the outputs on $X$ and $X'$. Only by satisfying inequality (4) can we ensure the privacy of the social data in each data center.

3.1 Adding Noise

Since we add noise to mask the difference between two datasets differing in at most one point, the sensitivity must be known. Dwork [15] showed that the magnitude of the noise depends on the largest change that a single entry in the data source can have on the output of Algorithm 1; this quantity is referred to as the sensitivity of the algorithm, which we now define.

Definition 2 (Sensitivity).

Based on Definition 1, for any $X$ and $X'$ that differ in exactly one entry, we define the sensitivity of Algorithm 1 at the $t$-th round as

$$S(t) = \max_{X, X'} \left\| w_{t+1}(X) - w_{t+1}(X') \right\|_1. \qquad (5)$$

Lemma 1.

Under Assumptions 1 and 2, if the $L_1$-sensitivity of the parameter is computed as in (5), we obtain

$$S(t) \leq 2\sqrt{n}\, \alpha_t L, \qquad (6)$$

where $n$ denotes the dimensionality of the vectors and $\alpha_t$ is the step size of Algorithm 1 at round $t$.

Proof. In Algorithm 1, $w_{t+1}^i$ is the exchanged parameter, to which the noise $\sigma_t^i$ is added. According to Definition 1, we compare the runs on two adjacent data sequences. Assuming that the only differing social data arrive at time $t$, the two runs differ only through the subgradients $g_t^i$ and $g_t'^i$ computed on the differing entries in Steps 9 and 10 of Algorithm 1, where $\|g_t^i\|_2 \leq L$ and $\|g_t'^i\|_2 \leq L$ by Assumption 2.

Then, we have

$$\left\| w_{t+1}^i - w_{t+1}'^i \right\|_2 \leq \alpha_t \left\| g_t^i - g_t'^i \right\|_2 \leq 2\alpha_t L. \qquad (7)$$

By Definition 2 and the norm inequality $\|v\|_1 \leq \sqrt{n}\,\|v\|_2$ for $v \in \mathbb{R}^n$, combining (5) and (7), we obtain (6). ∎

We determine the magnitude of the noise as follows: $\sigma_t$ is a Laplace random noise vector drawn independently according to the density function

$$\mathrm{Lap}(x \mid \lambda) = \frac{1}{2\lambda} \exp\!\left( -\frac{|x|}{\lambda} \right),$$

where $\lambda = S(t)/\epsilon$. In what follows, we use $\mathrm{Lap}(\lambda)$ to denote this Laplace distribution.
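A minimal sketch of drawing noise calibrated as above, assuming the sensitivity $S(t)$ has already been computed for the current round:

```python
import numpy as np

def laplace_noise(sensitivity, eps, n):
    """Draw a Laplace noise vector with scale lambda = S(t) / eps; each
    coordinate is sampled independently from Lap(lambda)."""
    return np.random.laplace(loc=0.0, scale=sensitivity / eps, size=n)

# Example: perturb a parameter before transmitting it to neighbors.
w = np.zeros(10)
w_noisy = w + laplace_noise(sensitivity=0.5, eps=1.0, n=10)
```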

3.2 Guaranteeing $\epsilon$-Differential Privacy

In our system model, each data center, as an independent cloud node, should protect privacy at every moment. If one data center were invaded by a malicious user, this attacker could obtain information about users' social data stored in other data centers across the network. Hence, every transmitted parameter should be processed by DP (i.e., it should satisfy (4)). Recalling Fig. 1, we add random noise to every communication in the data center network.

Having described the method and the magnitude of the added noise, we next prove that $\epsilon$-differential privacy is guaranteed. First, we demonstrate the privacy preservation at each time $t$.

Lemma 2.

At the $t$-th round, the $i$-th cloud node's output $\tilde{w}_{t+1}^i = w_{t+1}^i + \sigma_t^i$ is $\epsilon$-differentially private.

Proof. Let $w_{t+1}^i$ and $w_{t+1}'^i$ be the parameters computed from two adjacent data sequences $X$ and $X'$. By the definition of differential privacy (see Definition 1), $\tilde{w}_{t+1}^i$ is $\epsilon$-differentially private if the ratio of the output densities under $X$ and $X'$ is at most $e^{\epsilon}$. We have

$$\frac{\Pr[\tilde{w}_{t+1}^i \mid X]}{\Pr[\tilde{w}_{t+1}^i \mid X']}
= \frac{\exp\!\left(-\|\tilde{w}_{t+1}^i - w_{t+1}^i\|_1/\lambda\right)}{\exp\!\left(-\|\tilde{w}_{t+1}^i - w_{t+1}'^i\|_1/\lambda\right)}
\leq \exp\!\left(\frac{\|w_{t+1}^i - w_{t+1}'^i\|_1}{\lambda}\right)
\leq e^{\epsilon},$$

where the first inequality follows from the triangle inequality, and the last inequality follows from Lemma 1 together with the choice $\lambda = S(t)/\epsilon$. ∎

McSherry [16] showed that the privacy guarantee does not degrade across rounds when the samples used in different rounds are disjoint. Our system model is an online website where the social data keep flowing: we dynamically serve users with recommendations based on their recent social behavior, so across the rounds of Algorithm 1 the social data are disjoint. Hence, as Algorithm 1 runs, the privacy guarantee does not degrade, and we obtain the following theorem.

Theorem 1 (Parallel Composition).

On the basis of Definitions 1 and 2, under Assumption 1 and Lemma 2, our algorithm is $\epsilon$-differentially private.

For the details of the proof of Theorem 1, readers are referred to [16].

4 Utility Analysis

We have introduced the notion of regret, which is used to estimate the utility of online learning algorithms. The regret of our online learning algorithm represents the sum of the mistakes made by all data centers during the learning and predicting process. When social websites make personalized recommendations (e.g., songs, videos, and news) for users, not all recommendations make sense for individuals. But we expect that, as the system keeps working and more social data are learned, the predictions used for recommending become more accurate; that is, the regret should have a sublinear upper bound. A lower regret bound therefore indicates a better and faster distributed online learning algorithm. We first give the definition of regret.

Definition 3.

We propose Algorithm 1 for social websites over data center networks, and we measure the regret of the algorithm as

$$R_T = \sum_{t=1}^{T} \sum_{i=1}^{m} \ell_t^i(\bar{w}_t) - \min_{w \in \mathcal{W}} \sum_{t=1}^{T} \sum_{i=1}^{m} \ell_t^i(w), \qquad (11)$$

where $\bar{w}_t = \frac{1}{m} \sum_{i=1}^{m} w_t^i$ denotes the average of the parameters of all data centers at time $t$. Hence, the regret is computed with respect to the average parameter $\bar{w}_t$, which approximately estimates the actual performance of the whole system.
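As an illustration, Definition 3 can be measured empirically as in the sketch below; `loss` is any per-sample loss (e.g., the hinge loss above) and `w_star` is the best fixed parameter found in hindsight, both of which are assumed inputs:

```python
import numpy as np

def regret(loss, W_history, data_history, w_star):
    """Empirical regret per Definition 3: cumulative loss of the averaged
    parameter w_bar_t minus that of the best fixed parameter w_star.

    W_history[t]    -> (m, n) array: every node's parameter at round t
    data_history[t] -> list of (x, y) pairs, one per node, for round t
    """
    total, best = 0.0, 0.0
    for W_t, batch in zip(W_history, data_history):
        w_bar = W_t.mean(axis=0)          # average over the m data centers
        for x, y in batch:
            total += loss(w_bar, x, y)
            best += loss(w_star, x, y)
    return total - best
```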

To analyze the regret of Algorithm 1, we first present a key lemma.

Lemma 3.

Let $h_t$ be $\beta_t$-strongly convex functions with associated norms $\|\cdot\|_{h_t}$ and dual norms $\|\cdot\|_{h_t,*}$. While Algorithm 1 runs, the inequality (12) holds.

Proof. We define the conjugate function $h_t^*(g) = \sup_{w \in \mathcal{W}} \{ \langle g, w \rangle - h_t(w) \}$.

First, according to the Fenchel-Young inequality, we have (14). Then we have (15). Combining (14) and (15) and summing over $t = 1, \ldots, T$ and $i = 1, \ldots, m$, we get (16). According to Lemma 1 of Wang et al. [9], we know (17). Finally, using (16) and (17), we obtain (12). ∎

Based on Lemma 3, we readily obtain the regret bound of our system model.

Theorem 2.

We propose Algorithm 1 for social big data computing over data center networks. Under Assumptions 1 and 2, define the regret function as in (11). Set $h_t(w) = \frac{\sqrt{t}}{2}\|w\|_2^2$, which is $\sqrt{t}$-strongly convex, and let $\alpha_t = 1/\sqrt{t}$. Then the regret satisfies

$$R_T \leq O(\sqrt{T}). \qquad (18)$$

Proof. For convex loss functions, Jensen's inequality bounds the loss at the averaged parameter $\bar{w}_t$ by the average of the losses at the nodes' parameters $w_t^i$. Hence, due to (11) and (12), we obtain a bound consisting of two sums. Since $\alpha_t = 1/\sqrt{t}$, we have $\sum_{t=1}^{T} \alpha_t \leq 2\sqrt{T}$ and $\beta_t = \sqrt{t}$.

We first compute the first sum: setting $h_t(w) = \frac{\sqrt{t}}{2}\|w\|_2^2$, we obtain (20), where $L$ is the subgradient bound defined in Assumption 2. Then, for the second sum, we obtain (21). Combining (20) and (21), we obtain (18). ∎

[Figures 2-5: simulation results discussed below; the sparse configuration reports Sparsity = 64.5%.]

According to Theorem 2, the regret bound matches the classical square-root regret [17], which means fewer mistakes are made in social recommendations as the algorithm runs. This result demonstrates that our private online learning algorithm makes sense for the social system. Furthermore, from (18) we find: 1) a higher privacy level (smaller $\epsilon$) increases the regret bound; 2) as the number of data centers grows, the regret bound becomes higher; 3) the communication matrix does not appear to affect the bound, though we expect it may affect the convergence. All these observations are examined in the following numerical experiments.

5 Simulations

In this section, we conduct four simulations. The first studies the trade-off between privacy and predictive performance. The second examines whether the topology of the social network has a large influence on performance. The third studies the trade-off between sparsity and performance. The final one analyzes how the number of data centers affects accuracy. All simulations are run on real large-scale, high-dimensional social data.

In our implementation, we use the hinge loss $\ell(w_t^i) = \max\{0, 1 - y_t^i \langle w_t^i, x_t^i \rangle\}$, where $(x_t^i, y_t^i)$ are the data available only to the $i$-th data center. To be convincing, we experiment on real social data. Since the tested data are raw social data, we preprocess them: each dimension of the vectors is normalized into a fixed numerical interval, and each data point is labeled with a value in $\{-1, +1\}$ according to its classification attribute. The simulated model is designed as in Fig. 1: a number of computing nodes are distributed randomly, each node is connected only to its adjacent nodes, and every information exchange is perturbed with Laplace noise. All experiments were conducted on a distributed model built with Hadoop under Linux (with 8 CPU cores and 64 GB memory).
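A sketch of the preprocessing just described, with synthetic stand-in data (the interval $[0,1]$ and the dataset shape are our assumptions, not the paper's):

```python
import numpy as np

# Hypothetical preprocessing pass: scale every feature into [0, 1] and map
# the classification attribute to labels in {-1, +1} for the hinge loss.
X_raw = np.random.rand(1000, 50) * 100.0        # stand-in for the raw social data
lo, hi = X_raw.min(axis=0), X_raw.max(axis=0)
X = (X_raw - lo) / (hi - lo + 1e-12)            # per-dimension normalization
attr = np.random.randint(0, 2, size=1000)       # binary classification attribute
y = 2.0 * attr - 1.0                            # labels in {-1.0, +1.0}
```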

In Fig. 2, the non-private algorithm has the lowest regret, as expected, and the regret gets closer to the non-private regret as the privacy preservation becomes weaker; a higher privacy level (smaller $\epsilon$) leads to more regret. Fig. 3 demonstrates that different topologies make no significant difference to the utility of the algorithm. Fig. 4 indicates that an appropriate sparsity yields the best performance, while lower or higher sparsity leads to worse performance; in particular, inducing sparsity noticeably improves the accuracy over non-sparse computing. Fig. 5 studies the performance with respect to the number of data center nodes: more centers cause a slight decline in accuracy.

6 Conclusion

The Internet has entered the Big Data era, and social networks face massive data to handle. Confronting these challenges, we proposed a private distributed online learning algorithm for social big data over data center networks. We demonstrated that a higher privacy level leads to weaker utility of the system, and that appropriate sparsity enhances the performance of online learning on high-dimensional data. Furthermore, delay inevitably exists in social networks, which we did not consider; we hope that online learning with delay can be addressed in future work.

Acknowledgment

This research is supported by the National Natural Science Foundation of China under Grant 61401169.

References

  1. S. P. Ahuja and B. Moore, “A survey of cloud computing and social networks,” Network and Communication Technologies, vol. 2, no. 2, p. 11, 2013.
  2. A. Dusenbery, K. Nguyen, D. Tran et al., “Analysis of an investment social network,” in ICC. IEEE, 2012, pp. 2087-2092.
  3. Y. Jiang and J. Jiang, “Understanding social networks from a multiagent perspective,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 10, pp. 2743-2759, 2014.
  4. J. C. Duchi, A. Agarwal, and M. J. Wainwright, “Dual averaging for distributed optimization,” in Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2012, pp. 1564-1565.
  5. A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multiagent optimization,” IEEE Transactions on Automatic Control, vol. 54, no. 1, pp. 48-61, 2009.
  6. S. Ram, A. Nedić, and V. Veeravalli, “Distributed stochastic subgradient projection algorithms for convex optimization,” Journal of Optimization Theory and Applications, vol. 147, no. 3, pp. 516-545, 2010.
  7. S. Shalev-Shwartz, “Online learning and online convex optimization,” Foundations and Trends in Machine Learning, vol. 4, no. 2, pp. 107-194, 2011.
  8. H. Wang, A. Banerjee, C.-J. Hsieh, P. K. Ravikumar, and I. S. Dhillon, “Large scale distributed sparse precision estimation,” in NIPS, 2013, pp. 584-592.
  9. D. Wang, P. Wu, P. Zhao, and S. C. Hoi, “A framework of sparse online learning and its applications,” arXiv preprint arXiv:1507.07146, 2015.
  10. S. Shalev-Shwartz and A. Tewari, “Stochastic methods for L1-regularized loss minimization,” The Journal of Machine Learning Research, vol. 12, pp. 1865-1892, 2011.
  11. J. Langford, L. Li, and T. Zhang, “Sparse online learning via truncated gradient,” Journal of Machine Learning Research, 2009, pp. 777-801.
  12. L. Xiao, “Dual averaging methods for regularized stochastic learning and online optimization,” Journal of Machine Learning Research, vol. 11, pp. 2543-2596, 2010.
  13. J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari, “Composite objective mirror descent,” in COLT. Citeseer, 2010, pp. 14-26.
  14. R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society. Series B (Methodological), pp. 267-288, 1996.
  15. C. Dwork, “Differential privacy,” in Proceedings of the 33rd ICALP. Springer-Verlag, 2006, pp. 1-12.
  16. F. D. McSherry, “Privacy integrated queries: an extensible platform for privacy-preserving data analysis,” in Proceedings of the SIGMOD International Conference on Management of data. ACM, 2009, pp. 19-30.
  17. M. Zinkevich, “Online convex programming and generalized infinitesimal gradient ascent,” in ICML, 2003, pp. 928-936.