Understanding and Controlling User Linkability in Decentralized Learning

Tribhuvanesh Orekondy  Seong Joon Oh  Bernt Schiele  Mario Fritz
Max Planck Institute for Informatics
Saarland Informatics Campus
Saarbrücken, Germany
{orekondy, joon, schiele, mfritz}@mpi-inf.mpg.de

Machine Learning techniques are widely used by online services (e.g., Google, Apple) in order to analyze and make predictions on user data. As many of the provided services are user-centric (e.g., personal photo collections, speech recognition, personal assistance), user data generated on personal devices is key to providing the service. In order to protect the data and the privacy of the user, federated learning techniques have been proposed, where the data never leaves the user’s device and “only” model updates are communicated back to the server. In our work, we propose a new threat model that is not concerned with learning about the content, but rather with the linkability of users in such decentralized learning scenarios.

We show that model updates are characteristic of users and therefore lend themselves to linkability attacks. We show identification and matching of users across devices in closed and open world scenarios. In our experiments, we find our attacks to be highly effective, achieving 20-175× chance-level performance.

In order to mitigate the risks of linkability attacks, we study various strategies. As adding random noise does not offer convincing operation points, we propose strategies based on using calibrated domain-specific data; we find these strategies offer substantial protection against linkability threats with little effect on utility.

privacy; machine learning

1. Introduction

Advances in machine learning (ML) have paved the way to solving numerous problems across various domains, e.g., medical diagnosis, autonomous driving, or fraud detection. As a result, many companies provide ML-based services such as virtual assistants that can understand and communicate in natural language. Training models for many such ML applications, however, requires large-scale data. One such source of data is mobile devices, which capture signals using various sensors (e.g., GPS, camera, biometrics, microphone).

Due to the privacy-sensitive nature of this data, many methods have been proposed recently (Shokri and Shmatikov, 2015; McMahan et al., 2017; McMahan and Ramage, 2017; Team, 2017; Geyer et al., 2017) to learn from such decentralized data sources and with a special emphasis of learning on mobile devices. Primarily, these methods enable training ML models without the raw data ever leaving the device and hence already provide users one layer of privacy. Moreover, this also allows application developers to offload computation to increasingly powerful mobile devices.

Despite the apparent advantage that raw private data can be kept local, decentralized learning does not yet completely solve privacy concerns. While initial research (Shokri and Shmatikov, 2015; McMahan et al., 2018; Geyer et al., 2017; Bonawitz et al., 2017a) has been conducted on keeping the data private, risks about linkability have been largely overlooked. This work, to the best of our knowledge, is the first to study linkability threats in a decentralized machine learning scenario.

In our experiments, a set of users train a Convolutional Neural Network (CNN) for image classification in a federated learning scheme. Within this decentralized learning framework, we present two linkability attacks which solely use the user-specific model updates communicated by devices. We find that these model updates contain sufficient user-specific information for linkability attacks, remaining consistent across different sets of users’ data, multiple devices, and time-spans. In our experiments, we show our linkability attacks succeed with 85% (48× chance-level) and 52% (175× chance-level) accuracy on two large-scale challenging datasets.

Furthermore, we propose the first mitigation strategies that are effective in preventing such linkability attacks, while maintaining utility of the task. More specifically, we find our methods mitigate the attacks with up to 86% and 95% effectiveness on the two datasets.

2. Related Work

In this section we look at related work specific to our task of identifying linkability threats in decentralized learning setups.

Deep Learning and Privacy.   Deep learning (Krizhevsky et al., 2012; He et al., 2016) is a highly effective machine learning technology with applications encompassing language translation, autonomous driving, and medical diagnosis. Recently, applications have also been proposed to assist and safeguard user privacy (Orekondy et al., 2017, 2018; Oh et al., 2017), introducing techniques such as erasing sensitive content in users’ images before sharing them on social networks. Across many applications of deep learning, since the resulting models are often complex, comprising millions of parameters, and are trained on large amounts of data, there is a push (Abadi et al., 2016b; Shokri et al., 2017; Papernot et al., 2017; Song et al., 2017; Pyrgelis et al., 2018) to address model privacy. For instance, these models can be analyzed to recover private information about the users or the model itself (Shokri et al., 2017; Oh et al., 2018; Fredrikson et al., 2015; Dang et al., 2017). Our work is closest to the model privacy threats, where the adversary attempts to link and re-identify users from the model parameter updates in a decentralized learning setup.

Privacy Threats in Decentralized Learning.   Training ML models for many tasks often requires sensitive training data, e.g., personal photos, medical diagnoses, and financial records. Decentralized learning (also referred to as collaborative (Shokri et al., 2017) or federated learning (McMahan et al., 2017)) has been proposed as a solution for users to train ML models without explicitly sharing the raw data. Such protocols work by users training local ML models on their private datasets and communicating updates over multiple rounds to a centralized server, which co-ordinates the training. This results in an ML model effectively trained on all user data, although the users do not share the raw private data with another party. Work in this context has been studied along two attack surfaces: the trained model itself and the model updates communicated to the server. (Hitaj et al., 2017) focus on the former, exploring reconstruction and poisoning attacks in this setting. With respect to the model updates, prior works (McMahan et al., 2018; Shokri and Shmatikov, 2015; Geyer et al., 2017) have addressed them within the differential privacy framework, but without considering an explicit threat model. Our work proposes an explicit linkability threat model which uses model updates as the attack surface to perform the attacks.

Linkability Attacks.   Linkability attacks involve an adversary attributing one or more disconnected entities (e.g., social media profiles, text) to a single identity. A number of works have studied linkability threats on social networks, such as cross-site identity tracking to match profiles of the same user. This is typically done using various user cues: profile attributes (Goga et al., 2015; Northern and Nelson, 2011; Perito et al., 2011), geo-location (Cecaj et al., 2016), social graph structure (Korula and Lattanzi, 2014; Labitzke et al., 2011; You et al., 2011) or content (Iofciu et al., 2011; Goga et al., 2013). Some works have addressed linking written texts such as reviews or articles using stylometric features (Almishari and Tsudik, 2012; Shetty et al., 2017). Recently, (Backes et al., 2016) studied the temporal linkability of microRNA expression profiles. In this work, to the best of our knowledge, we are the first to demonstrate a new linkability threat based on incremental parameter updates of users’ ML models.

Mitigation.   There exist some prior works on mitigating the vulnerability of models. In our task, a user needs to privately communicate a vector (a model update) to a server. Specific to the decentralized learning setup, (Shokri and Shmatikov, 2015) propose a selective SGD algorithm which partially communicates these vectors. This work, along with few others (McMahan et al., 2018; Abadi et al., 2016b; Geyer et al., 2017) propose adding properly-calibrated random noise via differential privacy (Dwork, 2006, 2008) mechanisms to achieve a reasonable privacy-utility trade-off. Other alternate countermeasures typically involve encryption (Yonetani et al., 2017; Bost et al., 2015; Gilad-Bachrach et al., 2016; Graepel et al., 2012) or secure multiparty communication (Yao, 1986; Bonawitz et al., 2017b). In this work, we propose an alternative client-sided distortion strategy which adds domain-specific noise with the intention of achieving minimal loss to utility.

3. Background: Decentralized Learning

(a) Collaborative Learning (Shokri and Shmatikov, 2015)
(b) Federated Learning (McMahan et al., 2017)
(c) Client-based Federated Learning
Figure 1. Learning from decentralized data, visualized w.r.t. the clients’ data distributions. Colors of clients and users indicate their data distribution.

In this section, we briefly introduce the task and prior work on decentralized learning. We motivate the task in Section 3.1. We present the terminology and notation specific to this setting in Section 3.2 and discuss distributed data scenarios in Section 3.3. Section 3.4 covers a popular decentralized algorithm (McMahan et al., 2017), which we use to train our collaborative models.

3.1. Motivation

Machine Learning pipelines generally involve collecting data suitable for the task and training a model on a single machine. However, datasets can be massive (Krasin et al., 2017; Abu-El-Haija et al., 2016; Sun et al., 2017), making it inefficient or even impossible to train the model on a single machine. As a result, decentralized ML algorithms have been proposed (McDonald et al., 2010; Povey et al., 2014; Fercoq et al., 2014; Ma et al., 2015; Shokri and Shmatikov, 2015; Yang, 2013; Dean et al., 2012) as a solution to learn when the data is spread across multiple machines in a data center.

Alternatively, large amounts of data can naturally be available across multiple clients. For instance, user-generated data on personal computing devices such as smart phones, tablets or laptops. In these cases, service providers can use decentralized algorithms to train ML models on this naturally distributed user-centric dataset. As an added benefit, the model can be trained with the raw personal data of the users, without it ever leaving the users’ personal devices. Moreover, since these devices capture and generate data using multiple sensors (e.g., GPS, camera, audio, biometrics), this allows for training of models for many interesting applications. For example, photo assistants to help automatically organize images on their mobile devices; or intelligent on-screen keyboards to provide customized predictions and auto-corrections.

For the rest of the paper, we specifically focus on the case of learning from decentralized user data on personal computing devices, which we refer to as clients. Due to its applicability, there is a growing interest among both industry (McMahan and Ramage, 2017; Team, 2017) and academia (Shokri and Shmatikov, 2015; McMahan et al., 2017; McMahan et al., 2018; Bonawitz et al., 2017b; Konečnỳ et al., 2016) to leverage such methods.

3.2. Notation and Terminology

In the previous section, we motivated how a decentralized learning scheme is beneficial to both users and service providers. Now, we move the focus to the technical details.

The overall objective is to perform supervised training and learn parameters θ of a model f using a dataset D = {(x_i, y_i)}_{i=1}^{n} by minimizing a loss function ℓ. We use F(θ) to denote the loss computed over the set of n training examples:

F(θ) = (1/n) Σ_{i=1}^{n} ℓ(f(x_i; θ), y_i)   (1)

k, u — client, user
𝒦, 𝒰 — client set, user set
K, U — # clients, # users
θ_t — server’s/global weights at round t
H_t^k — weight update by client k at round t
u(k) — user of client k
P_k — data on client k
n_k — # datapoints on client k
Table 1. Notation

In a decentralized setting, we refer to the notation in Table 1. Each client k has a local dataset P_k. Collectively, across all clients we have P = ∪_k P_k, such that n = Σ_k n_k. A server S is available to help co-ordinate the training process.

Following the example of a photo-assistant, a real-world user u owns a smart-phone k which is used to capture personal photos P_k. A photo assistant could be effectively trained across the photos P of all users.

Using F_k(θ) to denote the objective solved locally on client k, the objective in Equation 1 can now be re-written as:

F(θ) = Σ_{k=1}^{K} (n_k / n) F_k(θ),  where  F_k(θ) = (1/n_k) Σ_{i ∈ P_k} ℓ(f(x_i; θ), y_i)   (2)
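The decomposition above, where each client's local objective is weighted by its share n_k/n of the data, can be sketched numerically as follows (the loss values are placeholders, not outputs of a real model):

```python
# Sketch: the global federated objective as a weighted sum of
# per-client objectives F_k, weighted by local dataset sizes n_k.
# (Illustrative only; the F_k values here are placeholder losses.)

def global_objective(local_losses, client_sizes):
    """F(theta) = sum_k (n_k / n) * F_k(theta)."""
    n = sum(client_sizes)
    return sum((n_k / n) * f_k for f_k, n_k in zip(local_losses, client_sizes))

# Two clients: one with 30 points and loss 0.6, one with 70 points and loss 0.2.
loss = global_objective([0.6, 0.2], [30, 70])
print(round(loss, 3))  # 0.3*0.6 + 0.7*0.2 = 0.32
```

Note that clients with more data pull the global objective towards their local loss, which is exactly what makes updates informative about a client's data distribution.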
With respect to this setting, we discuss the data distribution in Section 3.3 and the optimization protocol in Section 3.4.

3.3. Decentralized Data Scenarios

Distributed learning algorithms (McDonald et al., 2010; Povey et al., 2014; Fercoq et al., 2014; Ma et al., 2015; Shokri and Shmatikov, 2015; Yang, 2013; Dean et al., 2012) typically assume data among the clients is (a) independent and identically distributed (IID) and (b) balanced, as can be seen in Figure 1(a). Consequently, each partition P_k on client k is a random subset of the full dataset P. For instance, this occurs when a company has collected large amounts of data and this data is spread across multiple machines in one or more data centers.

In contrast, data can be non-IID and imbalanced among clients, which is characteristic of user-generated data on mobile devices (visualized in Figure 1(b)). For instance, consider the user-specific photographic style and content when capturing image data on smart-phones. The recently proposed federated learning approaches by Google (McMahan et al., 2017; McMahan and Ramage, 2017; McMahan et al., 2018) address training ML models in this decentralized data scenario.

3.4. Algorithm for Decentralized Learning

Given the data partitioned among clients, the objective is to learn parameters θ of the model f, in the presence of a server S. We use the popular FederatedAveraging algorithm (McMahan et al., 2017; McMahan and Ramage, 2017) (Algorithm 1), proposed specifically to perform training on non-IID and imbalanced decentralized data; it has also served as the footing for multiple prior works (Geyer et al., 2017; McMahan et al., 2018; Bonawitz et al., 2017b; Smith et al., 2017). The idea here is that training occurs over multiple rounds, where in each round t, a fraction of clients train models using their local data and communicate only an incremental model update H_t^k towards the server’s global model θ_t. The server aggregates updates from multiple clients and shares back an updated, improved model after each round. Over multiple rounds of communication, the users converge to model parameters that have been effectively learnt from all the data, without their raw data ever being communicated to the server or another client. Performance of this algorithm is influenced by three parameters: (a) C, the fraction of clients sampled in each round to perform a model update, (b) E, the number of local epochs (training passes) the client performs before communicating an update, and (c) B, the local batch size used for client updates.

Server’s algorithm:
Input: K clients; T rounds; C fraction of clients sampled each round; B client’s batch size; E number of local epochs
  randomly initialize θ_0
  for round t = 1 to T do
      S_t ← random sample of max(⌈C · K⌉, 1) clients
      for each client k ∈ S_t in parallel do
          H_t^k ← ClientUpdate(k, θ_{t-1})
      end for
      θ_t ← θ_{t-1} + Σ_{k ∈ S_t} (n_k / Σ_{j ∈ S_t} n_j) · H_t^k
  end for

ClientUpdate(k, θ):
  θ′ ← θ; split local data P_k into batches of size B
  for local epoch 1 to E do
      for each batch b do
          θ′ ← θ′ − η ∇ℓ(θ′; b)
      end for
  end for
  return H ← θ′ − θ (the incremental update)

Algorithm 1 FederatedAveraging (McMahan et al., 2017)
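The round structure of FederatedAveraging can be sketched in a framework-free way as follows; the `local_update` helper is a hypothetical stand-in for a client's local training (E epochs of SGD with batch size B), and weights are plain Python lists for illustration:

```python
import random

def federated_averaging(client_data, init_weights, rounds, C, E, B, local_update):
    """Sketch of FederatedAveraging. `local_update(weights, data, E, B)` is a
    hypothetical helper returning the client's locally trained weights."""
    theta = init_weights
    K = len(client_data)
    m = max(1, int(C * K))              # number of clients sampled per round
    for t in range(rounds):
        sampled = random.sample(range(K), m)
        total_n = sum(len(client_data[k]) for k in sampled)
        # Aggregate client results weighted by local dataset size n_k.
        new_theta = [0.0] * len(theta)
        for k in sampled:
            w_k = local_update(theta, client_data[k], E, B)
            n_k = len(client_data[k])
            for i in range(len(theta)):
                new_theta[i] += (n_k / total_n) * w_k[i]
        theta = new_theta
    return theta
```

As a usage example, plugging in a trivial `local_update` that ignores the global weights shows that the server's aggregate is dominated by clients holding more data, since each contribution is scaled by n_k.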

4. Setup: Decentralized Learning

In the previous section, we introduced the idea and concepts relevant for training tasks in a decentralized manner. Now, we discuss how we extend it to our setting, where users collaboratively train a photo assistant. Section 4.1 presents our decentralized data model. We then cover the specifics of the collaborative task and additionally discuss the CNN model to perform this task in Section 4.2.

4.1. Modeling Per-Client User Data

In typical federated learning settings (Figure 1(b)), each client corresponds to one particular user. However, recent studies (Internet, 2018; Anderson, 2015) demonstrate that users commonly own multiple devices. In particular, (Internet, 2018) shows 77% of Americans own a smart-phone, 73% a desktop/laptop computer, 53% a tablet computer and 22% an e-reader. In such cases, a user could capture, store or sync data among many devices and, as a result, exhibit similar data patterns (photographic styles, language patterns, etc.) across multiple clients.

In order to model this, as shown in Figure 1(c), we consider the case of multiple clients associated with each user in the federated-learning setup. Specifically, we assume (a) each user’s data is available on multiple clients participating simultaneously in the federated learning setup, (b) a client strictly contains data of only a single user, and (c) each client contains a unique set of data points.

4.2. Collaborative Task

Using the data partitioned across multiple clients, the central objective is to perform a collaborative task by effectively training over all the data. This is achieved by supervised training of the model f (Equation 1).

For this work, we choose multilabel classification as the collaborative task: given an input x, predict label probabilities ŷ ∈ [0, 1]^L, where ŷ_l denotes the probability of output label l. Note that this is a generalization of the single-label classification task (where the probabilities sum to one). Classification models find their applicability in numerous real-world predictive tasks (e.g., predicting the next word, spam classification, or fraud detection).
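The distinction between independent per-label probabilities and a single-label softmax can be illustrated with a sigmoid output sketch (the logit values below are arbitrary placeholders):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def multilabel_predict(logits):
    """Independent per-label probabilities: each entry lies in [0, 1],
    and the entries need not sum to one (unlike single-label softmax)."""
    return [sigmoid(z) for z in logits]

probs = multilabel_predict([2.0, -1.0, 0.0])
print([round(p, 3) for p in probs])  # [0.881, 0.269, 0.5]
```

Because labels are scored independently, an image can simultaneously receive high probability for, say, both ‘boat’ and ‘person’.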

Model for Collaborative Task.   In our experiments, we consider the task of multilabel image classification, in the spirit of a photo assistant. For example, such assistants can be used to organize personal user photos by labeling them with ‘things’ that appear in the photo e.g., boats, towers, cathedrals and selfies. Since training them relies on human-centric and private image data that is readily available on mobile devices, it is ideal if the assistant can be trained without the raw data ever leaving the client.

We model the multilabel image classifier in Equation 1 as a deep Convolutional Neural Network (CNN), arguably the most effective architecture for many visual recognition tasks: image classification (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2016), object detection (Ren et al., 2015; Redmon and Farhadi, 2017), segmentation (Long et al., 2015; He et al., 2017), etc. Training of modern, parameter-heavy CNN architectures requires heavy computational resources; it takes 14 GPU days (You et al., 2017) to train ResNet, a state-of-the-art ImageNet classifier. Since our scenario involves the computation of model updates on local clients like mobile devices and laptops, it is not feasible for practitioners to implement federated learning on such a large-scale architecture. In order to reflect such realism, we use MobileNet (Howard et al., 2017) as the model architecture. It is a lightweight architecture that strikes a good balance between latency, accuracy and size compared to other popular models. Consequently, it is supported as a standard model on popular on-device machine learning frameworks, e.g., Apple’s CoreML (https://developer.apple.com/machine-learning/) and Google’s Tensorflow Lite (https://www.tensorflow.org/mobile/tflite/).

4.3. Assumptions for Collaborative Learning

We make the following assumptions in the decentralized setting where the users (victims for linkability attacks in the next section) are training the collaborative task:

  1. All clients collaboratively train a model for the same task.

  2. Data distribution on clients is stationary.

  3. All clients are honest and do not intend to poison or compromise performance on the collaborative task.

We consider these assumptions reasonable and expect them to translate well to real-world scenarios.

5. Threat Model

In this section, we begin by motivating the linkability threat in Section 5.1. Based on the assumptions in Section 5.2, we present two linkability attacks in the next section.

5.1. Motivation and Examples

The key hypothesis behind our threat model is that there exist user-specific patterns that can be captured and exploited by adversaries to link users across clients. These patterns may be encoded into the model update signals of multiple clients of the same user. This insight allows an adversary to link users, clients and model updates in the decentralized learning setup.

Examples.   Our threat model can lead to attacks across a variety of scenarios compromising the users’ privacy. Consider user Alice who uses the same popular photo assistant application on a smart-phone and a tablet, which uses images on the device for federated learning (Algorithm 1). To train the central recognition model, the devices regularly communicate model updates computed on the local user-centric data to a server. The user-specific patterns in the model updates across these devices potentially lead to numerous linkability threats:

  1. The adversary eavesdrops on Alice’s model updates and learns that the devices belong to the same user. As a result, present and future personal information in the smart phone is associated with that in the tablet.

  2. Alice regularly connects to a VPN or Tor network to anonymize Internet traffic by obfuscating the source IP address. However, if the photo assistant app is communicating model updates in the background, this can be used as a signature to link Alice across various IP addresses.

  3. The adversary creates multiple clients with customized data distributions, such as by mimicking photos of users interested in automobiles, fashion or food. The adversary can then perform linkability attacks to associate model updates from these mimicked users to Alice and find her interests.

  4. The adversary creates a shadow device with Alice’s photos retrieved using public information or through black markets. The adversary can now link updates from this shadow device to Alice’s devices. Upon retrieving the new user-specific model updates, the adversary can recover the IP addresses of the requests (possibly exposing location) or perform reconstruction attacks (Hitaj et al., 2017; Yonetani et al., 2017).

5.2. Assumptions

We assume that the adversary’s objective is to perform linkability attacks solely using model updates. Furthermore, we assume the adversary:

  1. does not modify model updates communicated during decentralized training;

  2. has access to unencrypted weight updates to perform the attack;

  3. performs learning-based attacks.

In addition, we assume the adversary knows the model architecture. However, this is not strictly required, since the adversary can perform random access into the parameter values. Note that assumption 2 is a standard assumption in prior works (McMahan et al., 2018; Shokri and Shmatikov, 2015) that study the privacy of model updates. It holds when the server is malicious or when an adversary eavesdrops on updates sent over an unencrypted channel.

6. Linkability Attacks

In this section, we present two linkability attacks: an identification attack to associate a user profile with a model update, and a matching attack to associate two model updates with each other. In both cases, the adversary uses machine learning models to learn generalizable user-specific patterns in the model updates.

Data for Attacks.   As a result of communications between clients and the server during the decentralized training, a set of model updates is generated. In the following, we denote a model update simply by H. For training and evaluating attack models, we enrich each model update with the corresponding client k, user u, and training round t as labels.

Training and evaluation of all our attack models use H as input. In our setting (Section 4.2), these are model update parameters of the MobileNet CNN generated by each client. H contains 3.5M parameters (float32 numbers) representing the weight updates of 26 convolutional and 1 Fully Connected (FC) layers. We assume the adversary’s ML attack models are: (i) trained on the flattened and normalized parameters of the FC layer (a 19k-dim vector), (ii) independent of the round t, and (iii) performed post-hoc, i.e., the adversary attacks recorded or stored updates. We will later relax these assumptions and experimentally show that this has minimal effect on the proposed attack scenarios.

6.1. Identification Attack

In this attack, the adversary knows a certain set of updates and the corresponding user profiles (training data). The task is then to apply this knowledge to new updates:

H ↦ u(H), i.e., predict the user u that generated an unseen update H.
The adversary may have acquired the user profile to perform this attack based on, e.g., information leaked by the device (https://www.theverge.com/circuitbreaker/2017/10/11/16457954/oneplus-phones-collecting-sensitive-data) or generated by an adversary (T3-4). An identification attack allows the adversary to identify a target user from the updates, enabling further attacks to be performed (Hitaj et al., 2017; Shokri et al., 2017).

We consider the following identification attack models:

  1. Chance: The adversary classifies users uniformly at random.

  2. SVM: The adversary trains a multi-class linear Support Vector Machine (SVM) to classify users.

  3. K-NN: The adversary performs K-Nearest Neighbor classification. We use K=10.

  4. MLP: The adversary trains a Multilayer Perceptron (MLP) classifier with 128 hidden units, ReLU activations and Dropout with rate 0.2 to classify users from input updates. The MLP is trained using SGD with learning rate 0.01, momentum 0.9 and learning-rate decay 1e-6. The architecture is visualized in Appendix A.
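A minimal forward-pass sketch of such an MLP attack model is given below. The weights are randomly initialized placeholders rather than a trained attack, dropout is disabled as at inference time, and the dimensions (a 19k-dim FC update, 53 users as in PIPA) follow the setup described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_identify(update, W1, b1, W2, b2):
    """One hidden layer of 128 ReLU units, softmax over users.
    Dropout (rate 0.2 during training) is disabled at inference."""
    h = np.maximum(0.0, update @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

d, n_users = 19000, 53                      # FC-update dim, #users (PIPA-like)
W1 = rng.normal(0, 0.01, (d, 128)); b1 = np.zeros(128)
W2 = rng.normal(0, 0.01, (128, n_users)); b2 = np.zeros(n_users)
probs = mlp_identify(rng.normal(size=d), W1, b1, W2, b2)
print(probs.shape, round(float(probs.sum()), 6))  # (53,) 1.0
```

The predicted user would be `probs.argmax()`; with untrained weights this is of course arbitrary, and only the shapes and the probabilistic output are meaningful here.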

For the experiments, we split the client set into disjoint partitions K_train and K_test, such that for every user at least one client is in each partition. During training, the adversary observes updates from K_train, the training split of clients. We use the other partition K_test for evaluation. Each client produces an equal number of updates during decentralized training and this set is balanced w.r.t. user labels.

6.2. Matching Attack

In this task, the adversary’s objective is to predict the match probability given a pair of parameter updates (H_1, H_2):

P(u(H_1) = u(H_2) | H_1, H_2)
By matching, the adversary can associate information from different clients and link it to the same user profile. We consider the following matching attack models:

  1. Chance: The adversary performs chance-level classification by predicting a constant match probability, independent of the model updates.

  2. MLP: The adversary reuses the MLP-based identification model to predict per-user probabilities for each of the two weight updates. We then compute the final match probability as the inner product of the two per-user probability vectors, i.e., the probability that both updates are attributed to the same user.

  3. Siamese: The adversary trains a Siamese network (Bromley et al., 1994; Chopra et al., 2005) with metric learning (Xing et al., 2003; Weinberger et al., 2006), structured as: (a) two FC-128 layers with ReLU activations which encode each update into a 128-dim embedding, (b) a distance layer to represent the distance between these embeddings, and (c) an FC-1 layer with sigmoid activation to predict the match probability. We minimize the binary cross-entropy loss and perform optimization using RMSProp. The architecture is visualized in Appendix A.
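The Siamese structure can be sketched as follows. The single random-projection branch below stands in for the two FC-128 ReLU layers, all weights are untrained placeholders, and the key property illustrated is weight sharing: both updates pass through the same encoder before the distance is taken.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W):
    """Shared embedding branch (stand-in for the FC-128 ReLU layers)."""
    return np.maximum(0.0, x @ W)

def match_probability(u1, u2, W, w_out):
    """Siamese matching: embed both updates with SHARED weights W,
    take the element-wise absolute distance, then a sigmoid unit."""
    d = np.abs(encode(u1, W) - encode(u2, W))
    return 1.0 / (1.0 + np.exp(-(d @ w_out)))

dim = 64                                   # toy update dimension
W = rng.normal(0, 0.1, (dim, 128))
w_out = rng.normal(0, 0.1, 128)
u = rng.normal(size=dim)
print(round(float(match_probability(u, u, W, w_out)), 3))  # identical updates -> 0.5
```

With untrained weights, a pair of identical updates yields a zero distance vector and hence a score of exactly sigmoid(0) = 0.5; training would push matching pairs towards 1 and non-matching pairs towards 0.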

We evaluate the attacks on a balanced set of update pairs (H_1, H_2). We always consider the challenging case where H_1 and H_2 are generated from two different clients. Further details on generating this data for training and evaluation across a range of scenarios are discussed in the evaluation sections of the attack (Sections 8.3-8.4).

6.3. Attack Scenarios

We consider two phases to the adversary’s attack. In the observation (training) phase, the adversary creates an attack model based on updates from a certain set of users. In the test (evaluation) phase, the adversary performs the attack on a new set of clients. We now present two attack scenarios, based on the occurrence of seen and unseen users at test-time.

Closed-world.   In this scenario, we assume that in the observation phase the adversary has encountered at least one update from each user to create both attack models.

Open-world.   Now we consider a more unrestricted scenario, where the adversary encounters updates from new, unseen users at test time. We first partition the user set into seen and unseen users. In the observation phase, the adversary can create an attack model solely based on updates from the seen set of users. At test time, the adversary additionally encounters unseen users. Specific to the identification attack, the adversary now predicts a user among the seen users plus a special class ukn, which denotes unseen users. The matching attack remains unaffected.

7. Datasets

To study the federated learning setup, we need datasets to train a model for the collaborative task. In our experiments, we consider training a multi-label image classifier (Section 4.2) through federated learning. We use two public, academic image datasets, PIPA (Zhang et al., 2015) and OpenImages (Krasin et al., 2017), to train this classifier. Since both these datasets are based on images from the popular photo-sharing website Flickr (https://www.flickr.com/), we find the training images (a) have the notion of an “owner” or “user” and (b) contain user-specific patterns; this makes them ideal for training a classifier on examples resembling user-generated data on clients. All images in both datasets are publicly available on Flickr and have been used in various prior works (Veit et al., 2017; Shen et al., 2017; Masi et al., 2016; Zheng et al., 2017; Oh et al., 2016).

Figure 2. Examples of users (with anonymized Flickr userids) and their images from our dataset. Images are grouped by photoset (PIPA) or captured date (OpenImages).

7.1. PIPA

PIPA (Zhang et al., 2015) is a large-scale dataset of 37k personal photos uploaded by actual Flickr users (indicated in the author field of the Flickr photo metadata). To ensure a certain minimal amount of per-user data, we only use users with at least 100 images, resulting in 33k images over 53 users. For each user, images are further grouped into Flickr albums (denoted as photosets in the Flickr metadata) that roughly capture event changes. As an example, Figure 2 displays images of two random users and their photosets.

PIPA was originally published as a person recognition dataset; to enable the training of a multi-label image classifier, we obtain labels for each image by running a state-of-the-art object detector (Huang et al., 2017) that detects 80 COCO (Lin et al., 2014) classes, such as umbrella, backpack, and bicycle. To perform reasonable training and evaluation of the multilabel classification task, we use 19 classes that occur in approximately >1% of images with high precision.

Per-user Data Splits.   In this paper, we consider the data to be distributed across multiple clients (Section 4.1). There are multiple ways we can model this in the experiments. At one extreme, all clients of each user will have the same data distribution (e.g., whenever Alice wants to take a photo, she flips a coin to decide whether to use her cell phone or tablet); at the other extreme, data on the clients will not share any common features at all (e.g., Clark Kent takes photos of his daily life with his cell phone, but as Superman, he uses his tablet to take photos all over the world). Depending on the similarity of per-user data across clients, the linkability attack may be easier or harder. For PIPA, we use two types of per-user data splits:

  1. random: images per user are randomly distributed across clients;

  2. photoset: photosets and their corresponding images are distributed across clients. For instance, for the user bob in Figure 2, photosets {32, 165, ..} are placed on one of his clients and {765, ..} on another. This is a more challenging split due to variations of locations, events, and subjects in images across photosets.
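The photoset split above can be sketched as follows, assuming a hypothetical `photosets` mapping from album id to image files; whole albums are assigned round-robin so that no photoset is ever split across clients:

```python
import random

def photoset_split(photosets, n_clients, seed=0):
    """Distribute whole photosets (album_id -> image list) across a
    user's clients, keeping each album on a single client."""
    rng = random.Random(seed)
    album_ids = sorted(photosets)
    rng.shuffle(album_ids)
    clients = [[] for _ in range(n_clients)]
    for i, album in enumerate(album_ids):
        clients[i % n_clients].extend(photosets[album])
    return clients

# Hypothetical albums for one user, keyed by photoset id.
albums = {"32": ["a.jpg", "b.jpg"], "165": ["c.jpg"], "765": ["d.jpg", "e.jpg"]}
c1, c2 = photoset_split(albums, 2)
print(sorted(len(c) for c in (c1, c2)))
```

Since entire albums move together, the two clients end up with different events and locations, which is what makes this split harder for a linkability attacker than the random split.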

                     PIPA     OpenImages
# Images             33,051   317,008
# Users              53       327
# Labels             19       18
Min. #images/user    104      500
Avg. #images/user    623      969
Max. #images/user    3,710    3,895
Table 2. Dataset Statistics

7.2. OpenImages

OpenImages (Krasin et al., 2017) is a large-scale public dataset from Google, consisting of 9M Flickr image URLs and weakly labeled image-level annotations across 19.8k classes. We work with a subset of 1.7M images, for which additional bounding box annotations over 600 classes are provided.

The annotations for the images in this dataset are severely imbalanced, with just 100 classes occurring in more than approximately 1% of the images. Moreover, the number of classes is large, and classes often overlap semantically (e.g., person, man, human). Hence, for learning the multilabel classifier, we choose 18 classes with minimal overlap (e.g., car, person, building).

Every image in OpenImages contains user information (the same author field in the Flickr photo metadata as in PIPA). The dataset contains images spanning 79k users, with many users associated with very few images. Hence, to make the decentralized learning feasible, we prune out users with fewer than 500 images with EXIF data. This results in 317k images from 327 Flickr users. Using the EXIF data, we group images with the same capture dates; examples are visualized in Figure 2. We find user images often cover a wide time span (over 7 years in the displayed examples). Similar to PIPA, we find some users have more consistent themes of photographs (e.g., user ‘dave’ in Figure 2) compared to others.

Per-user Data Splits.   As for PIPA, we consider multiple strategies to split the images across clients per user:

  1. random: Images are randomly distributed across clients;

  2. day: Images are distributed across clients in chunks according to the date metadata;

  3. chrono: We first sort the photos of each user based on their EXIF timestamps. This photo stream is then split in half, with each half placed on a different client. This is the most challenging split; it represents the case where the user has bought a new mobile device.

7.3. Analysis

We hypothesized in Section 5.1 that there exist user-specific patterns in data that can be captured and exploited by adversaries to link users across clients. Before studying and exploiting patterns in the model updates, we measure the amount of such patterns in the raw data themselves. We obtain the following statistics per user based on their images:

  1. intra-user distance: the median image-feature distance between images within each user;

  2. inter-user distance: the median image-feature distance between the user's images and random images in the dataset.

Distances are computed between image features extracted using a MobileNet pretrained on ImageNet. We scatter plot the intra- and inter-user image distances for every user in Figure 3.
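A minimal numpy sketch of these two statistics, assuming `user_feats` and `random_feats` hold MobileNet features (one row per image) and using plain Euclidean distance in place of the paper's (elided) distance metric:

```python
import numpy as np

def pairwise_dists(A, B):
    # Euclidean distances between all rows of A and all rows of B.
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def intra_user_distance(user_feats):
    # Median distance over all pairs of images belonging to the same user.
    d = pairwise_dists(user_feats, user_feats)
    iu = np.triu_indices(len(user_feats), k=1)  # distinct unordered pairs
    return float(np.median(d[iu]))

def inter_user_distance(user_feats, random_feats):
    # Median distance between the user's images and random dataset images.
    return float(np.median(pairwise_dists(user_feats, random_feats)))
```

A user whose intra-user distance is well below their inter-user distance has a distinctive, consistent photo collection, which is exactly what the linkability attack exploits.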

We observe that intra-user distances are much lower than inter-user distances, indicating higher similarity within a user's images than across users. Upon manual inspection, we find that users with low intra-user distance capture images with a consistent semantic theme (e.g., only cars or football matches), while those with high intra-user distance cover a broad range of concepts. Users with low inter-user distance exhibit images close to what a typical user might share (often people-centric), while those with higher distances deviate from this (e.g., a user with only images of insects).

Figure 3. Distinctiveness of users' data. Each point represents distances computed over the images of a single user.

8. Evaluation

In this section, using the datasets presented previously (Section 7), we first evaluate the performance of the collaborative task in the federated learning setup (Section 8.1). We then evaluate the previously discussed identification (Section 6.1) and matching (Section 6.2) attacks under multiple scenarios (Section 6.3): the closed-world scenario in Sections 8.2 and 8.3, followed by the open-world scenario in Section 8.4.

8.1. Collaborative Task

Before evaluating the linkability attacks, we evaluate the decentralized learning performance on the collaborative task of multilabel image classification. We show that our federated learning scheme indeed results in a good image classifier.

Evaluation Metric.   We use Class-mean Average Precision (AP) as the evaluation metric for the collaborative multilabel classification task, as is common for such tasks (Orekondy et al., 2017; Everingham et al., 2010; Lin et al., 2014; Wang et al., 2016). We first plot the Precision-Recall curve for each class and compute the areas under the curves (the per-class Average Precisions). The overall performance is then the mean over all per-class Average Precisions. We compute these AP scores using the scikit-learn (Pedregosa et al., 2011) library. We hold out 20% of the data for both PIPA and OpenImages for evaluating the collaborative task.
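For illustration, the metric can be reimplemented in a few lines of numpy. This simplified sketch assumes distinct scores and mirrors the step-based AP that scikit-learn's `average_precision_score` computes:

```python
import numpy as np

def average_precision(y_true, scores):
    # AP = mean of the precision values at the rank of each positive example,
    # with examples ranked by descending score.
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)                              # true positives at each rank
    precision = tp / np.arange(1, len(y) + 1)      # precision at each rank
    return float(np.sum(precision * y) / max(tp[-1], 1))

def class_mean_ap(Y_true, Y_scores):
    # Class-mean AP: average the per-class APs over all label columns.
    return float(np.mean([average_precision(Y_true[:, c], Y_scores[:, c])
                          for c in range(Y_true.shape[1])]))
```

With tied scores, scikit-learn's thresholded computation can differ slightly from this per-rank version; for the intuition behind the metric, the sketch suffices.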

Implementation Details.   We use the pretrained ImageNet weights to initialize the MobileNet model. Unless stated otherwise, we retrain the final four convolutional layers and the last fully-connected layer, effectively training 1.6M of the 3.2M parameters. We find that this (instead of fine-tuning the entire CNN) is faster and has a negligible effect on the collaborative and adversarial tasks. We optimize using Stochastic Gradient Descent on the binary cross-entropy loss.

We follow the federated learning algorithm in Section 3.4 and Algorithm 1. By default, we use hyperparameter values of 0.1, 1, and 8, which we empirically find result in a good trade-off between convergence and the communication required. We train for 200 epochs on both datasets with a learning rate of 0.01. Since we are memory-constrained when training hundreds of clients in parallel, we execute the client-update step sequentially per round. The clients are stateless and their weights are re-initialized to the global weights each round. Training the MobileNet this way requires 1 day for PIPA and 4 days for OpenImages on an Nvidia Tesla V100 GPU with 16 GB memory. All models are written in Python and implemented using Keras (Chollet et al., 2015) with the Tensorflow (Abadi et al., 2016a) backend.
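A single communication round of this scheme can be sketched as follows. This is a simplified numpy stand-in (a linear least-squares model in place of the CNN; `client_update` and `fedavg_round` are hypothetical names), not the actual Keras implementation:

```python
import numpy as np

def client_update(global_w, X, y, lr=0.01, epochs=1):
    # Local gradient descent on a linear least-squares model,
    # standing in for the client's local CNN training.
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients, frac=0.1, rng=None):
    # One FederatedAveraging round: sample a fraction of clients,
    # run local updates, then average weighted by local data size.
    if rng is None:
        rng = np.random.default_rng(0)
    k = max(1, int(frac * len(clients)))
    chosen = rng.choice(len(clients), size=k, replace=False)
    sizes = np.array([len(clients[i][0]) for i in chosen], dtype=float)
    updates = [client_update(global_w, *clients[i]) for i in chosen]
    return np.average(updates, axis=0, weights=sizes)
```

The key property exploited later in the paper is visible here: each communicated `client_update` result is a function of one client's local data only.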

Evaluation.   We train and evaluate the collaboratively learnt models and baselines across multiple splits (see Section 7) on both datasets. See Table 3 for the summary of APs for multiple models on different datasets and splits.

We compare decentralized learning (“FedAvg”) against baselines including centralized learning (“Centr.”), K-nearest neighbors (“K-NN”), and chance level (“Chance”). For the K-NN baseline, a given test image is predicted the averaged labels of its closest images in the training set, with distances computed on features extracted from a MobileNet pretrained on ImageNet. Chance-level classification means that we predict each label with a constant probability.

We observe from Table 3 that:

  1. the baselines chance level and K-NN have low performance (9.5% and 14.9% AP, respectively, for PIPA-random), demonstrating the non-trivial nature of the task;

  2. decentralized learning (FedAvg) achieves performance comparable to centralized learning (45.1% versus 49.7% AP for PIPA-random).

Figure 4 shows the loss curves across training for different fractions of clients sampled per round. We observe that a moderate fraction provides a good trade-off between computational efficiency and AP performance.

Having shown that our decentralized learning indeed achieves a reasonable performance in the multi-label image classification task, we proceed to the evaluation of linkability attacks.

PIPA OpenImages
split random photoset random day chrono
FedAvg 45.1 37.7 62.9 63.9 62.2
Centr. 49.7 40.7 68.0 69.2 67.8
K-NN 14.9 15.8 09.7 13.5 13.6
Chance 9.5 9.7 6.3 6.3 6.3
Table 3. AP scores for Collaborative Task (Higher is better). FedAvg refers to the Federated Averaging algorithm (Algorithm 1) to train the CNN. Centr. refers to training the same CNN in a centralized manner.
Figure 4. Training loss of FedAvg obtained by varying the fraction of clients sampled each round

8.2. Closed-World Scenario: Identification Attacks

In federated learning, clients communicate multiple model updates based on their local data. To understand how much linkable information exists in those updates within the closed-world scenario (Section 6.3), we now measure the adversary's identification performance using the attack methods discussed in Section 6.1. This section also provides sensitivity analyses with respect to multiple factors.

Evaluation Metrics for Identification Attack.   We use the following metrics (computed using scikit-learn (Pedregosa et al., 2011)) to evaluate the adversary’s identification performance:

  1. Mean Average Precision (AP): The adversary's precision-recall curve for each user is computed on the test-set clients. We then compute the per-user Average Precision (area under the precision-recall curve) and report the mean across users.

  2. Top-1 accuracy: We compute the classification success rates over all updates in the test set.

  3. Top-5 accuracy: We compute the classification success rates, where the prediction is successful if the ground-truth user is among the top 5 predictions.

These metrics are common for classification tasks, e.g., (Orekondy et al., 2017; Everingham et al., 2010; Lin et al., 2014; Wang et al., 2016) for AP and (Krizhevsky et al., 2012; He et al., 2016; Deng et al., 2009) for Top-1/5 accuracy. We use AP as the primary metric, since it also takes into account the predicted probabilities per class. To quantify the adversary's information gain from access to gradient updates, we also report the Increase over Chance metric, computed as (predicted AP)/(chance AP).

Layer-wise Identification Performance.   The adversary's linkability attacks are based on the model updates. In our experiments, the model updates are 3.5M-dimensional vectors covering the 88 layers of the MobileNet model. Layers at various depths of a network are known to learn different concepts (Zeiler and Fergus, 2014) – lower-level features (e.g., corners, edges) at the initial layers and higher-level features (e.g., a wheel, a bird's feet) at the final layers. In this part of the analysis, we analyze the layer-wise amount of user-specific information that can be exploited by the adversary. We restrict the analysis to layers that have trainable parameters – the 27 convolutional layers and the final fully-connected layer.

Figure 5. Identification performance by depth. Bubble sizes indicate the number of parameters in each layer. The last two layers contain 1M and 19K parameters, respectively.

We train and evaluate the adversary MLP model (Section 6.1) using the parameter updates for each trainable layer of the network. The layer-wise identification attack AP scores are presented in Figure 5. We perform this analysis on the PIPA dataset, random split (Section 7.1).

From Figure 5, we observe:

  1. all layers provide above-chance information for the adversary's identification attack;

  2. higher-level layers contain more identifiable information;

  3. for two convolutional layers at similar depths, the layer with more parameters is more informative (e.g., layer 14 vs. 15);

  4. the final Fully Connected (FC) layer updates are the most informative. Hence, going forward, we will only use the parameters of this FC layer to train all attack models.

Identification Attack Performance.   We evaluate the identification attack in the closed-world scenario on both PIPA and OpenImages datasets and all splits (Section 7). The results are given in Table 4 and 6 over PIPA and OpenImages, respectively.

We observe:

  1. All attacks greatly outperform the chance-level performance, with as much as a 175x boost for MLP on the OpenImages dataset under the random split, highlighting the effectiveness of the proposed linkability attack;

  2. Even the simplest K-NN attack is about as effective as the other attacks and already presents a significant threat (150x over random chance on OpenImages, random split);

  3. MLP is the most effective attack across both datasets and all splits (175x over random chance on OpenImages, random split);

  4. Although the absolute AP scores are lower for the more challenging and larger OpenImages dataset (53.7% AP on the random split), the increase over chance-level performance is significantly higher (48x on PIPA vs. 175x on OpenImages under the same split);

  5. The attack is highly effective (106x) even on the challenging OpenImages-chrono split, where the clients of a user hold images from two different time spans. This implies that even if the user switches to a new phone, the adversary can use the gradient updates from the new device to link back to the user of the previous device.

There is strong evidence that model updates do contain ample linkable information.
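As a concrete illustration of how little machinery such an attack needs, here is a numpy sketch of the K-NN identification attack on flattened update vectors; synthetic vectors would stand in for the real FC-layer updates:

```python
import numpy as np

def knn_identify(train_updates, train_users, test_updates, k=5):
    # For each test update, predict the majority user label among its
    # k nearest training updates (Euclidean distance in update space).
    preds = []
    train_users = np.asarray(train_users)
    for u in test_updates:
        d = np.linalg.norm(train_updates - u, axis=1)
        nn_users = train_users[np.argsort(d)[:k]]
        vals, counts = np.unique(nn_users, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

Because per-user updates form tight clusters (see the t-SNE analysis below in the paper), even this parameter-free attack is competitive with the learned MLP and SVM attacks.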

random photoset
AP Top-1 Top-5 AP Top-1 Top-5
MLP 91.0 (48x) 84.7 96.3 42.2 (22x) 40.0 68.8
SVM 81.3 (43x) 89.3 91.9 27.7 (15x) 43.7 49.6
kNN 85.4 (45x) 82.6 92.6 31.5 (17x) 38.4 54.8
Chance 1.9 (1x) 2.0 9.9 1.9 (1x) 2.0 9.9
Table 4. PIPA - Performance of the closed-world identification attack. Numbers in brackets indicate the increase over chance-level classification.
random photoset
AP Top-1 Top-5 AP Top-1 Top-5
MobileNet 60.9 (32x) 73.1 90.7 38.8 (21x) 53.2 76.1
SVM 18.1 (10x) 47.6 50.5 10.9 (6x) 36.8 40.0
kNN 34.5 (18x) 47.2 72.3 18.0 (10x) 32.4 55.0
Chance 1.9 (1x) 1.9 5.8 1.9 (1x) 1.8 5.6
Prior 1.9 (1x) 11.5 38.8 1.9 (1x) 11.2 38.6
Table 5. PIPA - Performance of image-based user classification.

Comparison with Image Classification.   The user-specific patterns in the model updates ultimately result from user-specific patterns in the images themselves. To obtain a reference point for the identification attack results (Tables 4 and 6), we measure the identifiability from the image pixels themselves.

We train and evaluate a user classifier that takes a single raw image as input, using a MobileNet model for this image-to-user classification. We use the identical client splits to assign images to the training and test sets.

Note that we can use the same evaluation scheme as before (AP and Top-1/5 accuracies). From the results in Tables 5 and 7, we observe:

  1. All attack models are reasonably effective at this task (18x chance level in PIPA and 25x in OpenImages) – raw image pixels do indeed contain user-identifiable information;

  2. this task is more difficult than identification based on model updates (71x vs. 175x for identification based on images and model updates, respectively, on the OpenImages-random split).

random day chrono
AP Top-1 Top-5 AP Top-1 Top-5 AP Top-1 Top-5
MLP 53.7 (175x) 51.9 77.9 44.0 (143x) 42.5 68.6 32.5 (106x) 31.9 57.1
SVM 49.0 (159x) 66.5 67.0 38.2 (124x) 56.6 57.4 24.6 (80x) 41.7 42.5
kNN 46.0 (150x) 49.2 63.9 35.6 (116x) 40.8 54.9 25.1 (82x) 30.3 43.1
Chance 0.3 (1x) 0.3 1.5 0.3 (1x) 0.3 1.5 0.3 (1x) 0.3 1.5
Table 6. OpenImages - Performance of the closed-world identification attack.
random day chrono
AP Top-1 Top-5 AP Top-1 Top-5 AP Top-1 Top-5
MobileNet 21.8 (71x) 27.2 43.5 19.5 (64x) 25.3 41.1 16.8 (55x) 22.1 37.1
SVM 3.4 (11x) 16.9 17.9 3.2 (10x) 16.3 17.3 2.8 (9x) 14.7 15.7
kNN 7.6 (25x) 13.7 23.4 6.5 (21x) 12.0 20.5 5.5 (18x) 10.8 18.9
Chance 0.3 (1x) 0.3 1.5 0.3 (1x) 0.3 1.5 0.3 (1x) 0.3 1.5
Prior Probs 0.3 (1x) 1.2 5.3 0.3 (1x) 1.2 5.3 0.3 (1x) 1.2 5.3
Table 7. OpenImages - Performance of image-based user classification.

Effect of the Amount of Training Examples.   In the previous experiments, we analyzed the effectiveness of the identification attack with the adversary's attacks trained on all updates from the training-split clients. Now we study the sensitivity to the number of training examples.

Figure 6. Adversary's MLP identification attack performance w.r.t. the number of examples per user used for training

We train multiple MLP adversary models, each on a random subset of the training data of a different size. Figure 6 plots performance with respect to training-set size across all datasets and splits. We observe that the adversary can mount reasonable attacks based on a small number of updates per user: on OpenImages, the adversary already achieves 21.5x chance-level performance with a single training example per user.

Explaining the High Effectiveness of the Attack.   We have observed in the previous experiments that:

  1. update-based user identification outperforms raw-image-based identification (Tables 4-7);

  2. the identification attack achieves reasonable performance even when only a few training examples are used per user (Figure 6);

  3. even simple baselines (such as K-NN) are comparably effective.

In the following experiments, we analyze what makes the attacks so effective.

Figure 7. t-SNE visualization of model updates (left) and images (right). Colors indicate users.

To show how well the model updates are clustered, we visualize them using the t-SNE (Maaten and Hinton, 2008) algorithm. t-SNE projects high-dimensional data (in our case, over 19k dimensions) onto a 2-dimensional plane while approximately preserving the distance graph, making the data visualization-friendly. We use the implementation in scikit-learn (Pedregosa et al., 2011). As suggested in its documentation, for computational efficiency, we first reduce the dimensionality of the parameters to 50 using PCA and then perform t-SNE to obtain a 2-dimensional embedding of the training examples. Figure 7 (left) shows the t-SNE embeddings of the gradient updates, where each color denotes a unique user. We immediately observe the distinctiveness of the user updates, characterized by coherent clusters of updates per user.
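The PCA-then-t-SNE pipeline described above can be sketched directly with scikit-learn; `embed_updates` is a hypothetical helper name, and the perplexity value here is chosen for small toy inputs rather than taken from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def embed_updates(updates, pca_dim=50, seed=0):
    # Reduce the high-dimensional updates with PCA first (for efficiency,
    # as suggested in the scikit-learn t-SNE documentation), then project
    # to 2-D with t-SNE for visualization.
    pca_dim = min(pca_dim, *updates.shape)
    reduced = PCA(n_components=pca_dim, random_state=seed).fit_transform(updates)
    tsne = TSNE(n_components=2, perplexity=10, init="pca", random_state=seed)
    return tsne.fit_transform(reduced)
```

The 2-D output can then be scatter-plotted with one color per user to reproduce the qualitative picture of Figure 7.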

For comparison, we show the t-SNE embeddings of the images themselves, embedding image features from the ImageNet-pretrained MobileNet. The embedding is visualized in Figure 7 (right). We find that the images exhibit larger variances per user in this embedding space, compared to the model updates.

We conjecture that a model update provides good aggregate statistics of the data on a client, empowering the linkability attacks. This would explain the low variance among the model updates per client (Figure 7).

Figure 8. Effect of aggregation on the image-classification task, for varying sizes of the aggregated image set and different aggregation strategies.

To validate this conjecture, we now perform image set classification: users are classified based on aggregated subsets of their images. An aggregation function reduces a set of examples into a single aggregated representation; for instance, it may compute the mean over the image representations. Using this construction, we can evaluate attacks for various choices of aggregation function and aggregate size.
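A minimal sketch of this construction, using the mean (among other choices) as the aggregation function and a nearest-centroid rule as a stand-in for the trained classifier; the function names are hypothetical:

```python
import numpy as np

def aggregate(feats, how="mean"):
    # The aggregation function: reduce a set of image features (rows)
    # to a single representation vector.
    return {"mean": np.mean, "max": np.max, "median": np.median}[how](feats, axis=0)

def identify_from_set(user_centroids, image_set, how="mean"):
    # Classify an aggregated image set by its nearest user centroid;
    # a stand-in for the trained image-set classifier.
    agg = aggregate(np.asarray(image_set), how)
    users = list(user_centroids)
    d = [np.linalg.norm(agg - user_centroids[u]) for u in users]
    return users[int(np.argmin(d))]
```

As the set size grows, the aggregated representation concentrates around the user's "centroid", which is the intuition behind the accuracy gains shown in Figure 8.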

Figure 8 displays the identification AP score as a function of the aggregate size, across multiple aggregation strategies. Across both datasets, we observe:

  1. classification AP increases with the aggregate size, indicating that aggregation does lead to better classification;

  2. this holds for multiple choices of aggregation function;

  3. the performance eventually plateaus, at a level comparable to the identification attack for PIPA;

  4. the aggregation strategy is even more effective for the OpenImages dataset, resulting in even better user prediction than the update-based attack.

From these observations we conclude that the aggregate statistics of a user's data are what make the identification attacks highly effective, compared to performing similar attacks on individual user images; this validates our conjecture.

Influence of the Update Round.   Recall that in the FederatedAveraging algorithm (Algorithm 1) used for decentralized training, optimization occurs over multiple rounds. We now study the sensitivity of the attack's efficacy to the update round, by training and evaluating the attack at different rounds.

Figure 9. Effect of the epoch on the identification attack. Each cell denotes an MLP trained on updates from one epoch range and evaluated on updates from another.

For these experiments, we group the updates (separately for train and test) based on the epoch ranges during which they were generated. The collaborative task is trained for 200 epochs, which are split into 10 ranges of 20 epochs each. We train and evaluate the same MLP attack model over all train-eval pairs. The resulting performance matrices are presented in Figure 9 for PIPA and OpenImages. We observe that the round at which an update was generated has little influence on the performance. Thus, the adversary does not require knowledge of the model update rounds to achieve the most effective attacks.

8.3. Closed-World Scenario: Matching Attacks

We now study the matching attack (Section 6): given a pair of updates, were they generated by the same user? We always consider pairs of updates arising from different clients.

Table 8 presents the AP performance on this task. We find that the adversary can match users very well. On PIPA, the MLP-based attack achieves nearly perfect (99.5% AP) matching on the random split. Across all datasets and splits considered, matching is highly successful, with AP never below the 91.2% achieved on the PIPA-photoset split. There is ample, generalizable user-specific information in the gradient updates that enables the adversary to match them across clients.

PIPA OpenImages
random photoset random day chrono
MLP 99.5 91.2 98.2 97.2 94.8
Chance 49.1 49.4 50.8 48.3 49.0
Table 8. AP performance on closed-world matching

8.4. Open-World Scenario

In the previous sections, we studied the identification (Section 8.2) and matching attacks (Section 8.3) in the closed world, where updates from all users were available during both training and evaluation. In this section, we consider the open-world scenario (Section 6.3), where at test time the adversary encounters a new set of users that were unseen during training. This is a more realistic and challenging setup for the adversary.

User Split.   We split the users and their updates into three sets: (a) unseen users: none of their updates from any client are available during training; (b) seen users: updates from only a single client of each user are available during training, with the other client reserved for evaluation; (c) held-out users: updates from both clients of these users are reserved purely for training.

Identification.   In the closed-world scenario, we trained a classifier whose classes represent all users seen at test time. In the open-world scenario, we instead perform classification over the seen users plus an additional class ukn that collectively denotes unseen users. During training, we use the held-out users and their updates to train the ukn class. We train an MLP classifier as described in Section 6, with the same hyper-parameters, but with a varying number of output classes depending on the number of seen users.
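The relabeling step underlying this setup can be sketched as follows; the ukn label follows the text, and `open_world_labels` is a hypothetical helper name:

```python
def open_world_labels(users, seen_users):
    """Relabel training users for the open-world identification attack:
    updates from users outside the seen set train a collective 'ukn' class."""
    seen = set(seen_users)
    return [u if u in seen else "ukn" for u in users]
```

The classifier then has one output per seen user plus the single ukn output, so its output space grows with the fraction of seen users.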

Matching.   To perform the matching attack in the closed-world scenario, we previously (Section 8.3) used a classifier to classify each update of the pair individually. However, since both test-time updates could be generated by different unseen users, this is no longer possible in the open-world scenario. Hence, we train a Siamese network (discussed in Section 6) to perform the matching task, which does not require user labels at test time. The network is trained on updates from the held-out and seen sets of clients. Given a pair of updates, the network is trained to predict the probability that they were generated by the same user.
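As a rough stand-in for the Siamese network, the same idea (predicting same-user from a symmetric function of the pair) can be sketched with a logistic regression over the element-wise absolute difference of two updates. This is a simplification of the paper's learned Siamese architecture, with hypothetical helper names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_pairs(updates, users, n_pairs=500, seed=0):
    # Build (|u_i - u_j|, same-user?) training examples from labeled updates.
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for _ in range(n_pairs):
        i, j = rng.integers(len(updates), size=2)
        feats.append(np.abs(updates[i] - updates[j]))
        labels.append(int(users[i] == users[j]))
    return np.array(feats), np.array(labels)

def train_matcher(updates, users):
    # Fit the pair classifier; at test time it needs no user labels,
    # only the two update vectors.
    X, y = make_pairs(updates, users)
    return LogisticRegression(max_iter=1000).fit(X, y)
```

Like the Siamese network, the trained matcher generalizes to pairs from users never seen during training, since it scores pair similarity rather than user identity.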

Figure 10. Open-world evaluation across both identification and matching attack models.

Evaluation.   The performances are evaluated at different ratios of seen to unseen users at test time. We keep the size of the held-out set constant at one-third of the total number of users. The evaluation for both identification and matching is presented in Figure 10. We observe:

  1. even in the open-world scenario, both attacks perform well above chance level (3x-14x for identification and 1.5x-1.8x for matching in PIPA), consistently across a wide range of seen vs. unseen ratios;

  2. for the identification attack, although we notice a slight drop in absolute AP (67% to 43% in PIPA) as the fraction of seen users increases, the performance relative to chance level increases significantly (3x to 14x) due to the growing output space of the attack;

  3. in the matching task, the Siamese model performs well above chance level even in a purely open-world setting with no seen users (1.5x for PIPA and 1.8x for OpenImages).

Even in the presence of unseen users at test time, our identification and matching attacks are robust and generalizable.

9. Mitigation Strategies

In the previous section, we have evaluated our threat models across closed- and open-world scenarios. We have observed the high effectiveness of linkability attacks. In this section, we present some mitigation strategies to counter these attacks.

The effectiveness of the proposed adversary can be attributed to the distinctiveness of the model updates per user (Section 8.2). The focus of our mitigation strategies is to reduce such distinctiveness in model updates, while satisfying the conditions that they should:

  1. not decrease utility (decentralized learning performance) too much,

  2. involve low computation overhead,

  3. not rely on a trusted third party, and

  4. allow users to selectively employ the strategy to various extents depending on personal preferences.

We spell out the assumptions for our mitigation strategies. Together with the assumptions for the collaborative learning (Section 5.2), we assume that the mitigation strategies are

  1. client-sided,

  2. used by all clients throughout the training,

  3. aimed to prevent linkability attacks in Section 6,

  4. uniformly enforced by all the clients,

  5. occurring during adversary’s observation and testing phases.

9.1. Method

Based on the requirements and assumptions, we propose data-centric mitigation strategies: clients shift their data distributions, rather than transforming the model updates. More specifically, clients mix their original data with certain “background” data to “blend into the crowd”, thereby rendering the model updates less user-specific. The mixing takes place before the adversary’s training process. We will explain the strategies in greater detail, and discuss how they address the requirements.

Collecting the Background Dataset.   The background dataset can be any labeled set of training examples for the same decentralized learning task (e.g., a trusted open-source dataset or a user-annotated dataset). To create the background dataset, we draw on the entire original OpenImages dataset (Section 7.2) of 1.7M examples, since it is the largest available dataset for our multilabel classification task. This is a good starting point, since these images represent a random selection of Flickr images, which is also the image source for both our datasets. Out of these, we select a random sample of 59k images to be used with PIPA and 490k images to be used with OpenImages, reflecting roughly 2x the amount of training data.

Figure 11. Mitigation strategies against closed-world identification MLP attack. Top-left is the ideal region.

We introduce four mitigation strategies in the following.

Gaussian Noise (noise).   As a baseline, each client adds zero-centered Gaussian noise to its model updates before communicating them. Note that the noised updates are no longer true gradients, so it is hard to predict the optimization results.

Data Replacement (bkg-repl).   Each client replaces a fraction of its data with examples from the background dataset. At a fraction of 0, no mitigation takes place; at a fraction of 1, every user has an identical data composition.

Data Augmentation (rand-aug).   Instead of replacing, each client augments its data with examples from the background dataset (since more data helps (Sun et al., 2017; Halevy et al., 2009)). An augmentation factor determines the amount of background data added. As this factor grows, the clients' data distributions converge to that of the background dataset, making them indistinguishable from each other.

Mode-specific Data Augmentation (mm-aug).   So far, the clients' strategy was to mix their data with background data from a single source. We now consider a strategy where each client mixes in data from a different source. For instance, suppose Bob has photographs of football matches on both his mobile and his tablet, providing user-specific information. mm-aug then adds ‘car’ images to the mobile and ‘flower’ images to the tablet. Note the difference from the previous rand-aug strategy, where Bob adds random images (car, wine, dog, etc.) on both devices.

We perform this by first clustering the background dataset into clusters, using k-means over ImageNet-pretrained MobileNet features. Each client picks a cluster at random and augments its data with examples from that cluster; as in rand-aug, a mixing factor controls the degree of augmentation. We use 100 clusters for PIPA and 500 for OpenImages.
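The four strategies can be sketched as follows. `data`, `background`, and `clusters` are lists of image identifiers, and the function names are hypothetical sketches of the strategies described above:

```python
import numpy as np

def noise(update, sigma, rng):
    # Baseline: add zero-mean Gaussian noise to the model update vector.
    return update + rng.normal(scale=sigma, size=update.shape)

def bkg_repl(data, background, frac, rng):
    # Replace a fraction `frac` of the client's data with background examples.
    n_repl = int(frac * len(data))
    keep = rng.choice(len(data), size=len(data) - n_repl, replace=False)
    return [data[i] for i in keep] + list(rng.choice(background, size=n_repl, replace=False))

def rand_aug(data, background, factor, rng):
    # Augment with random background examples; `factor` scales the amount
    # of added data relative to the client's own data.
    n_aug = int(factor * len(data))
    return list(data) + list(rng.choice(background, size=n_aug, replace=False))

def mm_aug(data, clusters, factor, rng):
    # Mode-specific: augment with examples drawn from one randomly
    # chosen background cluster (k-means clusters in the paper).
    cluster = clusters[rng.integers(len(clusters))]
    n_aug = int(factor * len(data))
    return list(data) + list(rng.choice(cluster, size=n_aug, replace=True))
```

Only `noise` operates on the communicated update; the other three change the client's local training data before any update is computed, which is what preserves true gradients.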

9.2. Evaluation

We evaluate the proposed mitigation strategies by measuring the adversary’s performance against our countermeasures. For simplicity, we constrain the analysis to the closed-world identification attack on the random splits, where the adversary’s performance is strongest (Section 8.2).

We evaluate the strategies in terms of the trade-off between privacy (reduction in the adversary's performance) and utility (decentralized learning performance). As in Section 8.2, we measure the adversary's performance as the increase over chance-level AP. We measure utility by the collaborative multilabel classification AP, normalized so that utility = 1 when no mitigation takes place.

For each mitigation strategy, multiple hyperparameters are considered. For noise, we consider zero-mean Gaussian noise at several standard deviations. For bkg-repl, we use replacement fractions {0.0, 0.25, 0.50, 0.75, 1.0}. For rand-aug and mm-aug, we use augmentation factors {0.0, 0.5, 1.0, 2.0}.

We present the evaluation of our strategies in Figure 11. Better mitigation strategies have curves towards the top-left corner of each plot (high privacy, high utility). We observe:

  1. the noise baseline decreases utility severely for a small gain in privacy;

  2. replacing data with background samples (bkg-repl) is a better alternative: it achieves both higher privacy and higher utility than noise. However, when the background dataset contains a domain shift or examples non-representative of the task, utility suffers; this can be observed on PIPA, where bkg-repl achieves 0.75x utility since the user data is no longer used;

  3. the augmentation-based strategies rand-aug and mm-aug outperform noise and bkg-repl in terms of both utility and privacy;

  4. for the mm-aug strategy, already at a small augmentation factor we observe a good combination of privacy and utility (a 75% decrease in the adversary's AP on OpenImages, compared to 45% for rand-aug and 67% for bkg-repl).

We find that mm-aug offers the most effective and practical operating points, requiring the user to perform minimal augmentation to achieve reasonable privacy. We remark that the utility of mm-aug can exceed 1 even at higher privacy levels; this is due to the effect of additional training data (Halevy et al., 2009; Sun et al., 2017). The increased privacy and utility come at the cost of preparing a labeled dataset and increased training time (the training set grows). However, this overhead will become less costly with increasingly powerful devices and energy-efficient ML models for mobile devices (Howard et al., 2017; Sandler et al., 2018).

10. Conclusion

We take first steps towards addressing linkability attacks in the decentralized training setup. We have shown that our attack models can exploit user-specific data patterns in the communicated model updates to link two clients of the same user. The established links open the way to further threats on user privacy. To mitigate such attacks, we proposed calibrated domain-specific data augmentation, which shows promising results in achieving privacy with minimal impact on utility.

This research was supported in part by the German Research Foundation (DFG CRC 1223). We would like to thank Yang Zhang and Kathrin Grosse for helpful feedback and discussions.


  • Abadi et al. (2016a) Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016a. TensorFlow: A System for Large-Scale Machine Learning.. In Symposium on Operating Systems Design and Implementation (OSDI).
  • Abadi et al. (2016b) Martín Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016b. Deep learning with differential privacy. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS).
  • Abu-El-Haija et al. (2016) Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. 2016. Youtube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016).
  • Almishari and Tsudik (2012) Mishari Almishari and Gene Tsudik. 2012. Exploring linkability of user reviews. In European Symposium on Research in Computer Security. Springer, 307–324.
  • Anderson (2015) Monica Anderson. 2015. Technology device ownership, 2015. Pew Research Center.
  • Backes et al. (2016) Michael Backes, Pascal Berrang, Anna Hecksteden, Mathias Humbert, Andreas Keller, and Tim Meyer. 2016. Privacy in Epigenetics: Temporal Linkability of MicroRNA Expression Profiles.. In USENIX Security Symposium. 1223–1240.
  • Bonawitz et al. (2017a) Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017a. Practical Secure Aggregation for Privacy Preserving Machine Learning. Cryptology ePrint Archive, Report 2017/281. (2017). https://eprint.iacr.org/2017/281.
  • Bonawitz et al. (2017b) Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017b. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, 1175–1191.
  • Bost et al. (2015) Raphael Bost, Raluca Ada Popa, Stephen Tu, and Shafi Goldwasser. 2015. Machine learning classification over encrypted data.. In The Network and Distributed System Security Symposium (NDSS), Vol. 4324. 4325.
  • Bromley et al. (1994) Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1994. Signature verification using a "siamese" time delay neural network. In Advances in Neural Information Processing Systems (NIPS). 737–744.
  • Cecaj et al. (2016) Alket Cecaj, Marco Mamei, and Franco Zambonelli. 2016. Re-identification and information fusion between anonymized CDR and social network data. Journal of Ambient Intelligence and Humanized Computing 7, 1 (2016), 83–96.
  • Chollet et al. (2015) François Chollet et al. 2015. Keras. https://keras.io. (2015).
  • Chopra et al. (2005) Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Dang et al. (2017) Hung Dang, Yue Huang, and Ee-Chien Chang. 2017. Evading classifiers by morphing in the dark. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, 119–133.
  • Dean et al. (2012) Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. 2012. Large scale distributed deep networks. In Advances in Neural Information Processing Systems (NIPS). 1223–1231.
  • Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Dwork (2006) Cynthia Dwork. 2006. Differential Privacy. In International Colloquium on Automata, Languages and Programming (ICALP).
  • Dwork (2008) Cynthia Dwork. 2008. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation. Springer, 1–19.
  • Everingham et al. (2010) Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. 2010. The pascal visual object classes (voc) challenge. International journal of computer vision (IJCV) 88, 2 (2010), 303–338.
  • Fercoq et al. (2014) Olivier Fercoq, Zheng Qu, Peter Richtárik, and Martin Takáč. 2014. Fast distributed coordinate descent for non-strongly convex losses. In International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 1–6.
  • Fredrikson et al. (2015) Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. 2015. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, 1322–1333.
  • Geyer et al. (2017) Robin C Geyer, Tassilo Klein, and Moin Nabi. 2017. Differentially Private Federated Learning: A Client Level Perspective. In NIPS Workshop on Private Multi-Party Machine Learning.
  • Gilad-Bachrach et al. (2016) Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, and John Wernsing. 2016. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In International Conference on Machine Learning. 201–210.
  • Goga et al. (2013) Oana Goga, Howard Lei, Sree Hari Krishnan Parthasarathi, Gerald Friedland, Robin Sommer, and Renata Teixeira. 2013. Exploiting innocuous activity for correlating users across sites. In Proceedings of the 22nd international conference on World Wide Web (WWW). ACM, 447–458.
  • Goga et al. (2015) Oana Goga, Patrick Loiseau, Robin Sommer, Renata Teixeira, and Krishna P Gummadi. 2015. On the reliability of profile matching across large online social networks. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1799–1808.
  • Graepel et al. (2012) Thore Graepel, Kristin Lauter, and Michael Naehrig. 2012. ML confidential: Machine learning on encrypted data. In International Conference on Information Security and Cryptology. Springer, 1–21.
  • Halevy et al. (2009) Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The unreasonable effectiveness of data. IEEE Intelligent Systems 24, 2 (2009), 8–12.
  • He et al. (2017) Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. 2017. Mask r-cnn. In International Conference on Computer Vision (ICCV). IEEE, 2980–2988.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Hitaj et al. (2017) Briland Hitaj, Giuseppe Ateniese, and Fernando Perez-Cruz. 2017. Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS).
  • Howard et al. (2017) Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).
  • Huang et al. (2017) Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, et al. 2017. Speed/accuracy trade-offs for modern convolutional object detectors. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Internet (2018) Pew Internet. 2018. Mobile Fact Sheet. http://www.pewinternet.org/fact-sheet/mobile/. (2018). Accessed: 2018-05-04.
  • Iofciu et al. (2011) Tereza Iofciu, Peter Fankhauser, Fabian Abel, and Kerstin Bischoff. 2011. Identifying Users Across Social Tagging Systems.. In International AAAI Conference on Web and Social Media (ICWSM).
  • Konečnỳ et al. (2016) Jakub Konečnỳ, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. 2016. Federated learning: Strategies for improving communication efficiency. In NIPS Workshop on Private Multi-Party Machine Learning.
  • Korula and Lattanzi (2014) Nitish Korula and Silvio Lattanzi. 2014. An efficient reconciliation algorithm for social networks. Proceedings of the VLDB Endowment 7, 5 (2014), 377–388.
  • Krasin et al. (2017) Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Ferrari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Shahab Kamali, Matteo Malloci, Jordi Pont-Tuset, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. 2017. OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://storage.googleapis.com/openimages/web/index.html (2017).
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS).
  • Labitzke et al. (2011) Sebastian Labitzke, Irina Taranu, and Hannes Hartenstein. 2011. What your friends tell others about you: Low cost linkability of social network profiles. In Proc. 5th International ACM Workshop on Social Network Mining and Analysis, San Diego, CA, USA. 1065–1070.
  • Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision (ECCV). Springer, 740–755.
  • Long et al. (2015) Jonathan Long, Evan Shelhamer, and Trevor Darrell. 2015. Fully convolutional networks for semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Ma et al. (2015) Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I Jordan, Peter Richtárik, and Martin Takáč. 2015. Adding vs. averaging in distributed primal-dual optimization. In International Conference on Machine Learning (ICML).
  • Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research (JMLR) 9, Nov (2008), 2579–2605.
  • Masi et al. (2016) Iacopo Masi, Stephen Rawls, Gérard Medioni, and Prem Natarajan. 2016. Pose-aware face recognition in the wild. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • McDonald et al. (2010) Ryan McDonald, Keith Hall, and Gideon Mann. 2010. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. 456–464.
  • McMahan and Ramage (2017) Brendan McMahan and Daniel Ramage. 2017. Federated Learning: Collaborative Machine Learning without Centralized Training Data. https://research.googleblog.com/2017/04/federated-learning-collaborative.html. (2017). Accessed January 21, 2018.
  • McMahan et al. (2017) H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. In International Conference on Artificial Intelligence and Statistics (AISTATS).
  • McMahan et al. (2018) H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang. 2018. Learning Differentially Private Recurrent Language Models. In International Conference on Learning Representations (ICLR).
  • Northern and Nelson (2011) Carlton T Northern and Michael L Nelson. 2011. An unsupervised approach to discovering and disambiguating social media profiles. In Proceedings of Mining Data Semantics Workshop.
  • Oh et al. (2018) Seong Joon Oh, Max Augustin, Bernt Schiele, and Mario Fritz. 2018. Towards Reverse-Engineering Black-Box Neural Networks. In International Conference on Learning Representations (ICLR).
  • Oh et al. (2016) Seong Joon Oh, Rodrigo Benenson, Mario Fritz, and Bernt Schiele. 2016. Faceless person recognition: Privacy implications in social media. In European Conference on Computer Vision (ECCV). Springer, 19–35.
  • Oh et al. (2017) Seong Joon Oh, Mario Fritz, and Bernt Schiele. 2017. Adversarial Image Perturbation for Privacy Protection – A Game Theory Perspective. In International Conference on Computer Vision (ICCV).
  • Orekondy et al. (2018) Tribhuvanesh Orekondy, Mario Fritz, and Bernt Schiele. 2018. Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Orekondy et al. (2017) Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. 2017. Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images. In International Conference on Computer Vision (ICCV).
  • Papernot et al. (2017) Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, and Kunal Talwar. 2017. Semi-supervised knowledge transfer for deep learning from private training data. In International Conference on Learning Representations (ICLR).
  • Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research (JMLR) 12 (2011), 2825–2830.
  • Perito et al. (2011) Daniele Perito, Claude Castelluccia, Mohamed Ali Kaafar, and Pere Manils. 2011. How unique and traceable are usernames?. In International Symposium on Privacy Enhancing Technologies Symposium. Springer, 1–17.
  • Povey et al. (2014) Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. 2014. Parallel training of deep neural networks with natural gradient and parameter averaging. In ICLR Workshop track.
  • Pyrgelis et al. (2018) Apostolos Pyrgelis, Carmela Troncoso, and Emiliano De Cristofaro. 2018. Knock Knock, Who’s There? Membership Inference on Aggregate Location Data. In The Network and Distributed System Security Symposium (NDSS).
  • Redmon and Farhadi (2017) Joseph Redmon and Ali Farhadi. 2017. YOLO9000: better, faster, stronger. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS).
  • Sandler et al. (2018) Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation. arXiv preprint arXiv:1801.04381 (2018).
  • Shen et al. (2017) Zhiqiang Shen, Zhuang Liu, Jianguo Li, Yu-Gang Jiang, Yurong Chen, and Xiangyang Xue. 2017. Dsod: Learning deeply supervised object detectors from scratch. In The IEEE International Conference on Computer Vision (ICCV).
  • Shetty et al. (2017) Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2017. Author Attribute Anonymity by Adversarial Training of Neural Machine Translation. arXiv preprint arXiv:1711.01921 (2017).
  • Shokri and Shmatikov (2015) Reza Shokri and Vitaly Shmatikov. 2015. Privacy-preserving deep learning. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS).
  • Shokri et al. (2017) Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In Security and Privacy (SP).
  • Smith et al. (2017) Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. 2017. Federated Multi-Task Learning. In Advances in Neural Information Processing Systems (NIPS). 4427–4437.
  • Song et al. (2017) Congzheng Song, Thomas Ristenpart, and Vitaly Shmatikov. 2017. Machine Learning Models that Remember Too Much. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, 587–601.
  • Sun et al. (2017) Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision (ICCV).
  • Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, et al. 2015. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Team (2017) Apple Differential Privacy Team. 2017. Learning with Privacy at Scale. https://machinelearning.apple.com/2017/12/06/learning-with-privacy-at-scale.html. (2017). Accessed January 21, 2018.
  • Veit et al. (2017) Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge Belongie. 2017. Learning from noisy large-scale datasets with minimal supervision. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Wang et al. (2016) Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. 2016. Cnn-rnn: A unified framework for multi-label image classification. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
  • Weinberger et al. (2006) Kilian Q Weinberger, John Blitzer, and Lawrence K Saul. 2006. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems (NIPS). 1473–1480.
  • Xing et al. (2003) Eric P Xing, Michael I Jordan, Stuart J Russell, and Andrew Y Ng. 2003. Distance metric learning with application to clustering with side-information. In Advances in Neural Information Processing Systems (NIPS). 521–528.
  • Yang (2013) Tianbao Yang. 2013. Trading computation for communication: Distributed stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems (NIPS). 629–637.
  • Yao (1986) Andrew Chi-Chih Yao. 1986. How to generate and exchange secrets. In Foundations of Computer Science, 1986., 27th Annual Symposium on. IEEE, 162–167.
  • Yonetani et al. (2017) Ryo Yonetani, Vishnu Naresh Boddeti, Kris M Kitani, and Yoichi Sato. 2017. Privacy-Preserving Visual Learning Using Doubly Permuted Homomorphic Encryption. In International Conference on Computer Vision (ICCV).
  • You et al. (2011) Gae-won You, Seung-won Hwang, Zaiqing Nie, and Ji-Rong Wen. 2011. Socialsearch: enhancing entity search with social network matching. In Proceedings of the 14th International Conference on Extending Database Technology. ACM, 515–519.
  • You et al. (2017) Yang You, Zhao Zhang, C Hsieh, James Demmel, and Kurt Keutzer. 2017. ImageNet training in minutes. CoRR, abs/1709.05011 (2017).
  • Zeiler and Fergus (2014) Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision (ECCV). Springer, 818–833.
  • Zhang et al. (2015) Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. 2015. Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Zheng et al. (2017) Liang Zheng, Hengheng Zhang, Shaoyan Sun, Manmohan Chandraker, and Qi Tian. 2017. Person re-identification in the wild. In Conference on Computer Vision and Pattern Recognition (CVPR).


Appendix A Model Architectures

(a) MLP model for Identification Attack (Section 6.1)
(b) Siamese model for Matching Attack (Section 6.2)
Figure 12. Architectures of linkability attack models discussed in Section 6. Dotted lines indicate shared layers.
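As a minimal sketch of the Siamese design in Figure 12(b), the snippet below applies one shared embedding layer to two flattened model-update vectors and scores their elementwise absolute difference with a logistic output. The layer sizes and all weights are random placeholders for illustration, not the trained attack model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared (Siamese) embedding branch: a single ReLU layer whose weights
# W, b are applied to BOTH inputs, mirroring the shared layers (dotted
# lines) in Figure 12. Placeholder dimensions; trained in the real attack.
DIM_IN, DIM_EMB = 64, 32
W = 0.1 * rng.standard_normal((DIM_IN, DIM_EMB))
b = np.zeros(DIM_EMB)

def embed(update):
    """Embed a flattened model-update vector with the shared branch."""
    return np.maximum(update @ W + b, 0.0)

def match_score(update_a, update_b, w_out=np.ones(DIM_EMB), b_out=0.0):
    """Logistic score on the elementwise |difference| of the embeddings.

    With trained weights, the score separates same-user pairs from
    different-user pairs of model updates.
    """
    d = np.abs(embed(update_a) - embed(update_b))
    return float(1.0 / (1.0 + np.exp(-(d @ w_out + b_out))))
```

Note that for identical inputs the difference vector is zero, so the untrained score reduces to the sigmoid of the output bias; the decision boundary only becomes meaningful after training on same-user and different-user update pairs.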