Enabling Efficient Privacy-Assured Outlier Detection over Encrypted Incremental Datasets
Abstract
Outlier detection is widely used in practice to track anomalies in incremental datasets such as network traffic and system logs. However, these datasets often involve sensitive information, and sharing the data with third parties for anomaly detection raises privacy concerns. In this paper, we present a privacy-preserving outlier detection protocol (PPOD) for incremental datasets. The protocol decomposes the outlier detection algorithm into several phases and identifies the necessary cryptographic operations in each phase. It realises several cryptographic modules via efficient and interchangeable protocols to support these cryptographic operations and composes them into the overall protocol to enable outlier detection over encrypted datasets. To support efficient updates, it integrates the sliding window model to periodically evict expired data so as to maintain a constant update time. We build a prototype of PPOD and systematically evaluate the cryptographic modules and the overall protocol under various parameter settings. Our results show that PPOD can handle encrypted incremental datasets with moderate computation and communication costs.
I. Introduction
The increasing demands for local and global secrecy and for private computation inside a cloud environment reveal essential requirements for technologies attaining a high level of security. In the past few years, advances in technologies related to the Internet of Things (IoT) have led to a boom in a broad spectrum of areas such as cloud computing, data mining, and information security. In particular, cloud service providers aim to remove the burden of data management by offering cost-efficient data mining services. Hence, it is quite natural for both individuals and organisations to outsource their data to a cloud server and allow this entity to process the data and run different data mining algorithms on the user's behalf. However, storing and processing sensitive data on untrusted cloud servers may raise serious privacy and security concerns for time-series data in IoT applications.
One of the significant data processing tasks in IoT applications is anomaly detection (outlier detection).¹ Anomaly detection is the process of finding unusual patterns in data, and it has many applications [1], including intrusion detection [2] and fraud detection [3]. In the context of IoT devices, anomaly detection can be used to remotely detect the malicious behaviours of IoT sensors compromised by attackers [4]. The generated data is incremental/temporal (time-series data), and the volume of data to be analysed is effectively large and unbounded [1, 5]. Hence, an anomaly detection algorithm in this setting should be efficient in terms of computational cost and effective in terms of detection accuracy. While encryption can be used to address data privacy issues, it prevents the server from mining/processing the encrypted data. In this paper, we propose a new mechanism, "Privacy-Preserving Outlier Detection (PPOD)", that addresses the problem of mining encrypted data efficiently and effectively. Moreover, in order to make the process of anomaly detection more effective, we consider the temporal relationships in time-series by leveraging ideas from autoregressive forecasting models in the context of uni/multivariate time-series. These models can detect deviation-based anomalies by considering the temporal relationships of measurements in time-series [6]. Privacy-preserving anomaly detection is a significant area of research, and none of the state-of-the-art techniques has addressed it in the presence of temporal data.

¹We use these two terms interchangeably in this paper.
Our system architecture comprises a user (gateway) and two honest-but-non-colluding servers in charge of performing secure outlier detection (see Fig. 2). A PPOD scheme for an incremental dataset contains four algorithms: (1) Data Preprocessing: generate an encrypted incremental dataset ED and distribute shares of the received data points to the servers. (2) Initialisation: apply secure multi-party computation to model the outliers of ED; this phase outputs the initial list of distances. (3) Query: run the data mining algorithm on the servers and detect anomalies associated with ED in a privacy-assured manner. (4) Update: take newly arrived data points into account, compute their distances, and decide whether they are anomalies or not. The specific contributions of this work are as follows:

We design a PPOD scheme based on well-known cryptographic protocols/primitives, such as additive secret sharing and Yao's garbled circuits, and on efficient data mining anomaly detectors such as kNN, suitable for current IoT cloud services. We also prove that our PPOD scheme is secure with a given leakage function in a hybrid model, where the parties are given access to a trusted party computing the ideal functionality of oblivious transfer (OT) [7].

We implement this construction using computer simulations and analyse its accuracy and efficiency on incremental datasets under different system parameters. Our evaluations on a real-world dataset with 16-dimensional data points show that PPOD has practical performance: it can answer outlier queries within milliseconds and takes seconds to update the outlier model after receiving a new data point.
Organisation. The rest of this paper is structured as follows. We discuss related work in Section II. In Section III, we introduce the background knowledge of distance-based outlier detection algorithms and the required cryptographic primitives. Then, we describe the system overview and its threat model in Section IV. A detailed construction of the cryptographic modules and protocols is presented in Section V. In Section VI, we briefly discuss the security of PPOD. Next, we describe our prototype implementation and evaluation results in Section VII. We conclude in Section VIII.
II. Related Work
Privacy-preserving outlier detection. The research in privacy-preserving outlier detection has two main streams, i.e., differential-privacy-based approaches [10, 11, 12] and cryptography-based (secure-computation-based) approaches [13, 14, 15]. The differential-privacy-based approaches rely on data perturbation techniques that add noise to protect each party's inputs [10]. To address the collusion issue in [10], the Random Multiparty Perturbation (RMP) technique [12] was proposed to allow each party to use a unique and different perturbation matrix to randomise its data. A recent differential-privacy-based work [11] leverages a relaxed version of differential privacy to process data streams. However, the differential-privacy-based approaches lead to an accuracy loss in practice, while our PPOD does not degrade the accuracy compared to the outlier detection algorithm for unencrypted datasets. The secure-computation-based approaches are devised via Yao's garbled circuits [15], homomorphic encryption [13], and hybrid approaches like that in [14]. Note that the above approaches are designed for the multi-party setting, i.e., each party has its own private input, which is not suitable for the application scenario of this paper (outsourced outlier detection).
Distance-based outlier detection for incremental datasets (data streams). A large number of outlier detection algorithms (e.g., [8, 16, 9]) have been proposed to support efficient outlier detection over incremental datasets (or data streams). However, all of the above algorithms can only process data in unencrypted form. Furthermore, these algorithms involve range queries, against whose encrypted versions multiple dedicated attacks have recently been demonstrated [17, 18].
III. Preliminaries
III-A. Distance-based Outlier Detection
We briefly review the formal definitions of distancebased outlier detection. A more detailed introduction can be found in [19].
Distance-based outlier detection aims to detect abnormal data points (a.k.a. outliers) via a distance measure between the target point and the other points in a given dataset. In particular, a neighbour of an n-dimensional data point in the distance-based approach is defined as follows.
Definition 1 (Neighbour).
Given a distance threshold r, a data point p is a neighbour of the target point q if the distance between them is not greater than r, i.e., dist(p, q) ≤ r, where dist(·, ·) is a distance measurement function.
In the distance-based outlier detection approach, normal data points are assumed to have a dense neighbourhood, while outliers are far apart from their neighbours (i.e., have a sparse neighbourhood). Therefore, the distance-based approach utilises the number of neighbours to detect outliers in a dataset.
Definition 2 (Distance-based Outlier).
Given a dataset D and a positive integer count threshold k, a data point is a distance-based outlier in D if it has fewer than k neighbours. Otherwise, it is called an inlier.
Fig. 1 depicts a scenario where the distance threshold r is fixed. According to the above definition, a point p is an outlier if there are fewer than k points within distance r from p (excluding p itself). Thus, in the example, the isolated point is an outlier while the point with a dense neighbourhood is an inlier.
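For reference, the plaintext form of Definition 2 can be sketched as follows (a minimal illustration, not the secure protocol; the points, r, and k are made-up example values):

```python
def dist(p, q):
    """Squared Euclidean distance, the metric used throughout the paper."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def is_outlier(target, dataset, r, k):
    """A point is a distance-based outlier if it has fewer than k
    neighbours within distance r (excluding the point itself)."""
    neighbours = sum(
        1 for p in dataset
        if p is not target and dist(p, target) <= r
    )
    return neighbours < k

# Toy example: a dense cluster near the origin and one isolated point.
points = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10)]
r, k = 4, 2  # squared-distance threshold and count threshold
outliers = [p for p in points if is_outlier(p, points, r, k)]
# outliers == [(10, 10)]
```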
III-B. Outlier Detection for Incremental Datasets
Distance-based outlier detection can be exploited to detect outliers in an incremental dataset too, where the dataset is continuously updated with newly arriving data points. In this work, we adopt the so-called count-based window model as in previous works [8, 9].
Definition 3 (Count-based Sliding Window).
Given a window size W and a slide size S, each window has a starting count c_start and an ending count c_end. The window 'slides' periodically after receiving S new data points, causing c_start and c_end to increase by S.
In this sliding window model, each data point p is associated with a counting number c(p). A data point is active if its counting number falls within the current window, i.e., c_start ≤ c(p) ≤ c_end.
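The count-based window of Definition 3 can be sketched as follows (a hypothetical helper class, assuming counting starts at 1 and W is a multiple of S):

```python
from collections import deque

class CountBasedWindow:
    """Count-based sliding window: keeps the W most recent points and
    slides by S counts after every S arrivals (illustrative sketch)."""
    def __init__(self, W, S):
        assert W % S == 0, "W is assumed to be a multiple of S"
        self.W, self.S = W, S
        self.count = 0        # counting number of the latest point
        self.window = deque() # active (count, point) pairs

    def add(self, point):
        """Insert a new point; return the expired slide, if any."""
        self.count += 1
        self.window.append((self.count, point))
        # Slide: evict the expired slide once S points beyond W arrived.
        if self.count > self.W and self.count % self.S == 0:
            return [self.window.popleft() for _ in range(self.S)]
        return []

    def active(self):
        return [p for _, p in self.window]
```

A quick trace with W = 4 and S = 2: after six points, the first slide (counts 1 and 2) has expired, leaving points 3 to 6 active.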
To detect outliers over incremental datasets, a naive solution is to recompute the neighbours of all active points when the window slides, which can be computationally expensive. Thus, recent studies devised incremental algorithms: during the update, only the data points which have at least one added/expired neighbour are updated. In particular, those algorithms [8, 9] involve two steps:


Expired slide processing: Data points in the expired slide are removed from the outlier set and the data point set. However, an expired point can still reside in the neighbour lists of active points [8].

New slide processing: For each new data point p_new, the algorithm computes its neighbourhood information to determine whether p_new is an outlier or not (i.e., the number of neighbours of p_new). Then, for each neighbour point p of p_new, the neighbour information of p is updated with the newly-added distance (if dist(p, p_new) ≤ r, then p has one new neighbour). Finally, the algorithm rechecks p to decide its outlier status according to the new neighbourhood information of p.
Note that PPOD follows the above two steps to update the outlier model securely. Thus, it incurs no accuracy loss compared to the algorithms for plaintext data.
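In plaintext form, the two update steps above can be sketched as follows (a simplified illustration with hypothetical data structures; the actual protocol performs the same logic over secret shares):

```python
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def process_slide(active, expired, new_points, r, k):
    """active: dict mapping each point to its list of neighbours
    (points within distance r).
    Step 1: drop expired points; their entries may linger in other
    points' neighbour lists, mirroring the lazy eviction of [8].
    Step 2: insert new points, update affected neighbour lists, and
    re-check outlier status of every active point."""
    for p in expired:                        # expired-slide processing
        active.pop(p, None)
    for p_new in new_points:                 # new-slide processing
        active[p_new] = []
        for p in list(active):
            if p != p_new and dist(p, p_new) <= r:
                active[p_new].append(p)
                active[p].append(p_new)      # p gains one new neighbour
    # A point is an outlier if it has fewer than k *active* neighbours.
    return {p for p, nbrs in active.items()
            if sum(n in active for n in nbrs) < k}

active = {}
outliers = process_slide(active, [], [(0, 0), (1, 0), (0, 1), (10, 10)], 4, 2)
# outliers == {(10, 10)}
```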
III-C. Secure Computation
We briefly review the secure computation technologies used in this paper. Furthermore, we introduce the secure conversion method, which helps to mix efficient secure protocols for different computations (e.g., addition, multiplication, and sorting) to support the complex computations involved in our secure outlier detection protocol efficiently. Readers can find a more detailed introduction in [20, 21].
Additive sharing and multiplication triplets. To additively share (Shr(·)) an ℓ-bit integer x between two parties P_0 and P_1, the client generates r ∈ Z_{2^ℓ} uniformly at random and computes x − r mod 2^ℓ. The first party's share is ⟨x⟩_0 = r and the second party's is ⟨x⟩_1 = x − r; the modulo operation is omitted in the description later. To reconstruct (Rec(·, ·)) a shared value x, each party sends its share to the client, who computes x = ⟨x⟩_0 + ⟨x⟩_1. Given two shared values ⟨x⟩ and ⟨y⟩, addition (Add(·, ·)) is easily performed non-interactively: each P_i locally computes ⟨x⟩_i + ⟨y⟩_i, which is also denoted by ⟨z⟩ = ⟨x⟩ + ⟨y⟩.
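The sharing, reconstruction, and local-addition operations can be sketched as follows (a minimal sketch, assuming ℓ = 32; the function names are our own):

```python
import secrets

ELL = 32
MOD = 1 << ELL  # the ring Z_{2^l}

def share(x):
    """Additively share x: one party gets a random r, the other x - r."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def add(sh_x, sh_y):
    """Addition is non-interactive: each party adds its shares locally."""
    return tuple((a + b) % MOD for a, b in zip(sh_x, sh_y))

x_sh, y_sh = share(20), share(22)
z_sh = add(x_sh, y_sh)
# reconstruct(*z_sh) == 42
```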
To multiply (Mul(·, ·)) two shared values ⟨x⟩ and ⟨y⟩, we leverage Beaver's multiplication-triplet technique [22]. Assume that the two parties have already precomputed and shared ⟨a⟩, ⟨b⟩ and ⟨c⟩, where a, b are uniformly random values in Z_{2^ℓ} and c = a·b. Then, each P_i computes ⟨e⟩_i = ⟨x⟩_i − ⟨a⟩_i and ⟨f⟩_i = ⟨y⟩_i − ⟨b⟩_i. Both parties run Rec(⟨e⟩_0, ⟨e⟩_1) and Rec(⟨f⟩_0, ⟨f⟩_1) to get e and f, and P_i lets ⟨z⟩_i = i·e·f + e·⟨b⟩_i + f·⟨a⟩_i + ⟨c⟩_i.
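The triplet-based multiplication can be sketched as follows; a trusted dealer sampling (a, b, c) stands in for the precomputation phase, and all names are illustrative:

```python
import secrets

MOD = 1 << 32

def share(x):
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

def rec(sh):
    return (sh[0] + sh[1]) % MOD

def beaver_mul(x_sh, y_sh, triple):
    """Multiply shared values using a precomputed triple (a, b, c = a*b).
    The parties open e = x - a and f = y - b, then each computes its
    share of z = e*f + e*b + f*a + c (the e*f term is added by P_1)."""
    a_sh, b_sh, c_sh = triple
    e = rec(tuple((x - a) % MOD for x, a in zip(x_sh, a_sh)))
    f = rec(tuple((y - b) % MOD for y, b in zip(y_sh, b_sh)))
    return tuple(
        (i * e * f + e * b_sh[i] + f * a_sh[i] + c_sh[i]) % MOD
        for i in (0, 1)
    )

# Dealer: sample a, b at random and share a, b, and c = a*b.
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
triple = (share(a), share(b), share(a * b % MOD))
z_sh = beaver_mul(share(6), share(7), triple)
# rec(z_sh) == 42
```

Correctness follows from xy = (e + a)(f + b) = ef + eb + fa + ab, whose terms the parties can compute locally once e and f are public.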
Garbled circuits and Yao's sharing. Yao's garbled circuit (GC) was first introduced in [23], and its security model was formalised in [24]. GC is a generic tool to support secure two-party computation. The protocol is run between a "garbler" with a private input x and an "evaluator" with a private input y. The two parties wish to securely evaluate a function f(x, y). At the end of the protocol, both parties learn the value of f(x, y), but no party learns more than what is revealed by this output value.
In the rest of this paper, and without loss of generality, we assume that P_0 is the garbler and P_1 is the evaluator. GC can also be considered as a protocol which takes Yao's shares as inputs and produces Yao's shares of the outputs. In particular, the Yao's shares of a 1-bit value are the wire labels K^0 and K^1, representing 0 and 1, respectively. The garbler runs a garbling algorithm to generate the garbled circuit and its encoded inputs in the form of Yao's shares. Then, the garbler sends the Yao's shares corresponding to its input to the evaluator. Meanwhile, the evaluator runs an oblivious transfer (OT) [25] protocol with the garbler to acquire the Yao's shares corresponding to its own input. The evaluator then uses the received shares to evaluate the generated circuit and obtains the output shares (output labels).
Conversion. Secure computations based on the above two schemes can be combined by converting one representation of intermediate values into the other [21]. Additive shares can be switched to Yao's shares (A2Y) efficiently. To be more precise, the two parties input their additive shares ⟨x⟩_0 and ⟨x⟩_1 in a bitwise fashion via Yao's sharing. The evaluator then receives the Yao's shares of ⟨x⟩_0 and ⟨x⟩_1 and evaluates an addition circuit to get the label of x. Similarly, Yao's shares of x can be converted to additive shares using a subtraction circuit (Y2A). Specifically, the garbler chooses a random value r as its share ⟨x⟩_0 and gives the Yao's share of r to the evaluator, who evaluates the subtraction circuit x − r. The evaluator can recover x − r locally and set it as ⟨x⟩_1.
IV. System Overview
IV-A. System Architecture
Fig. 2 shows the system architecture of the PPOD system. There are two entities in the PPOD system: the private gateway connected with data acquisition units (DAUs), and the servers providing the outlier detection service in an untrusted cloud. Note that this setting reflects the system model of many industrial corporations, such as AgentVi [26] and Honeywell [27], which provide data collection facilities and anomaly detection services while outsourcing the computation part of the service to a cloud service provider. In addition, popular cloud providers have started to offer incremental anomaly detection services on their dedicated data mining platforms, e.g., Amazon Kinesis [28] and Azure Machine Learning Studio [29]. Our PPOD system aims to protect the confidentiality of the outsourced data amid this trend of using data analysis cloud platforms.
Our system flow involves four phases: (1) Data Preprocessing: For each new data point from the DAUs, the gateway preprocesses the point to meet the input requirements of the additive sharing scheme and shares it between two untrusted but non-colluding cloud servers. (2) Initialisation: During this phase, the servers execute secure computation protocols to compute the k nearest neighbours of each point and to determine the outlier list based on the kDist (i.e., the distance between a data point and its k-th nearest neighbour). Additionally, each server stores the computed nearest-neighbour lists and kDist values as a reference for the update phase. (3) Query: The user of the PPOD system can submit a query point to the gateway to check whether the point is an outlier or not with regard to the current outlier model. The query point is also preprocessed and shared by the gateway, and later the servers leverage the shares to measure the distances between the outliers and the query point. The query point is an outlier in the current model if a computed distance is not greater than an outlier threshold, which is set by the system user. (4) Update: For each new data point, the servers follow the same procedure as in the initialisation phase to find the nearest neighbours of the new point and to find the new outliers based on the kDist metric. Moreover, the servers also update the nearest-neighbour information affected by the newly arriving data points: they combine the precomputed information and the new distance information to update the nearest-neighbour lists of the affected points. At last, the servers refer to the new nearest-neighbour information to decide the status (i.e., outlier or inlier) of these points.
Our system considers a server-aided computation scenario where the internal gateway distributes the computation tasks to two untrusted but non-colluding cloud servers. Such a two-server approach has been formalised [30] and is widely utilised in the literature [31, 32, 33, 34] to protect data confidentiality in the outsourced computation context.
IV-B. Threat Model
In this work, we assume that the gateway and the attached DAUs are maintained by a data analytics service provider, which is a trusted party. Meanwhile, we consider that the two servers belong to two different semi-honest and non-colluding parties (e.g., two cloud providers). They follow our protocol honestly, but they are interested in learning the underlying private information, which, in our case, is the coordinates of the data points. In the rest of the paper, we use A to denote the adversary who compromises one of the servers. In our security model, the adversary is capable of seeing the protocol messages received by the compromised server and tries to infer the user's private information. However, it should not learn any information about its counterparty's data beyond the protocol output. This model aims to protect the confidentiality of data points when data analytics service providers outsource the computation task to the public cloud.
V. PPOD Protocol Construction
We now explain the construction of PPOD in more detail. The notations used in our algorithms are summarised in Table I.
Notation | Meaning
p | a data point with n-dimensional coordinates
dist(p_i, p_j) | the distance between data points p_i and p_j
id(p) | the identity of p
kDist(p) | the distance between p and its k-th nearest neighbour
D | a data point set
[D] | the data point set that keeps the secret shares of data points
V-A. Cryptographic Modules
In order to explain the design clearly, we break the protocol into commonly used cryptographic modules implemented with the cryptographic primitives introduced in Section III. In this section, we discuss the design and implementation of these cryptographic modules.
V-A1. Distance Measurement
In this work, we leverage the squared Euclidean distance to measure the distance between two data points. Note that this distance metric is commonly used in outlier detection algorithms [35, 36], as it requires fewer computations (i.e., only additions and multiplications). However, directly computing Σ_i (x_i − y_i)² is not applicable in our system due to the non-negative input restriction of the additive sharing scheme: if there exists an i such that x_i < y_i, the square operation will produce the additive shares of an undesired result. A naive solution for this issue is to compare and swap before computing, to ensure that x_i ≥ y_i; yet this requires additional steps to convert the additive shares to Yao's shares (for the comparison) and to convert them back (for the computation), which leads to extra computation and communication cost. Therefore, the distance measurement function in our system is defined as dist(x, y) = Σ_{i=1}^{n} (x_i² − 2·x_i·y_i + y_i²), which avoids all negative inputs as well as the expensive comparison and swap. Note that this metric also works when the data points are secretly shared. In particular, the two servers can run the arithmetic operations of Section III-C to compute their shares of the distance independently.
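Combining the additive-sharing primitives of Section III-C, the shared evaluation of the expanded distance can be sketched as follows (a self-contained toy sketch in which a dealer generates the triples inline; all names are illustrative):

```python
import secrets

MOD = 1 << 32

def share(v):
    r = secrets.randbelow(MOD)
    return r, (v - r) % MOD

def rec(sh):
    return (sh[0] + sh[1]) % MOD

def mul(x_sh, y_sh):
    """Beaver multiplication, with the dealer sampling the triple inline."""
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b % MOD)
    e = rec(tuple((x - t) % MOD for x, t in zip(x_sh, a_sh)))
    f = rec(tuple((y - t) % MOD for y, t in zip(y_sh, b_sh)))
    return tuple((i * e * f + e * b_sh[i] + f * a_sh[i] + c_sh[i]) % MOD
                 for i in (0, 1))

def shared_dist(x_shares, y_shares):
    """dist(x, y) = sum_i (x_i^2 - 2*x_i*y_i + y_i^2), on shares."""
    d = (0, 0)
    for xs, ys in zip(x_shares, y_shares):
        xx, xy, yy = mul(xs, xs), mul(xs, ys), mul(ys, ys)
        d = tuple((d[i] + xx[i] - 2 * xy[i] + yy[i]) % MOD for i in (0, 1))
    return d

x, y = (3, 4), (0, 0)
d_sh = shared_dist([share(c) for c in x], [share(c) for c in y])
# rec(d_sh) == 25, the squared Euclidean distance
```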
V-A2. k-Nearest Neighbours (kNN) and kDist
To detect outliers in a given set of data points, PPOD employs the distance metric in Section V-A1 to compute the shares of distances and utilises these shares to compute the k nearest neighbours (kNN) and the kDist of each data point. Then, it compares the kDist with the parameter r; if the kDist is greater than r, the data point is an outlier in the current model. Note that this approach detects exactly the outliers defined in Section III-A: the kDist of a data point being greater than r is equivalent to the point having fewer than k neighbours within the given range r, in which case it is an outlier.
The simplest way to securely compute the kNN list and the kDist is to retrieve this information from a sorted point list after evaluating a sorting circuit over the shares of distances. However, this solution has two security issues. First, sorting reveals the order of data points, and some recent works [17, 18] demonstrated that it is possible to precisely reconstruct the underlying values (i.e., distances) if an adversary knows the ranks and some auxiliary information. Furthermore, different kNN lists may include the same data point, which means that the adversary can compare the identities/distance shares across different kNN lists to learn extra information about the common neighbours of different data points. Thus, to protect the privacy of data points, the procedures for kNN and kDist evaluation should reveal neither the order nor the identities/distance shares.
Algorithms 1 and 2 outline the overall process of computing the kNN and kDist for a point p. These algorithms employ two cryptographic submodules to implement the secure sorting (SortShuffle) and comparison (Max). Besides, we provide two other cryptographic submodules to preprocess (Randomise) and postprocess (Derandomise) the kNN list and kDist to hide the repeat patterns. Fig. 3 summarises these cryptographic submodules.
SortShuffle. Fig. 3(a) shows the structure of the secure sorting module for the kNN computation; our system follows the standard procedure (see Section III-C for details) to evaluate the circuit in Fig. 3(a) and receives an unordered kNN list as the result. The secure sorting module inputs the shares of distances to an A2Y function implemented via an efficient scheme in [21] to convert the additive shares to Yao's shares. It then adopts a sorting circuit based on a sorting network [37] to sort the Yao's shares output by the A2Y function. In addition, the garbler concatenates the sorting circuit with a shuffling circuit based on a pseudorandom permutation (PRP) [38], and the evaluator supplies a random key to disrupt the order of the kNN list and remove the remaining points. Finally, the SortShuffle circuit outputs the Yao's shares of the unordered kNN list, which ensures that the order is not revealed during the sorting procedure.
Max. To retrieve the kDist from an unordered kNN list, the kDist algorithm employs the Max circuit shown in Fig. 3(b). The circuit takes as input a list of Yao's shares (e.g., Yao's shares of the distances in the kNN list), and it consists of a chain of MAX gates to compute the maximum value (e.g., the kDist in the kDist algorithm) of the given inputs. In order to protect the underlying values (the distances), the output of the Max circuit is also in the form of Yao's shares.
Randomise/Derandomise. Each entry in the kNN list comprises two data types, i.e., the value indicating the computed distance, and the identity which helps the servers to work consistently. To hide the repeat patterns after the kNN and kDist evaluations, we should protect both data types when the servers convert the Yao's shares back to additive shares for storage. We design a Randomise function (see Fig. 3(c)) to achieve this goal. To protect the distance value, the garbler generates a new random value and garbles a re-sharing circuit to re-share the distance as additive shares. To hide the identity on the servers, we introduce a flag independent of the data point id to help each server find the position of the corresponding shares on its counterparty before starting the computations. More specifically, the evaluator selects a magic number to de-identify its local points and leverages random numbers generated by the garbler to mask it via xor operations. After the circuit evaluation, the garbler stores the generated random vectors as its local data point shares, and the evaluator takes the output of the circuit as its new data point shares.
The Derandomise function is used to pair the randomised shares between the two servers. As shown in Fig. 3(d), for a randomised list, the Derandomise function generates xor gates revealing the "paired" positions, i.e., the positions where the xor gates return 0. Each server then exploits its local shares to run the subsequent secure protocols according to the revealed positions. After the computation, the two servers run the Randomise function again to invalidate the revealed patterns.
V-B. Data Preprocessing
Overview. Input preprocessing runs for all data points received from the DAUs (e.g., sensors). As shown in Algorithm 3, the gateway performs a two-step preprocessing over the received data points before giving them to the servers for secure outlier detection. The goal is to resolve the input format mismatch between the client and the server: the data points from the DAUs consist of fractional numbers and may also include negative numbers, while the additive sharing scheme in our protocol only works over non-negative integers. Thus, the gateway preprocesses the received coordinates via normalisation and rounding to meet the input requirements of the cryptographic primitives before it shares the data with the servers. After preprocessing, the gateway generates the additive shares of these adjusted data points and distributes the generated shares to the two cloud servers. The detailed construction of the above two preprocessing steps is discussed below:
1) Normalise: This function runs to eliminate the negative numbers in the coordinates. For each coordinate, we assume that the maximum/minimum values are fixed at the beginning of the data collection, as the gateway can learn these parameters from the hardware specification of the DAUs. Therefore, the gateway can store the maximum/minimum values (max_i and min_i) for each dimension i. When the gateway receives a data point, it extracts each coordinate value x_i and computes (x_i − min_i)/(max_i − min_i), which outputs a value in [0, 1] as the corresponding normalised coordinate value.
2) Rounding: After normalisation, the coordinates of a normalised point are only positive fractional numbers in [0, 1]. To handle fractional coordinate values, we introduce a rounding factor e to scale up each fractional number into an integer while preserving e bits of the fractional part of the original number. This is a common strategy adopted in several prior works [39, 31]. As illustrated in the evaluation, the accuracy of the outlier model is not affected under a deliberately selected e.
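The two preprocessing steps can be sketched as follows (an illustrative sketch; the bounds and the power-of-two scaling by 2^e are our assumptions, matching the bit-oriented description above):

```python
def normalise(x, lo, hi):
    """Min-max normalisation into [0, 1], using the fixed per-dimension
    bounds known from the DAU hardware specification."""
    return (x - lo) / (hi - lo)

def round_to_int(x, e):
    """Scale by 2^e and truncate, keeping e fractional bits as an integer."""
    return int(x * (1 << e))

def preprocess(point, bounds, e):
    return tuple(round_to_int(normalise(x, lo, hi), e)
                 for x, (lo, hi) in zip(point, bounds))

# Example: a 2-dimensional reading with bounds [-10, 10] per coordinate.
bounds = [(-10.0, 10.0), (-10.0, 10.0)]
adjusted = preprocess((-5.0, 10.0), bounds, e=8)
# adjusted == (64, 256): 0.25 * 2^8 and 1.0 * 2^8
```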
Discussion. The input preprocessing should be applied to all newly arriving data points on the gateway before they are given to the servers. Nevertheless, this neither incurs a heavy workload on the gateway nor leads to a noticeable delay in the system performance, for two reasons. First, the input preprocessing runs independently for each data point; thus, the gateway can leverage parallel processing to handle the received data points in batches, which greatly speeds up preprocessing. Second, the gateway is not involved in any computation task other than input preprocessing under the two-server setting; the main computation of the outlier detection algorithm is located on the servers.
V-C. Initialisation
Overview. For the first batch of preprocessed data points (their additive shares) from the gateway, the servers invoke the initialisation phase to create the outlier model. To realise this phase, our system adapts the outlier detection algorithm from [36], as it can be implemented via arithmetic operations and sorting only, which perfectly suits the secure computation model we use. In particular, the servers use the received data points and some preset parameters to execute the algorithm and obtain the k nearest neighbours of each data point as well as the corresponding kDist. Consequently, they compare the kDist with the distance threshold r to find the outliers (i.e., if the kDist is greater than r, the point is an outlier). The servers also store the computed information, i.e., the k nearest neighbours, the kDist values, and the distance threshold (as additive shares), to support the update phase (see Section V-E). The complete procedure of the kNN-based privacy-preserving outlier detection is shown in Algorithm 4.
Discussion. The initialisation is a time-consuming procedure, as it follows a nested-loop (NL) strategy, i.e., it traverses every pair of data points, which implies an O(N²) computational complexity, where N is the number of data points in the batch. Despite the relatively high computation cost, we stress that this phase only needs to run once for the entire outlier modelling process, and the model can then be updated far more cheaply (see Section V-E for details).
In terms of security, the algorithm with the NL strategy executes the same sequence of operations over all data points. Hence, the initialisation phase is a data-oblivious process under the two-party secure computation context; that is, the initialisation phase only reveals the information about outliers. In contrast, the other information, such as the coordinates, the kNN lists, and the memory access pattern during the outlier detection process, is kept secret.
V-D. Outlier Query
Overview. The system user can issue a data point query to check whether the data point is an outlier or not by referring to the outlier model on the server side. The query consists of a preprocessed data point and an outlier threshold. Once the servers receive a query, they evaluate the distances between the query point and the outliers using the given additive shares. They then utilise a garbled circuit to compare the computed distances with the threshold and produce the final assertion without revealing any sub-result (i.e., which distance is smaller than the threshold). Algorithm 5 outlines the query process on each server. Next, we present the detailed construction of the assertion function.
Assertion function. In the assertion function, the servers compare all the computed distances with the given outlier threshold (lines 5–8 in Algorithm 5). If one of those distances is not greater than the threshold, the query point is considered an outlier, so the servers return 'True'; otherwise, they return 'False'. The final assertion is generated via OR gates, which combine the results of all pairwise distance comparisons into the final output.
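Logically, the assertion function computes the following predicate, shown here in the clear; in PPOD the comparisons and the final OR run inside a garbled circuit, so no individual comparison result is revealed (the distances and threshold below are toy values):

```python
def assert_outlier(query_dists, threshold):
    """'True' iff the query point lies within the threshold of at least
    one known outlier, i.e., an OR over all pairwise comparisons."""
    return any(d <= threshold for d in query_dists)

# Distances from the query point to the current outliers.
result = assert_outlier([120, 35, 90], threshold=40)
# result is True: the second comparison (35 <= 40) succeeds
```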
Discussion. The query phase is efficient, as it only performs arithmetic operations and comparisons against the known outlier list. Therefore, its computational complexity is bounded by the size of the outlier list, which is much smaller than that of the other phases. In addition, Algorithm 5 is also data-oblivious because it loops over every outlier to produce the result. During this process, each server only learns the final assertion, but not any intermediate result (e.g., each pairwise comparison result) or the input.
V-E. Model Update
Overview. In the model update phase, each server receives a new batch of preprocessed data points and computes a new outlier model which takes these new data points into consideration. To ensure the efficiency of this phase, the update protocol of PPOD uses the sliding window model: it maintains a list of active points and only recomputes/reports the outliers for the active points. Also, the update algorithm only updates the data points that are affected by the added/expired data points, which is consistent with the incremental algorithms [8, 9] for the plaintext outlier detection schemes. In particular, the update protocol removes the expired points from the active point set and the outlier list. Then, it computes the kNN and kDist information of each new data point by utilising the remaining active points and determines whether the new point is an outlier. Later, the protocol updates the active points that appear in the kNN list of the new data point. The procedure of the update phase is given in Algorithm 6.
Discussion. The simplest solution is to re-run the initialisation protocol on the updated dataset. However, as mentioned in Section V-C, the initialisation is an inefficient phase (, where  is the size of the dataset). Compared with this naive approach, the complexity of the proposed update approach is lower: for each new data point, the update phase only refers to the active data points to compute the kNN list, which takes , where  is the sliding window size, and to update the existing information. The whole update procedure runs for the  new points after sliding (adding new points), which indicates that the overall runtime complexity is .
In terms of security, the update approach does not guarantee data-obliviousness, because it retrieves the ids in the kNN list when it updates the existing data points (lines 11–12 in Algorithm 6). Nevertheless, we stress that this is the only additional leakage compared with the other phases, and it enables a much more efficient update phase.
VI Security Analysis
We give the security analysis following the classic paradigm of comparing the real-world execution of the protocol to an ideal-world execution in which a trusted third party evaluates the functions on behalf of the involved parties. The only difference is that we consider an ideal world where the adversary is allowed to learn the nearest neighbours of a newly arrived data point when it is added into the sliding window. Note that we work in an OT-hybrid model, where the parties are given access to a trusted party computing the ideal OT function. The following theorem shows that the PPOD protocol is secure with the given leakage function in this hybrid model; thus, the PPOD protocol remains secure if the trusted party is replaced by a real OT protocol.
To begin with, we give a security analysis for the secure kNN and distance modules in Section V-A2, as our PPOD protocol relies heavily on these modules.
Theorem 1.
Proof. We denote the secure kNN and distance protocols as , and our proof shows that  securely realises the ideal functions in Algorithm 7. As the adversary in our model corrupts at most one server, and the views of the two servers are slightly different (one garbler and one evaluator), we separately consider the scenarios in which the adversary corrupts . For each , we describe how to construct a simulator  that simulates  in the ideal model. For the two variants of the kNN evaluation (i.e.,  and ), the only difference between them is how they handle the output of SortShuffle. In particular, in ,  returns the identities from the trusted party to , and  should give the simulated decoded information of the identities to . On the other hand, in , both simulators are only required to return random shares of the distances to the adversary.
We claim that ’s view in the real and ideal models is indistinguishable for the kNN evaluations: the security of the additive sharing scheme and of the multiplication triplets ensures the randomness of the distance shares, and the protocol is a composition of a sequence of secure modules (SortShuffle, Randomise). It follows from the modular composition theorem [40] that the adversary’s views are identical. The kDist function is almost identical to the kNN functions, except that it connects the output of the SortShuffle gate to a Max gate to retrieve the maximum distance in the kNN list. Therefore, we can follow the same path to show the security of the kDist function, i.e., the modular composition theorem is applied to the SortShuffle gate, the Max gate, and the Y2A gate to obtain the same views in the real/ideal models. The update function only involves garbled circuit evaluation, and the security of the garbled circuit ensures that no adversary can learn the input (i.e., the previous kNN list) from the output and the execution of the circuit. ∎
We now provide the security proof of the PPOD protocol. The ideal function of our PPOD is given in Algorithm 8. The following theorem demonstrates that the PPOD scheme is secure under the non-colluding semi-honest server model.
Theorem 2.
Consider a protocol where clients distribute shares of data points among two servers who run the PPOD protocol in Section V. In the (, OT)-hybrid model, the PPOD protocol securely realises the ideal function in Algorithm 8, with leakage consisting of the nearest neighbours of newly arrived data points, in the semi-honest but non-colluding adversarial model.
Proof. We follow the same setting to prove the security of the PPOD system. In the initialisation phase,  runs  and sends randomly generated shares in , with identities, as the shared points to . Besides, the computation and randomisation of the kNN list and distances can be simulated by calling the ideal functions  and . Finally,  utilises a dummy circuit, simulates the input labels, and plays the role of the trusted server to send the detected outlier identities, simulating the view of . On the other hand,  relies on  to get the comparison result between the distances and the threshold and sends the simulated circuit with the same output to  as its view.
Now, we illustrate the security of PPOD in each phase, respectively. During initialisation, ’s view in the real and ideal models is indistinguishable:  provides random values as the shared points and simulates the garbled circuit via the output of  for the corresponding . Besides, it uses the ideal function  to return the result to .
For the query phase, the simulator leverages random inputs to simulate the query points, and then it can simulate the adversaries’ views similarly as above. In particular, the distance shares are also random values, as they leverage the randomly generated multiplication triplets. Moreover, the simulator utilises the simulator of the garbled circuit to simulate the rest of the protocol and returns the assertion to the client. Therefore, the modular composition theorem also implies that the query protocol remains secure after combining the additive sharing scheme and the garbled circuit.
The update phase is almost identical to the initialisation phase, except that it additionally reveals the kNN list of the newly arrived data, and there is an extra round to update the information of these nearest neighbours. Specifically, the update phase requires calling  and updating the points with the returned ids. As a result, the update phase is secure with one extra leakage, as it is the composition of the initialisation phase and the functionalities in  and . As  securely realises , the PPOD scheme also securely realises  with the leakage of the nearest neighbours of a newly arrived data point in the (, OT)-hybrid model. ∎
VII Evaluation
Implementation. We implement our PPOD system in Java. To enable efficient and secure two-party computation on the cloud servers, we first implement the additive sharing scheme. The arithmetic operations in the additive sharing scheme are realised by regular addition and multiplication operations, with the modulo operation performed over Java primitive types. Note that the modulo operation implemented via Java primitive types (e.g., long, int) is much faster than the native modulo operation of the Java BigInteger type (about 50x faster). For the oblivious transfer (OT) and garbled circuit protocols, we leverage FlexSC [41], which includes an implementation of extended OTs [25] and an optimised garbled circuit scheme. To improve the runtime performance of our prototype, the PPOD system maintains a pool of precomputed multiplication triplets and periodically refreshes it to avoid extra computation/communication costs on the fly.
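The use of primitive long arithmetic can be illustrated with a minimal sketch of additive sharing over Z_{2^64}: Java's long addition and multiplication wrap modulo 2^64 implicitly, so no explicit modulo or BigInteger is needed. The Beaver-triplet multiplication follows the standard construction [22]; all names are our own illustration, not the prototype's API:

```java
import java.security.SecureRandom;

// Additive secret sharing over Z_{2^64} using implicit long overflow.
class AdditiveShares {
    static final SecureRandom RNG = new SecureRandom();

    // Split x into two random shares with x = x0 + x1 (mod 2^64).
    static long[] share(long x) {
        long x0 = RNG.nextLong();
        return new long[]{x0, x - x0};
    }

    static long reconstruct(long[] s) { return s[0] + s[1]; }

    // Multiply shared x and y using a precomputed triplet c = a * b:
    // the parties open e = x - a and f = y - b, then locally compute shares of
    // x*y = c + e*b + f*a + e*f (the public term e*f is added by one party).
    static long[] mulWithTriplet(long[] xs, long[] ys,
                                 long[] as, long[] bs, long[] cs) {
        long e = (xs[0] - as[0]) + (xs[1] - as[1]); // opened value e = x - a
        long f = (ys[0] - bs[0]) + (ys[1] - bs[1]); // opened value f = y - b
        long z0 = cs[0] + e * bs[0] + f * as[0] + e * f;
        long z1 = cs[1] + e * bs[1] + f * as[1];
        return new long[]{z0, z1};
    }
}
```

In the actual protocol each server only ever sees its own share plus the opened values e and f, which are uniformly random since a and b are.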
Setup. The experiments are executed on two EC2 c5.4xlarge instances running Ubuntu 18.04 LTS. Each instance has 16 cores and 48 GB of memory. Besides, we create a c5.large instance (4 cores and 8 GB of memory) serving as the client (i.e., the gateway) in the PPOD system. It preprocesses and distributes the dataset to the above two more powerful servers, which execute the PPOD protocol. Our servers are connected with a 10 Gb NIC. To evaluate the performance of PPOD, we use a real-world dataset from UCI [42], which contains  records of dimension .
Parameters. There are four parameters in our PPOD system: the window size , the slide size , the count threshold , and the distance threshold . We evaluate the PPOD system under different  and , because they are the main factors affecting the performance of PPOD. In particular,  determines the number of distance measurement functions to be executed as well as the input size of the SortShuffle circuit. On the other hand,  determines the size of the Randomise/Derandomise and kDist functions, which are frequently used during the update phase. By default, we set , , , and  for our dataset. Unless specified otherwise, all parameters take their default values in the experiments.
In the rest of this section, we first benchmark the performance of the kNN module, and then we report the runtime performance of our PPOD.
VII-A Performance of the kNN Module
CPU Time. Fig. 6 depicts the resulting CPU time of the secure kNN module in different phases. In particular, Fig. 6(a) shows the CPU time of adding a new point into the current model: despite the increase of , the CPU time of adding a new point is constant (around 12 s). This is because the kNN is executed during the initialisation phase and the update phase to process the newly arrived points, and it involves the distance measurement computation and the SortShuffle evaluation with the existing data points (380–400 points). Compared with these two steps, the remaining steps, i.e., computing the distance and Randomise with  inputs, can be done efficiently (less than  ms according to our evaluation).
The CPU time of updating an existing point (see Fig. 6(b)) varies from  ms to  ms with the increase of . The update function of the kNN module only runs in the update phase to update the kNN list of the target point. The parameter  affects the runtime performance of the update function, since it determines the size of the kNN lists, and the server takes more time to evaluate a larger circuit when the kNN lists are larger. Finally, we examine the impact of the proposed Randomise/Derandomise cryptographic modules. As shown in Fig. 6, the costs of these two modules are almost negligible (Randomise: 0.6–5.7 ms, Derandomise: 0.03–1.28 ms), because they only involve simple circuit structures (i.e., Subtract gates and free XOR gates). Therefore, these two modules help our PPOD achieve a better security guarantee at a small cost when computing the kNN.
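The near-negligible cost of the Randomise/Derandomise pair is intuitive when viewed in the clear: a value is blinded by one random mask on the way out and unblinded by one subtraction on the way back, which corresponds to the single Subtract gate mentioned above. This is a rough plaintext analogue under our own naming, not the modules' actual circuit-level interface:

```java
import java.security.SecureRandom;

// Rough plaintext analogue of the Randomise/Derandomise pair.
class Masking {
    static final SecureRandom RNG = new SecureRandom();

    // Blind a value with a fresh random mask; the mask stays with its owner.
    static long[] randomise(long value) {
        long mask = RNG.nextLong();
        return new long[]{value + mask, mask}; // {blinded value, local mask}
    }

    // Strip the mask: one subtraction, i.e., a single Subtract gate in circuit form.
    static long derandomise(long blinded, long mask) {
        return blinded - mask;
    }
}
```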
Communication. Fig. 6 demonstrates the communication overhead of processing one data point via the kNN module. It shows a pattern similar to the CPU time evaluation. Specifically, the garbler in the kNN module needs to send a constant-size input (90 MB) to the evaluator, because the major part of the input is the SortShuffle circuit, whose size depends on . The communication overhead of the update function is relatively small (1–12 MB), but it is proportional to  for the same reason as in the CPU time evaluation, i.e., the generated circuit size is proportional to . The communication overhead increases slightly when the system employs the Randomise/Derandomise cryptographic modules to enhance the security of the data points, especially for the Derandomise module, whose size complexity is . As shown in Fig. 6(b), it incurs at most  more communication overhead when the randomisation is deployed. Nevertheless, we argue that this overhead is affordable, as it only consists of XOR gates, which form a small object compared to the sorting circuit and are easy to evaluate (free XOR gates).
Phase | Preprocess | Initialisation | Query  | Update
Time  | 46 ms      | 35 min         | 217 ms | 9 s
VII-B Performance of PPOD
First, we note that our proposed PPOD achieves the same accuracy as running the plaintext outlier detection protocol [8] on the unencrypted dataset. Next, we report the runtime performance of each phase of PPOD in Table II. It shows that the preprocessing and query can be done in several milliseconds, which indicates that the client (the gateway) can preprocess a data point with small computational resources and get a real-time query result for the current outlier model. In addition, although the initialisation takes minutes to execute, it only runs once for the first  data points. After initialisation, the system can perform an update in only 9 s, which is a moderate runtime in the application context.
Impact of . We further examine the runtime performance and memory usage of the initialisation phase for different values of , as this phase highly depends on the window size . Fig. 6 depicts the resulting runtime and memory usage, respectively. When  increases, the CPU time and memory consumption are expected to increase as well. Besides, we observe that the memory consumption increases sharply when  reaches  (see Fig. 6(b)). The increase of  not only affects the size of the generated circuit and the number of multiplication triplets but also the delay of evaluating the circuit and computing distances via triplets. Therefore, more objects reside in memory for the computation, which leads to the rapid growth of memory consumption. However, such memory consumption is at an acceptable level on our evaluation platform (48 GB of memory) and on other public clouds such as Azure.
VIII Conclusion
This paper presents a privacy-preserving outlier detection (PPOD) protocol targeting encrypted incremental datasets. Our PPOD protocol leverages advanced cryptographic primitives (i.e., secure two-party computation protocols) to build several secure and efficient modules. In addition, it adopts the sliding window technique to ensure practical performance during the update phase with newly arrived data points. We implemented PPOD as a prototype system and provided a performance evaluation on a real-world dataset to demonstrate its accuracy and efficiency.
Acknowledgement
This work was supported in part by the Monash FIT Multidisciplinary Seed Funding Scheme, the Data61 Collaborative Research Project (UbiSENSE for Cities), and an AWS Research Grant.
References
 [1] S. Sadik and L. Gruenwald, “Research Issues in Outlier Detection for Data Streams,” ACM SIGKDD Explorations Newsletter, vol. 15, no. 1, pp. 33–40, 2014.
 [2] X. Yuan, X. Wang, J. Lin, and C. Wang, “Privacy-Preserving Deep Packet Inspection in Outsourced Middleboxes,” in IEEE INFOCOM’16, 2016.
 [3] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly Detection: A Survey,” ACM Computing Surveys (CSUR), vol. 41, no. 3, 2009.
 [4] K. Fu and W. Xu, “Risks of Trusting the Physics of Sensors,” Communications of the ACM, vol. 61, no. 2, pp. 20–23, 2018.
 [5] M. Salehi, C. Leckie, J. C. Bezdek, T. Vaithianathan, and X. Zhang, “Fast Memory Efficient Local Outlier Detection in Data Streams,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 12, pp. 3246–3260, 2016.
 [6] M. Gupta, J. Gao, C. Aggarwal, and J. Han, “Outlier Detection for Temporal Data: A Survey,” IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 9, pp. 2250–2267, 2014.
 [7] M. Rabin, “How To Exchange Secrets with Oblivious Transfer,” Cryptology ePrint Archive, Report 2005/187, 2005.
 [8] F. Angiulli and F. Fassetti, “Detecting Distance-Based Outliers in Streams of Data,” in CIKM’07, 2007.
 [9] M. Kontaki, A. Gounaris, A. Papadopoulos, K. Tsichlas, and Y. Manolopoulos, “Continuous Monitoring of Distance-Based Outliers over Data Streams,” in IEEE ICDE’11, 2011.
 [10] K. Bhaduri, M. Stefanski, and A. Srivastava, “Privacy-Preserving Outlier Detection through Random Nonlinear Data Distortion,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 1, pp. 260–272, 2011.
 [11] J. Böhler, D. Bernau, and F. Kerschbaum, “Privacy-Preserving Outlier Detection for Data Streams,” in DBSec’17, 2017.
 [12] S. Erfani, Y. Law, S. Karunasekera, A. Leckie, and M. Palaniswami, “Privacy-Preserving Collaborative Anomaly Detection for Participatory Sensing,” in PAKDD’14, 2014.
 [13] A. Alabdulatif, H. Kumarage, I. Khalil, and X. Yi, “Privacy-Preserving Anomaly Detection in Cloud with Lightweight Homomorphic Encryption,” Journal of Computer and System Sciences, vol. 90, pp. 28–45, 2017.
 [14] L. Li, L. Huang, W. Yang, X. Yao, and A. Liu, “Privacy-Preserving LOF Outlier Detection,” Knowledge and Information Systems, vol. 42, no. 3, pp. 579–597, 2015.
 [15] J. Vaidya and C. Clifton, “Privacy-Preserving Outlier Detection,” in ICDM’04, 2004.
 [16] L. Cao et al., “Scalable Distance-Based Outlier Detection over High-Volume Data Streams,” in IEEE ICDE’14, 2014.
 [17] P. Grubbs, K. Sekniqi, V. Bindschaedler, M. Naveed, and T. Ristenpart, “Leakage-Abuse Attacks against Order-Revealing Encryption,” in IEEE S&P’17, 2017.
 [18] E. Kornaropoulos, C. Papamanthou, and R. Tamassia, “Data Recovery on Encrypted Databases with k-Nearest Neighbor Query Leakage,” in IEEE S&P’19, 2019.
 [19] L. Tran, L. Fan, and C. Shahabi, “Distance-Based Outlier Detection in Data Streams,” Proceedings of the VLDB Endowment, vol. 9, no. 12, pp. 1089–1100, 2016.
 [20] P. Pullonen, D. Bogdanov, and T. Schneider, “The Design and Implementation of a Two-Party Protocol Suite for Sharemind 3,” http://tubiblio.ulb.tudarmstadt.de/61259/[online], 2012.
 [21] D. Demmler, T. Schneider, and M. Zohner, “ABY - A Framework for Efficient Mixed-Protocol Secure Two-Party Computation,” in NDSS’15, 2015.
 [22] D. Beaver, “Efficient Multiparty Protocols using Circuit Randomization,” in CRYPTO’91, 1991.
 [23] A. Yao, “Protocols for Secure Computations,” in IEEE SFCS’82, 1982.
 [24] M. Bellare, V. Hoang, and P. Rogaway, “Foundations of Garbled Circuits,” in ACM CCS’12, 2012.
 [25] G. Asharov, Y. Lindell, T. Schneider, and M. Zohner, “More Efficient Oblivious Transfer and Extensions for Faster Secure Computation,” in ACM CCS’13, 2013.

 [26] AgentVi, “innoVi Enterprise,” https://www.agentvi.com/products/innovi/innovienterprise/[online], 2018.
 [27] Microsoft, “Tracking a Building’s Vital Signs to Keep it Safe and Healthy,” https://customers.microsoft.com/enus/story/trackingabuildingsvitalsignstokeepitsafeandh[online], 2016.
 [28] Amazon, “Amazon Kinesis,” https://aws.amazon.com/kinesis/[online], 2018.
 [29] Microsoft, “Time Series Anomaly Detection,” https://docs.microsoft.com/enus/azure/machinelearning/studiomodulereference/timeseriesanomalydetection#howtoconfiguretimeseriesanomalydetection [online], 2018.
 [30] S. Kamara, P. Mohassel, and M. Raykova, “Outsourcing Multi-Party Computation,” Cryptology ePrint Archive, Report 2011/272, 2011.
 [31] P. Mohassel and Y. Zhang, “SecureML: A System for Scalable Privacy-Preserving Machine Learning,” in IEEE S&P’17, 2017.
 [32] V. Nikolaenko et al., “Privacy-Preserving Matrix Factorization,” in ACM CCS’13, 2013.
 [33] ——, “Privacy-Preserving Ridge Regression on Hundreds of Millions of Records,” in IEEE S&P’13, 2013.
 [34] S. Lai, X. Yuan, S.-F. Sun, J. K. Liu, Y. Liu, and D. Liu, “GraphSE: An Encrypted Graph Database for Privacy-Preserving Social Search,” in ACM ASIACCS’19, 2019.
 [35] P. Chan and M. Mahoney, “Modeling Multiple Time Series for Anomaly Detection,” in IEEE ICDE’05, 2005.
 [36] S. Ramaswamy, R. Rastogi, and K. Shim, “Efficient Algorithms for Mining Outliers from Large Data Sets,” in ACM SIGMOD’00, 2000.
 [37] K. Batcher, “Sorting Networks and their Applications,” in ACM SJCC’68, 1968.
 [38] M. Luby and C. Rackoff, “How to Construct Pseudorandom Permutations from Pseudorandom Functions,” SIAM Journal on Computing, vol. 17, pp. 373–386, 1988.
 [39] R. Bost, R. Popa, S. Tu, and S. Goldwasser, “Machine Learning Classification over Encrypted Data,” in NDSS’15, 2015.
 [40] R. Canetti, “Security and Composition of Multiparty Cryptographic Protocols,” Journal of Cryptology, vol. 13, no. 1, pp. 143–202, 2000.
 [41] X. Wang, “FlexSC,” https://github.com/wangxiao1254/FlexSC[online], 2018.
 [42] D. Dua and C. Graff, “UCI Machine Learning Repository,” http://archive.ics.uci.edu/ml[online], 2017.