COBRA: Context-aware Bernoulli Neural Networks for Reputation Assessment
Abstract
Trust and reputation management (TRM) plays an increasingly important role in large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT). One main objective of TRM is to achieve accurate trust assessment of entities such as agents or IoT service providers. However, this encounters an accuracy-privacy dilemma, as we identify in this paper, and we propose a framework called Context-aware Bernoulli Neural Network based Reputation Assessment (COBRA) to address this challenge. COBRA encapsulates agent interactions or transactions, which are prone to privacy leakage, in machine learning models, and aggregates multiple such models using a Bernoulli neural network to predict a trust score for an agent. COBRA preserves agent privacy and retains interaction contexts via the machine learning models, and achieves more accurate trust prediction than a fully-connected neural network alternative. COBRA is also robust to security attacks by agents who inject fake machine learning models; notably, it is resistant to the 51-percent attack. The performance of COBRA is validated by our experiments using a real dataset, and by our simulations, where we also show that COBRA outperforms other state-of-the-art TRM systems.
1 Introduction
Trust and reputation management (TRM) systems are critical to large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT), where agents constantly interact with one another.
Early TRM systems such as [6] rely on firsthand evidence to derive trust scores of agents. For example, an agent Alice assigns a trust score to another agent Bob based on the outcome of her previous interactions with Bob. However, as the scale of the systems grows (e.g., IoT), firsthand evidence becomes too sparse to support reliable trust evaluation. Hence, secondhand evidence was exploited by researchers to supplement firsthand evidence. In that case, Alice would assign a trust score to Bob based not only on her own interactions with Bob but also on what other agents advise about Bob.
However, what form the secondhand evidence should take has been largely overlooked. This engenders an important issue which we refer to as the accuracy-privacy dilemma. To illustrate this, suppose Alice consults another agent Judy about how trustworthy Bob is. One way is to let Judy give a trust score or rating about Bob [19], which is the approach commonly adopted in the trust research community. This approach is simple but loses the context information of the interactions between agents. For example, the context could be the transaction time and location, and a service provided by an agent during off-peak hours could have higher quality (be more SLA-conformant) than during peak hours. Without such context information, trust assessment based on just ratings or scores would have lower accuracy. On the other hand, another method is to let Judy reveal her entire interaction history with Bob (e.g., in the form of a detailed review), which is the approach commonly used in recommender systems. Although the information disclosed as such is helpful for trust assessment of Bob, it is likely to expose substantial privacy of Bob and Judy to Alice and possibly the public.
To address this accuracy-privacy dilemma, and in the meantime avoid relying on a trusted third party which is often not available in practice, we propose a framework called Context-aware Bernoulli Neural Network based Reputation Assessment (COBRA). It encapsulates the detailed secondhand evidence using machine learning models, and then aggregates these models using a Bernoulli neural network (BNN) to predict the trustworthiness of an agent of interest (e.g., an IoT service provider). The encapsulation protects agent privacy and retains the context information to enable more accurate trust assessment, and the BNN accepts the outputs of those ML models and the information-seeking agent's (Alice's, in the above example) firsthand evidence as input, to make a more accurate trustworthiness prediction (of Bob, in the above example).
The contributions of this paper are summarized below:

We identify the accuracy-privacy dilemma and propose COBRA to solve this problem using a model encapsulation technique and a Bernoulli neural network. COBRA preserves privacy by encapsulating secondhand evidence using ML models, and makes accurate trust predictions using the BNN, which fuses both firsthand and secondhand evidence, where the valuable context information is preserved by the ML models.

The proposed BNN yields more accurate predictions than standard fully-connected feedforward neural networks, and trains significantly faster. In addition, it is also general enough to be applied to similar tasks where the input is a set of probabilities associated with Bernoulli random variables.

The design of COBRA takes security into consideration and is robust to fake ML models; in particular, it is resistant to the 51-percent attack, where the majority of the models are compromised.

We evaluate the performance of COBRA using both experiments based on a real dataset, and simulations. The results validate the above performance claims and also show that COBRA outperforms other state-of-the-art TRM systems.
2 Related Work
A large strand of literature has attempted to address TRM in multi-agent systems. The earliest line of research had a focus on firsthand evidence [19], using it as the main source of trustworthiness calculation. For example, the Beta reputation system [6] proposes a formula to aggregate firsthand evidence represented by binary values indicating positive or negative outcomes of interactions. Concurrently with the spike in popularity of recommender systems in late 2004 [3, 1], the alternative usage of TRM in preference and rating management gained much research attention. However, the binary nature of the trust definition presents a barrier because recommender systems conventionally use non-binary numerical ratings. To this end, Dirichlet reputation systems [5] generalize the binomial nature of beta reputation systems to accommodate multinomial values.
A different line of research focuses on secondhand evidence [19] as a supplementary source of trustworthiness calculation. These works calculate a trust score by aggregating secondhand evidence and a separate trust score by aggregating firsthand evidence, and then a final score by aggregating these two scores. Some early trust models such as [6] are also applicable to secondhand evidence. The challenges in this line of research are [20]: (i) How to determine which secondhand evidence is less reliable, since secondhand evidence is provided by other agents? (ii) How much to rely on trust scores that are derived from secondhand evidence compared to scores derived from firsthand evidence?
To address the first challenge, the Regret model [15] assumes the existence of social relationships among agents (and owners of agents), and assigns weights to secondhand evidence based on the type and the closeness of these social relationships. These weights are then used in the aggregation of secondhand evidence. More sophisticated approaches like Blade [14] and Habit [17] tackle this issue with a statistical approach using Bayesian networks and hence do not rely on heuristics. To address the second challenge, [2] uses a Q-learning technique to calculate a weight which determines the extent to which the score derived from secondhand evidence affects the final trust score.
A separate thread of research relies solely on stereotypical intrinsic properties of the agents and the environment in which they operate, to derive a likelihood of trustworthiness without using any evidence. These approaches [23, 13, 16, 8] are considered a complement to evidence-based trust and are beneficial when there is not enough evidence available.
Our proposed approach does not fall under any of these categories; instead, we introduce model encapsulation as a new way of incorporating evidence into TRM. We make no assumptions about the existence of stereotypical or socio-cognitive information, as opposed to [23, 13, 16, 8, 15]. Our approach has minimal privacy exposure, unlike [14, 17], and preserves important context information.
3 Model Encapsulation
COBRA encapsulates secondhand evidence in ML models, which achieves two purposes: (i) it preserves the privacy of agents who are involved in the past interactions; (ii) it retains context information, which helps more accurate trust prediction later (described in the next section).
In this technique, each agent trains an ML model using its past interaction records with other agents in different contexts. Specifically, an agent trains a model based on its past direct interactions (i.e., firsthand evidence) with another agent. The input to the model is a set of context features (e.g., date, time, location), and the output is a predicted conditional probability indicating how trustworthy that agent is for a given context.
To build this model, the agent maintains a dataset that records its past interactions with each other agent, where each record includes the context and the outcome, with 0 and 1 meaning a negative and a positive outcome respectively (e.g., whether the SLA is met or not). Non-binary outcomes can be handled by the common method of converting a multi-class classification problem into multiple binary classification problems (so there will be multiple models for each agent). Then, the agent trains a machine learning model for each other agent using the corresponding dataset.
COBRA does not restrict the choice of ML models; this is up to the application and the agents. For example, agents hosted on mobile phones can choose simple models such as decision trees and Naive Bayes, while those on desktop computers or in the cloud can use more complex models such as random forests and AdaBoost. Furthermore, agents can choose different models in the same application, meaning that one agent's model may not be of the same type as another's. On the other hand, the context feature set needs to be fixed for the same application.
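As a concrete illustration of this encapsulation, the sketch below trains a small decision tree on one agent's interaction records, using hour-of-day as the only context feature. The records, feature choice, and library are illustrative assumptions, not part of COBRA's specification.

```python
# Hypothetical sketch of model encapsulation: an agent trains a classifier
# on its interaction records with one other agent, mapping context features
# to the probability of a positive (SLA-met) outcome.
from sklearn.tree import DecisionTreeClassifier

# Fabricated past interactions: (hour_of_day, outcome), 1 = SLA met.
# Off-peak hours (small values) happened to go well; peak hours did not.
records = [(1, 1), (2, 1), (3, 1), (9, 0), (10, 0), (11, 0)]
X = [[hour] for hour, _ in records]
y = [outcome for _, outcome in records]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The shared artifact is the model, not the raw records: another agent
# queries it for a trust probability in its own context.
p_offpeak = model.predict_proba([[2]])[0][1]   # P(trustworthy | off-peak hour)
p_peak = model.predict_proba([[10]])[0][1]     # P(trustworthy | peak hour)
```

The adviser never reveals the individual transactions, yet the advice seeker still gets a context-dependent trust estimate rather than a single flat score.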
Model Sharing. Whenever an agent seeks advice (i.e., secondhand evidence) from another agent about an agent of interest, the advising agent can share its corresponding model with the advice seeker. This avoids exposing secondhand evidence directly and thereby preserves the privacy of both the adviser and the agent of interest. It also retains context information as compared to providing just a single trust score, and hence helps more informed decision making in the subsequent step (described in Section 4).
Note that the information we seek to keep private is the contextual details of the interactions between the adviser and the target agent, whereas concealing the identities of these agents is not the focus of this work.
Sharing a model is as straightforward as transferring the model parameters to the soliciting agent, or making the model accessible to all the agents (in a read-only mode). This sharing process does not require a trusted intermediary because the model does not present a risk of privacy leakage about the adviser or the target agent. The required storage is also very low compared to storing the original evidence.
Moreover, COBRA does not assume that all or most models are accurate. Unlike many existing works that assume an honest majority and hence are vulnerable to the 51-percent attack, COBRA uses a novel neural network architecture (Section 4) that is more robust to model inaccuracy and even malice (e.g., models that give opposite outputs).
4 Bernoulli Neural Network
After model encapsulation, which allows for a compressed transfer of context-aware secondhand evidence with privacy preservation, the next question is how to aggregate these models to achieve an accurate prediction of the trustworthiness of a target agent. Using common measures of central tendency such as the mean or mode will yield misleading results, because an adviser agent's model was trained on a dataset with likely different contexts than the advisee agent's context. In a sense, this problem is akin to the problem found in transfer learning. Besides, COBRA aims to relax the assumption of an honest majority and give accurate predictions even when the majority of the models are inaccurate or malicious.
In this section, we propose a solution based on artificial neural networks (ANN). The reasons for choosing ANN are twofold. First, the task of predicting trustworthiness in a specific context given other agents' models is a linearly non-separable task with a high-dimensional input space (detailed in Section 4.1). Such tasks can specifically benefit from ANN's capability of discovering intrinsic relationships hidden in data samples [21]. Second, the models are non-ideal due to the possibly noisy agent datasets, but ANN is highly noise-tolerant and sometimes can even be positively affected by noise [10, 9].
Therefore, we propose a Bernoulli Neural Network (BNN) as our solution. BNN specializes in processing data that is a set of probabilities associated with Bernoulli random variables, which matches our input space: a set of predicted trust scores between zero and one indicating the probability of an agent being trustworthy in a given context. In contrast to the widely used Convolutional Neural Network (CNN), BNN does not require data to have a grid-like or structured topology, and hence matches well with trust or reputation scores. Specifically, unlike CNN, which uses the hierarchical pattern in data, BNN uses information entropy to determine the connections in the network.
Fig. 1 provides an overview of COBRA, where the models on the left-hand side are from the encapsulation technique described in Section 3, and the right-hand side is the BNN described in this section. In the following, we explain the architectural design of BNN in Sections 4.1-4.3 and describe how to assemble the data required for training the BNN in Section 4.4.
4.1 Topology
We propose a layered network architecture for the BNN, consisting of an input layer, an output layer, and hidden layers in between, with a weight on each edge and a bias at each layer. The entire network can thus be compiled from Eq. 1, which gives the output of any node. The inputs of the network in Eq. 1 are assembled from (1) the models explained in Section 3 and (2) the context features, where the assembling process is explained in Section 4.4. The former are values between zero and one which indicate the predicted trustworthiness probability of an entity in the given context, sourced from different predictors.
(1) 
The probabilistic nature of the inputs enables us to calculate how informative an input is, by computing the entropy of the predicted trustworthiness for which the input indicates the probability. This is used in Eq. 1 to ensure that the number of neural units an input connects to is inversely proportional to the average information entropy calculated over that input's samples. Each sample of an input in the training dataset (exclusive of context features) is a probability (a sourced prediction) associated with a Bernoulli random variable (trustworthiness). Hence, the connectivity is defined recursively by Eq. 2, where the average entropy of each input is taken over the training dataset. This does not apply to context features, because their values are not probabilities of Bernoulli random variables; for them, the notion of entropy applies only to their entire feature space, not to the individual values used in Eq. 1.
Moreover, each layer in Eq. 1 has its own activation function. The design choice of the activation functions is explained in Section 4.3.
(2) 
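The entropy-based fan-out rule can be sketched as follows. The helper names and the use of base-2 entropy are our assumptions, since the exact form of Eq. 2 is not reproduced here; the point is only that confident inputs (probabilities near 0 or 1) have low average entropy and would therefore connect to more hidden units.

```python
import math

def bernoulli_entropy(p, eps=1e-12):
    """Entropy (in bits) of a Bernoulli variable with success probability p."""
    p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def avg_entropy(samples):
    """Average entropy over one input's column of predicted trust probabilities."""
    return sum(bernoulli_entropy(p) for p in samples) / len(samples)

# A confident adviser model yields low average entropy (informative input);
# a model that always outputs 0.5 yields maximum entropy (uninformative).
confident = [0.95, 0.9, 0.05, 0.1]
uninformative = [0.5, 0.5, 0.5, 0.5]
assert avg_entropy(confident) < avg_entropy(uninformative)
```

Under the rule described above, the `confident` input would fan out to more units of the first hidden layer than the `uninformative` one.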
4.2 Depth and width
The depth of the BNN is 3, since the input layer is not counted by convention. A feedforward network with two hidden layers can be trained to represent any arbitrary function, given sufficient width and density (number of edges) [4]. Our goal is to find the function which most accurately weights the predictions sourced from multiple predictors (i.e., a high-dimensional input space). Many such sources can be unreliable or misleading, either unintentionally (e.g., malfunction) or deliberately (e.g., malice). There often does not exist a single source which is always reliable, and some sources are more reliable in some contexts. Moreover, malicious sources sometimes collude with each other to make the attack harder to detect. Therefore, the function that we aim to estimate in this linearly non-separable task can have any arbitrary shape. Hence, we choose two hidden layers in our design, which suffice to estimate the aforementioned function, as demonstrated by our experimental results in Section 5.
The width of a layer is the number of units (nodes) in that layer. Determining the width is largely an empirical task, and there are many rule-of-thumb methods used by practitioners. For instance, [4] suggests that the width of a hidden layer be two-thirds the width of the previous layer plus the width of the next layer. Inspired by this method, we propose a measure called output gain, defined as the summation of the information gain of the inputs of a node, and determine the hidden-layer widths by Eq. 3. The width of the output layer is set to 1 because the network has only a single output, which is the trust score (probability of being trustworthy). The width of the input layer is set to the total number of input nodes.
(3) 
4.3 Activation and loss functions
Let us recall the activation functions from Eq. 1 in Section 4.1. Since we choose two hidden layers as explained in Section 4.2, we need to specify three activation functions, for the first hidden layer, the second hidden layer, and the output layer, respectively.
For the output layer, we choose the sigmoid logistic function because we aim to output a trust score (the probability that the outcome of interacting with a certain agent is positive for a given context). For the hidden layers, we choose the rectified linear unit (ReLU) [7], because the focus of the hidden layers is to exploit the compositional hierarchy of the inputs to compose higher-level (combinatorial) features so that data become more separable in the next layer, and hence the speed of convergence is a main consideration.
The weights in the BNN are computed using gradient-descent backpropagation during the training process. However, the sigmoid activation function we choose has a saturation effect which results in small gradients, while gradients need to be large enough for weight updates to be propagated back into all the layers. Hence, we use cross-entropy as the loss function to mitigate the saturation effect of the sigmoid function. Specifically, the log component in the loss function counteracts the exp component in the sigmoid activation function, thereby enabling the gradient-based learning process.
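The cancellation described above can be checked numerically. The sketch below (plain Python, illustrative values) verifies that the gradient of the cross-entropy loss with respect to a sigmoid unit's pre-activation reduces to the simple residual, so it stays large even deep in the saturated region.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, y_hat, eps=1e-12):
    """Binary cross-entropy loss for one sample."""
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# With cross-entropy on a sigmoid output, dL/dz simplifies to (y_hat - y):
# the log in the loss cancels the exp in the sigmoid, so the gradient does
# not vanish even when the sigmoid is saturated and wrong.
z, y = 6.0, 0.0            # strongly saturated, incorrect prediction
y_hat = sigmoid(z)
grad = y_hat - y           # analytic dL/dz

# Cross-check with a central-difference numerical derivative of the
# composed loss L(z) = bce(y, sigmoid(z)).
h = 1e-6
num = (bce(y, sigmoid(z + h)) - bce(y, sigmoid(z - h))) / (2 * h)
```

With a squared-error loss instead, the same gradient would carry an extra factor of sigmoid'(z) and shrink toward zero in this saturated regime.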
4.4 Assembling training data
Having explained the architectural design aspects of our Bernoulli neural network, now we explain its computational aspects.
The output of the neural network is a predicted probability that a target agent is trustworthy (e.g., meets the SLA) in a certain context, which is what the advice-seeking agent tries to find out. The input of the network consists of (1) all the context features and (2) the probabilities predicted by the models shared by the agents from whom advice is sought. In the case that some agents do not share their models, the corresponding input probability is set to 0.5 to represent the absence of any information. Formally, the input from the models to the neural network is given by
(4) 
where the set of models available to the advice-seeking agent is used. More precisely, each input variable (to the input layer) is specified by
(5) 
which also gives the number of input nodes (i.e., the input dimension).
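A minimal sketch of this input assembly, with hypothetical agent names and a trivial stand-in for a shared model:

```python
# Sketch of assembling one BNN input vector: context features first, then
# one trust probability per potential adviser; advisers that did not share
# a model contribute 0.5 (maximum uncertainty, i.e., no information).
def assemble_input(context, advisers, shared_models):
    """context: list of feature values; shared_models: adviser -> callable."""
    probs = []
    for adviser in advisers:
        model = shared_models.get(adviser)
        if model is None:
            probs.append(0.5)            # absent model: no information
        else:
            probs.append(model(context))  # model's predicted trust probability
    return list(context) + probs

advisers = ["judy", "carol", "dave"]
models = {"judy": lambda ctx: 0.9}       # only Judy shared a model
x = assemble_input([0.25, 0.5], advisers, models)
# x = [context..., Judy's prediction, 0.5, 0.5]
```

Note that the input dimension is fixed by the context feature set plus the roster of potential advisers, regardless of how many actually shared a model.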
Thus, we transform the recursive Eq. 3 into a system of linear equations:
(6) 
Solving Eq. 6 yields the widths of all the layers of our neural network:
(7) 
The weights are calculated using gradient-descent backpropagation based on training data. The training data is initialized once using Algorithm 1, updated vertically upon acquiring new firsthand evidence using Algorithm 2, and updated horizontally upon acquiring a new model using Algorithm 3.
In Algorithm 1, the training data, which consists of the inputs given by Eq. 4 and the corresponding labels, is first initialized to empty. Then, the firsthand evidence is iterated over (line 3) to find historical information about the target agent, i.e., the outcome and context of each interaction. This information is then supplied to the shared models (Eq. 4) to obtain the predicted conditional probabilities. The probabilities and the corresponding labels are then added to form the training data (lines 8-12).
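The initialization described above can be sketched as follows; the function and variable names are hypothetical and the adviser models are trivial stand-ins, but the loop structure mirrors the description: for each firsthand interaction, query every adviser model in that interaction's context and pair the resulting probability vector with the observed outcome.

```python
# Hypothetical sketch of Algorithm 1: build the BNN training set from
# firsthand evidence plus the adviser models' predictions.
def init_training_data(firsthand, adviser_models):
    """firsthand: list of (context, outcome); adviser_models: list of callables."""
    X, y = [], []                         # start from an empty training set
    for context, outcome in firsthand:    # iterate over firsthand evidence
        preds = [m(context) for m in adviser_models]
        X.append(list(context) + preds)   # context features + adviser predictions
        y.append(outcome)                 # observed outcome is the label
    return X, y

firsthand = [((0.1,), 1), ((0.9,), 0)]
models = [lambda ctx: 1.0 - ctx[0],       # stand-in adviser model
          lambda ctx: 0.5]                # uninformative adviser model
X, y = init_training_data(firsthand, models)
```

Vertical updates (Algorithm 2) would append new rows to `X` and `y`; horizontal updates (Algorithm 3) would append a new prediction column per newly received model.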
After initialization, all the subsequent updates are performed using Algorithm 2 and 3, where Algorithm 2 is executed when a new firsthand evidence is available at and Algorithm 3 is executed when receives a new model from a new advisor agent or an updated model from an existing advisor agent.
Proposition 1.
The time complexity of Algorithm 1 is .
The training and retraining of the neural network using the above training dataset can either be performed by the agent itself or outsourced to fog computing [18]. The same applies to the storage of the neural network.
5 Evaluation
We evaluate COBRA using both experiments and simulations.
5.1 Experiment setup
Dataset. We use a public dataset obtained from [22] which contains the response-time values of web services invoked by service users over time slices. The dataset contains records of data in total, which translates to a data sparsity of . Following [11], we assume a standard SLA which specifies that 1 second is the limit that keeps a user's flow of thought uninterrupted. Hence, a response time above 1 second is considered a violation of the SLA and assigned a False label, while a response time below or equal to 1 second is assigned a True label which indicates that the SLA is met.
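The labeling rule can be stated as a one-line predicate (a sketch; the 1-second threshold follows the SLA described above):

```python
# SLA labeling used in the experiments: response time above 1 second
# violates the SLA (False); at or below 1 second meets it (True).
SLA_LIMIT_SECONDS = 1.0

def sla_met(response_time):
    return response_time <= SLA_LIMIT_SECONDS

labels = [sla_met(t) for t in [0.2, 1.0, 1.7, 3.5]]
```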
Platform. All measurements are conducted using the same Linux workstation with 12 CPU cores and 32GB of RAM. The functional API of Keras is used for the implementation of the neural network architectures on top of TensorFlow backend while scikitlearn is used for the implementation of Gaussian process, decision tree, and Gaussian Naive Bayes models.
Benchmark methods. We use the following benchmarks for comparison:

Trust and Reputation using Hierarchical Bayesian Modelling (HABIT): This probabilistic trust model, proposed by [17], uses Bayesian modelling in a hierarchical fashion to infer the probability of trustworthiness based on direct and third-party information, and has been shown to outperform other existing probabilistic trust models.

Trust Management in Social IoT (TMSIoT): This model is proposed by [12], in which the trustworthiness of a service provider is a weighted sum of a node’s own experience and the opinions of other nodes that have interacted with the service provider.

Beta Reputation System (BRS): This wellknown model as proposed by [6] uses the beta probability density function to combine feedback from various agents to calculate a trust score.
Evaluation metrics. We employ two commonly used metrics. One is the accuracy, defined as
Accuracy = (TP + TN) / (TP + FP + TN + FN)
where TP = True Positive, FP = False Positive, TN = True Negative, and FN = False Negative. The other metric is the root mean squared error (RMSE), defined as
RMSE = sqrt( (1/N) * Σ_i (t_i − p_i)² )
where t_i is the ground-truth trustworthiness, p_i is the predicted probability of trustworthiness, and N is the total number of predictions.
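Both metrics are straightforward to compute; a minimal sketch:

```python
import math

def accuracy(tp, fp, tn, fn):
    """Fraction of predictions that are correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def rmse(truth, preds):
    """Root mean squared error between ground truth and predicted probabilities."""
    n = len(truth)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(truth, preds)) / n)
```

For example, a confusion matrix with 40 true positives, 10 false positives, 45 true negatives, and 5 false negatives gives an accuracy of 0.85.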
5.2 Experiment procedure and results
We run COBRA for each of the 142 web-service clients to predict whether a web-service provider can be trusted to meet the SLA, given a context which is the time slice during which the service was consumed. We experiment on random samples of the dataset due to two main considerations: (1) COBRA is a multi-agent approach, but in the experiment we build all the models and BNNs on one machine; (2) the significantly high time and space complexity of the Gaussian process used in HABIT restricts us to working with a sample of the dataset. We employ k-fold cross validation and compare the performance of COBRA with the benchmark methods described in Section 5.1. In COBRA-DT, a decision tree is used for model encapsulation for all 142 agents; in COBRA-GNB, Gaussian Naive Bayes is used for the encapsulation for all 142 agents; and in a hybrid approach, COBRA-Hyb, a decision tree is used for 71 randomly selected agents while Gaussian Naive Bayes is used for the rest. In HABIT, the reputation model is instantiated using a Gaussian process with a combination of dot-product and white-kernel covariance functions. In COBRA-DT/GNB/Hyb-B, our proposed neural network architecture from Section 4 (BNN) is used, while in COBRA-DT/GNB/Hyb-D, a fully connected feedforward architecture (Dense) is used instead.
The results, as illustrated in Fig. 2 and Fig. 2, indicate that all the versions of COBRA with the Bernoulli neural engine outperform the benchmark methods, while without our proposed Bernoulli neural architecture, HABIT is competitive with the Dense version of COBRA-GNB. The choice of the encapsulation model only slightly affects the performance in hybrid mode, which suggests that the performance of COBRA is stable.
Furthermore, we present the moving averages of the prediction time and training time for the BNN versions of COBRA compared to the Dense versions in Fig. 2 and Fig. 2, respectively. The results indicate that our proposed BNN architecture significantly reduces the time required for training and making predictions.
Moreover, as illustrated in Fig. 2, the divergence between training accuracy and validation accuracy of BNN is significantly smaller than that of Dense. Similarly, Fig. 2 depicts a smaller divergence between training loss and validation loss of BNN compared to that of Dense. These results indicate that Dense is more prone to overfitting as the epochs increase.
5.3 Simulation setup
For a more extensive evaluation of COBRA, especially with respect to extreme scenarios which may not be observed often in the real world, we also conduct simulations.
We simulate a multi-agent system with 51 malicious agents and 49 legitimate agents, in consideration of the 51-percent attack. The attack model used for the malicious agents consists of fake and misleading testimonies, which is a common attack in TRM systems. Specifically, a model shared by a malicious agent provides the opposite prediction of the trustworthiness of a target agent, i.e., it outputs 1 − p when the model would predict p if it were not malicious.
We treat the probability that an arbitrary agent interacts with an arbitrary target agent as a random variable following a beta distribution. We run simulations, each with a different parameterization of this distribution. For example, when both shape parameters are small, one group of agents interacts with the target agent frequently while another group seldom interacts with the target agent; when both parameters are large and equal, most of the agents have a half chance to interact with the target agent; when the first parameter dominates, most of the agents interact with the target agent frequently; and when the second parameter dominates, most of the agents seldom interact with the target agent.
We use 4 synthesized context features randomly distributed in a fixed range, and generate different target agents that violate the SLA with a probability following a normal distribution conditioned on each context feature.
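A sketch of drawing the per-agent interaction probabilities from a beta distribution; the parameter values and agent count here are illustrative, not the exact settings used in the simulations.

```python
import random

def interaction_probs(alpha, beta, n_agents, seed=0):
    """Draw each agent's probability of interacting with the target agent
    from a Beta(alpha, beta) distribution (deterministic via seed)."""
    rng = random.Random(seed)
    return [rng.betavariate(alpha, beta) for _ in range(n_agents)]

# Beta(0.5, 0.5): polarized population, agents either interact often or rarely.
polarized = interaction_probs(0.5, 0.5, 100)
# Beta(5, 5): most agents have roughly a half chance of interacting.
balanced = interaction_probs(5, 5, 100)
```

Sweeping the two shape parameters over a grid yields the family of interaction-frequency distributions described above.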
5.4 Simulation results
The simulation results are shown in Fig. 3, where the key observations are:

COBRA is able to predict accurate trust scores (probability of being trustworthy) for the majority of the cases. In particular, in 90 out of the 100 simulated distributions, an accuracy greater than or equal to 85% is achieved.

It is crucial to note that these results are achieved when 51% of the agents are malicious. This shows that COBRA is resistant to the 51-percent attack.
6 Conclusion
This paper proposes COBRA, a context-aware trust assessment framework for large-scale online environments (e.g., MAS and IoT) without a trusted intermediary. The main issue it addresses is the accuracy-privacy dilemma. Specifically, COBRA uses model encapsulation to preserve privacy that could otherwise be exposed by secondhand evidence, and in the meantime to retain context information as well. It then uses our proposed Bernoulli neural network (BNN) to aggregate the encapsulated models and firsthand evidence to make an accurate prediction of the trustworthiness of a target agent. Our experiments and simulations demonstrate that COBRA achieves higher prediction accuracy than state-of-the-art TRM systems, and is robust to the 51-percent attack in which the majority of agents are malicious. It is also shown that the proposed BNN trains much faster than a standard fully-connected feedforward neural network, and is less prone to overfitting.
7 Acknowledgments
This work is partially supported by the MOE AcRF Tier 1 funding (M4011894.020) awarded to Dr. Jie Zhang.
Footnotes
 Throughout this paper, we use the term agents in a broader sense which is not limited to agents in MAS, but also includes IoT service providers and consumers as well as other similar cases.
 Recommender systems can take this approach because they are generally considered trusted intermediaries, and they focus on preference modeling rather than trust and reputation modelling.
References
 (2015) Multifaceted trust and distrust prediction for recommender systems. Decision Support Systems 71, pp. 37–47. Cited by: §2.
 (2007) Dynamically learning sources of trust information: experience vs. reputation. In AAMAS, pp. 164. Cited by: §2.
 (2018) Recommender systems trend analysis from 2004 to present. Note: https://trends.google.com/trends/ (Accessed on 6 Nov 2018). Cited by: §2.
 (2008) Introduction to neural networks with java. Heaton Research, Inc.. Cited by: §4.2, §4.2.
 (2007) Dirichlet reputation systems. In Proceedings of the Second International Conference on Availability, Reliability and Security (ARES), pp. 112–119. Cited by: §2.
 (2002) The beta reputation system. In Proceedings of the 15th bled electronic commerce conference, Vol. 5, pp. 2502–2511. Cited by: §1, §2, §2, 3rd item.
 (2015) Deep learning. Nature 521 (7553), pp. 436. Cited by: §4.3.
 (2012) Modeling context aware dynamic trust using hidden markov model.. In AAAI, pp. 1938–1944. Cited by: §2, §2.
 (2014) Deep learning with noise. http://www.andrew.cmu.edu/user/fanyang1/deeplearningwithnoise.pdf. Cited by: §4.
 (2010) Assessing the sensitivity of the artificial neural network to experimental noise: a case study. FME Transactions 38 (4), pp. 189–195. Cited by: §4.
 (1994) Usability engineering. Elsevier. Cited by: §5.1.
 (2014) Trustworthiness management in the social internet of things. IEEE Transactions on Knowledge and Data Engineering 26 (5), pp. 1253–1266. External Links: Document, ISSN 1041-4347. Cited by: 2nd item.
 (2011) Multilayer cognitive filtering by behavioral modeling. In AAMAS, pp. 871–878. Cited by: §2, §2.
 (2006) Bayesian reputation modeling in emarketplaces sensitive to subjectivity, deception and change. In AAAI, Vol. 21, pp. 1206. Cited by: §2, §2.
 (2001) Regret: a reputation model for gregarious societies. In Proceedings of the fourth workshop on deception fraud and trust in agent societies, Vol. 70, pp. 61–69. Cited by: §2, §2.
 (2011) Trust as dependence: a logical approach. In AAMAS, pp. 863–870. Cited by: §2, §2.
 (2012) An efficient and versatile approach to trust and reputation using hierarchical bayesian modelling. Artificial Intelligence 193, pp. 149–185. Cited by: §2, §2, 1st item.
 (2019) All one needs to know about fog computing and related edge computing paradigms: a complete survey. Journal of Systems Architecture. Cited by: §4.4.
 (2013) A survey of multiagent trust management systems. IEEE Access 1, pp. 35–50. Cited by: §1, §2, §2.
 (2008) Evaluating the trustworthiness of advice about seller agents in emarketplaces: a personalized approach. Electronic Commerce Research and Applications 7 (3), pp. 330–340. Cited by: §2.
 (2010) Computational ecology: artificial neural networks and their applications. World Scientific. Cited by: §4.
 (2014) Investigating QoS of real-world web services. IEEE Trans. Serv. Comput. 7 (1), pp. 32–39. External Links: ISSN 1939-1374, Link, Document. Cited by: §5.1.
 (2015) A priori trust inference with contextaware stereotypical deep learning. KnowledgeBased Systems 88, pp. 97–106. Cited by: §2, §2.