A mechanism for detecting dishonest recommendation in indirect trust computation

Abstract

Indirect trust computation based on recommendations forms an important component of trust-based access control models for pervasive environments. It can give a service provider the confidence to interact with unknown service requesters. However, recommendation-based indirect trust computation is vulnerable to various types of attacks. This paper proposes a defense mechanism for filtering out dishonest recommendations based on a measure of dissimilarity computed over subsets of the recommendation set. The subset of recommendations with the highest measure of dissimilarity is considered the set of dishonest recommendations. To analyze the effectiveness of the proposed approach, we have simulated the three attack scenarios inherent to recommendation models (bad mouthing, ballot stuffing, and random opinion attacks). The simulation results show that the proposed approach can effectively filter out dishonest recommendations based on the majority rule. A comparison between existing schemes and our proposed approach is also given.

Introduction

The rapid development of collaborative, dynamic, and open environments has increased awareness of security issues. It is becoming widely acknowledged that traditional security measures fail to provide the necessary flexibility for interactions between known and unknown entities in an uncertain environment because their security policies and capabilities are statically defined[1]. Much research on trust-based access control models for pervasive environments has been carried out[2–6]; these models use trust as an elementary criterion for authorizing known, partially known, and unknown entities to interact with each other. Indirect trust computation holds key importance in trust-based access control models. When the service provider has no personal experience with the requesting entity from which to compute direct trust, indirect trust computation is used as a way to evaluate and distribute trust[1]. The basis for indirect trust computation is seeking recommendations that provide further information about the trustworthiness of the unfamiliar service requester. The service provider requests recommendations, with respect to the entity in question, from peer services. If peer services provide honest recommendations, the service provider can accurately determine the trustworthiness of an unknown service requester. This gives the service provider the confidence to interact with an unknown service requester[2].

However, relying on peer services for recommendations about an unfamiliar service requester can lead to erroneous decisions if the recommenders provide recommendations that deviate from their actual experience. Recommenders can deliberately provide dishonest recommendations either to elevate the trust values of malicious entities or to lessen the trust values of honest entities. If these recommendations are aggregated blindly, without filtering out the false ones, they can skew the evaluation of an entity's trustworthiness. Therefore, a mechanism to limit the influence of dishonest recommendations from malicious recommenders is a fundamental requirement for trust models.

Consider the following scenarios, which show the importance of recommendation models in a pervasive environment, as well as the need for a mechanism to filter dishonest recommendations in such models:

Scenario 1. Bob is an employee of a Paris-based multinational company and, due to official commitments, travels frequently between France and the USA. Bob has just arrived at Los Angeles Airport on a business trip and is having lunch at a cafe when he receives a call from his employer to reach Ohio by night to attend an important meeting the next day. Incidentally, Bob's travel agent is not available. However, Prime Travels has a seat available to Ohio on their next flight. Using his smart phone, Bob registers himself with the booking service of Prime Travels and generates a request for a reservation. Since Bob has never made a reservation with the registration service of Prime Travels before, it broadcasts a recommendation request to the services offered at the airport to ascertain Bob's trustworthiness. It receives recommendations from different services Bob has used during his transits at Los Angeles Airport, including payment to a cafe, internet access, a money transfer to a money exchange service, and reservations made through the registration services of other travel agents. A few travel agents, competitors of Prime Travels, intentionally respond with bad recommendations. Prime Travels requires a mechanism to separate these dishonest recommendations from the honest ones to ascertain the trustworthiness of Bob for its decision making.

Scenario 2. Alice is a frequent visitor of the H&M shopping mall near her work place. After office hours, while Alice is visiting the shopping mall, she receives a call from her colleague telling her that she has forgotten to mail an important document to one of her customers. Alice needs internet access on her smart phone to mail the document. She searches for available internet service providers in the mall and forwards a request to an available wireless hotspot identified as MegaIT to allow internet access on her device. Since she has never used the service before, MegaIT broadcasts a message to different services available in H&M to ask for recommendations. Since Alice has been a frequent visitor with a history of interactions with other shopping, salon, and dining services in the mall, these service providers give recommendations to MegaIT. In order to grant access, MegaIT requires some mechanism to determine which recommendations it should use to determine the trustworthiness of Alice.

There are three possible types of malicious recommendation[7]. Bad mouthing (BM) recommendations are malicious recommendations that cause the evaluated trustworthiness of an entity to decrease, ballot stuffing (BS) recommendations cause the evaluated trustworthiness of the entity to increase, and random opinion (RO) recommendations are those in which a recommender gives recommendations randomly opposite to the true behavior of the entity in question. In this paper, we propose a new mechanism to prevent dishonest nodes from influencing indirect trust computation. The proposed mechanism (an extension of[8]) is based on the assumption that a dishonest recommendation is one that is inconsistent with other recommendations and has a low probability of occurrence in the recommendation set. Based on this assumption, a new dissimilarity function for detecting deviations in a recommendation set is defined. An extensive comparison between the proposed and existing techniques is also provided to demonstrate the effectiveness of the proposed mechanism.

Related work

The dynamism of the pervasive computing environment allows ad hoc interaction of autonomous entities that are unfamiliar and possibly hostile. In such environments, where service providers have no personal experience with unknown service requesters, trust and recommendation models are used to evaluate the trustworthiness of unfamiliar entities. Recently, research in designing defense mechanisms to detect dishonest recommendations in these open distributed environments has been carried out[9–26]. The defense mechanisms against dishonest recommendations have been grouped into two broad categories, namely the exogenous method and the endogenous method[9]. The approaches that fall under the exogenous method use external factors alongside the recommendations themselves (such as the reputation or credibility of the recommender) to decide the trustworthiness of a given recommendation. These approaches assume that only highly reputed recommenders give honest recommendations and vice versa. In[10], Xiong and Liu presented an approach (PeerTrust) that avoids aggregation of individual interactions. Their model computes the trustworthiness of a given peer based on community feedback about the participant's past behavior. The credibility of a feedback source is computed using a function of its trust value. The model also incorporates a personalized similarity measure, computed from experiences with common partners, to account for rating discrepancies. Chen et al.[11] distinguish between recommendations by computing a reputation for each recommender, measured on the basis of the quality and quantity of the recommendations it provides. The recommender's reputation is then used as a weight when aggregating the recommendations of all recommenders. However, the model does not consider the service type on which a recommender's recommendation is based. Malik and Bouguettaya[12] also proposed using rater credibility for recommendation assessment; their model likewise assumes that only highly reputed recommenders give honest recommendations. These models use external information sources to gather the reputations of recommenders. Ganeriwal et al.[13] hold that the weight of a node's recommendations about others depends on its own reputation for providing services; in other words, if it provides a reliable service, then the recommendations it provides are also considered reliable. In[14], the global reputation of a node is aggregated from local trust scores weighted by the global reputation scores of all senders. Because these models assume that entities with high reputation provide honest recommendations, they are vulnerable to attack: a smart attacker may behave well for a while to build a high reputation and then provide dishonest recommendations that cannot be detected by reputation-based schemes[15]. That is, a recommender can build reputation with different expectations and intentions, and the recommendations it provides can differ from its experience.

Recently, models for online communities have proposed using the social connections of the recommender as an additional source of information in the recommender system. They hold that people trust the peers with whom they are socially connected and use their recommendations. The main idea behind this approach is that users tend to connect to users with similar preferences, so trusting the opinion of others is based on the social link between the two entities.
In[16], the authors presented the correlation between trust and social networks by establishing a community-based rating system for movies. Their experiments demonstrated that social trust can evaluate similarity in a more distinctive way when the ratings are extreme and differ widely. In[17], the authors modeled a social network as a directed graph and evaluated recommendations based on the position and interconnections of users, represented as actors in the graph. The model employs social network analysis metrics, including centrality and rank prestige, to identify the influence of actors in the social network. In[18], a framework for building a recommendation system by identifying a group of experts in a social network is proposed. The model recommends experts with appropriate knowledge based on the information desired by the user, and the authors demonstrate the efficiency of the approach by applying the model to a research community. In[19], the authors present a probabilistic matrix factorization approach for the recommender system. The model applies the opinions of trusted friends in a social network to gather recommendations, on the premise that the recommendations of a user's friends have an impact on the user's preferences. All these models[16–19] assume that social relationships exist between users in the system and affect the evaluation of recommendation trustworthiness. However, in open spaces comprising multiple devices (pervasive environments), devices in close physical proximity form ad hoc networks for spontaneous service access[20]. In such an open, dynamic environment, where devices are continuously leaving and joining the network, it is difficult to rely on formal social relationships.

In the endogenous method, the recommendation seeker has no personal experience with the entity in question and relies only on the recommendations provided by the recommenders to detect dishonest ones. The method assumes that dishonest recommendations have statistical patterns different from those of honest recommendations. Therefore, in this method, filtering of dishonest recommendations is based on analyzing and comparing the recommendations themselves. In trust models where indirect trust based on recommendations is used only once to allow a stranger entity to interact, the endogenous method based on the majority rule is commonly used. Dellarocas[21] proposed an approach based on controlled anonymity to separate unfairly high ratings from fair ratings; this approach is unable to handle unfairly low ratings[22]. In[23], a filtering algorithm based on the beta distribution is proposed to determine whether each recommendation R_i falls between the q (lower) quartile and the (1 − q) (upper) quartile. Whenever a recommendation does not lie between the lower and upper quartiles, it is considered malicious and is excluded. The technique assumes that recommendations follow a beta distribution and is effective only when the number of recommendations is large. Weng et al.[24] proposed a filtering mechanism based on entropy. The basic idea is that if a recommendation is too different from the majority opinion, it could be unfair. The approach is similar to other reputation-based models except that it uses entropy to differentiate between recommendations. A context-specific and reputation-based trust model for pervasive computing environments was proposed in[25] to detect malicious recommendations based on the control chart method. The control chart method uses the mean and standard deviation to calculate the lower confidence limit (LCL) and upper confidence limit (UCL). Recommendation values that lie outside the interval defined by the LCL and UCL are assumed to be malicious and are therefore discarded from the set of valid recommendations. The method assumes that a metrical distance exists between valid and invalid recommendations; as a result, its rate of false positive and false negative filtering decisions is high. Deno et al.[26] proposed an iterative filtering method for detecting malicious recommendations. In this model[26], an average trust value (T_avg) of all the received recommendations (T_Ri) is calculated.

The inequality |T_avg(B) − T_Ri(B)| > S is then evaluated, where B is the entity for which recommendations are collected from recommenders R_i and S is a predefined threshold in the interval [0, 1]. If the inequality holds for a recommendation, that recommendation is considered false and is filtered out. The method is repeated until all false recommendations are filtered out. The effectiveness of this approach depends on choosing a suitable value for S. All of these detection mechanisms can be easily bypassed if a relatively small bias is introduced in the dishonest recommendations.
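For concreteness, here is a minimal sketch of this iterative filtering idea; the function name and the default threshold value are our own choices, not those of[26].

```python
def iterative_filter(recommendations, s=0.2):
    """Sketch of iterative filtering (after Deno et al.[26]): repeatedly
    drop recommendations farther than threshold s from the current
    average until every remaining recommendation lies within s."""
    accepted = list(recommendations)
    while accepted:
        t_avg = sum(accepted) / len(accepted)
        kept = [r for r in accepted if abs(t_avg - r) <= s]
        if len(kept) == len(accepted):   # fixed point: nothing filtered out
            break
        accepted = kept
    return accepted
```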

Proposed approach

The objective of indirect trust computation is to determine the trustworthiness of an unfamiliar service requester from a set of recommendations, narrowing the gap between the derived recommendation value and the actual trustworthiness of the entity in question. In our approach, a dishonest recommendation is defined as an outlier: a recommendation that appears inconsistent with the other recommendations and has a low probability of originating from the same statistical distribution as the rest of the data set. The importance of detecting outliers in data has long been recognized in the fields of databases and data mining. The deviation-based approach to outlier detection was first proposed in[27], in which the exact exception problem was discussed. In[8], the authors presented a new method for deviation-based outlier detection in large databases, locating outliers by a dynamic programming method. In this paper, we extend this outlier detection technique to filter out dishonest recommendations. Our approach (Algorithm 1) is based on the premise that if a recommendation is far from the median value of a given recommendation set and has a low frequency of occurrence, it is filtered out as a dishonest recommendation. Suppose that an entity X requests access to service A. If service A has no previous interaction history with X, it will broadcast a request for recommendations with respect to X. Let R denote the set of recommendations collected from the recommenders:

R = {r_1, r_2, r_3, …, r_n}

where n is the total number of recommendations. Since smart attackers can give recommendations with only a small bias in order to go undetected, we divide the range of possible recommendation values into b intervals (or bins). These bins define which recommendations we consider similar to each other, such that all recommendations that lie in the same bin are considered alike. The choice of b has an impact on the detection rate: if the bins are too wide, honest recommendations might get filtered out as dishonest; if they are too narrow, some dishonest recommendations may appear to be honest, and vice versa. In this paper, we have tuned b = 10 such that R_c1 comprises all recommendations in the interval [0, 0.1], R_c2 comprises all recommendations in the interval [0.1, 0.2], and so on for R_c3, …, R_c10. After grouping the recommendations into their respective bins, we compute a histogram that gives the count f_i of the recommendations falling in each bin. Let H be the histogram of the set of recommendation classes, where

H(R) = {⟨R_c1, f_1⟩, ⟨R_c2, f_2⟩, ⟨R_c3, f_3⟩, ⟨R_c4, f_4⟩, ⟨R_c5, f_5⟩, ⟨R_c6, f_6⟩, ⟨R_c7, f_7⟩, ⟨R_c8, f_8⟩, ⟨R_c9, f_9⟩, ⟨R_c10, f_10⟩}

where f_i is the total number of recommendations falling in R_ci. From this histogram H(R), we remove all the recommendation classes with zero frequency and obtain the domain set (R_domain) and the frequency set (f):

R_domain = {R_c1, R_c2, R_c3, …, R_c10}, f = {f_1, f_2, f_3, …, f_10}.
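As a concrete illustration of the binning step, the following sketch groups a recommendation set into b classes and drops the empty ones; the function name is our own, and representing each class by its bin midpoint is an assumption carried through the later sketches.

```python
from collections import Counter

def build_domain(recommendations, b=10):
    """Bin recommendations from [0, 1] into b equal-width classes and
    return a dict mapping each non-empty class's representative value
    (the bin midpoint, our convention) to its frequency f_i."""
    counts = Counter(min(int(r * b), b - 1) for r in recommendations)
    return {(k + 0.5) / b: n for k, n in sorted(counts.items())}
```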

Algorithm 1 Recommendation

Definition 1

The dissimilarity function DF(x i ) is defined as

DF(x_i) = |x_i − median(x)|² / f_i
(1)

where x_i is a recommendation class from a recommendation set x and f_i is its frequency of occurrence.

Under the proposed approach, the dissimilarity value of x_i depends on the square of its absolute deviation from the median, i.e., |x_i − median(x)|². The median is used to detect deviation because it is resistant to outliers: the presence of outliers does not change the value of the median. In Equation 1, the square of the absolute deviation from the median is taken to amplify the impact of extremes, i.e., the farther the recommendation value x_i is from the median, the larger the squared deviation. Moreover, the dissimilarity value of x_i is inversely proportional to its frequency, which is why |x_i − median(x)|² is divided by the frequency f_i. In this way, if a recommendation is very far from the rest of the recommendations and its frequency of occurrence is low, Equation 1 returns a high value. Similarly, if a recommendation is close to the rest of the recommendations and its frequency of occurrence is high, Equation 1 returns a low value.
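A direct transcription of Equation 1 might look as follows; taking the median over the full recommendation set (rather than over the distinct class values) is our reading of median(x).

```python
def df(x_i, f_i, median_x):
    """Equation 1: squared absolute deviation of the class value x_i
    from the median of the recommendation set, divided by f_i."""
    return abs(x_i - median_x) ** 2 / f_i
```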

For each R_ci, a dissimilarity value is computed using Equation 1 to represent its dissimilarity from the rest of the recommendations with regard to its frequency of occurrence. All the recommendation classes in R_domain are then sorted in descending order of their dissimilarity value DF(R_ci). The recommendation class at the top of the sorted R_domain is considered the most suspicious candidate to be filtered out as a dishonest recommendation. Once R_domain is sorted, the next step is to determine the set of dishonest recommendation classes within it. To help find this set, Arning et al.[27] defined a measure called the smoothing factor (SF).

Definition 2

The SF for each suspicious subset SRdomain_j is computed as

SF(SRdomain_j) = C(R_domain − SRdomain_j) × (DF(R_domain) − DF(SRdomain_j))
(2)

where j = 1, 2, 3, …, m, and m is the total number of distinct elements in R_domain. C is the cardinality function, taken as the total frequency of the elements in the set {R_domain − SRdomain_j}. The SF indicates how much the dissimilarity can be reduced by removing a suspicious set of recommendation classes (SRdomain_j) from R_domain.
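A sketch of the smoothing factor follows. Two reading choices here are ours: DF of a set is treated as the sum of its members' DF values, and the second factor of Equation 2 is taken as the reduction in total dissimilarity achieved by removing SRdomain_j, following the original smoothing factor of Arning et al.[27].

```python
def sf(subset, domain, freq, median_x):
    """Equation 2 (one reading): total frequency of the remaining
    classes times the drop in overall dissimilarity obtained by
    removing the suspicious subset from the domain."""
    rest_cardinality = sum(freq[x] for x in domain if x not in subset)
    reduction = sum(df(x, freq[x], median_x) for x in subset)
    return rest_cardinality * reduction
```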

Definition 3

The dishonest recommendation domain (R_domain_dishonest) is the subset of R_domain that contributes most to the dissimilarity of R_domain while containing the least number of recommendations, i.e., R_domain_dishonest ⊆ R_domain. We say that SRdomain_x is the set of dishonest recommendation classes with respect to R_domain, C, and DF if

SF(SRdomain_x) ≥ SF(SRdomain_j), x, j ∈ {1, …, m}

for all suspicious subsets SRdomain_j.

In order to find the set of dishonest recommendations R_domain_dishonest within R_domain, the proposed approach proceeds as follows:

  •  Let R_ck be the kth recommendation class of the sorted R_domain and SRdomain be a set of suspicious recommendation classes from R_domain, i.e., SRdomain ⊆ R_domain.

  •  Initially, SRdomain is the empty set, SRdomain_0 = {}.

  •  Compute SF(SRdomain_k) for each SRdomain_k formed by taking the union of SRdomain_{k−1} and R_ck:

    SRdomain_k = SRdomain_{k−1} ∪ R_ck
    (3)

    where k = 1, 2, 3, …, m − 1, and m is the number of distinct recommendation class values in the sorted R_domain.

  •  The subset SRdomain_k with the largest SF(SRdomain_k) is considered the set containing the dishonest recommendation classes.

  •  If two or more subsets SRdomain_k share the largest SF, the one with the minimum total frequency is taken as the set containing the dishonest recommendation classes.

After detecting the set R_domain_dishonest, we remove all recommendations that fall under the dishonest recommendation classes.
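Putting the pieces together, a minimal end-to-end sketch of the procedure, under the same assumptions as the sketches above:

```python
import statistics

def filter_dishonest(recommendations, b=10):
    """Bin the recommendations, rank classes by DF, grow suspicious
    subsets in descending-DF order, and drop the subset with the
    largest smoothing factor."""
    classes = build_domain(recommendations, b)        # {class value: f_i}
    median_x = statistics.median(recommendations)
    domain = sorted(classes,
                    key=lambda x: df(x, classes[x], median_x),
                    reverse=True)                     # most dissimilar first
    best_sf, dishonest, subset = float("-inf"), [], []
    for c in domain[:-1]:                             # k = 1, ..., m - 1
        subset.append(c)                              # SRdomain_k
        score = sf(subset, domain, classes, median_x)
        if score > best_sf:                           # strict '>' keeps the
            best_sf, dishonest = score, list(subset)  # smaller subset on ties
    flagged = set(dishonest)
    def class_of(r):                                  # same binning as build_domain
        return (min(int(r * b), b - 1) + 0.5) / b
    return [r for r in recommendations if class_of(r) not in flagged]
```

The strict comparison implements the tie-breaking rule: since each SRdomain_k is a superset of SRdomain_{k−1}, the earlier of two equal-SF subsets always has the smaller total frequency.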

An illustrative example

To illustrate how our deviation detection mechanism filters out unfair recommendations, this section walks through each step of the proposed approach. Let X be a service requester who has no prior experience with service provider A. In order to determine the trustworthiness of X, A requests recommendations from its peer services that have previously interacted with X. Let R = {r_1, r_2, r_3, …, r_n} be the set of recommendations received from n = 122 recommenders for service requester X. After the recommendations are received, they are grouped into their respective bins; Table 1 shows how the received recommendations are grouped into their respective classes. After arranging the recommendations in their respective recommendation classes R_ci, we remove the recommendation classes with zero frequency and calculate DF(R_ci) for each recommendation class using Equation 1. Table 2 shows the list of recommendation classes sorted by their dissimilarity value.

Table 1 Frequency distribution of recommendations
Table 2 Recommendation classes sorted with respect to their DF

In Table 2, the recommendation class R_c5 has the highest dissimilarity value, so it is taken as a suspicious recommendation class, added to the suspicious recommendation domain (SRdomain1), and its SF is calculated. Next, we take the union of SRdomain1 and the next recommendation class in the sorted list, i.e., R_c4, and calculate the SF of the result using Equation 2. This process is repeated for each R_ci of R_domain until SRdomain = R_domain − R_cm, where m = 5.

Table 3 shows that the SF of SRdomain2 has the highest value. Therefore, the recommendation classes {0.8, 0.9} in SRdomain2 are considered dishonest recommendation classes and are removed from R_domain.

Table 3 Smoothing factor computation
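To see the sketches above working end to end, consider a hypothetical run; the numbers below are invented for illustration and are not the values of Tables 1, 2, and 3.

```python
# 100 honest recommendations near the median 0.55, plus 22
# ballot-stuffed recommendations in the 0.85 and 0.95 classes
honest = [0.45] * 60 + [0.55] * 40
dishonest = [0.85] * 10 + [0.95] * 12
kept = filter_dishonest(honest + dishonest)
print(len(kept))   # 100 -- both stuffed classes are filtered out
```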

Performance evaluation of the proposed approach

In this section, we evaluate our model in a simulated multi-agent environment. We carry out different sets of experiments to demonstrate the effectiveness of the proposed model against different attack scenarios (BM, BS, and RO attacks). The results indicate that the model is able to respond to all three types of attack when the percentage of malicious recommenders is varied from 10% to 40%. We have also studied the performance of the model by varying the offset that malicious recommenders introduce in their recommended trust values. We observed that the performance of the model decreases only when the percentage of malicious recommenders is above 30% and the mean offset between the honest and dishonest recommendations is at its minimum (0.2).

Experimental setup

We simulate a multi-agent environment using AnyLogic 6.4, in which agents (offering and requesting services) are continuously joining and leaving the environment. The agents are categorized into two groups: agents offering services, called service provider agents (SPA), and agents consuming services, called service requesting agents (SRA). We conduct a series of experiments in which a new SPA evaluates the trustworthiness of an unknown SRA by requesting recommendations from other SPAs in the environment. All SPAs can also act as recommending agents (RA) for other SPAs. An RA gives recommendations, in the continuous range [0, 1], for a given SRA at the request of an SPA. An RA can be either honest or dishonest depending on the trustworthiness of its recommendations: an honest RA truthfully provides recommendations based on its personal experience, whereas a dishonest RA distorts its true experience into a high, low, or erratic recommendation with malicious intent. The environment is initialized with set numbers of honest and dishonest recommenders (N = 100). The simulation is run in steps, the total number of which is defined by NSTEPS.
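The experiments themselves were run in AnyLogic; purely as a self-contained stand-in, a recommendation set with a controlled fraction of dishonest RAs could be generated along the following lines. The attack ranges follow the descriptions in the next subsections; the noise level and function name are hypothetical.

```python
import random

def make_recommendations(n=100, true_trust=0.7, dishonest_pct=0.2,
                         attack="BM", noise=0.05):
    """Honest RAs report the true trust value plus small noise; dishonest
    RAs report low (BM), high (BS), or randomly opposite (RO) values."""
    n_bad = int(n * dishonest_pct)
    honest = [min(1.0, max(0.0, random.gauss(true_trust, noise)))
              for _ in range(n - n_bad)]
    if attack == "BM":
        bad = [random.uniform(0.0, 0.3) for _ in range(n_bad)]
    elif attack == "BS":
        bad = [random.uniform(0.8, 1.0) for _ in range(n_bad)]
    else:   # RO: randomly very low or very high
        bad = [random.choice((random.uniform(0.1, 0.2),
                              random.uniform(0.8, 1.0)))
               for _ in range(n_bad)]
    return honest + bad
```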

Experiment 1: validation against attacks

To analyze the effectiveness of the proposed approach, the three attack scenarios inherent to recommendation models (bad mouthing, ballot stuffing, and random opinion attacks) have been implemented in the simulation environment defined above.

Bad mouthing attack

A BM attack is one in which the intention of the attacker is to send malicious recommendations that cause the evaluated trustworthiness of an entity to decrease. Suppose that the service provider asks for recommendations regarding an unknown service requester A. In this experiment, we assume that a certain percentage of the recommenders are dishonest and launch a BM attack against A by giving dishonest recommendations. The actual trust value of A is assumed to be 0.7. At the initial step of the simulation, the environment has 10% dishonest RAs, which attempt to launch a bad mouthing attack against A by providing low recommended trust values (in the range [0, 0.3]). To demonstrate the efficacy of the proposed approach, we vary the percentage of dishonest recommenders from 10% to 40%. Figure 1a,b,c,d shows the SF calculated for each SRdomain. In each case, the proposed approach is able to detect the set of bad mouthers giving low recommendations between 0.1 and 0.3. For example, in Figure 1a, when the percentage of dishonest recommenders is 10%, the SRdomains and their respective SF values are as follows:

Figure 1

Detecting attack. (a) BM, 10% dishonest recommender. (b) BM, 20% dishonest recommender. (c) BM, 30% dishonest recommender. (d) BM, 40% dishonest recommender. (e) BS, 10% dishonest recommender. (f) BS, 20% dishonest recommender. (g) BS, 30% dishonest recommender. (h) BS, 40% dishonest recommender. (i) RO, 10% dishonest recommender. (j) RO, 20% dishonest recommender. (k) RO, 30% dishonest recommender. (l) RO, 40% dishonest recommender.

Since the SF of SRdomain5 has the highest value, the recommendation classes {0.1, 0.2, 0.3} are considered dishonest recommendation classes, and the recommendations that belong to these classes are considered dishonest recommendations.

Ballot stuffing attack

A BS attack is one in which the intention of the attacker is to send malicious recommendations that cause the evaluated trustworthiness of an entity to increase. Suppose that the service provider asks for recommendations regarding an unknown service requester B, whose actual trust value is assumed to be 0.3. A certain percentage of the recommenders providing recommendations to the service provider are dishonest and give high recommendation values between 0.8 and 1.0, thus launching a BS attack. We evaluate the proposed approach by varying the percentage of dishonest recommenders from 10% to 40%. Figure 1e,f,g,h shows the SF values for the SRdomains in each case. It is evident from the results that the model is able to detect dishonest recommendations even when the percentage of dishonest recommendations is 40%. From Figure 1h (when the percentage of dishonest recommendations is 40%), the SF values of each SRdomain are as follows:

The proposed approach is able to detect the dishonest recommendations, with SRdomain3 having the highest SF value of 5.47.

Random opinion attack

An RO attack is one in which the malicious recommender gives recommendations randomly opposite to the true behavior of the entity in question. Suppose that the recommenders launch an RO attack while providing recommendations for a service requester C: the dishonest recommenders provide either very low recommendations (0.1 to 0.2) or very high recommendations (0.8 to 1.0). We vary the percentage of dishonest recommenders from 10% to 40% for the experiment. The SF values for the respective SRdomains in each case are shown in Figure 1i,j,k,l. The proposed approach successfully detects the random opinion attack and is able to filter out the dishonest set of recommenders in each case.

Experiment 2: validation against deviation

The detection rate of unfair recommendations under a varying number of malicious recommenders cannot fully describe the performance of the model, as the damage caused by different malicious recommenders can vary greatly depending on the disparity between the true recommendation and the unfair recommendation (i.e., the offset). The offset introduced by the attackers in the recommended trust value is a key factor in introducing deviation into the evaluated trust value of an SRA. We have carried out a set of experiments to observe the impact of different offset values, introduced by different malicious recommenders, on the final trust value. We define the mean offset (MO) as the difference between the mean of the honest recommendations and the mean of the dishonest recommendations. For the experiment, we divide MO into four levels: L1 = 0.2, L2 = 0.4, L3 = 0.6, and L4 = 0.8. It is assumed that the actual trust value of the SRA is 0.2 and that the dishonest recommenders' goal is to boost the recommended trust value of the SRA (a BS attack). The experiment was conducted in four rounds by varying the MO level from L4 to L1 (i.e., from maximum to minimum). In each round, the recommended trust value is computed with different percentages of dishonest recommenders (10%, 20%, 30%, and 40%).

Figure 2 shows the performance of the proposed approach during each round of the experiment. The results in Figure 2a,b show that when the MO level is high (L3 and L4), the proposed approach computes the actual recommended trust value accurately for all percentages of dishonest recommenders. However, as Figure 2c,d shows, when the MO level is low (L1 and L2), the detection rate of the proposed approach deteriorates slightly because the dishonest recommendations are very close to the honest ones. However, it is also observed that even though the detection rate is lower at a small MO between honest and dishonest recommendations, the damage caused by the undetected dishonest recommendations is very low. The largest damage was observed when the percentage of dishonest recommenders is 40% and the mean offset is 0.2 (Figure 2d). In this case, the bias introduced by the undetected dishonest recommendations in the recommended trust value (0.25 − 0.2 = 0.05) is very small and does not have much impact on the final recommended trust value.

Figure 2

Accuracy by varying offset. Mean offset at (a) L4, (b) L3, (c) L2, and (d) L1.

Comparative experiments

In this section, we focus on a comparative analysis of our proposed approach against competing approaches. Since the proposed approach is an extension of[8], we have carried out a set of experiments to demonstrate its improved performance relative to[8], termed the base model. The experimental results substantiate the enhanced capability of the proposed approach to detect dishonest recommendations when varying the MO and the percentage of dishonest recommendations; in contrast, the performance of the base model degrades considerably compared to the proposed approach. In the literature, many approaches have been proposed to evaluate an accurate recommended trust value in the presence of dishonest recommendations. We compare the performance of our proposed approach with those of Ahamed et al.[25], Whitby et al.[23], and Deno et al.[26]. These three models utilize an endogenous approach based on the majority rule to evaluate the recommended trust value and are, therefore, comparable in capability and performance to the proposed approach.

Comparison with the base model

In the last section, it was observed that the MO and the number of dishonest recommenders play a vital role in introducing deviation into the recommended trust value. The efficiency of the proposed approach has been established through a series of experimental results. To further elucidate its performance, a comparative analysis between the proposed approach and the base model[8] was carried out. It has already been established that dishonest recommendations are difficult to detect when either the percentage of dishonest recommenders is high or the MO level is very low. Therefore, in this experiment we simulated two scenarios: (1) the MO level is set very low (L1 = 0.2) and the percentage of dishonest recommenders is varied from 10% to 48%; (2) the MO level is kept very high (L4 = 0.8) while the percentage of dishonest recommenders is varied from 10% to 48%. The experiment was conducted for 50 simulation runs, each with a different randomly generated data set of honest and dishonest recommendations. Figure 3 shows the average detection rate of the proposed approach and the base model observed during each round of the experiment. Figure 3a shows that the proposed approach can accurately detect dishonest recommendations when their percentage is less than 36%; even when the percentage of dishonest recommenders is 48%, the detection rate is higher than 70%. The base model, by contrast, is unable to detect all dishonest recommenders even when the percentage of dishonest recommendations is as low as 10%. Moreover, Figure 3b shows that at a high MO level (L4), the detection rate of the base model falls drastically as the percentage of dishonest recommendations exceeds 28%. In contrast, the performance of the proposed approach remains at 100% as long as the percentage of dishonest recommenders is less than 50%.

Figure 3

Comparison between the base model and proposed model. (a) Mean offset is low. (b) Mean offset is high.

Comparison with existing approaches

To illustrate the effectiveness of the proposed deviation-based approach in detecting dishonest recommendations, we have compared our approach with approaches proposed in the literature based on quartiles[23], control limit charts[25], and iterative filtering[26] for detecting dishonest recommendations in indirect trust computation. A set of experiments has been carried out by applying the approaches to detect dishonest recommendations in two different scenarios. For the first set of experiments, we assume that a certain percentage of the recommenders are dishonest and launch a bad mouthing attack by giving recommendations between 0.1 and 0.3. For the second set of experiments, the dishonest recommenders are assumed to give high recommendation values between 0.8 and 1.0, thus launching a ballot stuffing attack. In both sets of experiments, the percentage of dishonest recommenders is varied from 10% to 45%. For comparison, we have used the Matthews correlation coefficient (MCC) to measure the accuracy of all four approaches in detecting dishonest recommendations[28]. The MCC is a measure of the quality of binary (two-class) classifications that takes into account true and false positives and negatives. The formula used for the MCC calculation is

MCC = ((TP × TN) − (FP × FN)) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives. The MCC returns a value between −1 and 1 (1 means perfect filtering, 0 indicates filtering no better than random, and −1 represents total inverse filtering). To avoid infinite results when calculating the MCC, it is assumed that if any of the four sums in the denominator (TP + FP, TP + FN, TN + FP, and TN + FN) is zero, the denominator is arbitrarily set to one.
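A small helper reflecting this definition, including the zero-denominator convention, is sketched below; it is an illustration, not the authors' evaluation code.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient of the filtering decisions; a
    zero denominator is replaced by one, per the convention above."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / (denom if denom else 1.0)
```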

Figure 4 shows the comparison of the MCC values of the proposed approach with those of the other models as the percentage of dishonest recommendations varies from 10% to 45%. According to the results, the proposed approach can effectively detect dishonest recommendations, evident from a constant MCC of +1 for both sets of experiments. On the other hand, for[25], in the case of the bad mouthing attack (Figure 4a), the MCC increases slowly as the percentage of dishonest recommenders increases from 10% to 30% but then drops rapidly to 0 as the percentage increases from 30% to 45%. The same behavior was observed in the case of the ballot stuffing attack (Figure 4b). For[26], when the percentage of dishonest recommenders increases to 40%, the MCC rate starts to decrease as well. Thus, all three approaches ([25, 26], and[23]) fail to achieve perfect filtering of dishonest recommendations as the percentage of dishonest recommenders increases.

Figure 4

Filtering accuracy in terms of MCC. (a) Bad mouthing attack and (b) ballot stuffing.

For an in-depth analysis of[25, 26], and[23], the false positive rate (FPR) and false negative rate (FNR) are computed using the following equations:

FPR = FP / (FP + TN)

and

FNR = FN / (FN + TP)
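The corresponding helpers, with a zero-denominator guard that is our own convention:

```python
def fpr(fp, tn):
    """False positive rate of the filter."""
    return fp / (fp + tn) if (fp + tn) else 0.0

def fnr(fn, tp):
    """False negative rate of the filter."""
    return fn / (fn + tp) if (fn + tp) else 0.0
```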

The values of the FPR and FNR lie in [0, 1]; lower values indicate better performance. Figure 5a shows the comparison of the FNR and FPR of[25] with those of the proposed approach, based on the results accumulated from the BM attack experiments. Although the FPR of[25] remains consistently zero as the percentage of dishonest recommendations increases from 10% to 40%, its FNR progressively increases and reaches its maximum value at 40%. Similarly, Figure 5b shows that[26] maintains zero FPR and FNR as long as dishonest recommenders constitute less than 30% of the total recommenders; however, as the number of dishonest recommenders increases above 30%, the model behaves poorly, showing a rapid increase in FPR and FNR. Figure 5c shows that although the FPR of[23] improves as the percentage of dishonest recommenders increases, its FNR simultaneously starts to grow rapidly for percentages greater than 20%. In contrast, the proposed approach maintains zero FNR and FPR even when the percentage of dishonest recommenders reaches 40%. Figure 6a explicates the results observed for[25] under the ballot stuffing attack: the approach maintains zero FPR throughout the experiment; however, it filters out a high number of honest recommenders as dishonest, evident from the high FNR. Similarly, the performance of[26] remains stable while the percentage of dishonest recommenders remains below 30% (Figure 6b), but the approach shows rapid growth in FNR as the percentage of dishonest recommenders increases above 30%. Figure 6c shows that[23] is completely unable to detect ballot stuffing; the approach shows a high FPR even at low percentages of dishonest recommenders. It can be seen from the results of Figures 5 and 6 that the proposed approach remains resistant to the attacks in both experiments (its FPR and FNR remain zero), thus outperforming the other approaches.

Figure 5

FPR vs FNR (in bad mouthing attack). (a) Control limit charts. (b) Iterative filtering. (c) Quartile.

Figure 6

FPR vs FNR (in ballot stuffing attack). (a) Control limit charts. (b) Iterative filtering. (c) Quartile.

From the above discussion, we can conclude that both[25] and[23] perform poorly as the percentage of dishonest recommenders increases. It is also observed that[26] performs well provided that the recommendation threshold is selected appropriately. In contrast, the proposed approach does not rely on any external parameter and is able to detect 100% of dishonest recommenders provided that they are in the minority (<50%).

Conclusions

A mechanism for detecting dishonest recommendations in indirect trust computation has been proposed. The main focus of the present work is to detect dishonest recommendations based on their dissimilarity value with respect to the complete recommendation set. Since the median is resistant to outliers, we have proposed a dissimilarity function that captures how dissimilar a recommendation class is from the median of the recommendation set. The algorithm uses a smoothing factor, which detects malicious recommendations by evaluating the impact on the dissimilarity metric of removing a subset of recommendation classes from the set of recommendations.

Experimental evaluation shows the effectiveness of the proposed method in filtering dishonest recommendations in comparison with the base model. The results show that the proposed method successfully detects dishonest recommendations by utilizing the absolute deviation from the median, whereas the base technique tends to fail as the percentage of dishonest recommendations increases. We have carried out a detailed comparative analysis with the base approach by varying the percentage of dishonest recommendations and the offset they introduce. The results indicate the improved performance of the proposed approach, which produces a 70% detection rate even at the minimum offset of 0.2, whereas the base approach is unable to detect any dishonest recommendations at all under these conditions. It is also shown that for different attacks (bad mouthing, ballot stuffing, and random opinion), the proposed method successfully filters out the dishonest recommendations. A comparison between existing approaches and the proposed approach is also presented, which clearly shows the better performance of the proposed approach. In future work, we will study the possibility of incorporating the proposed approach into existing reputation models that make decisions on the basis of recommendations.

References

  1. Wagealla W, Carbone M, English C, Terzis S, Nixon P: A formal model on trust lifecycle management. In Workshop on Formal Aspects of Security and Trust. Pisa; 9–12 September 2003:184-195.

  2. English C, Wagealla W, Nixon P, Terzis S, McGettrick A, Lowe H: Trusting collaboration in global computing. In 1st International Conference on Trust Management. Heidelberg: Springer; 2003:136-149.

  3. Shand B, Dimmock N, Bacon J: Trust for ubiquitous, transparent collaboration. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications. Los Alamitos: IEEE; 2003:153-160.

  4. Deno MK, Sun T: Probabilistic trust management in pervasive computing. IEEE/IFIP Int. Conf. Embedded Ubiquitous Comput. 2008, 2: 610-615.

  5. Iltaf N, Ghafoor A, Hussain M: Modeling interaction using trust and recommendation in ubiquitous computing environment. EURASIP J. Wireless Commun. Netw. 2012, 2012: 119. 10.1186/1687-1499-2012-119

  6. Almenarez F, Marin A, Diaz D, Cortes A, Campo C, Garcia C: Trust management for multimedia P2P applications in autonomic networking. Ad Hoc Netw. 2011, 9: 687-690.

  7. Hoffman K, Zage D, Nita-Rotaru C: A survey of attack and defense techniques for reputation systems. ACM Comput. Surv. 2009, 42(1):1-31.

  8. Zhang Z, Feng X: New methods for deviation-based outlier detection in large database. In Sixth International Conference on Fuzzy Systems and Knowledge Discovery. Los Alamitos: IEEE; 2009:495-499.

  9. Josang A, Ismail R, Boyd C: A survey of trust and reputation systems for online service provision. Decis. Support Syst. 2007, 43(2):618-644. 10.1016/j.dss.2005.05.019

  10. Xiong L, Liu L: PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. IEEE Trans. Knowl. Data Eng. 2004, 16(7):843-857. 10.1109/TKDE.2004.1318566

  11. Chen M, Singh JP: Computing and using reputations for internet ratings. In 3rd ACM Conference on Electronic Commerce. New York: ACM; 2001:154-162.

  12. Malik Z, Bouguettaya A: Evaluating rater credibility for reputation assessment of web services. In 8th International Conference on Web Information Systems Engineering. Heidelberg: Springer; 2007:38-49.

  13. Ganeriwal S, Balzano LK, Srivastava MB: Reputation-based framework for high integrity sensor networks. ACM Trans. Sensor Netw. 2008, 4: 1-37.

  14. Zhou R, Hwang K: PowerTrust: a robust and scalable reputation system for trusted peer-to-peer computing. IEEE Trans. Parallel Distributed Syst. 2007, 18(4):460-473.

  15. Liu X, Datta A, Fang H, Zhang J: Detecting imprudence of reliable sellers in online auction sites. In IEEE 11th International Conference on Trust, Security and Privacy in Computing and Communications. Los Alamitos: IEEE; 2012:246-253.

  16. Ziegler C, Golbeck J: Investigating interactions of trust and interest similarity. Decis. Support Syst. 2007, 43(2):460-475. http://dx.doi.org/10.1016/j.dss.2006.11.003

  17. Varlamis I, Eirinaki M, Louta M: A study on social network metrics and their application in trust networks. In 2010 International Conference on Advances in Social Networks Analysis and Mining (ASONAM). Los Alamitos: IEEE; 2010:168-175.

  18. Davoodi E, Afsharchi M, Kianmehr K: A social network-based approach to expert recommendation system. In Hybrid Artificial Intelligent Systems. Heidelberg: Springer; 2012:91-102.

  19. Ma H, King I, Lyu MR: Learning to recommend with social trust ensemble. In 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM; 2009:203-210.

  20. Almenarez F, Marin A, Diaz D, Cortes A, Campo C, Garcia C: Managing ad-hoc trust relationships in pervasive computing environments. In Proceedings of the Workshop on Security and Privacy in Pervasive Computing, SPPC'04. Vienna; 20 April 2004.

  21. Dellarocas C: Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior. In 2nd ACM Conference on Electronic Commerce. New York: ACM; 2000:150-157.

  22. Liu S, Zhang J, Miao C, Theng Y, Kot A: An integrated clustering-based approach to filtering unfair multi-nominal testimonies. Comput. Intell. 2012. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8640.2012.00464.x/full

  23. Whitby A, Josang A, Indulska J: Filtering out unfair ratings in Bayesian reputation systems. In 3rd International Joint Conference on Autonomous Agents and Multi Agent Systems. Washington: IEEE; 2005:106-117.

  24. Weng J, Miao C, Goh A: An entropy-based approach to protecting rating systems from unfair testimonies. IEICE Trans. Inf. Syst. 2006, 89(9):2502-2511.

  25. Ahamed SI, Haque MM, Hoque ME, Rahman F, Talukder N: Design, analysis, and deployment of omnipresent formal trust model (FTM) with trust bootstrapping for pervasive environments. J. Syst. Software 2010, 83(2):253-270. 10.1016/j.jss.2009.09.040

  26. Deno MK, Sun T, Woungang I: Trust management in ubiquitous computing: a Bayesian approach. Comput. Commun. 2011, 34(3):398-406. 10.1016/j.comcom.2010.01.023

  27. Arning A, Agrawal R, Raghavan P: A linear method for deviation detection in large databases. In 2nd International Conference on Knowledge Discovery and Data Mining. Portland: AAAI; 1996:164-169.

  28. Matthews BW: Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta 1975, 405: 442-451. 10.1016/0005-2795(75)90109-9

Author information

Corresponding author

Correspondence to Abdul Ghafoor.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Iltaf, N., Ghafoor, A. & Zia, U. A mechanism for detecting dishonest recommendation in indirect trust computation. J Wireless Com Network 2013, 189 (2013). https://doi.org/10.1186/1687-1499-2013-189
