This section describes the distributed trust computation method used to adapt the active topology and to secure the PKIMANET system. The proposed trust methodology is assumed to be deployed in a clustered environment with cluster-head and member nodes. Generally, the trust of a node can be defined as the probability of belief of a trustor (m) in a trustee (n), varying from 0 (complete distrust) to 1 (complete trust). The probabilities of trust and distrust of the trustor in information (i) sent by the trustee, in the context of belief (b), are given as:
$$ Trust\ Degree,\ TD\left(m,n,i,b\right)=P\left[\, belief\left(m,i\right)\mid madeBy\left(i,n,b\right)\wedge beTrue(b)\,\right] $$
(1)
$$ Distrust\ Degree,\ DTD\left(m,n,i,b\right)=P\left[\, belief\left(m,\neg i\right)\mid madeBy\left(i,n,b\right)\wedge beTrue(b)\,\right] $$
(2)
Distributed trust management
The distributed trust is computed with a hybrid method that combines direct and indirect trust values. The direct trust is based on direct observations, obtained by constantly sending SENSE beacons to the neighbouring nodes and evaluating the responses, whereas recommendations from the one-hop neighbours contribute to the indirect trust computation. The hybrid trust combines the direct and the indirect components. Unlike a centralized trust calculation, here each node computes its own trust value for its neighbour. The trust computation of trustor x on trustee y, T_{x, y}, by the hybrid mechanism is shown in Fig. 2: hybrid trust method. It is calculated as:
$$ T_{x,y}=\left(1-\text{ƭ}\right){T_{x,y}}^D+\text{ƭ}\,{T_{x,y}}^{ID} $$
(3)
where ƭ is the trust weighting component; 0 ≤ ƭ ≤ 1
T_{x, y}^{D} is the direct trust of x on y; 0 ≤ T_{x, y}^{D} ≤ 1
T_{x, y}^{ID} is the indirect trust of x on y; 0 ≤ T_{x, y}^{ID} ≤ 1
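The hybrid combination of Eq. (3) can be sketched in a few lines; the function name and the input validation are illustrative, not part of the paper:

```python
def hybrid_trust(direct: float, indirect: float, f: float) -> float:
    """Eq. (3): T_{x,y} = (1 - f) * T_D + f * T_ID, with all quantities in [0, 1]."""
    if not (0.0 <= f <= 1.0 and 0.0 <= direct <= 1.0 and 0.0 <= indirect <= 1.0):
        raise ValueError("all inputs must lie in [0, 1]")
    return (1.0 - f) * direct + f * indirect
```

With ƭ = 0 the node relies only on its own observations; with ƭ = 1 it relies only on recommendations.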
The direct trust is computed from the direct observations of x on y at time t, as given by (4). The trust may decay with the passage of time (t − t_{1}), represented by the fading component δ.
$$ {T_{x,y}}^D=\begin{cases}{T_{x,y}}^D(t) & ;\ \mathrm{if\ hop\ count}==1\\[4pt] \delta\,{T_{x,y}}^D\left(t-{t}_1\right) & ;\ \mathrm{else}\end{cases} $$
(4)
The indirect trust evaluated by x on y with respect to the recommendation from a one-hop neighbour of x (node k) at time t is given by (5). The trust is reduced over t_{1} when x receives false recommendations from a recommender (say, node p) located within an appropriate trust length from y.
$$ {T_{x,y}}^{ID}=\begin{cases}{T}_{k,y} & ;\ \mathrm{if}\ TR>0\\[4pt] \delta\,{T}_{x,y}\left(t-{t}_1\right) & ;\ \mathrm{else}\end{cases} $$
(5)
where TR is the set of true recommendations received from x’s one-hop neighbours (i.e., k). When TR > 0, x appoints those neighbouring nodes to evaluate the trust indirectly. On the other hand, if TR = 0, x falls back on its previous trust value T_{x, y}(t − t_{1}), since it received no true recommendations.
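A minimal sketch of Eqs. (4) and (5). Averaging the true recommendations is an assumption made here for illustration, since (5) only names a single recommender trust value:

```python
def direct_trust(current_obs: float, prev_direct: float,
                 hop_count: int, delta: float) -> float:
    # Eq. (4): a one-hop neighbour is scored from the fresh observation;
    # otherwise the previous value fades by the component delta.
    return current_obs if hop_count == 1 else delta * prev_direct

def indirect_trust(true_recs: list, prev_trust: float, delta: float) -> float:
    # Eq. (5): if any true recommendations arrived (TR > 0), use them
    # (here: their mean, an assumption); else fall back to the faded value.
    if true_recs:
        return sum(true_recs) / len(true_recs)
    return delta * prev_trust
```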
Direct and indirect trust management
Uncertainty is an unresolved problem in MANETs, especially while evaluating the trust of the network. Under uncertainty, nodes may misbehave, acting as selfish nodes or malicious attackers. In each cluster, the cluster head (CH) is authorized to monitor misbehaviour locally and to collaborate with the cluster members to further investigate the effect of the misbehaviour on the network. When a cluster head detects a sign of misbehaviour from any node (say, node x), it first evaluates the credibility of the message. Subsequently, the CH requests the cluster members, especially the one-hop neighbours of the suspicious node, to share their individual observations about x. We treat these observations as evidences, which are assembled to evaluate the evidence trust factor \( {\mathbb{E}}^x(e) \). Furthermore, the CH monitors the rate of misbehaviour by directly observing node x, denoted \( {\mathbb{E}}^x(d) \). The trust management system combines these direct observations and the evidences obtained from the one-hop cluster members to evaluate the trustworthiness of x.
The trust management becomes more complex when an observing node (called a recommender) itself behaves untrustworthily and contributes false evidences. Such behaviour makes MANET trust evaluation impracticable, especially in detecting which recommender is untrustworthy. Therefore, we make use of the well-known Dempster-Shafer (DS) evidence theory, where the uncertainty of nodes is represented using belief functions. The main idea of the DS theory is that a recommender attains a certain degree of belief in a hypothesis based on subjective probability. The DS theory provides an appropriate mathematical model for MANETs to combine distributed information gathered from different sources.
Trust verification with the Bayesian theory
We consider that the CH monitors the packets forwarded by the suspected node and compares them with the original packets sent directly to the node, in order to identify the misbehaving nature of node x. Consider a node x that maintains a forwarding record for its neighbouring node y. Then, for a set of nodes N, the CH supervises the packet ratio as in (6):
$$ {\sum}_{x\in N}{S}_{xy}={\sum}_{x\in N}{F}_{xy} $$
(6)
where S_{xy} is the number of packets forwarded to node x by the neighbouring node y and F_{xy} is the number of packets forwarded by node x. If the packet counts are not equal, a misbehaviour is identified by the CH, i.e., if \( {\sum}_{x\in N}{S}_{xy}\ne {\sum}_{x\in N}{F}_{xy} \), it follows that node x is misbehaving, either as a selfish node or under a malicious attack.
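The check in Eq. (6) amounts to comparing two counters. A sketch follows, with per-neighbour packet counts kept in dictionaries (an illustrative data layout, not prescribed by the paper):

```python
def is_misbehaving(sent_to_node: dict, forwarded_by_node: dict) -> bool:
    # Eq. (6): the CH flags a node when the total packets handed to it by its
    # neighbours (S_xy) differ from the total packets it actually forwarded (F_xy).
    return sum(sent_to_node.values()) != sum(forwarded_by_node.values())
```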
Thus, the CH directly evaluates the misbehaviour and calculates the trust factor of its cluster members with Bayesian inference, where the unknown probabilities are hypothesized from observations. The measure of belief about a hypothesis is represented by the well-known Bayes' theorem:
$$ P\left(i\mid j\right)=\frac{P\left(j\mid i\right)P\left(i\right)}{P(j)} $$
(7)
where \( P\left(i\mid j\right) \) is the measure of belief about the hypothesis \( i \) given the evidence \( j \), and
\( P\left(i\right) \) is the prior belief about \( i \) in the absence of \( j \).
In MANETs, the higher the probability of any misbehaviour, the more likely it is that the misbehaviour will occur. Therefore, Bayes' theorem may be expressed in terms of probability distributions as:
$$ P\left(\delta \mid data\right)=\frac{P\left( data\mid \delta \right)P\left(\delta \right)}{P(data)} $$
(8)
where P(δ | data) is the posterior distribution of the parameter δ, P(data | δ) is the sampling density function, P(δ) is the prior distribution, and P(data) is the marginal probability of the data.
From (8), we shall modify the misbehaviour verification as:
$$ P\left(\delta \mid \alpha, b\right)=\frac{f\left(b\mid \delta, \alpha \right)P\left(\delta, \alpha \right)}{\int_0^1 f\left(b\mid \delta, \alpha \right)P\left(\delta, \alpha \right)\, d\delta} $$
(9)
where δ is the degree of belief with 0 ≤ δ ≤ 1, b is the number of correctly forwarded packets by a node, \( \alpha \) is the number of packets received by the node, and \( f\left(b\mid \delta, \alpha \right) \) is the probability function that follows a binomial distribution, given by
$$ f\left(b\mid \delta, \alpha \right)=\binom{\alpha }{b}{\delta}^b{\left(1-\delta \right)}^{\alpha -b} $$
(10)
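The binomial likelihood of Eq. (10) can be evaluated directly; `math.comb` supplies the binomial coefficient:

```python
from math import comb

def likelihood(b: int, delta: float, alpha: int) -> float:
    # Eq. (10): f(b | delta, alpha) = C(alpha, b) * delta^b * (1 - delta)^(alpha - b)
    return comb(alpha, b) * delta**b * (1.0 - delta)**(alpha - b)
```

Summing over b = 0, …, α yields 1, as a probability mass function should.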
To describe the initial knowledge concerning the probability of success, we use the beta distribution as the prior in the Bayesian approach, and hence the prior distribution P(δ | α, β) can be stated as:
$$ P\left(\delta \mid \alpha, \beta \right)=\frac{\Gamma \left(\alpha +\beta \right)}{\Gamma \left(\alpha \right)\Gamma \left(\beta \right)}{\delta}^{\alpha -1}{\left(1-\delta \right)}^{\beta -1} $$
(11)
where α, β > 0 are the shape parameters of the beta distribution.
The mean and variance of the beta distribution are given as:
$$ M\left(\delta \mid \alpha, \beta \right)=\frac{\alpha }{\alpha +\beta } $$
(12)
and
$$ V\left(\delta \mid \alpha, \beta \right)=\frac{\alpha \beta }{\left(\alpha +\beta +1\right){\left(\alpha +\beta \right)}^2} $$
(13)
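Eqs. (12) and (13) in code form, as a direct transcription of the formulas:

```python
def beta_mean(alpha: float, beta: float) -> float:
    # Eq. (12): M(delta | alpha, beta) = alpha / (alpha + beta)
    return alpha / (alpha + beta)

def beta_variance(alpha: float, beta: float) -> float:
    # Eq. (13): V = alpha * beta / ((alpha + beta)^2 * (alpha + beta + 1))
    s = alpha + beta
    return (alpha * beta) / (s * s * (s + 1.0))
```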
In our scheme, the trust factor represents behaviour that grows slowly, thereby giving more weight to the misbehaving rate in the Bayesian network. The trust factor for misbehaviour verification is given as:
(12)⇒
$$ M\left(\delta \mid \alpha, \beta \right)=\frac{\alpha }{\alpha +{\alpha}^x\beta } $$
(14)
The beta distribution is well suited to the random behaviour of proportions. Considering the event history in the Bayesian framework, the expected value of the beta distribution can be written as
(14)⇒
$$ M\left(\delta \mid \alpha, \beta \right)=\frac{\alpha_t}{\alpha_t+{\alpha^x}_t{\beta}_t} $$
(15)
where
$$ {\alpha}_t={\alpha}_{t-1}+{i}_{t-1} $$
$$ {\beta}_t={\beta}_{t-1}+{b}_{t-1} $$
and with the prior probability distribution, we assume no observations are made initially, so α_{0} = β_{0} = 0. Therefore, the direct trust factor that quantifies the behaviour of node x is deduced from the above calculations as:
$$ {T_x}^D(t)={\mathbb{E}}^x(d)=M\left(\delta \mid \alpha, \beta \right) $$
(16)
The accuracy of the proposed direct trust evaluation is improved by tracking the count of correctly forwarded packets (b), which is incremented by one for each successful transmission. If b is not incremented, either due to unreliable network conditions or expiry of the packet lifetime, the packets are considered dropped and are discarded from the communication. Algorithm 1 describes the direct trust calculation in the Bayesian framework.
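A sketch of the sequential update behind this direct trust calculation, under stated assumptions: each round contributes a pair (i_t, b_t) to the counters of Eq. (15), and α^x is read as α raised to a behaviour exponent x (the notation is ambiguous, so this reading is an assumption); with x = 0 the expression reduces to the plain beta mean of Eq. (12):

```python
def direct_trust_factor(observations, x: float = 0.0) -> float:
    # Prior of Eq. (15) with no observations: alpha_0 = beta_0 = 0.
    alpha, beta = 0.0, 0.0
    for i_t, b_t in observations:
        alpha += i_t          # alpha_t = alpha_{t-1} + i_{t-1}
        beta += b_t           # beta_t  = beta_{t-1} + b_{t-1}
    # Modified mean of Eqs. (14)/(15); alpha ** x is an assumed reading of alpha^x.
    denom = alpha + (alpha ** x) * beta
    return alpha / denom if denom > 0 else 0.0
```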
Misbehaviour verification with evidence theory
This section describes the misbehaviour verification with respect to the recommendations for the suspicious node x from the one-hop neighbours within each cluster. The cluster head requests the one-hop neighbours of x, referred to as recommenders, to verify the misbehaviour based on their independent observations, as shown in Fig. 3: indirect misbehaviour verification. The recommendations, called evidences, received from the cluster neighbours assist in evaluating the trust value of x. The DS theory is used in practice under uncertainty or ignorance to evaluate the value of trust. This theory utilizes a belief function to combine the indirect evidences, which reflects the subjective probabilities.
The probabilities, which are mutually exclusive and exhaustive, are computed as a set of functions with ‘Φ’ as the frame of discernment in the DS evidence system. Including all the probabilities of the hypotheses, called focal values P_{k}, as a function of \( \mathcal{M} \), we consider the power set 2^{Φ} and satisfy the following conditions:

1.
The probability value of the null set is zero, i.e., \( \mathcal{M}\left(\emptyset \right)=0 \).

2.
The sum over all elements of the power set is 1, i.e., \( {\sum}_{P_k\subseteq \varPhi}\mathcal{M}\left({P}_k\right)=1 \)
The belief function of subjective probabilities shall therefore be defined as
$$ F(x)={\sum}_{P_k\subseteq x}\mathcal{M}\left({P}_k\right) $$
(17)
In the proposed trust management scheme, we consider two node behaviour states, i.e., {accept, evict}, represented with the DS theory. Accordingly, the frame of discernment comprises a pair of probabilities regarding the behaviour of any random node, that is, Φ = {trust, distrust}, where ‘trust’ represents the trustworthy behaviour of the node and ‘distrust’ represents the misbehaving node state that occurs in the presence of selfish and malicious attackers.
Considering Fig. 3, the neighbour nodes A, B and C of the suspicious node x, at a hop distance of 1, share their evidences with the CH as subsets of Φ. We interpret the power set with three forms of proposition, i.e., proposition T = {trust}, proposition M = {distrust} and proposition H = Φ, which represents the uncertainty state, where it is uncertain whether node x should be placed in the acceptable or the misbehaving state. The neighbours provide recommendations as evidences by sharing their beliefs over Φ.
As an example, if node A believes that node x behaves trustworthily, then \( {\mathcal{M}}_A(T)={\mathbb{E}}^x(A) \) and therefore \( {\mathcal{M}}_A(M)=0 \). The evidence from node A can be stated as:
$$ {\displaystyle \begin{array}{c}{\mathcal{M}}_A(T)={\mathbb{E}}^x(A)\\ {}{\mathcal{M}}_A(M)=0\\ {}{\mathcal{M}}_A(H)=1-{\mathbb{E}}^x(A)\end{array}} $$
(18)
Likewise, if node B believes that node x misbehaves, its recommendation favours the evict function as follows:
$$ {\displaystyle \begin{array}{c}{\mathcal{M}}_B(T)=0\\ {}{\mathcal{M}}_B(M)={\mathbb{E}}^x(B)\\ {}{\mathcal{M}}_B(H)=1-{\mathbb{E}}^x(B)\end{array}} $$
(19)
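The basic probability assignments of Eqs. (18) and (19) map naturally onto dictionaries keyed by the propositions T, M and H (the key names are an illustrative choice):

```python
def trusting_evidence(e: float) -> dict:
    # Eq. (18): a recommender that trusts x commits e to {trust} and the
    # remainder 1 - e to the uncertainty proposition H = Phi.
    return {"T": e, "M": 0.0, "H": 1.0 - e}

def distrusting_evidence(e: float) -> dict:
    # Eq. (19): a recommender that distrusts x commits e to {distrust}.
    return {"T": 0.0, "M": e, "H": 1.0 - e}
```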
DS theory of combining evidences
In the proposed trust management scheme, the DS theory combines all the recommendations of the one-hop neighbours, on the condition that the recommendations are independent. Suppose F_{1}(x) and F_{2}(x) are the belief functions of two independent recommending nodes over the same suspicious node; then the orthogonal sum of these belief functions is given as:
$$ {\displaystyle \begin{array}{c}F(x)={F}_1(x)\oplus {F}_2(x)\\ {}=\frac{\sum_{j,k:{P}_j\cap {P}_k=x}\ {\mathcal{M}}_1\left({P}_j\right)\,{\mathcal{M}}_2\left({P}_k\right)}{\sum_{j,k:{P}_j\cap {P}_k\ne \emptyset}\ {\mathcal{M}}_1\left({P}_j\right)\,{\mathcal{M}}_2\left({P}_k\right)}\end{array}} $$
(20)
where P_{j}, P_{k} ⊆ Φ.
With reference to Fig. 3, the belief of node A and B is calculated as
$$ {\displaystyle \begin{array}{l}{\mathcal{M}}_A(T)\oplus {\mathcal{M}}_B(T)=\frac{1}{I}\left[{\mathcal{M}}_A(T)\,{\mathcal{M}}_B(T)+{\mathcal{M}}_A(T)\,{\mathcal{M}}_B(H)+{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(T)\right]\\ {}{\mathcal{M}}_A(M)\oplus {\mathcal{M}}_B(M)=\frac{1}{I}\left[{\mathcal{M}}_A(M)\,{\mathcal{M}}_B(M)+{\mathcal{M}}_A(M)\,{\mathcal{M}}_B(H)+{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(M)\right]\\ {}{\mathcal{M}}_A(H)\oplus {\mathcal{M}}_B(H)=\frac{1}{I}\left[{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(H)\right]\end{array}} $$
(21)
where
$$ I={\mathcal{M}}_A(T)\,{\mathcal{M}}_B(T)+{\mathcal{M}}_A(T)\,{\mathcal{M}}_B(H)+{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(H)+{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(T)+{\mathcal{M}}_A(H)\,{\mathcal{M}}_B(M)+{\mathcal{M}}_A(M)\,{\mathcal{M}}_B(M)+{\mathcal{M}}_A(M)\,{\mathcal{M}}_B(H) $$
(22)
We assume the acceptance probabilities of nodes A and B to be 0.8 and 0.7, respectively; thus,
$$ {\displaystyle \begin{array}{c}F(T)=0.94\\ {}F(M)=0\\ {}F(H)=0.06\end{array}} $$
Thus, we conclude that the acceptable behaviour rate obtained from the indirect evidences with the DS theory is 0.94. By combining all the belief values, we get
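Dempster's rule of Eqs. (20)-(22) over Φ = {trust, distrust} can be sketched with the propositions encoded as dictionary keys T, M and H (an illustrative choice). Running it on \( {\mathbb{E}}^x(A)=0.8 \) and \( {\mathbb{E}}^x(B)=0.7 \) reproduces the worked example:

```python
def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule (Eqs. 20-22) over the frame {T, M} with H = Phi.
    Intersections: any proposition meets H as itself, T meets T as T,
    M meets M as M; T meets M is empty (conflict) and is normalized away."""
    meet = {("T", "T"): "T", ("T", "H"): "T", ("H", "T"): "T",
            ("M", "M"): "M", ("M", "H"): "M", ("H", "M"): "M",
            ("H", "H"): "H"}
    out = {"T": 0.0, "M": 0.0, "H": 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            key = meet.get((a, b))
            if key is None:
                conflict += wa * wb      # T vs M: contradictory evidence
            else:
                out[key] += wa * wb
    norm = 1.0 - conflict                # the normalization factor I of Eq. (22)
    return {k: v / norm for k, v in out.items()}

m_A = {"T": 0.8, "M": 0.0, "H": 0.2}     # evidence of node A, Eq. (18)
m_B = {"T": 0.7, "M": 0.0, "H": 0.3}     # evidence of node B
```

Since both recommenders assign zero mass to M, there is no conflict here and the factor I equals 1; the combined masses are F(T) = 0.94, F(M) = 0 and F(H) = 0.06.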
$$ {T_x}^{ID}(t)={\mathcal{M}}_A(T)\oplus {\mathcal{M}}_B(T)\oplus \dots \oplus {\mathcal{M}}_N(T) $$
(23)
where nodes A, B, …, N are the one-hop recommenders of node x.
Therefore, the evidence trust value obtained from the recommendations can be computed as
$$ {T_x}^{ID}(t)=\left({\mathbb{E}}^x(e)\right)=F(x) $$
(24)
The indirect trust evaluation with the DS evidence theory is depicted in Algorithm 2.