 Research
 Open Access
Inconsistency resolving of safety and utility in access control
 Jianfeng Lu^{1},
 Ruixuan Li^{2},
 Jinwei Hu^{3} and
 Dewu Xu^{1}
https://doi.org/10.1186/1687-1499-2011-101
© Lu et al; licensee Springer. 2011
 Received: 14 November 2010
 Accepted: 18 September 2011
 Published: 18 September 2011
Abstract
Policy inconsistencies may arise between safety and utility policies due to their opposing objectives. In this work we provide a formal examination of policy inconsistency resolution for the coexistence of static separation-of-duty (SSoD) policies and strict availability (SA) policies. First, we reduce the complexity of reasoning about policy inconsistencies by means of a static pruning technique and the minimal inconsistency cover set. Second, we present a systematic methodology for measuring safety loss and utility loss, and evaluate the safety-utility tradeoff for each choice. Third, we present two priority-based resolutions that deal with policy inconsistencies based on the safety-utility tradeoff. Finally, experiments show the effectiveness and efficiency of our approach.
Keywords
 access control
 safety
 utility
 separation-of-duty
1. Introduction
Safety and utility policies are both important in an access control system for ensuring security and availability when performing a certain task. Safety policies describe safety requirements, which ensure that users who should not have access do not get access. Such a focus on safety requirements probably stems from the fact that access control policies have mostly been viewed as a tool for restricting access. An example of a safety policy is a static separation-of-duty (SSoD) policy, which precludes any group of users from possessing too many permissions [1]. An equally important aspect of access control is the utility policies that enable access [2, 3]. In our previous work [4], we introduced the notion of availability policies, which are an example of utility policies. In this paper, we introduce the notion of strict availability (SA) policies, another example of utility policies, which require that the cooperation of at most a certain number of users suffices to perform a task. Due to their opposing objectives, safety policies and utility policies can conflict with each other. For example, let p_{1} and p_{2} be two permissions, and u_{1} and u_{2} two users. Assume that an SSoD policy requires that neither u_{1} nor u_{2} possess all permissions in {p_{1}, p_{2}}, while an SA policy requires that both u_{1} and u_{2} possess all permissions in {p_{1}, p_{2}}. Clearly, the two policies cannot be satisfied simultaneously.
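The impossibility in this small example can be checked mechanically. The following sketch is our own illustration (the function and variable names are assumptions, not from the paper): it enumerates every user-permission assignment over {u_1, u_2} × {p_1, p_2} and confirms that no state satisfies both the SSoD requirement (no single user covers {p_1, p_2}) and the SA requirement (every single user covers {p_1, p_2}):

```python
from itertools import combinations, product

USERS, PERMS = ["u1", "u2"], ["p1", "p2"]

def sat_ssod(assign, P, U, k):
    # SSoD: no group of fewer than k users jointly holds all of P
    return not any(set(P) <= set().union(set(), *(assign[u] for u in g))
                   for r in range(k) for g in combinations(U, r))

def sat_sa(assign, P, U, t):
    # SA: every size-t group of users jointly holds all of P
    return all(set(P) <= set().union(*(assign[u] for u in g))
               for g in combinations(U, t))

def all_states():
    # every subset of U x P, viewed as a user -> set-of-permissions map
    pairs = list(product(USERS, PERMS))
    for bits in product([0, 1], repeat=len(pairs)):
        up = {pair for pair, b in zip(pairs, bits) if b}
        yield {u: {p for p in PERMS if (u, p) in up} for u in USERS}

consistent = any(sat_ssod(s, PERMS, USERS, 2) and sat_sa(s, PERMS, USERS, 1)
                 for s in all_states())
print(consistent)  # False: no state satisfies both policies
```

The same brute-force style of check works for any small policy set, though it is exponential in the number of user-permission pairs.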
This paper examines this kind of conflict: policy inconsistencies that result from the incompatibility between safety policies and utility policies, especially for the coexistence of SSoD policies and SA policies. Policy inconsistencies differ from traditional policy conflicts [5] in that the composition of safety and utility policies is never supposed to be inconsistent. That is, policy inconsistencies are checked at compile-time to prevent the construction of any safety or utility policy that may conflict with another; a policy inconsistency results in a policy compilation error. Hence, resolving policy inconsistencies is a policy design problem, whereas policy conflicts are resolved at runtime. In practice, a policy administrator may define many safety and utility policies, and these policies may be inconsistent; detecting and resolving such inconsistencies by hand is not easy. It is therefore very important to help the policy administrator detect and resolve policy inconsistencies at compile-time. This discussion motivates the problem considered in this paper.
In our previous work [4], we addressed the problem of consistency checking for the coexistence of safety and utility policies. In this paper, we aim to provide a formal examination of policy inconsistency resolution for safety and utility policies, which can help policy administrators specify reasonable access control policies when both safety and utility policies coexist. Our contributions are as follows:

We formally define the policy inconsistency for the coexistence of safety policies and utility policies.

We describe a static pruning technique that aims to reduce the number of policies that need to be taken into account.

We compute the minimal inconsistency cover set that is responsible for the policy inconsistencies; thus we only need to examine the minimum number of policies.

We present a systematic methodology for measuring safety loss and utility loss, and evaluate the safety-utility tradeoff for each candidate resolution.

We present two priority-based resolutions to deal with policy inconsistencies for safety and utility policies, based on the safety-utility tradeoff.
The remainder of this paper is organized as follows. Section 2 formally defines the policy inconsistency problem for the coexistence of safety policies and utility policies. Section 3 presents priority-based resolutions for policy inconsistencies. The evaluation and illustration of our approaches are given in Section 4. Section 5 discusses related work, and Section 6 concludes with a discussion of future work.
2. Policy inconsistency problem
We assume that there are two countably infinite sets in an access control state: U (the set of all possible users), and P (the set of all possible permissions). An access control state ε is a binary relation UP ⊆ U × P, which determines the set of permissions a user possesses. Note that by assuming that an access control state ε is given by a binary relation UP ⊆ U × P, we are not assuming permissions are directly assigned to users; rather, we assume only that one can calculate the relation UP from the access control state.
Safety policies describe safety requirements, which ensure that users who should not have access do not get access. A safety policy is specified by giving a predicate on sets of executions: if conditions on (users, permissions) are satisfied, then a set U of users is prohibited from covering a set P of permissions. One example of a safety policy is a static separation-of-duty (SSoD) policy. SSoD is considered a fundamental principle of information security and has been widely used in business, industry, and government applications [6]. An SSoD policy typically constrains the assignment of permissions to users, precluding any group of users from possessing too many permissions. We first reproduce the definition of SSoD policies from [4].
Definition 1. An SSoD policy ensures that at least k users from a user set U are required to cover all the permissions needed to perform a task. It is formally defined as follows:

P and U denote the set of permissions and the set of users, respectively.

UP ⊆ U × P, is a userpermission assignment relation.

auth_p_{ ε }(u) = {p | (p ∈ P) ⋀ ((u, p) ∈ UP)}.

∀(P, U, k) ∈ SSoD, ∀U' ⊆ U : |U'| < k ⇒ ∪_{u∈U'} auth_p_{ ε }(u) ⊉ P,
where P = {p_{1}, ..., p_{ m } }, U = {u_{1}, ..., u_{ n } }, each p_{ i } in P is a permission and each u_{ j } in U is a user, and m, n, and k are integers such that 2 ≤ k ≤ min(m, n), where min returns the smaller of the two values. We write an SSoD policy as ssod⟨P, U, k⟩. That an access control state ε satisfies an SSoD policy e = ssod⟨P, U, k⟩ is denoted by sat_{ e } (ε), and sat_{ E } (ε) denotes that ε satisfies a set E of SSoD policies.
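Definition 1 translates almost directly into code. Below is a minimal sketch (our own; `auth_p` and `sat_ssod` are assumed names, not from the paper) that checks the quantified condition ∀U' ⊆ U : |U'| < k ⇒ ∪_{u∈U'} auth_p(u) ⊉ P by enumerating the subsets of U with fewer than k users:

```python
from itertools import combinations

def auth_p(UP, u):
    """The permissions user u holds under the relation UP ⊆ U × P."""
    return {p for (v, p) in UP if v == u}

def sat_ssod(UP, P, U, k):
    """Definition 1: every U' ⊆ U with |U'| < k must NOT jointly cover P."""
    for r in range(k):
        for group in combinations(sorted(U), r):
            if set(P) <= set().union(set(), *(auth_p(UP, u) for u in group)):
                return False
    return True

# the permissions of a sensitive task split across two users: k = 2 forces cooperation
UP = {("u1", "p1"), ("u2", "p2")}
print(sat_ssod(UP, {"p1", "p2"}, {"u1", "u2"}, 2))  # True: no single user covers P
```

The enumeration is exponential in k; this is only an illustration, and efficient algorithms cannot be expected in general for the related checking problems.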
A utility policy is also specified by giving a predicate on sets of executions: if conditions on (users, permissions) are satisfied, then a set U of users is obligated to possess all the permissions in P. We now introduce the notion of strict availability (SA) policies, an example of utility policies that state properties about enabling access in access control. An SA policy requires that the cooperation of at most a certain number of users suffices to perform a task.
Definition 2. A strict availability (SA) policy ensures that every size-t subset of U covers all the permissions in P needed to complete a task. It is formally defined as follows:

P and U denote the set of permissions and the set of users, respectively.

UP ⊆ U × P, is a userpermission assignment relation.

auth_p_{ ε }(u) = {p | (p ∈ P) ⋀ ((u, p) ∈ UP)}.

∀(P, U, t) ∈ SA, ∀U' ⊆ U : |U'| = t ⇒ ∪_{u∈U'} auth_p_{ ε }(u) ⊇ P,
where P = {p_{1}, ..., p_{ m } }, U = {u_{1}, ..., u_{ n } }, each p_{ i } in P is a permission and each u_{ j } in U is a user, and m, n, and t are integers such that 1 ≤ t ≤ min(m, n), where min returns the smaller of the two values; the variable t in size-t denotes the cardinality of a subset. We write an SA policy as sa⟨P, U, t⟩. That an access control state ε satisfies an SA policy f = sa⟨P, U, t⟩ is denoted by sat_{ f } (ε), and sat_{ F } (ε) denotes that ε satisfies a set F of SA policies.
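Definition 2 can be checked the same way, by enumerating the size-t subsets of U (again our own sketch, with assumed names):

```python
from itertools import combinations

def sat_sa(UP, P, U, t):
    """Definition 2: every size-t subset U' of U must jointly cover P."""
    auth = {u: {p for (v, p) in UP if v == u} for u in U}
    return all(set(P) <= set().union(*(auth[u] for u in group))
               for group in combinations(sorted(U), t))

# u3 holds both permissions; u1 and u2 hold one each
UP = {("u1", "p1"), ("u2", "p2"), ("u3", "p1"), ("u3", "p2")}
print(sat_sa(UP, {"p1", "p2"}, {"u1", "u2", "u3"}, 2))  # True: any pair covers P
print(sat_sa(UP, {"p1", "p2"}, {"u1", "u2", "u3"}, 1))  # False: u1 alone lacks p2
```

Here any two of the three users can jointly complete the task, but u_1 alone cannot, so the policy holds for t = 2 and fails for t = 1.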
Definition 3. UCP (the Utility Checking Problem) is defined as follows: given an access control state ε and a set F of SA policies, determine whether sat_{ F } (ε) is true.
Theorem 1. UCP is in P.
PROOF. Given an access control state ε and a set F of SA policies, if sat_{ f } (ε) is true for each SA policy f = sa⟨P, U, t⟩ in F, then sat_{ F } (ε) is true. In the following, we prove that sat_{ f } (ε) is true if and only if each permission p ∈ P is assigned to no fewer than (|U| + 1 − t) users in the user set U, where |U| denotes the cardinality of U.
For the "only if" part, sat_{ f } (ε) being true means that the users in each size-t subset of U together possess all the permissions in P. Suppose, for the sake of contradiction, that sat_{ f } (ε) is true and there exists a permission p ∈ P that is assigned to at most (|U| − t) users in U. Then we can find a user set U' with |U'| = t such that no user in U' possesses p. Thus sat_{ f } (ε) is false, which contradicts the assumption; therefore, each permission p ∈ P must be assigned to no fewer than (|U| + 1 − t) users in U.
For the "if" part, if each permission p ∈ P is assigned to no fewer than (|U| + 1 − t) users in U, then every size-t user set U' contains at least one user who possesses p. Thus all the permissions in P are covered by each size-t user set; in other words, the users in each size-t user set together are authorized for all permissions in P. Therefore, sat_{ f } (ε) is true.
Together with the above discussion, we obtain an efficient algorithm for determining whether sat_{ F } (ε) is true: for each SA policy sa⟨P, U, t⟩ in F and each permission p ∈ P, count the users in U to whom p is assigned, and compare this number with (|U| + 1 − t). This algorithm has a time complexity of O(N_{ U } N_{ P } M), where N_{ U } is the number of users in U, N_{ P } is the number of permissions in P, and M is the number of SA policies. □
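The counting criterion from the proof, namely that sa⟨P, U, t⟩ holds iff every p ∈ P is assigned to at least |U| + 1 − t users of U, avoids enumerating subsets altogether. A minimal sketch of the algorithm described above (our own rendering; the tuple encoding of policies is an assumption):

```python
def sat_sa_fast(UP, policies):
    """Check a set F of SA policies via the counting criterion of Theorem 1:
    sa<P, U, t> holds iff every p in P is assigned to >= |U| + 1 - t users of U.
    UP is a set of (user, permission) pairs; policies is a list of (P, U, t)."""
    for (P, U, t) in policies:
        need = len(U) + 1 - t
        for p in P:
            holders = sum(1 for u in U if (u, p) in UP)
            if holders < need:
                return False  # some size-t subset of U misses p
    return True

UP = {("u1", "p1"), ("u2", "p1"), ("u2", "p2"), ("u3", "p2")}
print(sat_sa_fast(UP, [({"p1", "p2"}, {"u1", "u2", "u3"}, 2)]))  # True
print(sat_sa_fast(UP, [({"p1", "p2"}, {"u1", "u2", "u3"}, 1)]))  # False
```

In the example, each permission is held by two of the three users, so any two users can cooperate (t = 2), while no single user suffices (t = 1).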
An availability policy ap⟨P, U, t⟩ ensures that there exists a size-t subset of U whose users together possess all the permissions in P [4]. We now show that sa⟨P, U, t⟩ is at least as restrictive as ap⟨P, U, t⟩.
Definition 4. Let P_{1} and P_{2} be two policies. We say that P_{1} is at least as restrictive as P_{2} (denoted by P_{1} ≽ P_{2}) if ∀ε (sat_{P_1}(ε) ⇒ sat_{P_2}(ε)). When P_{1} ≽ P_{2} but not P_{2} ≽ P_{1}, we say that P_{1} is more restrictive than P_{2} (denoted by P_{1} ≻ P_{2}). And when (P_{1} ≽ P_{2}) ⋀ (P_{2} ≽ P_{1}), we say P_{1} and P_{2} are equivalent (denoted by P_{1} ≜ P_{2}).
By definition, the ≽ relation among all policies is a preorder (reflexive and transitive, but two distinct policies may be equivalent), and the ≻ relation is a strict partial order.
Theorem 2. Given an SA policy f = sa⟨P, U, t⟩ and an availability policy g = ap⟨P, U, t⟩, f ≻ g if and only if |U| > t.
PROOF. For the "only if" part, we show that if f ≻ g then |U| > t. Suppose, for the sake of contradiction, that |U| ≤ t. By Definition 2, t ≤ |U|, so |U| = t. For any access control state ε, if sat_{ g } (ε) is true, then (∃U' ⊆ U)(|U'| = t)[∪_{u∈U'} auth_p_{ ε } (u) ⊇ P], and U' = U since |U| = t. Then (∃U' ⊆ U)(|U'| = t)[∪_{u∈U'} auth_p_{ ε } (u) ⊇ P] has the same meaning as (∀U' ⊆ U)(|U'| = t)[∪_{u∈U'} auth_p_{ ε } (u) ⊇ P]. That means f ≜ g, which contradicts the assumption. Therefore, if f ≻ g then |U| > t.
For the "if" part, we show that if |U| > t then f ≻ g. By Definition 2, an access control state ε satisfies f if and only if the users in every size-t subset of U together possess all the permissions in P. Let U' be any subset of U with |U'| = t; the users in U' together possess all the permissions in P, so ε satisfies ap⟨P, U, t⟩. Therefore, ∀ε (sat_{ f } (ε) ⇒ sat_{ g } (ε)), and f ≽ g. We now construct a state ε' that satisfies g but does not satisfy f as follows: assign all permissions in P to only one user u ∈ U, and do not assign any permission in P to any other user in U. Then we can find a user set U' with (U' ⊂ U) ⋀ (|U'| = t) ⋀ (u ∈ U') such that ∪_{u'∈U'} auth_p_{ ε' }(u') ⊇ P, so sat_{ g } (ε') is true. However, for any user set U'' with (U'' ⊂ U) ⋀ (|U''| = t) ⋀ (u ∉ U''), which exists because |U| > t, we have ∪_{u''∈U''} auth_p_{ ε' }(u'') ⊉ P, so sat_{ f } (ε') is false. Therefore, if |U| > t, then f ≻ g. □
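The witness state ε' from the proof can be exhibited concretely. The sketch below (our own illustration; `sat_ap` follows the availability-policy definition from [4], and the names are assumptions) builds a state in which one user holds every permission, so ap⟨P, U, t⟩ is satisfied while sa⟨P, U, t⟩ is not:

```python
from itertools import combinations

def covers(auth, group, P):
    return set(P) <= set().union(set(), *(auth[u] for u in group))

def sat_sa(auth, P, U, t):
    # strict availability: ALL size-t subsets of U cover P
    return all(covers(auth, g, P) for g in combinations(sorted(U), t))

def sat_ap(auth, P, U, t):
    # availability (from [4]): SOME size-t subset of U covers P
    return any(covers(auth, g, P) for g in combinations(sorted(U), t))

# the proof's witness: one user holds every permission, the others hold none
U, P, t = {"u1", "u2", "u3"}, {"p1", "p2"}, 2
auth = {"u1": {"p1", "p2"}, "u2": set(), "u3": set()}
print(sat_ap(auth, P, U, t), sat_sa(auth, P, U, t))  # True False
```

Any pair containing u_1 covers P, so availability holds; the pair {u_2, u_3} covers nothing, so strict availability fails, matching f ≻ g for |U| > t.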
Intuitively, SA policies are a natural complement to SSoD policies in access control; neither SA nor SSoD by itself is sufficient to capture both safety and utility requirements. Without the utility requirement, an access control state can satisfy any SSoD policy simply by containing no user set that covers all the permissions needed to accomplish the sensitive task. Similarly, without the safety requirement, any SA policy can be satisfied by giving all permissions to all users, which allows any single user to accomplish any task. In many cases, it is desirable for an access control system to have both SSoD and SA policies. However, these policies may conflict with each other due to their opposite objectives. Therefore, a formal description of policy inconsistency is necessary before we can detect and resolve it.
Definition 5. CCP (the Consistency Checking Problem) is defined as follows: given a set E of SSoD policies and a set F of SA policies, determine whether there exists an access control state ε such that sat_{ E } (ε) ⋀ sat_{ F } (ε) is true.
Corollary 1. CCP is coNP-complete.
PROOF. That CCP is coNP-complete follows from the fact that determining whether sat_{ E } (ε) is true is coNP-complete (Theorem 1 in [4]), while determining whether sat_{ F } (ε) is true is in P (Theorem 1). □
Example 1. Consider the following set Q of SSoD and SA policies. It is not easy to check whether the policies in Q are consistent.
We now show that the above SSoD and SA policies are inconsistent. Given any access control state ε, if sat_{f_2}(ε) is true, then p_{2} and p_{3} must be authorized to both u_{2} and u_{3}. If sat_{f_1}(ε) is true, then p_{1} must be authorized to either u_{2} or u_{3}. If u_{2} possesses p_{1}, then u_{2} possesses all of the permissions in {p_{1}, p_{2}, p_{3}}, which violates both e_{1} and e_{2}. If u_{3} possesses p_{1}, then u_{3} possesses all of the permissions in {p_{1}, p_{2}, p_{3}}, which violates e_{1}. Therefore, there does not exist an access control state ε that satisfies all four policies in Q.
In general, there may be many policy inconsistencies in a large access control policy set, so the following issues should be considered. (1) A large number of policy inconsistencies are possible, but many of them may result from a small number of policies; the key is to identify the minimum number of policies that are responsible for the inconsistencies. (2) Once all the inconsistencies are known, we must determine resolutions that remove them with little effort, and estimate their impact on the policies. As with traditional policy conflict resolution, the basic mechanism for resolving policy inconsistencies is the same: remove some policies from the policy set. The primary difficulty is to determine which policies should be removed so that the resolution addresses the inconsistency most effectively.
3. Policy inconsistency resolution approaches
In this section, we provide a formal examination of policy inconsistency resolution for the coexistence of SSoD and SA policies.
3.1. Reducing complexity
Once all the inconsistencies are known, we must find a way to resolve them. However, determining which policy to remove is difficult because there may be many policy inconsistencies. In order to simplify the resolution task, we consider as few policies as possible. Thus we reduce the complexity of reasoning about policy inconsistencies by the techniques of static pruning and minimal inconsistency cover set.
3.1.1. Static pruning
SSoD and SA policies can conflict with each other due to their opposite objectives. In general, not all SSoD or SA policies need to be taken into account, as some of them cannot cause inconsistencies. The following theorem asserts that special cases of SSoD (or SA) policies do not affect their compatibility with SA (or SSoD) policies. This enables us to remove them from consideration, which greatly simplifies the problem.
Theorem 3. Let Q = {e_{1}, ..., e_{ m }, f_{1}, ..., f_{ n } }, where e_{ i } = ssod⟨P_{ i }, U_{ i }, k_{ i }⟩ (1 ≤ i ≤ m) and f_{ j } = sa⟨P'_{ j }, U'_{ j }, t_{ j }⟩ (1 ≤ j ≤ n), and let R = ∪_{j=1}^{n} P'_{ j }, T = ∪_{j=1}^{n} U'_{ j }, S = ∪_{i=1}^{m} U_{ i }, and W = ∪_{i=1}^{m} P_{ i }. Initialize Q' = Q. If ∃e_{ i } ∈ Q [(|P_{ i } \ R| > 0) ⋁ (U_{ i } ∩ T = ∅)], then let Q' = Q' \ {e_{ i } }; if ∃f_{ j } ∈ Q [(|U'_{ j } ∩ S| < t_{ j }) ⋁ (P'_{ j } ∩ W = ∅)], then let Q' = Q' \ {f_{ j } }. Q is consistent if and only if Q' is consistent.
PROOF. For the "only if" part, it is clear that if Q is consistent then Q' is consistent as Q' ⊆ Q.
For the "if" part, we show that if Q' is consistent then Q is consistent. Q' being consistent implies that there exists an access control state ε that satisfies all policies in Q'. We construct a new state ε' that satisfies both Q' and Q as follows. For each e_{ i } ∈ Q \ Q' with |P_{ i } \ R| > 0, add all users in U_{ i } to ε, but do not assign the permissions in P_{ i } \ R to any user. In this way, ε' satisfies e_{ i }, since no set of users in U_{ i } can together hold all the permissions in P_{ i } (the permissions in P_{ i } \ R remain unassigned); note that adding new users does not lead to inconsistency of the policies in Q'. If U_{ i } ∩ T = ∅, not assigning any permission in P_{ i } to any user in U_{ i } does not lead to inconsistency of the policies in Q', and the new state satisfies e_{ i }. For each f_{ j } ∈ Q \ Q' with |U'_{ j } ∩ S| < t_{ j }, add all users in U'_{ j } to ε, and assign all permissions in P'_{ j } to each user in U'_{ j } \ S. Then every size-t_{ j } user set in U'_{ j } contains at least one user u ∈ U'_{ j } \ S; as u holds all the permissions in P'_{ j }, each size-t_{ j } user set in U'_{ j } together holds all the permissions in P'_{ j }. In this way, ε' satisfies f_{ j }; note that adding new users and assigning permissions to these new users does not violate the policies in Q'. If P'_{ j } ∩ W = ∅, assigning the permissions in P'_{ j } to each user in U'_{ j } does not lead to inconsistency of the policies in Q', and the new state ε' satisfies f_{ j }. Therefore, Q is consistent if and only if Q' is consistent. □
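Theorem 3's pruning conditions can be applied in a single pass over the policy set. The following sketch is our own (the tuple encoding of policies and the function name are assumptions): it keeps an SSoD policy only when all of its permissions appear in some SA policy and its users overlap T, and keeps an SA policy only when at least t of its users appear in S and its permissions meet W:

```python
def static_prune(ssods, sas):
    """One pass of Theorem 3's static pruning. ssods: list of (P, U, k)
    triples; sas: list of (P, U, t) triples. Returns the policies kept."""
    R = set().union(*(set(P) for (P, U, t) in sas)) if sas else set()
    T = set().union(*(set(U) for (P, U, t) in sas)) if sas else set()
    S = set().union(*(set(U) for (P, U, k) in ssods)) if ssods else set()
    W = set().union(*(set(P) for (P, U, k) in ssods)) if ssods else set()
    kept_e = [(P, U, k) for (P, U, k) in ssods
              # drop e_i when |P_i \ R| > 0 or U_i and T are disjoint
              if not (set(P) - R) and set(U) & T]
    kept_f = [(P, U, t) for (P, U, t) in sas
              # drop f_j when |U'_j ∩ S| < t_j or P'_j and W are disjoint
              if len(set(U) & S) >= t and set(P) & W]
    return kept_e, kept_f

ssods = [({"p1", "p2", "p3"}, {"u1", "u2", "u3"}, 2),
         ({"p9"}, {"u1", "u2"}, 2)]  # p9 is demanded by no SA policy
sas = [({"p1"}, {"u2", "u3"}, 2), ({"p2", "p3"}, {"u2", "u3"}, 1)]
kept_e, kept_f = static_prune(ssods, sas)
print(len(kept_e), len(kept_f))  # 1 2: the p9 policy is pruned
```

Because removing policies shrinks R, T, S, and W, the pass can be iterated to a fixpoint to prune further.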
3.1.2. Minimal inconsistency cover set
There may be many policy inconsistencies in a policy set that contains a large number of SSoD and SA policies. But many of these inconsistencies may result from only a small number of policies, and they may be disjoint from each other. The minimal inconsistency cover set is the minimal set of policies that represents a policy inconsistency. Therefore, the key question is how to organize the policy inconsistencies so as to examine the minimum number of policies that are responsible for all of them.
Definition 6. A minimal inconsistency cover (MIC) set is a set of policies that is responsible for a policy inconsistency and contains the smallest number of policies.
Note that for a policy inconsistency, there might be several policy sets that are responsible for it. By definition, a set S is an MIC set if there does not exist another set S' that is responsible for this inconsistency with S' ⊂ S. We have the following property for MIC sets.
Theorem 4. Given any two MIC sets A and B, let P_{ A } denote the union of the permissions of all policies in A, and U_{ A } the union of the users of all policies in A; P_{ B } and U_{ B } are defined analogously. Then (P_{ A } ∩ P_{ B } = ∅) ⋁ (U_{ A } ∩ U_{ B } = ∅).
PROOF. Suppose, for the sake of contradiction, that A and B share both permissions and users. Then one of the following cases holds:
 (1)
permissions and users for {e_{1}, ..., e_{ m } } ⊆ A (m ≥ 1) and {e'_{1}, ..., e'_{ n } } ⊆ B (n ≥ 1) are shared;
 (2)
permissions and users for {e_{1}, ..., e_{ m } } ⊆ A (m ≥ 1) and {f_{1}, ..., f_{ n } } ⊆ B (n ≥ 1) are shared;
 (3)
permissions and users for {f_{1}, ..., f_{ m } } ⊆ A (m ≥ 1) and {f'_{1}, ..., f'_{ n } } ⊆ B (n ≥ 1) are shared;
 (4)
permissions and users for {e_{1}, ..., e_{ m }, f_{1}, ..., f_{ n } } ⊆ A (m ≥ 1, n ≥ 1) and {e'_{1}, ..., e'_{ l }, f'_{1}, ..., f'_{ k } } ⊆ B (l ≥ 1, k ≥ 1) are shared.
For case (1), there exists at least one permission p ∈ P_{{e_1, ..., e_m}} that does not belong to any other policy in A. By Theorem 3, {e_{1}, ..., e_{ m } } does not affect the inconsistency of the other policies in A, and thus {e_{1}, ..., e_{ m } } can be removed from A. This contradicts the assertion that A is an MIC set. Moreover, there exists at least one permission p ∈ P_{{e'_1, ..., e'_n}} that does not belong to any other policy in B, so {e'_{1}, ..., e'_{ n } } can likewise be removed from B. For cases (2) and (3), the proof is essentially the same as for case (1); note that there exists at least one user u that belongs to the policies in {f_{1}, ..., f_{ n } } but to no other policy in B, so {f_{1}, ..., f_{ n } } can be removed from B by Theorem 3. For case (4), no policy can be removed from {e_{1}, ..., e_{ m }, f_{1}, ..., f_{ n } } ∪ {e'_{1}, ..., e'_{ l }, f'_{1}, ..., f'_{ k } }, which means these policies may conflict with each other due to their opposite objectives. Therefore, these policies should be included in only one MIC set, which contradicts the assertion that A and B are two distinct MIC sets. Together, the above cases show that for any two MIC sets, (P_{ A } ∩ P_{ B } = ∅) ⋁ (U_{ A } ∩ U_{ B } = ∅). □
We now give an algorithm that generates the MIC sets of an access control policy set. Algorithm 1 relies on the presumption that all SSoD and SA policies that cannot cause policy inconsistencies have already been removed from consideration using the static pruning technique. Given a policy set Q, the algorithm first divides Q into several subsets (steps 1 to 20); it then combines the sets that share permissions and users (steps 21 to 27). The algorithm has a worst-case time complexity of O(mnMN), where m is the number of SSoD policies, n is the number of SA policies, M is the number of users, and N is the number of permissions. The fact that CCP is intractable (coNP-complete) means that there exist difficult problem instances that take exponential time in the worst case, but efficient algorithms exist when the number of policies is not too large; MIC sets help to reduce the complexity of reasoning about policy inconsistencies.
By Theorem 3, no policy can be removed from consideration by static pruning. But the permissions in {p_{4}, p_{5}, p_{6}} and the users in {u_{4}, u_{5}, u_{6}} occur only in {e_{3}, e_{4}, f_{3}, f_{4}}, and the policies in {e_{3}, e_{4}, f_{3}, f_{4}} do not affect the consistency of {e_{1}, e_{2}, f_{1}, f_{2}}. By Algorithm 1, Q' can be divided into two policy sets Q'_{1} = {e_{1}, e_{2}, f_{1}, f_{2}} and Q'_{2} = {e_{3}, e_{4}, f_{3}, f_{4}}, such that each set is an MIC set. As shown in Example 1, the policies in Q'_{1} are inconsistent; it is easy to check that the policies in Q'_{2} are inconsistent, too. Continuing from Example 2, assume that there exist two further policies e_{5} = ssod⟨{p_{1}, p_{2}, p_{4}, p_{5}, p_{6}}, {u_{1}, u_{2}, u_{3}, u_{4}, u_{5}, u_{6}}, 3⟩ and f_{5} = sa⟨{p_{1}, p_{2}, p_{3}, p_{4}, p_{5}, p_{6}}, {u_{1}, u_{2}, u_{4}, u_{6}}, 3⟩; then the whole set {e_{1}, e_{2}, e_{3}, e_{4}, e_{5}, f_{1}, f_{2}, f_{3}, f_{4}, f_{5}} forms a single MIC set.
3.2. Measuring the safetyutility tradeoff
Given an MIC set for a policy inconsistency, there may be many choices for resolving the inconsistency. An interesting question is then: which choice is optimal? Our methodology helps policy administrators answer this question.
Algorithm 1. ComputeMIC (Q)
Input: Q = {e_{1}, ..., e_{ m }, f_{1}, ..., f_{ n } }
Output: the MIC sets of Q: S_{1}, ..., S_{ x }
1: initialize S_{1} = ∅, i = 1, j = 1, k = 1;
2: while (i < m) ⋁ (j < n) do
3:   if (P_{e_i} ∩ P_{S_k} ≠ ∅) ⋀ (U_{e_i} ∩ U_{S_k} ≠ ∅) then
4:     S_{ k } = S_{ k } ∪ {e_{ i }};
5:     i++;
6:   else
7:     k++;
8:     continue;
9:   end if
10:  k = 1;
11:  if (P_{f_j} ∩ P_{S_k} ≠ ∅) ⋀ (U_{f_j} ∩ U_{S_k} ≠ ∅) then
12:    S_{ k } = S_{ k } ∪ {f_{ j }};
13:    j++;
14:  else
15:    k++;
16:    continue;
17:  end if
18:  k = 1;
19: end while
20: MIC(Q) ← S_{1}, ..., S_{ x };
21: for S_{ k } ∈ MIC(Q) do
22:   if ∃S_{ t } ∈ MIC(Q) [(P_{S_t} ∩ P_{S_k} ≠ ∅) ⋀ (U_{S_t} ∩ U_{S_k} ≠ ∅)] then
23:     MIC(Q) = MIC(Q) \ {S_{ t }} \ {S_{ k }};
24:     S_{ k } = S_{ k } ∪ S_{ t };
25:     MIC(Q) ← S_{ k };
26:   end if
27: end for
28: return MIC(Q).
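The grouping performed by Algorithm 1 can be rendered compactly as a fixpoint merge: two groups are united whenever their permission sets and their user sets both intersect. The sketch below is our own simplified rendering (it returns groups of policy indices, mirrors the combine phase of steps 21 to 27, and presumes static pruning has already been applied):

```python
def compute_mic(policies):
    """Group policies into candidate MIC sets: merge two groups whenever
    their permission sets AND user sets intersect, until no merge applies.
    policies: list of (perms, users) pairs; returns lists of policy indices."""
    def perms_of(group):
        return set().union(*(set(policies[i][0]) for i in group))

    def users_of(group):
        return set().union(*(set(policies[i][1]) for i in group))

    groups = [[i] for i in range(len(policies))]
    merged = True
    while merged:
        merged = False
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                if perms_of(groups[a]) & perms_of(groups[b]) \
                        and users_of(groups[a]) & users_of(groups[b]):
                    groups[a] += groups.pop(b)  # combine the two groups
                    merged = True
                    break
            if merged:
                break
    return groups

pols = [({"p1", "p2", "p3"}, {"u1", "u2", "u3"}), ({"p2", "p3"}, {"u2", "u3"}),
        ({"p4", "p5"}, {"u4", "u5"}), ({"p5", "p6"}, {"u5", "u6"})]
print(sorted(sorted(g) for g in compute_mic(pols)))  # [[0, 1], [2, 3]]
```

Adding a policy that bridges the two groups, e.g. one over {p1, p4} and {u1, u4}, collapses everything into a single group, matching the behavior described after Example 2.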
Example 3. Let us consider the same policies as those from Example 1. After removing some policies from Q, the remaining policies are consistent with each other. For example, resolving the policy inconsistency admits the following choices.

Removing only one policy: {e_{1}}, {f_{1}}, or {f_{2}}.

Removing two policies: {e_{1}, e_{2}}, {e_{1}, f_{1}}, {e_{1}, f_{2}}, {e_{2}, f_{1}}, {e_{2}, f_{2}}, or {f_{1}, f_{2}}.

Removing three policies: {e_{1}, e_{2}, f_{1}}, {e_{1}, e_{2}, f_{2}}, {e_{1}, f_{1}, f_{2}}, or {e_{2}, f_{1}, f_{2}}.
Currently there is no method for measuring the safety-utility tradeoff in policy inconsistency resolution. Removing SSoD policies results in safety loss for the overall safety requirement in Q; similarly, removing SA policies results in utility loss for the overall utility requirement in Q. Hence, before making a choice, one must ensure that the safety loss and utility loss are limited to an acceptable level. To use our method, one must choose a measure for safety loss (S_{ loss }) and utility loss (U_{ loss }).
Definition 7. Let e_{1} and e_{2} be two SSoD policies. We say that S_loss^{e_1} ≥ S_loss^{e_2} if and only if e_{1} ≽ e_{2}, and S_loss^{e_1} > S_loss^{e_2} if and only if e_{1} ≻ e_{2}, where S_loss^{e_1} denotes the safety loss caused by removing e_{1}. Intuitively, removing the more restrictive policy causes more safety (or utility) loss.
Theorem 5. For any SSoD policies e_{1} = ssod⟨P_{1}, U_{1}, k_{1}⟩ and e_{2} = ssod⟨P_{2}, U_{2}, k_{2}⟩, e_{1} ≽ e_{2} if and only if (U_{1} ⊇ U_{2}) ⋀ (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|).
PROOF. For the "if" part, given (U_{1} ⊇ U_{2}) ⋀ (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|), we show that ∀ε (¬sat_{e_2}(ε) ⇒ ¬sat_{e_1}(ε)). There are two cases: (1) P_{1} ⊆ P_{2}, and (2) P_{1} ⊃ P_{2}. ¬sat_{e_2}(ε) being true means that there exist k_{2} − 1 users in U_{2} who together hold all the permissions in P_{2}. For case (1), these k_{2} − 1 users also lie in U_{1} and together hold all the permissions in P_{1}, since (P_{1} ⊆ P_{2}) ⋀ (U_{1} ⊇ U_{2}), and (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|) ⇒ (k_{1} − 1) ≥ (k_{2} − 1). Therefore, there exist at most k_{1} − 1 users in U_{1} who together hold all the permissions in P_{1}; in other words, ¬sat_{e_1}(ε) is true. For case (2), there also exist k_{2} − 1 users in U_{1} who together hold all the permissions in P_{2}, since U_{1} ⊇ U_{2}; at most |P_{1} \ P_{2}| further users together hold the permissions in P_{1} \ P_{2}, and (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|) ⇒ (k_{2} − 1) + |P_{1} \ P_{2}| ≤ k_{1} − 1. Thus there exist at most k_{1} − 1 users in U_{1} who together hold all the permissions in P_{1} = P_{2} ∪ (P_{1} \ P_{2}), and sat_{e_1}(ε) is again false. Therefore, ∀ε (¬sat_{e_2}(ε) ⇒ ¬sat_{e_1}(ε)) is true.
For the "only if" part, given e_{1} ≽ e_{2}, we show that (U_{1} ⊇ U_{2}) ⋀ (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|) is true. Suppose, for the sake of contradiction, that ¬((U_{1} ⊇ U_{2}) ⋀ (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|)) is true; that is, U_{1} ⊇ U_{2} is false or k_{1} ≥ k_{2} + |P_{1} \ P_{2}| is false. If U_{1} ⊇ U_{2} is false, then ∃u ∈ U_{2} \ U_{1}. Take a state ε in which sat_{e_1}(ε) is true, and assign all the permissions in P_{2} to u; then sat_{e_2}(ε) is false, as k_{2} > 1, which contradicts e_{1} ≽ e_{2}. Therefore, U_{1} ⊇ U_{2} must be true. If k_{1} ≥ k_{2} + |P_{1} \ P_{2}| is false, then k_{1} < k_{2} + |P_{1} \ P_{2}|. If P_{1} ⊆ P_{2}, then k_{1} < k_{2}, i.e., k_{1} ≤ k_{2} − 1. sat_{e_1}(ε) being true means that at least k_{1} users in U_{1} are required to together hold all the permissions in P_{1}. Assume that in ε there exist k_{1} users in U_{1} who together hold all the permissions in P_{1}; then there exist k_{2} − 1 users in U_{2} who together hold all the permissions in P_{2} (let U_{1} = U_{2}, and let these k_{1} users also hold all the permissions in P_{2} \ P_{1}), so sat_{e_2}(ε) is false. If P_{1} ⊃ P_{2}, let k_{1} < k_{2} + |P_{1} \ P_{2}|; given an access control state ε in which sat_{e_1}(ε) is true, assign the permissions in P_{1} \ P_{2} to |P_{1} \ P_{2}| different users who hold no other permission in P_{1}; then k_{1} − |P_{1} \ P_{2}| users together with them hold all the permissions in P_{1}. Therefore, fewer than k_{2} users in U_{2} together hold all the permissions in P_{2} (let U_{1} = U_{2}), and sat_{e_2}(ε) is false. This contradicts the assumption that e_{1} ≽ e_{2}. Therefore, if e_{1} ≽ e_{2}, then (U_{1} ⊇ U_{2}) ⋀ (k_{1} ≥ k_{2} + |P_{1} \ P_{2}|). □
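Theorem 5 gives a directly computable test for the S_loss ordering of Definition 7. A minimal sketch (our own; the function name and tuple encoding are assumptions):

```python
def at_least_as_restrictive_ssod(e1, e2):
    """Theorem 5's test: ssod<P1, U1, k1> is at least as restrictive as
    ssod<P2, U2, k2> iff U1 is a superset of U2 and k1 >= k2 + |P1 - P2|."""
    (P1, U1, k1), (P2, U2, k2) = e1, e2
    return set(U1) >= set(U2) and k1 >= k2 + len(set(P1) - set(P2))

e1 = ({"p1", "p2"}, {"u1", "u2", "u3"}, 3)
e2 = ({"p1", "p2"}, {"u1", "u2"}, 2)
print(at_least_as_restrictive_ssod(e1, e2),
      at_least_as_restrictive_ssod(e2, e1))  # True False -> e1 is more restrictive
```

For example, widening the user set and raising k by the number of extra permissions yields a policy that is at least as restrictive, and hence at least as costly to remove.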
Definition 8. Let f_{1} and f_{2} be two SA policies. We say that U_loss^{f_{1}} ≥ U_loss^{f_{2}} if and only if f_{1} ≽ f_{2}, and U_loss^{f_{1}} > U_loss^{f_{2}} if and only if f_{1} ≻ f_{2}.
Theorem 6. For any SA policies f_{1} = sa⟨P_{1}, U_{1}, t_{1}⟩ and f_{2} = sa⟨P_{2}, U_{2}, t_{2}⟩, f_{1} ≽ f_{2} if and only if (P_{1} ⊇ P_{2}) ⋀ (U_{1} ⊇ U_{2}) ⋀ (t_{1} ≤ t_{2}).
PROOF. For the "if" part, given (P_{1} ⊇ P_{2}) ⋀ (U_{1} ⊇ U_{2}) ⋀ (t_{1} ≤ t_{2}), we show that ∀ε(sat_{f_{1}}(ε) ⇒ sat_{f_{2}}(ε)) is true. sat_{f_{1}}(ε) being true means that every size-t_{1} user set U_{1}′ ⊆ U_{1} together has all the permissions in P_{1}. Since U_{2} ⊆ U_{1} and t_{1} ≤ t_{2}, every size-t_{2} user set U_{2}′ ⊆ U_{2} contains a size-t_{1} subset U_{1}′ ⊆ U_{1}, and ⋃_{u∈U_{1}′} auth_P_{ε}(u) ⊇ P_{1} ⊇ P_{2}. Therefore, every size-t_{2} user set from U_{2} together has all the permissions in P_{2}, i.e., sat_{f_{2}}(ε) is also true.
For the "only if" part, given f_{1} ≽ f_{2}, we show that (P_{1} ⊇ P_{2}) ⋀ (U_{1} ⊇ U_{2}) ⋀ (t_{1} ≤ t_{2}) is true. Suppose, for the sake of contradiction, that ¬((P_{1} ⊇ P_{2}) ⋀ (U_{1} ⊇ U_{2}) ⋀ (t_{1} ≤ t_{2})) is true; thus (P_{1} ⊉ P_{2}) ⋁ (U_{1} ⊉ U_{2}) ⋁ (t_{1} > t_{2}) is true. If P_{1} ⊉ P_{2}, then ∃p ∈ P_{2} \ P_{1}. Assume that there exists an access control state ε in which sat_{f_{1}}(ε) is true. Let p not be assigned to any user in U_{2}; this does not affect sat_{f_{1}}(ε), but sat_{f_{2}}(ε) is false, as no size-t_{2} user set from U_{2} can together cover P_{2}. This contradicts the assumption, so P_{1} ⊇ P_{2} is true.
If U_{1} ⊇ U_{2} is false, then ∃u ∈ U_{2} \ U_{1}. We construct a state ε that makes sat_{f_{1}}(ε) true but sat_{f_{2}}(ε) false. By Theorem 1, sat_{f}(ε) being true means that every size-t user set from U covers the permission set P. The discussion above has shown that P_{1} ⊇ P_{2}; let t_{1} = t_{2}. Assign each permission p ∈ P_{1} to |U_{1}| + 1 − t_{1} users in U_{1} (choosing the users not holding p from U_{1} ∩ U_{2} where possible), and assign no permissions to u. Then every size-t_{1} user set from U_{1} contains a holder of each permission, so sat_{f_{1}}(ε) is true; but the size-t_{2} user set from U_{2} consisting of u together with t_{1} − 1 users not holding some p ∈ P_{2} does not cover P_{2}, so sat_{f_{2}}(ε) is false. This contradicts the assumption, and thus U_{1} ⊇ U_{2} is true.
If t_{1} ≤ t_{2} is false, then t_{1} > t_{2}. Let U_{1} = U_{2} (as above), and assign each permission p ∈ P_{1} to exactly |U_{1}| + 1 − t_{1} users in U_{1}; then every size-t_{1} user set from U_{1} contains a holder of each permission, so sat_{f_{1}}(ε) is true. For any p ∈ P_{2}, the t_{1} − 1 users not holding p contain a size-t_{2} user set U″ (since t_{1} > t_{2}) whose members do not together have all the permissions in P_{2}; in other words, sat_{f_{2}}(ε) is false. This contradicts the assumption, and thus t_{1} ≤ t_{2} is true. Consequently, if f_{1} ≽ f_{2}, then (P_{1} ⊇ P_{2}) ⋀ (U_{1} ⊇ U_{2}) ⋀ (t_{1} ≤ t_{2}). □
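The dominance conditions established above (Theorems 5 and 6) are simple set and counter comparisons, so they can be checked mechanically. The following is a minimal Python sketch, assuming policies are represented as plain permission/user sets with an integer threshold; the function names are illustrative, not from the paper.

```python
def ssod_dominates(P1, U1, k1, P2, U2, k2):
    """Theorem 5 condition: e1 >= e2 iff U1 is a superset of U2
    and k1 >= k2 + |P1 \\ P2|."""
    return U1 >= U2 and k1 >= k2 + len(P1 - P2)

def sa_dominates(P1, U1, t1, P2, U2, t2):
    """Theorem 6 condition: f1 >= f2 iff P1 ⊇ P2, U1 ⊇ U2 and t1 <= t2."""
    return P1 >= P2 and U1 >= U2 and t1 <= t2
```

For example, ssod⟨{p1, p2}, {a, b, c}, 3⟩ dominates ssod⟨{p1}, {a, b}, 2⟩, since 3 ≥ 2 + |{p2}|.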
After computing the rank of S_{ loss } for each SSoD policy and of U_{ loss } for each SA policy, a fundamental problem in inconsistency resolving is how to make the right tradeoff between safety and utility. However, it is inappropriate to compare safety with utility directly. The most important reason is that removing SSoD policies increases the safety loss of the policy set as a whole but does not increase the utility gain; similarly, removing SA policies increases the utility loss but does not increase the safety gain. For example, if we choose to remove {e_{1}, e_{2}} in Example 5, then S_{ loss } = 100% and U_{ loss } = 0%; if we choose to remove {f_{1}, f_{2}}, then S_{ loss } = 0% and U_{ loss } = 100%.
If safety and utility cannot be directly compared, how should one weigh them when resolving inconsistencies in a policy set? Given a number of policy sets that are candidates for removal, we measure the safety loss S_{ loss } and the utility loss U_{ loss } of each, obtaining a set of (S_{ loss }, U_{ loss }) pairs, one per candidate. An ideal (but generally unachievable) choice would have both the smallest S_{ loss } and the smallest U_{ loss }. We therefore need to be able to compare two different (S_{ loss }, U_{ loss }) pairs.
Definition 9. Given two pairs (S_{ loss }, U_{ loss })_{1} and (S_{ loss }, U_{ loss })_{2}, we define (S_{ loss }, U_{ loss })_{1} ≤ (S_{ loss }, U_{ loss })_{2} if and only if (S_{loss}^{1} ≤ S_{loss}^{2}) ⋀ (U_{loss}^{1} ≤ U_{loss}^{2}), and (S_{ loss }, U_{ loss })_{1} < (S_{ loss }, U_{ loss })_{2} if and only if (S_{loss}^{1} < S_{loss}^{2}) ⋀ (U_{loss}^{1} < U_{loss}^{2}).
Definition 10. Let A and B be two policy sets; removing A causes (S_{ loss }, U_{ loss })_{ A }, and removing B causes (S_{ loss }, U_{ loss })_{ B }. We say that the choice of removing A is at least as optimal as removing B (denoted (S_{ loss }, U_{ loss })_{ A } ⊵ (S_{ loss }, U_{ loss })_{ B }) if (S_{ loss }, U_{ loss })_{ A } ≤ (S_{ loss }, U_{ loss })_{ B }, and that removing A is better than removing B (denoted (S_{ loss }, U_{ loss })_{ A } ⊳ (S_{ loss }, U_{ loss })_{ B }) if (S_{ loss }, U_{ loss })_{ A } < (S_{ loss }, U_{ loss })_{ B }.
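Definitions 9 and 10 amount to componentwise comparison of loss pairs, a standard Pareto ordering. A minimal Python sketch, with pairs represented as (s_loss, u_loss) tuples and illustrative function names:

```python
def at_least_as_optimal(a, b):
    """Definition 10's ⊵: removing A is at least as optimal as removing B
    iff both loss components of A are no larger than those of B."""
    return a[0] <= b[0] and a[1] <= b[1]

def strictly_better(a, b):
    """Definition 10's ⊳: both loss components are strictly smaller."""
    return a[0] < b[0] and a[1] < b[1]
```

Note the strict relation requires both components to decrease, so two choices can be incomparable, which is why several candidates may remain "ideal".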
Example 4. Let us consider the following policy sets from Example 3 that can be removed to resolve the policy inconsistency. S_{1} = {e_{1}}, S_{2} = {f_{1}}, S_{3} = {e_{1}, e_{2}}, S_{4} = {f_{1}, f_{2}}, S_{5} = {e_{1}, e_{2}, f_{1}}.
Obviously, ${\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{1}}<{\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{3}}<{\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{5}}$, and ${\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{2}}<{\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{4}}<{\left({S}_{loss},\phantom{\rule{2.77695pt}{0ex}}{U}_{loss}\right)}_{{S}_{5}}$. Thus S_{1} and S_{2} are two ideal choices to resolve the policy inconsistency.
3.3. Prioritized-based resolution
The notion of priority is very important in the study of knowledge-based systems, since with priorities inconsistencies have a better chance of being resolved. The following subsections present two prioritized-based approaches to deal with policy inconsistencies. We first present the possibilistic logic approach, which selects one consistent subbase; we then give the lexicographical inference approach, which selects several maximally consistent subbases [7]. We assume that knowledge bases Ψ are prioritized. Prioritized knowledge bases have the form Ψ = Ψ^{E} ∪ Ψ^{F}, where Ψ^{E} = S_{1}^{E} ∪ ⋯ ∪ S_{m}^{E} and Ψ^{F} = S_{1}^{F} ∪ ⋯ ∪ S_{n}^{F}, and E and F denote all the SSoD and SA policies in the system, respectively. Formulas in S_{i}^{E} (or S_{i}^{F}) have the same level of priority and have higher priority than those in S_{j}^{E} (or S_{j}^{F}) for j > i. S_{1}^{E} (or S_{1}^{F}) contains the formulas with the highest priority in Ψ, and S_{m}^{E} (or S_{n}^{F}) contains those with the lowest priority.
3.3.1. Possibilistic logic approach
The possibilistic logic approach selects one suitable consistent prioritized subbase of Ψ, whereas the other policies, i.e., those in the complement of this subbase with respect to Ψ,
Algorithm 2. GeneratePoss(Ψ)
Input: knowledge bases Ψ = Ψ ^{ E } ∪ Ψ ^{ F }
Output: Poss(Ψ)
1: initialize Poss(Ψ) = S_{1}^{E} ∪ S_{1}^{F}, i = 1, j = 1;
2: while (i ≤ m && j ≤ n) do
3: if Poss(Ψ) is inconsistent then
4: Poss(Ψ) = Poss(Ψ) \ S_{i}^{E} \ S_{j}^{F};
5: if$Poss\left(\Psi \right)\cup {S}_{i}^{E}$ is consistent then
6: $Poss\left(\Psi \right)=Poss\left(\Psi \right)\cup {S}_{i}^{E}$;
7: i++;
8: else
9: for$e\in {S}_{i}^{E}$do
10: if Poss(Ψ) ∪ {e} is consistent then
11: Poss(Ψ) = Poss(Ψ) ∪ {e};
12: end if
13: end for
14: end if
15: if Poss(Ψ) ∪ S_{j}^{F} is consistent then
16: $Poss\left(\Psi \right)=Poss\left(\Psi \right)\cup {S}_{j}^{F}$;
17: j++;
18: else
19: for$f\in {S}_{j}^{F}$do
20: if Poss(Ψ) ∪ f is consistent then
21: Poss(Ψ) = Poss(Ψ) ∪ f;
22: end if
23: end for
24: end if
25: else
26: i++;
27: j++;
28: $Poss\left(\Psi \right)=Poss\left(\Psi \right)\cup {S}_{i}^{E}\cup {S}_{j}^{F}$;
29: end if
30: end while;
31: return Poss(Ψ).
should be removed. We extract a subbase φ(Ψ) from Ψ, made up of the first x most important and consistent strata (levels): φ(Ψ) = S_{1} ∪ ⋯ ∪ S_{ x }, such that S_{1} ∪ ⋯ ∪ S_{ x } is consistent but S_{1} ∪ ⋯ ∪ S_{x+1} is inconsistent.
Definition 11. We define Poss(Ψ) as the set of preferred consistent possibilistic subbases of Ψ: Poss(Ψ) = {A : A ⊆ Ψ is consistent and there is no consistent B ⊆ Ψ with B ⊃ A}.
We now give an algorithm to compute Poss(Ψ) (shown in Algorithm 2). The algorithm iteratively adds the SSoD and SA policies with higher priority first. Removal of the policies not in Poss(Ψ) is essential to restore consistency among the remaining policies in Ψ. The algorithm has a best-case time complexity of O(mn) and a worst-case time complexity of O(mnM2^{N}), where m is the number of SSoD policies, n is the number of SA policies, M is the number of users, and N is the number of permissions.
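The greedy, stratum-by-stratum construction behind Algorithm 2 can be sketched as follows. This is a simplified variant that walks a single flattened priority order rather than the paired SSoD/SA strata, and `is_consistent` is an abstract callback (the paper decides consistency via the SAT-based check of Section 4.2); both are illustrative assumptions, not the paper's exact procedure.

```python
def generate_poss(strata, is_consistent):
    """Greedy construction of a preferred consistent subbase.

    strata:        list of policy sets, highest priority first.
    is_consistent: predicate on a set of policies (abstracted here).
    """
    poss = set()
    for stratum in strata:
        if is_consistent(poss | stratum):
            poss |= stratum                 # take the whole stratum
        else:
            for p in sorted(stratum):       # element-wise fallback
                if is_consistent(poss | {p}):
                    poss.add(p)
    return poss
```

With strata [{e1}, {f1}, {e2}, {f2, f3}] and a toy consistency predicate forbidding e1 and f3 from coexisting, this yields {e1, f1, e2, f2}, mirroring the example below in which only f3 is dropped.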
By Theorems 5 and 6, we can find that e_{1} ≻ e_{2}, f_{1} ≻ f_{2}. Thus Ψ = Ψ ^{ E } ∪ Ψ ^{ F } , where ${\Psi}^{E}={S}_{1}^{E}\cup {S}_{2}^{E}$, ${\Psi}^{F}={S}_{1}^{F}\cup {S}_{2}^{F}$, ${S}_{1}^{E}=\left\{{e}_{1}\right\}$, ${S}_{2}^{E}=\left\{{e}_{2}\right\}$, ${S}_{1}^{F}=\left\{{f}_{1}\right\}$, ${S}_{2}^{F}=\left\{{f}_{2},\phantom{\rule{2.77695pt}{0ex}}{f}_{3}\right\}$. By Algorithm 2, $Poss\left(\Psi \right)={S}_{1}^{E}\cup {S}_{1}^{F}\cup {S}_{2}^{E}\cup \left\{{f}_{2}\right\}=\left\{{e}_{1},\phantom{\rule{2.77695pt}{0ex}}{e}_{2},\phantom{\rule{2.77695pt}{0ex}}{f}_{1},\phantom{\rule{2.77695pt}{0ex}}{f}_{2}\right\}$. Therefore, the removal of f_{3} is an optimal choice to resolve the policy inconsistency.
3.3.2. Lexicographical inference approach
The possibilistic way of dealing with inconsistency is not entirely satisfactory, since it only considers the first x most important consistent strata with the highest priority. However, less certain formulas that are not responsible for inconsistencies should also be taken into account. The idea of the lexicographical inference approach is to select not just one consistent subbase but several maximally consistent subbases. Obviously, lexicographical inference is more expensive than possibilistic logic.
Definition 13. We define Lex(Ψ) as the set of all preferred consistent lexicographical subbases of Ψ: Lex(Ψ) = {A : A ⊆ Ψ is consistent and there is no consistent B ⊆ Ψ with B ⊳_{ lex } A}.
We now give an algorithm to generate Lex(Ψ), which covers all preferred consistent lexicographical subbases of Ψ. The algorithm is similar to Algorithm 2, with the following improvement: given the knowledge bases Ψ = Ψ^{E} ∪ Ψ^{F}, if Poss(Ψ) ∪ S_{i}^{E} or Poss(Ψ) ∪ S_{j}^{F} is inconsistent, the algorithm does not stop (whereas in Algorithm 2 no policies in S_{k}^{E} or S_{l}^{F} with k > i, l > j are considered further) but keeps adding policies from S_{k}^{E} and S_{l}^{F} to the candidate subbase. Following this enumeration approach, the algorithm tries all possibilities and eventually outputs all preferred consistent lexicographical subbases of Ψ, i.e., Lex(Ψ). In Example 4, there exist two lexicographically consistent subbases A = {e_{1}, e_{2}, f_{1}, f_{2}} and B = {e_{1}, f_{1}, f_{2}, f_{3}}, so Lex(Ψ) = {A, B}.
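The preference ⊳_{ lex } can be illustrated with the common per-stratum cardinality definition: at the highest-priority stratum where the retained counts differ, the preferred subbase retains more formulas. Since the formal definition of ⊳_{ lex } is not restated in this section, the sketch below assumes that standard definition.

```python
def lex_prefers(A, B, strata):
    """A >_lex B: at the first (highest-priority) stratum where the
    numbers of retained formulas differ, A retains more than B."""
    for stratum in strata:
        a, b = len(A & stratum), len(B & stratum)
        if a != b:
            return a > b
    return False  # identical stratum profiles: neither is preferred
```

Under this ordering, the two subbases A and B above have identical profiles on the given strata, which is why both survive into Lex(Ψ).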
4. Illustration and evaluation
The overall procedure of our approach consists of the following steps:

1. Removing SSoD and SA policies which do not cause inconsistencies from our consideration, by static pruning.

2. Generating MIC sets.

3. Consistency checking for each MIC set.

4. Extracting priorities based on the safety-utility tradeoff.

5. Employing the possibilistic logic (or lexicographical inference) approach.
4.1. Running example
We now give a running example to show the validity of our approach for policy inconsistency resolving.
We now implement the proposed approach to resolve the policy inconsistency problem in Q. Firstly, by Theorem 3, we find that e_{4}, e_{5} and f_{5} can be removed from our consideration. Let Q' = {e_{1}, e_{2}, e_{3}, f_{1}, f_{2}, f_{3}, f_{4}}; thus we only need to consider the policies in Q'. Secondly, by Algorithm 1, we obtain two MIC sets, Q_{ A } = {e_{1}, e_{2}, f_{1}, f_{2}, f_{3}} and Q_{ B } = {e_{3}, f_{4}}. Thirdly, we check whether the policies in each MIC set are consistent, and find that the policies in Q_{ A } are inconsistent while the policies in Q_{ B } are consistent. Thus we only need to resolve the policy inconsistency in Q_{ A } (Section 4.2 gives a more detailed description of the consistency checking approach). Fourthly, we measure the safety loss for each SSoD policy and the utility loss for each SA policy. Via Theorems 5 and 6, we find that e_{1} ≻ e_{2} and f_{1} ≻ f_{2}. Thus the prioritized knowledge base has the form Ψ = Ψ^{E} ∪ Ψ^{F}, where Ψ^{E} = S_{1}^{E} ∪ S_{2}^{E}, Ψ^{F} = S_{1}^{F} ∪ S_{2}^{F}, S_{1}^{E} = {e_{1}}, S_{2}^{E} = {e_{2}}, S_{1}^{F} = {f_{1}}, S_{2}^{F} = {f_{2}, f_{3}}. We compute S_{ loss } and U_{ loss } for each SSoD and SA policy, respectively, as follows:

${S}_{loss}^{e}=\frac{rank\left(e\right)}{{\Sigma}_{\left\{{e}^{\prime}\in {\Psi}^{E}\right\}}rank\left({e}^{\prime}\right)}$

${U}_{loss}^{f}=\frac{rank\left(f\right)}{{\Sigma}_{\left\{{f}^{\prime}\in {\Psi}^{F}\right\}}rank\left({f}^{\prime}\right)}$
Let rank(e_{1}) = 2, rank(e_{2}) = 1, rank(f_{1}) = 2, and rank(f_{2}) = rank(f_{3}) = 1. Thus S_{loss}^{e_{1}} = 2/(2 × 1 + 1 × 1) × 100% ≈ 66.7%, S_{loss}^{e_{2}} ≈ 33.3%, U_{loss}^{f_{1}} = 50%, and U_{loss}^{f_{2}} = U_{loss}^{f_{3}} = 25%. Lastly, we employ Algorithm 2 to generate the possibilistic logic subbase Poss(Ψ) = {e_{1}, e_{2}, f_{1}, f_{2}} and compute its safety-utility pair (S_{ loss }, U_{ loss })_{Poss(Ψ)} = (0, 25%). We also generate Lex(Ψ) and find that there exist two lexicographically consistent subbases, Lex(Ψ) = {Q_{1}, Q_{2}}, where Q_{1} = {e_{1}, e_{2}, f_{1}, f_{2}} and Q_{2} = {e_{2}, f_{1}, f_{2}, f_{3}}, with (S_{ loss }, U_{ loss })_{Q_{1}} = (0, 25%) and (S_{ loss }, U_{ loss })_{Q_{2}} = (66.7%, 0%).
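The normalized loss shares above are just each policy's rank divided by the total rank of its policy class. A minimal Python sketch reproducing the arithmetic (the function name is illustrative):

```python
def loss_shares(ranks):
    """Map each policy name to rank(x) / sum of ranks in its class,
    per the S_loss and U_loss formulas above."""
    total = sum(ranks.values())
    return {name: r / total for name, r in ranks.items()}

# Ranks from the running example:
s = loss_shares({'e1': 2, 'e2': 1})          # e1 -> 2/3 (about 66.7%)
u = loss_shares({'f1': 2, 'f2': 1, 'f3': 1})  # f1 -> 1/2, f2 = f3 -> 1/4
```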
The results above can help the policy administrator resolve the policy inconsistency by removing some policies, and thus specify reasonable access control policies. For example, if the safety requirement is more critical than the utility requirement in this running example, the policy administrator can choose to remove f_{3}, as it causes no safety loss but 25% utility loss. Otherwise, he can choose to remove e_{1}, which causes about 66.7% safety loss but no utility loss.
4.2. Performance evaluation
In order to understand the effectiveness of our approach, we have implemented two algorithms and performed several experiments using the running example shown in Section 4.1. One is called the improved algorithm, based on our approach as discussed in the sections above (employing the possibilistic logic approach), whereas the other, called the straightforward algorithm, is based on the consistency checking problem [4]. Both algorithms were implemented in Java. Experiments were carried out on a machine with an Intel(R) Core(TM)2 Duo CPU T5750 running at 2.0 GHz and 2 GB of DDR2 667 MHz RAM, running Microsoft Windows XP Professional.
Straightforward algorithm
(1) Removing SSoD and SA policies which do not cause policy inconsistencies from our consideration, using the "static pruning" technique.
(2) Reducing the number of access control states that need to be considered. Given an access control state ε, for each SA policy f = sa⟨P, U, t⟩, ε satisfies f if and only if each size-t set of users from U together possesses all permissions in P. One only needs to compute the permission set of each size-t subset of U and check whether it is a superset of P. There exist C(|U|, t) size-t user sets for U. If the check returns "no", then the state ε does not satisfy f and need not be considered further. By Lemma 1, following the "least privilege" principle, in order to ensure sat_{ f }(ε) is true it suffices to assign each permission p ∈ P to only (|U| + 1 − t) users in U. This greatly reduces the number of access control states that must be taken into consideration.
(3) Reduction to SAT. Given an SSoD policy e = ssod⟨P, U, k⟩ and an access control state ε, we have shown that determining whether sat_{ e }(ε) is true is a coNP-complete problem [8]. Thus we can use SAT algorithms to solve this problem; the SAT solver we use is SAT4J [9]. The translation works as follows. Given an SSoD policy e = ssod⟨P, U, k⟩ and an access control state ε, for each u_{ i } ∈ U we introduce a propositional variable v_{ i }, which is true if u_{ i } is a member of a size-(k − 1) user set U' ⊆ U that covers all the permissions in P. We then have the following two kinds of constraints. For each p ∈ P, let u_{i_{1}}, u_{i_{2}}, …, u_{i_{x}} be the users who are authorized for p; we add the first kind of constraint, v_{i_{1}} + v_{i_{2}} + ⋯ + v_{i_{x}} ≥ 1, which ensures that all the permissions in P are covered by U'. There are |P| such constraints. Then we add the second kind of constraint, v_{1} + v_{2} + ⋯ + v_{ n } ≤ k − 1 (n = |U|), which ensures that |U'| ≤ k − 1; there is only one such constraint. If the solver returns "true" (satisfiable), then sat_{ e }(ε) is false; otherwise, sat_{ e }(ε) is true.
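For small instances, both checks above can be stated directly by enumeration. The sketch below assumes a state ε represented as a map from users to permission sets; `sat_sa` enumerates the C(|U|, t) subsets from step (2), and `sat_ssod` is a brute-force stand-in for the SAT4J reduction of step (3), not the paper's actual solver-based implementation.

```python
from itertools import combinations

def sat_sa(auth, P, U, t):
    """sat_f(e) for f = sa<P, U, t>: every size-t user set from U
    must jointly cover P.  auth maps each user to its permissions."""
    return all(P <= set().union(*(auth[u] for u in s))
               for s in combinations(sorted(U), t))

def sat_ssod(auth, P, U, k):
    """sat_e(e) for e = ssod<P, U, k>: no set of at most k-1 users
    from U jointly covers P (brute force instead of SAT)."""
    for size in range(1, k):
        for s in combinations(sorted(U), size):
            if P <= set().union(*(auth[u] for u in s)):
                return False  # a coalition smaller than k covers P
    return True
```

This exhaustive check is exponential in |U|, which is exactly why the paper resorts to a SAT solver for the SSoD side.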
Comparisons between the straightforward algorithm (SA) and the improved algorithm (IA)

|          |    | e1 | f1 | e2 | f2  | e3  | f3  | e4  | f4  | e5    | f5    | Total  |
| Policies | SA | 0  | 0  | 0  | 3   | 3   | 4   | 3   | 5   | 8     | 9     | 34     |
|          | IA | 0  | 0  | 0  | 0   | 0   | 0   | 0   | 0   | 0     | 5     | 5      |
| States   | SA | 0  | 0  | 0  | 9   | 9   | 9   | 9   | 9   | 648   | 648   | 1341   |
|          | IA | 0  | 0  | 0  | 0   | 0   | 0   | 0   | 0   | 0     | 324   | 324    |
| Runtime  | SA | 0  | 0  | 0  | 3.5 | 4.3 | 5.0 | 3.8 | 8.1 | 829.4 | 956.3 | 1810.4 |
|          | IA | 0  | 0  | 0  | 0   | 0   | 0   | 0   | 0   | 0     | 178.2 | 178.2  |
5. Related work
We examine related work in four categories: safety analysis, utility analysis, policy conflicts, and policy inconsistencies.
Safety analysis has been a main research area in access control for several decades. Harrison et al. [10] formalized a simple safety analysis, determining whether an access control system can reach a state in which an unsafe access is allowed, in the context of the well-known access matrix model. Following that, there have been various efforts to design access control systems in which simple safety analysis is decidable or efficiently decidable; e.g., Li et al. [2] generalized safety analysis in the context of a trust management framework. They also studied safety analysis in the context of role-based access control (RBAC), where they gave a precise definition of a family of safety analysis problems in RBAC; it is more general than the safety analysis studied in the literature [6]. The SoD policy has been considered a fundamental principle of information security; the concept of SoD can be traced back to 1975, when Saltzer and Schroeder [11] took it as one of the design principles for protecting information, under the name "separation-of-privilege". Later on, SoD was studied extensively by various researchers as a principle to avoid fraud. It has been recognized that "one of RBAC's great advantages is that SoD rules can be implemented in a natural and efficient way" [12]. Various frameworks have been developed for specifying SoD in the context of access control. However, it should be noted that most existing approaches to SoD only consider authorization constraint sets with exactly two elements. We employ the definition of SoD from our previous work [8], which considers the total number of available users as a limiting factor, following Crampton's work [13]. In general, the problem of deciding whether a term is satisfied by a set of users is NP-complete [14]. Therefore, it comes as no surprise that directly enforcing SSoD policies is intractable (coNP-complete) [4]. Li et al.
[15] seek to enforce an SSoD constraint using SMER (statically mutually exclusive roles) constraints, but provide no analysis of the complexity of computing the set of all such constraints. Chen et al. [16] study some variations of the set cover problem, and show that the RSSoD generation problem is NP-hard.
Safety policy is mostly viewed as a tool for restricting access; an equally important aspect of access control is to enable access. We introduce the notion of utility policies in this paper, which state properties about enabling access in access control. Li et al. introduce the related concept of availability policies in [2, 6], which discriminate whether a user always possesses certain permissions across state changes. A similar concept is the resiliency policy [3], which requires an access control system to be resilient to the absence of users. Following the preliminary version of that paper, Wang and Li [17] studied resiliency in workflow authorization systems and proposed three levels of resiliency in workflow systems, namely static resiliency, decremental resiliency and dynamic resiliency. Unlike the work by Li et al., the availability policy in [4] is a high-level requirement, expressed in terms of restrictions on the permission set and user set. As shown in Theorem 2, the SA policy is a strict type of availability policy. Such policies are particularly useful when evaluating whether the access control configuration of a system is ready for emergency response: when an emergency such as a natural disaster or a terrorist attack occurs, an organization may need teams of employees to respond to the emergency.
Policy-based authorization systems are becoming more common as information systems become larger and more complex. The overall authorization policy may be defined by different entities, which may produce conflicting authorization decisions. Arbitrary rules can be used to resolve policy conflicts, but typically a generic resolution method is defined, such as first-rule-wins in firewalls or denials-take-precedence in ASL [18]. However, resolution of policy conflicts by manual intervention of the policy administrator is a slow and ad hoc process, and provides no guarantee of the optimality of the resulting interoperation system. Gong and Qian [19] have investigated the interoperation of systems employing multilevel access control policies, and have proposed several optimization techniques for the resolution of interoperation conflicts. Ferrari and Thuraisingham have identified that several conflict resolution strategies may be useful depending on the domain [20]. In current systems, rules and policy combination algorithms are defined on a static basis during policy composition, which is not desirable in dynamic systems with fast-changing environments. Mohan et al. [21] propose a framework that supports changing the rule and policy combination algorithms dynamically based on contextual information, and that also eliminates the need to recompose policies. The resolution of policy inconsistencies differs from that of policy conflicts in that it is performed at compile time; that is, it is a static conflict resolution that is independent of the access control system environment.
Policy inconsistencies may arise between safety and utility policies due to their opposite objectives, and in many cases it is desirable for an access control system to have both safety and utility policies. Li et al. [4] attempt to address the problem of consistency checking for safety and availability in the context of access control. Based on the consistency checking method, the policy administrator can be helped to specify reasonable access control policies without policy inconsistencies. However, this approach has its own shortcomings: the computing cost is usually unacceptable, and it does not consider optimizing the tradeoff between safety and utility. In this paper, we provide a formal examination of policy inconsistency resolution for safety and utility policies, especially for the coexistence of static separation-of-duty (SSoD) policies and strict availability (SA) policies. The experimental results show the validity of our approach. The resolution of policy inconsistencies is very important for policy administrators to specify reasonable access control policies when both safety and utility policies coexist.
6. Conclusion and future work
In this paper, we handled policy inconsistency between safety and utility policies based on the safety-utility tradeoff in the context of access control. We formally defined policy inconsistency for the coexistence of safety policies and utility policies, and identified key formal properties for resolving policy inconsistencies. We first reduced the complexity of reasoning about policy inconsistencies by static pruning and MIC sets; we then presented a systematic method for measuring safety loss and utility loss; finally, we evaluated the safety-utility tradeoff and presented two prioritized-based approaches to deal with policy inconsistencies. Our work can help policy administrators to specify reasonable access control policies.
In future research, we intend to address policy inconsistencies by modifying policies rather than removing them. This is difficult because there may be many choices, and finding the best choice is challenging. Continuing from Example 6, removing the SSoD policy e_{1} = ssod⟨{order, goods, invoice}, {alice, bob, carl}, 2⟩ or the SA policy f_{3} = sa⟨{order, goods}, {alice, bob, carl}, 2⟩ can each resolve the policy inconsistency. Suppose instead that we modify e_{1} to e_{1}' = ssod⟨{order, goods, invoice}, {alice, bob}, 2⟩, or modify f_{3} to f_{3}' = sa⟨{order, goods}, {alice, bob}, 2⟩. Then the policy inconsistency is also resolved, and both the safety loss and the utility loss are smaller than when removing e_{1} or f_{3}.
Declarations
Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grant 60873225, and the Zhejiang Province Education Foundation under Grant No. 201120897.
References
1. Clark DD, Wilson DR: A comparison of commercial and military computer security policies. In Proceedings of the 8th IEEE Symposium on Security and Privacy (SP). IEEE Computer Society Press, Oakland, California, USA; 1987:184-195.
2. Li N, Mitchell JC, Winsborough WH: Beyond proof-of-compliance: security analysis in trust management. J ACM 2005, 52(3):474-514. doi:10.1145/1066100.1066103
3. Li N, Tripunitara MV, Wang Q: Resiliency policies in access control. ACM Trans Inf Syst Secur 2009, 12(4):113-137.
4. Li R, Lu J, Lu Z, Ma X: Consistency checking of safety and availability in access control. IEICE Trans Inf Syst 2010, E93-D(3):491-502. doi:10.1587/transinf.E93.D.491
5. Benferhat S, El Baida R, Cuppens F: A stratification-based approach for handling conflicts in access control. In Proceedings of the 8th ACM Symposium on Access Control Models and Technologies. Villa Gallia, Como, Italy; 2003:189-195.
6. Li N, Tripunitara MV: Security analysis in role-based access control. ACM Trans Inf Syst Secur 2006, 9(4):391-420. doi:10.1145/1187441.1187442
7. Dubois D, Lang J, Prade H: Possibilistic logic. In Handbook of Logic in Artificial Intelligence and Logic Programming. Volume 3. Oxford University Press, Oxford; 1994:439-513.
8. Lu J, Li R, Lu Z, Hu J, Ma X: Specification and enforcement of static separation-of-duty policies in usage control. In Proceedings of the 12th Information Security Conference (ISC). Pisa, Italy; 2009:403-410.
9. Le Berre D (project leader): SAT4J: a satisfiability library for Java. 2006. http://www.sat4j.org/
10. Harrison MA, Ruzzo WL, Ullman JD: Protection in operating systems. Commun ACM 1976, 19(8):461-471. doi:10.1145/360303.360333
11. Saltzer JH, Schroeder MD: The protection of information in computer systems. Proc IEEE 1975, 63(9):1278-1308.
12. Sandhu R, Coyne E, Feinstein H, Youman C: Role-based access control models. Computer 1996, 29(2):38-47. doi:10.1109/2.485845
13. Crampton J: Specifying and enforcing constraints in role-based access control. In Proceedings of the 8th ACM Symposium on Access Control Models and Technologies (SACMAT). Villa Gallia, Como, Italy; 2003:43-50.
14. Li N, Wang Q: Beyond separation of duty: an algebra for specifying high-level security policies. J ACM 2008, 55(3):1-46.
15. Li N, Tripunitara MV, Bizri Z: On mutually exclusive roles and separation-of-duty. ACM Trans Inf Syst Secur 2007, 10(2):231-272.
16. Chen L, Crampton J: Set covering problems in role-based access control. In Proceedings of the 14th European Symposium on Research in Computer Security. Saint-Malo, France; 2009:689-704.
17. Wang Q, Li N: Satisfiability and resiliency in workflow systems. In Proceedings of the 12th European Symposium on Research in Computer Security. Dresden, Germany; 2007:90-105.
18. Jajodia S, Samarati P, Subrahmanian VS: A logical language for expressing authorizations. In Proceedings of the 18th IEEE Symposium on Security and Privacy (SP). Oakland, California, USA; 1997:31-42.
19. Gong L, Qian X: Computational issues in secure interoperation. IEEE Trans Softw Eng 1996, 22(1):14-23.
20. Ferrari E, Thuraisingham B: Secure database systems. In Advanced Databases: Technology and Design. Edited by: Diaz O, Piattini M. Artech House, London; 2000.
21. Mohan A, Blough DM: An attribute-based authorization policy framework with dynamic conflict resolution. In Proceedings of the 9th Symposium on Identity and Trust on the Internet. New York, NY, USA; 2010:37-50.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.