Research Article | Open Access
ε-Net Approach to Sensor k-Coverage
EURASIP Journal on Wireless Communications and Networking volume 2010, Article number: 192752 (2009)
Wireless sensors rely on battery power, and in many applications it is difficult or prohibitive to replace them. Hence, in order to prolong the system's lifetime, some sensors can be kept inactive while others perform all the tasks. In this paper, we study the k-coverage problem of activating the minimum number of sensors to ensure that every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among which position detection and intrusion detection. The k-coverage problem is trivially NP-complete, and hence we can only provide approximation algorithms. In this paper, we present an algorithm based on an extension of the classical ε-net technique. This method gives an O(log M)-approximation, where M is the number of sensors in an optimal solution. We do not make any particular assumption on the shape of the areas covered by each sensor, besides that they must be closed, connected, and without holes.
Coverage problems have been extensively studied in the context of sensor networks (see, e.g., [1–4]). The objective of sensor coverage problems is to minimize the number of active sensors, to conserve energy usage, while ensuring that the required region is sufficiently monitored by the active sensors. In an over-deployed network we can also seek k-coverage, in which every point in the area is covered by at least k sensors. This ensures higher fault tolerance and robustness, and improves many operations, among which position detection and intrusion detection.
The k-coverage problem is trivially NP-complete, and hence we focus on designing approximation algorithms. In this paper, we extend the well-known ε-net technique to our problem and present an O(log M)-factor approximation algorithm, where M is the size of the optimal solution. The classical greedy algorithm for set cover , when applied to k-coverage, delivers an O(log n)-approximate solution, where n is the number of target points to be covered. Our approximation algorithm is an improvement over the greedy algorithm, since our approximation factor of O(log M) is independent of k and of the number of target points.
Instead of solving the sensor's k-coverage problem directly, we consider a dual problem, the k-hitting set. In the k-hitting set problem, we are given sets and points, and we look for the minimum number of points that "hit" each set at least k times (a set is hit by a point if it contains it). Brönnimann and Goodrich were the first to solve the hitting set problem using the ε-net technique . In this paper, we introduce a generalization of ε-nets, which we call (k, ε)-nets. Using (k, ε)-nets with the Brönnimann and Goodrich algorithm , we can solve the k-hitting set problem, and hence the sensor's k-coverage problem. Our main contribution is a way of constructing (k, ε)-nets by random sampling. A recent Infocom paper  uses ε-nets to solve the k-coverage problem. However, we believe that their result is fundamentally flawed (see Section 2.1 for more details). So, to the best of our knowledge, we are the first to give a correct extension of ε-nets for the k-coverage problem.
The rest of the paper is organized as follows. The k-coverage problem is introduced in Section 2. Section 2.1 contains a detailed discussion of related work. The ε-net approach is presented in Section 3.
2. Problem Formulation and Related Work
We start by defining the sensing region, and then we define the k-coverage problem with sensors. In the literature, sensing regions have often been modeled as disks. In this paper, we consider sensing regions of general shape, because this reflects a more realistic scenario.
Definition 1 (sensing region).
The sensing region of a sensor is the area "covered" by a sensor. Sensing regions can have any shape that is closed, connected, and without holes, as in Figure 1(a). Often, sensing regions are modeled as disks as in Figure 1(b), but we consider more general shapes.
Definition 2 (target points).
Target points are the given points in the 2D plane that we wish to cover using the sensors.
Given a set of sensors with fixed positions and a set of target points, select the minimum number of sensors such that each target point is covered (i.e., is contained in the sensing region) by at least k of the selected sensors.
For simplicity, we have defined the above k-SC problem's objective as coverage of a set of given target points. However, as discussed later, our algorithms and techniques easily generalize to the problem of covering a given area.
Suppose we are given 4 sensors and 20 points as in Figure 2(a), and we want to select the minimum number of sensors to 2-cover all points. In this particular example, 2 sensors are not enough to 2-cover all points. Instead, 3 sensors suffice, as shown in Figure 2(b).
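For small instances like the one above, the optimal selection can be found by exhaustive search, which is handy for checking examples by hand. The sketch below is illustrative only: the coverage sets are made up, not the actual regions of Figure 2.

```python
from itertools import combinations

def min_k_cover(coverage, num_points, k):
    """Brute-force smallest sensor subset that k-covers every target point.

    coverage[i] is the set of target points inside sensor i's sensing region.
    Only feasible for tiny instances; the k-SC problem is NP-hard in general.
    """
    sensors = range(len(coverage))
    for size in range(1, len(coverage) + 1):
        for subset in combinations(sensors, size):
            # every point must lie in at least k selected sensing regions
            if all(sum(p in coverage[s] for s in subset) >= k
                   for p in range(num_points)):
                return subset
    return None  # infeasible: some point lies in fewer than k regions

# Toy instance in the spirit of Figure 2: 4 sensors, 6 points, k = 2.
cov = [{0, 1, 2, 3}, {2, 3, 4, 5}, {0, 1, 4, 5}, {1, 2, 4}]
print(min_k_cover(cov, 6, 2))  # (0, 1, 2): two sensors do not suffice here
```

As in the example of Figure 2, no pair of sensors 2-covers all points of this toy instance, while three sensors do.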
2.1. Related Work
In recent years, a lot of research has been done [1–3, 9] to address the coverage problem in sensor networks. In particular, Slijepcevic and Potkonjak  design a centralized heuristic to select mutually exclusive sensor covers that independently cover the network region. In , Chakrabarty et al. investigate linear programming techniques to optimally place a set of sensors on a sensor field for complete coverage of the field. In , Shakkottai et al. consider an unreliable sensor network, and derive necessary and sufficient conditions for the coverage of the region and connectivity of the network with high probability. In one of our prior works , we designed a greedy approximation algorithm that delivers a connected sensor cover within a logarithmic factor of the optimal solution; this work was later generalized to k-coverage in .
Recently, Hefeeda and Bagheri  used the well-known ε-net technique to solve the problem of k-covering the sensors' locations. However, we strongly believe that their result is fundamentally flawed. Essentially, they select a set of subsets of k sensors (called k-flowers), each represented by the center of its locations. However, their result is based on the following incorrect claim: that if the centers of a set of k-flowers 1-cover a set of points, then the set of sensors associated with the k-flowers will k-cover those points. In addition, in their analysis, they implicitly assume that an optimal solution can be represented as a disjoint union of k-flowers, which is incorrect. In this paper we present a correct extension of the ε-net technique for the k-coverage problem in sensor networks.
Two problems closely related to the sensor-coverage problem are the set cover and hitting set problems. The area covered by a sensor can be thought of as a set, which contains the points covered by that sensor. The hitting set problem is a "dual" of the set cover problem. In both set cover and hitting set problems, we are given sets and elements. While in set cover the goal is to select the minimum number of sets to cover all elements/points, in hitting set the goal is to select a subset of elements/points such that each set is hit. The classical result for set cover  gives an O(log n)-approximation algorithm, where n is the number of target points to be covered. The same greedy algorithm also delivers an O(log n)-approximate solution for the k-SC problem. In contrast, the result in this paper yields an O(log M)-approximation algorithm for the k-SC problem, where M is the optimal size (i.e., the minimum number of sensors needed to provide k-coverage of the given target points). Note that our approximation factor is independent of k and of the number of target points.
Brönnimann and Goodrich  were the first to use the ε-net technique  to solve the hitting set problem, and hence set cover, with an O(log M)-approximation, where M is the size of the optimal solution. In this paper, we extend their ε-net technique to k-coverage. It is interesting to observe that our extension is independent of k, and it gives an O(log M)-approximation also for k-coverage. For the particular case of 1-coverage with disks, it is possible to build "small" ε-nets (of size O(1/ε)) using the method of Matoušek, Seidel, and Welzl  and obtain a constant-factor approximation for the 1-hitting set problem. Their method  can be easily extended to k-hitting set, and this would give a constant-factor approximation for the k-SC problem when the sensing regions are disks. However, in this paper we focus on sensing regions of arbitrary shapes and sizes, as long as they are closed, connected, and without holes.
Another related problem is the art gallery problem (see  for a survey), which is to place a minimum number of guards in a polygon so that each point in the polygon is visible from at least one of the guards. Guards may be looked upon as sensors with infinite range. However, in this paper, we focus on selecting among already deployed sensors.
3. The ε-Net-Based Approach
In this section, we present an algorithm based on the classical ε-net technique to solve the k-coverage problem. The classical ε-net technique is used to solve the hitting set problem, which is the dual of the set cover problem. The k-SC problem is essentially a generalization of the set cover problem; thus, we will extend the ε-net technique to solve the corresponding generalization of the hitting set problem.
3.1. Hitting Set Problem and the ε-Net Technique
We start by describing the use of the classical ε-net technique to solve the traditional hitting set problem. We begin with a couple of formal definitions.
Set Cover (SC); Hitting Set (HS)
Given a set of points P and a collection of sets C, the set cover (SC) problem is to select the minimum number of sets from C whose union contains (covers) all points in P. The hitting set (HS) problem is to select the minimum number of points from P such that all sets in C are "hit" (a set is considered hit if one of its points has been selected).
Note that HS is a dual of SC, and hence solving HS is sufficient to solve SC.
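The duality can be made concrete by transposing the incidence relation: each point of the original system becomes a set of the dual, containing the indices of the original sets that point hits. The following sketch (with a made-up example system) illustrates the transformation:

```python
def dual_system(sets):
    """Transpose a set system: sets over points -> sets over set-indices.

    In the dual, each original point p becomes the set of indices of the
    original sets containing p.  A hitting set of the original system
    corresponds to a set cover of the dual, and vice versa.
    """
    points = sorted({p for s in sets for p in s})
    return [{i for i, s in enumerate(sets) if p in s} for p in points]

# Original system: 3 sets over points {a, b, c}.
C = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
print(dual_system(C))  # [{0, 2}, {0, 1}, {1, 2}]
```

For instance, picking points a and b hits all three original sets, and correspondingly the dual sets {0, 2} and {0, 1} together cover all set indices 0, 1, 2.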
We now define ε-nets. Intuitively, an ε-net is a set of points that hits all large sets (but may not hit the smaller ones). For the overall scheme, we will assign weights to points, and use a generalized concept of weighted ε-nets that must hit all sets of large weight.
Definition 3 (ε-net; weighted ε-net).
Given a set system (P, C), where P is a set of points and C is a collection of sets, a subset N ⊆ P is an ε-net if for every set S in C s.t. |S| ≥ ε|P|, we have that S ∩ N ≠ ∅.
Given a set system (P, C) and a weight function w, define w(S) = Σ_{p∈S} w(p) for S ⊆ P. A subset N ⊆ P is a weighted ε-net for (P, C, w) if for every set S in C s.t. w(S) ≥ ε·w(P), we have that S ∩ N ≠ ∅.
Using ε-Nets to Solve the Hitting Set Problem
The original algorithm for solving the hitting set problem using ε-nets was invented by Brönnimann and Goodrich . Below, we give a high-level description of their overall approach (referred to as the BG algorithm), because it will help in understanding our own extension. We begin by showing how ε-nets are related to hitting sets, and then show how to use ε-nets to actually compute hitting sets.
Let us assume that we have a black-box to compute weighted ε-nets, and that we know the optimal hitting set H, which is of size M. Now, define a weight function w as w(p) = 1 if p ∈ H and w(p) = 0 otherwise. Then, set ε = 1/M, and use the black-box to compute a weighted ε-net for (P, C, w). It is easy to see that this weighted ε-net is actually a hitting set for C, since w(S) ≥ 1 = ε·w(P) for all sets S in C. There are known techniques  to compute weighted ε-nets of size O((1/ε) log(1/ε)) for set systems with a constant VC-dimension (defined later); thus, the above gives us an O(log M)-approximate solution. For the particular case of disks, it is possible to construct ε-nets of size O(1/ε)  and hence obtain a constant-factor approximation.
However, in reality, we do not know the optimal hitting set. So, we iteratively guess its size M, starting with M = 1 and progressively doubling it until we obtain a hitting set solution (using the above approach). Also, to "converge" close to the above, we use the following scheme. We start with all weights set to 1. If the computed weighted ε-net is not a hitting set, then we pick one set in C that is not hit by it and double the weights of all points that it contains. Then, we iterate with the new weights. It can be shown that if the estimate of M is correct and we use ε = 1/(2M), then we are guaranteed to find a hitting set using the previous approach after a certain number of iterations. Thus, if we do not find a hitting set after enough iterations, we double the estimate of M and try again. It is shown in  that the previous approach finds an O(log M)-approximate hitting set in polynomial time for set systems with constant VC-dimension (defined below), where M is the size of the optimal hitting set.
We end the description of the BG algorithm with the definition of the Vapnik–Chervonenkis (VC) dimension of set systems. Informally, the VC-dimension of a set system (P, C) is a mathematical way of characterizing the "regularity" of the sets in C (with respect to the points P) in the system. A bounded VC-dimension allows the construction of an ε-net through random sampling of a large enough size. The VC-dimension is formally defined in terms of set shattering, as follows.
Definition 4 (VC-dimension).
A set P' ⊆ P is considered to be shattered by a collection of sets C if for each subset A ⊆ P', there exists a set S ∈ C such that S ∩ P' = A. The VC-dimension of a set system (P, C) is the cardinality of the largest set of points in P that can be shattered by C.
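The shattering condition above can be checked mechanically on small systems. The following sketch (our own illustration, exponential and only for tiny instances) verifies the textbook fact that intervals on a line have VC-dimension 2:

```python
from itertools import combinations

def is_shattered(pts, sets):
    """True iff every subset of `pts` equals pts ∩ S for some S in `sets`."""
    pts = frozenset(pts)
    realized = {pts & frozenset(s) for s in sets}
    needed = {frozenset(sub) for r in range(len(pts) + 1)
              for sub in combinations(pts, r)}
    return needed <= realized

def vc_dimension(points, sets):
    """Largest cardinality of a shattered subset (brute force; demo only)."""
    best = 0
    for r in range(1, len(points) + 1):
        if any(is_shattered(sub, sets) for sub in combinations(points, r)):
            best = r
    return best

# Intervals on a line shatter any 2 points, but never 3 (the middle point
# cannot be excluded while keeping the outer two):
P = [1, 2, 3]
intervals = [set(range(a, b + 1)) for a in P for b in P if a <= b] + [set()]
print(vc_dimension(P, intervals))  # 2
```

By the same mechanism one can spot-check the bound of Theorem 1 below on small point sets, although 23 points is already far beyond what brute force can handle.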
In our case, the VC-dimension is at most 23, as given by the following theorem by Valtr .
If each region in C is compact and simply connected, then the VC-dimension of the set system (P, C), where P is a set of points and C is the collection of such regions, is at most 23.
Note that, for a finite collection of sensors whose covering regions are compact and simply connected, the regions of the dual set system are compact and simply connected too.
3.2. k-Hitting Set Problem and the (k, ε)-Net Technique
We now formulate the k-hitting set (k-HS) problem, which is a generalization of the hitting set problem; namely, we want each set in the system to be hit by k selected points.
Definition 5 (k-hitting set (k-HS)).
Given a set system (P, C), the k-hitting set (k-HS) problem is to find the smallest subset H ⊆ P of points, with at most one point for each sibling-set, such that H hits every set in C at least k times.
3.2.1. Connection between the k-HS and k-SC Problems
Note that the previous k-HS problem is the (generalized) dual of our sensor k-coverage problem (k-SC problem). Essentially, each point in the k-HS problem corresponds to the sensing region of a sensor, and each set in the k-HS problem corresponds to a target point. In what follows, we describe how to solve the k-HS problem, which essentially solves our k-SC problem. To solve the k-HS problem, we need to define and use a generalized notion of ε-net.
Definition 6 (weighted (k, ε)-net).
Suppose that (P, C) is a sibling-set system, and w is a weight function. Define w(S) = Σ_{p∈S} w(p) for S ⊆ P. A set N ⊆ P is a weighted (k, ε)-net for (P, C, w) if |S ∩ N| ≥ k whenever S ∈ C and w(S) ≥ ε·w(P).
Using (k, ε)-Nets to Solve k-HS
We can solve the k-HS problem using the BG algorithm  without much modification. However, we need an algorithm to compute weighted (k, ε)-nets. The theorem below states that an appropriate random sample of m points from P, with m as given by (1), gives a (k, ε)-net with high probability, if the set system has a bounded VC-dimension. For the sake of clarity, we defer the proof of the following theorem.
Let (P, C, w) be a weighted set system. For a given number ε, let N be a subset of points of size m, with m as given by (1), picked randomly from P with probability proportional to the total weight of the points in such a subset.
Then, the subset N is a weighted (k, ε)-net with high probability, where m depends on ε, k, and d, the VC-dimension of the set system.
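The sampling model of the theorem, pick an m-subset with probability proportional to its total weight, can be realized directly by enumeration on small inputs, together with a checker for the weighted (k, ε)-net condition. This is a sketch for illustration only (the enumeration is exponential, and the names are ours):

```python
import itertools
import random

def sample_weighted_subset(points, weights, m, rng=random):
    """Pick an m-subset of `points` with probability proportional to its
    total weight, by explicit enumeration.  Exact but exponential; this
    only illustrates the sampling model, it is not an efficient method."""
    subsets = list(itertools.combinations(points, m))
    totals = [sum(weights[p] for p in s) for s in subsets]
    return rng.choices(subsets, weights=totals, k=1)[0]

def is_k_eps_net(net, sets, weights, k, eps):
    """Check the weighted (k, eps)-net condition: every set whose weight is
    at least eps * w(P) must contain at least k points of `net`."""
    total = sum(weights.values())
    net = set(net)
    return all(len(s & net) >= k
               for s in sets if sum(weights[p] for p in s) >= eps * total)

pts = list(range(5))
w = {p: 1.0 for p in pts}
sets = [{0, 1, 2}, {2, 3, 4}]
net = sample_weighted_subset(pts, w, 3, random.Random(0))
print(net, is_k_eps_net(net, sets, w, 2, 0.5))
```

The checker is the "efficient verification" used by the algorithm below: a candidate sample that fails the test is simply discarded and redrawn.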
Now, based on the previous theorem, we can use the BG algorithm with some modifications to solve the k-HS problem. Essentially, we estimate the size M of an optimal k-HS (starting with 1 and iteratively doubling it), set ε = 1/(2M), and use Theorem 2 to compute a (k, ε)-net of size m. Theorem 2 gives a (k, ε)-net with high probability. It is possible to check efficiently whether the obtained set is indeed a (k, ε)-net. If it is not, we can try again until we get one. On average, a small number of trials is sufficient to obtain a (k, ε)-net. If the net is indeed a k-hitting set, we stop; else, we pick a set in the system that is not k-hit and double the weight of all the points it contains. With the new weights, we iterate the process. It can be shown that within a bounded number of weight-doubling iterations we are guaranteed to get a k-HS solution, if the optimal size of a k-HS is indeed M. See Appendix A for the proof, which is similar to the one for BG in . Thus, if we have not found a k-HS after that many iterations, we can double our current estimate of M and iterate; see Algorithm 1. The theorem below shows that the previous algorithm gives an O(log M)-approximate solution in polynomial time with high probability for general set systems. The proof of the following theorem is again similar to that for the BG algorithm .
Algorithm 1: Solving the k-HS problem using (k, ε)-nets. Since k-HS is the dual of k-SC, this algorithm also solves k-SC (in k-SC, P corresponds to the set of sensors, and C to the set of target points).
Given a set system (P, C).
for (M = 1; ; M = 2·M)
    reset the weights of all points in P to 1;
    for (i = 1; i ≤ the iteration bound of Appendix A; i = i + 1)
        Compute a (k, 1/(2M))-net N of size m using Theorem 2;
        if each set in C is k-hit by N, return N;
        select a set in C that is not k-hit, and double the weight of all the points in the set;
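The loop structure above can be sketched in runnable form. Two simplifications are ours: the net is drawn by sequential weighted sampling without replacement (a stand-in for the exact subset distribution of Theorem 2), and the sample size and iteration bounds are heuristic rather than the paper's bounds.

```python
import random

def k_hitting_set(points, sets, k, max_est=64, seed=0):
    """BG-style doubling algorithm for k-hitting set (a sketch of Algorithm 1).

    `points` is a list, `sets` a list of sets of points.  Returns a set of
    points k-hitting every set, or None if no estimate up to max_est works.
    """
    rng = random.Random(seed)
    est = 1                                  # current guess M of the optimum
    while est <= max_est:
        eps = 1.0 / (2 * est)
        w = {p: 1.0 for p in points}
        m = min(len(points), int(4 * k / eps))   # heuristic sample size
        for _ in range(4 * est * max(1, len(points).bit_length())):
            # draw m distinct points, probability roughly proportional to weight
            pool, net = dict(w), set()
            for _ in range(m):
                ps = list(pool)
                p = rng.choices(ps, weights=[pool[q] for q in ps], k=1)[0]
                net.add(p)
                del pool[p]
            missed = [s for s in sets if len(s & net) < k]
            if not missed:
                return net
            for p in missed[0]:              # double weights of an un-k-hit set
                w[p] *= 2
        est *= 2                             # estimate was too small; double it
    return None

points = list(range(6))
sets = [{0, 1, 2}, {2, 3, 4}, {3, 4, 5}]
hs = k_hitting_set(points, sets, k=2)
print(hs is not None and all(len(s & hs) >= 2 for s in sets))  # True
```

On this toy instance the returned set need not be minimum; the point of the sketch is the weight-doubling and estimate-doubling control flow, not the quality guarantee.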
The algorithm described previously (Algorithm 1) runs in polynomial time and gives an O(log M)-approximate solution for the k-HS problem for general set systems of constant VC-dimension, where M is the optimal size of a k-HS.
The outer for loop, where the estimate M is doubled each time, is run at most O(log M) times. The inner for loop, where the weights of a set are doubled, is executed at most the number of times given in Appendix A. Computing a (k, ε)-net using Theorem 2 takes polynomial time, while the weight-doubling step takes time at most linear in the size of the chosen set.
We now prove the approximation factor. An optimal algorithm would find a k-hitting set of size M. If the VC-dimension is a constant, the method of Theorem 2 finds a (k, ε)-net of size O(M log M) for ε = 1/(2M). So the size of the returned k-hitting set is O(M log M), which is an O(log M)-approximation.
Outline of Proof of Theorem 2
There are two challenges in generalizing the random-sampling technique of , namely, (i) sampling with replacement cannot be used, and (ii) weights must be part of the sampling process.
3.2.2. Challenges in Extending the Technique of  to k-Hitting Set
The classical method  of constructing an ε-net consists of randomly picking a set of at least m points, for a certain m, where each point is picked independently and randomly from the given set of points. This way of constructing an ε-net may result in duplicate points, but the presence of duplicates does not cause a problem in the analysis. Thus, we can also construct a weighted ε-net easily by emulating weights using duplicated copies of the same point. The above-described approach works well for 1-hitting set, partly because we do not count the number of times each set is hit. However, for the case of k-hitting set, when constructing a (k, ε)-net, we need to ensure that the number of distinct points that hit each large set is at least k. Thus, constructing a (k, ε)-net by picking points independently at random (with duplicates) does not lead to a correct analysis. Instead, we suggest a novel method to construct a weighted (k, ε)-net by: (i) selecting a random subset of points (without duplicates) at once, and (ii) including the weights directly in the sampling process. To the best of our knowledge, we are the first to propose this extension (as discussed before,  uses ε-nets to solve the sensor's k-coverage problem, but their method is flawed).
Proof Sketch of Theorem 2
Let m be as given by (1), and let N be the subset of m points randomly picked from P as described in Theorem 2. After picking N, pick another set T (for the purposes of the analysis below) in the same way as N. We now define two events:
E1: there is a set S ∈ C with w(S) ≥ ε·w(P) and |S ∩ N| < k;
E2: there is a set S ∈ C with w(S) ≥ ε·w(P), |S ∩ N| < k, and |S ∩ T| ≥ εm/2.
The proof consists of three major steps.
(1) First, we show that Pr[E1] ≤ 2·Pr[E2].
(2) Then, we bound the probability of E2, which is easier to handle than E1.
(3) Finally, we combine the two bounds to show that Pr[E1] is small.
The outline of each step follows.
(1) From the definition of conditional probability, Pr[E2] = Pr[E2 | E1]·Pr[E1]. So we just need to show that Pr[E2 | E1] ≥ 1/2. Let T = {t_1, ..., t_m} (where the t_i's are pairwise different). Define the random variable X_i = 1 if t_i ∈ S, and X_i = 0 otherwise. Set X = Σ_i X_i = |S ∩ T|, and we have E[X] ≥ εm. It is possible to show that the X_i's are pairwise independent, and that Var[X] ≤ E[X]. Applying Chebyshev's inequality to bound Pr[X ≥ εm/2],
the result follows.
(2) We use an alternate view. Instead of picking N and then T, pick a set Z of size 2m, then pick N ⊆ Z of size m, and set T = Z \ N. It can be shown that the two views are equivalent. Now, for a fixed set S, define E2(S) as the event that w(S) ≥ ε·w(P), |S ∩ N| < k, and |S ∩ T| ≥ εm/2.
Since N and T are disjoint and S ∩ T ⊆ S ∩ Z, the event E2 happens only if E2(S) happens for some S. By counting the number of ways of choosing N s.t. |S ∩ N| < k, we can bound Pr[E2(S)].
Since E2(S) depends only on the intersection S ∩ Z, the number of distinct events E2(S) is bounded, and a union bound completes the step.
(3) The final step combines the bounds from (1) and (2) to conclude that Pr[E1] is small.
Please refer to Appendix B for the detailed proof.
Note that the approximation factor of Theorem 3 could be improved if we could design an algorithm to construct smaller (k, ε)-nets. For instance, if we could construct a (k, ε)-net of size O(k/ε), then we would have a constant-factor approximation for the k-HS problem. For the particular case of disks, it is easy to extend the method in  to build a (k, ε)-net of size O(k/ε) (see Appendix C for more details). Essentially, it is enough to replace ε with ε/k, and the proof follows through. Also note that the dual of disks and points is also composed of disks and points.
3.2.3. Distributed (k, ε)-Net Approach
A distributed implementation of the (k, ε)-net algorithm requires addressing the following main challenges.
We need to construct a (k, ε)-net N through some sort of distributed randomized selection.
For each constructed (k, ε)-net N, we need to verify in a distributed manner whether N is indeed a k-coverage set (a k-hitting set in the dual).
If N is not a k-coverage set, then we need to select one target point (a set in the dual) that is not k-covered by N and double the weights of all the sensing regions covering it.
We address the previous challenges in the following manner. First, we execute the distributed algorithm in rounds, where a round corresponds to one execution of the inner for loop of Algorithm 1 (i.e., one execution of the sampling algorithm for a particular set of weights and a particular estimate of M). We implement rounds in a weakly synchronized manner using internal clocks. Now, for each of the previous challenges, we use the following solutions.
Each sensor keeps an estimate W of the total weight of the system and computes m independently. To select m sensors, each sensor s decides to select itself independently with a probability proportional to its own weight (namely, m·w(s)/W), resulting in the selection of m sensors in expectation.
Each sensor s locally verifies the k-coverage of the target points it owns, by exchanging messages with nearby sensors (those that cover a common target point). If the target points owned by a sensor s and by its nearby sensors are all k-covered for a certain number of rounds (e.g., 10), then s exits the algorithm.
Each sensor decides to select one of the owned target points with a probability chosen so that the expected number of selected target points is 1.
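The first rule can be sanity-checked in simulation: with per-sensor selection probability m·w_s/W, the number of selected sensors concentrates around m. The weights below are made up, and we assume no single probability exceeds 1 (the code clips for safety):

```python
import random

def distributed_select(weights, m, rounds=2000, seed=1):
    """Simulate the distributed selection rule: each sensor independently
    selects itself with probability m * w_s / W, so the expected number of
    selected sensors is m (probabilities are clipped at 1 for safety)."""
    rng = random.Random(seed)
    W = sum(weights)
    total = 0
    for _ in range(rounds):
        total += sum(rng.random() < min(1.0, m * w / W) for w in weights)
    return total / rounds  # empirical average; should be close to m

print(distributed_select([1, 2, 3, 2], m=2))  # close to 2
```

Note that no coordination is needed beyond agreeing on the round boundaries and on the estimates W and m, which is what makes the rule suitable for the weakly synchronized setting described above.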
3.2.4. Generalizations to k-Coverage of an Area
The (k, ε)-net approach can also be used to k-cover a given area, rather than a given set of target points (as required by the formulation of the k-SC problem). Essentially, coverage of an area requires dividing the given area into "subregions" as in our previous work ; a subregion is defined as a set of points in the plane that are covered by the same set of sensing regions. The number of such subregions can be shown to be polynomial in the total number of sensing regions in the system. The algorithm described here can then be used without any other modification, and the performance guarantee still holds.
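The subregion decomposition can be illustrated on a discretized area: points sharing the same set of covering regions form one subregion. This is a toy sketch of our own; the paper's construction works on the exact geometric arrangement, not on a sample grid.

```python
def subregions(sensing_sets, sample_points):
    """Group sample points of an area into 'subregions': points covered by
    exactly the same set of sensing regions share a subregion."""
    groups = {}
    for pt in sample_points:
        signature = frozenset(i for i, s in enumerate(sensing_sets) if pt in s)
        groups.setdefault(signature, []).append(pt)
    return groups

# Two overlapping "regions" over a 1-D strip of sample points 0..5:
regions = [set(range(0, 4)), set(range(2, 6))]
print(subregions(regions, range(6)))
# three subregions: only region 0, both regions, only region 1
```

Picking one representative point per subregion then reduces area coverage to the target-point formulation solved above.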
In this paper, we studied the k-coverage problem with sensors, which is to select the minimum number of sensors so that each target point is covered by at least k of them. We provided an O(log M)-approximation, where M is the number of sensors in an optimal solution. We introduced a generalization of the classical ε-net technique, which we called the (k, ε)-net. We gave a method to build (k, ε)-nets based on random sampling. We showed how to solve the sensor's k-coverage problem with the Brönnimann and Goodrich algorithm  together with our (k, ε)-nets. We believe we are the first to propose this extension.
As future work, we would like to extend this technique to directional sensors. A directional sensor is a sensor that has multiple associated sensing regions, and its orientation determines its actual sensing region. The k-coverage problem with directional sensors is NP-complete, and in  we proposed a greedy approximation algorithm. We believe that the use of (k, ε)-nets can give a better approximation factor for this problem.
A. About the Number of Iterations of the Doubling Process
This appendix contains the proof that, when M is equal to the size of an optimal k-hitting set, a bounded number of iterations of the doubling process is enough to retrieve a k-hitting set. This proof follows the lines of the one in , but with the additional parameter k.
If M equals the size of an optimal k-hitting set, then O((M/k)·log(n/M)) iterations of the internal for loop of Algorithm 1 are sufficient to find a k-hitting set.
Initially, w(P) = n. The set S selected at the end of the internal for loop satisfies w(S) < ε·w(P), because the algorithm found a weighted (k, ε)-net that does not k-hit S. Doubling the weights of the elements in S adds a total of less than ε·w(P) new weight to the system. So w(P) grows at most by a factor (1 + ε) at each iteration. Then, after t iterations, w(P) ≤ n·(1 + ε)^t.
Let H be an optimal k-hitting set. Initially, we have w(H) = M. Since H is a k-hitting set, there are at least k elements of H in each set of C. So, for any possible set S chosen in the weight-doubling step, there are at least k elements of H that are doubled. By the convexity of the exponential function, the increase of w(H) is minimal if the doublings are spread out over the elements of H as evenly as possible. So, after t iterations, we have w(H) ≥ M·2^{kt/M}.
Since the weights are positive and H ⊆ P, we have w(H) ≤ w(P). We need to find the largest t for which
M·2^{kt/M} ≤ n·(1 + ε)^t
can be true. Taking the log (base 2),
log M + kt/M ≤ log n + t·log(1 + ε),
and solving for t,
t ≤ log(n/M) / (k/M − log(1 + ε)) ≤ M·log(n/M) / (k − 1/(2 ln 2)),
where we used the fact that log(1 + ε) ≤ ε/ln 2 for ε > 0, together with ε = 1/(2M). Since the expression on the RHS is O((M/k)·log(n/M)) for any possible value of k ≥ 1, the theorem follows.
B. Computing Weighted (k, ε)-Nets by Random Sampling
This appendix contains the proof of Theorem 2, which is an extension of the ε-net theorem of Haussler and Welzl . As explained in Section 3.2, the two challenges in generalizing the random-sampling technique are that (i) sampling with replacement cannot be used, and (ii) weights must be part of the sampling process. Our contribution is a new method to obtain weighted (k, ε)-nets in which (i) we sample a subset of points at once (without duplicates), and (ii) we include the weights directly in the sampling process.
We start by proving three lemmas, and then we prove Theorem 2. Let m be as given by (1), and let N be the subset of m points randomly picked from P as described in Theorem 2. After picking N, pick another set T (for the purpose of the analysis below) in the same way as N. We now define two events:
E1: there is a set S ∈ C with w(S) ≥ ε·w(P) and |S ∩ N| < k;
E2: there is a set S ∈ C with w(S) ≥ ε·w(P), |S ∩ N| < k, and |S ∩ T| ≥ εm/2.
Intuitively, E2 is the event that N does not k-hit some heavy set S, but T has a "large" intersection with S (also remember that T is disjoint from N). Note that εm is a lower bound on the average size of the intersection of S and T (as computed below).
It holds that Pr[E1] ≤ 2·Pr[E2].
Note that Pr[E2] = Pr[E2 | E1]·Pr[E1], because if E2 happens, then E1 happens too. From the definition of conditional probability,
it suffices to show that Pr[E2 | E1] ≥ 1/2.
Let T = {t_1, ..., t_m} (where the t_i's are pairwise different). Since E1 happens, there is some set S ∈ C s.t. w(S) ≥ ε·w(P) and |S ∩ N| < k. Therefore, Pr[E2 | E1] is at least the probability that, for this S, |S ∩ T| ≥ εm/2.
Let X_i be the random variable (r.v.) that takes value 1 if t_i ∈ S, and 0 otherwise.
Each subset N (resp., T) is picked with probability proportional to the sum of the weights of its elements, and each element can appear in C(n−1, m−1) subsets (because these are the ways of placing the other elements in the remaining positions). So the probability of picking one element depends only on its weight, and not on the other elements, which means that the elements are pairwise independent. So we have that Pr[X_i = 1] = w(S)/w(P) ≥ ε,
where w is a function that returns the weight of a given set, which is defined as the sum of the weights of its elements.
Let X = Σ_i X_i = |S ∩ T|. Clearly, E[X_i] ≥ ε. We have E[X] ≥ εm.
To bound the deviation from the expectation, we use Chebyshev's inequality
Since the X_i's are pairwise independent, the covariance terms are zero. So we get Var[X] = Σ_i Var[X_i] ≤ E[X],
where we used the fact that, for the 0-1 r.v. X_i, Var[X_i] ≤ E[X_i].
Applying Chebyshev's inequality,
Pr[X < εm/2] ≤ Pr[|X − E[X]| > E[X]/2] ≤ 4·Var[X]/E[X]^2 ≤ 4/E[X] ≤ 4/(εm),
where in the last inequality we used the fact that E[X] ≥ εm.
Finally, we have that Pr[X ≥ εm/2] ≥ 1 − 4/(εm) ≥ 1/2, since m ≥ 8/ε.
It holds that Pr[E2] ≤ φ_d(2m)·q, where q is the bound on Pr[E2(S)] derived below and φ_d is defined in Corollary 1.
The experiment of picking N and T can be viewed in an alternative way. Pick a subset Z of size 2m at random (each subset is picked with probability proportional to the sum of the weights of its elements). Then, pick N as a subset of Z of size m at random (again with probability proportional to the sum of the weights of its elements). Finally, let T = Z \ N. Note that this view is equivalent, because the probability of picking any subset N is the same as before (similarly for T). This can be verified as follows. We are going to compute the probability of picking a certain subset in both cases. In order to do this, we need to compute the sum of the weights of all possible sets of size m. Among all possible sets of size m, each element appears in exactly C(n−1, m−1) of them (because these are the ways of placing the other elements in the remaining positions). Now, it is not necessary to know exactly in which sets each element gives its contribution; it is enough to know that it appears a total of C(n−1, m−1) times. So, the sum of the weights of all possible sets of size m is C(n−1, m−1)·w(P). Then, the probability of picking N directly is w(N) / (C(n−1, m−1)·w(P)).
Now we are going to compute the probability of picking a subset Z containing N. This requires determining the sum of the weights of all subsets of size 2m that contain N. N appears in C(n−m, m) of them (as these are the ways of placing other elements in the remaining positions), and it gives a contribution of w(N) in each of them. Any other element can appear in any of the remaining positions, for a total of C(n−m−1, m−1) times (because, having fixed an element, the remaining elements can be placed in any of the remaining positions). So we get the sum of the weights of all 2m-subsets containing N, and hence Pr[Z ⊇ N].
We also need to compute the probability of picking N from Z, namely Pr[N | Z] = w(N) / (sum of the weights of all m-subsets of Z).
This requires knowing the sum of the weights of all m-subsets of Z, which can be computed as C(2m−1, m−1)·w(Z).
Finally, it is easy to verify that summing Pr[Z]·Pr[N | Z] over all Z ⊇ N recovers the probability of picking N directly, so the two views coincide.
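The equivalence of the two views can be verified exactly, by enumeration with rational arithmetic, on a small instance. This is a demonstration of the identity under an arbitrary made-up weight assignment, not part of the proof:

```python
from itertools import combinations
from fractions import Fraction

def direct_dist(w, m):
    """Pr[N] proportional to w(N) over all m-subsets (the direct view)."""
    subs = list(combinations(sorted(w), m))
    tot = sum(sum(w[p] for p in s) for s in subs)
    return {s: Fraction(sum(w[p] for p in s), tot) for s in subs}

def two_stage_dist(w, m):
    """Pick Z of size 2m with Pr proportional to w(Z), then N subset of Z of
    size m with Pr proportional to w(N); return the induced law of N."""
    dist = {}
    Zs = list(combinations(sorted(w), 2 * m))
    totZ = sum(sum(w[p] for p in z) for z in Zs)
    for z in Zs:
        pz = Fraction(sum(w[p] for p in z), totZ)
        Ns = list(combinations(z, m))
        totN = sum(sum(w[p] for p in s) for s in Ns)
        for s in Ns:
            dist[s] = dist.get(s, 0) + pz * Fraction(sum(w[p] for p in s), totN)
    return dist

w = {'a': 1, 'b': 2, 'c': 3, 'd': 1, 'e': 2}
assert direct_dist(w, 2) == two_stage_dist(w, 2)
print("the two sampling views induce the same distribution on N")
```

The exact Fraction arithmetic rules out floating-point coincidences: the two distributions agree term by term, as the counting argument above predicts.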
Let S ∈ C, with w(S) ≥ ε·w(P), and define p = |S ∩ Z|. Since N and T are disjoint, S ∩ T = (S ∩ Z) \ N, and then E2(S) requires |S ∩ N| < k and p ≥ εm/2. If p < εm/2, then E2(S) does not happen, and it does not happen if w(S) < ε·w(P) either. Suppose that |S ∩ N| = j, where j < k; then we can pick N as follows. We select m − j elements among the points outside the intersection S ∩ Z,
and the remaining j elements anywhere else.
Their product can be bounded in the following way:
where in the last inequality we used the fact that
which can be proved by induction. The base case, , is trivial. Assuming that the formula is valid for , we get
where in the last inequality we used the fact that .
Using this fact, we can bound Pr[E2(S)]. Recall that E2 happens only if E2(S) happens for some S.
For two sets S, S' s.t. S ∩ Z = S' ∩ Z, the events E2(S) and E2(S') are the same. This is because the occurrence of E2(S) depends only on the intersection S ∩ Z. The number of distinct intersections S ∩ Z is at most φ_d(2m) by Corollary 1 below. Then, by the union bound, Pr[E2] ≤ φ_d(2m)·max_S Pr[E2(S)].
For any set system (P, C), with |P| = z and VC-dimension d, we have |C| ≤ φ_d(z), where φ_d(z) = Σ_{i=0}^{d} C(z, i).
Given a set system (P, C), for any subset of the points A ⊆ P, let C|A denote the projection of C onto A, that is, the set {S ∩ A : S ∈ C}.
For any set system (P, C), if A ⊆ P, then the projected system (A, C|A) has VC-dimension at most that of (P, C), which implies |C|A| ≤ φ_d(|A|).
Finally, we prove the main theorem.
B.1. Proof of Theorem 2
Combining Lemmas 1, 2, and 3
so we need to show that
which can be written as
Now we consider each part of the sum separately. From (1), it follows that
so it suffices to show that
If this inequality is valid for some value of m, then it is valid for any bigger value of m. So we just need to verify it for the smallest admissible m. Plugging it in, we get
and this is equivalent to
which is definitely true.
C. Computing Small Weighted (k, ε)-Nets for Disks
In this appendix, we present a simple extension of  to build small weighted (k, ε)-nets for disks. The original construction in  easily extends to (k, ε)-nets by replacing ε with ε/k. The proof presented here is simplified with respect to the original one, because we consider only disks, instead of pseudodisks.
The underlying idea is to pick points that are spaced apart, so that they hit all large enough disks. The strategy that we are going to use is to draw "colored" disks that contain a fixed number of points, and select points only on the border of the colored disks. The position and the size of the colored disks depend on the input points, but not on the input disks. All colored disks, but one, will contain exactly the same fixed number of input points, which depends on ε. Each input point gets the color of the colored disk that covers it, or remains uncolored if uncovered. After placing the colored disks, we compute a Delaunay triangulation (DT) of the colored points. The DT will have uni-colored, bi-colored, and tri-colored triangles. Triangles will have uni-colored and bi-colored edges. Let us define some terminology (see Figure 3).
Definition 7 (corridor; hall; sides; ends; corners).
Let a corridor be a maximal connected chain of bi-colored triangles in DT sharing bi-colored edges. In our construction, corridors are between two colored disks.
Let a hall be a maximal group of adjacent tri-colored triangles (a generalization of the degenerate corridors of the original construction). In our construction, halls lie between 3 or more disks, attached to the ends of the corridors.
Corridors are bounded by two chains of uni-colored edges, which we call sides, and two bi-colored edges, which we call ends.
We call the endpoints of the sides the corners of the subcorridor. Note that one of the sides can degenerate to a single point, in which case there are 3 corners instead of 4.
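To make the triangle terminology concrete, the following sketch (our own illustration; the triangle triples and the color map are hypothetical inputs, e.g., as produced by any Delaunay triangulation routine) classifies triangles as uni-, bi-, or tri-colored:

```python
def classify_triangles(triangles, color):
    """Given triangles as vertex-index triples and a color
    assignment per point, bucket each triangle by how many
    distinct colors its three vertices carry
    (1 = uni-colored, 2 = bi-colored, 3 = tri-colored)."""
    buckets = {1: [], 2: [], 3: []}
    for tri in triangles:
        k = len({color[v] for v in tri})
        buckets[k].append(tri)
    return buckets

# Toy example: 4 colored points, two colors.
color = {0: "red", 1: "red", 2: "blue", 3: "blue"}
tris = [(0, 1, 2), (1, 2, 3)]
print(classify_triangles(tris, color))  # both triangles are bi-colored
```

Corridors and halls are then maximal connected groups of the bi-colored and tri-colored buckets, respectively.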
We start by describing the algorithm for the unweighted case, and we will show how to add weights afterwards. We are given , a family of disks, and a set of points in the plane. For simplicity, assume that the points are in general position (i.e., no three points are collinear and no four points are cocircular). Let us define (the reason for this will be clear soon). Let be disjoint subsets of constructed in the following manner. From the boundary of , "bite off" subsets of of size with the following properties.
The union of all the subsets contains the boundary points of : (where is the convex hull).
Each is representable as for some half-plane (or equivalently, for some (large enough) disk , and to simplify the proof we can think that is bigger than any input disk).
for , and
Now, consider the internal points of , that is, those that are not part of any disk . We are going to draw the largest possible number of disks of size to cover the internal points. Specifically, let be a maximal collection of disjoint subsets of satisfying
for some disk ,
At this point, we have a total of colored disks . For each , , color the points of with color , and call the color-defining disk, or colored disk. Let be the set of colored points, and call the points in colorless. Let DT be the Delaunay triangulation of the set of colored points . Break each corridor into a minimum number of subcorridors, that is, subchains of the chain of triangles that form , so that each subcorridor contains at most colorless points. Let be the set of the corners of all subcorridors. Clearly . We are going to show that is an ε-net for , that is, any disk of that contains points of also contains points of . This construction is summarized in Algorithm 2.
Algorithm 2: Small ε-nets for disks.
(1) Let be disjoint subsets of with the following properties:
(a) (where is the convex hull);
(b) each is representable as for some halfplane (or equivalently, for some large enough disk ; to simplify the proof we can think of as bigger than any input disk);
(c) for , and .
(This construction of subsets of can be done by repeatedly "biting off" subsets of with halfplanes.)
(2) Let be a maximal collection of disjoint subsets of satisfying: for some disk , and .
(3) For each , , color the points of with color , and call the color-defining disk, or colored disk.
(4) Let be the set of colored points, and call the points in colorless.
(5) Let DT be the Delaunay triangulation of the set of colored points .
(6) Break each corridor into a minimum number of subcorridors, i.e., subchains of the chain of triangles that form , so that each subcorridor contains at most colorless points.
(7) Let be the set of the corners of all subcorridors. Clearly .
(8) Return as the ε-net for .
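A minimal sketch of the coloring step of Algorithm 2 (our own simplification, assuming points are plain 2D tuples; the group size `m` and the greedy nearest-neighbor grouping stand in for the disk-based construction):

```python
from math import dist

def greedy_color_groups(points, m):
    """Greedily form disjoint groups of m points each: pick an
    uncolored point, group it with its m-1 nearest uncolored
    neighbors, and repeat while at least m points remain.
    A stand-in for the maximal collection of disjoint
    disk-covered subsets used by the construction."""
    remaining = list(points)
    groups = []
    while len(remaining) >= m:
        seed = remaining[0]
        remaining.sort(key=lambda p: dist(seed, p))
        groups.append(remaining[:m])   # this group gets a new color
        remaining = remaining[m:]      # leftover points stay colorless
    return groups, remaining
```

Note that the real construction covers each group with an actual disk and handles the boundary of the convex hull first with halfplane "bites"; this sketch only mirrors the "disjoint groups of a fixed size, leftovers colorless" structure.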
First of all, note that colorless points can only be in corridors and halls (because uni-colored triangles are contained in the corresponding color-defining disks). Also, observe that any disk containing no colored points contains fewer than points of : by the maximality of the construction, there cannot be colorless disks with points. We then claim the following.
There are at most corridors in DT, and
See the original construction for the proof. Note that all are disjoint, and all but maybe one contain points, so .
We now prove the -net theorem for disks.
Theorem 5 (ε-net theorem for disks).
Algorithm 2 creates an ε-net of size for , where is a set of points in non-D-degenerate position (i.e., no three points are collinear and no four points are cocircular), and D is a family of disks.
By the claim above, the size of is . So we only need to show that contains at least points in each input disk of size at least .
First of all, the case is already proved in the original construction, so we focus on the case of .
For a generic input disk of size at least , we compute the minimum number of corners contributed by each intersecting region (colored disk, subcorridor, or hall), assuming that it has the largest possible intersection. We also take care not to count the same corner multiple times. If a colored disk is completely contained in an input disk, it contributes the largest possible number of corners, so we only need to consider colored disks that intersect the boundary of the input disk. We claim that the minimum contribution is attained when the boundary of an input disk intersects an alternation of colored disks and corridors; in this case we count 1 corner for each colored disk/corridor pair. The only case in which (a part of) the boundary of the input disk does not intersect any colored disk is when it is contained inside a corridor, but this can only happen if there is a colored disk inside the input disk, and we already argued that this gives a higher contribution. The following is an upper bound on the number of intersecting points: there can be points for the colored disk, plus points for the subcorridor on the side of the corner that we are counting, plus points for the hall adjacent to them. This means that for every points there is at least 1 corner. We are considering input disks of size at least , and this implies that there are at least points in each disk.
Finally, we consider the weighted case. The construction is similar to the unweighted one, the only difference being that the colored disks contain points of total weight , where is the sum of the weights of all points. It is easy to see that the proof carries through.
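The weighted variant of the grouping can be sketched in the same spirit (our own illustration; `w` is a hypothetical weight map, and the real construction forms groups geometrically with disks, not in an arbitrary order): instead of closing a group after a fixed number of points, we grow it until its total weight reaches the threshold.

```python
def weighted_groups(points, w, threshold):
    """Group points (in the given order) so that each finished
    group has total weight >= threshold; the trailing underweight
    points stay ungrouped (colorless), mirroring the weighted
    colored-disk construction."""
    groups, current, weight = [], [], 0.0
    for p in points:
        current.append(p)
        weight += w[p]
        if weight >= threshold:
            groups.append(current)
            current, weight = [], 0.0
    return groups, current

# Toy example: threshold 3 closes a group as soon as weight reaches 3.
w = {"a": 1, "b": 2, "c": 1, "d": 3, "e": 1}
print(weighted_groups(["a", "b", "c", "d", "e"], w, 3))
```

With unit weights this degenerates to the unweighted grouping, which is why the unweighted proof carries through.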
Gupta H, Zhou Z, Das SR, Gu Q: Connected sensor cover: self-organization of sensor networks for efficient query execution. IEEE/ACM Transactions on Networking 2006, 14(1):55-67.
Chakrabarty K, Iyengar SS, Qi H, Cho E: Grid coverage for surveillance and target location in distributed sensor networks. IEEE Transactions on Computers 2002, 51(12):1448-1453. 10.1109/TC.2002.1146711
Slijepcevic S, Potkonjak M: Power efficient organization of wireless sensor networks. Proceedings of the International Conference on Communications (ICC '01), June 2001, Helsinki, Finland 2: 472-476.
Meguerdichian S, Koushanfar F, Qu G, Potkonjak M: Exposure in wireless ad-hoc sensor networks. Proceedings of the 7th Annual International Conference on Mobile Computing and Networking (MOBICOM '01), July 2001, Rome, Italy 139-150.
Cormen TH, Leiserson CE, Rivest RL, Stein C: Introduction to Algorithms. 2nd edition. MIT Press, Boston, Mass, USA; 2001.
Brönnimann H, Goodrich MT: Almost optimal set covers in finite VC-dimension. Discrete and Computational Geometry 1995, 14(1):463-479. 10.1007/BF02570718
Haussler D, Welzl E: ε-nets and simplex range queries. Discrete and Computational Geometry 1987, 2(1):127-151. 10.1007/BF02187876
Hefeeda M, Bagheri M: Randomized k-coverage algorithms for dense sensor networks. Proceedings of the 26th IEEE International Conference on Computer Communications (INFOCOM '07), May 2007 2376-2380.
Ye F, Zhong G, Cheng J, Lu S, Zhang L: PEAS: a robust energy conserving protocol for long-lived sensor networks. Proceedings of the 23rd IEEE International Conference on Distributed Computing Systems (ICDCS '03), May 2003, Providence, RI, USA 28-37.
Shakkottai S, Srikant R, Shroff N: Unreliable sensor grids: coverage, connectivity and diameter. Proceedings of the 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '03), March-April 2003, San Francisco, Calif, USA 2: 1073-1083.
Zhou Z, Das S, Gupta H: Connected K-coverage problem in sensor networks. Proceedings of the 13th International Conference on Computer Communications and Networks (ICCCN '04), 2004 373-378.
Matousek J, Seidel R, Welzl E: How to net a lot with little: small ε-nets for disks and halfspaces. Proceedings of the 6th Annual Symposium on Computational Geometry (SCG '90), June 1990, Berkeley, Calif, USA 16-22.
O'Rourke J: Art Gallery Theorems and Algorithms, International Series of Monographs on Computer Science. Volume 3. Oxford University Press, New York, NY, USA; 1987.
Valtr P: Guarding galleries where no point sees a small area. Israel Journal of Mathematics 1998, 104: 1-16. 10.1007/BF02897056
Fusco G, Gupta H: Selection and orientation of directional sensors for coverage maximization. Proceedings of the 6th Annual IEEE Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON '09), June 2009, Rome, Italy
Alon N, Spencer JH: ε-nets and VC-dimensions of range spaces. In The Probabilistic Method. 2nd edition. Wiley-Interscience, New York, NY, USA; 2000:220-225.
This work was supported in part by the NSF awards: 0713186, 0721701, 0721665.
Fusco, G., Gupta, H. ε-Net Approach to Sensor k-Coverage. J Wireless Com Network 2010, 192752 (2009). doi:10.1155/2010/192752