  • Research
  • Open Access

High-resolution remote sensing image segmentation based on improved RIU-LBP and SRM

EURASIP Journal on Wireless Communications and Networking20132013:263

https://doi.org/10.1186/1687-1499-2013-263

  • Received: 29 July 2013
  • Accepted: 25 October 2013

Abstract

In this paper, we propose an improved rotation invariant uniform local binary pattern (RIU-LBP) operator that can effectively describe the texture features of high-resolution remote sensing images. The improved operator builds on RIU-LBP by introducing a threshold into the binarization of neighborhood pixels. The new LBP operator tolerates small texture variations and distinguishes plain from rough textures better than the original RIU-LBP does. We then propose a merging criterion for texture regions based on the regional LBP value distribution and the Bhattacharyya distance. Finally, the texture merging criterion and the spectral merging criterion are combined in the statistical region merging (SRM)-based segmentation method to take full advantage of the rich spectral and texture information in high-resolution remote sensing images. The number of segmented regions can be adjusted, and experiments indicate better segmentation results than ENVI 5.0 and the SRM method.

Keywords

  • High-resolution remote sensing image segmentation
  • Local binary pattern
  • Bhattacharyya distance

1 Introduction

Segmentation is an important problem in remote sensing image processing [1, 2]. Early remote sensing image segmentation methods used pixel-based strategies and ignored rich spectral and structural information, so their segmentation results were unsatisfactory and adversely affected subsequent image analysis. In recent years, object-oriented segmentation methods have been extensively applied in remote sensing image analysis. Homogeneous region features such as intensity, texture, and shape can be used to improve segmentation accuracy. Although many significant object-oriented segmentation algorithms exploit multispectral information [3-5], only a few can be effectively applied to the segmentation of high-resolution remote sensing images. High-resolution remote sensing images contain rich spatial texture information at many scales, which is an advantageous resource in the segmentation process [6, 7]. However, traditional segmentation algorithms do not take this rich texture information into account. Nock and Nielsen proposed a statistical method for image segmentation that merges regions in a particular order [8]. The method operates on the most common numerical pixel attribute spaces, but it mainly exploits the spectral information in images and ignores useful texture features. On the basis of Nock and Nielsen's work, several statistical region merging (SRM)-based segmentation algorithms specific to high-resolution remote sensing images have been proposed [9-11]. However, none of them focuses on exploiting texture information in the merging process. To take such texture features into account, we add texture information to the merging step of SRM to enhance segmentation performance.
Among numerous texture description methods, the local binary pattern (LBP) operator is chosen because it combines statistics-based and structure-based approaches. It has proved to be theoretically simple and very effective in describing the characteristics of local texture regions. Several improvements on LBP operators have been proposed, such as multiscale LBP, rotation invariant LBP, and rotation invariant uniform LBP (RIU-LBP) [12-14].

In this paper, we propose a high-resolution remote sensing image segmentation algorithm based on the improved LBP feature and the SRM region merging method. The proposed algorithm makes full use of both the spectral and the texture information in high-resolution remote sensing images. Moreover, the appropriate merging criterion is chosen adaptively according to the characteristics of the regions, which further improves segmentation performance. The algorithm can effectively segment high-resolution remote sensing images with complex scenes. The contributions of our proposed method include the following. First, an improved RIU-LBP is proposed to describe texture features, since the traditional LBP has difficulty differentiating regions of high-resolution remote sensing images with different textures. The improved operator is based on the RIU-LBP feature: a threshold is introduced into the determination of the uniform mode, which handles regions with similar spectral but different texture information. Second, the improved RIU-LBP operator is applied to extract texture information from high-resolution remote sensing images for segmentation. A merging criterion based on texture information, using the Bhattacharyya distance, is combined with the SRM method to form a double criterion for deciding whether to merge. As the experimental results show, the proposed method outperforms the SRM and ENVI 5.0, a remote sensing image processing software package developed by Exelis Visual Information Solutions.

This paper is organized as follows. In Section 2, we review the basic LBP and RIU-LBP and then propose the improved RIU-LBP. In Section 3, we present the segmentation algorithm; the proposed scheme using the improved RIU-LBP and SRM is described in detail. Experimental results and analysis are given in Section 4. Finally, the paper is concluded in Section 5.

2 The LBP operator

2.1 Rotation invariant uniform local binary pattern

To describe image texture features, the LBP was proposed by Ojala et al. [12]. The LBP effectively combines statistics-based and structure-based methods; thus, compared with other methods, it has a great advantage in describing the texture of image regions. For simplicity, we discuss one color channel. The key idea of LBP is to encode pixel values into binary codes. First, the gray value of the geometric center of a local region is used as a threshold to binarize the pixels around the center. Then, the binarized values are multiplied by weights corresponding to their positions. Adding the weighted values yields the coding value of the center pixel.
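As an illustration, the encoding just described can be sketched for the common 3x3 neighborhood. This is a minimal sketch: the clockwise neighbor ordering, the convention sign(x) = 1 for x >= 0, and the sample patch are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def basic_lbp_code(patch):
    """Basic 3x3 LBP: binarize the 8 neighbours against the centre value,
    then weight each bit by a power of two according to its position."""
    center = patch[1, 1]
    # neighbours taken clockwise from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if g >= center else 0 for g in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[90, 80, 30],
                  [70, 60, 20],
                  [65, 55, 10]])
print(basic_lbp_code(patch))  # bits 1,1,0,0,0,0,1,1 -> 1 + 2 + 64 + 128 = 195
```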

The basic LBP produces many possible patterns with a crude quantization, which leads to high computational complexity. Through extensive statistical analysis, Ojala et al. found that local binary patterns have strong regularity: a few specific patterns account for a large proportion of all LBP codes. These patterns share a similar structure in which the number of 0/1 or 1/0 transitions between adjacent neighborhood positions is very small. Therefore, Ojala et al. defined such patterns as 'uniform' patterns and proposed the RIU-LBP operator so that a statistical distribution over uniform patterns can be computed efficiently. Unlike the basic LBP, RIU-LBP uses a circular neighborhood as the center pixel's coding unit and focuses on pixels in the uniform mode [12, 13]. Equation (1) shows how to determine whether a local binary pattern is uniform:
U(\mathrm{LBP}_{P,R}) = \left| \mathrm{sign}(g_{P-1} - g_c) - \mathrm{sign}(g_0 - g_c) \right| + \sum_{p=1}^{P-1} \left| \mathrm{sign}(g_p - g_c) - \mathrm{sign}(g_{p-1} - g_c) \right|
(1)
where P is the number of pixels around the center pixel, R is the radius of the circular neighborhood, g_c is the gray value of the center pixel, g_p is the gray value of the p-th pixel around the center in the specified order, p ∈ [0, P - 1], and sign(x) = 1 if x ≥ 0 and 0 otherwise. When U(LBP_{P,R}) ≤ 2, the pattern is determined to be uniform [13], i.e., a pattern is uniform when the number of 0/1 or 1/0 transitions between adjacent neighborhood positions is at most two. The RIU-LBP operator is then defined as
\mathrm{LBP}^{riu2}_{P,R} = \begin{cases} \sum_{p=0}^{P-1} \mathrm{sign}(g_p - g_c), & \text{if } U(\mathrm{LBP}_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}
(2)
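Given the binarized neighborhood bits, Eqs. (1) and (2) can be sketched as follows. This is a minimal sketch: the circular wrap-around is handled via Python's negative indexing, and the two example bit patterns are illustrative choices that reproduce the uniform value 3 and the non-uniform label 9 discussed for Figure 1 below.

```python
def uniformity(bits):
    """U of Eq. (1): number of 0/1 transitions in the circular bit pattern."""
    return sum(abs(bits[p] - bits[p - 1]) for p in range(len(bits)))

def riu2(bits):
    """RIU-LBP of Eq. (2): the count of 1-bits when the pattern is uniform
    (U <= 2), otherwise the single non-uniform label P + 1."""
    P = len(bits)
    return sum(bits) if uniformity(bits) <= 2 else P + 1

print(riu2([1, 1, 1, 0, 0, 0, 0, 0]))  # uniform (U = 2): value 3
print(riu2([1, 0, 1, 0, 1, 0, 0, 0]))  # non-uniform (U = 6): value P + 1 = 9
```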

2.2 Improved rotation invariant uniform local binary pattern

The sign operator sign(g_p - g_c) dominates the result of RIU-LBP. If a slight change happens in the gray values, LBP^{riu2}_{P,R} might change dramatically, as shown in Figure 1. In other words, LBP^{riu2}_{P,R} is easily affected by slight variations of the gray values of the neighborhood pixels. Comparing the left columns of Figure 1A,B, the difference between the two regions is very small, but their LBP^{riu2}_{8,1} values are quite different: Figure 1A is a uniform pattern with LBP^{riu2}_{8,1} value 3, while Figure 1B is a nonuniform pattern with LBP^{riu2}_{8,1} value 9.
Figure 1

Two similar regions and their binarization results. The pixels in (A) and (B) are identical except the pixel in the second row, first column. Their binarization results are similar, but their values of LBP^{riu2}_{P,R} are different.

From this example, we can see that the RIU-LBP operator defined by (2) cannot express the similarity of two regions once the gray values of neighborhood pixels are very close to the center pixel's gray value. In other words, LBP^{riu2}_{P,R} is sensitive to illumination variation. Unfortunately, this situation is common in remote sensing images: high-resolution remote sensing images have rich texture information, and the gray values of different image regions with the same texture are vulnerable to such disturbance.

To solve this problem, we add a threshold T to make LBP^{riu2}_{P,R} stable. The improved RIU-LBP operator is defined as
\mathrm{LBP}^{riu2,T}_{P,R} = \begin{cases} \sum_{p=0}^{P-1} \mathrm{sign}(g_p - g_c - T), & \text{if } U(\mathrm{LBP}^{T}_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases}
(3)
where
U(\mathrm{LBP}^{T}_{P,R}) = \left| \mathrm{sign}(g_{P-1} - g_c - T) - \mathrm{sign}(g_0 - g_c - T) \right| + \sum_{p=1}^{P-1} \left| \mathrm{sign}(g_p - g_c - T) - \mathrm{sign}(g_{p-1} - g_c - T) \right|
(4)

The notation in (3) and (4) is the same as in (1) and (2). The original LBP^{riu2}_{P,R} in (2) is a special case of LBP^{riu2,T}_{P,R} with T = 0. Thus, LBP^{riu2,T}_{P,R} has a more comprehensive ability to represent texture features.
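A sketch of the improved operator of Eqs. (3)-(4), applied directly to a list of neighbor gray values. The sample values below are made up for illustration, and sign(x) = 1 for x >= 0 is assumed; with T = 0 the function reduces to the original RIU-LBP of Eq. (2).

```python
def improved_riu2(neighbours, center, T=20):
    """Improved RIU-LBP (Eqs. 3-4): a neighbour is binarized to 1 only when
    it exceeds the centre gray value by at least the margin T."""
    bits = [1 if g - center - T >= 0 else 0 for g in neighbours]
    P = len(bits)
    U = sum(abs(bits[p] - bits[p - 1]) for p in range(P))  # Eq. (4)
    return sum(bits) if U <= 2 else P + 1                  # Eq. (3)

# neighbours that differ from the centre (gray value 60) only by small noise
neigh = [65, 62, 58, 55, 61, 59, 63, 57]
print(improved_riu2(neigh, 60, T=0))   # T = 0: noise flips bits, non-uniform, 9
print(improved_riu2(neigh, 60, T=20))  # the margin absorbs the noise: uniform, 0
```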

To illustrate the advantages of LBP^{riu2,T}_{P,R} for describing texture features intuitively, two examples are shown in Figures 2 and 3. The parameters are set to P = 8, R = 1, and T = 20. Figure 2 illustrates the binarization result of LBP^{riu2,T}_{8,1} on the same two similar regions as in Figure 1. Figure 3 shows two regions with different texture characteristics, illustrating another advantage of the LBP^{riu2,T}_{8,1} operator.
Figure 2

Another two similar regions and their binarization results. The original regions in (A) and (B) are the same as those in Figure 1A,B. The binarization results of the two regions are now identical, whereas they differ in Figure 1.

Figure 3

Two different regions. The two regions have different texture characteristics: (A) is relatively flat, while (B) is relatively rough. The LBP^{riu2}_{8,1} values of both (A) and (B) are 4, while the LBP^{riu2,20}_{8,1} values of (A) and (B) are 0 and 8, respectively.

As shown in Figures 1 and 2, for the two similar local texture regions, their LBP^{riu2}_{8,1} values are different, but their LBP^{riu2,20}_{8,1} values are the same; LBP^{riu2,20}_{8,1} is more robust to illumination variation and image noise. From Figure 3, we can see that Figure 3A is relatively flat while Figure 3B is relatively rough. After computing the LBP values with LBP^{riu2}_{8,1} and LBP^{riu2,20}_{8,1}, respectively, the LBP^{riu2}_{8,1} values of Figure 3A,B are the same, but their LBP^{riu2,20}_{8,1} values are different. This example makes it obvious that LBP^{riu2,20}_{8,1} distinguishes different texture features more effectively than LBP^{riu2}_{8,1} does.

According to the analysis and verification above, the improved RIU-LBP operator has two merits. The first is the ability to suppress the disturbance caused by small gray value variations within texture regions, which improves the robustness of the description of regions sharing the same texture. The second is strong discrimination between different texture regions, such as flat and rough regions. These two advantages benefit texture image segmentation and classification.

3 High-resolution remote sensing image segmentation

3.1 Statistical region merging

The SRM algorithm is an effective image segmentation algorithm proposed by Nock and Nielsen [8]. It is based on region growing and merging techniques, with statistical tests determining whether regions are merged. SRM-based image segmentation involves two crucial steps: the merging predicate and the order in which region merging is tested.

Let I be the observed image and |I| the number of pixels in the image, where |·| denotes cardinality. Let R_l be the set of regions with l pixels. The merging predicate on two candidate regions R and R' is defined as
P(R, R') = \begin{cases} \text{true}, & \text{if } |\bar{R} - \bar{R}'| \le \sqrt{b^2(R) + b^2(R')} \\ \text{false}, & \text{otherwise} \end{cases}
(5)
b(R) = g \sqrt{\frac{1}{2Q|R|} \ln \frac{|R_{|R|}|}{\delta}}
(6)

where δ = 1/(6|I|²), |R| is the number of pixels in region R, g is the number of gray levels (usually 256 in remote sensing images), and Q is the spectral scale factor that weighs the possibility of merging two regions based on spectral information; it controls the number of regions in the segmentation result. As remote sensing images have three color channels, two regions are merged when P(R(p), R(p')) of any color channel returns true. Obviously, spectral information is the major factor in predicting region merging in the SRM algorithm.
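For concreteness, the predicate of Eqs. (5)-(6) can be sketched as below. The bound ln|R_l| ≤ min(g, l) ln(l + 1) used to evaluate |R_{|R|}| follows the original SRM paper [8]; the image size and the default Q here are arbitrary illustrative values, not ones used in this paper's experiments.

```python
import math

def merge_threshold(size, Q, g=256, image_pixels=512 * 512):
    """b(R) of Eq. (6) with delta = 1 / (6 |I|^2) and the standard SRM bound
    ln |R_l| <= min(g, l) * ln(l + 1)."""
    log_delta_inv = math.log(6.0 * image_pixels ** 2)   # ln(1 / delta)
    log_R_l = min(g, size) * math.log(size + 1)         # bound on ln |R_l|
    return g * math.sqrt((log_R_l + log_delta_inv) / (2.0 * Q * size))

def spectral_merge(mean1, size1, mean2, size2, Q=32):
    """Merging predicate of Eq. (5) on one colour channel: merge when the
    region means differ by no more than sqrt(b^2(R) + b^2(R'))."""
    b1 = merge_threshold(size1, Q)
    b2 = merge_threshold(size2, Q)
    return abs(mean1 - mean2) <= math.sqrt(b1 ** 2 + b2 ** 2)
```

A larger Q shrinks b(R), so merges become harder and the segmentation finer, matching Q's role as the spectral scale factor.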

If the 4-neighborhood rule is adopted, there are N < 2|I| adjacent pixel pairs in the observed image I. Let S_I be the set of all such pixel pairs in I, and f(p,p') a real-valued function measuring the similarity of the two pixels of a pair (p,p'). In the SRM algorithm, Nock and Nielsen proposed a pre-ordering strategy: first, sort all pairs in S_I in increasing order of f(p,p'), and then traverse this order only once. For each pair (p,p') ∈ S_I, if R(p) ≠ R(p'), where R(p) denotes the region to which pixel p belongs, and P(R(p),R(p')) returns true, then R(p) and R(p') are merged [8].
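The pre-ordering strategy can be sketched with a union-find structure over pixels. This is a simplified illustration: the fixed gradient threshold below stands in for the statistical predicate P(R(p), R(p')), and a single gray channel is assumed.

```python
import numpy as np

def srm_order_and_merge(img):
    """Sketch of Nock & Nielsen's pre-ordering: sort 4-neighbour pixel pairs
    by |g(p) - g(p')|, traverse once, and merge regions with union-find.
    Returns the number of regions found."""
    h, w = img.shape
    parent = list(range(h * w))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # collect horizontal and vertical adjacent pairs with their gray difference
    pairs = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                pairs.append((abs(int(img[y, x]) - int(img[y, x + 1])),
                              y * w + x, y * w + x + 1))
            if y + 1 < h:
                pairs.append((abs(int(img[y, x]) - int(img[y + 1, x])),
                              y * w + x, (y + 1) * w + x))
    pairs.sort()  # increasing f(p, p'): most similar pairs first

    for diff, p, q in pairs:
        rp, rq = find(p), find(q)
        if rp != rq and diff <= 10:  # placeholder for the predicate P
            parent[rq] = rp          # merge the two regions
    return len({find(i) for i in range(h * w)})

# toy image: two bright and two dark pixels yield two regions
print(srm_order_and_merge(np.array([[10, 12], [200, 201]])))
```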

3.2 High-resolution remote sensing image segmentation based on improved RIU-LBP and SRM

A good segmentation of high-resolution remote sensing images should take both spectral and texture information into account. However, the SRM algorithm mainly exploits spectral information and ignores useful texture features. To make up for this deficiency, we introduce texture information into the SRM algorithm, represented by our proposed LBP descriptor.

To compare the texture similarity of two regions, we choose the Bhattacharyya distance. For an image processed by the LBP operator, the Bhattacharyya distance of two regions R and R' is
J_B(R, R') = -\ln \sum_{i \in E} \sqrt{p_R(i)\, q_{R'}(i)}
(7)
where p_R(i) and q_{R'}(i) are the probabilities that a pixel's improved RIU-LBP value equals i in regions R and R', respectively, and E is the set of possible improved RIU-LBP values. As the Bhattacharyya distance increases, the similarity between the two candidate regions decreases. The merging criterion based on the Bhattacharyya distance is defined as
P_B(R, R') = \begin{cases} \text{true}, & \text{if } J_B(R, R') \le M \\ \text{false}, & \text{otherwise} \end{cases}
(8)

where J_B(R,R') is the Bhattacharyya distance between regions R and R', and M is the texture scale parameter controlling the merging scale based on texture information: the greater M is, the more easily two regions satisfy the texture criterion. If P_B(R,R') is true, regions R and R' are merged. As in the SRM algorithm, two regions can be merged if P_B(R(p),R(p')) of any color channel returns true.
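Eqs. (7)-(8) can be sketched directly from two regions' lists of improved RIU-LBP values. This is a minimal sketch: `labels` enumerates the ensemble E, and the example values and default M are illustrative.

```python
import math
from collections import Counter

def bhattacharyya(lbp_r, lbp_rp, labels):
    """J_B of Eq. (7): distance between the normalised LBP-value histograms
    of regions R and R' over the label ensemble E."""
    p, q = Counter(lbp_r), Counter(lbp_rp)   # missing labels count as zero
    n, m = len(lbp_r), len(lbp_rp)
    bc = sum(math.sqrt((p[i] / n) * (q[i] / m)) for i in labels)
    return -math.log(bc) if bc > 0 else float('inf')

def texture_merge(lbp_r, lbp_rp, labels, M=0.12):
    """P_B of Eq. (8): merge when the Bhattacharyya distance is at most M."""
    return bhattacharyya(lbp_r, lbp_rp, labels) <= M
```

Identical label histograms give J_B = 0 (always merged); regions with disjoint label sets give an infinite distance (never merged).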

The SRM algorithm mainly employs spectral information for region merging. However, many regions with similar spectral features are in fact different because they exhibit quite different textures; for such regions, the SRM algorithm is prone to over-merging. To solve this problem, we introduce texture information into the region merging process. With this improvement, the condition for merging two regions is stricter: merged regions must satisfy both the spectral similarity condition (5) and the texture similarity condition (8). The two merging scale factors, the spectral scale factor Q and the texture scale factor M, jointly decide the final number of regions in the segmentation result. The algorithm flow is shown in Figure 4. First, the similarity function value of every adjacent pixel pair is calculated. Then, all pixel pairs are sorted in increasing order of similarity function value and processed in this order, checking whether the two pixels of each pair come from the same region. If they come from different regions, we decide whether the merging criterion is satisfied. The merging criterion distinguishes two cases by the size of the two regions, because texture information can only be extracted stably and accurately from a sufficiently large region. If the numbers of pixels in both regions are greater than the preset value N_T, both spectral similarity and texture similarity are compared; otherwise, only spectral similarity is taken into account. The segmentation result is obtained after all pixel pairs have been processed.
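The size-dependent decision in the flow of Figure 4 can be sketched as follows. The region representation (dicts with `mean`, `size`, and a normalised LBP histogram `hist`) and the fixed mean-difference threshold standing in for the spectral test of Eq. (5) are simplifying assumptions for illustration.

```python
import math

def hist_distance(p, q):
    """Bhattacharyya distance between two normalised histograms (lists)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return -math.log(bc) if bc > 0 else float('inf')

def should_merge(r1, r2, N_T=64, M=0.12):
    """Combined criterion: for small regions the texture histogram is
    unreliable, so only the spectral test applies; large regions must
    pass both the spectral and the texture test."""
    spectral_ok = abs(r1['mean'] - r2['mean']) <= 15.0  # placeholder for Eq. (5)
    if min(r1['size'], r2['size']) <= N_T:
        return spectral_ok
    return spectral_ok and hist_distance(r1['hist'], r2['hist']) <= M
```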
Figure 4

The algorithm flow of the proposed algorithm.

4 The experimental results

We choose two high-resolution remote sensing images as experimental data. The first, shown in Figure 5, was acquired in late October 2009; it has three bands with a resolution of 0.5 m. The second, shown in Figure 6, was taken on January 2, 2010, also with a resolution of 0.5 m.
Figure 5

GeoEye-1 image of Fairfax in Virginia.

Figure 6

GeoEye-1 image of San Diego, California Town and Country Grand Hotel.

4.1 Comparison of RIU-LBP and improved RIU-LBP

To verify the texture representation ability of the improved RIU-LBP, we compare its discrimination ability with that of the RIU-LBP. Two images cropped from Figure 5 are selected as experimental images. The first cropped image, used to verify discrimination, is shown in Figure 7; the second, used to compare stability, is shown in Figure 8.
Figure 7

The cropped remote sensing image. The two regions marked by the red and green rectangles are heterogeneous. They are used to compare the texture description ability of RIU-LBP and the improved RIU-LBP.

Figure 8

Another cropped remote sensing image. The two regions marked by the red and green rectangles are homogenous. They are used to compare the texture description ability of RIU-LBP and the improved RIU-LBP.

First, two regions of different types, marked by the red rectangle and the green rectangle, are selected in Figure 7. The region inside the red rectangle contains only woods, and the other contains only lawn. Next, we process the two regions with LBP^{riu2}_{P,R} and LBP^{riu2,T}_{P,R}, respectively, and then compute the corresponding LBP value distributions (Figure 9) and Bhattacharyya distances (Figure 10). The threshold in LBP^{riu2,T}_{P,R} is set to 15 in Figure 9. In these experiments, P = 8 and R = 1. As Figure 9 shows, the LBP^{riu2}_{P,R} value distributions of the heterogeneous regions are similar, while the LBP^{riu2,T}_{P,R} distributions differ. Hence LBP^{riu2,T}_{P,R} distinguishes heterogeneous regions better than LBP^{riu2}_{P,R} does.
Figure 9

The LBP^{riu2}_{P,R} value distribution (A) and LBP^{riu2,T}_{P,R} value distribution (B) of the two heterogeneous regions in Figure 7. Green bars represent the LBP value distribution of the region marked by the green rectangle in Figure 7, and red bars represent that of the region marked by the red rectangle in Figure 7.

Figure 10

The Bhattacharyya distance of the two regions in Figure 7 after processing with LBP^{riu2,T}_{P,R} and LBP^{riu2}_{P,R}.

Since the distribution in Figure 9 is obtained at T = 15, the result might not be convincing in general. To further validate the ability to distinguish two heterogeneous regions, distributions of LBP^{riu2,T}_{P,R} are computed with varying thresholds, and the Bhattacharyya distance is used to measure the difference between the two regions. Since LBP^{riu2,T}_{P,R} can be viewed as a function of the threshold T, the resulting Bhattacharyya distance can also be considered a function of T. Since LBP^{riu2}_{P,R} is constant once the two regions are specified, we likewise regard it as a (constant) function of T for convenient comparison. Figure 10 shows the Bhattacharyya distance between the two regions of Figure 7 after computing their LBP^{riu2}_{P,R} and LBP^{riu2,T}_{P,R} features with different thresholds. From Figure 10, we find that the Bhattacharyya distance of LBP^{riu2,T}_{P,R} is greater than that of LBP^{riu2}_{P,R} for 1 ≤ T ≤ 60. According to the analysis in Section 3.2, the LBP value distributions of two different regions are more dissimilar under the LBP^{riu2,T}_{P,R} operator, and thus LBP^{riu2,T}_{P,R} distinguishes the lawn and woods better. Therefore, LBP^{riu2,T}_{P,R} discriminates different texture regions better.

Similar to the experiment verifying the discrimination of LBP^{riu2,T}_{P,R}, two homogenous regions, marked by a red rectangle and a green rectangle, are selected in Figure 8 to compare stability on homogenous regions. Both regions are lawn. We process the two regions with the same procedure as before. The final LBP value distributions are presented in Figure 11, and the Bhattacharyya distance is shown in Figure 12. The threshold in LBP^{riu2,T}_{P,R} is again set to 15. As Figure 11 shows, the LBP^{riu2}_{P,R} and LBP^{riu2,T}_{P,R} value distributions are both similar for homogenous regions. To further test the stability in describing homogenous regions, the Bhattacharyya distance is compared as well. In Figure 12, the distances with LBP^{riu2,T}_{P,R} are lower than those with LBP^{riu2}_{P,R} for 18 ≤ T ≤ 27 and T ≥ 32, which shows that with a proper threshold, the representation of homogenous regions by LBP^{riu2,T}_{P,R} is more stable than that by LBP^{riu2}_{P,R}.
Figure 11

The LBP^{riu2}_{P,R} value distribution (A) and LBP^{riu2,T}_{P,R} value distribution (B) of the two homogenous regions in Figure 8. Green bars represent the LBP value distribution of the region marked by the green rectangle in Figure 8, and red bars represent that of the region marked by the red rectangle in Figure 8.

Figure 12

The Bhattacharyya distance between the two regions in Figure 8 after processing with LBP^{riu2,T}_{P,R} and LBP^{riu2}_{P,R}.

From these experiments, we conclude that the texture representation ability of LBP^{riu2,T}_{P,R} is better than that of LBP^{riu2}_{P,R}, in terms of both stability on homogenous regions and discrimination of heterogeneous regions. Recalling the analysis in Section 2.2, the essential reason is that our proposed LBP operator suppresses small variations of the gray values around the center pixel and strongly discriminates flat from rough regions.

4.2 High-resolution remote sensing image segmentation experiments

The aim of the experiments below is to compare the segmentation performance of the proposed method with the SRM algorithm [8] and ENVI 5.0. ENVI 5.0 is the latest version of the remote sensing image processing software developed by Exelis Visual Information Solutions. It includes a feature extraction module with remote sensing image segmentation tools. The segmentation method in ENVI 5.0 consists of two main steps: an edge-based segmentation followed by a Full Lambda Schedule [15] merging technique.

To measure segmentation quality objectively, we choose two indexes. The first is the pixel segmentation error rate E = N_error / N_sum × 100%, where N_error is the number of misclassified pixels and N_sum is the total number of pixels to be classified in the image. The second is the region ratio RR = N_R / N_ref, where N_R is the number of regions in the final segmentation image and N_ref is the number of regions in the reference segmentation image. For a fixed error rate E, an RR closer to 1 indicates a better segmentation: RR > 1 implies over-segmentation (too many regions in the result), and RR < 1 implies under-segmentation (different regions merged into one).
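The two indexes are straightforward to compute; the counts below are hypothetical, chosen only to illustrate the formulas.

```python
def error_rate(n_error, n_sum):
    """Pixel segmentation error rate E = N_error / N_sum * 100%."""
    return 100.0 * n_error / n_sum

def region_ratio(n_regions, n_reference):
    """RR = N_R / N_ref; RR > 1 means over-segmentation, RR < 1 under-segmentation."""
    return n_regions / n_reference

print(error_rate(1421, 10000))  # 14.21 percent of pixels misclassified
print(region_ratio(120, 100))   # 1.2: over-segmentation
```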

Two high-resolution remote sensing images are used for the segmentation comparison. The first is Figure 5. To evaluate the segmentation result, a reference segmentation image is needed; since the detail in high-resolution remote sensing images is quite clear, the reference image is obtained by manual segmentation and shown in Figure 13. ENVI 5.0 works best with a segment factor of 48.2 and a merge factor of 98.0; the result is given in Figure 14. The SRM algorithm works best with a scale factor Q of 250; its result is shown in Figure 15. Our proposed algorithm gives its best result with Q = 200, the LBP operator LBP^{riu2,15}_{8,1}, and M = 0.12; the result is shown in Figure 16. The indexes E and RR of the three methods are listed in Table 1. The second image is Figure 6. The segmentation result of ENVI 5.0 is shown in Figure 17, with the segmentation scale factor set to 36.8 and the merging scale factor to 98.0. The SRM algorithm with scale factor Q = 1,000 gives the result in Figure 18. Our proposed algorithm uses the LBP operator LBP^{riu2,12}_{8,1}, Q = 100, and M = 0.1; the result is shown in Figure 19.
Figure 13

The reference segmentation result.

Figure 14

The segmentation result of Figure 5 using ENVI 5.0. (A) The boundaries of segmented regions are illustrated by green curves. (B) The segmented regions are labeled in different colors.

Figure 15

The segmentation result of Figure 5 using the SRM algorithm. (A) The boundaries of segmented regions are illustrated by red curves. (B) The segmented regions are labeled in different colors.

Figure 16

The segmentation result of Figure 5 using our proposed algorithm. (A) The boundaries of segmented regions are illustrated by red curves. (B) The segmented regions are labeled in different colors.

Table 1 The detailed results (E and RR) of the three segmentation methods

Algorithm             Pixel segmentation error rate (%)   Region ratio
ENVI 5.0              14.96                               1.0946
The SRM algorithm     17.65                               1.1222
Our proposed method   14.21                               1.0626

Figure 17

The segmentation result of Figure 6 using ENVI 5.0. (A) The boundaries of segmented regions are illustrated by green curves. (B) The segmented regions are labeled in different colors.

Figure 18

The segmentation result of Figure 6 using the SRM algorithm. (A) The boundaries of segmented regions are illustrated by red curves. (B) The segmented regions are labeled in different colors.

Figure 19

The segmentation result of Figure 6 using our proposed algorithm. (A) The boundaries of segmented regions are illustrated by red curves. (B) The segmented regions are labeled in different colors.

4.2.1 Comparison analysis with ENVI 5.0

As shown in Figures 14 and 16, our proposed algorithm subjectively outperforms ENVI 5.0. In Figure 14, two regions whose spectral information differs from the surrounding lawn are nevertheless segmented into one region with it. This is because ENVI 5.0 does not make full use of spectral and texture information: the edges and the geometric relationships between adjacent regions are the main cues in its segmentation algorithm. Our result in Figure 16 shows better separation of regions with different spectral information, because our algorithm inherits the merits of the original SRM algorithm. The objective indexes also favor our algorithm: its E is the smallest, and its RR is closest to 1. The same conclusion follows from Figures 17 and 19. In Figure 17, many homogenous regions are split into different areas. Moreover, ENVI 5.0 cannot accurately separate the lawn and trees in the top left corner; even with a larger segmentation scale factor, the lawn and tree regions are still mis-merged. In Figure 19, our method separates the lawn and trees into different regions and merges most homogenous areas into one region.

4.2.2 Comparison analysis with the SRM algorithm

As shown in Figures 15 and 16, the SRM algorithm mis-segments several areas, such as the bottom left and top right corners: parts of woods and lawn are merged into one region in the bottom left corner, and parts of woods and road are merged in the top right corner, because they have similar spectral information. The segmentation result in Figure 16 is clearly superior at separating regions with similar spectral but different texture information, because our algorithm applies the improved RIU-LBP to describe texture and adds the adaptive merging criterion to the original SRM algorithm, so both spectral and texture information participate in the region merging decision. Thus, Figure 16, produced by our method, shows the best subjective segmentation result, and the objective indexes agree. From Figures 17 and 18, we observe that the SRM algorithm shares ENVI 5.0's problem of failing to separate the lawn and trees in the top left corner, because the spectral information of the lawn and trees in Figure 6 is very close; a greater segmentation scale factor does not help. Our proposed algorithm works well: in Figure 19, the lawn and trees are separated cleanly, owing to the texture information, which differs distinctly between lawn and trees, and the satisfactory result is obtained without over-segmentation.

5 Conclusions

In this paper, we improve the RIU-LBP descriptor by introducing a threshold into the binarization step. The new LBP operator better describes the texture information of high-resolution remote sensing images. We then propose a segmentation algorithm in which the appropriate merging criterion is selected adaptively according to the nature of each region, making full use of both the spectral and the texture information in high-resolution remote sensing images. The proposed algorithm improves the segmentation accuracy of texture regions and successfully separates regions that have similar spectral information but different texture information. The experimental results demonstrate its effectiveness.

Declarations

Acknowledgements

This research was supported by the National Natural Science Foundation of China (61201271), Specialized Research Fund for the Doctoral Program of Higher Education (20100185120021), and Sichuan Science and Technology Support Program (cooperated with the Chinese Academy of Sciences) (2012JZ0001).

Authors’ Affiliations

(1)
School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 61173, People’s Republic of China

References

  1. Cao Z, Tan Y, Feng J: Segmentation of PolSAR image by using an automatic initialized variational model and a dual optimization approach. EURASIP J. Wireless Commun. Netw. 2013, 1: 1-10.
  2. Tuia D, Muñoz-Marí J, Camps-Valls G: Remote sensing image segmentation by active queries. Pattern Recogn. 2012, 45(6):2180-2192. 10.1016/j.patcog.2011.12.012
  3. Sarkar A, Biswas MK, Kartikeyan B, Kumar V, Majumder KL, Pal DK: A MRF model-based segmentation approach to classification for multispectral imagery. IEEE Trans. Geosci. Remote Sens. 2002, 40(5):1102-1113. 10.1109/TGRS.2002.1010897
  4. Mitra P, Uma Shankar B, Pal SK: Segmentation of multispectral remote sensing images using active support vector machines. Pattern Recogn. Lett. 2004, 25(9):1067-1074. 10.1016/j.patrec.2004.03.004
  5. Paclik P, Duin RPW, van Kempen GMP, Kohlus R: Segmentation of multi-spectral images using the combined classifier approach. Image Vis. Comput. 2003, 21(6):473-482. 10.1016/S0262-8856(03)00013-1
  6. Unser M: Texture classification and segmentation using wavelet frames. IEEE Trans. Image Process. 1995, 4(11):1549-1560. 10.1109/83.469936
  7. Clausi A, Deng H: Design-based texture features fusion using Gabor filters and co-occurrence probabilities. IEEE Trans. Image Process. 2005, 14(7):925-936.
  8. Nock R, Nielsen F: Statistical region merging. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26(11):1452-1458. 10.1109/TPAMI.2004.110
  9. Li HT, Gu HY, Han YS, Yang JH: An efficient multiscale SRMMHR (Statistical Region Merging and Minimum Heterogeneity Rule) segmentation method for high-resolution remote sensing imagery. IEEE J. Selected Topics in Appl. Earth Observations and Remote Sensing 2009, 2(2):67-73.
  10. Wang XT, Wu JT: Remote sensing image segmentation based on statistical region merging and nonlinear diffusion. In Proceedings of the 2nd International Asia Conference on Informatics in Control, Automation and Robotic. Piscataway: IEEE; 2010:32-35.
  11. Souri AH, Mohammadi A, Sharifi MA: A new prompt for building extraction in high resolution remotely sensed imagery. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Volume XL-1/W3. Vienna: International Society of Photogrammetry and Remote Sensing; 2013.
  12. Ojala T, Pietikäinen M, Harwood D: A comparative study of texture measures with classification based on feature distribution. Pattern Recogn. 1996, 29(1):51-59. 10.1016/0031-3203(95)00067-4
  13. Ojala T, Pietikäinen M, Mäenpää T: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24(7):971-987. 10.1109/TPAMI.2002.1017623
  14. Lin CH, Liu CW, Chen HY: Image retrieval and classification using adaptive local binary patterns based on texture features. IET Image Process. 2012, 6(7):822-830. 10.1049/iet-ipr.2011.0445
  15. Exelis Visual Information Solutions Company: ENVI 5.0 Help. Boulder: Exelis; 2012.

Copyright

© Cheng et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
