Building Change Detection Improvement Using Topographic Correction Models

Abstract

In the change detection application of remote sensing, the variation in the brightness values of pixels/objects in a bi-temporal image pair is commonly used as the indicator of change. However, effects other than an actual change in the objects can cause variations in the brightness values. One of these effects is the illumination difference on steep surfaces, mainly steep roofs of houses in very high resolution images, specifically in off-nadir images. This can introduce false change detection results, and the problem becomes more serious in images with different view-angles. In this study, we propose a methodology to improve building change detection accuracy using imagery taken under different illumination conditions and different view-angles. This is done by using the Patch-Wise Co-Registration (PWCR) method to overcome the misregistration caused by the view-angle difference and by applying Topographic Correction (TC) methods to the pixel intensities to attenuate the effect of illumination angle variation on the building roofs. To select a proper TC method, four of the most widely used correction methods, namely C-correction, Minnaert, Enhanced Minnaert (for slope), and Cosine Correction, are evaluated in this study. The results show that the proposed methodology is capable of improving change detection accuracy. Specifically, correction using C-correction or Enhanced Minnaert improved the change detection accuracy by around 35% in an area with a large number of steep-roof houses imaged under various solar angles.

Citation: Jabari, S. and Zhang, Y. (2017) Building Change Detection Improvement Using Topographic Correction Models. Advances in Remote Sensing, 6, 1-22. doi: 10.4236/ars.2017.61001.

1. Introduction

Numerous papers have been published in the remote sensing literature on building change detection in which the intensity variation on building roofs is taken as the change criterion, since roofs are more easily observable in remote sensing images than other parts of buildings. In most studies in this field, the images used are assumed to be taken under similar imaging conditions regarding view-angles [1] [2] [3] [4] . In these studies, the variation in the brightness values alone [1] [2] [3] [4] or along with elevation information is used for change detection [5] [6] [7] . Recently, there has been rising interest in using multi view-angle images for urban and, specifically, building change detection [8] [9] . Using multi view-angle images for urban change detection has two main challenges: misregistration and intensity difference. The misregistration problem comes from the different layouts of highly elevated objects due to the relief displacement effect and has already been addressed in the literature by methods using the back-projection concept [8] [9] . The intensity difference, on the other hand, comes from causes like seasonal or sensor specification differences and has been addressed by linear methods like Multivariate Alteration Detection (MAD) and non-linear approaches like mutual information. The MAD method uses a linear function to transfer the brightness values to another space in which the difference is highlighted [10] [11] , while the non-linear approaches are based on the image histograms and the information content of the corresponding pixels [12] . In both cases, the variations in the brightness values play an important role in change detection. However, other than changes, the variation of pixel brightness values in bi-temporal images also depends on factors such as solar angles and topographic effects, which cannot be comprehensively addressed by the linear or non-linear approaches.

The irradiance received from a certain point on the ground surface depends on four angles, which together determine the solar illumination angle: solar azimuth, solar elevation, ground slope, and ground aspect. Figure 1 depicts a 2D schema of the solar illumination angle variation on a simple pitched roof. On steep surfaces, the solar illumination angle can vary not only from one side of a roof to the other but also from one image to another, depending entirely on the shape of the roofs and the solar angles of the images. This variation results in differences in the radiance detected by the imaging sensors from corresponding objects on the ground [13] .

Figure 2 depicts an example of the intensity variation on steep roofs in bi-temporal images due to illumination angle differences. Steep roof surfaces, largely found in areas with high amounts of rain/snowfall, can cause variations in the associated pixel brightness values due to differences in the illumination angles, and not due to real changes in the building roofs. This can adversely affect the building change detection results in either linear or non-linear methods.

The main purpose of this study is to present a change detection framework to improve the building change detection using multi view-angle images. To do this, the PWCR method is used to overcome the misregistration problem of such

Figure 1. Variation of solar illumination angle (γ), due to solar radiation angle (θi) and roof slope (θp).

Figure 2. Intensity variation on different parts of steep roofs in two images acquired from the same area. In image (a) the northern parts of the roofs are brighter compared to the other parts, while in (b) the eastern parts are brighter.

images. Also, to compensate for the intensity difference due to different solar illumination angles, the TC methods are used to attenuate the brightness value variations caused by the roof shapes.

In this study, after registering the images in a patch-wise manner, the slope and aspect layers are generated using one single DSM. Since direct registration between the slope and aspect layers, which are generated in the object space, and the images, which are acquired in off-nadir conditions, is not possible, the layers are projected into the image spaces using the associated orientation parameters. Then, the TC methods are applied to the brightness values of the pixels in each patch. Finally, to compensate for any spectral and radiometric differences between the bi-temporal images, the MAD transform is performed to detect the changed buildings.

The effect of topography, i.e., steep surfaces in our case, on the pixel brightness values of remotely sensed images has been studied for over three decades, since this effect has a significant impact on the quantitative analysis of remotely sensed data [14] . There are numerous TC methods in the literature that compensate for differences in solar illumination due to terrain shape irregularities when dealing with large-scale objects such as mountains and hills, generally in low to moderate resolution satellite images [15] . However, to the best of our knowledge, TC has never been used as a tool for improving change detection results in very high resolution (VHR) images.

The hypothesis of this study is whether intensity correction based on TC methods is proper for building roof change detection, or whether the methods manipulate the brightness values so excessively that the change detection results are negatively affected. Thus, we tested four of the most widely used correction methods (C-correction, Minnaert, Enhanced Minnaert, and Cosine Correction) and compared the respective results to the original brightness values without any corrections to see which ones perform better in building change detection.

When it comes to building change detection, it is important to identify what sort of changes are of concern. Generally, remote sensing image-based change detection methods can detect changes in the spectral properties of the building roofs, such as a demolished building, or changes in the shape of the buildings, such as a widened building. In this study, our interest is in detecting the spectral changes.

2. Study Area

We tested the proposed change detection framework on three images from the city of Fredericton, NB, Canada. The area is full of steep-roof buildings with varying brightness values in different images, which creates problems for typical change detection methods. The bi-temporal datasets are selected from two Worldview-2 images acquired in 2011 and 2013 with off-nadir view angles of 15˚ and 27˚, respectively, as well as one orthophoto of the area acquired in 2012. The solar azimuth and zenith angles of the satellite images are similar, while the solar zenith angle of the orthophoto is different (Table 1), causing variations in the brightness values of the urban objects. Figure 8 (Row 1) depicts a sample building in the area and illustrates how differently the same buildings appear in the different imagery.

We made three bi-temporal combinations of the images, as presented in Table 2. The Dataset C1 images have similar solar angles, while in Datasets C2 and C3 the solar zenith angles differ greatly.

Table 1. Solar zenith and azimuth angles of the imagery used in this study.

Table 2. Bi-temporal imagery combinations used in this study.

The orthophoto used in this study is generated using an Applanix digital camera with four bands: Near-Infrared, Red, Green, and Blue (NRGB bands). The WV-2 satellite images are acquired in 8 multispectral bands; however, for the sake of band similarity, only 4 bands are selected from the WV-2 images. The selected bands are NIR2, Red, Green, and Blue, which correspond to the NRGB bands of the airborne camera (Table 3).

1 http://www.fredericton.ca/en/citygovernment/Catalogue.asp (last accessed May 2015).

The satellite images used in this study are already corrected for atmospheric effects. We also used a DSM of the area generated from LiDAR with 0.5 m accuracy, which is consistent with the 0.5 m spatial resolution of the WV-2 images. The 15 cm pixels of the Orthophoto 2012 are resampled to 0.5 m using bilinear interpolation. We also used a GIS building-border layer generated by the City of Fredericton municipality1 for identifying buildings.

The whole process implemented in this paper was developed in the MATLAB environment.

3. Methodology

In the presented work, as illustrated by the flowchart in Figure 3, the bi-temporal images are first co-registered. Since the WV-2 images are not acquired under close-to-nadir conditions, the PWCR method is used [9] . Next, the slope and aspect layers are generated from the DSM. To co-register these two layers to the images, they are projected into the image spaces. After that, the intensities of the image pixels are corrected using the TC methods.

Table 3. The NRGB bands used in datasets C2 and C3. The NRGB bands of the WV-2 images are very close to those of the digital camera.

Figure 3. Flowchart of the presented work.

Finally, the spectral comparison is performed by applying the MAD transform to the corrected brightness values to produce the change criteria from which the change map is generated. In this section, the major steps illustrated in the flowchart of Figure 3 are explained in detail.

3.1. Patch-Wise Co-Registration

An image is a 2D instance of the corresponding 3D ground space. Thus, a global generic co-registration of images acquired under different view-angles or different projection models, especially in urban areas, is not possible. This is mainly because of the varying effect of relief displacement in different images. A solution for the co-registration of VHR images, called the Patch-Wise Co-Registration (PWCR) method, is presented in [9] . This method first divides one of the bi-temporal images (the base image) into patches; in this study, to avoid the problems caused by image segmentation, a GIS layer containing building borders is used to make the patches, so that each building is considered a patch. Then, using the RPCs (Rational Polynomial Coefficients) of the satellite images, every pixel in the DSM is projected to the image spaces. This indirectly relates the corresponding spots in the bi-temporal imagery, and patches are transferred from the base image to the other image (the target image). Thus, instead of establishing a global co-registration, such as a polynomial registration, the corresponding patches of the base image are produced in the target image.

In the PWCR, for every pixel $j$ ($[X\,Y\,Z]_j$) in the DSM,

$$S_k^2 = \left\{ G_2([X\,Y\,Z]_j) \mid G_1([X\,Y\,Z]_j) \in S_k^1 \right\} \quad (1)$$

where $S_k^1$ is the $k$-th patch in the base image and $S_k^2$ is the corresponding patch in the target image. $G$ is the projection model that transfers object coordinates to the image space, given by the Rational Function Model (RFM) equations (Equation (2)) for satellite imagery. However, in datasets C2 and C3, since the base image is an orthophoto already co-registered to the DSM, there is no need to apply $G_1$.

The RFM equations are

$$x = \frac{P_1(X,Y,Z)}{P_2(X,Y,Z)}, \qquad y = \frac{P_3(X,Y,Z)}{P_4(X,Y,Z)},$$
$$P(X,Y,Z) = \sum_{c=0}^{m} \sum_{b=0}^{m} \sum_{a=0}^{m} A_{a,b,c}\, X^a Y^b Z^c \quad (2)$$

where $x$ and $y$ are normalized image coordinates, $X$, $Y$, and $Z$ are normalized ground coordinates, and $m$ is generally set to 3 [18] .

If the RPCs of the images are not error-free, or so-called bias-compensated [19] , a conformal transformation is required to fit the patches to their original places (refer to [9] for more information).
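As an illustration, the following is a minimal MATLAB sketch of the projection operator $G$ of Equations (1) and (2) for a single DSM point. The `rpc` struct, its field names, and the monomial ordering are our own assumptions; actual RPC metadata defines the exact coefficient order and the normalization constants.

```matlab
% Minimal sketch of the RFM projection G (Equations (1)-(2)) for one DSM
% point. The rpc struct and its field names are hypothetical placeholders.
function [x, y] = rfm_project(rpc, X, Y, Z)
    % Normalize ground coordinates with the RPC offsets/scales
    Xn = (X - rpc.lonOff) / rpc.lonScale;
    Yn = (Y - rpc.latOff) / rpc.latScale;
    Zn = (Z - rpc.hgtOff) / rpc.hgtScale;

    t = rfm_terms(Xn, Yn, Zn);          % the 20 cubic monomials of P(X,Y,Z)
    xn = (rpc.a' * t) / (rpc.b' * t);   % x = P1/P2, Equation (2)
    yn = (rpc.c' * t) / (rpc.d' * t);   % y = P3/P4, Equation (2)

    % De-normalize to pixel coordinates
    x = xn * rpc.sampScale + rpc.sampOff;
    y = yn * rpc.lineScale + rpc.lineOff;
end

function t = rfm_terms(X, Y, Z)
    % Monomials X^a Y^b Z^c with a+b+c <= 3 (m = 3), in one common RPC
    % ordering; vendor RPC files define the order actually used
    t = [1; X; Y; Z; X*Y; X*Z; Y*Z; X^2; Y^2; Z^2; ...
         X*Y*Z; X^3; X*Y^2; X*Z^2; X^2*Y; Y^3; Y*Z^2; X^2*Z; Y^2*Z; Z^3];
end
```

Projecting every DSM cell that falls inside a base-image patch $S_k^1$ through the target image's RFM then yields the corresponding patch $S_k^2$, as prescribed by Equation (1).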

3.2. Slope and Aspect Calculation

The terrain slope ($\theta_p$) and aspect ($\varphi_o$) angles are calculated using Equations (3) to (5):

$$\tan\theta_p = \left[ \left( \partial z / \partial x \right)^2 + \left( \partial z / \partial y \right)^2 \right]^{1/2} \quad (3)$$

$$\tan\alpha = \left| \frac{\partial z / \partial x}{\partial z / \partial y} \right|$$
$$\frac{\partial z}{\partial x} = \frac{(Z_{i-1,j-1} + 2Z_{i-1,j} + Z_{i-1,j+1}) - (Z_{i+1,j-1} + 2Z_{i+1,j} + Z_{i+1,j+1})}{8\,\Delta h}$$
$$\frac{\partial z}{\partial y} = \frac{(Z_{i-1,j+1} + 2Z_{i,j+1} + Z_{i+1,j+1}) - (Z_{i-1,j-1} + 2Z_{i,j-1} + Z_{i+1,j-1})}{8\,\Delta h} \quad (4)$$

where $i$ and $j$ are image coordinates in the horizontal and vertical directions, respectively, and $\Delta h$ is the DSM pixel size.

$$\varphi_o = \begin{cases} \alpha, & \partial z/\partial x > 0 \text{ and } \partial z/\partial y > 0 \\ 180^\circ - \alpha, & \partial z/\partial x > 0 \text{ and } \partial z/\partial y < 0 \\ 180^\circ + \alpha, & \partial z/\partial x < 0 \text{ and } \partial z/\partial y < 0 \\ 360^\circ - \alpha, & \partial z/\partial x < 0 \text{ and } \partial z/\partial y > 0 \end{cases} \quad (5)$$

As given in Equation (3) and Equation (4), the slope is generated by partial derivatives that take the elevations of the left and right as well as the top and bottom pixels into account. Hence, sudden jumps of elevation values in the vicinity of the building borders cause spikes in the slope as well as biases in the aspect. This causes unrealistic brightness value corrections around the building borders. Thus, the brightness value corrections are limited either by removing the border pixels from the change detection process or by thresholding the slope values to remain within a realistic range. A sketch of the slope and aspect computation is given below.
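A minimal MATLAB sketch of Equations (3) to (5), assuming the DSM `Z` is a matrix of elevations in meters with square cells of size `dh`; replicate-padding at the borders is our own simplification, since the paper excludes or thresholds border pixels instead:

```matlab
% Slope (Eq. (3)) and aspect (Eqs. (4)-(5)) from a DSM Z with cell size dh.
function [slope, aspect] = slope_aspect(Z, dh)
    Zp = Z([1 1:end end], [1 1:end end]);              % pad by replication
    N  = @(di, dj) Zp((1:size(Z,1)) + 1 + di, (1:size(Z,2)) + 1 + dj);

    dzdx = ((N(-1,-1) + 2*N(-1,0) + N(-1,1)) ...
          - (N( 1,-1) + 2*N( 1,0) + N( 1,1))) / (8 * dh);   % Equation (4)
    dzdy = ((N(-1, 1) + 2*N(0, 1) + N(1, 1)) ...
          - (N(-1,-1) + 2*N(0,-1) + N(1,-1))) / (8 * dh);   % Equation (4)

    slope = atand(hypot(dzdx, dzdy));                  % Equation (3), degrees

    % atan2d folds the four quadrant cases of Equation (5) into one call
    aspect = mod(atan2d(dzdx, dzdy) + 360, 360);
end
```

Note that the 0˚ aspect origin depends on the DSM's row/column orientation, so this convention must be kept consistent with the solar azimuth used later in Equation (6).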

3.3. Back-Projection

True slope and aspect values must be calculated in an orthometric space, not in a projective one. The DSM defines an orthometric space, since it maps objects orthogonally to a reference surface, while images have projective spaces. Therefore, to correct the image brightness values, the slope and aspect values must be transferred to the image spaces. To do so, the slope and aspect values for each DSM pixel are calculated in the object coordinate system and then assigned to the corresponding pixel in the image space using the G operator (Equation (1)). Since there is one slope and one aspect value for every (X, Y, Z) position in the DSM, these values are assigned to the image coordinates (x, y) corresponding to the DSM (X, Y, Z) values using back-projection. Therefore, with the same back-projection used for the PWCR, the slope and aspect angles are assigned to their corresponding image coordinates, as sketched below.
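Under the same assumptions as above (the hypothetical `rfm_project` and `slope_aspect` sketches), the back-projection amounts to a loop over DSM cells; nearest-pixel assignment is our own simplification, since the paper does not prescribe a resampling rule:

```matlab
% Carry per-DSM-cell slope/aspect into the image grid using the same
% projection G as the PWCR. Xg, Yg, Zg are the DSM coordinate grids;
% nRows/nCols is the target image size; slope/aspect come from
% slope_aspect above.
slopeImg  = nan(nRows, nCols);
aspectImg = nan(nRows, nCols);
for p = 1:numel(Zg)
    [x, y] = rfm_project(rpc, Xg(p), Yg(p), Zg(p));
    r = round(y);  c = round(x);
    if r >= 1 && r <= nRows && c >= 1 && c <= nCols
        slopeImg(r, c)  = slope(p);
        aspectImg(r, c) = aspect(p);
    end
end
```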

3.4. Topographic Correction

After the back-projection of the slope and aspect layers to the image spaces, the TCs are applied to the brightness values. However, to use TC methods, the solar azimuth and solar elevation angles are also required. In satellite images, these angles are provided as metadata. In this study, the two angles were unknown for the Orthophoto 2012 image, so we used the shadow lengths to calculate the solar angles. In this section, more details are provided on the TC methods and the solar angle calculations.

3.4.1. TC Methods

The TC methods can be divided into two groups: (1) the methods that do not use a Digital Elevation Model (DEM), and (2) the methods that use a DEM [14] .

Corrections of irradiance in the group-one methods are based on band ratios. In those methods, the reflectance variation due to topography is assumed to change proportionally in two bands, which is not a realistic assumption in most remote sensing images. Also, those methods cause a loss of spectral resolution [14] , which is not suitable for change detection.

The second group of corrections considers the effect of the solar illumination angle on the irradiance received by the imaging sensor. The solar illumination angle is calculated by:

$$\cos\gamma_i = \cos\theta_p \cos\theta_z + \sin\theta_p \sin\theta_z \cos(\varphi_a - \varphi_o) \quad (6)$$

where $\gamma_i$ is the local illumination angle, and $\theta_p$, $\theta_z$, $\varphi_o$, and $\varphi_a$ represent the terrain slope, solar zenith, topographic azimuth (aspect), and solar azimuth angles, respectively. Once the solar illumination angle is computed for each pixel in the image, the flat-normalized reflectance can be estimated; there are different approaches for this in the literature, which are explained here. Correction methods based on the solar illumination angle are sub-categorized into Lambertian (LTC) and Non-Lambertian (NLTC) methods according to whether or not they assume reflectance is independent of the observation and incident angles. The simplest is the Cosine method, which assumes the incident radiation is reflected equally in all directions [14] :

$$\rho_H = \rho_T \left( \frac{\cos\theta_z}{\cos\gamma_i} \right) \quad (7)$$

where $\rho_T$ is the reflectance of an inclined surface and $\rho_H$ is the reflectance of a horizontal surface. This correction neglects the effect of diffuse irradiance and is based only on direct solar illumination. Also, the correction is the same for all wavelengths. In the related literature, overcorrection has been reported in satellite images when using the Cosine method [14] . Figure 4 illustrates how small $\cos\gamma_i$ can become on steep ground slopes for a typical remotely sensed image with a zenith angle around 30˚, across the range of differences between the solar azimuth and aspect angles. In steep-roof buildings, the slope of

Figure 4. The value of $\cos\gamma_i$ versus the difference between the solar azimuth and aspect angles. As can be seen, the inverse of $\cos\gamma_i$, which drives the correction in the Lambertian methods, can become very large.

the roof can reach as high as 50˚ to 60˚, which can result in overcorrection on the roofs.

To overcome the problems caused by unrealistic Lambertian modeling, Non-Lambertian models were developed based on a Bi-Directional Reflectance Distribution Function (BRDF). The BRDF is the ratio of the quantity of light reflected in a certain direction to the amount of light reaching the surface from other directions [16] . However, determination of a BRDF is rather complicated; thus, semi-empirical models were developed based on the non-Lambertian behavior of surfaces. The first semi-empirical model is the Minnaert correction, which was originally proposed for lunar surface studies [17] . The correction formula is

$$\rho_H = \rho_T \left( \frac{\cos\theta_z}{\cos\gamma_i} \right)^{K_k} \quad (8)$$

where $K_k$ is the Minnaert constant for band $k$. To estimate the Minnaert constant for each band, Equation (8) is rewritten as:

$$\ln(\rho_T) = \ln(\rho_H) + K_k \ln\!\left( \frac{\cos\gamma_i}{\cos\theta_z} \right) \quad (9)$$

where $\ln(\rho_H)$ and $K_k$ are the linear regression coefficients of $\ln(\rho_T)$ versus $\ln(\cos\gamma_i / \cos\theta_z)$ for the entire image band.

A backwards radiance correction transformation (BRCT) model was later developed based on the Minnaert model for rugged terrain in mountainous areas, which further includes the terrain slope in the correction [15] ; it is referred to as Enhanced Minnaert in this study. The related formula is given in Equation (10):

$$\rho_H = \rho_T \cos\theta_p \left( \frac{\cos\theta_z}{\cos\theta_p \cos\gamma_i} \right)^{K_k} \quad (10)$$

Here, the Minnaert constant is again calculated from regression parameters, similarly to Equation (9). The other widely used NLTC method is the C-correction model, given by

$$\rho_H = \rho_T \left( \frac{\cos\theta_z + C_k}{\cos\gamma_i + C_k} \right), \qquad C_k = b_k / m_k \quad (11)$$

where $\rho_T = b_k + m_k \cos\gamma_i$; $b_k$ and $m_k$ are the linear regression parameters of $\rho_T$ versus $\cos\gamma_i$ for the entire image.

As can be seen from the equations, unlike the LTC method, the NLTC methods are applied separately to each image band. There are numerous papers comparing the performance of the TC methods for specific applications [13] [16] ; here, we test them for compensating intensity variations to improve building change detection results. A single-band sketch of the four corrections is given below.
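As a sketch, the four corrections can be written for one band as follows, assuming `rhoT` holds the band values, `slopeImg`/`aspectImg` are the back-projected angle layers in degrees, and `sunZen`/`sunAz` are the scalar solar angles in degrees; the variable names are ours, and reusing the same regression-derived `K` for both Minnaert variants is a simplification for brevity:

```matlab
% Per-pixel illumination angle, Equation (6)
cosIL = cosd(slopeImg) .* cosd(sunZen) ...
      + sind(slopeImg) .* sind(sunZen) .* cosd(sunAz - aspectImg);

% Cosine (Lambertian) correction, Equation (7)
rhoCos = rhoT .* (cosd(sunZen) ./ cosIL);

% Minnaert constant K_k from the regression of Equation (9),
% using only valid, sun-lit pixels
v    = cosIL > 0 & rhoT > 0;
coef = polyfit(log(cosIL(v) ./ cosd(sunZen)), log(rhoT(v)), 1);
K    = coef(1);

% Minnaert correction, Equation (8)
rhoMin = rhoT .* (cosd(sunZen) ./ cosIL) .^ K;

% Enhanced Minnaert (slope) correction, Equation (10)
rhoEnh = rhoT .* cosd(slopeImg) ...
       .* (cosd(sunZen) ./ (cosd(slopeImg) .* cosIL)) .^ K;

% C-correction, Equation (11): regress rhoT = b_k + m_k*cos(gamma_i)
cm   = polyfit(cosIL(v), rhoT(v), 1);      % cm(1) = m_k, cm(2) = b_k
C    = cm(2) / cm(1);
rhoC = rhoT .* ((cosd(sunZen) + C) ./ (cosIL + C));
```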

3.4.2. Solar Angles Calculation

To calculate the solar azimuth and elevation for the Orthophoto 2012 image, the direction and length of the shadows of sample elevated buildings were measured. The heights of the buildings were extracted from the DSM. Figure 5 depicts samples of the building shadows used for the angle calculation, and Figure 6 illustrates the relation between the shadows and the angles.

Equation (12) is used to calculate the solar zenith angle ($\theta_z$):

$$\alpha = \tan^{-1}\!\left( \frac{H}{l} \right), \qquad \theta_z = 90^\circ - \alpha \quad (12)$$

where $H$ is the building height, $l$ is the shadow length, and $\alpha$ is the solar elevation angle. Equation (13) shows how to calculate the solar azimuth angle ($\varphi_a$) using the bearing angle ($\theta$):

$$\theta = \tan^{-1}\!\left( \left| \frac{\Delta x}{\Delta y} \right| \right)$$

where $\Delta x$ and $\Delta y$ are the differences between the image coordinates of point B and any point along the direction of the shadow (direction AB):

$$\varphi_a = \begin{cases} 180^\circ + \theta, & \Delta x > 0 \text{ and } \Delta y > 0 \\ 360^\circ - \theta, & \Delta x > 0 \text{ and } \Delta y < 0 \\ \theta, & \Delta x < 0 \text{ and } \Delta y < 0 \\ 180^\circ - \theta, & \Delta x < 0 \text{ and } \Delta y > 0 \end{cases} \quad (13)$$

Once the solar azimuth, solar elevation, ground slope, and ground aspect angles are identified or calculated, the solar illumination angle can be established, after which the LTC and NLTC methods can be applied to the images. A sketch of the shadow-based angle recovery follows.
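For illustration, a small MATLAB sketch of Equations (12) and (13); the measurements below are hypothetical sample values, and the sign convention for (Δx, Δy) must match the image axes assumed in Equation (13):

```matlab
% Solar zenith from a building height H (m) and its shadow length l (m),
% Equation (12); values are hypothetical sample measurements.
H = 12.5;  l = 9.0;
sunElev = atand(H / l);            % solar elevation alpha
sunZen  = 90 - sunElev;            % theta_z = 90 - alpha

% Solar azimuth from the shadow direction A->B, Equation (13)
dx = -14;  dy = 21;                % hypothetical image-coordinate differences
theta = atand(abs(dx / dy));       % bearing angle
if dx > 0 && dy > 0
    sunAz = 180 + theta;
elseif dx > 0 && dy < 0
    sunAz = 360 - theta;
elseif dx < 0 && dy < 0
    sunAz = theta;
else                               % dx < 0 and dy > 0
    sunAz = 180 - theta;
end
```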

For flat roofs, $\theta_p = 0 \Rightarrow \cos\gamma_i = \cos\theta_z$; therefore, in all the TC corrections, $\rho_H = \rho_T$. This means that, with the LTC and NLTC methods, the brightness values of flat roofs are not altered and only the steep ones are modified.

Figure 5. Shadow direction and length in the airborne image used in this study.

Figure 6. Relation between solar azimuth and zenith angles and shadow lengths and direction.

3.5. Spectral Comparison (MAD Transform)

There are various methods for detecting changes between bi-temporal images. Based on the literature, the MAD transform is a robust one [11] and can compensate for linear radiometric or spectral differences between the bi-temporal images [10] . Instead of directly differencing the bi-temporal brightness values, the MAD transform generates a linear transformation of the spectral content of the images and uses the canonical coefficients to maximize the disparity between the brightness values. It transfers the bi-temporal spectral vectors $X = [X_1, \ldots, X_k]^T$ and $Y = [Y_1, \ldots, Y_k]^T$, $k$ being the number of spectral bands, into a new space,

$$K = a^T X - b^T Y \quad (14)$$

in which $a$ and $b$ are coefficient vectors forming linear combinations of the spectral bands. $a$ and $b$ are calculated so that $\operatorname{Var}\{a^T X - b^T Y\}$ is maximized under the constraints $\operatorname{Var}\{a^T X\} = \operatorname{Var}\{b^T Y\} = 1$. $K$ is the adjusted difference between the $X$ and $Y$ vectors. Canonical Correlation Analysis is used to solve this problem. The first set of coefficients provides the highest correlation, which corresponds to the lowest variation $(a_1^T X - b_1^T Y)$, and the $k$-th set generates the highest variance $(a_k^T X - b_k^T Y)$, which is useful in change detection. The dispersion, $D$, can be given as [10]

$$D\{ a^T X - b^T Y \} = 2(I - R) \quad (15)$$

where $I$ is the $k \times k$ identity matrix and $R$ is a $k \times k$ diagonal matrix containing the sorted canonical correlations on the diagonal. The resulting MAD variates are used to study the changes. Usually, if a variate's value falls within $\pm 2\sigma$, $\sigma$ being the standard deviation of that MAD variate, there is no change [20] ; otherwise, a change is reported. A sketch of this procedure follows.
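A minimal sketch of this comparison using MATLAB's `canoncorr` (Statistics and Machine Learning Toolbox); `X` and `Y` are assumed to be n-by-k matrices of per-patch mean brightness values from the two corrected images:

```matlab
% MAD via Canonical Correlation Analysis. canoncorr returns coefficient
% matrices A, B, the canonical correlations r (descending), and the
% unit-variance canonical variates U, V.
[A, B, r, U, V] = canoncorr(X, Y);

MAD   = fliplr(U - V);                 % Equation (14); least-correlated pair first
sigma = sqrt(2 * (1 - fliplr(r)));     % per-variate std. dev., from Equation (15)

% +/- 2*sigma decision rule [20]: flag a patch as changed if any MAD
% variate exceeds its threshold (implicit expansion, R2016b+)
changed = any(abs(MAD) > 2 * sigma, 2);
```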

4. Experimental Results

4.1. TC Results

Figure 7 and Figure 8 depict samples of buildings before and after applying the TC methods. Figure 8 represents the same concept as Figure 7 but with more detail for a single building, to illustrate the shaded areas more clearly. In both figures, the Col. 1, Col. 2, and Col. 3 images display the building roofs, either in their original form or after the different correction methods, for the Orthophoto 2012, WV-2 2011, and WV-2 2013, respectively. Row 1 images display the original brightness values without corrections; Rows 2, 3, 4, and 5 display the C-correction, Enhanced Minnaert, Minnaert, and Cosine corrections, respectively. Row 6 images represent the borders of the buildings in the PWCR, transferred from the Orthophoto 2012 (Col. 1) to the target images (Col. 2 and Col. 3).

As illustrated by the figures, the original brightness values in all three images (Row 1, Col. 1 to 3) show the shading effect on different sides of the roofs, depending on their slope and aspect angles. The difference between the Col. 2 and Col. 3 images of Row 1 is not very large because of their similar imaging angles. Nevertheless, the difference between the Col. 1 image and either the Col. 2 or Col. 3 image in Row 1 is large. The shaded roof sides appear attenuated in the Row 2 to Row 4 images. However, in Row 4 the brightness values are manipulated so excessively that the natural colors are distorted. In Row 5, not only are the brightness values excessively manipulated, but the shading effect is also reversed, meaning the dark and bright sides of the roofs are switched due to the overcorrection of the Cosine method. Row 6 images show the PWCR results. The Orthophoto 2012 image is segmented using the building-border GIS layer as reference; then, the corresponding segments are found in the other images using the PWCR. In the Row 6 images, the building borders are highlighted to show the performance of the PWCR method. The borders play an important role in this study, since if they are not known, building border pixels are falsely corrected due to their high slope and aspect values. As illustrated in the figures, the borders are properly detected with the PWCR method.

4.2. Change Detection Results

After applying the PWCR and TCs to the images, the MAD transform was applied to the mean values of the corresponding patches. In order to simply test how the

Figure 7. Brightness values of building roofs before and after the topographic correction. Column 1: Orthophoto 2012, Column 2: WV-2 2011, and Column 3: WV-2 2013. Row 1: original brightness values without correction; Row 2: C-correction; Row 3: Enhanced Minnaert; Row 4: Minnaert; Row 5: Cosine. As can be seen, different sides of the roof are shaded before any corrections. Row 6 images represent the borders of the buildings in the PWCR, transferred from the GIS building border layer (Col. 1) to the target images (Col. 2 and Col. 3).

Figure 8. An enlarged view of building roof brightness value difference before and after the topographic correction. Column 1: (Orthophoto 2012), Column 2: (WV-2 2011), and Column 3: (WV-2 2013). Row 1: original brightness values without correction; Row 2: C-correction; Row 3: Enhanced Minnaert; Row 4: Minnaert; Row 5: Cosine. As can be seen, different sides of the roof are shaded before any corrections. Row 6 images represent the borders of the buildings in the PWCR transferred from the building border GIS layer (Col. 1) to the target images (Col. 2 and Col. 3).

shaded roof sides in the original images and the TC outputs affect the change detection, a threshold of ±2σ, σ being the standard deviation of each MAD layer, is used on the MAD outputs to separate changed from unchanged buildings, as shown in Canty [20] . Figure 9 shows samples of the produced change maps associated with the area shown in Figure 7 in the three datasets, before and after the TC corrections.

As illustrated in Figure 9, the area does not contain any changed buildings. However, due to the illumination angle differences, the overall brightness values of the buildings vary among the different images. In the figure, the falsely detected changed buildings (false positives) are hachured and the correctly detected unchanged buildings are shown in white. As can be seen, the number of changed buildings (hachured) after C-correction and Enhanced Minnaert is very low compared to the other corrections. In dataset C1, there are also few hachured buildings, which means that the original brightness values were sufficient for change detection without any corrections, owing to the similar solar angles of the images.

The same fact is shown by the confusion matrices associated with the classification of changed (ch) vs. unchanged (uch) buildings in Table 4. In datasets C2 and C3, the number of unchanged buildings labeled as changed (false positives) is high using the original brightness values, Minnaert, and Cosine methods, whereas C-correction and Enhanced Minnaert have low numbers of false positives.

As shown in Table 4, in almost all of the cases the number of false negatives (changed buildings detected as unchanged) is low, which shows that the classifier is a proper method for detecting building changes.

5. Accuracy Assessment

2 For a definition of the roof types, please refer to http://en.wikipedia.org/wiki/List_of_roof_shapes.

For accuracy assessment, the change detection results of the TC methods and the original brightness values are checked for over 150 buildings in each dataset, comprising buildings with pitched, shed, gable, gambrel, tented, flat, hip, and half-hip2 roofs.

Using the confusion matrices generated from the check-building information, the Receiver Operating Characteristic (ROC) curves, which plot sensitivity or True Positive Rate (TPR) versus fall-out or False Positive Rate (FPR), are generated for the outputs of the TC methods and the original brightness values. Table 5 presents the equations associated with the ROC curve parameters [21] . Figure 10 presents the ROC curves of the TC methods used in this study, as well as of the original brightness values, over datasets C1 to C3. The closer the ROC curve to the point (0, 1), the higher the degree of discrimination between the changed and unchanged classes.

The accuracy is also measured by the criterion called the Area Under the Curve (AUC) of the ROC curve, which ranges from 0 to 1, "1" being the highest discrimination ability and "0" the lowest. Using the simple ±2σ thresholding,

Figure 9. Associated produced change maps of buildings of Figure 7 for dataset C1 (Column 1), C2 (Column 2), and C3 (Column 3) for uncorrected brightness values (row 1), C-correction (row 2), Enhanced Minnaert (row 3), Minnaert (row 4), and Cosine (row 5). White color represents unchanged output and hachured areas show the changed buildings.

Table 4. Confusion matrices for the three study datasets.

ch: changed; uch: unchanged; OA: overall accuracy

Table 5. Formulas related to making ROC curves.

we also calculated the Overall Accuracy (OA), which is the ratio of the correctly labeled objects to the total number of test objects. Figure 11 presents bar charts of the AUC (a) and OA (b) for the TC methods and the original values across the three datasets. A sketch of these computations follows.
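For reference, a compact MATLAB sketch of the TPR/FPR sweep and AUC integration behind Figure 10 and Figure 11, with `score` a per-building change criterion (e.g., the largest normalized |MAD| variate) and `label` the reference change flags; both names are ours, and `changed` is the binary decision from the ±2σ rule sketched earlier:

```matlab
% ROC curve and AUC from a change score and reference labels.
% score: n-by-1 change criterion; label: n-by-1 logical (true = changed).
[~, order] = sort(score, 'descend');
lab = label(order);

TPR = cumsum(lab)  / sum(lab);     % sensitivity = TP / (TP + FN)
FPR = cumsum(~lab) / sum(~lab);    % fall-out    = FP / (FP + TN)
AUC = trapz([0; FPR], [0; TPR]);   % area under the ROC curve

% Overall Accuracy for the fixed +/- 2*sigma rule: correctly labeled
% objects over all test objects
OA = mean(changed == label);
```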

6. Discussion

As illustrated by the ROC curves in Figure 10, in dataset C1, with bi-temporal satellite images, the original brightness values as well as the results of C-correction and Enhanced Minnaert have high discrimination capabilities, while Cosine and Minnaert have lower discrimination capabilities. In general, as expected, the original brightness values were capable of separating changes from non-changes due to the similar solar illumination angles, and accordingly similar brightness value variations, of the images in dataset C1. Although applying TC looks unnecessary in such a dataset, the results of C-correction and Enhanced Minnaert did not have any negative impact on the accuracy of the results, while Cosine and Minnaert deteriorated the accuracy compared to the original brightness values.

In datasets C2 and C3, the ROC curves of the original brightness values are close to the diagonal, which means the original pixel values failed to discriminate changes. This is due to the high amount of brightness variation in the bi-temporal images caused by the different solar illumination angles. A similar trend is observed with the Cosine method, which failed to detect changes by overcorrecting the roofs. The Minnaert method also produced a weak ROC curve in dataset C2. However, Enhanced Minnaert and C-correction produced the best ROC curves in datasets C2 and C3. This means that, compared to the original brightness values, C-correction and Enhanced Minnaert have far higher discrimination capabilities.

As illustrated in Figure 11, the AUC and OA for the original brightness values are very low in datasets C2 and C3 and very high in dataset C1. As explained

Figure 10. ROC curves of the three datasets of this study.

Figure 11. Charts for accuracy assessment of the TC methods. (a) AUC and (b) OA charts of the TC methods and the original values across the three datasets used in this study, (C1, C2, and C3).

before, dataset C1 is composed of imagery with similar solar illumination angles, so the dark and bright sides of the buildings are similar; thus, the original values are sufficient for change detection in that dataset. However, C2 and C3 are composed of images with different solar illumination angles, in which the original brightness values did not perform accurately in change detection. The Cosine and Minnaert methods did not generate high and consistent accuracies. In datasets C1 and C2, the Cosine and Minnaert results have lower accuracies than the original values, which means the corrections deteriorated the change detection results. Nevertheless, the AUC and OA yielded by the C-correction and Enhanced Minnaert methods are very close to each other and are superior to the other methods, even in dataset C1, where corrections look unnecessary.

Overall, in dataset C1, with similar images in terms of illumination angles, TC looks unnecessary, as expected. However, in datasets C2 and C3, with different solar illumination angles, the overall accuracy is improved by around 35% using either of the two best corrections, i.e., C-correction and Enhanced Minnaert. Therefore, C-correction and Enhanced Minnaert can be used to improve change detection accuracies in areas with steep-roof houses imaged with different view-angles.

Finally, from the OA produced by the MAD transform and a simple ±2σ thresholding, high change detection accuracies are achieved, which shows that the proposed change detection method incorporating the PWCR, TC methods, and MAD transform is capable of detecting changes even in images that are not acquired with similar solar angles. This demonstrates that the proposed framework for building change detection can compensate for the different solar illumination angles caused by different view-angles of the images over areas with a high number of steep roofs.

Further, thanks to the PWCR, only one DSM is needed for change detection of images taken with different view-angles, whereas methods designed for similar imaging conditions require bi-temporal DSMs for precise change detection results [7] .

In this study, since we aimed to evaluate the TC methods on roofs, we did not employ building detection methods and instead benefited from the existing building-border GIS layer. However, this framework can be combined with a building detection or an optimized segmentation method as well.

7. Conclusions

If the bi-temporal images used for change detection are not taken under similar solar or view angles, change detection methods can produce low accuracies due to misregistration and differences in the illumination conditions of the images. The major contribution of this paper is improving building change detection accuracies by incorporating the PWCR method and topographic correction methods.

Basically, the PWCR is used where images are not acquired under similar view-angles, which increases the probability of variation in solar angles and thus the risk of false change detection results caused by solar illumination angle differences. To compensate for that, we used topographic correction methods on the building roofs to limit the illumination difference.

In this work, we detected the corresponding building borders in the images using the PWCR method. Then, the slope and aspect values calculated for the TC methods were projected to the image spaces for brightness value correction. After that, we applied the TC methods to compensate for the solar illumination angle differences on the building roofs. To find a proper correction method, we compared four of the most widely used TC methods in the literature, namely C-correction, Minnaert, Enhanced Minnaert (for slope), and Cosine Correction. Among the tested methods, C-correction and Enhanced Minnaert presented high change detection accuracies, whereas the Cosine and Minnaert methods did not significantly improve the change detection results and in some cases even deteriorated them.

In conclusion, using the PWCR method combined with either the Enhanced Minnaert or the C-correction method yields higher accuracies in building change detection. Specifically, the false change detection results caused by solar illumination angle differences on the roofs are reduced.

Thus, the presented change detection framework can help improve building change detection accuracies and accordingly enable researchers to incorporate a wider range of images in automatic and semi-automatic change detection processes. Finally, in case the solar angles are different, the TC methods can be used to compensate for the illumination differences on pitched roofs; however, if the solar angles are similar, TC correction is not necessary.

Acknowledgements

This research was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We acknowledge the City of Fredericton and the Department of Public Safety of the Province of New Brunswick for providing the Orthophoto 2012 image, the building-border GIS layer, and the LiDAR data of Fredericton.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Doxani, G., Karantzalos, K. and Strati, M.T. (2012) Monitoring Urban Changes Based on Scale-Space Filtering and Object-Oriented Classification. International Journal of Applied Earth Observation and Geoinformation, 15, 38-48.
https://doi.org/10.1016/j.jag.2011.07.002
[2] Im, J., Jensen, J. and Tullis, J. (2008) Object-based Change Detection Using Correlation Image Analysis and Image Segmentation. International Journal of Remote Sensing, 29, 399-423.
https://doi.org/10.1080/01431160601075582
[3] Bouziani, M., Goita, K. and He, D.C. (2010) Automatic Change Detection of Buildings in Urban Environment from Very High Spatial Resolution Images Using Existing Geodatabase and Prior Knowledge. ISPRS Journal of Photogrammetry and Remote Sensing, 65, 143-153.
https://doi.org/10.1016/j.isprsjprs.2009.10.002
[4] Gueguen, L., Soille, P. and Pesaresi, M. (2011) Change Detection Based on Information Measure. IEEE Transactions on Geoscience and Remote Sensing, 49, 4503-4515.
https://doi.org/10.1109/TGRS.2011.2141999
[5] Jung, F. (2004) Detecting Building Changes from Multitemporal Aerial Stereopairs. ISPRS Journal of Photogrammetry and Remote Sensing, 58, 187-201.
https://doi.org/10.1016/j.isprsjprs.2003.09.005
[6] Pang, S., Hu, X., Wang, Z. and Lu, Y. (2014) Object-Based Analysis of Airborne LiDAR Data for Building Change Detection. Remote Sensing, 6, 10733-10749.
https://doi.org/10.3390/rs61110733
[7] Tian, J., Cui, S. and Reinartz, P. (2014) Building Change Detection Based on Satellite Stereo Imagery and Digital Surface Models. IEEE Transactions on Geoscience and Remote Sensing, 52, 406-417.
https://doi.org/10.1109/TGRS.2013.2240692
[8] Pollard, T.B., Eden, I., Mundy, J.L. and Cooper, D.B. (2010) A Volumetric Approach to Change Detection in Satellite Images. Photogrammetric Engineering & Remote Sensing, 76, 817-831.
https://doi.org/10.14358/PERS.76.7.817
[9] Jabari, S. and Zhang, Y. (2016) RPC-Based Coregistration of VHR Imagery for Urban Change Detection. Photogrammetric Engineering & Remote Sensing, 82, 521-534.
https://doi.org/10.14358/PERS.82.7.521
[10] Nielsen, A.A., Conradsen, K. and Simpson, J.J. (1998) Multivariate Alteration Detection (MAD) and MAF Postprocessing in Multispectral, Bitemporal Image Data: New Approaches to Change Detection Studies. Remote Sensing of Environment, 64, 1-19.
https://doi.org/10.1016/S0034-4257(97)00162-4
[11] Nielsen, A.A. (2011) Kernel Maximum Autocorrelation Factor and Minimum Noise Fraction Transformations. IEEE Transactions on Image Processing, 20, 612-624.
https://doi.org/10.1109/TIP.2010.2076296
[12] Inglada, J. and Mercier, G. (2007) A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis. IEEE Transactions on Geoscience and Remote Sensing, 45, 1432-1445.
[13] Sola, I., González-Audícana, M., Alvarez-Mozos, J. and Torres, J.L. (2014) Synthetic Images for Evaluating Topographic Correction Algorithms. IEEE Transactions on Geoscience and Remote Sensing, 52, 1799-1810.
https://doi.org/10.1109/TGRS.2013.2255296
[14] Riano, D., Chuvieco, E., Salas, J. and Aguado, I. (2003) Assessment of Different Topographic Corrections in Landsat-TM Data for Mapping Vegetation Types. IEEE Transactions on Geoscience and Remote Sensing, 41, 1056-1061.
[15] Colby, J.D. (1991) Topographic Normalization in Rugged Terrain. Photogrammetric Engineering & Remote Sensing, 57, 531-537.
[16] Wynn, C. (2000) An Introduction to BRDF-Based Lighting. Nvidia Corporation, Santa Clara.
[17] Minnaert, M. (1941) The Reciprocity Principle in Lunar Photometry. Astrophysical Journal, 93, 403-410.
https://doi.org/10.1086/144279
[18] Grodecki, J. (2001) Ikonos Stereo Feature Extraction-RPC Approach. Annual Conference of the ASPRS 2001, St. Louis, 23-27 April 2001.
[19] Fraser, C.S. and Hanley, H.B. (2003) Bias Compensation in Rational Functions for Ikonos Satellite Imagery. Photogrammetric Engineering & Remote Sensing, 69, 53-57.
https://doi.org/10.14358/PERS.69.1.53
[20] Canty, M.J. (2014) Image Analysis, Classification and Change Detection in Remote Sensing: With Algorithms for ENVI/IDL and Python. CRC Press, Boca Raton.
[21] Fawcett, T. (2006) An Introduction to ROC Analysis. Pattern Recognition Letters, 27, 861-874.
https://doi.org/10.1016/j.patrec.2005.10.010
[22] http://www.applanix.com/media/downloads/products/specs/dss_dualcam_spec.pdf
