Simulation of Hazy Image and Validation of Haze Removal Technique
1. Introduction
Haze is a natural phenomenon that obstructs vision. Clear vision requires dehazing, which finds diverse applications such as vehicle navigation, outdoor movement of people, surveillance systems, and so on. Many dehazing mechanisms have been developed [1] - [10], but all have some drawbacks. The methods in references [2] [3] use a pair or multiple images of the same scene and remove haze by means of a polarizing filter. Such a polarizing filter is not effective in situations where the scene changes more rapidly than the filter can be rotated [4]. The method in reference [5] estimates the complete 3D structure and recovers a haze-free image from two or more bad-weather images. Although some of these methods give good results, they have limited practicability, as acquiring multiple images of the same scene under diverse conditions is difficult. To cope with these drawbacks, researchers have concentrated on dehazing from a single image. Tan [6] investigated a method based on local contrast maximization. Fattal [7] developed an independent-component-analysis-based dehazing technique using a single image. He et al. [8] first developed the single-image dark channel prior for haze removal. The prior-based methods were highly successful in recovering haze-free images. We [11] further improved this method by proposing an adaptive filter patch to deal with various haze concentrations. However, the effectiveness of these methods has not yet been fully confirmed using ground-truth (simulated) images. In the absence of ground truth, i.e. simulated haze of diverse densities applied to a clear image, it is not possible to quantify the effectiveness of dehazing mechanisms exactly. Therefore, the main objectives of this work are: 1) generation of simulated haze of diverse densities on natural (real) images and 2) validation of a haze removal technique. This paper accomplishes these objectives.
In this paper, we generate synthetic homogeneous haze of different concentrations on a clear natural image through the atmospheric scattering model [12] - [23]. As realistic natural haze is heterogeneous in nature, we use Perlin noise to generate heterogeneous hazy images. Perlin noise is a gradient noise developed by Ken Perlin [24] to give natural visual effects to computer-generated graphics.
After generating haze of different concentrations, we use the dark-channel prior [1] [8] [9] [11] for validation, as it is the most prominent of the dehazing mechanisms.
The rest of the paper is organized as follows: Section 2 explains the haze generation mechanism; Section 3 describes the dehazing mechanism based on the dark-channel prior; Section 4 presents the experimental results and validation; and finally, Section 5 concludes the paper.
2. Haze Simulation
In computer graphics, the visualization of atmospheric phenomena is important and has high practical value. Realistic haze greatly improves the realism of simulated scenes. Special effects in computer games, virtual reality, digital movies, TV, entertainment-industry products, and so forth are some applications of simulated haze. Various methods based on the atmospheric model [12] have been developed for simulating a hazy scene.
The hazy image formation model can be described by the following equation [12]:

I(x) = J(x) t(x) + A (1 − t(x))    (1)

where x = (x, y) is a 2D vector representing the coordinates of a pixel's location in the image, I is the input hazy image, J is the scene radiance, t is the medium transmission, and A is the global atmospheric light.

In Equation (1), the first term on the right-hand side, J(x) t(x), is called the direct attenuation, and the second term, A (1 − t(x)), is called the airlight.
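As an illustration only (not the authors' implementation), Equation (1) can be applied directly with NumPy once a clear image, a transmission map, and an atmospheric light value are available; the function name and the [0, 1] value range below are assumptions of this sketch.

```python
import numpy as np

def synthesize_hazy_image(J, t, A):
    """Compose a hazy image per Equation (1): I(x) = J(x) t(x) + A (1 - t(x)).

    J : (H, W, 3) clear image with values in [0, 1]
    t : (H, W) medium transmission map with values in (0, 1]
    A : scalar (or length-3) global atmospheric light in [0, 1]
    """
    t3 = t[..., np.newaxis]            # broadcast the transmission over the color channels
    return J * t3 + A * (1.0 - t3)
```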
Here, we simulate haze using the above atmospheric scattering model, with and without Perlin noise. The steps for simulating haze on an input clear image using the atmospheric scattering model are shown in Figure 1.
Here, at first, we calculate the depth map of the image. For scene depth restoration, the linear model given in Equation (2) is used. The concentration (density) of haze increases with the scene depth, and the density of haze corresponds to the disparity between the brightness and the saturation of a pixel, which motivates a linear model.

We can express this linear model as:

d(x) = θ0 + θ1 v(x) + θ2 s(x) + ε(x)    (2)

where x is the position within the image, d is the scene depth, v is the brightness component of the hazy image, s is the saturation component, θ0, θ1 and θ2 are the unknown linear coefficients, and ε(x) is a random variable representing the random error of the model, which is regarded as a random image. A simple and efficient supervised learning method is used to determine the coefficients θ0, θ1 and θ2; the training data [19] are needed for finding these coefficients. In this case, a training sample consists of an image and its corresponding ground-truth depth map. Figure 2 presents the images at different steps of Figure 1.
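A minimal sketch of how such a linear depth model could be evaluated is given below. The coefficient values shown are illustrative placeholders of the kind reported for color-attenuation-style models in the literature, not the values trained in this paper, and the use of OpenCV for the HSV conversion is an assumption.

```python
import numpy as np
import cv2  # assumed available for the color-space conversion

def estimate_depth_map(image_bgr, theta0=0.12, theta1=0.96, theta2=-0.78, sigma=0.04):
    """Equation (2): d(x) = theta0 + theta1 * v(x) + theta2 * s(x) + eps(x).

    image_bgr : uint8 BGR image; v and s are its brightness and saturation components.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    s, v = hsv[..., 1], hsv[..., 2]                  # saturation and brightness components
    eps = np.random.normal(0.0, sigma, v.shape)      # random error term of the model
    return theta0 + theta1 * v + theta2 * s + eps
```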
The raw depth map is determined based on the hypothesis that the scene depth is locally constant:

d_r(x) = min_{y ∈ Ω_r(x)} d(y)    (3)

where Ω_r(x) is an r × r neighborhood centered at x, and d_r is the depth map with scale r. However, blocking artifacts may be present in the resulting map. To overcome these artifacts, a bilateral filter is used to generate a refined transmission map [25].
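The local-minimum operation of Equation (3) followed by bilateral smoothing can be sketched as follows; the neighborhood size r and the bilateral filter parameters are illustrative choices, not values taken from the paper.

```python
import numpy as np
import cv2
from scipy.ndimage import minimum_filter

def refine_depth_map(depth, r=15):
    """Equation (3): local minimum over an r x r neighborhood, then bilateral
    filtering to suppress the resulting blocking artifacts."""
    depth_r = minimum_filter(depth, size=r).astype(np.float32)
    # 9-pixel diameter, illustrative range/space sigmas for a depth map scaled to [0, 1]
    return cv2.bilateralFilter(depth_r, 9, 0.1, 15)
```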
Since we already have the clear image J(x), the refined transmission map, and the atmospheric light A (which can be set to 255), we can easily simulate the hazy scene according to Equation (1).
We can also generate hazy scenes with different haze densities by assuming the transmission of the medium to be

t(x) = e^(−βλd(x)),

where β is a coefficient and λ is the haze density factor. Figure 3 shows the hazy images with different haze densities.
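Reading the transmission model as t(x) = e^(−βλd(x)), hazy images of increasing density can be generated as in the sketch below; the function name, the default β, and the λ values are assumptions chosen to mirror Figure 3.

```python
import numpy as np

def simulate_haze_levels(J, depth, A=1.0, beta=1.0, densities=(1, 3, 5)):
    """Generate hazy images of increasing density with t(x) = exp(-beta * lam * d(x))
    and Equation (1). J is the clear image in [0, 1]; depth is the refined depth map."""
    hazy = []
    for lam in densities:
        t = np.exp(-beta * lam * depth)[..., np.newaxis]   # thicker haze for larger lambda
        hazy.append(J * t + A * (1.0 - t))
    return hazy
```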
However, haze is not perfectly homogeneous in real situations. Therefore, Perlin noise, which is a gradient noise, is introduced into our method through the following Equation (4).
Figure 3. Simulated hazy images with different haze densities. (a) Less hazy image, λ = 1; (b) Medium hazy image, λ = 3; (c) More hazy image, λ = 5.
I_P(x) = I(x) + k n(x)    (4)
Here, I is the hazy image obtained using our haze simulation technique, k controls the appearance of Perlin's turbulence texture, n is the Perlin noise image, and I_P is the resulting heterogeneous hazy image. Amplitude and frequency are the two properties that characterize the Perlin noise function [19] [20] [21] [22] [23]. Figure 4 shows Perlin noise of different concentrations.
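The sketch below approximates Perlin-style turbulence with multi-octave value noise (smoothly upsampled random grids) rather than Ken Perlin's gradient construction, and it reads Equation (4) as adding the scaled noise image k·n to the homogeneous hazy image; both the noise construction and the blending form are assumptions of this illustration.

```python
import numpy as np
import cv2

def turbulence(shape, octaves=5, persistence=0.5, seed=0):
    """Multi-octave value noise used here as a stand-in for Perlin's turbulence texture."""
    rng = np.random.default_rng(seed)
    result = np.zeros(shape, dtype=np.float32)
    amplitude, norm = 1.0, 0.0
    for o in range(octaves):
        # coarse random grid, smoothly upsampled; early octaves contribute large-scale blobs
        h = max(2, shape[0] // 2 ** (octaves - o))
        w = max(2, shape[1] // 2 ** (octaves - o))
        layer = rng.random((h, w)).astype(np.float32)
        layer = cv2.resize(layer, (shape[1], shape[0]), interpolation=cv2.INTER_CUBIC)
        result += amplitude * layer
        norm += amplitude
        amplitude *= persistence
    return np.clip(result / norm, 0.0, 1.0)

def add_heterogeneous_haze(I_hazy, k=0.3, seed=0):
    """One reading of Equation (4): add the scaled noise image k * n to the hazy image."""
    n = turbulence(I_hazy.shape[:2], seed=seed)[..., np.newaxis]
    return np.clip(I_hazy + k * n, 0.0, 1.0)
```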
3. Haze Removal Technique
Minimizing the haze effect in real scenes is very important and finds wide application. Previously, we modified a single-image haze removal algorithm [11] based on the dark channel prior, with automated calculation of the patch size and automated handling of the sky region's degradation, which is known as the halo effect.
For the removal of haze, we used the dark channel prior algorithm. Its main steps are estimating the patch size, computing the dark channel and the transmission (direct attenuation), and refining the transmission with guided filtering. The haze removal model can be written as:

J(x) = (I(x) − A) / max(t(x), t0) + A    (5)

where t0 is a lower bound on the transmission that prevents noise amplification in regions of dense haze.
The flow diagram of the haze removal technique based on the dark-channel prior is shown in Figure 5, and Figure 6 presents the images at different steps of the flow diagram. A detailed explanation of the dehazing mechanism is given in reference [1].
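For reference, a condensed version of the dark-channel-prior pipeline (after He et al. [8]) is sketched below; the patch size, omega, and t0 values are commonly cited defaults rather than values from this paper, and the guided-filter refinement and the adaptive patch handling of [11] are omitted for brevity.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(I, patch=15, omega=0.95, t0=0.1):
    """Simplified dark-channel-prior dehazing for a hazy image I in [0, 1] of shape (H, W, 3)."""
    # Dark channel: per-pixel minimum over color channels, then a local minimum over the patch.
    dark = minimum_filter(I.min(axis=2), size=patch)
    # Atmospheric light A: mean color of the brightest ~0.1% of pixels in the dark channel.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimated from the dark channel of the normalized image.
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., np.newaxis]
    # Equation (5): recover the scene radiance J(x) = (I(x) - A) / max(t(x), t0) + A.
    return np.clip((I - A) / t + A, 0.0, 1.0)
```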
4. Experimental Results and Validation
Figure 5. Workflow diagram of haze removal technique using dark-channel prior.

The visual (subjective) results of dehazing with the dark-channel prior method for homogeneous and heterogeneous hazy situations with three different haze concentrations are shown in Figure 7 and Figure 8, respectively. In addition to the subjective measure (visual inspection), validation is performed using a well-known objective dehazing metric, the average gradient, given in Equation (6). The average gradient is the resultant of the horizontal and vertical gradients of an image. Here we use the average gradient of the dehazed image, since haze reduces the gradient and makes the image blurry; hence, it is an effective metric for estimating haze removal.
AG = (1 / (M × N)) Σ_x Σ_y √((Gx(x, y)² + Gy(x, y)²) / 2)    (6)
Here, AG is the average gradient, Gx and Gy are the horizontal and vertical gradients, respectively, and M and N are the numbers of rows and columns of the image. The objective evaluation results for Figure 7 and Figure 8 are shown in Table 1 and Table 2, respectively. A higher AG indicates a higher-quality dehazed image. From these tables, we can see that the AG value gradually decreases from low to high haze concentration. This is also confirmed through visual inspection of Figure 7 and Figure 8. In addition, we can see from Figure 7 that for homogeneous haze the original and dehazed image quality are almost similar, whereas for heterogeneous haze (created using Perlin noise, shown in Figure 8) the dehazed image quality is somewhat poorer. This is also confirmed by the AG values in Table 1 and Table 2.
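A direct way to compute the metric of Equation (6) on a grayscale dehazed image is sketched below; np.gradient is used for the horizontal and vertical gradients, and the normalization inside the square root follows one common convention (it varies slightly across papers).

```python
import numpy as np

def average_gradient(gray):
    """Average gradient (AG): mean magnitude of the horizontal and vertical gradients."""
    gy, gx = np.gradient(gray.astype(np.float64))    # gradients along rows and columns
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```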
Table 1. Dehazing performance values (here the value of average gradient AG) for different homogeneous haze conditions. Higher AG indicates high quality dehazed image.
Table 2. Dehazing performance values (here the value of average gradient AG) for different heterogeneous haze conditions. Higher AG indicates high quality dehazed image.
5. Conclusion
In this work, synthetic haze is generated on a real scene through the atmospheric model, with and without Perlin noise. We then validated a prominent single-image dehazing technique, the dark-channel prior, through subjective and objective measures, using the well-established objective dehazing metric of average gradient. The simulated haze will find application in validating any new dehazing technique. In addition, it can be used for many outdoor visual-enhancement applications such as surveillance and navigation systems, real-time task processing by robots, etc.