Image Dehazing with Hybrid λ2-λ0 Penalty Model

Abstract

Due to the presence of turbid media, such as microdust and water vapor in the environment, outdoor pictures taken under hazy weather conditions are typically degraded. To enhance the quality of such images, this work proposes a new hybrid λ2-λ0 penalty model for image dehazing. The model performs a weighted fusion of two distinct transmission maps, generated by imposing λ2 and λ0 norm penalties on the approximate regression coefficients of the transmission map. This approach effectively balances the sparsity and smoothness associated with the λ0 and λ2 norms, thereby optimizing the transmission map. Specifically, when the λ2 norm penalty is applied in the model, the guidance image is updated with the result of the λ0 penalty. The resulting optimization problems are solved efficiently with the least-squares method and an alternating direction algorithm. The dehazing framework combines the advantages of the λ2 and λ0 norms, balancing sparsity and smoothness and yielding higher-quality images with clearer details and preserved edges.

Share and Cite:

Zhou, Y., Ji, D. and Xu, C. (2024) Image Dehazing with Hybrid λ2-λ0 Penalty Model. Journal of Computer and Communications, 12, 132-152. doi: 10.4236/jcc.2024.1210010.

1. Introduction

Large particles in the atmosphere alter the absorption, emission, and scattering of light and are the primary cause of natural phenomena such as haze, smoke, and fog. The quality of images taken in hazy weather is significantly degraded: clarity, contrast, and visibility are greatly reduced, image details are blurred, and color deviation and distortion appear [1]. It is therefore crucial to remove these undesired blurring effects from captured images. Dehazing algorithms play an important role here and can be broadly divided into two categories: those based on image enhancement and those based on image restoration.

The image enhancement method focuses on emphasizing or suppressing specific information in the image according to particular needs. It processes image features such as contrast, edges, and contours to improve visual appearance; however, it does not prioritize image fidelity or consider the physical causes behind the image characteristics. Techniques in this category include histogram equalization [2] [3], which expands the dynamic range to boost image contrast. The homomorphic filtering proposed by Stockham [4] enhances image contrast by amplifying the high-frequency components and attenuating the low-frequency components. Wavelet-based image enhancement is similar to homomorphic filtering; Russo [5] used the wavelet transform to equalize images with good results. Retinex [6], proposed by Land, enhances the image by separating the illumination and reflectance components; it was later improved by Jobson and Rahman, who proposed Single-Scale Retinex and Multi-Scale Retinex [7] [8].

The physical model-based dehazing approach, also known as the image restoration-based haze removal method, builds a model of hazy image degradation by investigating its causes and solves the inverse process of that degradation to restore clear images. Fattal et al. [9] posited that scene transmission and surface shading are uncorrelated and used independent component analysis (ICA) combined with a Markov random field model to estimate scene transmission; however, this method often falls short in dense haze. Tan et al. [10] improved the local contrast of hazy images using a Markov random field model, restoring some texture and structure but frequently over-enhancing the contrast. Tarel et al. [11] proposed a dehazing method that uses median filtering to estimate the atmospheric veil and the transmission map; it requires less processing time but often handles densely hazed images poorly. Meng et al. [12] proposed a dehazing approach suited to large sky regions, estimating the transmission map with a boundary constraint and contextual regularization. While the method effectively removes haze, it tends to distort objects whose colors resemble the atmospheric light. Li et al. [13] introduced a weighted guided image filter for dehazing, which reduces halo artifacts in both flat and sharp regions while preserving edge information more effectively; however, this approach still over-smooths sharp areas because of its local linear model and fixed regularization parameter. Additionally, Li et al. [14] proposed an efficient single-image haze removal approach based on edge-preserving decomposition, in which an edge-preserving smoothing method estimates the transmission map; the over-smoothing effect nonetheless persists in sharp regions. To address this, global optimization-based filters such as the weighted least squares (WLS) [15] and fast weighted least squares (FWLS) [16] have been introduced. These techniques avoid the edge-smoothing effect but are more time-consuming because of the larger number of iterations required. Zhu et al. [17] proposed a fusion-based algorithm that solves the image dehazing problem without considering the degradation mechanism, using a guided filter to decompose the under-exposed image into local and global components; the method offers computational efficiency, clear detail, good color quality, and satisfactory results. Raikwar et al. [18] rearranged the atmospheric scattering model to estimate the transmission map from the difference of minimum color channels, aiming to enhance visibility restoration. Similarly, Zhao et al. [19] developed a single-image dehazing method that analyzes prior information from local dehazed patches and accurately estimates the transmission from them; to improve the quality of the transmission map, they applied weighted interpolation and guided filtering to enhance edges and details. Baiju et al. [20] proposed an optimization framework using weighted kernel norm minimization, which preserves prominent edges and detailed structures to obtain a finer transmission map. Yadav et al. [21] proposed a scale-aware weighting-based, effective edge-preserving gradient domain guided image filter, which retains edge information in both sharp and flat regions while removing image artifacts.
Padmini et al. [22] proposed a method of λ0 smoothing after guided filtering. However, this method cannot remove pixel-clustering artifacts, so Shin et al. [23] presented a structure-guided λ0-norm that removes various artifacts while accounting for the global gradients of an image.

He et al. [24] proposed a dehazing algorithm based on the dark channel prior and refined the transmission map using soft matting, resulting in clearer restored images. However, this approach had a large computational cost and exhibited gradient reversal and halo artifacts in smooth and sharp regions. To address these issues, He et al. [25] introduced guided filtering with a λ2 norm penalty as an alternative to soft matting. This optimization reduced the algorithm's complexity, mitigated the artifact problems, and enhanced the naturalness of the dehazing effect. However, the λ2 norm penalty may hinder the retention of edge information in sharp regions. Xu et al. [26] proposed the λ0 gradient minimization model for image processing. The model optimizes gradient sparsity by controlling the number of non-zero gradients, yielding sharper protruding edges and effectively eliminating low-amplitude structures. Its advantage is that it does not depend on local features but instead operates on the image globally to suppress halos. Previous studies [27] [28] have demonstrated that the λ0 norm induces stronger sparsity than the λ1 and λ2 norms; however, relying solely on the λ0 norm has limited effectiveness in preventing model overfitting. In addition, it is shown in [29] that combining the λ0 norm with an additional λ1 or λ2 penalty leads to improved performance. Motivated by these findings, this paper adopts a penalty term that combines the λ0 and λ2 norms.

In this paper, we propose a dehazing algorithm built on a hybrid λ2-λ0 penalty model, based on the atmospheric scattering model and the dark channel prior. The algorithm is inspired by guided filtering with the λ2 penalty and by λ0 gradient minimization. It computes a weighted sum of the coefficients obtained from the λ2 and λ0 penalties, preventing the loss of edge information in sharp regions caused by the λ2 norm penalty while avoiding the excessive smoothing caused by the λ0 norm penalty. The model also updates the guidance image used in the λ2 norm penalty step: the initial transmission map obtained from the dark channel prior is replaced by the new regression coefficient maps obtained after applying the λ0 norm penalty to the regression coefficients. The results demonstrate that this model preserves more detailed information while preserving edges, yielding higher-quality dehazed images.

This paper is organized as follows: the background of image dehazing is introduced concisely in Section 2; the proposed dehazing scheme is presented in Section 3, followed by the experimental results in Section 4; Section 5 concludes this work with discussions.

2. Background

2.1. Atmospheric Scattering Model

According to the rule of conservation of energy, the proportion of atmospheric light scattered to the light sensor should be equal to the proportion of the scattered light reflected by the target object [30]. Therefore, by utilizing the physical haze model, a haze-free image can be reconstructed from an image degraded by haze. The model is expressed as:

$I(x) = J(x)\,t(x) + A\,(1 - t(x))$ (1)

where $I(x)$ is the observed hazy image, $J(x)$ is the haze-free image, $x = (x, y)$ is the pixel coordinate, $A$ is the atmospheric light, and $t$ is the scene transmission map.
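As a concrete illustration, the following minimal NumPy sketch applies Equation (1) in both directions: synthesizing a hazy image from a clear one, and inverting the model when $t$ and $A$ are known. The function names and the transmission clamp `t_min` are illustrative assumptions, not part of the paper.

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image via Equation (1): I = J * t + A * (1 - t).
    J: HxWx3 clear image in [0, 1]; t: HxW transmission; A: scalar or 3-vector."""
    t3 = t[..., np.newaxis]            # broadcast the transmission over channels
    return J * t3 + A * (1.0 - t3)

def invert_haze(I, t, A, t_min=0.1):
    """Invert Equation (1) for the clear image; t is clamped from below so the
    division does not amplify noise where the transmission is tiny."""
    t3 = np.maximum(t, t_min)[..., np.newaxis]
    return (I - A) / t3 + A
```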

2.2. Estimation of Atmosphere Light

When estimating the atmospheric light value $A$, a common approach is to select the top 0.1% brightest pixels in the dark channel, map them back to the input image to identify the corresponding pixel points, and take the maximum value among the RGB channels at these points as the predicted atmospheric light intensity $A$. However, if the value of $A$ is excessively high, the dehazed image may appear partially distorted, while if it is too low, the image may be overexposed. To enhance the reliability of the estimate, we use the improved quadtree search algorithm proposed in [23].

The atmospheric light is then estimated as

$A = \mathrm{mean}(Q(I_{\min}), I)$ (2)

where $I_{\min} = \min_{c \in \{R,G,B\}} I^c$, $Q(I_{\min})$ denotes the quadtree operation on $I_{\min}$ that returns the pixel locations for the atmospheric light, and $\mathrm{mean}(\cdot)$ computes the average of the original pixel values in the atmospheric region.
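A minimal sketch of the quadtree search behind Equation (2) is shown below, assuming the common recursive scheme: the minimum-channel map is repeatedly split into quadrants, the brightest quadrant is kept, and the original pixels in the final block are averaged. The stopping threshold `min_size` is an assumed parameter; [23] describes the exact variant used.

```python
import numpy as np

def estimate_atmospheric_light(I, min_size=32):
    """Quadtree search for the atmospheric light of Equation (2).

    I: HxWx3 hazy image in [0, 1]. The minimum-channel map I_min is split into
    quadrants; the quadrant with the highest mean brightness is kept and split
    again until the block falls below min_size, then the original pixels in
    the block are averaged (the mean() of Equation (2))."""
    I_min = I.min(axis=2)                  # I_min = min over c in {R, G, B}
    y0, y1, x0, x1 = 0, I_min.shape[0], 0, I_min.shape[1]
    while min(y1 - y0, x1 - x0) > min_size:
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        quads = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                 (ym, y1, x0, xm), (ym, y1, xm, x1)]
        # keep the brightest quadrant of I_min
        y0, y1, x0, x1 = max(
            quads, key=lambda q: I_min[q[0]:q[1], q[2]:q[3]].mean())
    return I[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
```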

2.3. Guided Filtering with λ2 Penalty Estimates the Coarse Transmission

He et al. [25] proposed using guided filtering to refine the transmission map, alleviating the "block effect" caused by an excessively large depth of field during dehazing. The key assumption of the guided filter is a local linear model between the guidance image and the filtering output. The guidance image can be distinct from or identical to the input image; when the two are identical, the filtering operation becomes edge-preserving, which is beneficial for image dehazing. In the proposed method, the initial transmission map $t_i$ serves as both the guidance and the input image. The linear relationship is:

$T_i = a_k t_i + b_k, \quad \forall i \in \omega_k$ (3)

where $\omega_k$ is a square window of radius $r$ centered at pixel $k$, $(a_k, b_k)$ are linear coefficients assumed constant in $\omega_k$, $t_i$ is the transmission map refined by the dark channel prior [24], and $T_i$ is the output image.
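The initial transmission $t_i$ itself comes from the dark channel prior [24]. A small sketch of that standard computation is given below, with $\omega = 0.95$ as in [24]; the patch size is an assumed default.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def initial_transmission(I, A, omega=0.95, patch=15):
    """Dark channel prior estimate [24]:
    t(x) = 1 - omega * min over a patch of min_c ( I^c(x) / A^c ).
    I: HxWx3 hazy image in [0, 1]; A: 3-vector atmospheric light."""
    normalized = I / np.asarray(A, dtype=float).reshape(1, 1, 3)
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return 1.0 - omega * dark
```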

To ensure that guided image filtering has the best outcome, the difference between the input and output images must be minimized. Consequently, the cost function $E(a_k, b_k)$ uses the λ2 norm penalty and is defined as follows:

$\min E(a_k, b_k) = \sum_{i \in \omega_k} \left[ \| T_i - t_i \|_2^2 + \varepsilon a_k^2 \right] = \sum_{i \in \omega_k} \left[ \| a_k t_i + b_k - t_i \|_2^2 + \varepsilon a_k^2 \right]$ (4)

where $\varepsilon$ is a regularization parameter penalizing large $a_k$. Equation (4) is a linear ridge regression model [31] and its solution is given by

$a_k = \dfrac{\sigma_k^2}{\sigma_k^2 + \varepsilon}, \qquad b_k = (1 - a_k)\,\bar{t}_i$ (5)

where $\bar{t}_i$ is the mean of $t_i$ in $\omega_k$ and $\sigma_k^2$ is the variance of $t_i$ in $\omega_k$.
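The closed-form solution of Equations (4)-(5) can be computed with box filters, as in [25]. The sketch below implements the general guidance/input form (reused in Section 3) and reduces to Equation (5) when the guidance equals the input; the window averaging of the coefficients follows the standard guided filter and is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gf_coeffs(I, p, r, eps):
    """Window-averaged ridge-regression coefficients of the guided filter [25]
    for guidance I and input p (both HxW floats). In the self-guided case of
    Equations (3)-(5), I = p = t and a reduces to var(t) / (var(t) + eps)."""
    size = 2 * r + 1                                  # box window of radius r
    m_I, m_p = uniform_filter(I, size), uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - m_I * m_p  # per-window covariance
    var_I = uniform_filter(I * I, size) - m_I ** 2    # per-window variance
    a = cov_Ip / (var_I + eps)                        # Equation (5) when I == p
    b = m_p - a * m_I                                 # b_k = mean(p) - a_k * mean(I)
    return uniform_filter(a, size), uniform_filter(b, size)

def guided_filter(I, p, r, eps):
    """Output of Equation (3) with the window-averaged coefficients."""
    a_bar, b_bar = gf_coeffs(I, p, r, eps)
    return a_bar * I + b_bar
```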

2.4. λ0 Gradient Minimization

In guided filtering, the primary benefit of using a local filter lies in its capability to preserve image edges while alleviating the "block effect". However, as the filter suppresses halos at image boundaries, significant edges are also penalized, weakened, and lost. To address this edge loss during transmission optimization, we introduce the λ0 gradient minimization method proposed by Xu et al. [26].

Taking the regression coefficient $a_k$ as an example, the λ0 model is established as follows:

$\min_{a_0} \| a_0 - a_k \|_2^2 + \lambda C(a_0)$ (6)

where $a_0$ is the output and $a_k$ is the input regression coefficient; the regression coefficient $b_0$ is obtained from the analogous minimization problem.

$C(a_0) = \#\{ p : |\partial_x a_0(p)| + |\partial_y a_0(p)| \neq 0 \}$ is the gradient measure of $a_0$, and $\lambda$ is a weight controlling the level of detail. The first term in Equation (6) represents fidelity, while the second constrains the sparsity of the gradient magnitude of the output.

To solve this objective function, it is rewritten as:

$\min_{a_0, \delta_x, \delta_y} \left\{ \| a_0 - a_k \|_2^2 + \lambda C(\delta_x, \delta_y) + \beta \left( (\partial_x a_0 - \delta_x)^2 + (\partial_y a_0 - \delta_y)^2 \right) \right\}$ (7)

where $\delta$ is an auxiliary vector standing in for the gradient of $a_0$ and comprising two components, $\delta_x$ and $\delta_y$; $\beta$ controls the similarity between $\delta$ and the gradient of $a_0$. Equation (7) is solved by alternately minimizing over $(\delta_x, \delta_y)$ and $a_0$.
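A compact sketch of this alternating scheme is given below, following the FFT-based solver of [26]: the $\delta$-step keeps a pixel's gradient only where zeroing it would cost more than $\lambda/\beta$, and the $a$-step solves its quadratic subproblem in closed form in the Fourier domain. The geometric $\beta$ schedule (growth factor `kappa`, cap `beta_max`) follows [26]; the `lam` and initial `beta` defaults are taken from Table 2 of this paper and are otherwise illustrative.

```python
import numpy as np

def psf2otf(psf, shape):
    """Pad a small filter to `shape` and circularly center it so that its FFT
    acts as the circular-convolution operator (MATLAB psf2otf equivalent)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def l0_smooth(g, lam=2.0, beta=0.1, kappa=2.0, beta_max=1e5):
    """Alternating minimization of Equation (7) in the style of [26].

    g is a 2-D map (e.g. the coefficient a_k). The delta-step zeroes a pixel's
    gradient when dx^2 + dy^2 <= lam / beta; the a-step is quadratic and is
    solved exactly with FFTs; beta grows by kappa each round."""
    a = g.astype(np.float64)
    fx = psf2otf(np.array([[1.0, -1.0]]), g.shape)   # horizontal forward diff
    fy = psf2otf(np.array([[1.0], [-1.0]]), g.shape) # vertical forward diff
    denom_grad = np.abs(fx) ** 2 + np.abs(fy) ** 2
    Fg = np.fft.fft2(g)
    while beta < beta_max:
        dx = np.roll(a, -1, axis=1) - a              # circular forward differences
        dy = np.roll(a, -1, axis=0) - a
        keep = dx ** 2 + dy ** 2 > lam / beta        # gradients worth keeping
        dx *= keep
        dy *= keep
        # a-step: (1 + beta * grad^T grad) a = g + beta * grad^T (dx, dy)
        rhs = Fg + beta * (np.conj(fx) * np.fft.fft2(dx)
                           + np.conj(fy) * np.fft.fft2(dy))
        a = np.real(np.fft.ifft2(rhs / (1.0 + beta * denom_grad)))
        beta *= kappa
    return a
```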

3. A New Hybrid λ2-λ0 Model for Sparse Solutions with Optimizing Transmission Map

To balance the sparsity and the smoothness, the model with hybrid λ2-λ0 penalty is proposed as follows:

$T = \mu_1 t_2 + \mu_2 t_0$ (8)

where $\mu_1$ and $\mu_2$ are the weight parameters, $t_2$ is the transmission map obtained by combining the regression coefficients after the λ2 norm penalty, and $t_0$ is the transmission map obtained by combining the regression coefficients after the λ0 norm penalty.

Initially, we apply the λ2 norm penalty with guidance image $I$ and input image $p$, which yields the two regression coefficients $(a_k, b_k)$:

$\min E(a_k, b_k) = \sum_{i \in \omega_k} \left[ \| T - p \|_2^2 + \varepsilon a_k^2 \right] = \sum_{i \in \omega_k} \left[ \| a_k I + b_k - p \|_2^2 + \varepsilon a_k^2 \right], \quad I = p = t$ (9)

Next, we apply the λ0 norm penalty to these regression coefficients to obtain $a_0$ and $b_0$:

$\min_{a_0, \delta_x, \delta_y} \left\{ \| a_0 - a_k \|_2^2 + \lambda C(\delta_x, \delta_y) + \beta \left( (\partial_x a_0 - \delta_x)^2 + (\partial_y a_0 - \delta_y)^2 \right) \right\}$
$\min_{b_0, \delta_x, \delta_y} \left\{ \| b_0 - b_k \|_2^2 + \lambda C(\delta_x, \delta_y) + \beta \left( (\partial_x b_0 - \delta_x)^2 + (\partial_y b_0 - \delta_y)^2 \right) \right\}$ (10)

where $a_0$ and $b_0$ are $a_k$ and $b_k$ smoothed with the λ0 norm, respectively.

Finally, we update the guidance image $I$ with the regression coefficients obtained from the λ0 norm penalty; $I$ and $p$ then undergo the λ2 norm penalty again to obtain $a_2$ and $b_2$:

$\min E(a_1, b_1) = \sum_{i \in \omega_k} \left[ \| a_2 - p \|_2^2 + \varepsilon a_1^2 \right] = \sum_{i \in \omega_k} \left[ \| a_1 I + b_1 - p \|_2^2 + \varepsilon a_1^2 \right], \quad a_2 = a_1 I + b_1, \; I = a_0, \; p = a_k$ (11)

$\min E(a_1, b_1) = \sum_{i \in \omega_k} \left[ \| b_2 - p \|_2^2 + \varepsilon a_1^2 \right] = \sum_{i \in \omega_k} \left[ \| a_1 I + b_1 - p \|_2^2 + \varepsilon a_1^2 \right], \quad b_2 = a_1 I + b_1, \; I = b_0, \; p = b_k$ (12)

where $a_2$ is obtained from the λ2 penalty on $a_k$ with $a_0$ as the guidance image, and $b_2$ is obtained from the λ2 penalty on $b_k$ with $b_0$ as the guidance image.

Therefore, by combining the outcomes from the aforementioned steps, we arrive at the following result:

$T = \mu_1 t_2 + \mu_2 t_0 = \mu_1 (a_2 t + b_2) + \mu_2 (a_0 t + b_0)$ (13)

After refinement of the transmission map, we transform Equation (1) to obtain the haze-free image J in Equation (14):

$J(x) = \dfrac{I(x) - A}{T} + A$ (14)
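Putting Equations (8)-(14) together, a hedged end-to-end sketch of the proposed pipeline follows, reusing the helper sketches from the previous sections (`estimate_atmospheric_light`, `initial_transmission`, `gf_coeffs`, `l0_smooth`). Parameter defaults follow Table 2; the `t_floor` clamp is an added assumption to keep the division in Equation (14) stable.

```python
import numpy as np

def dehaze_hybrid(I, mu1=0.16, mu2=0.84, r1=42, eps1=2.0, r2=20, eps2=0.01,
                  lam=2.0, beta=0.1, t_floor=0.1):
    """Sketch of Equations (8)-(14) using the helper sketches defined above.
    I: HxWx3 hazy image in [0, 1]; returns the recovered haze-free image."""
    A = estimate_atmospheric_light(I)
    t = initial_transmission(I, A)
    a_k, b_k = gf_coeffs(t, t, r1, eps1)            # lambda-2 step, Eq. (9)
    a_0 = l0_smooth(a_k, lam=lam, beta=beta)        # lambda-0 step, Eq. (10)
    b_0 = l0_smooth(b_k, lam=lam, beta=beta)
    a_1, b_1 = gf_coeffs(a_0, a_k, r2, eps2)        # Eq. (11): guidance a_0
    a_2 = a_1 * a_0 + b_1
    a_1b, b_1b = gf_coeffs(b_0, b_k, r2, eps2)      # Eq. (12): guidance b_0
    b_2 = a_1b * b_0 + b_1b
    T = mu1 * (a_2 * t + b_2) + mu2 * (a_0 * t + b_0)   # fusion, Eq. (13)
    T3 = np.maximum(T, t_floor)[..., np.newaxis]
    return (I - A) / T3 + A                          # recovery, Eq. (14)
```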

For a more intuitive observation of the scheme proposed in this paper, the flow chart is shown in Figure 1.

Figure 1. Flow chart of the dehazing proposal.

Figure 2 presents the transmission maps and haze-free images obtained through image dehazing with the λ2 penalty and with the combined λ2-λ0 penalty. The figure clearly demonstrates that adding the λ0 penalty effectively preserves the prominent edges in the transmission map; the resulting haze-free image also successfully retains fine details.

Figure 2. The transmission maps and haze-free images obtained with different penalty terms, where the upper-left inset enlarges the red-boxed region. (a) transmission map with the λ2 penalty; (b) haze-free image with the λ2 penalty; (c) transmission map with the λ2-λ0 penalty; (d) haze-free image with the λ2-λ0 penalty.

4. Experimental Results

To verify the effectiveness of our proposed method, we conduct two sets of experiments. In the first set of simulation experiments, using the ground truth of the Berkeley segmentation data set [32], we generated synthetic haze images with varying haze densities (low, medium, and dense) by changing the scattering coefficient β in Equation (15), in accordance with the atmospheric scattering model [33]. We selected three images of varying scenes from the data set and added haze of different densities. In the second set of experiments, we used eight real-world images of different scenes that already contained haze. We then compared our method with several existing dehazing methods, namely Tarel [11], Meng [12], He [25], RRO [23], Zhu [17], Raikwar [18], and Zhao [19]. In the experiments, the parameters of all methods were manually tuned to ensure the best results.

$I(x) = J(x)\,e^{-\beta d(x)} + A\,(1 - e^{-\beta d(x)})$ (15)
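A small sketch of this synthesis step, under the assumption of a known depth map $d(x)$, is shown below; the specific β values for the low/medium/dense settings are not stated in the text, so the example sequence is illustrative.

```python
import numpy as np

def synthesize_haze(J, depth, beta, A=1.0):
    """Equation (15): I = J * exp(-beta * d) + A * (1 - exp(-beta * d)).
    J: HxWx3 clear image in [0, 1]; depth: HxW map; larger beta = denser haze."""
    t = np.exp(-beta * depth)[..., np.newaxis]
    return J * t + A * (1.0 - t)

# Illustrative low / medium / dense haze from one clear image and depth map:
# hazy_low, hazy_med, hazy_dense = (synthesize_haze(J, d, b) for b in (0.5, 1.0, 2.0))
```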

4.1. Evaluation Criteria

Image quality assessment (IQA) plays a crucial role in measuring the image dehazing effect. It can be categorized into subjective and objective evaluation; objective evaluation encompasses full-reference, semi-reference, and no-reference indicators [34]-[40]. For the synthetic images in the experiment, dehazing performance is evaluated by computing PSNR, SSIM, and CIEDE2000. PSNR [41] measures the distortion between a haze-free image and its ground truth, while SSIM [42] represents the similarity between the ideal and restored images; higher values of these metrics indicate better dehazing performance. CIEDE2000 [43] measures color distortion using the Lab color model, with lower scores signifying reduced color distortion.
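These three full-reference scores can be computed with scikit-image, as in the sketch below (assuming a recent scikit-image with `channel_axis` support); reporting the mean per-pixel CIEDE2000 over the Lab-converted images is one common convention.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(dehazed, truth):
    """PSNR / SSIM / mean CIEDE2000 between a dehazed image and its ground
    truth, both HxWx3 floats in [0, 1]."""
    psnr = peak_signal_noise_ratio(truth, dehazed, data_range=1.0)
    ssim = structural_similarity(truth, dehazed, channel_axis=2, data_range=1.0)
    ciede = float(deltaE_ciede2000(rgb2lab(truth), rgb2lab(dehazed)).mean())
    return {"PSNR": psnr, "SSIM": ssim, "CIEDE2000": ciede}
```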

For the real-world images in the experiment, there are no corresponding ground-truth haze-free images, so we use several no-reference image quality indexes: contrast, information entropy, average gradient, FADE (Fog Aware Density Evaluator) [44], BIQI (Blind Image Quality Index) [45], and blind contrast enhancement assessment [46]. Entropy measures the amount of information in an image; a higher entropy value indicates richer color detail. The contrast value reflects the brightness and clarity of the restored image, with higher values indicating a cleaner image. The average gradient reflects the restored image's ability to express contrast in fine details; a larger value signifies more layers within the image, a stronger capacity to express fine-detail contrast, and consequently a clearer image. FADE assesses the visibility of restored images by analyzing the spatial-domain deviation between hazy and haze-free images; a smaller FADE value indicates less residual haze in the dehazed result. BIQI quantifies the quality of distorted images based on perceptual and natural image qualities; a lower score implies a more effective haze removal method. The blind evaluation index is based on the contrast of visible edges before and after restoration and uses three descriptors: the rate of new visible edges ($e$), the gain of visibility level ($r$), and the saturated pixel ratio ($\sigma$). Higher values of $e$ and $r$ indicate a dehazed image that better preserves edges and enhances contrast, while a smaller value of $\sigma$ indicates fewer saturated pixels or color distortions compared to the hazy image.
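Of these indexes, entropy, average gradient, and contrast have simple closed forms; a sketch of one common set of definitions follows (the paper does not spell out its exact contrast formula, so RMS contrast is used as a stand-in). FADE [44], BIQI [45], and the blind $e$/$r$/$\sigma$ descriptors [46] come from the cited works and are not re-implemented here.

```python
import numpy as np

def entropy(gray, bins=256):
    """Information entropy of the intensity histogram (gray in [0, 1])."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

def mean_gradient(gray):
    """Average magnitude of horizontal and vertical intensity differences."""
    gx = np.diff(gray, axis=1)[:-1, :]
    gy = np.diff(gray, axis=0)[:, :-1]
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean())

def rms_contrast(gray):
    """Root-mean-square contrast: standard deviation of the intensities."""
    return float(gray.std())
```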

4.2. Synthetic Images

4.2.1. Result

The ground truth is shown in Figure 3. Figures 4-6 display the dehazing results with varying densities of haze added to Figure 3. Table 1 presents the evaluation indicators for three different haze levels in different environments.

Figure 3. Ground truth images: (a) for Figure 4, (b) for Figure 5, (c) for Figure 6.

Table 1. Comparison of PSNR, SSIM, and CIEDE2000 values as the haze condition changes. The 1st and 2nd winners of each measurement are displayed with "*" and "♣", respectively.

| Metric | Image | Tarel | Meng | He | RRO | Zhu | Raikwar | Zhao | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSNR | 1 | 12.0906 | 14.2276 | 19.1677 | 17.5710 | 16.9908 | 18.5861 | 17.4758 | 25.6626* |
| | 2 | 8.4603 | 12.8889 | 11.7310 | 14.6325 | 14.3163 | 15.4463 | 14.0455 | 20.8745* |
| | 3 | 6.8901 | 12.0967 | 8.0399 | 12.4654 | 12.3562 | 12.4868 | 12.4368 | 14.8729* |
| | 4 | 10.5252 | 15.6673 | 19.9764 | 18.0610 | 14.4334 | 17.7421 | 26.9512* | 21.7462 |
| | 5 | 7.5869 | 13.6606 | 14.2213 | 15.0869 | 11.8570 | 13.1387 | 20.5683* | 18.7600 |
| | 6 | 6.4291 | 10.9419 | 8.9544 | 12.4301 | 10.7675 | 13.5437* | 12.6572 | 12.9107 |
| | 7 | 16.8665 | 12.6900 | 17.7105 | 20.1108 | 17.8666 | 16.1226 | 19.1245 | 24.4086* |
| | 8 | 11.7926 | 12.1959 | 18.1033 | 20.5063 | 15.3091 | 14.0582 | 15.9377 | 22.5254* |
| | 9 | 8.6035 | 12.0782 | 12.3380 | 17.8191 | 13.2887 | 13.1885 | 14.2598 | 17.8298* |
| SSIM | 1 | 0.8988 | 0.9439 | 0.9781 | 0.9697 | 0.9591 | 0.9704 | 0.9716 | 0.9962* |
| | 2 | 0.8198 | 0.9158 | 0.9106 | 0.9473 | 0.9337 | 0.9563 | 0.9436 | 0.9795* |
| | 3 | 0.7704 | 0.9077 | 0.8255 | 0.9240 | 0.9138 | 0.9145 | 0.9277 | 0.9317* |
| | 4 | 0.8336 | 0.9285 | 0.9733 | 0.9506 | 0.9199 | 0.9701 | 0.9953* | 0.9811 |
| | 5 | 0.7601 | 0.8916 | 0.9158 | 0.9232 | 0.8600 | 0.8854 | 0.9763* | 0.9650 |
| | 6 | 0.7285 | 0.8549 | 0.8210 | 0.8873 | 0.8364 | 0.8972 | 0.8874 | 0.8985 |
| | 7 | 0.9635 | 0.9420 | 0.9830 | 0.9816 | 0.9729 | 0.9648 | 0.9826 | 0.9941* |
| | 8 | 0.9062 | 0.9409 | 0.9742 | 0.9876 | 0.9574 | 0.9518 | 0.9647 | 0.9927* |
| | 9 | 0.8414 | 0.9285 | 0.9139 | 0.9764* | 0.9367 | 0.9330 | 0.9518 | 0.9762 |
| CIEDE2000 | 1 | 11.0760 | 12.8238 | 4.3351 | 10.9121 | 8.0194 | 7.8610 | 10.9879 | 2.1125* |
| | 2 | 16.9556 | 17.2330 | 9.8924 | 14.3200 | 9.5124 | 13.6248 | 16.4988 | 3.6882* |
| | 3 | 20.1291 | 19.3050 | 15.7083 | 15.3984 | 11.3941 | 16.7263 | 18.2981 | 7.4793* |
| | 4 | 12.8323 | 7.4279 | 4.2099 | 4.9022 | 9.0671 | 10.6647 | 3.8127 | 3.4415* |
| | 5 | 17.5238 | 9.4118 | 7.0856 | 6.9731 | 10.4734 | 11.8321 | 5.4766 | 4.5197* |
| | 6 | 19.5586 | 11.8739 | 13.3066 | 9.3310 | 11.8376 | 13.5734 | 9.5211 | 8.6101* |
| | 7 | 6.8092 | 9.4979 | 5.3851 | 4.7616 | 6.1957 | 6.4570 | 5.5694 | 2.9184* |
| | 8 | 11.9938 | 11.1840 | 5.1680 | 5.1630 | 7.5016 | 8.3372 | 8.3629 | 3.2086* |
| | 9 | 16.6186 | 11.8780 | 9.6490 | 6.3003 | 9.0265 | 10.2918 | 10.0992 | 5.5338* |

Figure 4. Dehazing results using synthetic images without the sky area: No. 1 shows the low condition, No. 2 shows the medium condition, and No. 3 shows the dense condition. (a) input haze image, (b) Tarel, (c) Meng, (d) He, (e) RRO, (f) Zhu, (g) Raikwar, (h) Zhao, (i) the proposed method.

Figure 5. Dehazing results using synthetic images with a large amount of sky area: No. 4 shows the low condition, No. 5 shows the medium condition, and No. 6 shows the dense condition. (a) input haze image, (b) Tarel, (c) Meng, (d) He, (e) RRO, (f) Zhu, (g) Raikwar, (h) Zhao, (i) the proposed method.

The results presented in Figure 4 indicate a gradual deterioration of the dehazing effect as the level of haze increases. The results of Meng reveal significant color distortion, which can be attributed to the absence of distinct white objects in the image; only the points closest to white were selected, producing the observed phenomenon. The results of Zhu show that as the haze deepens, the color of the door progressively darkens. Similarly, the results of Zhao, RRO, and Raikwar show a gradual distortion in the color of the door, with instances of it appearing white or even exhibiting a purple hue.

In Figure 5, the results of Tarel indicate that the image still contains significant haze, which worsens as the haze intensity increases. The results of Meng highlight issues such as oversaturation or excessive darkening of white clouds, possibly due to the selection of different white objects. Both Meng and RRO produce better dehazing outcomes in low-to-medium haze conditions but struggle to achieve satisfactory results in denser haze. The results of Zhu successfully capture mountain textures; however, they also exhibit a general darkening of white cloud colors. The results of Raikwar show noticeable distortion and alteration in the color of white clouds. The results of Zhao perform well in low haze conditions, as indicated by the positive indicators in Table 1; however, as the haze intensifies, white patches become increasingly visible.

In Figure 6, the results of Tarel mirror those observed in previous figures, with a significant amount of haze remaining. The results of Meng are excessively bright in low and medium haze, causing the foreground mountains to lose their original color, while exhibiting excessively dark colors in dense haze. The results of He show residual haze in dense haze scenarios. The results of Zhu and Raikwar are darker than those of the other comparative algorithms. The results of Zhao show increased artifacts and halos on distant mountains as haze levels escalate, accompanied by patches in the lakes. The results of RRO exhibit effective haze removal across the various degrees of haze, with the data in Table 1 showing very little deviation from the best-performing results.

Figure 6. Dehazing results using synthetic images that contain reflective surfaces (e.g., sea surface, lake surface, mirrors): No. 7 shows the low condition, No. 8 shows the medium condition, and No. 9 shows the dense condition. (a) input haze image, (b) Tarel, (c) Meng, (d) He, (e) RRO, (f) Zhu, (g) Raikwar, (h) Zhao, (i) the proposed method.

By contrast, our proposed approach yields better results than these competing techniques. Notably, it not only improves the visibility of distant views but also enhances texture details without altering the original color appearance.

4.2.2. Parameter Sensitivity Analysis

The sensitivity of the weight parameters $\mu_1$ and $\mu_2$ in our algorithm is discussed here. In Equation (8), these weights control the balance between the coefficient maps obtained after the λ0 norm penalty and after the λ2 norm penalty. PSNR and SSIM are more sensitive to the weight parameters than to the other parameters. Therefore, we conduct experiments by fixing $\varepsilon_1, \varepsilon_2, r_1, r_2, \lambda, \beta$ and varying $\mu_1$ and $\mu_2$, using PSNR and SSIM to measure the accuracy of the dehazed results. Here, $\varepsilon_1$ and $r_1$ are the regularization parameter and local window radius of the improved guided filter, while $\varepsilon_2$ and $r_2$ are those of the guided filter with the quadratic limit on the regression coefficients. The sum of $\mu_1$ and $\mu_2$ is 1. Figure 7 shows the evaluation index values for different $\mu_1$ settings at the medium haze density of Figure 4. PSNR gradually decreases as $\mu_1$ increases, while SSIM trends upward until $\mu_1$ reaches 0.16, remains stable from 0.16 to 0.19, and then declines after 0.19. Therefore, we use the parameters shown in Table 2, which were carefully chosen after numerous experiments.
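The sweep itself is straightforward to script; a sketch reusing the `dehaze_hybrid` and `full_reference_scores` helpers from earlier sections is below, with an illustrative grid around the chosen value of 0.16.

```python
import numpy as np

def sweep_mu1(I_hazy, J_truth, mu1_grid=None):
    """Sensitivity sweep over mu1 with mu2 = 1 - mu1 (their sum is fixed to 1),
    reusing the dehaze_hybrid and full_reference_scores sketches above."""
    if mu1_grid is None:
        mu1_grid = np.round(np.arange(0.05, 0.31, 0.01), 2)  # illustrative grid
    results = []
    for mu1 in mu1_grid:
        J_hat = np.clip(dehaze_hybrid(I_hazy, mu1=mu1, mu2=1.0 - mu1), 0.0, 1.0)
        s = full_reference_scores(J_hat, J_truth)
        results.append((float(mu1), s["PSNR"], s["SSIM"]))
    return results
```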

Figure 7. Parameter sensitivity: PSNR and SSIM of our algorithm with respect to the weight $\mu_1$ at the medium haze density of Figure 4.

Table 2. Parameter settings for synthetic images.

| Parameter | $\varepsilon_1$ | $r_1$ | $\varepsilon_2$ | $r_2$ | $\lambda$ | $\beta$ | $\mu_1$ | $\mu_2$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 2 | 42 | 0.01 | 20 | 2 | 0.1 | 0.16 | 0.84 |

4.3. Real World Images

4.3.1. Results

Figure 8 and Figure 9 show the dehazing results for real world images, and the evaluation indicators of various methods are shown in Table 3.

Figure 8. Qualitative comparison of different methods on real world images. (a) input haze image, (b) Tarel, (c) Meng, (d) He, (e) RRO, (f) Zhu, (g) Raikwar, (h) Zhao, (i) the proposed method.

Table 3. Visual quality evaluation using Entropy, Contrast ratio, Mean gradient, FADE, BIQI, $\sigma$, $e$, and $r$. The 1st and 2nd winners of each measurement are displayed with "*" and "♣", respectively.

| Metric | Image | Tarel | Meng | He | RRO | Zhu | Raikwar | Zhao | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Entropy | 1 | 6.77 | 6.43 | 6.88 | 6.99 | 7.12* | 5.86 | 6.70 | 6.79 |
| | 2 | 7.40 | 7.41 | 7.55 | 7.66 | 7.66 | 6.45 | 7.57 | 7.70* |
| | 3 | 7.07 | 6.59 | 6.84 | 7.10* | 6.78 | 6.71 | 6.71 | 7.00 |
| | 4 | 6.50 | 6.56 | 7.29 | 7.18 | 6.93 | 6.96 | 7.08 | 7.36* |
| | 5 | 7.33 | 6.92 | 7.33 | 7.48* | 7.54 | 7.17 | 7.31 | 7.40 |
| | 6 | 7.04 | 6.81 | 7.28* | 7.15 | 7.22 | 7.10 | 7.03 | 7.23 |
| | 7 | 6.85 | 6.91 | 6.71 | 7.04 | 7.17 | 7.25 | 7.29* | 7.12 |
| | 8 | 6.79 | 6.51 | 7.25 | 7.43 | 7.28 | 6.97 | 7.27 | 7.44* |
| Contrast ratio | 1 | 0.37 | 0.39 | 0.73 | 0.55 | 0.33 | 1.09* | 0.58 | 0.85 |
| | 2 | 0.45 | 0.53 | 0.40 | 0.64 | 0.58 | 1.01* | 0.46 | 0.62 |
| | 3 | 0.46 | 0.43 | 0.42 | 0.45 | 0.38 | 0.80* | 0.63 | 0.71 |
| | 4 | 0.34 | 0.35 | 0.40 | 0.53 | 0.30 | 0.57 | 0.53 | 0.62* |
| | 5 | 0.54 | 0.47 | 0.46 | 0.56 | 0.44 | 0.46 | 0.45 | 0.58* |
| | 6 | 0.37 | 0.43 | 0.46 | 0.49 | 0.36 | 0.61* | 0.55 | 0.50 |
| | 7 | 0.42 | 0.52 | 0.42 | 0.53 | 0.47 | 0.68* | 0.46 | 0.61 |
| | 8 | 0.38 | 0.35 | 0.48 | 0.48 | 0.40 | 0.39 | 0.30 | 0.51* |
| Mean gradient | 1 | 5.78 | 3.83 | 6.18 | 5.40 | 9.83* | 5.72 | 5.96 | 7.42 |
| | 2 | 11.67 | 11.53 | 11.29 | 17.30 | 17.48* | 11.55 | 12.59 | 16.13 |
| | 3 | 3.29 | 2.95 | 2.84 | 2.92 | 3.61* | 2.98 | 3.14 | 3.53 |
| | 4 | 5.10 | 4.75 | 4.39 | 6.31 | 7.03* | 4.93 | 7.01 | 6.93 |
| | 5 | 11.06 | 8.82 | 9.16 | 10.98 | 12.29* | 8.45 | 9.72 | 11.42 |
| | 6 | 6.85 | 6.80 | 5.14 | 6.11 | 10.87* | 5.42 | 6.52 | 5.49 |
| | 7 | 3.13 | 3.71 | 4.31 | 6.46* | 4.31 | 4.00 | 3.62 | 5.97 |
| | 8 | 4.85 | 4.31 | 4.38 | 4.71 | 6.84* | 4.03 | 3.24 | 4.66 |
| FADE | 1 | 0.56 | 0.19 | 0.21 | 0.47 | 0.34 | 0.14* | 0.26 | 0.15 |
| | 2 | 0.24 | 0.19 | 0.27 | 0.14 | 0.15 | 0.11* | 0.22 | 0.15 |
| | 3 | 0.90 | 0.75 | 0.86 | 0.80 | 0.65 | 0.53 | 0.50* | 0.51 |
| | 4 | 0.82 | 0.44 | 0.84 | 0.40 | 0.48 | 0.29 | 0.28 | 0.27* |
| | 5 | 0.20 | 0.20 | 0.26 | 0.19 | 0.31 | 0.25 | 0.25 | 0.15* |
| | 6 | 0.37 | 0.26 | 0.43 | 0.31 | 0.25 | 0.21* | 0.22 | 0.34 |
| | 7 | 1.24 | 0.68 | 0.92 | 0.66 | 0.76 | 0.57 | 0.68 | 0.50* |
| | 8 | 0.79 | 0.75 | 0.75 | 0.77 | 0.80 | 0.94 | 1.53 | 0.71* |
| BIQI | 1 | 46.42 | 25.71 | 34.36 | 38.86 | 27.25 | 30.92 | 34.99 | 22.10* |
| | 2 | 21.43 | 25.09 | 23.22 | 19.21* | 21.60 | 29.43 | 25.52 | 27.96 |
| | 3 | 38.63 | 40.70 | 42.81 | 42.44 | 37.00 | 41.71 | 43.44 | 28.03* |
| | 4 | 32.25 | 30.56 | 30.34 | 22.56* | 23.03 | 28.72 | 25.61 | 27.86 |
| | 5 | 19.10 | 18.41 | 17.93* | 17.96 | 21.01 | 18.29 | 18.53 | 18.24 |
| | 6 | 38.12 | 71.59 | 45.63 | 61.36 | 15.76* | 48.09 | 76.19 | 47.36 |
| | 7 | 31.09 | 38.18 | 41.72 | 35.49 | 36.11 | 43.85 | 50.45 | 23.23* |
| | 8 | 27.43 | 25.99 | 27.10 | 26.87 | 37.18 | 26.18 | 27.35 | 25.88* |
| $\sigma$ | 1 | 0.00* | 95.97 | 0.92 | 0.00* | 0.00* | 7.79 | 0.00* | 0.03 |
| | 2 | 0.00* | 0.00* | 0.00* | 0.03 | 0.18 | 6.80 | 0.01 | 0.01 |
| | 3 | 0.00* | 0.01 | 0.04 | 0.02 | 0.02 | 0.16 | 0.07 | 0.00* |
| | 4 | 0.00* | 0.00* | 0.09 | 0.06 | 0.01 | 0.80 | 0.31 | 0.24 |
| | 5 | 0.00* | 0.04 | 0.00* | 0.41 | 0.00* | 0.01 | 0.00* | 0.01 |
| | 6 | 0.00* | 0.00* | 0.08 | 0.01 | 0.53 | 0.49 | 0.00* | 0.85 |
| | 7 | 0.00* | 0.00* | 0.05 | 0.03 | 0.00* | 0.18 | 0.21 | 0.00* |
| | 8 | 0.00* | 0.03 | 1.12 | 1.02 | 0.20 | 0.13 | 0.00* | 0.11 |
| $e$ | 1 | 0.38 | −0.54 | 1.41 | 1.08 | 1.37 | 1.52* | 1.51 | 1.51 |
| | 2 | 0.21 | 0.38 | 0.29 | 0.34 | 0.42* | 0.40 | 0.35 | 0.38 |
| | 3 | 0.32 | 3.35 | 2.54 | 2.38 | 4.75 | 8.16 | 6.47 | 8.24* |
| | 4 | 1.84 | 3.56 | 2.24 | 3.86 | 3.15 | 3.88 | 4.56* | 4.43 |
| | 5 | 0.37* | 0.07 | 0.07 | 0.05 | 0.12 | 0.05 | 0.08 | 0.12 |
| | 6 | 1.32 | 2.02* | 1.18 | 1.63 | 1.56 | 1.62 | 1.87 | 1.32 |
| | 7 | 11.65* | 5.77 | 5.39 | 7.18 | 6.27 | 7.36 | 6.05 | 8.16 |
| | 8 | 1.37* | 0.20 | 0.06 | 0.09 | 0.04 | 0.07 | 0.02 | 0.20 |
| $r$ | 1 | 0.42 | 6.65 | 1.80 | 1.54 | 2.85* | 1.63 | 1.72 | 2.22 |
| | 2 | 0.50 | 1.56 | 1.55 | 2.34 | 2.54* | 1.57 | 1.73 | 2.34 |
| | 3 | 0.67 | 1.94 | 1.85 | 1.93 | 2.61 | 1.90 | 2.09 | 2.63* |
| | 4 | 0.71 | 2.27 | 2.04 | 3.01 | 3.45 | 2.35 | 3.62 | 3.74* |
| | 5 | 0.63 | 1.33 | 1.40 | 1.68 | 1.89 | 1.27 | 1.52 | 1.93* |
| | 6 | 0.60 | 2.47 | 1.69 | 2.14 | 3.49* | 1.86 | 2.41 | 1.83 |
| | 7 | 0.99 | 2.31 | 2.77 | 4.32 | 2.70 | 2.43 | 2.22 | 4.43* |
| | 8 | 0.85 | 1.40 | 1.39 | 1.47 | 2.24* | 1.28 | 1.02 | 1.50 |

Figure 9. Qualitative comparison of different methods on real world images. (a) input haze image, (b) Tarel, (c) Meng, (d) He, (e) RRO, (f) Zhu, (g) Raikwar, (h) Zhao, (i) the proposed method.

The model of Tarel is not able to completely eliminate the haze at Nos. 1, 4, and 6, particularly in the distant areas of Nos. 4 and 6, and it produces halo artifacts on the ships in the distant part of No. 8. According to the $\sigma$ data in Table 3, the results of Tarel show little distortion and few saturated pixels. The results of Meng are too dark in Nos. 3, 4, 5, and 8, and there is distortion in the sky of No. 8. The results of He, RRO, and Zhao restore the details of the scene and objects well, especially in No. 2; from the data in Table 3, RRO appears to achieve better results in all aspects. However, both He and RRO leave some haze in the distant parts. Additionally, Nos. 3 and 7 of Zhao exhibit white spots, and a halo appears around the tree in the foreground of No. 6. The results of Zhu show partial distortion in the lower-left corner of Nos. 1 and 6, and the contrast of No. 5 is too low, giving the building an overall white appearance; while its objective indicators perform well according to Table 3, its subjective quality is not satisfactory. The results of Raikwar generally appear darker than the others, but they exhibit better haze removal, particularly noticeable in Nos. 1 and 2; this is reflected in their low FADE values and high contrast ratio values.

The proposed method in this research has a positive impact on the dehazing effect and color recovery. By applying the λ0 gradient minimization constraint to the ridge regression coefficients of guided filtering, it better preserves edges, as evidenced by the average gradient and $e$ values in the table.

4.3.2. Parameter Sensitivity Analysis

Here, similar to the parameter sensitivity analysis for the synthetic images, we conduct experiments by fixing $\varepsilon_1, \varepsilon_2, r_1, r_2, \lambda, \beta$ and varying $\mu_1$ and $\mu_2$, using BIQI and $e$ to measure the accuracy of the dehazed results. Figure 10 shows the evaluation index values for different $\mu_1$ settings on No. 3 of Figure 8. The figure shows that $e$ generally trends upward, with little difference observed when $\mu_1$ is between 0.88 and 0.94. Within this interval, the smallest BIQI values are recorded at 0.89 and 0.92, so either value can be chosen. The corresponding parameter settings can be found in Table 4.

Figure 10. Parameter sensitivity: BIQI and $e$ of our algorithm with respect to the weight $\mu_1$ on No. 3 of Figure 8.

Table 4. Parameter settings of real-world images.

| Parameter | $\varepsilon_1$ | $r_1$ | $\varepsilon_2$ | $r_2$ | $\lambda$ | $\beta$ | $\mu_1$ | $\mu_2$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 0.001 | 4 | 0.001 | 10 | 2 | 0.004 | 0.92 | 0.08 |

5. Conclusions

To enhance the realism of the existing haze image degradation model, we propose a dehazing model that integrates a hybrid λ2-λ0 penalty. This model refines the transmission map through a weighted summation of regression coefficients derived from the λ2 and λ0 penalties. The method effectively smooths details while preserving edges, giving the refined transmission map a more accurate outline; consequently, the dehazed image retains more edge information. Experimental results demonstrate that the proposed method yields favorable outcomes in dehazing both synthetic and real-world images, achieving higher evaluation scores.

In future studies, we plan to enhance the transmission with new regularization terms, and hope to reduce the impact of noise during the dehazing process, resulting in a cleaner and more optimal output image.

Acknowledgements

I would like to express my sincere gratitude to my advisor, Dongjiang Ji, for his invaluable guidance and support throughout this research. His insights and encouragement have been instrumental in shaping my work. I also appreciate the contributions of Chunyu Xu for her assistance in the validation process. Furthermore, I extend my thanks to my colleagues and friends for their encouragement and constructive feedback. This research would not have been possible without their support.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Yang, Y. and Wang, Z. (2020) Image Restoration Algorithm Based on Compensated Transmission and Adaptive Haze Concentration Coefficient. Journal on Communications, 41, 66-75.
[2] Dale-Jones, R. and Tjahjadi, T. (1993) A Study and Modification of the Local Histogram Equalization Algorithm. Pattern Recognition, 26, 1373-1381.
https://doi.org/10.1016/0031-3203(93)90143-k
[3] Wang, Q. and Ward, R. (2007) Fast Image/Video Contrast Enhancement Based on Weighted Thresholded Histogram Equalization. IEEE Transactions on Consumer Electronics, 53, 757-764.
https://doi.org/10.1109/tce.2007.381756
[4] Stockham, T.G. (1972) Image Processing in the Context of a Visual Model. Proceedings of the IEEE, 60, 828-842.
https://doi.org/10.1109/proc.1972.8782
[5] Russo, F. (2002) An Image Enhancement Technique Combining Sharpening and Noise Reduction. IEEE Transactions on Instrumentation and Measurement, 51, 824-828.
https://doi.org/10.1109/tim.2002.803394
[6] Land, E.H. (1977) The Retinex Theory of Color Vision. Scientific American, 237, 108-128.
https://doi.org/10.1038/scientificamerican1277-108
[7] Jobson, D.J., Rahman, Z. and Woodell, G.A. (1997) Properties and Performance of a Center/Surround Retinex. IEEE Transactions on Image Processing, 6, 451-462.
https://doi.org/10.1109/83.557356
[8] Rahman, Z., Jobson, D.J. and Woodell, G.A. (1996) Multi-Scale Retinex for Color Image Enhancement. Proceedings of 3rd IEEE International Conference on Image Processing, Lausanne, 19 September 1996, 1003-1006.
https://doi.org/10.1109/icip.1996.560995
[9] Fattal, R. (2008) Single Image Dehazing. ACM Transactions on Graphics, 27, 1-9.
https://doi.org/10.1145/1360612.1360671
[10] Tan, R.T. (2008) Visibility in Bad Weather from a Single Image. 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, 23-28 June 2008, 1-8.
https://doi.org/10.1109/cvpr.2008.4587643
[11] Tarel, J. and Hautiere, N. (2009) Fast Visibility Restoration from a Single Color or Gray Level Image. 2009 IEEE 12th International Conference on Computer Vision, Kyoto, 29 September-2 October 2009, 2201-2208.
https://doi.org/10.1109/iccv.2009.5459251
[12] Meng, G., Wang, Y., Duan, J., Xiang, S. and Pan, C. (2013) Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. 2013 IEEE International Conference on Computer Vision, Sydney, 1-8 December 2013, 617-624.
https://doi.org/10.1109/iccv.2013.82
[13] Li, Z.G., Zheng, J.H., Zhu, Z.J., Yao, W. and Wu, S.Q. (2015) Weighted Guided Image Filtering. IEEE Transactions on Image Processing, 24, 120-129.
https://doi.org/10.1109/tip.2014.2371234
[14] Li, Z. and Zheng, J. (2015) Edge-Preserving Decomposition-Based Single Image Haze Removal. IEEE Transactions on Image Processing, 24, 5432-5441.
https://doi.org/10.1109/tip.2015.2482903
[15] Farbman, Z., Fattal, R., Lischinski, D. and Szeliski, R. (2008) Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation. ACM Transactions on Graphics, 27, 1-10.
https://doi.org/10.1145/1360612.1360666
[16] Min, D., Choi, S., Lu, J., Ham, B., Sohn, K. and Do, M.N. (2014) Fast Global Image Smoothing Based on Weighted Least Squares. IEEE Transactions on Image Processing, 23, 5638-5653.
https://doi.org/10.1109/tip.2014.2366600
[17] Zhu, Z., Wei, H., Hu, G., Li, Y., Qi, G. and Mazur, N. (2021) A Novel Fast Single Image Dehazing Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Transactions on Instrumentation and Measurement, 70, 1-23.
https://doi.org/10.1109/tim.2020.3024335
[18] Raikwar, S.C. and Tapaswi, S. (2020) Lower Bound on Transmission Using Non-Linear Bounding Function in Single Image Dehazing. IEEE Transactions on Image Processing, 29, 4832-4847.
https://doi.org/10.1109/tip.2020.2975909
[19] Zhao, X. (2021) Single Image Dehazing Using Bounded Channel Difference Prior. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, 19-25 June 2021, 727-735.
https://doi.org/10.1109/cvprw53098.2021.00082
[20] Baiju, P.S., Antony, S.L. and George, S.N. (2021) An Intelligent Framework for Transmission Map Estimation in Image Dehazing Using Total Variation Regularized Low-Rank Approximation. The Visual Computer, 38, 2357-2372.
https://doi.org/10.1007/s00371-021-02117-2
[21] Yadav, S.K. and Sarawadekar, K. (2023) A New Robust Scale-Aware Weighting-Based Effective Edge-Preserving Gradient Domain Guided Image Filter for Single Image Dehazing. Journal of Signal Processing Systems, 95, 475-493.
https://doi.org/10.1007/s11265-023-01849-9
[22] Padmini, T.N. and Shankar, T. (2016) De-Hazing Using Guided and L0 Gradient Minimization Filters. Indian Journal of Science and Technology, 9, 1-6.
https://doi.org/10.17485/ijst/2016/v9i37/102115
[23] Shin, J., Kim, M., Paik, J. and Lee, S. (2020) Radiance-Reflectance Combined Optimization and Structure-Guided L0-Norm for Single Image Dehazing. IEEE Transactions on Multimedia, 22, 30-44.
https://doi.org/10.1109/tmm.2019.2922127
[24] He, K.M., Sun, J. and Tang, X.O. (2011) Single Image Haze Removal Using Dark Channel Prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 2341-2353.
https://doi.org/10.1109/tpami.2010.168
[25] He, K., Sun, J. and Tang, X. (2013) Guided Image Filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 1397-1409.
https://doi.org/10.1109/tpami.2012.213
[26] Xu, L., Lu, C., Xu, Y. and Jia, J. (2011) Image Smoothing via L0 Gradient Minimization. ACM Transactions on Graphics, 30, 1-12.
https://doi.org/10.1145/2070781.2024208
[27] Dedieu, A., Lázaro-Gredilla, M. and George, D. (2021) Sample-Efficient L0-L2 Constrained Structure Learning of Sparse Ising Models. Proceedings of the AAAI Conference on Artificial Intelligence, 35, 7193-7200.
https://doi.org/10.1609/aaai.v35i8.16884
[28] de Resende Oliveira, F.D., Batista, E.L.O. and Seara, R. (2024) On the Compression of Neural Networks Using ℓ0-Norm Regularization and Weight Pruning. Neural Networks, 171, 343-352.
https://doi.org/10.1016/j.neunet.2023.12.019
[29] Mazumder, R., Radchenko, P. and Dedieu, A. (2023) Subset Selection with Shrinkage: Sparse Linear Modeling When the SNR Is Low. Operations Research, 71, 129-147.
https://doi.org/10.1287/opre.2022.2276
[30] Middleton, W.E.K. and Twersky, V. (1954) Vision through the Atmosphere. Physics Today, 7, 21.
https://doi.org/10.1063/1.3061544
[31] Kim, J., Jang, W., Sim, J. and Kim, C. (2013) Optimized Contrast Enhancement for Real-Time Image and Video Dehazing. Journal of Visual Communication and Image Representation, 24, 410-425.
https://doi.org/10.1016/j.jvcir.2013.02.004
[32] Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., et al. (2019) Benchmarking Single-Image Dehazing and Beyond. IEEE Transactions on Image Processing, 28, 492-505.
https://doi.org/10.1109/tip.2018.2867951
[33] Agrawal, S.C. and Jalal, A.S. (2021) Distortion-Free Image Dehazing by Superpixels and Ensemble Neural Network. The Visual Computer, 38, 781-796.
https://doi.org/10.1007/s00371-020-02049-3
[34] Min, X., Zhai, G., Gu, K., Zhu, Y., Zhou, J., Guo, G., et al. (2019) Quality Evaluation of Image Dehazing Methods Using Synthetic Hazy Images. IEEE Transactions on Multimedia, 21, 2319-2333.
https://doi.org/10.1109/tmm.2019.2902097
[35] Min, X., Zhai, G., Gu, K., Yang, X. and Guan, X. (2019) Objective Quality Evaluation of Dehazed Images. IEEE Transactions on Intelligent Transportation Systems, 20, 2879-2892.
https://doi.org/10.1109/tits.2018.2868771
[36] Min, X., Gu, K., Zhai, G., Liu, J., Yang, X. and Chen, C.W. (2018) Blind Quality Assessment Based on Pseudo-Reference Image. IEEE Transactions on Multimedia, 20, 2049-2062.
https://doi.org/10.1109/tmm.2017.2788206
[37] Min, X., Zhai, G., Gu, K., Liu, Y. and Yang, X. (2018) Blind Image Quality Estimation via Distortion Aggravation. IEEE Transactions on Broadcasting, 64, 508-517.
https://doi.org/10.1109/tbc.2018.2816783
[38] Min, X., Gu, K., Zhai, G., Yang, X., Zhang, W., Le Callet, P., et al. (2021) Screen Content Quality Assessment: Overview, Benchmark, and Beyond. ACM Computing Surveys, 54, 1-36.
https://doi.org/10.1145/3470970
[39] Zhai, G. and Min, X. (2020) Perceptual Image Quality Assessment: A Survey. Science China Information Sciences, 63, Article No. 211301.
https://doi.org/10.1007/s11432-019-2757-1
[40] Zhang, J., Min, X., Zhu, Y., Zhai, G., Zhou, J., Yang, X., et al. (2022) HazDesNet: An End-To-End Network for Haze Density Prediction. IEEE Transactions on Intelligent Transportation Systems, 23, 3087-3102.
https://doi.org/10.1109/tits.2020.3030673
[41] Wang, Z. and Bovik, A.C. (2006) Modern Image Quality Assessment. Springer.
https://doi.org/10.2200/s00010ed1v01y200508ivm003
[42] Wang, Z., Bovik, A.C., Sheikh, H.R. and Simoncelli, E.P. (2004) Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, 13, 600-612.
https://doi.org/10.1109/tip.2003.819861
[43] Sharma, G., Wu, W. and Dalal, E.N. (2004) The CIEDE2000 Color-Difference Formula: Implementation Notes, Supplementary Test Data, and Mathematical Observations. Color Research & Application, 30, 21-30.
https://doi.org/10.1002/col.20070
[44] Choi, L.K., You, J. and Bovik, A.C. (2015) Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging. IEEE Transactions on Image Processing, 24, 3888-3901.
https://doi.org/10.1109/tip.2015.2456502
[45] Moorthy, A.K. and Bovik, A.C. (2010) A Two-Step Framework for Constructing Blind Image Quality Indices. IEEE Signal Processing Letters, 17, 513-516.
https://doi.org/10.1109/lsp.2010.2043888
[46] Hautière, N., Tarel, J., Aubert, D. and Dumont, É. (2011) Blind Contrast Enhancement Assessment by Gradient Ratioing at Visible Edges. Image Analysis & Stereology, 27, 87-95.
https://doi.org/10.5566/ias.v27.p87-95
