Numerical Comparisons of Different Imaging Algorithms

Abstract

Image processing is the set of operations performed to extract "information" from an image. An interesting problem in digital image processing is the restoration of degraded images: the acquired image often differs from the expected image. Our problem is therefore to recover, from a poor-quality image that has been corrupted by additive Gaussian noise, an image close to the original. Several algorithms exist for restoring a degraded image to better quality. We present in this paper our numerical results obtained with the Tikhonov regularization, ROF, Vese-Osher, and anisotropic and isotropic TV denoising models.

Share and Cite:

Bougueroua, S. and Daili, N. (2023) Numerical Comparisons of Different Imaging Algorithms. Journal of Applied Mathematics and Physics, 11, 2671-2690. doi: 10.4236/jamp.2023.119175.

1. Introduction

A digital image is composed of basic units (referred to as pixels), each representing a specific area of the image. The width and height of an image are determined by the number of pixels along each of those dimensions, and each pixel can take a value within a range of grayscale levels or colors (we speak of image dynamics). There are three categories of digital images:

• Binary images: in the simplest images, a pixel can only take the values black or white. This form of image is typically used to scan a piece of text printed in a single color;

• The grayscale images: grayscale images typically display 256 shades of gray; each pixel value selects one of these 256 gray levels. By convention, 0 represents black (zero luminous intensity) and 255 represents white (maximum luminous intensity);

• The color images: a color image is actually made up of three images, one for each of the colors red, green, and blue. Each of these three images is referred to as a channel. This red-green-blue representation mimics how the human visual system works.

The fundamentally ill-posed character of some practical problems is recognized and is manifested in a very large class of problems, called “inverse problems”. There are several types of ill-posed inverse problems, and their applications can be found in many fields such as image processing.

Digital image processing is one of the most crucial components of machine learning and computer vision. A fascinating area of digital image processing is image restoration. During image acquisition (especially through photography), it is typical for the final image to diverge from the expected image.

Image processing refers to all techniques and methods used to modify, denoise, improve, or analyze digital images. It aims to extract useful information, improve visual quality, detect specific patterns or objects, and support various imaging tasks. Image processing plays a crucial role in many areas of our daily lives, such as digital photography, television, medicine, computer vision, pattern recognition, augmented reality, robotics, security and surveillance, and geography. It plays an essential role in understanding and exploiting the visual information contained in images.

There are many examples and fields of applications of image processing. The two main areas that have enabled image processing to develop are:

1) The military domain: missiles of all kinds (self-directed (short range), cruise (long range), …); intelligence (remote sensing from satellite images, the accuracy of which can go up to a few centimeters today, photo-interpretation); real simulators (aircraft, tank, …).

2) The medical field: medical imaging (ultrasound, MRI, tomography, angiography, x-ray, scanner, …).

Image processing is a discipline that concerns the manipulation, analysis and improvement of images using algorithms and computer techniques. Different techniques can be combined to get better results. There are many algorithms used in imaging to process and analyze images such as resizing, rotation, filtering, as well as more advanced operations such as segmentation, object detection, pattern recognition, and image restoration.

Image restoration algorithms are used to reduce Gaussian noise, remove artifacts, improve image resolution or overall image quality. Common methods include deconvolution techniques, model-based restoration, use of adaptive filters.

Denoising is the inverse problem of removing noise from an image; the outcome would be subpar if noise were left in the image. Noise is parasitic information added to the scene. Since noise has a wide range of origins and characteristics, it can be modeled in many different ways. There are many different kinds of noise, but the case studied in this article is additive Gaussian noise on grayscale images, i.e. $f = u + \eta$. The original image is represented by $u$, the observed noisy image by $f$, and the zero-mean Gaussian random fluctuation by $\eta$. Gaussian noise, often referred to as normal noise, has a predefined density function and is a common way of adding noise to images. This noise can be produced randomly and independently at each pixel of the image according to

$$p(z) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(z-\bar{z})^2}{2\sigma^2}}, \qquad (1)$$

where $z$ stands for intensity, $\bar{z}$ represents the mean value of $z$, and $\sigma$ stands for the standard deviation. To return the image to a better level of visual quality, denoising techniques are necessary. The investigation of various image restoration models under Gaussian noise is covered in this paper.
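As an illustration of the degradation model $f = u + \eta$, the following sketch (function names and values are our own, not from the paper) adds zero-mean Gaussian noise of standard deviation sigma to a grayscale image stored as a NumPy array:

```python
import numpy as np

def add_gaussian_noise(u, sigma, seed=0):
    # Simulate f = u + eta, with eta zero-mean Gaussian noise of std sigma
    rng = np.random.default_rng(seed)
    eta = rng.normal(loc=0.0, scale=sigma, size=u.shape)
    return u + eta

# Example: noise a constant gray image with values in [0, 1]
u = np.full((64, 64), 0.5)
f = add_gaussian_noise(u, sigma=0.08)
```

Note that MATLAB's imnoise parametrizes Gaussian noise differently (by mean and variance), so the sigma here is not directly interchangeable with the parameter used in the experiments below.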

2. Some Models of Image Restoration

This work serves as an introduction to image restoration, an interesting and ill-posed problem of crucial importance in image processing. Image restoration is the process of recovering an image nearly identical to the original from an observation, usually a blurred or noisy version of a real image. Several restoration models have been applied in the mathematical literature to such ill-posed problems. Among them are the following models.

2.1. Tikhonov Regularization

The Tikhonov regularization method is the oldest regularization method still in use for addressing inverse problems. The idea is to replace the ill-posed original problem with a well-posed approximate problem. It is one of the most well-known regularization methods in both statistics and numerical analysis.

The Tikhonov regularization is a very commonplace yet overly simplistic regularization method for image processing. If we assume that the additive noise v is Gaussian and that f represents the observed image, then we attempt to reconstruct or restore the image u.

Let $V = H_0^1(\Omega)$ and $H = L^2(\Omega)$. We take the original minimization problem (fitting to the data):

$$(P)\qquad \alpha := \min_{u \in V} \|u - f\|_H^2, \qquad (2)$$

where $f : \Omega \to \mathbb{R}^N$ is the observed image, and the following regularized problem:

$$(P_\alpha)\qquad \beta := \min_{u \in V} \left\{ \|u - f\|_H^2 + \alpha \|\nabla u\|_H^2 \right\}, \qquad \alpha > 0. \qquad (3)$$

For $u$ to merely fit the data $f$, the gradient must be kept very small (depending on the parameter $\alpha$). An image with a small gradient is "smoothed": the restoration produces a blurry image because the edges are eroded.

Proposition 1. [1] Assume that $(P)$ admits at least one solution $\tilde{u}$. The problem $(P_\alpha)$ admits a unique solution $u_\alpha$. When $\alpha \to 0$, one can extract from the family $(u_\alpha)_\alpha$ a subsequence that converges in $V$ to a solution $u$ of $(P)$.

Proof. The regularized Tikhonov problem is well posed: the solution exists, is unique, and the family converges as stated. The detailed proof of this proposition can be found in [1].

The restored image $u$ is far less sharp (in particular, the edges are eroded), which makes the common Tikhonov regularization term $L(u) = \|\nabla u\|_2^2$ poorly suited to image restoration. Consider instead the total variation, i.e. $L(u) = \int |Du|$. This strategy is significantly more successful. It leads to a functional minimization problem in a particular Banach space, the space of functions of bounded variation.
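The quadratic problem (3) can in fact be solved in closed form. A minimal sketch, assuming periodic boundary conditions so that the optimality condition $(I - \alpha\Delta)u = f$ diagonalizes under the 2-D DFT (this discretization choice is ours, not the paper's):

```python
import numpy as np

def tikhonov_denoise(f, alpha):
    # Solve min_u ||u - f||^2 + alpha * ||grad u||^2.
    # The optimality condition (I - alpha * Laplacian) u = f becomes a
    # pointwise division in the Fourier domain (periodic boundaries assumed).
    M, N = f.shape
    wx = 2 * np.pi * np.fft.fftfreq(M)
    wy = 2 * np.pi * np.fft.fftfreq(N)
    # eigenvalues of minus the 5-point discrete Laplacian
    lam = (2 - 2 * np.cos(wx))[:, None] + (2 - 2 * np.cos(wy))[None, :]
    u_hat = np.fft.fft2(f) / (1 + alpha * lam)
    return np.real(np.fft.ifft2(u_hat))
```

Larger alpha smooths more aggressively, reproducing the edge erosion discussed above.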

2.2. The Continuous Model of Rudin-Osher-Fatemi

Rudin, Osher, and Fatemi (ROF) proposed the first image restoration model for a given noisy image with additive noise using total variation (TV) regularization, which is defined by

$$TV(u(x,y)) := \int_\Omega |\nabla u(x,y)| \, dx\, dy, \quad \text{with } |\nabla u| = \sqrt{u_x^2 + u_y^2}.$$

Total variation (TV) regularization is an image processing approach used to reduce noise in digital images. Originally developed by ROF to address the problem of image degradation, it has since been applied to a multitude of other imaging problems.

In [2], Rudin, Osher, and Fatemi proposed a model in which the image is split into two parts, $f = u + v$, where $u$ is the unknown image and $v$ is the noise. Here $f$ is the observed brightness image, initially assumed close to a clean image, and $\lambda$ is a tuning parameter. We thus try to solve the problem by applying the regularization only to the "noise" portion of the decomposition $f = u + v$, with $u \in BV(\Omega)$ and $v \in L^2(\Omega)$. If $f \in L^2(\Omega)$, the ROF problem is well posed and the minimizer $u$ exists, is unique, and is stable in $L^2(\Omega)$. ROF proposed the following minimization problem:

$$(P_{ROF})\qquad \alpha_{ROF} := \inf_u \left\{ J(u) + \frac{1}{2\lambda} \|v\|_2^2 : u \in BV(\Omega),\ v \in L^2(\Omega),\ f = u + v \right\}. \qquad (4)$$

This results in a ( B V ( Ω ) , L 2 ( Ω ) ) decomposition of the image f.

$J(u)$ denotes the total variation of $u$ and $\lambda > 0$ is a weight parameter, with

$$J(u) = \sup\left\{ \int_\Omega u(x)\, \mathrm{div}(\varphi(x))\, dx : \varphi \in C_c^1(\Omega, \mathbb{R}^2),\ \|\varphi\|_\infty \le 1 \right\}.$$

Here $BV(\Omega)$ denotes the space of functions of bounded variation:

$$BV(\Omega) = \left\{ u \in L^1(\Omega) : J(u) < +\infty \right\}.$$


Theorem 2. [2] [3] The problem $(P_{ROF})$ admits a unique solution, which is given by

$$u = f - \Pi_{\lambda K}(f), \qquad (5)$$

where $\Pi_{\lambda K}$ is the orthogonal projector onto $\lambda K$ (the dilation of $K$ by $\lambda$), and $K$ is the closure in $L^2$ of

$$K := \left\{ \mathrm{div}(\varphi) : \varphi \in C_c^1(\Omega, \mathbb{R}^2),\ \|\varphi\|_\infty \le 1 \right\}.$$
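Theorem 2 suggests a practical solver: compute the projection $\Pi_{\lambda K}(f)$ iteratively. The sketch below implements Chambolle's fixed-point projection algorithm from [3]; the forward-difference gradient, its adjoint divergence, and the step $\tau = 1/8$ are our implementation choices.

```python
import numpy as np

def grad(u):
    # forward differences, zero at the last row/column
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # discrete divergence, minus the adjoint of grad
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_chambolle(f, lam, n_iter=100, tau=0.125):
    # u = f - lam * div(p), with p the fixed point of Chambolle's iteration;
    # tau <= 1/8 guarantees convergence in Chambolle's analysis [3]
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        denom = 1 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / denom
        py = (py + tau * gy) / denom
    return f - lam * div(px, py)
```

A larger lam removes more oscillation; a constant image is left untouched, as expected from the projection formula.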

2.3. Meyer’s Model

In [4], Yves Meyer shows that if $\lambda$ is small enough, the ROF model will erase textures. He suggests using a space of functions that is, in some sense, the dual of the BV space, in order to extract from $f$ both the $u$ component in BV and the $v$ component as an oscillating function (texture or noise). Meyer gives the following definition.

Definition 1. [4] $G(\mathbb{R}^2)$ is the Banach space of distributions $v$ that may be written

$$G(\mathbb{R}^2) := \left\{ v(x,y) = \partial_x g_1(x,y) + \partial_y g_2(x,y) \ /\ g_1, g_2 \in L^\infty(\mathbb{R}^2) \right\}.$$

We will see that the space G allows for oscillating functions v, as justified by Meyer, and that the oscillations are well measured by the norm

$$\|v\|_G := \inf\left\{ \|g\|_{L^\infty(\mathbb{R}^2)} = \operatorname*{ess\,sup}_{x \in \mathbb{R}^2} |g(x)| \ /\ v = \mathrm{div}\, g,\ g = (g_1, g_2) \in L^\infty(\mathbb{R}^2) \times L^\infty(\mathbb{R}^2),\ |g| = \sqrt{g_1^2 + g_2^2} \right\}.$$

Meyer suggests the following new image restoration model:

$$(P_{Meyer})\qquad \alpha_{Meyer} := \inf_u \left\{ J(u) + \alpha \|v\|_G \ /\ u \in BV(\Omega),\ v \in G(\Omega);\ f = u + v \right\}. \qquad (6)$$

$J(u) = \int |\nabla u|$ denotes the total variation of $u$ and $\alpha > 0$, while $G(\mathbb{R}^2)$ denotes the space of oscillating functions.

Description of the Model

The interest in this space of oscillating functions stems from the fact that a strongly oscillating function can have large oscillations yet a small norm in $G(\mathbb{R}^2)$, and that the $L^2(\Omega)$ norm is not the best choice for capturing the oscillating portion of an image. That is why Meyer introduced a space better suited to oscillating components: the space $G$. The norm $\|\cdot\|_G$ is low for oscillating functions with zero average and high for geometric (cartoon-like) components.

Based on the following approximation of the $L^\infty$ norm, we arrive at the model below:

$$\left\| \sqrt{g_1^2 + g_2^2} \right\|_{L^\infty} = \lim_{p \to \infty} \left\| \sqrt{g_1^2 + g_2^2} \right\|_{L^p}, \qquad g_1, g_2 \in L^\infty(\mathbb{R}^2).$$
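A small numerical illustration of this limit on hypothetical discrete data g (the continuous statement uses the essential supremum, but the same effect is visible for discrete $\ell^p$ norms):

```python
import numpy as np

# For a fixed vector g, the discrete l^p norm (sum |g_i|^p)^(1/p) tends to
# max |g_i| as p grows; g here is illustration data of our own choosing.
g = np.array([0.3, -1.7, 0.9, 1.2])
vals = [float(np.sum(np.abs(g) ** p) ** (1.0 / p)) for p in (2, 8, 32, 128)]
sup_norm = float(np.max(np.abs(g)))
```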

Then, for tuning parameters $\lambda, \mu > 0$ and $p \ge 1$, the approximation of Meyer's model is given by

$$\inf_{u, g_1, g_2} \left\{ G_p(u, g_1, g_2) = \int |\nabla u| + \lambda \int \left| f - u - \partial_x g_1 - \partial_y g_2 \right|^2 dx\, dy + \mu \left[ \int \left( \sqrt{g_1^2 + g_2^2} \right)^p \right]^{\frac{1}{p}} \right\} \qquad (7)$$

where

• $\int |\nabla u|$ ensures that $u \in BV(\mathbb{R}^2)$;

• $\int |f - u - \partial_x g_1 - \partial_y g_2|^2\, dx\, dy$ ensures that $f \approx u + \mathrm{div}(g)$;

• $\mu \left[ \int \left( \sqrt{g_1^2 + g_2^2} \right)^p \right]^{\frac{1}{p}}$ is a penalty on the norm of $v = \mathrm{div}(g)$ in $G$.

As a result, the Euler-Lagrange equations take the following form:

$$u = f - \partial_x g_1 - \partial_y g_2 + \frac{1}{2\lambda}\, \mathrm{div}\left( \frac{\nabla u}{|\nabla u|} \right). \qquad (8)$$

$$\mu \left\| \sqrt{g_1^2 + g_2^2} \right\|_p^{1-p} \left( \sqrt{g_1^2 + g_2^2} \right)^{p-2} g_1 = 2\lambda \left[ \partial_x (u - f) + \partial^2_{xx} g_1 + \partial^2_{xy} g_2 \right]. \qquad (9)$$

$$\mu \left\| \sqrt{g_1^2 + g_2^2} \right\|_p^{1-p} \left( \sqrt{g_1^2 + g_2^2} \right)^{p-2} g_2 = 2\lambda \left[ \partial_y (u - f) + \partial^2_{xy} g_1 + \partial^2_{yy} g_2 \right]. \qquad (10)$$

2.4. Vese-Osher Model

Vese and Osher were the first to propose a numerical approach to solving Meyer's problem; to make it computable, they used the following approximation of Meyer's model:

$$(P_{Vese\_Osher})\qquad \alpha_{Vese\_Osher} := \inf_{(u,v) \in BV(\Omega) \times G(\Omega)} \left\{ J(u) + \lambda \|f - u - v\|_2^2 + \mu \|v\|_G \right\}. \qquad (11)$$

In our numerical calculations, the steps to calculate the solution of this problem are:

1) Replace the term $\|v\|_G$ by $\left\| \sqrt{g_1^2 + g_2^2} \right\|_p$ with $v = \mathrm{div}(g_1, g_2)$;

2) $p = 1$ is used for numerical reasons because it allows for faster calculations at each iteration;

3) Derive the Euler-Lagrange equations;

4) We apply a fixed point iterative technique with a finite difference semi-implicit scheme.

2.4.1. The Numerical Discretization of Meyer’s Model

The numerical discretization of Equations (8), (9) and (10) is performed using a semi-implicit finite difference scheme and a fixed-point iterative algorithm. We used the following initial values for the iterative algorithm:

h = 1, p = 1 and n = 100.

$$\begin{cases} u^0 = f; \\[4pt] g_1^0 = \dfrac{1}{2\lambda} \dfrac{f_x}{|\nabla f|}; \\[4pt] g_2^0 = \dfrac{1}{2\lambda} \dfrac{f_y}{|\nabla f|}. \end{cases}$$

The following notation is used: $u_{i,j} = u(ih, jh)$, $f_{i,j} = f(ih, jh)$, $g_{1,i,j} = g_1(ih, jh)$, with step $h > 0$ and grid point $(ih, jh)$ for all $0 \le i, j \le M$; we also make the change of variable

$$H(g_1, g_2) = \left\| \sqrt{g_1^2 + g_2^2} \right\|_p^{1-p} \left( \sqrt{g_1^2 + g_2^2} \right)^{p-2}.$$

So the discretization of Equations (8), (9) and (10) is given by

$$u_{i,j}^{n+1} = \frac{1}{1 + \frac{1}{2\lambda h^2}(c_1 + c_2 + c_3 + c_4)} \left( f_{i,j} - \frac{g_{1,i+1,j}^n - g_{1,i-1,j}^n}{2h} - \frac{g_{2,i,j+1}^n - g_{2,i,j-1}^n}{2h} + \frac{1}{2\lambda h^2} \left( c_1 u_{i+1,j}^n + c_2 u_{i-1,j}^n + c_3 u_{i,j+1}^n + c_4 u_{i,j-1}^n \right) \right); \qquad (12)$$

$$g_{1,i,j}^{n+1} = \frac{2\lambda}{\mu H(g_{1,i,j}, g_{2,i,j})} \left( \frac{u_{i+1,j}^n - u_{i-1,j}^n}{2h} - \frac{f_{i+1,j} - f_{i-1,j}}{2h} + \frac{g_{1,i+1,j}^n - 2 g_{1,i,j}^{n+1} + g_{1,i-1,j}^n}{h^2} + \frac{1}{4h^2} \left( g_{2,i+1,j+1}^n + g_{2,i-1,j-1}^n - g_{2,i+1,j-1}^n - g_{2,i-1,j+1}^n \right) \right); \qquad (13)$$

$$g_{2,i,j}^{n+1} = \frac{2\lambda}{\mu H(g_{1,i,j}, g_{2,i,j})} \left( \frac{u_{i,j+1}^n - u_{i,j-1}^n}{2h} - \frac{f_{i,j+1} - f_{i,j-1}}{2h} + \frac{g_{2,i,j+1}^n - 2 g_{2,i,j}^{n+1} + g_{2,i,j-1}^n}{h^2} + \frac{1}{4h^2} \left( g_{1,i+1,j+1}^n + g_{1,i-1,j-1}^n - g_{1,i+1,j-1}^n - g_{1,i-1,j+1}^n \right) \right). \qquad (14)$$

The following notations are used:

$$c_1 = \frac{1}{\sqrt{\left( \frac{u_{i+1,j}^n - u_{i,j}^n}{h} \right)^2 + \left( \frac{u_{i,j+1}^n - u_{i,j-1}^n}{2h} \right)^2}}; \quad c_2 = \frac{1}{\sqrt{\left( \frac{u_{i,j}^n - u_{i-1,j}^n}{h} \right)^2 + \left( \frac{u_{i-1,j+1}^n - u_{i-1,j-1}^n}{2h} \right)^2}};$$
$$c_3 = \frac{1}{\sqrt{\left( \frac{u_{i+1,j}^n - u_{i-1,j}^n}{2h} \right)^2 + \left( \frac{u_{i,j+1}^n - u_{i,j}^n}{h} \right)^2}}; \quad c_4 = \frac{1}{\sqrt{\left( \frac{u_{i+1,j-1}^n - u_{i-1,j-1}^n}{2h} \right)^2 + \left( \frac{u_{i,j}^n - u_{i,j-1}^n}{h} \right)^2}}. \qquad (15)$$

2.4.2. Solution of Vese-Osher Problem

In order to solve the Vese-Osher problem, we study this final problem in the discrete case, where the image is a two-dimensional array of size $N \times N$, with the Euclidean spaces $X = \mathbb{R}^{N \times N}$ and $Y = X \times X$.

If $u \in X$ then $\nabla u \in Y$ is defined by $(\nabla u)_{i,j} = \left( (\nabla u)_{i,j}^1, (\nabla u)_{i,j}^2 \right)$, where

$$(\nabla u)_{i,j}^1 = \begin{cases} u_{i+1,j} - u_{i,j} & \text{if } i < N, \\ 0 & \text{if } i = N, \end{cases} \qquad (\nabla u)_{i,j}^2 = \begin{cases} u_{i,j+1} - u_{i,j} & \text{if } j < N, \\ 0 & \text{if } j = N. \end{cases} \qquad (16)$$

In the discrete case, the total variation (TV) of $u$ is defined as

$$J_d(u) = \sum_{1 \le i,j \le N} \left| (\nabla u)_{i,j} \right|.$$

The divergence operator is $\mathrm{div} = -\nabla^*$ (minus the adjoint of $\nabla$), where

$$(\mathrm{div}(p))_{i,j} = \begin{cases} p_{i,j}^1 - p_{i-1,j}^1 & \text{if } 1 < i < N \\ p_{i,j}^1 & \text{if } i = 1 \\ -p_{i-1,j}^1 & \text{if } i = N \end{cases} \;+\; \begin{cases} p_{i,j}^2 - p_{i,j-1}^2 & \text{if } 1 < j < N \\ p_{i,j}^2 & \text{if } j = 1 \\ -p_{i,j-1}^2 & \text{if } j = N, \end{cases} \qquad (17)$$

and the space G d is defined by

$$G_d := \left\{ v \in X \ /\ \exists g \in Y \text{ such that } v = \mathrm{div}(g) \right\}.$$
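The definitions (16) and (17) can be checked numerically: div is built to be minus the adjoint of the gradient, so $\langle \nabla u, p \rangle = -\langle u, \mathrm{div}\, p \rangle$ must hold exactly. A sketch (0-based indexing here, in place of the paper's 1-based indexing):

```python
import numpy as np

def grad(u):
    # (16): forward differences, zero at the last row/column
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # (17): backward differences with the stated boundary cases, div = -grad^*
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

# Check <grad u, p> = -<u, div p> on arbitrary data
rng = np.random.default_rng(1)
u = rng.standard_normal((5, 7))
px = rng.standard_normal((5, 7)); py = rng.standard_normal((5, 7))
gx, gy = grad(u)
lhs = float(np.sum(gx * px + gy * py))
rhs = float(-np.sum(u * div(px, py)))
```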

Note that

$$G_\mu(\Omega) := \left\{ v \in G(\Omega) \text{ such that } \|v\|_G \le \mu \right\},$$

$$G_\mu^d(\Omega) := \left\{ v \in G_d(\Omega) \text{ such that } \|v\|_{G_d} \le \mu \right\}.$$

The Legendre-Fenchel conjugate $J_d^*$ is the indicator function of $G_1^d(\Omega)$, defined by

$$J_d^*(v) = \chi_{G_1^d}(v) = \begin{cases} 0 & \text{if } v \in G_1^d, \\ +\infty & \text{else.} \end{cases}$$

So to solve the Vese-Osher problem, we propose the following algorithm.

Algorithm 1. The algorithm for solving Vese Osher problem

$$\inf_{(u,v) \in BV(\Omega) \times G(\Omega)} \left\{ F_{\lambda,\mu}(u,v) = \begin{cases} J(u) + \frac{1}{2\lambda} \|f - u - v\|_2^2 & \text{if } v \in G_\mu(\Omega), \\ +\infty & \text{if } v \in G(\Omega) \setminus G_\mu(\Omega). \end{cases} \right\}$$

In the discrete setting:

$$\inf_{(u,v) \in X \times X} \left\{ F_{\lambda,\mu}(u,v) = \begin{cases} J_d(u) + \frac{1}{2\lambda} \|f - u - v\|_X^2 & \text{if } v \in G_\mu^d(\Omega), \\ +\infty & \text{if } v \in X \setminus G_\mu^d(\Omega). \end{cases} \right\}$$

$$\inf_{(u,v) \in X \times X} F(u,v) = \inf_{(u,v) \in X \times X} \left\{ J_d(u) + \frac{1}{2\lambda} \|f - u - v\|_X^2 + J_d^*\!\left( \frac{v}{\mu} \right) \right\}.$$

We divide the problem into two sub-problems:

Pbm1 ($u$ solution, $v$ fixed): $\inf_{u \in X} \left\{ J_d(u) + \frac{1}{2\lambda} \|f - u - v\|_X^2 \right\}$.

According to ROF, with the change of variable $\tilde{f} = f - v$, the solution is $\hat{u} = (f - v) - P_{G_\lambda^d}(f - v)$.

Pbm2 ($v$ solution, $u$ fixed): $\inf_{v \in G_\mu^d(\Omega)} \left\{ \|f - u - v\|_X^2 \right\}$.

With the change of variable $\tilde{f} = f - u$, the solution, according to ROF, is $\hat{v} = P_{G_\mu^d}(f - u)$.

Lemma 3. There is a unique solution $(\hat{u}, \hat{v}) \in X \times G_\mu^d$ that minimizes $F_{\lambda,\mu}(u,v)$ on $X \times G_\mu^d$.

2.5. The Split Bregman Algorithm

Goldstein and Osher first proposed the split Bregman algorithm in [5] to handle optimization problems of the general form:

$$\varpi := \min_{u \in X} \left\{ H(u) + \|\Phi(u)\|_1 \right\}, \qquad (18)$$

where $X$ is a closed convex set and $\Phi$ and $H$ are convex functions. This problem is equivalent to the following constrained minimization problem:

$$\bar{\varpi} := \min_{u \in X,\, d} \left\{ H(u) + \|d\|_1 \right\} \quad \text{such that } d = \Phi(u). \qquad (19)$$

Goldstein and Osher introduced the split Bregman algorithm, which was written as follows:

Algorithm 2. The split Bregman algorithm

Initialization: $k = 0$, $u^0 = 0$, $d^0 = 0$, $b^0 = 0$.

While $\|u^k - u^{k-1}\| > tol$ do,

$u^{k+1} = \arg\min_u \left\{ H(u) + \frac{\lambda}{2} \|d^k - \Phi(u) - b^k\|_2^2 \right\}$,

$d^{k+1} = \arg\min_d \left\{ \|d\|_1 + \frac{\lambda}{2} \|d - \Phi(u^{k+1}) - b^k\|_2^2 \right\}$,

$b^{k+1} = b^k + \left( \Phi(u^{k+1}) - d^{k+1} \right)$,

k = k + 1 ,

End while.

The split Bregman algorithm is used to solve optimization problems of the common form:

$$\hat{\varpi} := \min_{u \in X} \left\{ z(u) + \frac{1}{2} \|u - f\|_2^2 \right\}. \qquad (20)$$

Anisotropic and isotropic TV denoising problems are solved using the split Bregman method.

2.5.1. Anisotropic TV Denoising Problem

The problem of anisotropic TV denoising is considered in [6] .

$$(P_1)\qquad \tau_1 := \min_u \left\{ \|u_x\|_1 + \|u_y\|_1 + \frac{\mu}{2} \|u - f\|_2^2 \right\}, \qquad (21)$$

where $f$ is the noisy image, and the partial derivatives $\frac{\partial u}{\partial x}$ and $\frac{\partial u}{\partial y}$ are denoted $u_x$ and $u_y$ respectively. The problem $(P_1)$ is solved through an equivalent constrained problem $(P_2)$, as follows:

$$(P_2)\qquad \begin{cases} \tau_2 := \min\limits_{u,\, d_x,\, d_y} \|d_x\|_1 + \|d_y\|_1 + \frac{\mu}{2} \|u - f\|_2^2 \\ \text{subject to } d_x = u_x,\ d_y = u_y. \end{cases}$$

To handle the constraints, we pass to the following unconstrained problem, which the split Bregman algorithm can tackle:

$$(P_3)\qquad \tau_3 := \min_{u,\, d_x,\, d_y} \left\{ \|d_x\|_1 + \|d_y\|_1 + \frac{\mu}{2} \|u - f\|_2^2 + \frac{\lambda}{2} \|d_x - u_x\|_2^2 + \frac{\lambda}{2} \|d_y - u_y\|_2^2 \right\}. \qquad (22)$$

We use

$$\mathrm{shrink}(x, a) = \begin{cases} x - a & \text{if } x > a, \\ x + a & \text{if } x < -a, \\ 0 & \text{else.} \end{cases} \qquad (23)$$
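The shrink operator (23) is ordinary soft thresholding and can be written in one line; a sketch, vectorized over arrays (which the pointwise definition allows):

```python
import numpy as np

def shrink(x, a):
    # soft-thresholding: x - a for x > a, x + a for x < -a, 0 otherwise
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)
```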

The Gauss-Seidel function is also useful.

$$G_{i,j}^k = \frac{\lambda}{\mu + 4\lambda} \left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k + d_{x,i-1,j}^k - d_{x,i,j}^k + d_{y,i,j-1}^k - d_{y,i,j}^k - b_{x,i-1,j}^k + b_{x,i,j}^k - b_{y,i,j-1}^k + b_{y,i,j}^k \right) + \frac{\mu}{\mu + 4\lambda} f_{i,j}. \qquad (24)$$

Algorithm 3. The split Bregman algorithm of anisotropic TV denoising

Initialization: k = 0 , u 0 = 0 , b 0 = 0 .

While $\|u^k - u^{k-1}\| > tol$ do,

$u^{k+1} = G^k$, where $G$ is the Gauss-Seidel function,

$d_x^{k+1} = \mathrm{shrink}\left( \nabla_x u^{k+1} + b_x^k, \frac{1}{\lambda} \right)$,

$d_y^{k+1} = \mathrm{shrink}\left( \nabla_y u^{k+1} + b_y^k, \frac{1}{\lambda} \right)$,

$b_x^{k+1} = b_x^k + \left( \nabla_x u^{k+1} - d_x^{k+1} \right)$,

$b_y^{k+1} = b_y^k + \left( \nabla_y u^{k+1} - d_y^{k+1} \right)$,

k = k + 1 ,

End while.
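Algorithm 3 can be sketched as follows. Boundary handling is not specified in the paper, so we assume replicated edges in the Gauss-Seidel sweep and zero-padded forward differences for the gradients; these choices are ours.

```python
import numpy as np

def shrink(x, a):
    return np.sign(x) * np.maximum(np.abs(x) - a, 0.0)

def split_bregman_aniso(f, mu=0.1, lam=0.2, n_iter=50):
    M, N = f.shape
    u = f.astype(float).copy()
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    bx = np.zeros((M, N)); by = np.zeros((M, N))
    for _ in range(n_iter):
        # one Gauss-Seidel sweep of Eq. (24); edges replicated (our choice)
        for i in range(M):
            for j in range(N):
                up = u[i - 1, j] if i > 0 else u[i, j]
                down = u[i + 1, j] if i < M - 1 else u[i, j]
                left = u[i, j - 1] if j > 0 else u[i, j]
                right = u[i, j + 1] if j < N - 1 else u[i, j]
                dxm = dx[i - 1, j] if i > 0 else 0.0
                dym = dy[i, j - 1] if j > 0 else 0.0
                bxm = bx[i - 1, j] if i > 0 else 0.0
                bym = by[i, j - 1] if j > 0 else 0.0
                g = (down + up + right + left + dxm - dx[i, j] + dym - dy[i, j]
                     - bxm + bx[i, j] - bym + by[i, j])
                u[i, j] = (lam * g + mu * f[i, j]) / (mu + 4 * lam)
        # forward differences (zero at the last row/column)
        ux = np.zeros((M, N)); uy = np.zeros((M, N))
        ux[:-1, :] = u[1:, :] - u[:-1, :]
        uy[:, :-1] = u[:, 1:] - u[:, :-1]
        dx = shrink(ux + bx, 1.0 / lam)
        dy = shrink(uy + by, 1.0 / lam)
        bx = bx + ux - dx
        by = by + uy - dy
    return u
```

The default mu = 0.1 and lam = 0.2 match the parameter values used in the experiments below.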

2.5.2. Isotropic TV Denoising Problem

The problem of isotropic TV denoising is considered in [6] ,

$$(P_1)\qquad Is_1 := \min_u \left\{ \|\nabla u\|_2 + \frac{\mu}{2} \|u - f\|_2^2 \right\}. \qquad (25)$$

The problem ( P 1 ) is solved using a constraint equivalent problem ( P 2 ) :

$$(P_2)\qquad \begin{cases} \tilde{Is}_1 := \min\limits_{u,\, d_x,\, d_y} \|(d_x, d_y)\|_2 + \frac{\mu}{2} \|u - f\|_2^2 \\ \text{subject to } d_x = u_x,\ d_y = u_y. \end{cases} \qquad (26)$$

To solve the problem $(P_2)$, we solve the following unconstrained problem:

$$(P_3)\qquad \tilde{Is}_3 := \min_{u,\, d_x,\, d_y} \left\{ \|(d_x, d_y)\|_2 + \frac{\mu}{2} \|u - f\|_2^2 + \frac{\lambda}{2} \|d_x - u_x\|_2^2 + \frac{\lambda}{2} \|d_y - u_y\|_2^2 \right\}. \qquad (27)$$

The split Bregman algorithm can be used to tackle this last difficulty.

We give the following definitions:

$$s^k := \sqrt{ \left| \nabla_x u^k + b_x^k \right|^2 + \left| \nabla_y u^k + b_y^k \right|^2 }. \qquad (28)$$

Algorithm 4. The split Bregman algorithm of isotropic TV denoising

Initialization: k = 0 , u 0 = 0 , b 0 = 0 .

While $\|u^k - u^{k-1}\| > tol$ do,

$u^{k+1} = G^k$, where $G$ is the Gauss-Seidel function,

$d_x^{k+1} = \dfrac{s^k \lambda \left( \nabla_x u^k + b_x^k \right)}{s^k \lambda + 1}$,

$d_y^{k+1} = \dfrac{s^k \lambda \left( \nabla_y u^k + b_y^k \right)}{s^k \lambda + 1}$,

$b_x^{k+1} = b_x^k + \left( \nabla_x u^{k+1} - d_x^{k+1} \right)$,

$b_y^{k+1} = b_y^k + \left( \nabla_y u^{k+1} - d_y^{k+1} \right)$,

k = k + 1 ,

End while.
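Algorithm 4 can be sketched in the same way. In the $(d_x, d_y)$ update we substitute the generalized (vector) shrinkage step of Goldstein and Osher [5], which solves the coupled subproblem; boundary conventions are the same assumptions as in the anisotropic sketch.

```python
import numpy as np

def split_bregman_iso(f, mu=0.1, lam=0.2, n_iter=50):
    M, N = f.shape
    u = f.astype(float).copy()
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    bx = np.zeros((M, N)); by = np.zeros((M, N))
    for _ in range(n_iter):
        for i in range(M):                  # Gauss-Seidel sweep, as in (24)
            for j in range(N):
                up = u[i - 1, j] if i > 0 else u[i, j]
                down = u[i + 1, j] if i < M - 1 else u[i, j]
                left = u[i, j - 1] if j > 0 else u[i, j]
                right = u[i, j + 1] if j < N - 1 else u[i, j]
                dxm = dx[i - 1, j] if i > 0 else 0.0
                dym = dy[i, j - 1] if j > 0 else 0.0
                bxm = bx[i - 1, j] if i > 0 else 0.0
                bym = by[i, j - 1] if j > 0 else 0.0
                g = (down + up + right + left + dxm - dx[i, j] + dym - dy[i, j]
                     - bxm + bx[i, j] - bym + by[i, j])
                u[i, j] = (lam * g + mu * f[i, j]) / (mu + 4 * lam)
        ux = np.zeros((M, N)); uy = np.zeros((M, N))
        ux[:-1, :] = u[1:, :] - u[:-1, :]
        uy[:, :-1] = u[:, 1:] - u[:, :-1]
        s = np.sqrt((ux + bx) ** 2 + (uy + by) ** 2)   # Eq. (28)
        scale = np.maximum(s - 1.0 / lam, 0.0) / np.maximum(s, 1e-12)
        dx = scale * (ux + bx)             # generalized (vector) shrinkage
        dy = scale * (uy + by)
        bx = bx + ux - dx
        by = by + uy - dy
    return u
```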

3. Numerical Experimental Results

We present in this section our numerical results obtained with the following models: Tikhonov regularization, ROF, and anisotropic and isotropic TV denoising. Let X be the matrix that depicts an image of size m × n. We then used the Matlab command f = imnoise(X,'gaussian',sigma) to define our noisy image f, where sigma controls the Gaussian noise level. We used the values μ = 0.1, λ = 0.2 and the tolerance Tol = 10^{-5} in our studies. In our experiments, we implemented several restoration models, each aiming to remove noise from the image as well as possible. By calculating performance metrics for different sigma values, we try to present the best result. The results for the Tikhonov regularization, ROF, and anisotropic and isotropic TV denoising algorithms are in Tables 1-8.

In addition, in Tables 9-16, we evaluate the quality of the images restored by the Tikhonov regularization, ROF, and anisotropic and isotropic TV denoising models. We use the mean square error (MSE), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), image quality index (IQI), normalized cross-correlation (NK), average difference (AD), structural content (SC), maximum difference (MD), and normalized absolute error (NAE); the definitions of these quality measures are given in Figure 1.
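A sketch of a few of the simpler measures (MSE, PSNR, SNR, AD, MD, NAE) for 8-bit images; IQI, NK and SC are omitted, and the exact normalizations used in the paper's tables may differ:

```python
import numpy as np

def quality_metrics(x, y):
    # x: original image, y: restored image, both with values in [0, 255]
    x = x.astype(float); y = y.astype(float)
    err = x - y
    mse = np.mean(err ** 2)                           # mean square error
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2)) if mse > 0 else np.inf
    ad = np.mean(err)                                 # average difference
    md = np.max(np.abs(err))                          # maximum difference
    nae = np.sum(np.abs(err)) / np.sum(np.abs(x))     # normalized absolute error
    return {"MSE": mse, "PSNR": psnr, "SNR": snr, "AD": ad, "MD": md, "NAE": nae}
```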

In Figure 2, we did an experiment by taking the original Barbara image (without noise), then adding white Gaussian noise (sigma = 0.08).

A numerical comparison between the Tikhonov regularization restoration model and the ROF model, and between the anisotropic and isotropic TV denoising algorithms, for the same parameter sigma = 0.08, is shown in Figure 3 and Figure 4.

Table 1. Results for the ROF algorithm, sigma = 0.01.

Table 2. Results for the ROF algorithm, sigma = 0.08.

Table 3. Results for the ROF algorithm, sigma = 0.2.

Table 4. Results for the Tikhonov regularization algorithm, sigma = 0.01.

Table 5. Results for the Tikhonov regularization algorithm, sigma = 0.08.

Table 6. Results for the Tikhonov regularization algorithm, sigma = 0.2.

Table 7. Results for the anisotropic TV denoising algorithm, sigma = 0.08.

Table 8. Results for the isotropic TV denoising algorithm, sigma = 0.08.

Table 9. Performance metrics for ROF algorithm, sigma = 0.01.

Table 10. Performance metrics for ROF algorithm, sigma = 0.08.

Table 11. Performance metrics for ROF algorithm, sigma = 0.2.

Table 12. Performance metrics for the Tikhonov algorithm, sigma = 0.01.

Table 13. Performance metrics for the Tikhonov algorithm, sigma = 0.08.

Table 14. Performance metrics for the Tikhonov algorithm, sigma = 0.2.

Table 15. Performance metrics for the anisotropic TV denoising algorithm, sigma = 0.08.

Table 16. Performance metrics for the isotropic TV denoising algorithm, sigma = 0.08.

Figure 1. Quality measures.

In Figure 5, we did an experiment by taking the original girl image (without noise), then adding white Gaussian noise (sigma = 0.08).

Finally, in Figure 6 and Figure 7, we show comparisons and numerical results for the Tikhonov regularization model, the ROF model, and the anisotropic and isotropic TV denoising algorithms on a noisy girl image for the same parameter sigma = 0.08.

Figure 2. The original and noisy image Barbara for sigma = 0.08.

Figure 3. Denoised image Barbara by Tikhonov and ROF for sigma = 0.08.

Figure 4. Denoised image Barbara by anisotropic and isotropic TV for sigma = 0.08.

Figure 5. The original and noisy image girl for sigma = 0.08.

Figure 6. Denoised image girl by Tikhonov and ROF for sigma = 0.08.

Figure 7. Denoised image girl by anisotropic and isotropic TV for sigma = 0.08.

Remark

To quantify the restoration quality for a noisy image, we sometimes use quality measures. We denote by $x_{j,k}$ the original image and by $\hat{x}_{j,k}$ the restored image, with $M \times N$ the size of the images.

Program for calculating image quality measurements in MATLAB

4. Conclusion

In this paper, we have presented theoretical and numerical comparisons of different imaging algorithms for solving optimization problems. We look, among images that have been corrupted by additive Gaussian noise, for an image as near to the original as possible. Image denoising is a technique for restoring a noisy image after it has been captured. According to our experiments, and by calculating performance metrics for different sigma values, we can conclude that the ROF model gives better image quality than Tikhonov regularization, because the space BV allows discontinuities and thus preserves edges, at the cost of a possible staircase effect when restoring smooth images in applications where edges are not the main feature. We can also conclude that the anisotropic and isotropic TV denoising algorithms behave similarly: the smaller the sigma value, the better the restored image quality. Finally, it should be mentioned that all these methods aim to remove the parasitic information that has been added to an image.

Acknowledgements

We would like to thank the referees for some corrections which greatly improved the presentation of this paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] Bergounioux, M. (2008-2009) Quelques méthodes mathématiques pour le traitement d’image.
https://cursus.edu/fr/18910/quelques-methodes-mathematiques-pour-le-traitement-dimage
[2] Rudin, L., Osher, S. and Fatemi, E. (1992) Total Variation Based Noise Removal Algorithms. Physica D: Nonlinear Phenomena, 60, 259-268.
https://doi.org/10.1016/0167-2789(92)90242-F
[3] Chambolle, A. (2004) An Algorithm for Total Variation Minimization and Applications. Journal of Mathematical Imaging and Vision, 20, 89-97.
https://doi.org/10.1023/B:JMIV.0000011321.19549.88
[4] Meyer, Y. (March 2001) Oscillating Patterns in Image Processing and in Some Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures. American Mathematical Society, Boston, USA.
[5] Goldstein, T. and Osher, S. (2009) The Split Bregman Method for L1-Regularized Problems. SIAM Journal on Imaging Sciences, 2, 323-343.
https://doi.org/10.1137/080725891
[6] Bush, J. (June 10, 2011) Bregman Algorithms. Senior Thesis, University of California, Santa Barbara, Santa Barbara.

Copyright © 2023 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.