A Noise Suppression Method for Speech Signal by Jointly Using Bayesian Estimation and Fuzzy Theory

Akira Ikuta^{1}, Hisako Orimoto^{1}, Kouji Hasegawa^{2}

^{1}Department of Management Information Systems, Prefectural University of Hiroshima, Hiroshima, Japan.

^{2}Western Region Industrial Research Center, Hiroshima Prefectural Technology Research Institute, Kure, Japan.

**DOI:** 10.4236/jsea.2021.1412037

Speech recognition systems have been applied to inspection and maintenance operations in industrial factories, to recording and reporting routines at construction sites, and to other settings where hand-writing is difficult. In such circumstances, countermeasures against surrounding noise are indispensable. In this study, a new method to remove noise from actual speech signals was proposed, using Bayesian estimation with the aid of bone-conducted speech and fuzzy theory. More specifically, by introducing Bayes’ theorem based on the observation of air-conducted speech contaminated by surrounding background noise, a new type of noise removal algorithm was theoretically derived. In the proposed noise suppression method, the bone-conducted speech signal, whose high-frequency components are attenuated, was regarded as fuzzy observation data, and a stochastic model for the bone-conducted speech was derived by applying the probability measure of fuzzy events. The proposed method was applied to speech signals measured in a real environment with low SNR, and better results were obtained than with an algorithm based on observation of only air-conducted speech.


Ikuta, A., Orimoto, H. and Hasegawa, K. (2021) A Noise Suppression Method for Speech Signal by Jointly Using Bayesian Estimation and Fuzzy Theory. *Journal of Software Engineering and Applications*, **14**, 631-645. doi: 10.4236/jsea.2021.1412037.

1. Introduction

Speech recognition systems have been applied in various fields, for example, to inspection and maintenance operations in industrial factories and at construction sites, where hand-writing is difficult. For speech recognition in such circumstances, methods to suppress surrounding noise are indispensable.

Previously reported methods for noise reduction in speech recognition can be classified into two categories: those based on a single microphone [1] [2] and those using a microphone array [3]. Since the latter requires *a priori* information on the number of noise sources, and more microphones than noise sources are needed in the multi-noise-source case, this category demands large-scale systems. The former, based on a single microphone, is therefore more advantageous [4] [5]. For such single-microphone noise suppression of speech signals, many algorithms applying the Kalman filter have been proposed [6] [7] [8] [9]. However, the Kalman filter is originally based on the assumption of Gaussian white noise [10], whereas actual noises show complex fluctuation forms with non-Gaussian and non-white properties.

From the above viewpoint, in our previously reported study, a noise suppression algorithm for actual speech signals that does not require the assumption of Gaussian white noise was proposed [11]. The method can be applied to actual complex situations where both the noise statistics and the fluctuation form of the speech signal are unknown. By applying the algorithm to real speech signals with several kinds of noise, its effectiveness was experimentally confirmed in comparison with the Kalman filter.

Furthermore, signal processing methods to remove noise from actual speech signals have been proposed that jointly use measured bone- and air-conducted speech signals [12] [13]. However, these previous methods introduced a simple additive model of the original speech signal and the surrounding noise for the air-conducted speech observation. In addition, the derived algorithms were applied only to signals mixed with noise on a computer, not to signals measured in a real environment in the presence of noise.

In this study, a new noise suppression method for speech signals is proposed by using Bayes’ theorem, employing a posterior distribution based on the air-conducted speech observation contaminated by surrounding noise. In the proposed algorithm, in order to improve the accuracy of speech signal estimation, an expansion expression of the conditional probability density function, reflecting all the linear and non-linear correlation information between the original speech signal and the air-conducted speech observation, is adopted as the model of the speech observation. A probability distribution with parameters estimated from the bone-conducted speech is then adopted as the prior distribution. Furthermore, the proposed algorithm is applied to signals measured in a real environment in the presence of noise.

Though the bone-conducted speech signal is a kind of solid-propagation sound that is less affected by surrounding noise, the high-frequency components of the signal are attenuated through the propagation process [14]. By regarding the bone-conducted speech signal with reduced high-frequency components as fuzzy data and applying the probability measure of fuzzy events [15], a new simplified noise suppression method is derived that reflects both the air- and bone-conducted speech signals.

The effectiveness of the proposed method is confirmed by applying it to bone- and air-conducted speech measured in a real environment in the presence of surrounding noise.

2. Theoretical Consideration

2.1. Stochastic Model for Air- and Bone-Conducted Speech Signals by Introducing Fuzzy Theory

In an actual environment with surrounding noise, let ${x}_{k}$, ${y}_{k}$ and ${z}_{k}$ be the original speech signal and the observations of air- and bone-conducted speech signals, respectively, at a discrete time *k*. The observation ${y}_{k}$ is contaminated by a surrounding noise ${v}_{k}$. In our previous studies, a simple additive model was considered for the air-conducted speech observation ${y}_{k}$ [12] [13]. In this study, in order to improve the accuracy of estimation of the speech signal ${x}_{k}$, an expansion expression of the conditional probability density function $P\left({y}_{k}|{x}_{k}\right)$ [11], reflecting all the linear and non-linear correlation information between ${x}_{k}$ and ${y}_{k}$, is adopted as the model of the air-conducted speech observation.

$\begin{array}{c}P\left({y}_{k}|{x}_{k}\right)=P\left({x}_{k},{y}_{k}\right)/P\left({x}_{k}\right)\\ =P\left({y}_{k}\right){\displaystyle {\sum}_{r=0}^{\infty}{\displaystyle {\sum}_{s=0}^{\infty}{A}_{rs}{\theta}_{r}^{\left(1\right)}\left({x}_{k}\right){\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)}}\end{array}$ (1)

with

${A}_{rs}\equiv \langle {\theta}_{r}^{\left(1\right)}\left({x}_{k}\right){\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)\rangle $, (2)

where $\langle \text{\hspace{0.05em}}\rangle $ denotes the averaging operation on variables.
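For illustration, the averaging operation in Equation (2) can be carried out as a sample mean over paired data. The sketch below makes the simplifying assumption (not made by the paper, which uses general orthonormal polynomials) that ${x}_{k}$ and ${y}_{k}$ are close to Gaussian, so that the orthonormal polynomials reduce to normalized Hermite polynomials of the standardized variables; the synthetic signals are hypothetical stand-ins for the measured data:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def norm_hermite(i, u):
    """Normalized probabilists' Hermite polynomial He_i(u) / sqrt(i!)."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return hermeval(u, c) / np.sqrt(factorial(i))

def estimate_A(x, y, R, S):
    """Sample estimate of A_rs = <theta_r(x) theta_s(y)> in Equation (2),
    approximating the orthonormal polynomials theta by normalized Hermite
    polynomials of the standardized variables (a Gaussian-weight assumption)."""
    ux = (x - x.mean()) / x.std()
    uy = (y - y.mean()) / y.std()
    A = np.empty((R + 1, S + 1))
    for r in range(R + 1):
        for s in range(S + 1):
            A[r, s] = np.mean(norm_hermite(r, ux) * norm_hermite(s, uy))
    return A

rng = np.random.default_rng(0)
x = rng.normal(size=20000)               # stand-in for the speech signal
y = x + 0.5 * rng.normal(size=20000)     # noisy air-conducted observation
A = estimate_A(x, y, 2, 2)
# A[0, 0] is exactly 1, and A[1, 1] approximates the correlation of x and y
```

Under this Gaussian assumption the first-order coefficient ${A}_{11}$ reduces to the ordinary correlation coefficient; the higher-order coefficients capture the non-linear dependence.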

As the probability density functions of ${x}_{k}$ and ${y}_{k}$, which show non-Gaussian distributions, the following statistical orthonormal expansion series expressions are adopted.

$P\left({x}_{k}\right)=N\left({x}_{k};{\mu}_{x},{\sigma}_{x}^{2}\right){\displaystyle {\sum}_{i=0}^{\infty}{B}_{i}\frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{x}_{k}-{\mu}_{x}}{{\sigma}_{x}}\right)}$, (3)

$P\left({y}_{k}\right)=N\left({y}_{k};{\mu}_{y},{\sigma}_{y}^{2}\right){\displaystyle {\sum}_{i=0}^{\infty}{C}_{i}\frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{y}_{k}-{\mu}_{y}}{{\sigma}_{y}}\right)}$ (4)

with

${\mu}_{x}\equiv \langle {x}_{k}\rangle $, ${\sigma}_{x}^{2}\equiv \langle {\left({x}_{k}-{\mu}_{x}\right)}^{2}\rangle $,

${B}_{i}\equiv \langle \frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{x}_{k}-{\mu}_{x}}{{\sigma}_{x}}\right)\rangle $, ${B}_{0}=1$, ${B}_{1}={B}_{2}=0$,

${C}_{i}\equiv \langle \frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{y}_{k}-{\mu}_{y}}{{\sigma}_{y}}\right)\rangle $, ${C}_{0}=1$, ${C}_{1}={C}_{2}=0$,

$N\left(x;\mu ,{\sigma}^{2}\right)\equiv \frac{1}{\sqrt{2\pi {\sigma}^{2}}}\mathrm{exp}\left\{-\frac{{\left(x-\mu \right)}^{2}}{2{\sigma}^{2}}\right\}$, (5)

where
${H}_{i}\left(\text{\hspace{0.05em}}\right)$ is a Hermite polynomial with *i*th order. Functions
${\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)$ and
${\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)$ are orthonormal polynomials having weighting functions
$P\left({x}_{k}\right)$ and
$P\left({y}_{k}\right)$, respectively. These orthonormal polynomials can be decomposed into linearly independent series as

${\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)={\displaystyle {\sum}_{i=0}^{r}{\lambda}_{ri}^{\left(1\right)}\frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{x}_{k}-{\mu}_{x}}{{\sigma}_{x}}\right)}$, (6)

${\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)={\displaystyle {\sum}_{i=0}^{s}{\lambda}_{si}^{\left(2\right)}\frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{y}_{k}-{\mu}_{y}}{{\sigma}_{y}}\right)}$. (7)

The coefficients ${\lambda}_{ri}^{\left(1\right)}$ and ${\lambda}_{si}^{\left(2\right)}$ are calculated beforehand by using Schmidt’s orthogonalization algorithm [16]. The expansion coefficients ${A}_{rs}$ with orders $r\le R$, $s\le S$ can be obtained from the correlation relationship between the original speech signal ${x}_{k}$ and the noisy observation of air-conducted speech ${y}_{k}$. Since the original speech signal is unknown in the presence of noise, these coefficients have to be estimated on the basis of the observation ${y}_{k}$. Regarding the expansion coefficients ${A}_{rs}$ as an unknown parameter vector $a$ :

$a\equiv \left({a}_{11},\cdots ,{a}_{R1},{a}_{12},\cdots ,{a}_{R2},\cdots ,{a}_{1S},\cdots ,{a}_{RS}\right)$,

${a}_{rs}\equiv {A}_{rs}$, $\left(r=1,2,\cdots ,R;s=1,2,\cdots ,S\right)$, (8)

the following simple dynamical model is introduced for the simultaneous estimation of the parameters and the signal ${x}_{k}$ :

${a}_{k+1}={a}_{k}$. (9)
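The expansion coefficients ${B}_{i}$ in Equation (3) (and likewise ${C}_{i}$ in Equation (4)) are expectations of normalized Hermite polynomials, so they can be estimated as sample averages. A minimal sketch with hypothetical non-Gaussian data; note that using the sample mean and the biased sample standard deviation makes ${B}_{0}=1$, ${B}_{1}={B}_{2}=0$ hold by construction, as stated below Equation (4):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

def hermite_expansion_coeffs(x, imax):
    """Estimate B_i = <He_i((x - mu)/sigma) / sqrt(i!)> of Equation (3) from
    samples, using the sample mean and (biased) standard deviation, so that
    B_0 = 1 and B_1 = B_2 = 0 hold by construction."""
    u = (x - x.mean()) / x.std()
    B = np.empty(imax + 1)
    for i in range(imax + 1):
        c = np.zeros(i + 1)
        c[i] = 1.0
        B[i] = np.mean(hermeval(u, c)) / np.sqrt(factorial(i))
    return B

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.0, size=50000)   # a skewed, non-Gaussian signal
B = hermite_expansion_coeffs(x, 4)
# B[0] = 1 and B[1] = B[2] = 0 by construction; B[3] > 0 reflects the skewness
```

The higher-order coefficients ${B}_{3},{B}_{4},\cdots $ are what distinguishes the expansion series from a plain Gaussian model.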

Next, in order to express the relationship between the original speech signal and the bone-conducted speech, the bone-conducted speech is regarded as fuzzy data, and the conditional probability distribution function $P\left({x}_{k}|{z}_{k}\right)$ is obtained by applying the probability measure of fuzzy events [15] to Equation (1), as follows.

$\begin{array}{l}P\left({x}_{k}|{z}_{k}\right)=P\left({x}_{k},{z}_{k}\right)/P\left({z}_{k}\right)\\ ={\displaystyle \int {m}_{{\stackrel{\xaf}{y}}_{k}}\left({y}_{k}\right)P\left({x}_{k},{y}_{k}\right)\text{d}{y}_{k}}/{\displaystyle \int {m}_{{\stackrel{\xaf}{y}}_{k}}\left({y}_{k}\right)P\left({y}_{k}\right)\text{d}{y}_{k}}\left(\equiv N\left({x}_{k},{z}_{k}\right)/D\left({z}_{k}\right)\right)\end{array}$ (10)

where ${m}_{{\stackrel{\xaf}{y}}_{k}}\left({y}_{k}\right)$ is a membership function associated with the bone-conducted speech ${z}_{k}$. The Gaussian-type function

${m}_{{\stackrel{\xaf}{y}}_{k}}\left({y}_{k}\right)=\mathrm{exp}\left\{-\alpha {\left({y}_{k}-{\stackrel{\xaf}{y}}_{k}\right)}^{2}\right\}$, $\left({\stackrel{\xaf}{y}}_{k}\equiv a+b{z}_{k}\right)$, (11)

where *a* and *b* are constants and $\alpha \left(>0\right)$ is a parameter, is adopted. Accordingly, by considering $P\left({x}_{k},{y}_{k}\right)$ in Equation (1), $P\left({y}_{k}\right)$ in Equation (4), and the membership function in Equation (11), the numerator of Equation (10) can be expressed as follows:

$\begin{array}{c}N\left({x}_{k},{z}_{k}\right)=P\left({x}_{k}\right)\frac{{\text{e}}^{{K}_{3}}}{\sqrt{2{K}_{1}{\sigma}_{y}^{2}}}{{\displaystyle \int}}^{\text{}}\left(\frac{1}{\sqrt{\pi /{K}_{1}}}\right)\mathrm{exp}\left\{-\frac{{\left({y}_{k}-{K}_{2}\right)}^{2}}{1/{K}_{1}}\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.05em}}\cdot {\displaystyle {\sum}_{i=0}^{\infty}{C}_{i}}\frac{1}{\sqrt{i!}}{H}_{i}\left(\frac{{y}_{k}-{\mu}_{y}}{{\sigma}_{y}}\right){\displaystyle {\sum}_{r=0}^{\infty}{\displaystyle {\sum}_{s=0}^{\infty}{A}_{rs}{\theta}_{r}^{\left(1\right)}\left({x}_{k}\right){\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)\text{d}{y}_{k}}}\end{array}$ (12)

with

${K}_{1}\equiv \left(2\alpha {\sigma}_{y}^{2}+1\right)/\left(2{\sigma}_{y}^{2}\right)$, ${K}_{2}\equiv \left(2\alpha {\sigma}_{y}^{2}{\stackrel{\xaf}{y}}_{k}+{\mu}_{y}\right)/\left(2\alpha {\sigma}_{y}^{2}+1\right)$,

${K}_{3}\equiv {K}_{1}\left({K}_{2}^{2}-\frac{2\alpha {\sigma}_{y}^{2}{\stackrel{\xaf}{y}}_{k}^{2}+{\mu}_{y}^{2}}{2\alpha {\sigma}_{y}^{2}+1}\right)$. (13)
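The constants of Equation (13) are a direct function of the membership parameter $\alpha $, the prior moments, and the regressed bone-conducted value ${\stackrel{\xaf}{y}}_{k}$. A transcription with hypothetical numeric values, including the sanity check that a flat membership function ($\alpha \to 0$) carries no information:

```python
def k_constants(alpha, sigma_y2, mu_y, ybar):
    """Constants K1, K2, K3 of Equation (13); sigma_y2 is the variance of y."""
    K1 = (2 * alpha * sigma_y2 + 1) / (2 * sigma_y2)
    K2 = (2 * alpha * sigma_y2 * ybar + mu_y) / (2 * alpha * sigma_y2 + 1)
    K3 = K1 * (K2 ** 2 - (2 * alpha * sigma_y2 * ybar ** 2 + mu_y ** 2)
               / (2 * alpha * sigma_y2 + 1))
    return K1, K2, K3

# alpha -> 0: flat membership, so the bone-conducted value carries no
# information and the prior moments are recovered (K2 = mu_y, K3 = 0)
K1, K2, K3 = k_constants(0.0, sigma_y2=2.0, mu_y=0.5, ybar=3.0)
# large alpha: the membership concentrates on ybar, so K2 approaches ybar
_, K2_big, _ = k_constants(1e6, sigma_y2=2.0, mu_y=0.5, ybar=3.0)
```

${K}_{2}$ thus interpolates between the prior mean ${\mu}_{y}$ and the fuzzy observation ${\stackrel{\xaf}{y}}_{k}$, weighted by $\alpha $.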

After considering the equality on Hermite polynomial:

${H}_{i}\left(\frac{{y}_{k}-{\mu}_{y}}{{\sigma}_{y}}\right)={\displaystyle {\sum}_{j=0}^{i}{d}_{ij}{H}_{j}\left(\frac{{y}_{k}-{K}_{2}}{\sqrt{1/2{K}_{1}}}\right)}$, (14)

where ${d}_{ij}$ are expansion coefficients reflecting bone-conducted speech signal, and using the orthonormal condition:

${\displaystyle \int N\left({y}_{k};{K}_{2},1/2{K}_{1}\right){H}_{j}\left(\frac{{y}_{k}-{K}_{2}}{\sqrt{1/2{K}_{1}}}\right){H}_{{j}^{\prime}}\left(\frac{{y}_{k}-{K}_{2}}{\sqrt{1/2{K}_{1}}}\right)\text{d}{y}_{k}}=j!\cdot {\delta}_{j{j}^{\prime}}$, (15)

the integral in Equation (12) can be calculated. Thus, the following expression is derived

$N\left({x}_{k},{z}_{k}\right)=P\left({x}_{k}\right)\frac{{\text{e}}^{{K}_{3}}}{\sqrt{2{K}_{1}{\sigma}_{y}^{2}}}{\displaystyle {\sum}_{i=0}^{\infty}\frac{1}{\sqrt{i!}}{C}_{i}}{\displaystyle {\sum}_{r=0}^{R}{\displaystyle {\sum}_{s=0}^{S}{F}_{si}\left({z}_{k}\right){a}_{rs,k}{\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)}}$, (16)

${F}_{si}\left({z}_{k}\right)\equiv {\displaystyle {\sum}_{t=0}^{s}{\displaystyle {\sum}_{j=0}^{\mathrm{min}\left\{i,t\right\}}{\lambda}_{st}^{\left(2\right)}\frac{1}{\sqrt{t!}}{d}_{ij}{d}_{tj}j!}}$. (17)
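The orthonormal condition in Equation (15) can be verified numerically by Gauss-Hermite quadrature after the change of variable $u=\left({y}_{k}-{K}_{2}\right)/\sqrt{1/2{K}_{1}}$, which turns the integral into a standard-normal-weighted inner product; a sketch:

```python
import numpy as np
from math import sqrt, pi
from numpy.polynomial.hermite_e import hermeval, hermegauss

def hermite_inner(j, jp, nodes=40):
    """Evaluate the integral of Equation (15) after the change of variable
    u = (y_k - K2)/sqrt(1/2K1), i.e. the Gaussian-weighted inner product of
    He_j and He_j'.  hermegauss integrates against the weight exp(-u^2/2),
    so the 1/sqrt(2*pi) normalization of the Gaussian density is applied
    at the end."""
    u, w = hermegauss(nodes)
    cj = np.zeros(j + 1); cj[j] = 1.0
    cjp = np.zeros(jp + 1); cjp[jp] = 1.0
    return np.sum(w * hermeval(u, cj) * hermeval(u, cjp)) / sqrt(2 * pi)

# hermite_inner(j, j) equals j!, and hermite_inner(j, j') vanishes for j != j'
```

The quadrature is exact here because a 40-node rule integrates polynomials up to degree 79 against the Gaussian weight.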

Furthermore, through the similar calculation process, the denominator of Equation (10) can be derived as follows:

$D\left({z}_{k}\right)=\frac{{\text{e}}^{{K}_{3}}}{\sqrt{2{K}_{1}{\sigma}_{y}^{2}}}G\left({z}_{k}\right)$, $G\left({z}_{k}\right)\equiv {\displaystyle {\sum}_{i=0}^{\infty}\frac{1}{\sqrt{i!}}{C}_{i}{d}_{i0}}$. (18)

Therefore, by substituting Equations (16) and (18) into Equation (10), the conditional probability distribution function $P\left({x}_{k}|{z}_{k}\right)$ can be expressed explicitly.

2.2. Derivation of Noise Suppression Algorithm Based on Bayesian Estimation

To derive an estimation algorithm for the speech signal ${x}_{k}$, Bayes’ theorem for the conditional probability distribution [17] is first considered. Since the parameter $a$ is also unknown, the conditional joint probability distribution of ${x}_{k}$ and ${a}_{k}$ is expressed as

$P\left({x}_{k},{a}_{k}|{Y}_{k}\right)=P\left({x}_{k},{a}_{k},{y}_{k}|{Y}_{k-1}\right)/P\left({y}_{k}|{Y}_{k-1}\right)$, (19)

where ${Y}_{k}\left(\equiv \left\{{y}_{1},{y}_{2},\cdots ,{y}_{k}\right\}\right)$ is a set of air-conducted speech data up to time *k*. By expanding the conditional joint probability distribution $P\left({x}_{k},{a}_{k},{y}_{k}|{Y}_{k-1}\right)$ in a statistical orthogonal expansion series on the basis of the well-known Gaussian distribution and calculating the conditional expectation, the mean estimates of ${x}_{k}$ and ${a}_{rs,k}$ can be derived as follows:

$\begin{array}{l}{\stackrel{^}{x}}_{k}\equiv \langle {x}_{k}|{Y}_{k}\rangle \\ ={\displaystyle {\sum}_{n=0}^{\infty}\left\{{B}_{00n}{E}_{00}^{10}+{B}_{10n}{E}_{10}^{10}\right\}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}/{\displaystyle {\sum}_{n=0}^{\infty}{B}_{00n}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}\end{array}$ (20)

$\begin{array}{l}{\stackrel{^}{a}}_{rs,k}\equiv \langle {a}_{rs,k}|{Y}_{k}\rangle \\ ={\displaystyle {\sum}_{n=0}^{\infty}\left\{{B}_{00n}{E}_{00}^{01}+{B}_{01n}{E}_{01}^{01}\right\}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}/{\displaystyle {\sum}_{n=0}^{\infty}{B}_{00n}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}\end{array}$ (21)

with

${E}_{00}^{10}={x}_{k}^{*}\left(\equiv \langle {x}_{k}|{Y}_{k-1}\rangle \right)$, ${E}_{10}^{10}=\sqrt{{\Gamma}_{{x}_{k}}}$, ${\text{\Gamma}}_{{x}_{k}}\equiv \langle {\left({x}_{k}-{x}_{k}^{*}\right)}^{2}|{Y}_{k-1}\rangle $,

${E}_{00}^{01}={a}_{rs,k}^{*}\left(\equiv \langle {a}_{rs,k}|{Y}_{k-1}\rangle \right)$, ${E}_{01}^{01}=\sqrt{{\text{\Gamma}}_{{a}_{rs,k}}}$, ${\text{\Gamma}}_{{a}_{rs,k}}\equiv \langle {\left({a}_{rs,k}-{a}_{rs,k}^{*}\right)}^{2}|{Y}_{k-1}\rangle $,

${y}_{k}^{*}\equiv \langle {y}_{k}|{Y}_{k-1}\rangle $, ${\Omega}_{k}\equiv \langle {\left({y}_{k}-{y}_{k}^{*}\right)}^{2}|{Y}_{k-1}\rangle $,

${B}_{lmn}\equiv \langle \frac{1}{\sqrt{l!}}{H}_{l}\left(\frac{{x}_{k}-{x}_{k}^{*}}{\sqrt{{\Gamma}_{{x}_{k}}}}\right){\displaystyle {\prod}_{r=0}^{R}{\displaystyle {\prod}_{s=0}^{S}\frac{1}{\sqrt{{m}_{rs}!}}{H}_{{m}_{rs}}\left(\frac{{a}_{rs,k}-{a}_{rs,k}^{*}}{\sqrt{{\Gamma}_{{a}_{rs,k}}}}\right)}}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)|{Y}_{k-1}\rangle $. (22)

Furthermore, the variance estimate of ${a}_{rs,k}$ is derived as follows:

$\begin{array}{l}{P}_{{a}_{rs,k}}\equiv \langle {\left({a}_{rs,k}-{\stackrel{^}{a}}_{rs,k}\right)}^{2}|{Y}_{k}\rangle \\ ={\displaystyle {\sum}_{n=0}^{\infty}\left\{{B}_{00n}{E}_{00}^{02}+{B}_{01n}{E}_{01}^{02}+{B}_{02n}{E}_{02}^{02}\right\}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}/{\displaystyle {\sum}_{n=0}^{\infty}{B}_{00n}\frac{1}{\sqrt{n!}}{H}_{n}\left(\frac{{y}_{k}-{y}_{k}^{*}}{\sqrt{{\Omega}_{k}}}\right)}\end{array}$ (23)

with

${E}_{00}^{02}={\Gamma}_{{a}_{rs,k}}+{\left({a}_{rs,k}^{*}-{\stackrel{^}{a}}_{rs,k}\right)}^{2}$, ${E}_{01}^{02}=2\sqrt{{\Gamma}_{{a}_{rs,k}}}\left({a}_{rs,k}^{*}-{\stackrel{^}{a}}_{rs,k}\right)$, ${E}_{02}^{02}=\sqrt{2}{\Gamma}_{{a}_{rs,k}}$. (24)

Using Equation (1) and the orthonormal property of ${\theta}_{s}^{\left(2\right)}\left({y}_{k}\right)$, the variables ${y}_{k}^{*}$ and ${\Omega}_{k}$ in Equations (20), (21) and (23) can be calculated as follows:

$\begin{array}{c}{y}_{k}^{*}=\langle {\displaystyle \int {y}_{k}P\left({y}_{k}|{x}_{k}\right)\text{d}{y}_{k}}|{Y}_{k-1}\rangle \\ =\langle {\displaystyle {\sum}_{r=0}^{\infty}{\displaystyle {\sum}_{s=0}^{1}{e}_{1s}{A}_{rs}{\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)}}|{Y}_{k-1}\rangle \\ ={\displaystyle {\sum}_{r=0}^{R}{\displaystyle {\sum}_{s=0}^{1}{e}_{1s}{a}_{rs,k}^{*}\langle {\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)|{Y}_{k-1}\rangle}}\end{array}$ (25)

$\begin{array}{c}{\Omega}_{k}=\langle {\displaystyle \int {\left({y}_{k}-{y}_{k}^{*}\right)}^{2}P\left({y}_{k}|{x}_{k}\right)\text{d}{y}_{k}}|{Y}_{k-1}\rangle \\ ={\displaystyle {\sum}_{r=0}^{R}{\displaystyle {\sum}_{s=0}^{2}{e}_{2s}{a}_{rs,k}^{*}\langle {\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)|{Y}_{k-1}\rangle}}\end{array}$ (26)

with

${e}_{10}={\mu}_{y}$, ${e}_{11}={\sigma}_{y}$,

${e}_{20}={f}_{20}-\left(\frac{{f}_{21}}{{\lambda}_{11}^{\left(2\right)}}-\frac{{f}_{22}}{{\lambda}_{11}^{\left(2\right)}{\lambda}_{22}^{\left(2\right)}}{\lambda}_{21}^{\left(2\right)}\right){\lambda}_{10}^{\left(2\right)}-\frac{{f}_{22}}{{\lambda}_{22}^{\left(2\right)}}{\lambda}_{20}^{\left(2\right)}$,

${e}_{21}=\frac{{f}_{21}}{{\lambda}_{11}^{\left(2\right)}}-\frac{{f}_{22}}{{\lambda}_{11}^{\left(2\right)}{\lambda}_{22}^{\left(2\right)}}{\lambda}_{21}^{\left(2\right)}$, ${e}_{22}=\frac{{f}_{22}}{{\lambda}_{22}^{\left(2\right)}}$,

${f}_{20}={\left({\mu}_{y}-{y}_{k}^{*}\right)}^{2}+{\sigma}_{y}^{2}$, ${f}_{21}=2{\sigma}_{y}\left({\mu}_{y}-{y}_{k}^{*}\right)$, ${f}_{22}=\sqrt{2}{\sigma}_{y}^{2}$. (27)

Furthermore, by considering Equations (10), (16) and (18) and the orthonormal property of ${\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)$, the variables ${x}_{k}^{*}$ and ${\Gamma}_{{x}_{k}}$ in Equation (22) and the conditional expectations in Equations (25) and (26) can be calculated as follows:

$\begin{array}{c}{x}_{k}^{*}=\langle {\displaystyle \int {x}_{k}P\left({x}_{k}|{z}_{k}\right)\text{d}{x}_{k}}|{Y}_{k-1}\rangle \\ ={\displaystyle {\sum}_{i=0}^{\infty}\frac{1}{\sqrt{i!}}{C}_{i}}{\displaystyle {\sum}_{r=0}^{1}{\displaystyle {\sum}_{s=0}^{S}{h}_{1r}{F}_{si}\left({z}_{k}\right){a}_{rs,k}^{*}/G\left({z}_{k}\right)}}\end{array}$ (28)

$\begin{array}{c}{\Gamma}_{{x}_{k}}=\langle {\displaystyle \int {\left({x}_{k}-{x}_{k}^{*}\right)}^{2}P\left({x}_{k}|{z}_{k}\right)\text{d}{x}_{k}}|{Y}_{k-1}\rangle \\ ={\displaystyle {\sum}_{i=0}^{\infty}\frac{1}{\sqrt{i!}}{C}_{i}}{\displaystyle {\sum}_{r=0}^{2}{\displaystyle {\sum}_{s=0}^{S}{h}_{2r}{F}_{si}\left({z}_{k}\right){a}_{rs,k}^{*}/G\left({z}_{k}\right)}}\end{array}$ (29)

$\begin{array}{c}\langle {\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)|{Y}_{k-1}\rangle =\langle {\displaystyle \int {\theta}_{r}^{\left(1\right)}\left({x}_{k}\right)P\left({x}_{k}|{z}_{k}\right)\text{d}{x}_{k}}|{Y}_{k-1}\rangle \\ ={\displaystyle {\sum}_{i=0}^{\infty}\frac{1}{\sqrt{i!}}{C}_{i}}{\displaystyle {\sum}_{s=0}^{S}{F}_{si}\left({z}_{k}\right){a}_{rs,k}^{*}/G\left({z}_{k}\right)}\end{array}$ (30)

with

${h}_{10}={\mu}_{x}$, ${h}_{11}={\sigma}_{x}$,

${h}_{20}={p}_{20}-\left(\frac{{p}_{21}}{{\lambda}_{11}^{\left(1\right)}}-\frac{{p}_{22}}{{\lambda}_{11}^{\left(1\right)}{\lambda}_{22}^{\left(1\right)}}{\lambda}_{21}^{\left(1\right)}\right){\lambda}_{10}^{\left(1\right)}-\frac{{p}_{22}}{{\lambda}_{22}^{\left(1\right)}}{\lambda}_{20}^{\left(1\right)}$,

${h}_{21}=\frac{{p}_{21}}{{\lambda}_{11}^{\left(1\right)}}-\frac{{p}_{22}}{{\lambda}_{11}^{\left(1\right)}{\lambda}_{22}^{\left(1\right)}}{\lambda}_{21}^{\left(1\right)}$, ${h}_{22}=\frac{{p}_{22}}{{\lambda}_{22}^{\left(1\right)}}$,

${p}_{20}={\left({\mu}_{x}-{x}_{k}^{*}\right)}^{2}+{\sigma}_{x}^{2}$, ${p}_{21}=2{\sigma}_{x}\left({\mu}_{x}-{x}_{k}^{*}\right)$, ${p}_{22}=\sqrt{2}{\sigma}_{x}^{2}$. (31)

Since Equations (28), (29) and (30) can be evaluated by measuring the bone-conducted speech ${z}_{k}$, no time transition model of ${x}_{k}$ is necessary. Therefore, the computation time of the proposed algorithm can be reduced compared with the previous one [12]. Furthermore, by considering Equation (9), the two parameters ${a}_{rs,k}^{*}$ and ${\Gamma}_{{a}_{rs,k}}$ in Equation (22) are given by the estimates of ${a}_{rs,k}$ at the discrete time $k-1$, as follows:

${a}_{rs,k}^{*}={\stackrel{^}{a}}_{rs,k-1}$, ${\Gamma}_{{a}_{rs,k}}={P}_{{a}_{rs,k-1}}$. (32)

Finally, considering Equations (1), (9) and (10), the expansion coefficients ${B}_{lmn}$ in the estimation algorithm of Equations (20), (21) and (23) are given by the measurement of the bone-conducted speech ${z}_{k}$ and the estimates of the parameters ${a}_{rs,k}$ at the discrete time $k-1$, through a calculation process similar to Equations (25)-(30). Therefore, recursive estimation of the speech signal ${x}_{k}$ can be achieved.
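Equations (9) and (32) express the core of the recursion: the prior for the parameters at time *k* is the posterior at time $k-1$. The full algorithm is considerably more involved, but this recursive structure can be illustrated with a simplified scalar Gaussian case (all numerical values are hypothetical, and the Gaussian correction step merely stands in for the expansion-based update of Equations (20)-(24)):

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = 1.5                               # the constant parameter of Equation (9)
obs_var = 0.5 ** 2
y = a_true + 0.5 * rng.normal(size=200)    # noisy scalar observations

a_hat, P = 0.0, 10.0                       # initial estimate and variance
for yk in y:
    # Equation (32): the prior at time k is the posterior at time k-1
    a_star, Gamma = a_hat, P
    # Gaussian correction step (a scalar Kalman-style stand-in for the
    # expansion-based update of Equations (20)-(24))
    gain = Gamma / (Gamma + obs_var)
    a_hat = a_star + gain * (yk - a_star)
    P = (1.0 - gain) * Gamma
# a_hat converges toward a_true and P shrinks as observations accumulate
```

The random-walk model of Equation (9) is what lets a single recursion estimate the constant coefficients ${A}_{rs}$ jointly with the time-varying signal.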

3. Application to Speech Signal in Real Environment

In order to confirm the practical usefulness of the proposed noise suppression algorithm, it was applied to speech signals in a real noise environment. While in the previous studies [12] [13] the noisy air-conducted speech was created on a computer by mixing noise with the original air-conducted speech signal measured in a noise-free environment, the algorithm proposed in this study was applied to signals measured in a real environment in the presence of actual noise. For female and male speech signals digitized with a sampling frequency of 10 kHz and 16-bit quantization, we estimated the speech signal based on the observation corrupted by additive noise.

More specifically, air-conducted speech was measured in a real environment in the presence of a white noise generated by a noise generator and an actual machine noise. The bone-conducted speech was measured simultaneously with the air-conducted speech by use of an acceleration sensor. By roughly setting the amplitude of the noises at two levels, the proposed algorithm was applied to extremely difficult situations with low SNR (the ratio of the noise-free air-conducted speech signal to the noise, defined by $\text{SNR}=10{\mathrm{log}}_{10}\left({\displaystyle \sum {x}_{k}^{2}}/{\displaystyle \sum {v}_{k}^{2}}\right)$ ) of approximately −3 dB and −5 dB.
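The SNR definition above translates directly into code; a small helper with synthetic signals, since the measured data are not available:

```python
import numpy as np

def snr_db(x, v):
    """SNR = 10 log10(sum x_k^2 / sum v_k^2) of clean signal x over noise v."""
    return 10.0 * np.log10(np.sum(np.square(x)) / np.sum(np.square(v)))

x = np.ones(1000)                  # stand-in for the noise-free speech signal
v = np.sqrt(2.0) * np.ones(1000)   # noise with twice the signal energy
# snr_db(x, v) = 10 log10(1/2), approximately -3 dB: the harder of the two
# experimental conditions corresponds to noise energy about double the signal
```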

Using the observed bone-conducted speech and the noisy observation of air-conducted speech, the constants *a* and *b* are first calculated by introducing the linear regression model in Equation (11) and applying the least squares method to this model. Secondly, the parameter $\alpha $ of the membership function is obtained by calculating the standard deviation $\sigma $ of ${y}_{k}$ around ${\stackrel{\xaf}{y}}_{k}$, as $\alpha =2\sigma $, after assuming a Gaussian distribution for the deviation.
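This fitting step is ordinary least squares regression of the air-conducted observation on the bone-conducted speech; a sketch with synthetic, hypothetical pairs, following the $\alpha =2\sigma $ relation stated above:

```python
import numpy as np

def fit_membership(z, y):
    """Fit ybar = a + b*z by least squares (the regression model inside
    Equation (11)) and set the membership spread from the residual standard
    deviation as alpha = 2*sigma, following the text above."""
    b, a = np.polyfit(z, y, 1)          # slope b, intercept a
    sigma = np.std(y - (a + b * z))     # deviation of y_k around ybar_k
    return a, b, 2.0 * sigma

rng = np.random.default_rng(3)
z = rng.normal(size=5000)                          # bone-conducted samples
y = 0.2 + 1.3 * z + 0.1 * rng.normal(size=5000)    # air-conducted counterpart
a, b, alpha = fit_membership(z, y)
# a ~ 0.2, b ~ 1.3 and alpha ~ 0.2 (twice the residual deviation of 0.1)
```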

The observed air-conducted female speech signals contaminated by the white noise and the machine noise are shown in Figure 1 and Figure 2. For the male speech signal, the noisy air-conducted speech observations are shown in Figure 3 and Figure 4, respectively.

The estimated results by using the algorithm based on Equations (20)-(24) are shown in Figure 5 and Figure 6 for the female speech signal and in Figure 7 and Figure 8 for the male speech signal. For comparison, the estimated results of the female and male speech signals by using the estimation algorithm based on only the observation of air-conducted speech are shown in Figures 9-12.

By comparing Figures 5-8 with Figures 9-12, it is obvious that the proposed method can suppress the effects of white noise and real machine noise better than the method based on observation of only air-conducted speech.

The air-conducted female and male speech signals spoken by the same speakers in a different situation without any noise are shown in Figure 13 and Figure 14 as references. By comparing these speech signals measured in a noise-free environment with the results estimated by the proposed method and by the algorithm based on the observation of only air-conducted speech, the effectiveness of the proposed method is obvious. Furthermore, the computation time of the proposed method was reduced by 55.2% compared with the algorithm based on only the air-conducted observation, because the proposed method does not need to recursively calculate the estimate of the variance of ${x}_{k}$ based on the air-conducted speech ${y}_{k}$.

Figure 1. Observed female speech signal contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 2. Observed female speech signal contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 3. Observed male speech signal contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 4. Observed male speech signal contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 5. Estimated female speech signal by use of the proposed method based on observation contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 6. Estimated female speech signal by use of the proposed method based on observation contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 7. Estimated male speech signal by use of the proposed method based on observation contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 8. Estimated male speech signal by use of the proposed method based on observation contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 9. Estimated female speech signal by use of the method based on only air-conducted observation contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 10. Estimated female speech signal by use of the method based on only air-conducted observation contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 11. Estimated male speech signal by use of the method based on only air-conducted observation contaminated by white noise with $\text{SNR}\cong -3\text{\hspace{0.17em}}\text{dB}$.

Figure 12. Estimated male speech signal by use of the method based on only air-conducted observation contaminated by machine noise with $\text{SNR}\cong -5\text{\hspace{0.17em}}\text{dB}$.

Figure 13. Air-conducted female speech signal in a different situation without any noise.

Figure 14. Air-conducted male speech signal in a different situation without any noise.

4. Conclusions

In this paper, by regarding the bone-conducted speech signal with reduced high-frequency components as fuzzy data and applying the probability measure of fuzzy events, a new noise suppression method has been derived on the basis of Bayes’ theorem as the fundamental principle of estimation. Furthermore, the proposed algorithm has been applied to real speech signals contaminated by noise, measured in an actual environment with low SNR. The experiments revealed that better estimation results are obtained with the proposed algorithm than with the method based on only air-conducted observations.

The proposed approach is quite different from the traditional standard techniques. However, we are still in an early stage of development, and a number of practical problems are yet to be investigated in the future. These include: 1) application to a diverse range of speech signals in actual noise environment, 2) extension to cases with multi-noise sources, and 3) finding an optimal number of expansion terms for the expansion-based probability expressions adopted.

Acknowledgements

The authors are grateful to Ms. Yui Maeda of the Prefectural University of Hiroshima for her help during this study. This work was supported in part by the Grant-in-Aid for Scientific Research No. 19K04428 from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Yamashita, K. and Shimamura, T. (2005) Nonstationary Noise Estimation Using Low-Frequency Regions for Spectral Subtraction. IEEE Signal Processing Letters, 12, 465-468. https://doi.org/10.1109/LSP.2005.847864

[2] Plapous, C., Marro, C. and Scalart, P. (2006) Improved Signal-to-Noise Ratio Estimation for Speech Enhancement. IEEE Transactions on Speech and Audio Processing, 14, 2098-2108. https://doi.org/10.1109/TASL.2006.872621

[3] McCowan, I.A. and Bourlard, H. (2003) Microphone Array Post-Filter Based on Noise Field Coherence. IEEE Transactions on Speech and Audio Processing, 11, 709-716. https://doi.org/10.1109/TSA.2003.818212

[4] Kawamura, A., Fujii, K., Itoh, Y. and Fukui, Y. (2002) A Noise Reduction Method Based on Linear Prediction Analysis. IEICE Transactions on Fundamentals, J85-A, 415-423. https://doi.org/10.1109/ICASSP.2002.1004860

[5] Kawamura, A., Fujii, K. and Itoh, Y. (2005) A Noise Reduction Method Based on Linear Prediction with Variable Step-Size. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E88-A, 855-861. https://doi.org/10.1093/ietfec/e88-a.4.855

[6] Kim, W. and Ko, H. (2001) Noise Variance Estimation for Kalman Filtering of Noisy Speech. IEICE Transactions on Information and Systems, E84-D, 155-160.

[7] Li, H., Wang, X., Dai, B. and Lu, W. (2007) A Kalman Smoothing Algorithm for Speech Enhancement Based on the Properties of Vocal Tract Varying Slowly. Proceedings of the Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, Qingdao, 30 July-1 August 2007, 832-836.

[8] Tanabe, N., Furukawa, T. and Tsuji, S. (2008) Robust Noise Suppression Algorithm with the Kalman Filter Theory for White and Colored Disturbance. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E91-A, 818-829. https://doi.org/10.1093/ietfec/e91-a.3.818

[9] Jia, H., Zhang, X. and Jin, C. (2009) A Modified Speech Enhancement Algorithm Based on the Subspace. Proceedings of the 2009 Second International Symposium on Knowledge Acquisition and Modeling, Wuhan, 30 November-1 December 2009, 344-347. https://doi.org/10.1109/KAM.2009.19

[10] Candy, J.V. (2009) Bayesian Signal Processing: Classical, Modern, and Particle Filtering Methods. John Wiley & Sons Ltd., Hoboken. https://doi.org/10.1002/9780470430583

[11] Ikuta, A. and Orimoto, H. (2011) Adaptive Noise Suppression Algorithm for Speech Signal Based on Stochastic System Theory. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E94-A, 1618-1627. https://doi.org/10.1587/transfun.E94.A.1618

[12] Ikuta, A., Orimoto, H. and Gallagher, G. (2018) Noise Suppression Method by Jointly Using Bone- and Air-Conducted Speech Signals. Noise Control Engineering Journal, 66, 472-488. https://doi.org/10.3397/1/376640

[13] Orimoto, H., Ikuta, A. and Hasegawa, K. (2021) Speech Signal Detection Based on Bayesian Estimation by Observing Air-Conducted Speech under Existence of Surrounding Noise with the Aid of Bone-Conducted Speech. Intelligent Information Management, 13, 199-213. https://doi.org/10.4236/iim.2021.134011

[14] Shin, H.S., Kang, H.G. and Fingscheidt, T. (2012) Survey of Speech Enhancement Supported by a Bone Conduction Microphone. Proceedings of the 10th ITG Conference on Speech Communication, Braunschweig, 26-28 September 2012, 47-50.

[15] Ikuta, A. and Orimoto, H. (2014) Fuzzy Signal Processing of Sound and Electromagnetic Environment by Introducing Probability Measure of Fuzzy Events. Proceedings of the International Conference on Fuzzy Computation Theory and Applications, Rome, 22-24 October 2014, 5-13.

[16] Orimoto, H. and Ikuta, A. (2012) Prediction of Response Probability Distribution by Considering Additive Property of Energy and Evaluation in Decibel Scale for Sound Environment System with Unknown Structure. Transactions of the Society of Instrument and Control Engineers, 48, 830-836. https://doi.org/10.9746/sicetr.48.830

[17] Orimoto, H. and Ikuta, A. (2019) State Estimation for Sound Environment System with Nonlinear Observation Characteristics by Introducing Wide-Sense Particle Filter. Intelligent Information Management, 11, 87-101. https://doi.org/10.4236/iim.2019.116008


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.