Calibration Method of Magnetometer Based on BP Neural Network

*Journal of Computer and Communications*, **8**, 31-41. doi: 10.4236/jcc.2020.86004.

1. Introduction

The three-axis magnetometer has the advantages of small size, light weight, and low power consumption, and is often used to determine the attitude of Earth observation satellites. However, its application suffers from several problems: the three axes are not orthogonal, the sensitivities of the three axes are inconsistent, and constant drift and internal remanence are present, all of which lead to large errors between the measured and actual values [1]. The magnetometer must therefore be calibrated to reduce the measurement error. Moreover, due to launch vibration, temperature cycling, particle radiation, device aging, and other factors, the drift and installation matrix of the magnetometer change on orbit, producing time-varying errors. If the model parameters calibrated on the ground are still used, the measurement error becomes large; an on-orbit training method can eliminate the influence of these time-varying factors on measurement accuracy.

At present, the main calibration methods for magnetometers include the least-squares method [1] [2], the pseudo-inverse method [3], the LM algorithm [4], and the neural network method [5]. Most of these methods first establish an error model and then identify its parameters [1] [2] [3] [6]. Such approaches cannot eliminate the influence of unknown factors, so model errors remain. The neural-network-based algorithm [5] introduces no model error and achieves high accuracy.

The BP neural network has good model-learning ability: a three-layer BP neural network can fit any continuous function [7]. On-orbit calibration of the magnetometer requires good online performance, and training speed is the main factor affecting it. Commonly used training methods include the momentum method, gradient descent, the adaptive learning-rate method, and the Levenberg-Marquardt (LM) method. For networks with a small structure, the LM algorithm achieves the fastest training [8].

In reference [5], a neural network based on stochastic gradient descent training is designed, and the magnetometer is calibrated with on-orbit data. However, stochastic gradient descent is inefficient, converges slowly, and easily converges to a local optimum. Compared with stochastic gradient descent, the Levenberg-Marquardt backpropagation (LMBP) training algorithm converges faster. To improve the training speed and convergence of the neural network, an LMBP neural network is designed here to calibrate the magnetometer.

The rest of this paper is organized as follows. First, the structure of the BP neural network is designed according to the error characteristics of the magnetometer. Then the LM method is used to speed up network training, and convergence is verified by simulation. After that, a periodic training method is proposed to calibrate the magnetometer on orbit, and its effectiveness is verified by numerical simulation. Finally, concluding remarks are given.

2. Error Correction Model of Magnetometer

The measurement errors of the magnetometer mainly include non-orthogonality errors caused by the non-orthogonal axes, scale errors caused by inconsistent sensitivities, and the drift and noise caused by circuit characteristics. The magnetometer error model can be expressed by the following formula.

${B}_{m}={B}_{0}+\Delta B+AB$ (1)

where ${B}_{m}$ is the output data of the magnetometer, ${B}_{0}$ is the constant drift, $\Delta B$ is the measurement noise, $A$ is the coefficient matrix generated by the non-orthogonality error, and $B$ is the actual magnetic field data.
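As a minimal numeric sketch of Eq. (1), the following snippet simulates a reading $B_m = B_0 + \Delta B + AB$; the values of $B_0$ and $A$ here are illustrative placeholders, not calibrated parameters.

```python
import numpy as np

# Sketch of the error model B_m = B_0 + dB + A @ B (Eq. 1).
# B0 and A below are hypothetical example values, not real calibration data.
rng = np.random.default_rng(0)

B0 = np.array([50.0, -30.0, 10.0])          # hypothetical constant drift, nT
A = np.array([[1.02, -0.01,  0.00],         # hypothetical scale / non-orthogonality matrix
              [0.00,  1.01, -0.01],
              [0.01,  0.00,  1.03]])

def measure(B, noise_std=2.0):
    """Simulated magnetometer output B_m for a true field B (nT)."""
    dB = rng.normal(0.0, noise_std, size=3)  # zero-mean measurement noise
    return B0 + dB + A @ B

B = np.array([4000.0, 4000.0, 4000.0])      # example true field, nT
err = measure(B) - B                         # systematic error dominates the noise
print(err)
```

Even with only percent-level entries in $A - I$, the systematic error reaches tens to hundreds of nT at typical field strengths, which is why calibration is required.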

3. BP Neural Network

According to Kolmogorov’s theorem [7], there exists a three-layer BP neural network that can realize a mapping from any n-dimensional input to an m-dimensional output. Thus, the BP neural network usually adopts a single-hidden-layer structure, since such a network can approximate any continuous function, and the number of hidden nodes is the main parameter in optimizing the network structure.

3.1. Structure Design of BP Neural Network

The structure of the BP neural network is one of the main factors affecting its performance. Too many hidden nodes may cause overfitting, while too few may impair the prediction ability of the network [9]. The number of hidden nodes should therefore be as small as possible while still meeting the accuracy requirement. It is usually chosen from empirical formulas [10] [11], which makes it difficult to guarantee an optimal structure. In this paper, the network structure is optimized by a pruning method: a network with 7 hidden nodes is established first, and hidden nodes are then deleted as long as the network accuracy still meets the requirement. For magnetometer calibration with the BP neural network, the regression coefficients of networks with different numbers of hidden nodes are shown in Table 1.

The data in Table 1 show that three hidden nodes give the most compact structure that meets the accuracy requirement; with fewer than 3 hidden nodes, the network diverges. Therefore, this paper adopts a BP neural network with three hidden nodes in a single hidden layer. The network structure is shown in Figure 1.

Table 1. Regression coefficient of network with different hidden nodes.

Figure 1. Structure of neural network.
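The pruning procedure of this subsection can be sketched as follows. Here `train_and_score` is a hypothetical stand-in for training a 3-n-3 BP network and returning its regression coefficient R, and the toy score table only mimics the behavior reported in Table 1 (divergence below 3 nodes); it is not the actual training code.

```python
# Pruning sketch: start from 7 hidden nodes and shrink while the regression
# coefficient R still meets the accuracy requirement.

def prune_hidden_nodes(train_and_score, start=7, r_required=0.999):
    """Return the smallest hidden-node count whose score meets r_required."""
    best = start
    n = start
    while n > 1:
        if train_and_score(n - 1) >= r_required:
            n -= 1          # a smaller network still meets the requirement
            best = n
        else:
            break           # further pruning would violate the accuracy target
    return best

# Toy R values for illustration only, mimicking Table 1: accuracy collapses
# below 3 hidden nodes.
toy_scores = {1: 0.62, 2: 0.90, 3: 0.9995, 4: 0.9996,
              5: 0.9997, 6: 0.9997, 7: 0.9998}
n_opt = prune_hidden_nodes(toy_scores.get)
print(n_opt)
```

With these toy scores the loop stops at three hidden nodes, matching the structure adopted in the paper.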

3.2. Levenberg Marquardt Algorithm

The basic BP algorithm suffers from long training times, difficulty in determining the network structure, and a tendency to fall into local optima. To avoid these problems, several improved BP algorithms have been proposed. Compared with the basic BP algorithm, the LMBP training method has higher accuracy and stability and achieves a better training effect for the same network structure [8] [12] [13]. Therefore, the LMBP algorithm is adopted in this paper.

Let the network input vector be $x={\left({x}_{1},{x}_{2},\cdots ,{x}_{m}\right)}^{T}$, the output vector $y={\left({y}_{1},{y}_{2},\cdots ,{y}_{N}\right)}^{T}$, and the weight vector $\omega ={\left({\omega}_{1},{\omega}_{2},\cdots ,{\omega}_{n}\right)}^{T}$.

The loss function is:

$E\left(\omega \right)=\frac{1}{2}{\displaystyle \underset{i=1}{\overset{N}{\sum}}{\left({y}_{i}-{O}_{i}\right)}^{2}}=\frac{1}{2}{\displaystyle \underset{i=1}{\overset{N}{\sum}}{e}_{i}^{2}}$ (2)

where ${O}_{i}$ is the expected output for sample $i$, ${y}_{i}$ is the network output, and ${e}_{i}$ is the error of sample $i$.

Neglecting higher-order terms, the error function is expanded in a second-order Taylor series about ${\omega}_{k}$:

$\phi \left(\omega \right)=E\left({\omega}_{k}\right)+\nabla E{\left({\omega}_{k}\right)}^{T}\left(\omega -{\omega}_{k}\right)+\frac{1}{2}{\left(\omega -{\omega}_{k}\right)}^{T}{\nabla}^{2}E\left({\omega}_{k}\right)\left(\omega -{\omega}_{k}\right)$ (3)

where $\nabla E$ is the gradient vector and ${\nabla}^{2}E$ is the Hessian matrix.

$\nabla E={\left[\begin{array}{cccc}\frac{\partial E}{\partial {\omega}_{1}}& \frac{\partial E}{\partial {\omega}_{2}}& \cdots & \frac{\partial E}{\partial {\omega}_{n}}\end{array}\right]}^{T}$ (4)

${\nabla}^{2}E=\left[\begin{array}{cccc}\frac{{\partial}^{2}E}{\partial {\omega}_{1}\partial {\omega}_{1}}& \frac{{\partial}^{2}E}{\partial {\omega}_{1}\partial {\omega}_{2}}& \cdots & \frac{{\partial}^{2}E}{\partial {\omega}_{1}\partial {\omega}_{n}}\\ \frac{{\partial}^{2}E}{\partial {\omega}_{2}\partial {\omega}_{1}}& \frac{{\partial}^{2}E}{\partial {\omega}_{2}\partial {\omega}_{2}}& \cdots & \frac{{\partial}^{2}E}{\partial {\omega}_{2}\partial {\omega}_{n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{{\partial}^{2}E}{\partial {\omega}_{n}\partial {\omega}_{1}}& \frac{{\partial}^{2}E}{\partial {\omega}_{n}\partial {\omega}_{2}}& \cdots & \frac{{\partial}^{2}E}{\partial {\omega}_{n}\partial {\omega}_{n}}\end{array}\right]$ (5)

According to the necessary condition for an extremum:

$\nabla \phi \left(\omega \right)=0$ (6)

Therefore:

${g}_{k}+{H}_{k}(\omega -{\omega}_{k})=0$ (7)

where ${H}_{k}={\nabla}^{2}E$ is the Hessian matrix and ${g}_{k}=\nabla E$ is the gradient vector. The weight-update formula of Newton’s method is obtained when ${H}_{k}$ is nonsingular.

$\Delta \omega =-{H}_{k}^{-1}{g}_{k}$ (8)

The Hessian matrix can be written in the following form:

${H}_{k}={\nabla}^{2}E=J{\left(\omega \right)}^{T}J\left(\omega \right)+S\left(\omega \right)$ (9)

where $S\left(\omega \right)={\displaystyle \underset{i=1}{\overset{N}{\sum}}{e}_{i}\left(\omega \right){\nabla}^{2}{e}_{i}\left(\omega \right)}$ and $J$ is the Jacobian matrix.

$J=\left[\begin{array}{cccc}\frac{\partial {e}_{1}}{\partial {\omega}_{1}}& \frac{\partial {e}_{1}}{\partial {\omega}_{2}}& \cdots & \frac{\partial {e}_{1}}{\partial {\omega}_{n}}\\ \frac{\partial {e}_{2}}{\partial {\omega}_{1}}& \frac{\partial {e}_{2}}{\partial {\omega}_{2}}& \cdots & \frac{\partial {e}_{2}}{\partial {\omega}_{n}}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial {e}_{N}}{\partial {\omega}_{1}}& \frac{\partial {e}_{N}}{\partial {\omega}_{2}}& \cdots & \frac{\partial {e}_{N}}{\partial {\omega}_{n}}\end{array}\right]$ (10)

Near the extreme point, $S\left(\omega \right)\approx 0$, so

${H}_{k}=J{\left(\omega \right)}^{T}J\left(\omega \right)$ (11)

Substituting (11) into (8) yields the Gauss-Newton weight-update formula:

$\Delta \omega =-{\left(J{\left(\omega \right)}^{T}J\left(\omega \right)\right)}^{-1}J{\left(\omega \right)}^{T}e\left(\omega \right)$ (12)

Newton’s method requires inverting the Hessian matrix ${H}_{k}$ at every iteration, but in practice ${H}_{k}$ may be singular; the LMBP algorithm avoids this problem.

Let:

$G={H}_{k}+\mu I$ (13)

where $\mu >0$ and $I$ is the identity matrix.

It can be shown that $G$ and ${H}_{k}$ share the same eigenvectors, and each eigenvalue ${\lambda}_{i}$ of ${H}_{k}$ becomes ${\lambda}_{i}+\mu$ in $G$. Choosing $\mu$ appropriately therefore makes $G$ invertible.
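This eigenvalue shift can be checked numerically with a small example (an illustration, not part of the original derivation):

```python
import numpy as np

# G = H_k + mu*I shares eigenvectors with H_k and shifts every eigenvalue
# by mu, so a singular H_k becomes invertible for mu > 0.
H = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # singular: eigenvalues are 2 and 0
mu = 0.5
G = H + mu * np.eye(2)

eigvals = np.linalg.eigvalsh(G)     # eigenvalues shifted by mu: 0.5 and 2.5
det = np.linalg.det(G)              # nonzero determinant -> invertible
print(eigvals, det)
```

The zero eigenvalue of $H$ becomes $\mu$, so the determinant of $G$ is nonzero and the update in the next equation is always well defined.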

The weight updating formula of LMBP algorithm is as follows.

$\Delta \omega =-{\left(J{\left(\omega \right)}^{T}J\left(\omega \right)+\mu I\right)}^{-1}J{\left(\omega \right)}^{T}\text{e}\left(\omega \right)$ (14)

where $e\left(\omega \right)={\left({e}_{1}\left(\omega \right),{e}_{2}\left(\omega \right),\cdots ,{e}_{N}\left(\omega \right)\right)}^{T}$ is the error vector.
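A minimal sketch of one LMBP update step, Eq. (14), is given below. The Jacobian and residual here come from a small synthetic least-squares problem, not from the actual network; for a linear residual the step recovers the exact minimizer as $\mu \to 0$.

```python
import numpy as np

def lm_step(J, e, mu):
    """Levenberg-Marquardt increment: dw = -(J^T J + mu*I)^{-1} J^T e (Eq. 14)."""
    n = J.shape[1]
    G = J.T @ J + mu * np.eye(n)   # damped approximate Hessian, invertible for mu > 0
    return -np.linalg.solve(G, J.T @ e)

# Synthetic problem: residual e(w) = J w - J w_true, minimized at w = w_true.
rng = np.random.default_rng(1)
J = rng.normal(size=(20, 3))       # stand-in Jacobian, 20 samples x 3 weights
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)                    # initial weights
e = J @ w - J @ w_true             # current residual vector
w = w + lm_step(J, e, mu=1e-8)     # one LM step solves the linear problem
print(np.round(w, 6))
```

For the nonlinear network loss of Eq. (2), this step is applied iteratively, with $\mu$ typically increased when a step fails to reduce the loss and decreased when it succeeds.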

4. Simulation

According to the error model of Formula (1), the BP neural network is used to calibrate the magnetometer. The random error is noise with a mean of 0 and a standard deviation of 2. The parameters of the ground error model are set as follows.

$\{\begin{array}{l}{B}_{0}={\left(\begin{array}{ccc}-633.2& 1281.7& -455.5\end{array}\right)}^{T}\text{nT}\\ A=\left[\begin{array}{ccc}0.9599& -0.0209& -0.0065\\ 0.0020& 0.9476& -0.0040\\ 0.0011& 0.0561& 0.9657\end{array}\right]\end{array}$ (15)

The actual magnetic field is as follows:

$B=\left[\begin{array}{c}4000+100\mathrm{sin}(t)\\ 4000+100\mathrm{cos}(t)\\ 4000-100\mathrm{sin}(t)\end{array}\right]\text{nT}$ (16)

Figure 2 shows that the maximum error of the magnetometer before calibration is greater than 1000 nT.
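Using the parameter values of Eqs. (15)-(16) and an assumed time grid, the uncalibrated measurement can be reproduced as follows (a sketch of the simulation setup, not the authors' code):

```python
import numpy as np

# Simulation setup from Eqs. (15)-(16): true field B(t) and the
# corresponding uncalibrated magnetometer readings under Eq. (1).
rng = np.random.default_rng(0)

B0 = np.array([-633.2, 1281.7, -455.5])          # constant drift, nT (Eq. 15)
A = np.array([[0.9599, -0.0209, -0.0065],        # coefficient matrix (Eq. 15)
              [0.0020,  0.9476, -0.0040],
              [0.0011,  0.0561,  0.9657]])

t = np.linspace(0.0, 30.0, 301)                  # assumed time grid, s
B_true = 4000.0 + 100.0 * np.stack([np.sin(t),   # true field, nT (Eq. 16)
                                    np.cos(t),
                                    -np.sin(t)])

noise = rng.normal(0.0, 2.0, size=B_true.shape)  # sigma = 2 noise term
B_meas = B0[:, None] + noise + A @ B_true        # Eq. (1), column per sample

err = np.linalg.norm(B_meas - B_true, axis=0)    # per-sample error magnitude
print(err.max())
```

The maximum error exceeds 1000 nT, consistent with the uncalibrated result shown in Figure 2.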

4.1. Magnetometer Calibration Simulation

The trained BP neural network can predict the real magnetic field from the measured values of the magnetometer. The training process is shown in Figure 3.

The structure of the BP neural network is three neurons in a single hidden layer, and the LMBP algorithm is used to train it. After 26 training iterations, the network error falls below 0.001. The mean square error is shown in Figure 4.

The calibration results are shown in Figure 5, which shows that the output error of the magnetometer is effectively corrected by the BP neural network and the accuracy of the magnetometer is significantly improved.

Calibrated against the ground model, the trained BP neural network predicts the real magnetic field from the measured values of the magnetometer with a maximum error of less than 10 nT.

Figure 2. Simulation of uncalibrated magnetometer.

Figure 3. Calibration process of BP neural network.

Figure 4. Error of neural network.

Figure 5. Comparison of calibration value with actual magnetic field.

4.2. On-Orbit Calibration Simulation

During on-orbit operation, environmental factors change the installation matrix and the sensor characteristics, so the actual performance of the sensor no longer matches the ground calibration, and on-orbit calibration becomes necessary. On-orbit training requires accurate magnetic field data. Fortunately, the position of the satellite is recorded during operation, so the accurate field can be obtained from the geomagnetic field model, which makes on-orbit training possible.

Assuming the satellite is affected by the space environment at t = 15 s, the constant drift and coefficient matrix of the magnetometer change to the values in Formula (17):

$\{\begin{array}{l}{B}_{0}={\left[\begin{array}{ccc}-652.7& 1266.4& -444.49\end{array}\right]}^{T}\text{nT}\\ A=\left[\begin{array}{ccc}0.9579& -0.0209& -0.0065\\ 0.0020& 0.9456& -0.0040\\ 0.0011& 0.0561& 0.9637\end{array}\right]\end{array},\text{ }t>15\text{s}$ (17)

As shown in Figure 6, if the ground-trained model is still used for measurement, errors appear between the real and measured data.

The calibration error of the ground error model reaches 30 nT (as shown in Figure 6(b)). To reduce the influence of the time-varying error on measurement accuracy, the BP neural network is trained on orbit with a periodic training method. The training period is set to 10 seconds; that is, the BP neural network is retrained and the magnetometer recalibrated every 10 seconds. The training process is shown in Figure 7.
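The periodic training loop can be sketched schematically as follows. Here `train_network` and the stream of (time, measured, reference) samples are hypothetical placeholders for LMBP training and for telemetry paired with the geomagnetic-model reference field; the toy demonstration merely counts the samples in each window.

```python
# Schematic sketch of periodic on-orbit retraining: every `period_s` seconds,
# retrain the calibration network on the most recent data window.

def periodic_calibration(stream, train_network, period_s=10.0):
    """Yield (t, model), retraining every `period_s` seconds on fresh data."""
    model = None
    buffer = []
    for t, b_meas, b_ref in stream:
        buffer.append((b_meas, b_ref))       # accumulate the current window
        if t % period_s == 0 and buffer:     # training period elapsed
            model = train_network(buffer)    # recalibrate on the recent window
            buffer = []                      # start a fresh window
        yield t, model

# Toy demonstration: "training" just reports the window size it received.
stream = [(float(t), None, None) for t in range(1, 31)]
models = [m for _, m in periodic_calibration(stream, train_network=len)]
print(models[-1])
```

Each retraining uses only the latest window, so a parameter jump such as the one at t = 15 s is absorbed within at most one training period.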

Figure 8(b) shows that the calibration error increases when the magnetometer performance changes, and that on-orbit calibration with the BP neural network effectively reduces the calibration error within one training period. The error after calibration is less than 10 nT, a significant improvement in accuracy over using the ground calibration parameters.


Figure 6. Calibration result of ground model. (a) Magnetic field intensity; (b) Error.

Figure 7. Simulation flow chart.


Figure 8. Calibrated value and real value comparison. (a) Magnetic field intensity; (b) Error.

5. Conclusion

In this paper, an LMBP neural network is designed for magnetometer calibration to avoid the calibration error caused by model error. The network structure is designed, and the LMBP algorithm is used to improve the training speed and convergence of the network. A periodic training method is designed to calibrate the magnetometer in orbit using online data, and the calibration effect of the LMBP neural network is verified by simulation. The results show that the BP neural network can improve the measurement accuracy of the magnetometer when its error model is unknown. It also effectively reduces the error caused by changes in the magnetometer parameters due to the space environment, keeping the measurement error below 10 nT.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] Hao, D., Sheng, T. and Chen, X. (2011) The Error Correction of Three-Axis Magnetometer Measurement. Spacecraft Environment Engineering, 28, 463-466.

[2] Feng-Zhuo, X., Guang-Yun, L.I., Li, W., An-Cheng, W. and De-Qi, Y.U. (2019) Three-Axis Magnetometer Online Self-Calibration Method Based on Recursive Least Square. Transducer and Microsystem Technologies, 38, 30-33.

[3] Liao, L., Huamiao, S., Bo, H.E., Lijun, X. and Yingchun, Z. (2018) Error Calibration Method of Three-Axis Magnetometer Measurement for Micro-Satellite. Spacecraft Engineering, 27, 89-95.

[4] Wu, H., Pei, X., Li, J., Gao, H. and Bai, Y. (2020) An Improved Magnetometer Calibration and Compensation Method Based on Levenberg-Marquardt Algorithm for Multi-Rotor Unmanned Aerial Vehicle. Measurement and Control, 53, 276-286. https://doi.org/10.1177/0020294019890627

[5] Abbey, J. and Boland, S. (2019) On-Orbit Calibration of Magnetometer Using Stochastic Gradient Descent. Proceedings of the AIAA/USU Conference on Small Satellites. https://digitalcommons.usu.edu/smallsat/2019/all2019/57/

[6] Ji, T.Y. and Xu, Y.F. (2016) Simplified Calibration of Three-Axis Magnetometer. Mechanical Engineering and Automation, 1, 68.

[7] Eswaran, K. and Singh, V. (2015) Some Theorems for Feed Forward Neural Networks. arXiv Preprint arXiv:1509.05177.

[8] Su, G.L. and Deng, F.P. (2003) On the Improving Backpropagation Algorithms of the Neural Networks Based on MATLAB Language: A Review. Bulletin of Science and Technology, 19, 130-135.

[9] Panchal, G., Ganatra, A., Kosta, Y.P. and Panchal, D. (2011) Behaviour Analysis of Multilayer Perceptrons with Multiple Hidden Neurons and Hidden Layers. International Journal of Computer Theory and Engineering, 3, 332-337. https://doi.org/10.7763/IJCTE.2011.V3.328

[10] Madhiarasan, M. and Deepa, S.N. (2016) A Novel Criterion to Select Hidden Neuron Numbers in Improved Back Propagation Networks for Wind Speed Forecasting. Applied Intelligence, 44, 878-893. https://doi.org/10.1007/s10489-015-0737-z

[11] Sheela, K.G. and Deepa, S.N. (2013) Review on Methods to Fix Number of Hidden Neurons in Neural Networks. Mathematical Problems in Engineering, Article ID: 425740. https://doi.org/10.1155/2013/425740

[12] Lv, C., Xing, Y., Zhang, J., Na, X., Li, Y., Liu, T. and Wang, F.Y. (2017) Levenberg-Marquardt Backpropagation Training of Multilayer Neural Networks for State Estimation of a Safety-Critical Cyber-Physical System. IEEE Transactions on Industrial Informatics, 14, 3436-3446. https://doi.org/10.1109/TII.2017.2777460

[13] Ye, Z. and Kim, M.K. (2018) Predicting Electricity Consumption in a Building Using an Optimized Back-Propagation and Levenberg-Marquardt Back-Propagation Neural Network: Case Study of a Shopping Mall in China. Sustainable Cities and Society, 42, 176-183. https://doi.org/10.1016/j.scs.2018.05.050

Copyright © 2020 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.