Neural Network Based Order Statistic Processing Engines

Abstract

Order statistic filters are widely used in science and engineering applications. This paper investigates the design and training of a feed-forward neural network to approximate the minimum, median and maximum operations. The design of the order statistic neural network filter (OSNNF) is further refined by converting real-valued input vectors into sets of binary (one/zero) inputs, and the neural network is trained to yield a rank vector from which the exact ranked values of the input vector can be obtained. As a case study, the OSNNF is used to improve the visibility of target echoes masked by clutter in ultrasonic nondestructive testing applications.

1. Introduction

Order statistic (OS) processors have been widely used in the field of signal and image processing [1-3]. OS results can be obtained by sorting the elements of an input vector according to the rank of each element. Ranked outputs such as the minimum, median and maximum have been used for target detection with applications in radar, sonar and ultrasonic nondestructive testing [4,5]. The problem of sorting has already been solved by sequential and iterative methods such as the bubble sort, selection sort, insertion sort, and quick sort, with computational cost ranging between O(N log N) and O(N²) comparison and swapping operations [6]. As an alternative to conventional sorting techniques, a neural network design based on harmony theory has been proposed for the sorting operation [7]. Neural network hardware can be implemented as a parallel architecture using VLSI and FPGA technology, which is highly desirable for high-speed computation [8-11].
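As a concrete illustration of OS filtering (our sketch, not code from the paper), the snippet below slides a window across a signal and outputs the rank-th smallest value in each window: rank 0 yields the minimum filter, a middle rank the median filter, and the top rank the maximum filter. The function name os_filter and all parameter values are assumptions made for this example.

```python
import numpy as np

def os_filter(x, window=8, rank=0):
    """Sliding-window order statistic filter (illustrative sketch).

    rank = 0 selects the minimum in each window and rank = window - 1
    the maximum. For an even window length there is no single middle
    element; rank 3 or 4 gives the lower or upper median of 8 samples.
    """
    out = np.empty(len(x) - window + 1)
    for i in range(len(out)):
        out[i] = np.sort(x[i:i + window])[rank]  # rank-th smallest value
    return out

signal = np.random.rand(64)           # uniformly distributed test signal
minimum = os_filter(signal, 8, 0)     # minimum filter
median  = os_filter(signal, 8, 3)     # lower median for an even window
maximum = os_filter(signal, 8, 7)     # maximum filter
```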

In this paper, feed-forward neural network models [12] are introduced to find the minimum, the median, and the maximum of input vectors of real numbers. The back-propagation learning algorithm [13] is utilized in the training phase of the order statistic neural network filters (OSNNF). If the size of the input vector is n, there are n! different orderings of the same n real numbers, all of which yield the same sorted output; for n = 8 this is already 40,320 permutations. Furthermore, because the elements are real-valued, the set of possible input vectors is unbounded, which would demand an unlimited number of training vectors. It is therefore impractical to train an OSNNF on exhaustive input data, and the trained OSNNF may not produce exact sorted results. In spite of this drawback, neural network filters can be trained to approximate the sorted results well, and this approximation may be sufficient for sorting random processes in certain applications [4,5].

In practice, it is desirable to develop an efficient neural network model that estimates the minimum, median, and maximum of the input vectors. To achieve this, simulated data are used in the training phase. The training set consists of uniformly distributed random numbers scaled between zero and one. The neural network is then trained to yield the ranked output (e.g., the minimum, median or maximum value of the real-valued input vector). In the next section, we present the design techniques for the neural network OS filters. Section 3 discusses an improved neural network solution that finds the rank of each input in order to reveal the exact sorted result. Section 4 utilizes these neural network filters to enhance the visibility of target echoes in high scattering clutter using split-spectrum processing (SSP).
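The following is a minimal sketch of this training procedure, assuming a single hidden layer of sigmoid units with a linear output, trained by full-batch gradient-descent back-propagation to estimate the median of M = 8 uniform inputs. The hidden-layer width, learning rate, and epoch count are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, H, LR, EPOCHS = 8, 16, 0.1, 5000   # inputs, hidden units, rate, epochs

# Training set: uniform random vectors in [0, 1]; the target is the median.
X = rng.random((2000, M))
t = np.median(X, axis=1, keepdims=True)

# One hidden layer of sigmoid units, linear output neuron.
W1 = rng.normal(0, 0.5, (M, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(EPOCHS):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y = h @ W2 + b2
    err = y - t                            # mean-squared-error gradient
    # Backward pass (full-batch gradient descent)
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = err @ W2.T * h * (1 - h)          # back-propagate through sigmoid
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W2 -= LR * dW2; b2 -= LR * db2
    W1 -= LR * dW1; b1 -= LR * db1

# The trained network approximates, but does not exactly reproduce, the median.
test = rng.random((5, M))
pred = sigmoid(test @ W1 + b1) @ W2 + b2
print(np.c_[pred, np.median(test, axis=1)])
```

As the final comparison suggests, the network output tracks the true medians closely but not exactly, consistent with the approximation behavior discussed above.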

2. Neural Network OS Filters

Figure 1 displays the structure of the neural network OS filter for the case of 8 inputs (M = 8). This filter is a fully connected feed-forward neural network. When unordered, uniformly distributed random numbers, x(i), are presented as a real-valued input vector to the neural network, each output of the hidden neurons is the weighted sum of the input nodes and the bias node passed through a nonlinear activation function.
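The rank-vector refinement previewed in the abstract converts the real-valued inputs into ones and zeros and trains the network to yield a rank vector. As a sketch under our own assumptions (the paper's exact binary encoding is not shown in this excerpt), a pairwise 0/1 comparison matrix makes the exact ranks, and hence the exact sorted vector, recoverable:

```python
import numpy as np

def rank_vector(x):
    """Rank of each element via pairwise 0/1 comparisons (illustrative).

    C[i, j] = 1 when x[i] > x[j]; summing row i counts how many elements
    x[i] exceeds, which is its rank. Assumes distinct input values.
    """
    x = np.asarray(x)
    C = (x[:, None] > x[None, :]).astype(int)  # binary comparison matrix
    return C.sum(axis=1)                       # rank of each element

x = np.array([0.42, 0.11, 0.87, 0.53])
r = rank_vector(x)                 # -> [1, 0, 3, 2]
sorted_x = np.empty_like(x)
sorted_x[r] = x                    # scattering by rank recovers the exact sort
print(r, sorted_x)                 # [1 0 3 2] [0.11 0.42 0.53 0.87]
```

With n = 8 inputs this encoding uses 64 binary comparisons, and tied values would require a consistent tie-breaking rule; both choices here are ours, not the paper's.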

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] J. Serra, “Image Analysis and Mathematical Morphology,” Academic Press, New York, 1988.
[2] I. Pitas and A. N. Venetsanopoulos, “Nonlinear Digital Filters, Principles and Applications,” Kluwer Academic Publishers, Boston, 1990.
[3] J. Astola and P. Kuosmanen, “Fundamentals of Nonlinear Digital Filtering,” CRC Press, Boca Raton, 1997.
[4] J. Saniie, K. D. Donohue and N. M. Bilgutay, “Order Statistic Filters as Postdetection Processor,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38, No. 10, 1990, pp. 1722-1732.
[5] J. Saniie, D. T. Nagle and K. D. Donohue, “Analysis of Order Statistic Filters Applied to Ultrasonic Flaw Detection Using Split-Spectrum Processing,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, Vol. 38, No. 2, 1991, pp. 133-140. doi:10.1109/58.68470
[6] M. A. Weiss, “Data Structures and Algorithm Analysis in C++,” Addison Wesley, Reading, 2006.
[7] T. Tambouratzis, “A Novel Artificial Neural Network for Sorting,” IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, Vol. 29, No. 2, 1999, pp. 271-275. doi:10.1109/3477.752799
[8] P. W. Hollis and J. J. Paulos, “A Neural Network Learning Algorithm Tailored for VLSI Implementation,” IEEE Transactions on Neural Networks, Vol. 5, No. 5, 1994, pp. 784-791. doi:10.1109/72.317729
[9] B. M. Wilamowski and R. C. Jaeger, “Neuro-Fuzzy Architecture for CMOS Implementation,” IEEE Transactions on Industrial Electronics, Vol. 46, No. 6, 1999, pp. 1132-1136. doi:10.1109/41.808001
[10] X. Zhu, L. Yuan, D. Wang and Y. Chen, “FPGA Implementation of a Probabilistic Neural Network for Spike Sorting,” 2010 2nd International Conference on Information Engineering and Computer Science, Wuhan, 25-26 December 2010, pp. 1-4.
[11] J. Misra and I. Saha, “Artificial Neural Networks in Hardware: A Survey of Two Decades of Progress,” Neurocomputing, Vol. 74, No. 1-3, 2010, pp. 239-255.
[12] K. Hornik, “Multilayer Feedforward Networks as Universal Approximators,” Neural Networks, Vol. 2, No. 5, 1989, pp. 359-366. doi:10.1016/0893-6080(89)90020-8
[13] T. Masters, “Practical Neural Network Recipes in C++,” Academic Press Inc., New York, 1993.
[14] F. J. Bremner, S. J. Gotts and D. L. Denham, “Hinton Diagrams: Viewing Connection Strengths in Neural Networks,” Behavior Research Methods, Vol. 26, No. 2, 1994, pp. 215-218.
[15] E. Parzen, “On Estimation of a Probability Density Function and Mode,” Annals of Mathematical Statistics, Vol. 33, No. 3, 1962, pp. 1065-1076. doi:10.1214/aoms/1177704472
