American Journal of Computational Mathematics

Volume 8, Issue 4 (December 2018)

ISSN Print: 2161-1203   ISSN Online: 2161-1211

Google-based Impact Factor: 0.42

Square Neurons, Power Neurons, and Their Learning Algorithms

PP. 296-313 (PDF Size: 424KB)
DOI: 10.4236/ajcm.2018.84024
Author(s)

Liu, Y.

ABSTRACT

In this paper, we introduce the concepts of square neurons and power neurons, together with new learning algorithms based on them. First, we briefly review the basic idea of the Boltzmann Machine, specifically that the invariant distributions of the Boltzmann Machine generate Markov chains. We then review the ABM (Attrasoft Boltzmann Machine). Next, we review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We review the linear neurons and their associated learning algorithm. We then discuss the problems of the exponential neurons used in the ABM, which are numerically unstable, and of the linear neurons, which do not discriminate wrong answers from right answers as sharply as the exponential neurons do. Finally, we introduce the concepts of square neurons and power neurons and discuss the advantages of the learning algorithms based on them, which combine the stability of the linear neurons with the sharp discrimination of the exponential neurons.
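The trade-off described in the abstract can be illustrated with a toy comparison. The paper's actual definitions of exponential, linear, and square neurons are not reproduced on this page, so the functions below are hypothetical stand-ins chosen only to show the qualitative point: an exponential response grows explosively (sharp discrimination, but prone to overflow), a linear response stays bounded in growth (stable, but weakly discriminating), and a square response sits between the two.

```python
import math

# Illustrative stand-ins for the three neuron response types discussed
# in the abstract (not the paper's actual definitions).
def exponential(x):
    return math.exp(x)   # grows explosively: sharp, but unstable

def linear(x):
    return x             # grows slowly: stable, but weakly discriminating

def square(x):
    return x * x         # intermediate growth

# For increasing input magnitudes, compare how strongly each response
# separates large activations from small ones.
for x in [1.0, 5.0, 10.0]:
    print(f"x={x:5.1f}  linear={linear(x):8.1f}  "
          f"square={square(x):8.1f}  exp={exponential(x):12.1f}")
```

At x = 10 the exponential response is already four orders of magnitude larger than the square response, which in turn is an order of magnitude larger than the linear one; this ordering is the sense in which a square neuron can offer sharper discrimination than a linear neuron while avoiding the overflow behavior of an exponential one.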

Share and Cite:

Liu, Y. (2018) Square Neurons, Power Neurons, and Their Learning Algorithms. American Journal of Computational Mathematics, 8, 296-313. doi: 10.4236/ajcm.2018.84024.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.