Brain as an Emergent Finite Automaton: A Theory and Three Theorems


This paper models a biological brain, excluding motivation (e.g., emotions), as a Finite Automaton in a Developmental Network (FA-in-DN), where the FA emerges incrementally inside the DN. Artificial intelligence (AI) has two major schools: symbolic and connectionist. Weng 2011 [1] proposed three major properties of the Developmental Network (DN) that bridge the two schools: 1) given any complex FA that demonstrates human knowledge through its sequence of symbolic inputs and outputs, a Developmental Program (DP) incrementally develops an equivalent, emergent FA inside the DN from the naturally emerging image patterns of the FA's symbolic inputs and outputs; the DN's learning from the FA is incremental, immediate, and error-free; 2) after learning the FA, if the DN freezes its learning but keeps running, it generalizes optimally to infinitely many inputs and actions based on the neurons' inner-product distance, state equivalence, and the principle of maximum likelihood; 3) after learning the FA, if the DN continues to learn while running, it "thinks" optimally in the sense of maximum likelihood, conditioned on its limited computational resources and its limited past experience. This paper gives an overview of the FA-in-DN brain theory and presents the three major theorems with their proofs.
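As a rough, language-level illustration (a symbolic stand-in, not the DN algorithm itself, which operates on emergent neuronal patterns rather than symbols), property 1) amounts to learning an FA's transition function incrementally and error-free from observed (state, input) → next-state examples. The hypothetical class below sketches this one-shot learning and error-free recall:

```python
# Hedged sketch: incremental, error-free learning of an FA's transition
# table from observed (state, input) -> next_state examples. This is a
# symbolic illustration of the abstract's property 1), not the DN itself.

class IncrementalFA:
    def __init__(self):
        # Learned transitions: (state, symbol) -> next_state
        self.delta = {}

    def learn(self, state, symbol, next_state):
        # One-shot, immediate learning: a single observation fixes the entry.
        self.delta[(state, symbol)] = next_state

    def step(self, state, symbol):
        # Error-free recall of any transition that has been taught.
        return self.delta.get((state, symbol))


fa = IncrementalFA()
# Teach a tiny two-state FA that tracks the parity of occurrences of 'a'.
for s, x, ns in [("even", "a", "odd"), ("odd", "a", "even"),
                 ("even", "b", "even"), ("odd", "b", "odd")]:
    fa.learn(s, x, ns)

state = "even"
for x in "aab":          # two 'a's return the parity to even
    state = fa.step(state, x)
print(state)             # -> even
```

Properties 2) and 3) concern what happens beyond this table: where a symbolic FA can only recall taught transitions, the DN's inner-product matching lets it respond to inputs it has never seen by mapping them to the maximum-likelihood learned pattern.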

Share and Cite:

Weng, J. (2015) Brain as an Emergent Finite Automaton: A Theory and Three Theorems. International Journal of Intelligence Science, 5, 112-131. doi: 10.4236/ijis.2015.52011.

Conflicts of Interest

The author declares no conflicts of interest.


[1] Weng, J. (2011) Three Theorems: Brain-Like Networks Logically Reason and Optimally Generalize. International Joint Conference on Neural Networks, San Jose, 31 July-5 August 2011, 2983-2990.
[2] Weng, J. (2012) Natural and Artificial Intelligence: Introduction to Computational Brain-Mind. BMI Press, Okemos.
[3] Gluck, M.A., Mercado, E. and Myers, C. (2013) Learning and Memory: From Brain to Behavior. 2nd Edition, Worth Publishers, New York.
[4] Chomsky, N. (1980) Rules and Representations. Columbia University Press, New York.
[5] Kandel, E.R., Schwartz, J.H., Jessell, T.M., Siegelbaum, S. and Hudspeth, A.J. (2012) Principles of Neural Science. 5th Edition, McGraw-Hill, New York.
[6] Weng, J. and Luciw, M. (2012) Brain-Like Emergent Spatial Processing. IEEE Transactions on Autonomous Mental Development, 4, 161-185.
[7] Weng, J., Luciw, M. and Zhang, Q. (2013) Brain-Like Temporal Processing: Emergent Open States. IEEE Transactions on Autonomous Mental Development, 5, 89-116.
[8] Weng, J. and Luciw, M.D. (2014) Brain-Inspired Concept Networks: Learning Concepts from Cluttered Scenes. IEEE Intelligent Systems Magazine, 29, 14-22.
[9] Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2006) Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Boston.
[10] Weng, J., Paslaski, S., Daly, J., VanDam, C. and Brown, J. (2013) Modulation for Emergent Networks: Serotonin and Dopamine. Neural Networks, 41, 225-239.
[11] Wang, Y., Wu, X. and Weng, J. (2011) Synapse Maintenance in the Where-What Network. International Joint Conference on Neural Networks, San Jose, 31 July-5 August 2011, 2823-2829.
[12] Krichmar, J.L. (2008) The Neuromodulatory System: A Framework for Survival and Adaptive Behavior in a Challenging World. Adaptive Behavior, 16, 385-399.
[13] Weng, J. (2012) Symbolic Models and Emergent Models: A Review. IEEE Transactions on Autonomous Mental Development, 4, 29-53.
[14] Russell, S. and Norvig, P. (2010) Artificial Intelligence: A Modern Approach. 3rd Edition, Prentice-Hall, Upper Saddle River.
[15] Weng, J. (2011) Why Have We Passed “Neural Networks Do Not Abstract Well”? Natural Intelligence: The INNS Magazine, 1, 13-22.
[16] Minsky, M. (1991) Logical versus Analogical or Symbolic versus Connectionist or Neat versus Scruffy. AI Magazine, 12, 34-51.
[17] Gomes, L. (2014) Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts. IEEE Spectrum, Online Article Posted 20 October 2014.
[18] Olshausen, B.A. and Field, D.J. (1996) Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 381, 607-609.
[19] Hinton, G.E., Osindero, S. and Teh, Y.-W. (2006) A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 18, 1527-1554.
[20] Desimone, R. and Duncan, J. (1995) Neural Mechanisms of Selective Visual Attention. Annual Review of Neuroscience, 18, 193-222.
[21] Weng, J. (2013) Establish the Three Theorems: DP Optimally Self-Programs Logics Directly from Physics. Proceedings of International Conference on Brain-Mind, East Lansing, 27-28 July 2013, 1-9.
[22] Frasconi, P., Gori, M., Maggini, M. and Soda, G. (1995) Unified Integration of Explicit Knowledge and Learning by Example in Recurrent Networks. IEEE Transactions on Knowledge and Data Engineering, 7, 340-346.
[23] Frasconi, P., Gori, M., Maggini, M. and Soda, G. (1996) Representation of Finite State Automata in Recurrent Radial Basis Function Networks. Machine Learning, 23, 5-32.
[24] Omlin, C.W. and Giles, C.L. (1996) Constructing Deterministic Finite-State Automata in Recurrent Neural Networks. Journal of the ACM, 43, 937-972.
[25] Felleman, D.J. and Van Essen, D.C. (1991) Distributed Hierarchical Processing in the Primate Cerebral Cortex. Cerebral Cortex, 1, 1-47.
[26] Sur, M. and Rubenstein, J.L.R. (2005) Patterning and Plasticity of the Cerebral Cortex. Science, 310, 805-810.
[27] Bichot, N.P., Rossi, A.F. and Desimone, R. (2005) Parallel and Serial Neural Mechanisms for Visual Search in Macaque Area V4. Science, 308, 529-534.
[28] Campbell, N.A., Reece, J.B., Urry, L.A., Cain, M.L., Wasserman, S.A., Minorsky, P.V. and Jackson, R.B. (2011) Biology. 9th Edition, Benjamin Cummings, San Francisco.
[29] Weng, J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M. and Thelen, E. (2001) Autonomous Mental Development by Robots and Animals. Science, 291, 599-600.
[30] Weng, J. (2009) Task Muddiness, Intelligence Metrics, and the Necessity of Autonomous Mental Development. Minds and Machines, 19, 93-115.
[31] Weng, J. and Luciw, M. (2009) Dually Optimal Neuronal Layers: Lobe Component Analysis. IEEE Transactions on Autonomous Mental Development, 1, 68-85.
[32] Martin, J.C. (2003) Introduction to Languages and the Theory of Computation. 3rd Edition, McGraw Hill, Boston.
[33] Chomsky, N. (1957) Syntactic Structures. Mouton, The Hague.
[34] Ji, Z., Weng, J. and Prokhorov, D. (2008) Where-What Network 1: “Where” and “What” Assist Each Other through Top-Down Connections. IEEE International Conference on Development and Learning, Monterey, 9-12 August 2008, 61-66.
[35] Wu, X., Guo, Q. and Weng, J. (2013) Skull-Closed Autonomous Development: WWN-7 Dealing with Scales. Proceedings of International Conference on Brain-Mind, East Lansing, 27-28 July 2013, 1-8.
[36] Luciw, M. and Weng, J. (2010) Where What Network 3: Developmental Top-Down Attention with Multiple Meaningful Foregrounds. International Joint Conference on Neural Networks, Barcelona, 18-23 July 2010, 4233-4240.
[37] Miyan, K. and Weng, J. (2010) WWN-Text: Cortex-Like Language Acquisition, with What and Where. IEEE 9th International Conference on Development and Learning, Ann Arbor, 18-21 August 2010, 280-285.
[38] Wiesel, T.N. and Hubel, D.H. (1965) Comparison of the Effects of Unilateral and Bilateral Eye Closure on Cortical Unit Responses in Kittens. Journal of Neurophysiology, 28, 1029-1040.
[39] Von Melchner, L., Pallas, S.L. and Sur, M. (2000) Visual Behaviour Mediated by Retinal Projections Directed to the Auditory Pathway. Nature, 404, 871-876.
[40] Voss, P. (2013) Sensitive and Critical Periods in Visual Sensory Deprivation. Frontiers in Psychology, 4, 664.

Copyright © 2023 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.