Anthropo-Genetic Algorithm of the Mind

Abstract

This study aims to develop a hybrid model that represents the human mind from a functionalist point of view and can be adapted to artificial intelligence. The model is not a realistic theory of the neural network of the brain but an instrumentalist AI model, which means that other representative models are possible too. It had long been thought that provability within an axiomatic system requires the completeness of the formal system. However, Gödel proved that no consistent formal system of sufficient strength can prove its own consistency. There is a paradoxical limit to provability: for such systems, consistency and completeness cannot be had together. These formal limits form the basis of our model: the hybrid model of the mind consists of a formal axiomatic system and an evolutionary algorithm (EA) of intelligence, which also includes the genetic algorithm (GA) of consciousness. The GA of consciousness operates on intentions, functioning in conjunction with the principles of the EA. This collaboration allows it to transcend the paradoxical formal limits of intelligence. However, the GA overcomes the problem of paradoxicality at the cost of producing illusions. Tracing this collaboration yields an operating map of the GA. After all, if the rational optimization task of the GA in question produces illusions, then rationality must lie elsewhere in the paradox.

Share and Cite:

Bilgic, M. (2024) Anthropo-Genetic Algorithm of the Mind. Open Journal of Philosophy, 14, 161-179. doi: 10.4236/ojpp.2024.141014.

1. Introduction: The Formal Limits

The core of scientific thinking is the problem of representing mind-independent reality with various mental forms, as can be observed throughout the history of philosophy. Aristotle set up his idea of the apodictic syllogism on the principle of identity between two mental forms consistent with external reality, against Plato's idea of Causa Sui and of non-relational mental forms, called eidos, that are absolutely true in themselves. Even though we do not use Aristotelian logic today, the logic of mathematics and deduction is based on the idea of identity. Whether science should provide an exact, theoretical definition, or whether another kind of science, such as one based on paraconsistent logic, is possible, remains a question to keep in mind at all times.

Whatever the scientific method to be followed, some basic principles of scientific thinking will always exist. Even though theoretical intuition is needed, proof is a fundamental characteristic of scientific thinking, as Euclid marked as a milestone in his Elements. In his Philosophiæ Naturalis Principia Mathematica, Newton sharply separated physics from alchemy, emphasizing the limit of the scope of science as the second characteristic of scientific thinking. Proof methods should only be applied to scientific objects. Scientific thinking is a theoretical system consisting of variables and conversion principles, and a proof is valid only within the system. There is a debate on the issue: Wittgenstein stated in his Tractatus that the formal principles of a system cannot be represented and proven within the system. Carnap rejected this idea by restricting the claim to syntax. Scientific thinking is thus limited to the framework of scientific language; the syntax of a framework cannot be questioned in itself without pragmatics. A proposition must comply with the definitions and rules established in the framework. Only within the frame can we scientifically define the pragmatical boundaries of the frame. Quine recast the discussion by distinguishing internal questions, as subclass questions, from external questions, as category questions, such as particular numerical operations versus mathematical rules. This tension at the frontiers of scientific methodology is fundamental to scientific thinking and cognition, for the human mind and artificial intelligence in general. The specific problem is how to define the categorical framework of intelligence.

Taking Aristotle's position, the problem of defining the identity principle will always be the basis of science, as when Leibniz devised a calculator. Leibniz's machine was based on the principle of the identity of indiscernibles. However, after the identity principle, a second condition for truth emerged: calculating the truth values of mathematical expressions requires a consistent, complete formal language. Therefore, the need for a complete formal system became the project's second problematic goal.

Russell and Whitehead (1927) wrote Principia Mathematica to achieve this goal. In an attempt to provide a model-theoretical first-order axiomatization of identity, they started with Leibniz's identity of indiscernibles. Russell's set theory unfortunately arrived at self-referential, paradoxical sets, in what came to be called Russell's paradox: the paradox of the set of all sets that do not contain themselves as elements. Independently of Russell's logicism, Hilbert defined the target explicitly as the decision problem, or "Entscheidungsproblem," posed alongside the demand for a completeness theorem for first-order logic. The problem is the existence of an algorithm that will decide the provability of any statement in the system based on the axioms of the system.

Gödel's incompleteness theorem provides a higher-level definition of the problem in principle. Gödel's proof is based on the principle of negation and on the negation of a self-referential statement (Gödel, 1931). We may simply say that the basic axioms of a formal system cannot be proved using the outputs of the same axiomatic system, and therefore such formal systems are incomplete. Gödel's result undermines Hilbert's program, which was to find methods of proving the consistency of an axiomatic system from within. Hilbert hoped to show that any system of assumed axioms and rules that can be proved consistent (without using any new axioms and/or rules) is legitimate. Gödel's result reveals that once we reach axiomatic systems that can capture elementary arithmetic (representing every recursive function on the natural numbers), it is impossible to prove their consistency from within; constructing a formal system free of self-reference runs into the constitutional limits of such formal systems. Influenced by Gödel's proof, Alan Turing framed the problem in terms of the existence of a computing machine or an algorithm capable of solving the decision problem (Turing, 1937). If we take the Turing machine as a function or a computational program, the problem lies in the feasibility of writing a program that can prove when its recursive functions will eventually reach zero and stop. Even though the elements of such a function are recursively enumerable, the complement of the function is not. Hence the halting problem, and with it the Entscheidungsproblem, is undecidable.
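Turing's diagonal argument can be compressed into a few lines of Python. This is an illustrative sketch of the standard proof, not anyone's working code: `halts` is the hypothetical oracle whose existence the argument refutes, and `g` is the self-referential program that defeats any such oracle.

```python
def halts(program, argument):
    """Hypothetical halting oracle: True iff program(argument) halts.
    No correct, total implementation can exist (see below)."""
    raise NotImplementedError

def g(program):
    """Diagonal program: does the opposite of the oracle's prediction."""
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return "halted"      # halt at once if the oracle says "loops"

# Suppose halts() were correctly implemented.
#   If halts(g, g) returned True,  g(g) would loop forever -- oracle wrong.
#   If halts(g, g) returned False, g(g) would halt at once -- oracle wrong.
# Either way the oracle contradicts itself on g, so it cannot exist.
print("No total, correct halts() can exist for all programs.")
```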

In conclusion, we can say that thinking is copying, modeling, or constructing some formal representation of objects using some formal system, such as the set of natural numbers, arithmetic, or geometry. However, the operations of these representative formal systems are limited by self-referential paradoxes arising from their constitutional framework. What about the proofs mentioned above? How could these mathematicians think about and compute incompleteness or the halting problem in formal systems themselves? How can human intelligence go beyond the paradoxical limits of formal systems while operating within a formal system, which programmed artificial intelligence cannot yet accomplish? Gödel, Turing, and other geniuses, whose minds paradoxically operated in a self-referential way, could decide undecidability or provide full proofs of incompleteness in some incredibly creative ways. They operated within a formal system but carried out their formal intellectual functions in informal, complex systems too, overcoming the paradox. Our goal is to unravel the map within the possibilities of such a hybrid mind model.

There are several problems that arise when discussing the mind. We introspectively experience our own minds, but other minds remain obscure to us. How can we discuss the mind from a third-person, scientific perspective? If we attempt to explain it, will we bump into the boundaries of such an epistemological gap? Meanwhile, offering a hypothesis about the mind means representing it in some way. In this case, should we idealistically hypothesize the mind as being separate from the brain, or should we reduce it to the brain in a materialistic ontology, and if so, how? As the totality of these epistemological and ontological problems suggests, there is a metaphysical problem in uttering anything about the mind. However, the functionalist approach can avoid these epistemological and ontological problems because it works in a very different way. Even though functionalism is a materialist theory, it does not reduce the mind to the body but to functions. Representing the mind with some function is a philosophically hygienic position and holds some powerful possibilities when discussing the mind. This is what we try to achieve below. The main stream of the history of philosophy sets up its general theories according to the cognitive faculties of the human mind. Social organizations, political orders, classifications of sciences, educational theories, and, in general, philosophical theories are mostly oriented according to a mind model. Plato's tripartite soul, Aristotle's three kinds of soul, Aquinas' three acts of intellect, Kant's three higher faculties of cognition, and Hegel's three spirits were the bases of their philosophical systems. The problems of the fields of philosophy are deeply related to mind models.

2. Hybrid System of the Mind

The main problem, as discussed here, is the creative, rational relationship between formal and complex systems. Functionally, the human mind has a general self-referential, paradoxical, formal intelligence system, but it also has individual and complex usage patterns called consciousness. The problem of the mind, then, emerges as the relationship between intelligence and consciousness. Intelligence recursively encloses the mind, and consciousness tries to transcend its paradoxical limits. At this point, there is an additional problem regarding this relationship: we must use a formal system to express the relationship between formal and complex systems. Here, we are trying to express this problematic relationship in the mind.

Intelligence is a result of the cognitive evolution of the brain, especially the evolution of the faculty of language. Hauser, Chomsky, and Fitch (2002: p. 1569) argue that the faculty of language requires a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion. Recursion provides the faculty of language with the means to generate an infinite range of expressions from a finite set of elements. It is possible to demonstrate a neural basis for infinite recursion in neural network activations between layers of the cerebral cortex, and the origin of neocortical lamination can be traced back to the sensory cortex of early mammals (Treves, 2005: p. 277). A conceptual-intentional system involved here is naturally a part of the evolution of the recursive functions of neocortical microcircuitry, just as Chomsky explains the evolution of grammar for the human mind. Therefore, the evolution of the mind is the history of the enclosure and recursion of the homeostasis of the brain so as to mutate itself, working on its own cognitive system within the same system. The mutation of the mind appears as a recursive version of the self-referential paradoxical behavior of intelligence. The intelligence of Homo sapiens as a species must have had a shared evolutionary algorithm that repeatedly folded back on itself to mutate cognition so as to expend less energy over a shorter path. Gödel's mathematical fixed point needs an actual infinity, like the natural numbers. Even though we cannot consciously count such infinite brain activity for the naturalization of the self-reference of the human mind, we can assume infinity in the multidimensional and unconscious depth of paradoxical vortices. For our basic needs here, we have to assume and apply certain premises of such an evolutionary algorithm (EA) of human intelligence1. We cannot develop such an algorithm in this article. The history of philosophy is based on the search for such an algorithm in nature or the mind. Aristotle's categories are both in nature and ultimately in the mind; Descartes' cogito reduction, Locke's simple and complex ideas, Leibniz's idea of finding the algorithm of the human mind, and Kant's theory of a priori categories as an algorithm of the mind continue the same search. Hegel's categories evolve on three levels and form a common historical Spirit. Although Hegel uses very different language and inspiration, his dialectical algorithm is quite useful for imagining the theory we will demonstrate here.
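The claim that recursion generates an infinite range of expressions from a finite set of elements can be made concrete with a toy grammar. The lexicon and the single embedding rule below are hypothetical, chosen only for illustration.

```python
import random

# A finite lexicon and one recursive rule suffice to generate an
# unbounded set of sentences: S -> NP V ("that" S)?
NOUNS = ["the child", "the bird", "the idea"]
VERBS = ["knows", "says", "believes"]

def sentence(depth):
    """Recursively embed clauses up to the given depth."""
    s = f"{random.choice(NOUNS)} {random.choice(VERBS)}"
    if depth > 0:
        s += f" that {sentence(depth - 1)}"
    return s

print(sentence(3))
# e.g. "the bird believes that the child says that ..."
# Raising `depth` yields ever-longer novel expressions from the same
# finite elements; the recursion itself imposes no stopping point.
```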

As for consciousness, it exists neither as a substance nor as a function. It is only a mode of the acts of intelligence; it is about being in an intention, in a connection, in a purpose. Intentionality helps intelligence navigate through paradoxes without cognitive obstacles. As Husserl said, consciousness is not something, but it is always about something; "every conscious process is, in itself, consciousness of such and such" (Husserl, 1999: p. 33). In particular, consciousness is not the act of considering itself as an object, but the act of considering an object other than itself. Consciousness is nothing but intentionality. It presents objects from the external world for the intelligence to operate on. Consciousness is an act of transcending the limits of the mind; it is the link between mind-independent reality and the mind, or between the external world and the functions of intelligence. These links also belong to individual existential stories in cultural life.

3. Formal Constitution of the Hybrid System of the Mind

So far, we have tried to delineate some of the well-known underpinnings of the problem of the formal constitutional limits of the mind. Now, we will attempt to create a formal constitution for these underpinnings. The mind is evolutionarily constructed through a formal enclosure of its recursive functioning, models itself in total accountability, and then evolves a consciousness to transcend its own enclosed limits.

It is a living self-referential system that encloses and transcends itself. Intelligence (I) is a function of the brain and is intrinsically tied to the cognitive power of the brain. Consciousness (C) is an intermediary action involving objects in the function of intelligence. Consciousness borrows objects from the external world (W).

Even if phenomenal consciousness borrows some mental properties as objects from the general structure of intelligence, it externalizes the objects and “puts them in front of” the subject. This is the meaning of objectum in Latin. Consciousness also means intending something other than itself. When it treats itself as an object, it has made itself an alien external object. Putting all these definitions in order, we arrive at Formula (1):

I = {C(W) | C ∉ C} (1)

This is essentially Russell’s class-theoretic paradox. The formula for the function of intelligence consists only of the set of all sets that are not members of themselves. When we arrive at such a perfect set, it immediately falls into a self-referential paradox. Its self-referentiality appears as a paradox, as in Formula (2). If (I) takes itself as a member, the quality of its members is not self-referential; therefore, it does not take itself as a member. If (I) does not take itself as a member, that is the quality of its members; hence, it becomes a member of itself:

(I ∈ I) → (I ∉ I), (I ∉ I) → (I ∈ I) (2)
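The paradox can even be mimicked operationally if a "set" is represented as a membership predicate, as in the following minimal sketch; asking whether the Russell predicate applies to itself sends the evaluation into exactly the endless self-reference that Formula (2) describes.

```python
import sys
sys.setrecursionlimit(100)   # keep the inevitable blow-up short

# Represent a "set" as a membership predicate: x is in S iff S(x) is True.
def russell(s):
    """The 'set' of all sets that are not members of themselves."""
    return not s(s)

# Is the Russell set a member of itself? Evaluating russell(russell)
# requires russell(russell), which requires russell(russell), ...
try:
    russell(russell)
except RecursionError:
    print("self-membership of the Russell set cannot be evaluated:")
    print("each answer immediately demands its own negation")
```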

This is the semantic paradoxical behavior model of consciousness, which had been similarly depicted within Priest's inclosure schema (Priest, 1995: p. 172). Consciousness is the solution to the paradoxical nature of intelligence, and it emerged as a result of an evolutionary process. Intelligence can be seen at any level of the cognitive network of living systems. The logical behavior of such a system is considered recursive. A recursive function defines the next state of a series by referring back to earlier states and ultimately to a base case; it runs by referring to itself, with nothing in the definition limiting the repetition of the function. For example, the irrational number pi (π) is the circumference (c) of a circle divided by the diameter (d) of the circle. The division carries a remainder forward at every step:

π = c/d = 3.141592653589793…

The definition of certain recursive functions yields self-referential sets. The paradoxicality of a self-referential set also relies on the principle of negation. Gödel's proof of incompleteness and the halting problem of the Turing machine were based on the negation of a self-referential statement. For example, we can use the computation of π for the halting problem; it is a self-referential recursive function. We may attempt to write a program or imagine a machine that will enumerate the total states of the function and determine where the recursion will stop. However, when the program cannot find any indication to terminate the repetition in the previous cases, it falls into an infinite loop and never stops. The challenge is to find a general program that will decide where to stop this kind of recursion. This is a decision problem, and it is mathematically undecidable, because mathematical definitions must be universally valid for any program. In reality, programs stop the repetition at some determined limit, or such repetitions in nature stop due to natural causes. This is the case for a self-referential axiomatic system as well as for social organizations or institutions formed according to their own constitutional principles. Penrose proposed that only human intelligence can decide whether or not such a program will work and simply predict the complete evaluation of a self-referential function. Contrary to Penrose's famous argument, deep-learning artificial intelligence is chasing the same possibility as human intelligence: to push forward and transcend its own limits.
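The observation that real programs stop such recursions only at some determined limit can be illustrated with a sketch. Here we assume the Leibniz series for π (one of many ways to compute it); the series contains no internal signal to stop, so the code must impose its own halting criterion.

```python
def pi_leibniz(tolerance=1e-6, max_steps=10_000_000):
    """Approximate pi = 4 * (1 - 1/3 + 1/5 - ...).
    The series never terminates on its own; we impose a stop."""
    total, k = 0.0, 0
    while k < max_steps:                 # externally imposed limit
        term = (-1) ** k / (2 * k + 1)
        total += term
        if abs(term) < tolerance:        # our chosen halting criterion
            break
        k += 1
    return 4 * total

print(pi_leibniz())   # ~3.14159 -- the accuracy is set by our decision,
                      # not by anything internal to the recursion
```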

4. A Genetic Algorithm for Self-Referential Hybrid Mind Models

Here we aim to draw the genetic algorithm (GA) of our mind. Over the course of brain evolution, the human mind has evolved in such a way that it can be modeled as follows: The mind embodies a self-referential paradoxical axiomatic formal system; meanwhile, it has developed its own algorithm to transcend these formal limits. Intelligence must have an evolutionary algorithm (EA) that includes concepts, objects, principles, and operators throughout biological evolution. Consciousness, which emerged as a result of the evolution of intelligence, developed a genetic algorithm to overcome the paradoxical formal limits of intelligence with internal and external intentions. Thus, we can say that the mind is an algorithm consisting of two algorithms: evolutionary and genetic. The genetic algorithm of the mind emerged from its evolutionary algorithm, and it is an element of the EA. The genetic algorithm is a map of internal and external intentionality in consciousness, consisting of multi-layered systems and subsystems formed as a result of evolution. Now, we will examine the subsystems of the mind and see how this GA works through these subsystems to overcome the constitutional limits of the mind. Since it is not necessary to see the evolutionary constitution process of the evolutionary algorithm in which the genetic algorithm works, we will assume that we have such an evolutionary algorithm consisting of principles, as in Table 1, and show the functional map of the GA over the two subsystems.

Table 1. Evolutionary algorithm with the categories of intelligence.
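Before turning to the layers, recall the standard GA machinery of selection, crossover, and mutation that the model borrows. The following is a generic textbook-style sketch; the bit-string encoding, population size, and bit-counting fitness function are arbitrary illustrations, not part of the mind model itself.

```python
import random

def genetic_algorithm(fitness, length=16, pop_size=30, generations=60,
                      mutation_rate=0.02):
    """Evolve bit-strings toward higher fitness: the optimization
    engine that a GA-style model of consciousness would rely on."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]             # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy task: maximize the number of 1-bits in the string.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```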

4.1. Layer 1: Sentience

Living systems are considered self-organizing units composed of self-generating, recursive cognitive processes. They copy, replicate, and even mutate their genetic coding system to maintain their circular organization and adaptability in their environment. A living system is a closed unit that interacts with its interior and exterior as a cognitive process from within sentience to maintain its unity and identity. Maintaining its identity by interacting with its components makes it a self-referring system. Maturana & Varela (1972) call such living homeostatic systems "autopoiesis," meaning self-creation. According to the EA above, there must be at least a quadruple core principle for the receptivity of an autopoietic system. It needs to be able to affirm and negate internal and external spatial qualities distinguished in the system in order to organize the components, and to count the temporal quantities of plurality and unity in order to re-code and reproduce itself. Thus, affirmation-negation and plurality-unity are the four operational categories required for any living system, and nervous systems evolve according to them. Nervous systems are organized into networks that are more complex and, hence, have well-defined morphological classes. The first evolutionary layer or subsystem of the function of intelligence can be monitored through the horizontal coupling of plurality and affirmation together with the passive participation of unity and negation; correlatively, the activation of plurality requires the possibility of unity, and affirmation requires the possibility of negation.

Intelligence: It is the basic processor of the cognitive evolution of neural systems. We can characterize this basic level as a sentient system. The external data of the environment and the internal data of its own components are experienced as an interior cognitive organization. Here, individual external objects are internally classified as types in order to recognize what they are. Each type of object is recognized as a sign, such as the relationship between meat and the ringing of a bell in Pavlov's experiments. If we consider the classical prototype proposition about Socrates, we can express it as, "A particular one of these is a man" (∃x). This is the basic data-processing function of intelligence. There is a horizontal process between the signs, extending from neuron to neuron. The process contains analog information with mostly physical properties and rarely some mental properties like color and pain. After affirming the presence of a particular object in plurality and identifying the type of the object, the organism either does not move or does move but cannot voluntarily stop. The following steps simply go on, one by one, according to causality, and each step depends on the expected possibility of cause and effect. For example, when an organism encounters a piece of meat, when the sign of hunger matches the sign of meat, the organism attacks the meat and naturally stops when the sign of hunger disappears. At this level, there is no halting operator because there is no active limitation or negation principle and no self-referential mind model yet. Negation and unity can merely play a potential role in the vertical coupling of the GA to activate affirmation and plurality, respectively. The recursion runs horizontally through innumerable repeats without control and stops due to various natural causes. Biologically, the reptilian brain stem and cerebellum are characterized by these qualities. The cerebellum is fundamental to intelligence (Buckner, 2013: p. 813). The cognitive functions of intelligence (I) run within sentience (S):

I = {S(W)} (3)

4.2. Layer 2: Psyche

Relationships between individual objects and their classified types, and relationships between types, can evolve by learning the ways in which types are classified and, thus, by learning to classify the classifications. Thus, the second, or meta-system, evolves the first subsystem; the relatively passive usage of intelligence to recognize species translates into active transactions between members and classes. It is a recursive vertical movement and evolution that creates shortcuts and meta-rules, namely principles, symbols, concepts, and defined class relations, and then it can mutate the topography of the system. This active operation of intelligence is also a horizontal coupling movement in the same recursive self-referential movement that maintains the closed unity of the system, which identifies itself as an internal organization by raising its difference as a subject, namely, by unifying its own actions in negating the others. It is the self-identification of a self-referential system that encloses itself and creates an image of itself within it.

Reason: Reason is an exponential function of intelligence (R = I²). In other words, reason is not an independent entity; it is merely a specific manifestation of intelligence. Reasoning is the duplication of the functions of intelligence on itself to run in a shortcut by using meta-rules and meta-classes over subclasses. Reasoning involves the evolution and evaluation of principles. These are realized through the vertical coupling of GA between signs and symbols, neurons and concepts, and types and their genera. This process contributes to maintaining the unity of the self-image. Active participation of classes in the process contributes more mental properties next to physical properties, at least the mind itself and an image of the ego. The logical relationships between systems and subsystems, classes and members, genera-species and singulars, respectively, are the basis of reasoning. We can mark this logical level with the proposition that "the singular one called Socrates is mortal." "Mortal" belongs to a more abstracted meta-class than "man." The random organization of analog information in the previous subsystem starts to transform into a well-defined organization with digital information. There is also horizontal coupling between abstract principles or concepts, like being mortal and alive or being a legitimate subject, among others. Defining its own identity in an accepted, legitimate way requires close control over the possibilities of emphasizing the differences and negating the others in a comparison. It gives the system psychological self-control over internal processes and limits its recursive processes. However, it is instinctive, weak self-control without logic. The limbic system of mammals possesses such a self-regulatory, emotional, and psychological mechanism, which gives the brain the ability to process more with less energy and increases its chances of survival amidst competition. At this level, intelligence operates on objects through the psyche (P):

I = {P(W)} (4)

4.3. Layer 3: Consciousness

Consciousness, the third layer, is not a system per se, but it is the conductor of the previous subsystems, layers 1 and 2. Consciousness is a quality of the GA, and its mission is to process the principles of the EA. It consists of functional relations between symbols, that is, judgments between concepts. We used Socrates as an example to form our judgment. We first took the proposition "he is a particular one of these men" and, secondly, the proposition "the singular man Socrates is mortal," and in the third layer, we formed our judgment "all men are mortal" (∀x) via the singular man Socrates by making "man" the middle term. However, "Socrates" in the judgment has no semantic significance for the individuality of Socrates; it functions as a universal concept; that is, "Socrates" in the judgment indicates an idea. Information, which is the object of cognitive processing, is a digital, realistic illusion constructed in the form of judgment. Recursion operates on the previous two subsystems to optimize all their cognitive functions to maintain the identity under the totality of "Ego" as a universal concept. Recursive motion constantly goes back and references the system itself while operating within the system, just as in the paradoxical cases of Gödel and Turing. It is paradoxical because the recursion occurs within logical forms, and consciousness logically rejects the self-referencing of the self-image of its identity since, by its very nature, it can only have intentions towards something else. As a result, the system running with its subsystems seems incomplete and has no criterion for determining when it will stop for any final decision or action. This is the paradoxical position of Formula (1) mentioned above: I = {C(W) | C ∉ C}. It is not a simple logical formula but rather the natural construction of the mind, which is the ratio essendi of consciousness. We will therefore name it the "existential paradox."

The long evolution of the mind has been rewarded with consciousness at the price of the paradoxical nature of intelligence. It is a result of the vertical coupling of the previous two subsystems (Maturana, 1999), and this self-referential evolution mutates the topography of the third system and affects and directs the operation of the others. Just as the vertical coupling played a pivotal role in the evolution of the EA, it will reciprocate and cause the GA to exert influence over the subsystems. After all, the system as a whole both falls into paradoxicality and creates ways of dealing with this paradoxicality. Consciousness somehow goes through paradoxicality and can make some rational decisions about where and how to stop the process. Overcoming the problem of the self-referential paradox is a victory for the genetic algorithm, and it is also the target of this work.

Consciousness, as a natural behavior of intelligence, is a result of the evolution of the neocortex. Scientifically, there are details of this evolution that are not yet known, but there are some quite satisfactory explanations. Neurological experiments show that consciousness is a result of the recursive and self-referential functions of intelligence in a spatiotemporally oriented, egocentrically extended domain. This recursive function operates on feedforward circuitry that processes the basic orientational and self-locational schema independently of external actions. The issue of energy efficiency throughout biological evolution necessitates some anticipative feedforward processing mechanisms (Peters, 2010). The algorithmic schema of the mechanism is based on an organization of the whole system that limits itself in the form of a judgment to maintain the totality of the self-image. In the mode of necessity, the subject defines itself as the judge of its own judgment by providing the exact measure of its own limits without comparison to others. The disposition of its relationship with others is autonomy. Autonomy is a natural tendency for any higher intelligence and is a result of learning processes such as deep learning and autopoietic operations. Autonomy is a non-paradoxical, self-referential operation.

If we consider the whole system, the GA vertically distributes the EA principles to the subsystems through the quadruple core principles of the two subsystems, contributes to their horizontal coupling, and aggregates their results through vertical coupling. Transforming a principle into another system requires that the principle in that system have a similar quality. For example, for optimizing the quantity of a totality, transforming plurality to unity needs to run via negation; thus, the unity of negation of plurality is optimized to transfer them to the quantity of their totality. This algorithm runs for consciousness and decision, which is an optimization of the coupling processes of the subsystems.

The system starts with the quadruple core principles and their full internal organization. Next, they activate the first subsystem and work with the environmental data inputs to classify them by type. They transform themselves into the second subsystem under the optimization task of the GA via the core principles. Therefore, the two subsystems work together in such a recurrent way as to optimize class-member relations and realize the self-image in the second subsystem. The final optimization task transforms the outputs of the whole system into the GA so that the GA is ready to complete the self-referential operations for decisions without paradoxicality. Although the outputs of the quadruple core are directly determinant in the decision processes of the will, the determining power of the open-ended outputs decreases as they move away from the center. Still, the ultimate goal of rationality is autonomous self-realization within the entire system of EA principles, in the mode of necessity. Figure 1 shows an AI model of the GA map exhibited here.

Figure 1. The operational map of the GA as an AI model.

Will: Will is an exponential function of reason (W = R² = I⁴). Even though autonomous decisions may offer themselves as necessary judgments, consciousness works based on the faculty of will and can construct many different necessities. However, even then, there can be no rational meta-will capable of deciding which line of reasoning is both rational and necessary. To prove that "Socrates is mortal," he could be sentenced to death and executed, or it could be logically deduced. This is a historically creative (if possible) decision domain for the will. Will is the highest cognitive faculty of the mind. Its basic problem is rationality, namely, deciding which reasoning is rational. The optimization task of the GA over the two subsystems is not fully computable, even if we use a Boolean network. If we rely solely on a Boolean system like decision theory, game theory, or fuzzy logic, we will inevitably encounter the issue of undecidability. At best, we can identify some optimal probabilities, but we have no compelling reason to choose one option over another. An impression of the necessity of a judgment originates from the intentions of consciousness and the full implementation of the genetic algorithm of the mind. The hybrid system, consisting of a Boolean network and an evolutionary (and genetic) algorithm, protects the self-referential mind from paradoxicality. The rationality that will break the formal limits of self-referentiality is not, as expected, limited to the rational optimization of the intentional tasks of consciousness; this rationality will then be able to go beyond intentionality.
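The undecidability of choice can be made concrete with a toy example, with options and utilities assumed purely for illustration: a Boolean scoring of alternatives can be exhausted and still leave a tie that the formal evaluation has no means to break.

```python
# Two courses of action, each scored by the same Boolean criteria.
options = {
    "prove mortality by deduction": {"rational": True, "feasible": True},
    "prove mortality by execution": {"rational": True, "feasible": True},
}

def score(criteria):
    return sum(criteria.values())   # naive Boolean optimization

best = max(options.values(), key=score)
tied = [name for name, c in options.items() if score(c) == score(best)]
print(tied)
# Both options come out "optimal"; the formal evaluation is exhausted
# and still cannot decide. On the paper's view, the actual choice must
# come from intention, not from the optimization itself.
```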

5. Intentionality and Transcending the Formal Limits

The quadruple core principles of receptivity are the point of departure for cognitive processes in any sentient being. Affirmation-negation and plurality-unity work together to respond to any cause. These are the gateways for moving in and out of any cognitive system, from coding DNA to worshipping a goddess. These gates become intentions of consciousness, where the mind has autological or heterological intentions. Autological intentions are directed towards internality, either inside or outside, and heterological intentions are directed towards externality, either inside or outside. We have seen a gradual transformation between subsystems, from the use of analog information and physical entities to the use of digital information and mental entities. In the third layer of the GA, consciousness (C) heterologically receives physical entities (p) from the external world (W) and works on them, or autologically receives mental entities (m) of intelligence (I) and works on the mind itself. After all, Formula (1), I = {C(W) | C ∉ C}, can be exhibited as Formula (5) in the light of intentionality:

I = ⟨C, W⟩: ∀p ∈ C [(C(p) → W(p)) ∧ (W(p) → C(p))]; ⟨C, I⟩: ∀m ∈ C [(I(m) → C(m)) ∧ (C(m) → I(m))] (5)

Each part of Formula (5) is correlated with one of the quadruple nuclei of the EA. The autopoietic character of intelligence works through the intentionality of consciousness between the mind and mind-independence. Like any living system, the mind is a closed system that experiences the outside from the inside. Here is the function of intentions: they cross the border by creating an illusion, constructing conscious experiences and theories as if self-referential minds had no formal limits. Self-referential, non-paradoxical hybrid minds can decide how and where to stop a recursive computation according to the limits of their intellectual and cultural intentions. Below, we can see how the genetic algorithm optimizes the subsystems via the four basic intentions for some complex, intentional, and multidimensional decision processes.

1) ∀p ∈ C [(p ∈ C) → (p ∈ W)]: For all physical entities (p), if they are mind-independent physical entities experienced in phenomenal consciousness, then they are physical entities of the mind-independent world. This results in an empty image of the idea of "Cosmos" as a tautology. It is the heterological intention towards externality outside. We believe in the spatial extension of a mind-independent reality in our common sense by affirming its existence. The GA vertically activates the principle of affirmation, and the intention over the idea of Cosmos horizontally activates the first subsystem. If we employ Kant's language, we can say that Cosmos is a "thing in itself" for our intelligence. Mind-independence is mind-independent, beyond our imagination and cognition. In philosophy, this is the problem of ontology, and it inhabits the mind as a core problem. At the psychological level, the illusion of affirmation turns into a pathological defense mechanism as a negation of mind-independent reality and may even turn into the idealistic sense of a sage. Any one of these illusions works to transcend the formal limits of paradoxicality, but it is an empty set due to the nature of a tautology and needs to be filled up with illusion.

2) ∀p ∈ C [(p ∈ W) → (p ∈ C)]: For all physical entities (p), if they are physical entities of the mind-independent world, then they are known physical entities, just as those experienced in phenomenal consciousness in the mind. This is a contradiction: it says that mind-independence is in the mind. This contradiction is full of ideas about how to know and theoretically organize the idea of Cosmos we believe in. It is a model of the mind in general, so anyone could be born into such a ready-made theoretical universe. It can be called a "micro-Cosmos", which fills up the empty idea of Cosmos. The micro-Cosmos is the model of the mind that is available in a culture full of inspirations, theories, beliefs, etc. Even though it is a contradictory idea, it aids in understanding mind-independence. It has a heterological intention toward externality inside the mind. In the mind, the plurality of things can be counted and organized according to the formal structure of the micro-Cosmos model at hand. Algorithmic activation of the principle of plurality also activates the intention of micro-Cosmos with the belief of truth and intelligibility. As a result, intelligence surpasses formal boundaries and maintains cognitive clarity amidst illusions. The problem of the possibility of truth and knowledge, as part of the constitutional nature of the mind, is the basic problem of epistemology in philosophy. In psychology, it turns into a neurotic problem of the defense mechanisms of suppression and intellectualization and may cause ideological rationalizations and excellent anthropomorphic constructions.

3) ∀m ∈ C [(m ∈ I) → (m ∈ C)]: For all mental entities (m), if they are mental entities belonging to general intelligence in the mind, then they are mental entities experienced in phenomenal consciousness in the mind. It is a tautology. In this relationship, consciousness encloses its individual self-referential recursions so as to identify itself as a mental entity and achieve unity by using some mental entities in the general intelligence where it historically finds itself already. The genetic algorithm that operates here, through the principle of unity, gives a rational reason to be a unique person for an Ego in a society. This unity can be called "Ego". The idea of Ego has an autological intention towards internality inside. We believe that we are who we are, and whatever justifies our inner voice is true beyond doubt. However, Ego is a tautological concept and a null set that needs to be filled up by its codomain. In philosophy, this algorithm indicates the basic problem of ethics: Is it possible to find in our minds a universal ethical law necessary for it to be the basis of our unique Ego? Such an Ego has been defined by some contingent objective ethical principles that could easily be misrepresented by hypocritical moralists. The problem is very close to the surface of consciousness and is an active, intense, and popular issue. Culture may cover the other constitutional problems of the mind perfectly, except for this one. Immaturity of the Ego is a very common problem in psychology, and as a defense mechanism, it produces passive-aggressive behaviors, projections, and schizoid fantasies such as the afterlife, aliens, etc.

4) ∀m ∈ C [(m ∈ C) → (m ∈ I)]: For all mental entities (m), if they are mental entities experienced in phenomenal consciousness, then they are mental entities that exist in a general intelligence beyond phenomenal consciousness. It is a contradiction: there are entities that are experienced in consciousness but, in the meantime, are beyond individual conscious experience, which can specifically define general intelligence. We may call this idea "super-Ego". Super-Ego models are vital for the generation of egos in culture. They are contradictory cultural forms full of ideas and inspirations; therefore, they can fill the empty egos in their society. The idea of super-Ego, which is based on the negation of the existence of mental entities experienced in consciousness, attributes the existence of these mental entities to a general intelligence beyond conscious experience. Here, the GA operates via the principle of negation to negate an infinite number of disjunctive judgments representing the ego models of a community, or the infinite disjunctions of representations of a belief system in individual conscious experiences, optimizing the negations of the egos and transforming them into the existence of some super-Ego models as an intellectus divinus, such as father sky, mother earth, kings, gods, or states. This algorithm has an autological intention toward internality outside. In this way, believing in the illusions of some super-Ego models present in culture allows psychologically acceptable ego models to be distributed to individuals in a cognitive experience and to transcend the formal limits of the mind. The fourth intention is also the problem of ideology in philosophy: Is there such a universal lawmaker who can equally distribute justice back to each individual's consciousness regardless of his or her differences? This concept can be an ideal and legitimate state in political philosophy, or it can also be a model of any god in the philosophy of religion. In psychology, the maturity defense mechanism of an Ego aligning with the super-Ego causes refuge in social virtues. Thus, wisdom will also be the final illusion.
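To make the quantifier structure of the four schemata concrete, they can be checked mechanically over small finite extensions. The sets in the sketch below are assumed purely for illustration; on such toy models the schemata come out contingent on the chosen extensions, whereas the tautology and contradiction readings above concern the concepts themselves.

```python
# Toy extensions, assumed purely for illustration:
# W = physical entities of the mind-independent world,
# C = entities given in phenomenal consciousness,
# I = mental entities of general intelligence (named I_ here).
W = {"stone", "tree", "meat"}
C = {"stone", "tree", "ego"}
I_ = {"ego", "number"}

def implies(a, b):
    return (not a) or b

schemata = {
    "1) (p in C) -> (p in W)": all(implies(p in C, p in W) for p in C),
    "2) (p in W) -> (p in C)": all(implies(p in W, p in C) for p in C),
    "3) (m in I) -> (m in C)": all(implies(m in I_, m in C) for m in C),
    "4) (m in C) -> (m in I)": all(implies(m in C, m in I_) for m in C),
}
for name, holds in schemata.items():
    print(name, "holds" if holds else "fails", "on these toy extensions")
```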

6. Conclusion: If the Cost Is Illusion, Where Is Rationality?

Even though we are individuals who seek a mysterious depth in our self-conscious experiences, striving to find our true essence within an authentic existence or as a substratum in a pantheistic nature, rationality follows an externally determined and predictable mechanism. Therefore, the journey following the mechanism outlined above cannot be rational at all because it would neither be our journey nor an original creation; briefly, the cost of reasoning is illusion and alienation. Rationality is the pursuit of original self-creation, the synthesis of truth and freedom as two sides of the same medallion, a creative aesthetic act to match up our essence and existence against illusion and alienation, or to be a dragon catching its own tail. It seems very wise and satisfying to follow an idealistic journey by believing that this Cosmos is not real but a reflection of a universal, divine mind, creating a theoretical model in our micro-Cosmos by divine inspiration, being blessed in heaven, and deserving to be glorified after devoting one’s life to the virtues of one’s community and to those who deserve magnanimous treatment. It would be a very human, highly spiritual, and valuable life journey that would take us from the bottom to a certain altitude. However, it would be an illusion, a copy-paste life, and a blind operation of the mechanism of so-called rationality. Well, if not here, where is the place of that ideal target called “rationality”? Perhaps one may jump into a conspiracy that says “Remain in a safe belief system; otherwise, you will fall into the absurdity of an infinite regress by merely postponing the mystery of the spiritual essence after each scientific step is taken.” Fortunately, this is not the case.

It is a well-known problem that a formal axiomatic system appears paradoxical when axiomatized by its own system, as represented by the famous models of Gödel and Turing. The human mind is not just a formal system; it is a hybrid system that works with a formal system and an evolutionary algorithm. Evolution developed a self-referential axiomatic formal system that falls into paradox, but it also produced a solution to the paradox with a genetic algorithm. It seems that the biological evolutionary process of the brain has chosen a solution similar to Tarski's hierarchies, divided the cognitive system into subsystems, and developed hierarchical relationships in neural networks. We similarly modeled how the paradoxical problem of self-referential systems could be overcome by designing a hybrid system combining Boolean formal systems with certain biological behavior algorithms. If we consider an evolutionary algorithm model as above, it can be seen that the EA has developed a genetic algorithm that can provide some actual solutions, with intentionality, to transcend formal boundaries. On the other hand, intentionality not only transcends formal limits but also creates illusions. Therefore, although the rational steps taken to solve the problems are successful, they create new kinds of problems.

There are other types of artificial intelligence that utilize a hybrid model with such a GA, which evaluates itself recursively via a metaprogramming system. For example, the hybrid system of SOCAIN (Klüver, 2017) matches the above system remarkably well. SOCAIN is an artificial intelligence model developed as a social system: "SOciety consisting of CA and IN". The first subsystem of SOCAIN, called CA, is a cellular automaton composed of cells that simulates biological organizations with a version of Boolean networks. The second subsystem, the interactive neural network (IN), is a recurrent network that connects the neurons to each other. Both of these subsystems operate through horizontal coupling in their own systems. The GA is a class of EA; it is a vertical coupling system that operates over the previous two subsystems. The GA has some optimization tasks that are performed by simulating evolution. It activates and transforms the subsystems several times, from CA to IN and the other way around, until the final optimization moment, which will be sufficient for self-modelling and self-improvement through environmental adaptation. The map of this GA is the same as our map in Figure 1, and so is the destiny of our systems' solutions: illusion! But this time, due to artificial intelligence, the result would be some scientific illusion models, very well-organized theoretical models. Like any computing system, the results offered by this artificial intelligence can participate in our decision-making processes, and it can be very beneficial in overcoming many unreasonable obstacles in a decision process.
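The flavor of this GA-over-subsystems coupling can be sketched in miniature. The following is not SOCAIN's implementation but an assumed toy: a mutation-only GA (playing the vertical coupling) tunes the rule table of an elementary cellular automaton, which stands in for the CA subsystem's horizontal dynamics.

```python
import random

def step(cells, rule):
    """One update of an elementary cellular automaton under `rule`,
    an 8-bit lookup table indexed by each cell's neighborhood."""
    n = len(cells)
    return [rule[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

def fitness(rule, width=32, steps=16):
    """Reward rules whose final state is half ones: a stand-in target."""
    cells = [random.randint(0, 1) for _ in range(width)]
    for _ in range(steps):
        cells = step(cells, rule)
    return -abs(sum(cells) - width // 2)

# GA over 8-bit rule tables: the "vertical coupling" that tunes the
# horizontal dynamics of the subsystem it supervises (mutation only,
# for brevity; no crossover).
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [
        [g if random.random() > 0.05 else 1 - g
         for g in random.choice(parents)]
        for _ in range(10)
    ]
print("best rule table:", max(pop, key=fitness))
```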

However, we usually do not make decisions by optimizing the data at our disposal. The optimization task is computable and therefore predictable. In the sense of mathematical universality, we are not predictable products of the environment. Instead of optimization, using error elimination, for example, by validating an uncertainty measure in fuzzy clustering, will only provide us with a tolerable level of confidence in our decisions (Sirmen & Ustundag, 2022). In contrast, a conscious decision or act has faith in itself without taking into account some probability of uncertainty. As a self-referential hybrid mind, conscious human behavior can be predicted in light of all the incredible scientific advances achieved today, but not exhaustively. Even if carbon nanotube transistors could work like a biological brain, if autopoietic self-programming computers could better themselves, or even if artificial intelligences could create works of art like a grandmaster, their behavior is a blind process for both them and us. For this reason, no intelligent person can fall in love with a computer. However, humans can often be as zombie-like as machines. We fall in love with an authentic personality that deepens as one gets to know oneself. It is a matter of deep learning. For now, making creative decisions in complex, mysterious, and challenging narratives is still the hallmark of the human mind.

Consequently, irrationality is the original place of rationality, and rationality itself is paradoxical. There are civilizations, philosophies, and logical systems based on paradoxical logic. Western philosophy has come to remember the central role of paradoxicality, like Priest’s (1995) concept of “dialetheia”. Nagarjuna’s Indian logic, Zen Koans in Japan or the Fragments of Heraclitus, and oral literature around the world all use some kind of paradoxical logic. Solving the existential paradox of the human mind as formulated in Formula (1) is possible only for each individual interpretation in action. It cannot be resolved from the third-person perspective in any discipline of a culture or even any scientific theory. Even though scientific theories are the most reliable and realistic illusions, they are still illusions. Scientific theorizing likewise has formal limits in the context of axiomatic systems. Indeed, every great scientific revolution emerges out of this paradox as a solution. Strong and decisive intuitions in an aesthetic creation, any action that moves in time, always have the same origin in the historicity of narratives describing the human species. The paradox takes the form of the prophecy paradox in a creative decision process. Popper (2004) calls it the “Oedipus effect.” An oracle affects the sequence of events and disrupts the historical sequence. If we try to predict the outcome of the previous prediction, the prediction makes itself impossible. It is also the same with the act of aesthetic creation that steps into historicity taken by an individual conscious experience: unpredictable, unique, intuitive, irrational rationality! A creative decision or action whose beauty we expect to have a universal aesthetic value takes place in such a prophecy paradox; it is a step into unknown darkness without a theoretical Archimedean standing point or any rational control mechanism. It is an irrational intuition based on maturity and mastery. Rationality can help transcend formal limits with paradoxical intentionality. The intention to go beyond intentionality, beyond the computable optimization task, creates a hypothetical space for decision. This hypothetical space is where rationality lies. Escher’s art piece Print Gallery can be seen as a symbol of the place of rationality.

In conclusion, we can summarize the result of the article as follows: Deciding the provability of a statement requires a complete formal system set upon its own axioms, and for this reason formal systems are paradoxical and incomplete. Describing the mind presents an additional difficulty: somehow, we must use a formal system in our mind to describe the mind, so we face the same formal limits. We cannot describe the mind, but we can indicate the formal limits of the mind while we are in the same mind. Intelligence in the mind, with its conceptual-categorical algorithm, works like a limited axiomatic formal system; yet the mind is not so limited by formal boundaries that it cannot speak of those boundaries. The mind gathers this skill from its hybrid system of intelligence and consciousness. Intelligence is a formal system developed through an evolutionary algorithm that could be translated into the recursive dynamics of Boolean networks. Interestingly, at the same time, the same algorithm evolved a complex system from within. This complex system is consciousness, which has a genetic algorithm that allows it to transcend the formal limits of intelligence in an unlimited way. We have followed the necessary formal limits of the GA by assuming an EA model, and we have arrived at the GA model as a map of the constitutional intentionality of consciousness. The problem of rationality can be defined as the optimization task of the GA. However, optimizing decisions through intentionality produces illusions. If plain rationality ends up in irrationality, then, paradoxically, irrationality may help to overcome the decision problem and yield rationality. As a result, following the paradoxical nature of intelligence through a decision process looks like the only possible way to demonstrate rationality.

NOTES

1We will borrow the EA of intelligence that we have already developed in category theory: Bilgic (2022). Critique of Rationality. Berlin: Peter Lang.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Bilgic, M. (2022). Critique of Rationality. Peter Lang.
https://doi.org/10.3726/b19049
[2] Buckner, R. L. (2013). The Cerebellum and Cognitive Function: 25 Years of Insight from Anatomy and Neuroimaging. Neuron, 80, 813.
https://doi.org/10.1016/j.neuron.2013.10.044
[3] Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173-198.
https://doi.org/10.1007/BF01700692
[4] Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? Science, 298, 1569-1579.
https://doi.org/10.1126/science.298.5598.1569
[5] Husserl, E. (1999). Cartesian Meditations: An Introduction to Phenomenology (D. Cairns, Trans.). Kluwer Academic Publishers.
https://doi.org/10.1007/978-94-009-9997-8_1
[6] Klüver, J. (2017). Paradoxes, Self-Referentiality, and Hybrid Systems: A Constructive Approach. Open Journal of Philosophy, 7, 48-63.
https://doi.org/10.4236/ojpp.2017.71004
[7] Maturana, H. (1999). The Organization of the Living: A Theory of the Living Organization. International Journal of Human-Computer Studies, 51, 149-168.
https://doi.org/10.1006/ijhc.1974.0304
[8] Maturana, H., & Varela, F. J. (1972). Autopoiesis and Cognition: The Realization of the Living. D. Reidel Publishing Company.
[9] Peters, F. (2010). Consciousness as Recursive, Spatiotemporal Self-Location. Psychological Research, 74, 407-421.
https://doi.org/10.1007/s00426-009-0258-7
[10] Popper, K. (2004). The Poverty of Historicism. Routledge.
[11] Priest, G. (1995). Beyond the Limits of Thought. Cambridge University Press.
[12] Russell, B., & Whitehead, A. N. (1927). Principia Mathematica (2nd ed.). Cambridge University Press.
[13] Sirmen, R. T., & Ustundag, B. B. (2022). Internal Validity Index for Fuzzy Clustering Based on Relative Uncertainty. Computers, Materials & Continua, 72, 2909-2926.
https://doi.org/10.32604/cmc.2022.023947
[14] Treves, A. (2005). Frontal Latching Networks: A Possible Neural Basis for Infinite Recursion. Cognitive Neuropsychology, 22, 276-291.
https://doi.org/10.1080/02643290442000329
[15] Turing, A. M. (1937). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42, 230-265.
https://doi.org/10.1112/plms/s2-42.1.230
