Self-Sustained Boundedness of Logical and Quantal Error at Semantic Intelligence

It is demonstrated that the recently introduced semantic intelligence spontaneously maintains bounded logical and quantal error on each and every semantic trajectory, unlike its algorithmic counterpart, which cannot. This result verifies the conclusion that equal evolutionary value is assigned to the motion on the set of all semantic trajectories sharing the same homeostatic pattern. The evolutionary value of the permanent and spontaneous maintenance of bounded logical and quantal error on each and every semantic trajectory is that it keeps the notion of a kind intact in the long run.

At the same time, algorithmic intelligence, artificially designed and artificially maintained, is not able to keep the logical and quantal error bounded in the long run in any of its three realizations, namely the deterministic, the probabilistic, and the mixed one. In the next section, it is considered in detail why each of these types of algorithmic intelligence exhibits ill-definiteness of the logical error, which results in its unrestrained accumulation in the long run regardless of how small the quantal error is maintained. The lack of restraint over the logical error renders the fundamental task for the design and maintenance of algorithmic intelligence that of establishing the best relation between structure and functionality under the supervision of the "survival of the fittest" paradigm.
Semantic intelligence is a new form of intelligence that arises naturally within the framework of the recently introduced concept of boundedness [1]. It is fundamentally different from algorithmic intelligence both in the means of its physical realization and in its mathematical background. This culminates in the most pronounced difference between them: semantic intelligence acquires the property of autonomous creation and comprehension of information, while its algorithmic counterpart needs an external mind (that is, our human mind) to create an algorithm and to comprehend the output.
Semantic intelligence is executed spontaneously by natural non-linear and non-homogeneous physico-chemical processes subject to a general operational protocol that keeps boundedness intact. By comparison, algorithmic intelligence is executed by means of artificially designed and artificially maintained linear processes. Thus, the physical realization of semantic intelligence by means of spontaneously executed natural physico-chemical processes renders it similar to human intelligence which, to recall, is also executed by means of spontaneous natural processes.
The fundamental importance of maintaining the linearity of all local processes for algorithmic intelligence lies in the fact that it guarantees the parallelogram summation rule at each and every moment and throughout the entire hardware. In turn, the latter keeps the computing process free from local distortions which would otherwise appear as unrestrained local errors.
Consequently, the artificially maintained linearity ensures the re-occurrence of the output of each and every algorithm upon re-occurrence of the same input.
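The role of linearity can be sketched in a few lines (my own illustration, not part of the original formalism): for a linear computing step, the parallelogram (superposition) rule holds exactly, and the same input always reproduces the same output.

```python
import numpy as np

# A linear "hardware" step: the output depends on the input only through a
# fixed matrix, so superposition (the parallelogram rule) holds exactly.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

def step(x):
    """One linear computing step."""
    return A @ x

a = np.array([1.0, -2.0])
b = np.array([0.25, 4.0])

# Parallelogram (superposition) rule: step(a + b) equals step(a) + step(b).
assert np.allclose(step(a + b), step(a) + step(b))

# Reproducibility: re-occurrence of the same input yields the same output.
assert np.array_equal(step(a), step(a))
```

Any non-linear step would break the first assertion, which is precisely why the linearity must be artificially maintained throughout the hardware.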
However, as proven in Section 2, the reproducibility of the outcome is inevitably accompanied by the reproducibility of any sufficiently large logical error once it has been produced.
The reproducibility of semantic computing is provided by the major outcome of the general operational protocol that keeps boundedness intact [1] [2]: the functional metrics are kept permanently Euclidean regardless of the underlying spatio-temporal metrics of the computing system. The Euclideanity of the functional metrics not only provides the uniformity of "units" throughout the entire hardware and at each and every step of computation, but also serves as a crucial ingredient for the central theorem of the entire theory of boundedness, proven by the author.

The following immediate outcome of the concept of boundedness makes the puzzle most intriguing: the central assertion of the concept of boundedness is that only a bounded amount of matter and energy, specific to any local process, is involved; in turn, this implies that the precision of semantic computing is bounded both from below and from above. This is because the computation of each and every number involves energy and matter proportional to the number of its defining digits, and thus is available only for those values which belong to the specific bounded margins of each and every semantic hardware. Thus, the origin of the permanent boundedness of the quantal error from above is evident. Yet the question remains of how the interplay between logical and quantal error is organized so that the precision of semantic computing, bounded from below, does not accelerate the logical error; it will be considered in Section 3.
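A hypothetical fixed-point representation (my own toy model, not the author's construction) makes the double bound on precision concrete: with a finite number of digits there is a smallest resolvable quantum and a largest representable magnitude, so the quantal error is bounded on both sides.

```python
# Hypothetical fixed-point "hardware" with FRAC fractional bits and INT
# integer bits; both parameter names are illustrative assumptions.
FRAC = 8                       # fractional bits -> resolution (bound from below)
INT = 7                        # integer bits    -> range (bound from above)
QUANTUM = 2.0 ** -FRAC         # smallest resolvable step
MAX_VALUE = 2.0 ** INT - QUANTUM

def quantize(x):
    """Round x to the nearest representable value; reject out-of-range values."""
    if abs(x) > MAX_VALUE:
        raise OverflowError("value outside the bounded margins of the hardware")
    return round(x / QUANTUM) * QUANTUM

x = 3.14159
err = abs(quantize(x) - x)
# The quantal error is bounded from above by half a quantum...
assert err <= QUANTUM / 2
# ...and no value below the quantum can be resolved from zero.
assert quantize(QUANTUM / 4) == 0.0
```

The point of the sketch is only the double bound: a finite digit budget simultaneously caps the range (from above) and the resolution (from below).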

Unrestrained Logical and Quantal Error at Algorithmic Intelligence
The major goal of the present section is to consider in detail why algorithmic intelligence is not able to keep the logical error bounded even though the quantal error is kept very small. First comes the case of deterministic algorithms: 1) Deterministic algorithms are artificially designed sequences of logical steps, specific to each case. The verification of each logical step is provided by a positive answer to a cleverly posed question specially constructed for the purpose. In turn, the logic of the algorithms is represented by acyclic directed graphs, where the computing appears as a trajectory connecting an input and the corresponding output following the steps prescribed by a given algorithm. However, a generic property of algorithms is to comprise at least one logical operation "IF". The latter could drastically change the course of a current trajectory by causing "jumps" to distant "branches" of the graph.
The hazardous moment is that such deviations can result not only from the prescription of the corresponding algorithm (the desired outcome) but also from the finiteness of the precision, which produces an unrestrained error (a misleading outcome).
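A minimal sketch of the misleading outcome (a toy example of my own, with hypothetical branch names): a single "IF" node separates two branches by one point, so inputs an infinitesimal distance apart, or a tiny rounding of the tested value, send the trajectory to distant branches of the graph.

```python
def run(x):
    """A toy algorithm as a trajectory through a directed acyclic graph."""
    trace = ["input"]
    y = x * x - 1.0              # a deterministic preprocessing step
    trace.append("square_minus_one")
    if y >= 0.0:                 # the "IF" node: a single point separates branches
        trace.append("branch_A")   # hypothetical branch names
        out = y ** 0.5
    else:
        trace.append("branch_B")
        out = -y
    trace.append("output")
    return out, trace

# Inputs on opposite sides of the "IF" point follow distant branches:
_, t1 = run(1.0000001)
_, t2 = run(0.9999999)
assert "branch_A" in t1 and "branch_B" in t2
```

Near the separating point, any finite-precision blur of the tested value decides between the branches, which is the misleading outcome described above.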
To make this clear, let us present the most pronounced example: the computation of limit cycles as solutions of differential equations inevitably degenerates into a motion on a spiral (ingoing or outgoing, depending on the current realization of the computing), which produces a qualitatively different result in the long run: instead of a bounded cycling motion, it approaches either a steady point or infinity. The inevitability of this behavior is rooted in the fact that the operation "IF" separates two logically different regions by a single point (a line in some cases), while the precision restrains the digits to "intervals" regardless of how small the precision is. Thus, around unstable (and/or neutrally stable) solutions the logical "error" accelerates with each and every step.
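The degeneration into a spiral is easy to reproduce numerically (a sketch under my own assumption of explicit Euler integration, not a claim about any particular solver): for the harmonic oscillator, whose exact trajectories are closed circles, i.e. neutrally stable cycles of exactly the kind flagged above, every Euler step multiplies the squared radius by (1 + dt^2), so the computed "cycle" is an outgoing spiral.

```python
import math

# Exact dynamics: x' = v, v' = -x; exact trajectories are circles (r = const).
# Explicit Euler grows r^2 by the factor (1 + dt^2) at every step.
def euler_orbit(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x   # simultaneous update with the old x, v
    return x, v

x0, v0 = 1.0, 0.0
x, v = euler_orbit(x0, v0, dt=0.01, steps=10_000)

r0 = math.hypot(x0, v0)
r = math.hypot(x, v)
# After 10 000 steps, r^2 has grown by roughly (1 + 1e-4)^10000, about e,
# so r is near sqrt(e): the bounded cycle has become an outgoing spiral.
assert 1.6 < r < 1.7 and r > r0
```

With an implicit scheme the spiral would be ingoing instead; either way, the qualitative character of the bounded cycling motion is lost in the long run.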
2) Next in line comes the probabilistic approach to algorithmic computing. It is grounded on the artificial assignment of specific probabilities to the local interactions with the neighborhood and the environment. The goal is to find out whether a system self-organizes so as to exhibit a collective behavior which, in turn, can serve as a physical background for an information symbol, i.e. a letter in an alphabet. Such self-organization has been established in a number of concrete cases but not as a generic property of a certain class of systems and/or dynamical rules. Yet, self-organization is expected to behave as a type of critical phenomenon. The major flaw of this approach is that any such self-organization, if it exists, is extremely sensitive to even infinitesimal noise added to any steady environment. It is worth noting the difference between the vulnerability to small changes in the environment (a setback) and the robustness to local failures (one of the advantages of the approach). The above vulnerability becomes a stumbling block for the entire approach because it rules out any general operational protocol which could govern transitions among different collective states. Indeed, even the very existence of any such protocol turns out to be inherently contradictory, since the vulnerability to any changes in the environment makes it impossible to define the exact amount of matter/energy and information involved in the substantiation of such a transition. Consequently, this implies the lack of any metric in the state space, which constitutes the ill-definiteness of the characteristics of any transition between collective states. Thus, the logical operations among different information symbols, represented through different collective states, are subject to an indefinite quantal error which is further loaded by the physical inability to substantiate any "jump" between the information symbols.
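A minimal toy model (my own sketch, not one of the concrete cases referred to above) separates the two properties: a majority-rule automaton on a ring repairs an isolated local failure in one step (robustness), yet under persistent environmental perturbation it never settles into any fixed collective state (vulnerability).

```python
def majority_step(state):
    """Synchronous majority-of-three update on a ring of 0/1 cells."""
    n = len(state)
    return [1 if state[i - 1] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

n = 10
uniform = [1] * n                       # a self-organized collective state

# Robustness to a local failure: one flipped cell is repaired in a single step.
damaged = uniform.copy()
damaged[3] = 0
assert majority_step(damaged) == uniform

# Vulnerability to persistent perturbation: flip one (varying) cell per step,
# as a deterministic stand-in for environmental noise; the system never
# reaches any fixed collective state.
state = uniform.copy()
settled = False
for t in range(50):
    new = majority_step(state)
    new[(7 * t) % n] ^= 1
    settled = settled or (new == state)
    state = new
assert not settled
```

The deterministic "noise" schedule is an assumption made for reproducibility; with genuine random flips the conclusion is the same, which is why no protocol over such collective states can fix the cost of a transition.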
3) The mixed case of algorithmic intelligence encompasses those types of algorithms which have both deterministic and probabilistic components. In most cases, the "link" between the deterministic and the probabilistic part is set by means of a specific optimization. Since I have already discussed the origin of the unrestrained and/or ill-defined logical error for the deterministic and probabilistic cases separately, now I will consider only why optimization is not the "remedy" for the problem. The optimization is an artificially set constraint that holds along the entire optimal trajectory. Generally, it is of three types: minimax optimization, Bellman-type optimization, and Pontryagin-type optimization. The common setback of all three types is that their fulfillment along the entire trajectory is accompanied by specific local discontinuities of the optimal trajectory. These discontinuities are hazardous not only for the physical maintenance of the corresponding hardware; they also produce a massive change in the logical error. As an example, the discontinuity of the optimal solution in the Pontryagin type of optimization is a product of the collapse of the current effective "Hamiltonian" and its substitution with a new one on the next part of the trajectory, where it will again collapse when the next discontinuity occurs. Thus, this is not only a problem of the value of the logical error; it turns into a problem of the identity of a system, since the "Hamiltonian" is supposed to hold the identity of an object in physics.
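The discontinuity can be seen in the textbook minimum-time problem for the double integrator x'' = u with |u| &lt;= 1 (a standard example chosen by me, not taken from the text): the Pontryagin-optimal control is bang-bang, jumping between the extreme values, and each jump replaces the current effective Hamiltonian with a new one.

```python
# Minimum-time transfer of x'' = u, |u| <= 1, from (x, v) = (0, 0) to (1, 0).
# The Pontryagin-optimal control is bang-bang: u = +1, then a jump to u = -1.
T = 2.0                      # minimal transfer time for this problem
SWITCH = T / 2               # switching instant

def u_opt(t):
    """Discontinuous optimal control."""
    return 1.0 if t < SWITCH else -1.0

# The control jumps by 2 across the switching instant:
eps = 1e-9
assert u_opt(SWITCH - eps) - u_opt(SWITCH + eps) == 2.0

# Simulate with explicit Euler to confirm the target is reached:
x, v, dt, t = 0.0, 0.0, 1e-4, 0.0
while t < T - dt / 2:
    a = u_opt(t)
    x, v = x + dt * v, v + dt * a
    t += dt
assert abs(x - 1.0) < 0.01 and abs(v) < 0.01
```

The jump in u is not a numerical artifact: it is demanded by the optimality condition itself, which is exactly the hazardous feature described above.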

Boundedness of Logical and Quantal Error for Semantic Intelligence
Since the mathematical and physical grounds of semantic intelligence and its algorithmic counterpart are completely different, it is to be expected that they yield completely different outcomes. In the previous section, it has been demonstrated that algorithmic computing is not able to preclude the accumulation of the logical error although the quantal error is maintained very small. In a nutshell, the root of the problem lies in the contradiction between the logic of the algorithms, which operates with "dots", and the precision, which operates with "intervals".
Further, the optimal trajectory is generically a "choice" of a single line among a volume of all available ones. Thus, each and every operation "IF" acts as a comparison between values of different dimensions which, as is well known, is never well-defined.
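The "dots versus intervals" contradiction is exactly the familiar pitfall of exact comparison under finite precision (an illustration of the general point, not of any specific semantic construction): the logic tests a single point, while the computed value is only known up to an interval.

```python
import math

# The logic operates with a "dot": an exact equality/threshold test.
x = 0.1 + 0.2
assert x != 0.3                      # the dot test fails under finite precision

# The precision operates with an "interval": comparison up to a tolerance.
assert math.isclose(x, 0.3, rel_tol=1e-9)

# Near a threshold, the dot test sends the trajectory to the wrong branch:
threshold = 0.3
branch = "A" if x <= threshold else "B"
assert branch == "B"                 # mathematically x equals 0.3, so "A" was intended
```

Replacing the dot test by an interval test trades the misrouting for an undecided band around the threshold, so neither choice restores a well-defined "IF".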
The goal of the present section is to demonstrate that semantic intelligence permanently maintains a bounded logical error while self-sustaining the quantal error bounded from below and from above. The first clue lies in the fact that the general protocol providing the spontaneous execution of semantic intelligence is organized so that semantic computing operates only with sets of equal dimension.
Let me start the consideration by recalling how semantic intelligence is organized.

Conclusions
The result obtained in the present paper is the permanent self-sustaining of bounded logical and quantal error on each and every semantic trajectory by semantic intelligence. It should also be stressed that these types of intelligence differ in their substantiation: whilst algorithmic intelligence is fully governed by the human mind, semantic intelligence is executed by means of spontaneously executed physico-chemical processes governed by a general operational protocol which spontaneously and permanently maintains boundedness and whose major distinctive property is the autonomous creation and comprehension of information.
Yet, besides the fundamental differences, there is a highly non-trivial synergy between semantic and algorithmic computing, which becomes most pronounced in [2]: on the one hand, semantic computing provides boundedness of the quantal error for the corresponding semantic computation; on the other hand, algorithmic computing, where the "Euclideanity" of the computation is artificially maintained through artificially designed and executed linear processes, "computes" the underlying spatio-temporal structure. In outline, semantic intelligence "computes" functionality while algorithmic intelligence "computes" the underlying spatio-temporal structure. The advantage of that interplay is crucial for the study and exploration of unknown phenomena ranging from nano-devices to cosmological objects.

Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.