Journal of Modern Physics
Vol. 11, No. 2 (2020), Article ID: 97973, 11 pages
DOI: 10.4236/jmp.2020.112010

Self-Sustained Boundedness of Logical and Quantal Error at Semantic Intelligence

Maria K. Koleva

Institute of Catalysis, Bulgarian Academy of Sciences, Sofia, Bulgaria

Copyright © 2020 by author(s) and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 2, 2020; Accepted: January 16, 2020; Published: January 19, 2020

ABSTRACT

It is demonstrated that the recently introduced semantic intelligence spontaneously maintains bounded logical and quantal error on each and every semantic trajectory, unlike its algorithmic counterpart, which cannot. This result verifies the conclusion that equal evolutionary value is assigned to the motion on the set of all semantic trajectories sharing the same homeostatic pattern. The evolutionary value of the permanent and spontaneous maintenance of bounded logical and quantal error on each and every semantic trajectory is that it keeps the notion of a kind intact in the long run.

Keywords:

Semantic Intelligence, Algorithmic Intelligence, Boundedness, Logical Error, Quantal Error, Optimization, Survival of the Fittest, Notion of a Kind

1. Introduction

One of the major concerns of any theory of general intelligence is whether it is ever possible to create a type of intelligence that spontaneously keeps the logical and quantal errors permanently bounded. The question is provoked by the opposition between our human intelligence on the one hand and, on the other, the algorithmic intelligence that has developed rapidly in recent decades and that serves as the grounds both for modern-day computers and for the current comprehension of artificial intelligence.

Human intelligence is executed by means of natural processes and is organized in a variety of individual responses. The persistence of this organization suggests that, in the long run, all individual responses share the same evolutionary value.

At the same time, algorithmic intelligence, artificially designed and artificially maintained, is not able to keep the logical and quantal errors bounded in the long run in any of its three realizations, namely the deterministic, the probabilistic, and the mixed one. The next section considers in detail why each of these types of algorithmic intelligence exhibits ill-definiteness of the logical error, which results in its unrestrained accumulation in the long run regardless of how small the quantal error is kept. The lack of restraint over the logical error renders the fundamental task of the design and maintenance of algorithmic intelligence that of establishing the best relation between structure and functionality under the supervision of the “survival of the fittest” paradigm.

Semantic intelligence is a new form of intelligence that naturally arises in the frame of the recently introduced concept of boundedness [1]. It is fundamentally different from algorithmic intelligence both in the means of its physical realization and in its mathematical background. This culminates in the most pronounced difference between them: semantic intelligence acquires the property of autonomous creation and comprehension of information, while its algorithmic counterpart needs an external mind (that is, the human mind) to create an algorithm and to comprehend the output.

Semantic intelligence is executed spontaneously by natural non-linear and non-homogeneous physico-chemical processes subject to a general operational protocol that keeps the boundedness intact. By comparison, algorithmic intelligence is executed by means of artificially designed and artificially maintained linear processes. Thus, the physical realization of semantic intelligence by means of spontaneously executed natural physico-chemical processes renders it similar to human intelligence which, recall, is also executed by means of spontaneous natural processes.

The fundamental importance of maintaining the linearity of all local processes for algorithmic intelligence lies in the fact that it ensures that the parallelogram summation rule holds at each and every moment and throughout the entire hardware. In turn, this keeps the computing process free from local distortions which would otherwise appear as unrestrained local errors. Consequently, the artificially maintained linearity ensures the re-occurrence of the output of each and every algorithm upon re-occurrence of the same input. However, as shown in Section 2, the reproducibility of the outcome is inevitably accompanied by the reproducibility of any once-produced, large enough logical error.

The reproducibility of semantic computing is provided by the major outcome of the general operational protocol which keeps boundedness intact [1] [2]: the functional metrics are kept permanently Euclidean regardless of the underlying spatio-temporal metrics of the computing system. The Euclideanity of the functional metrics provides not only the uniformity of “units” throughout the entire hardware and at each and every step of computation; it also serves as a crucial ingredient for the theorem central to the entire theory of boundedness, proven by the author and called by her the decomposition theorem. It proves that any bounded irregular sequence (BIS), subject to permanent boundedness of the rates of exchanging matter/energy/information with the current environment and self-sustaining permanent boundedness of the amplitudes of the corresponding terms, shares the following property: the power spectrum of each and every such BIS is additively decomposed into a specific discrete pattern, called homeostasis, and a continuous component, called the noise one, whose shape is universal.

The crucial property is that the specific properties of any homeostatic pattern and the shape of the noise component are insensitive to the details of the statistics of the members of the sequence. As considered in our previous paper [2], the causal relations are concentrated in the discrete pattern, while the noise component comprises no information content, although it commences from well-defined local physico-chemical processes and thus serves as a reservoir for providing an adequate current local response in an ever-changing environment. An immediate outcome of the decomposition theorem is that it provides reproducibility of established causal relations in an ever-changing non-uniform environment, provided the latter is bounded.
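
The additive structure asserted by the decomposition theorem can be pictured numerically. The following is a minimal sketch, not a proof: the construction of the bounded irregular sequence (two fixed spectral lines standing in for a homeostatic pattern, plus increment-clipped bounded noise) and all parameter values are assumptions made purely for illustration.

```python
# Numerical sketch: the power spectrum of a bounded irregular sequence
# splits into discrete lines (the "homeostatic pattern") sitting on a
# continuous noise background.  All signal parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

# Discrete ("homeostatic") component: two lines placed exactly on FFT bins.
f1, f2 = 205 / n, 451 / n
pattern = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)

# Irregular component with bounded rate (clipped step) and bounded
# amplitude (clipped value) -- a caricature of a BIS.
noise = np.zeros(n)
for i in range(1, n):
    step = np.clip(rng.normal(0.0, 0.3), -0.5, 0.5)     # bounded rate
    noise[i] = np.clip(noise[i - 1] + step, -1.0, 1.0)  # bounded amplitude

spectrum = np.abs(np.fft.rfft(pattern + noise)) ** 2 / n

# The injected lines stand out as discrete peaks over the continuous
# background -- the additive structure asserted by the theorem.
for k in (205, 451):
    background = np.median(spectrum[k - 50 : k + 50])
    print(f"bin {k}: line power {spectrum[k]:.0f} vs background {background:.3f}")
```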

It should be stressed that the major difference in mathematical background between semantic and algorithmic intelligence lies in the holding of the decomposition theorem for the former, while algorithmic intelligence is subject to the Central Limit Theorem. It is worth noting that the subjects of the two theorems do not overlap: the subject of the Central Limit Theorem is independent random variables (yet not bounded ones), while the subject of the decomposition theorem is bounded irregular variables (yet not independent ones).

Consequently, the fundamentally different physico-chemical and mathematical backgrounds of semantic intelligence and its algorithmic counterpart prompt the anticipation of completely different outcomes, one of which is the subject of the present paper.

To continue, let us mention that the insensitivity of the specific properties of any set of causal relations to the details of the current environment, obtained by the decomposition theorem, comes at a price: obviously, it is available only under the general condition that both the logical and the quantal error are kept permanently bounded on each and every semantic trajectory. The proof of this assertion constitutes the major goal of the present paper. It is worth noting that an immediate consequence of this claim is that semantic intelligence shares with human intelligence the property of assigning equal evolutionary value to the variety of all individual responses in the long run. The difference with the “survival of the fittest” paradigm should be stressed: the latter applies in a constant non-uniform environment, while semantic computing operates in a rapidly and permanently changing non-uniform environment. The latter result is obvious, since no long-term preference for any semantic trajectory is available in an ever-changing non-uniform environment, whilst a constant environment renders the differences among individuals ever-growing in the long run.

The following immediate outcome of the concept of boundedness makes the puzzle most intriguing: the central assertion of the concept of boundedness is that only a bounded amount of matter and energy, specific to any local process, is involved; in turn, this implies that the precision of semantic computing is bounded both from below and from above. This is because the computation of each and every number involves energy and matter proportional to the number of its defining digits and thus is available only for those values which belong to the specific bounded margins of each and every semantic hardware. Thus, the origin of the permanent boundedness of the quantal error from above is evident. Yet, the question remains of how the interplay between logical and quantal error is organized so that the precision of semantic computing, limited from below, does not accelerate the logical error; it is considered in Section 3.
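
The two-sided bound on precision can be pictured with a toy representable-value model. This is a minimal sketch under assumed parameters: the resolution step EPS and the saturation amplitude AMP are hypothetical illustrative values standing in for the lower and upper margins of a semantic hardware.

```python
# Toy model of a value representable by a hardware whose precision is
# bounded from below (smallest resolvable step EPS) and whose magnitude
# is bounded from above (saturation at AMP).  EPS and AMP are
# hypothetical illustrative parameters.
EPS = 1.0 / 256.0   # lower margin: one "quantum" of precision
AMP = 8.0           # upper margin: largest representable amplitude

def quantize(x: float) -> float:
    """Round to the nearest representable value and saturate."""
    q = round(x / EPS) * EPS          # quantal error never exceeds EPS / 2
    return max(-AMP, min(AMP, q))     # amplitude never exceeds AMP

print(quantize(3.14159))   # 3.140625  (bounded quantal error)
print(quantize(100.0))     # 8.0       (saturates at the upper margin)
```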

2. Unrestrained Logical and Quantal Error at Algorithmic Intelligence

The major goal of the present section is to consider in detail why algorithmic intelligence is not able to keep the logical error bounded even though the quantal error is kept very small. The case of deterministic algorithms comes first:

1) Deterministic algorithms are artificially designed sequences of logical steps, specific to each case. The verification of each logical step is provided by a positive answer to a cleverly posed question specially constructed for the purpose. In turn, the logic of an algorithm is represented by an acyclic directed graph, where the computing is represented as a trajectory connecting an input and the corresponding output following the steps prescribed by the given algorithm. However, a generic property of algorithms is to comprise at least one step involving the logical operation “IF”. The latter can change drastically the course of the current trajectory by causing “jumps” to distant “branches” of the graph. The hazardous moment is that such deviations can result not only from the prescription of the corresponding algorithm (the desired outcome) but also from an unrestrained error due to the finiteness of the precision (a misleading outcome). To make this clear, let us present the most pronounced example: the computation of a limit cycle as the solution of a differential equation inevitably degenerates into motion on a spiral (ingoing or outgoing, depending on the current realization of the computing), which produces a qualitatively different result in the long run: instead of a bounded cycling motion, it approaches either a steady point or infinity (a numerical sketch of this degeneration is given after this list). The inevitability of this behavior is rooted in the fact that the operation “IF” separates two logically different regions by a single point (a line in some cases), while the precision restrains the digits to “intervals”, regardless of how small the precision is. Thus, around unstable (and/or neutrally stable) solutions the logical “error” accelerates at each and every step.

2) Next in line comes the probabilistic approach to algorithmic computing. It is grounded on the artificial assignment of specific probabilities to local events and their permanent updating under a priori set local dynamical rules for the interaction with the neighborhood and the environment. The goal is to find out whether a system self-organizes so as to exhibit a collective behavior which in turn can serve as a physical background for an information symbol, i.e. a letter in an alphabet. Such self-organization has been established in a number of concrete cases, but not as a generic property of a certain class of systems and/or dynamical rules. Yet, self-organization is expected to behave as a type of critical phenomenon. The major flaw of this approach is that any such self-organization, if it exists, is extremely sensitive to even infinitesimal noise added to any steady environment (the second sketch after this list illustrates this sensitivity). It is worth noting the difference between the vulnerability to small changes in the environment (a setback) and the robustness to local failures (one of the advantages of the approach). The above vulnerability becomes a stumbling block for the entire approach because it rules out any general operational protocol which could govern transitions among different collective states. Indeed, even the very existence of any such protocol turns out inherently contradictory, since the vulnerability to any changes in the environment makes it impossible to define the exact amount of matter/energy and information involved in the substantiation of such a transition. Consequently, the state space lacks any metrics, which renders the characteristics of any transition between collective states ill-defined. Thus, the logical operations among different information symbols, represented through different collective states, are subject to an indefinite quantal error, which is further compounded by the physical inability to substantiate any “jump” between the information symbols.

3) The mixed case of algorithmic intelligence encompasses those types of algorithms which have both deterministic and probabilistic components. In most cases, the “link” between the deterministic and the probabilistic part is set by means of a specific optimization. Since I have already discussed the origin of the unrestrained and/or ill-defined logical error for the deterministic and probabilistic cases separately, I will now consider only why optimization is not the “remedy” for the problem. Optimization is an artificially set constraint that holds along the entire optimal trajectory. Generally, it is of three types: minimax optimization, Bellman-type optimization, and Pontryagin-type optimization. The common setback of all three types is that their fulfillment along the entire trajectory is accompanied by specific local discontinuities of the optimal trajectory (the third sketch after this list illustrates such a discontinuity). These discontinuities are hazardous not only for the physical maintenance of the corresponding hardware; they also produce a massive change in the logical error. As an example, the discontinuity of the optimal solution in the Pontryagin type of optimization is a product of the collapse of the current effective “Hamiltonian” and its substitution with a new one on the next part of the trajectory, where it will again collapse when the next discontinuity occurs. Thus, the issue is not only the value of the logical error; it becomes a problem of the identity of the system, since the “Hamiltonian” is supposed to carry the identity of an object in physics.
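
First, the spiral degeneration of a computed limit cycle mentioned in item 1). The following is a minimal sketch, assuming the simplest neutrally stable circular motion and a forward Euler scheme; every finite step size h inflates the radius by a factor of sqrt(1 + h^2) per step, so the closed orbit degenerates into an outgoing spiral no matter how small h is chosen.

```python
# Forward Euler applied to the neutrally stable circulation x' = -y,
# y' = x: the exact orbit is the unit circle, yet every finite step
# multiplies the radius by sqrt(1 + h**2), producing an outgoing spiral.
import math

h = 1e-3                       # step size: small, but finite
x, y = 1.0, 0.0                # start on the unit circle
for _ in range(200_000):
    x, y = x - h * y, y + h * x
print(math.hypot(x, y))        # ~1.105 instead of 1.0
```

Second, the noise sensitivity of a self-organized collective state mentioned in item 2). This is a toy majority-vote lattice whose ordered state would serve as an information symbol; the update rule, lattice size, and noise levels are assumptions chosen purely to illustrate how environmental noise destroys the collective state once it exceeds the rule's tolerance.

```python
# Toy majority-vote lattice: each site adopts the sign of its local
# majority with probability 1 - noise and the opposite sign otherwise.
import numpy as np

rng = np.random.default_rng(1)

def order_parameter(noise: float, size: int = 48, sweeps: int = 200) -> float:
    """Start ordered, evolve, and return |mean spin| (1 = perfect order)."""
    s = np.ones((size, size), dtype=int)
    for _ in range(sweeps):
        nbrs = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        maj = np.where(nbrs != 0, np.sign(nbrs), s)    # local majority
        agree = rng.random((size, size)) >= noise      # environmental noise
        s = np.where(agree, maj, -maj)
    return abs(s.mean())

for eps in (0.0, 0.05, 0.2):
    print(f"noise={eps:.2f}  order parameter={order_parameter(eps):.2f}")
```

Third, the discontinuity of a Pontryagin-type optimal solution mentioned in item 3). The sketch below assumes the textbook time-optimal control of the double integrator x'' = u with |u| <= 1: the optimal control is "bang-bang", jumping between the extreme values on crossing a switching curve, which is exactly the kind of local discontinuity discussed above; the initial state and step size are illustrative.

```python
# Time-optimal double integrator: the control u jumps from -1 to +1 on
# crossing the switching curve x = -v|v|/2, a discontinuity at which
# the minimizer of the effective Hamiltonian changes abruptly.
x, v = 2.0, 0.0            # start at rest, right of the origin
dt, t = 1e-4, 0.0
while x * x + v * v > 1e-6:
    u = 1.0 if x + 0.5 * v * abs(v) < 0 else -1.0   # bang-bang law
    x, v = x + v * dt, v + u * dt
    t += dt
print(f"origin reached at t = {t:.2f} (theoretical optimum 2*sqrt(2) ~ 2.83)")
```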

3. Boundedness of Logical and Quantal Error for Semantic Intelligence

Since the mathematical and physical grounds of semantic intelligence and its algorithmic counterpart are completely different, they are to be expected to yield completely different outcomes. In the previous section, it has been demonstrated that algorithmic computing is not able to preclude the accumulation of the logical error even though the quantal error is maintained very small. In a nutshell, the root of the problem lies in the contradiction between the logic of the algorithms, which operates with “dots”, and the precision, which operates with “intervals”. Further, the optimal trajectory is generically a “choice” of a single line from a volume of all available ones. Thus, each and every operation “IF” acts as a comparison between values of different dimensions, which, as is well known, is never well-defined.

The goal of the present section is to demonstrate that semantic intelligence permanently maintains a bounded logical error while self-sustaining a quantal error bounded from below and from above. The first clue lies in the fact that the general protocol providing spontaneous execution of semantic intelligence is organized so that semantic computing operates only with sets of equal dimension.

Let me start the consideration by recalling how semantic intelligence is organized. It starts with the fact, proven in [1], that any non-uniform ever-changing environment, provided it is bounded, is decomposable into an effective specific steady component and an effective noise component, the latter presented as a BIS. One of the major outcomes of this decomposition, exclusive to boundedness, is a one-to-one correspondence between the state space and the effective control parameter space. Thus the state space acquires metrics, which in turn defines the characteristics of any motion in it. Further, the state space turns out to be divided into basins-of-attraction so that a homeostatic pattern specific to each basin appears as an intra-basin invariant, and thus the latter serves as an information symbol. Since only bounded solutions can be stable in the long run, each basin-of-attraction has non-zero volume. Alongside, the bounded precision renders the motion on each and every trajectory confined to an open tube such that the current trajectory (a line) never leaves that tube. Further, the boundedness renders the state space bounded, which in turn renders the motion in it orbital. Yet, it should be stressed that, due to the boundedness of rates, each and every available orbit passes only through adjacent basins-of-attraction, thus keeping local deviations permanently minimal regardless of the logical operation involved.
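
The constraint that orbits visit only adjacent basins can be stated as a simple adjacency check. The sketch below is a hypothetical toy: the four basins and their adjacency graph are invented for illustration, with an “orbit” modeled as a closed walk on that graph.

```python
# Toy model: information symbols as basins-of-attraction; boundedness
# of rates permits transitions only between adjacent basins, so a
# valid orbit is a closed walk on the (hypothetical) adjacency graph.
adjacent = {
    "A": {"B", "D"}, "B": {"A", "C"},
    "C": {"B", "D"}, "D": {"A", "C"},
}

def is_orbit(path: list) -> bool:
    """Closed, and every step moves between adjacent basins only."""
    closed = path[0] == path[-1]
    local = all(b in adjacent[a] for a, b in zip(path, path[1:]))
    return closed and local

print(is_orbit(["A", "B", "C", "D", "A"]))   # True: adjacent steps only
print(is_orbit(["A", "C", "A"]))             # False: A -> C is a "jump"
```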

The crucial step forward is the association of the notion of a semantic unit with the performance of a non-mechanical engine built on the corresponding orbit, which generically passes through at least 4 different information symbols (basins-of-attraction). The greatest value of this association is that semantic meaning acquires a novel connotation irreducible to a mere algorithmic sequence of information symbols. Recall that, so far, it has been taken for granted that the execution of any algorithm is subject to the general laws of arithmetic (e.g. the associative law, the commutative law, etc.). The irreducibility is best pronounced by the exclusive property of semantic intelligence to acquire permutation sensitivity of the operation of any engine under change of the direction of its execution; e.g. the famous Carnot engine operates in one direction as a heat engine and in the opposite one as a refrigerator (see the sketch below). Note that the execution of a sequence of logical operations in the opposite direction could yield an uncontrolled change in the logical meaning, most probably producing nonsense; thus the latter cannot be classified as “permutation sensitivity”. Thus, the arithmetic laws alone are unable to provide meaningful permutation sensitivity as a generic property. It is worth noting that the permutation sensitivity of semantic intelligence is indispensably linked to the boundedness of the logical and quantal errors of any non-mechanical engine. The boundedness of the logical and quantal errors is self-sustained by means of the confinement of the corresponding trajectories within specific “tubes” set by the corresponding precision.
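
The direction sensitivity of an engine cycle can be caricatured in a few lines. The sketch is a toy abstraction, not the paper's formalism: the strokes and their work contributions are invented numbers, and reversing the cycle means traversing the strokes in the opposite order with each leg run backwards.

```python
# Toy engine cycle: run forward it delivers net work (an engine);
# run in the opposite direction it consumes net work (a refrigerator).
# Stroke names and work values are illustrative assumptions.
strokes = [("isothermal_expansion", +3.0), ("adiabatic_expansion", +1.0),
           ("isothermal_compression", -2.0), ("adiabatic_compression", -1.0)]

def net_work(cycle):
    """Net work delivered by one pass through the cycle (toy units)."""
    return sum(w for _, w in cycle)

def reverse(cycle):
    """Traverse the strokes in the opposite order, each leg backwards."""
    return [(name, -w) for name, w in reversed(cycle)]

print(net_work(strokes))           # +1.0: heat engine
print(net_work(reverse(strokes)))  # -1.0: refrigerator
```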

The next step forward lies in the assumption that different semantic units are separated by punctuation marks, e.g. space bars. The punctuation marks are substantiated by means of special volumes tangent to all semantic units at any given hierarchy. Details of these considerations can be found in Chapter 4 of [1].

Thus, the motion on any semantic trajectory is confined within a torus (“donut”) “wrapped” onto an orbit which passes through different basins-of-attraction; the width of the torus is set correspondingly by the current lower and upper levels of the quantal error. Thus, due to the boundedness of rates and amplitudes, the quantal error never exceeds its margins throughout the motion on any semantic trajectory. The role of the Euclideanity of the functional metrics for keeping the margins of the quantal error intact in the long run is considered later.

In outline, all ingredients of semantic computing, namely basins-of-attraction, punctuation marks, and trajectory confinements, are volumes of the same dimension. Further, the execution of semantic intelligence as a sequence of orbits, each of bounded length, automatically keeps the logical error bounded as well.

A crucially important point concerns the spontaneous self-sustaining of a bounded quantal error. The latter is provided by the general operational protocol for the dissipation of extra matter/energy. Here we make use of the property of that protocol to provide robustness of the Euclideanity of the functional metrics, in the sense that the latter provides the global robustness of the same abstract quantal relations at each and every spatio-temporal point (for example, 2 + 2 = 4 everywhere and/or on repetition). In turn, the robustness of the quantal relations maintains the robustness of the thresholds, thus providing permanent robustness of the margins of the boundedness of rates and amplitudes, which eventually culminates in the maintenance of a bounded quantal error regardless of the details of the environmental impact and regardless of the details of the local specificity of the physico-chemical processes which substantiate any piece of semantic intelligence. It is worth remembering that these processes are in general non-linear and operate in a non-homogeneous way in any non-uniform ever-changing environment, provided the latter is bounded [2].

Note that this view renders the notion of abstract arithmetic relations equally available for continuous objects, such as space and time zones, and for discrete objects, such as apples. Thus, as mentioned in the Introduction, the maintenance of the Euclideanity of the metrics renders the notion of a unit well-defined for different spatio-temporal phenomena. Alongside this, it renders the parallelogram law for the summation of vector variables insensitive to the spatio-temporal point where it is applied. An immediate consequence is the covariance (viewed as insensitivity to the choice of reference frame) of the specific laws of Nature represented through the corresponding functional relations. Note that otherwise the quantal relations would turn local and specific to the current spatio-temporal point of application. Consequently, the notion of boundedness would turn ill-defined, since the notion of thresholds would turn local, so that they could vary from one spatio-temporal point to another and under repetition of any current event. As an immediate consequence, this would result in a lack of any form of covariance for any relation.
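
The frame-insensitivity of the parallelogram summation rule invoked here is just the statement that vector addition commutes with a rotation of the reference frame. A short numerical check, with arbitrary illustrative vectors and angle:

```python
# Check that the parallelogram rule commutes with a change of frame:
# R(a + b) == R(a) + R(b) for a rotation R (the covariance invoked above).
import numpy as np

theta = 0.7                                   # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
a, b = np.array([1.0, 2.0]), np.array([-0.5, 0.3])
print(np.allclose(R @ (a + b), R @ a + R @ b))   # True
```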

Let us now focus our attention on the highly non-trivial feedback between the boundedness of the quantal error and the boundedness of the logical error. Next, I present arguments that provide the long-term stability of any concrete realization of the notion of semantic intelligence. Indeed, the permanent non-distorted boundedness of rates and amplitudes precludes both the logical and the quantal error from exceeding their margins on each and every orbit, and renders both of them zero upon completing each and every cycle on each and every semantic trajectory. Recall that the turning of the errors to zero upon completing a cycle is an immediate consequence of the fact that, since successive orbits are separated by an accumulation point (e.g. a space bar), one can consider each and every cycle as starting and ending at the corresponding accumulation point, that is, consider each orbit as starting and ending at the same point. Thus, the feedback operates via the self-sustaining of the Euclideanity of the corresponding functional metrics, which in turn keeps the characteristics of the boundedness of rates and amplitudes non-distorted. In turn, the non-distorted boundedness of rates and amplitudes sustains the boundedness of the logical and quantal errors, which in turn provides the robustness of the characteristics of the boundedness of rates and amplitudes. Taking into account that the boundedness of rates and amplitudes has been put forward as the most general condition providing the long-term stability of complex systems, the conclusion is that the same condition turns out to be enough to provide not only long-term stable operation of a complex system but also permanent self-sustaining of the boundedness of the logical and quantal error of the corresponding realization of semantic intelligence.

The highly non-trivial feedback between the logical and the quantal error is best pronounced through the decomposition theorem. Indeed, the latter proves the additivity of the decomposition of the power spectrum of each and every BIS into a specific homeostatic pattern and a universal noise component. In turn, it is namely the additivity that provides a constant error for that decomposition in the long run. So, the additive decomposition confirms the permanent boundedness of the logical error for the corresponding causal relations in an ever-changing environment, in the long run, and regardless of the concrete values of the quantal error. Recall that the causal relations are encapsulated in the homeostatic pattern, while the noise component has no information content.

It is worth pointing out the fundamental difference between the above-considered feedback between the logical and the quantal error for semantic intelligence and the case of algorithmic intelligence. Next, it is considered why the artificial maintenance of a low quantal error is not enough to provide boundedness of the logical error for algorithmic intelligence. This conclusion becomes evident through the following considerations: any piece of algorithmic intelligence is executed as a “string” of specific successive local computations by means of local linear processes (compare with the orbits of semantic intelligence). It is namely the locality of each and every computation, along with the execution as a sequence of computations, that immediately provides a disconnection between the corresponding logical and quantal errors. Consequently, as considered at the beginning of the present section, the comparison between variables of different dimensions renders the accumulation of the logical error unrestrained even though the quantal error is kept very small.

Let us now focus attention on the role of optimization for semantic computing. The problem with identity stands differently here: the long-term bearer of identity for semantic intelligence is the current homeostatic pattern, whilst the Hamiltonians acquire a rather local meaning in the sense that they define the local properties of the atoms/molecules participating in any given chemical reaction and/or physical process. It should be stressed that optimization can be useful in the short run only and for a specific purpose only. However, it must not yield deviations from the current semantic trajectory, because such deviations result in hazardous long-run events.

It is obvious that the permanent self-sustaining of the boundedness of the logical and the quantal error renders all different semantic trajectories of equal evolutionary value from the point of view of the concept of general intelligence.

The evolutionary value of the permanent and spontaneous maintenance of the boundedness of the logical and quantal error on each and every semantic trajectory is that it keeps the notion of a kind intact in the long run. It is worth noting that the notion of a kind is corroborated by means of the diversity of individual responses, executed as motion on different semantic trajectories, so that, provided the logical and quantal errors on each of them are bounded, the same evolutionary value is assigned to the motion on all the semantic trajectories sharing the same homeostatic pattern.

By comparison, algorithmic computing is universal and reproducible if and only if the environment is kept permanently the same. In an ever-changing environment, the logical and quantal errors of algorithmic computing turn ill-defined and are not protected from unlimited accumulation. Preventing the accumulation of the logical and quantal errors by means of appropriate optimization is likewise available only for a steady environment. In an ever-changing environment, the effect of optimization is only local, because either the identity is violated and/or the optimization causes permanent discontinuity of the major flow function.

4. Conclusions

The result obtained in the present paper about the permanent self-sustaining of bounded logical and quantal error for a newly introduced type of general intelligence, called by the author semantic intelligence, prompts the outlining of a general strategy for the development of the notion of general intelligence, namely: the development is substantiated through a variety of yet-to-be-discovered forms with radically different properties. The grounds for this suggestion lie in the conclusion, verified in the present paper, that semantic intelligence is exerted through a variety of individual responses, each substantiated by motion on a semantic trajectory, so that all individual responses sharing the same homeostatic pattern acquire the same evolutionary value in the long run. The latter stands in fundamental opposition to algorithmic intelligence, which is artificially created, maintained in a constant environment, and artificially comprehended, thus being subject to the survival-of-the-fittest paradigm. Another conclusion drawn from the above comparison is that each evolutionary paradigm holds for its own subject and there is no cross-section between the subjects of the different types of intelligence.

As already stressed, the evolutionary value of the permanent and spontaneous maintenance of the boundedness of the logical and quantal error on each and every semantic trajectory is that it keeps the notion of a kind intact in the long run: the notion of a kind is corroborated by the diversity of individual responses, executed as motion on different semantic trajectories, all of which, by virtue of the bounded logical and quantal errors on each of them, carry the same evolutionary value whenever they share the same homeostatic pattern.

The difference in the substantiation of these types of intelligence should also be stressed: whilst algorithmic intelligence is fully governed by the human mind, semantic intelligence is executed by means of spontaneous physico-chemical processes governed by a general operational protocol which spontaneously and permanently maintains boundedness, and whose major distinctive property is the autonomous creation and comprehension of information.

Yet, besides the fundamental differences, there is a highly non-trivial synergy between semantic and algorithmic computing, which becomes most pronounced in curved space-time. By means of self-sustaining Euclidean functional metrics [1] [2], semantic computing provides boundedness of the quantal error for the corresponding semantic computation; on the other hand, algorithmic computing, where the “Euclideanity” of the computation is artificially maintained through artificially designed and executed linear processes, “computes” the underlying spatio-temporal structure. In outline, semantic intelligence “computes” functionality while algorithmic intelligence “computes” the underlying spatio-temporal structure. The advantage of this interplay is crucial for the study and exploration of unknown phenomena ranging from nano-devices to cosmological objects.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper

Koleva, M.K. (2020) Self-Sustained Boundedness of Logical and Quantal Error at Semantic Intelligence. Journal of Modern Physics, 11, 157-167. https://doi.org/10.4236/jmp.2020.112010

References

1. Koleva, M.K. (2012) Boundedness and Self-Organized Semantics: Theory and Applications. IGI Global, Hershey, PA.

2. Koleva, M.K. (2019) Journal of Modern Physics, 10, 43-58.