A mathematical model of the perceptual symbol system (PSS) is developed. This development requires new mathematical methods of dynamic logic (DL), which overcome limitations of classical artificial intelligence and connectionist approaches. The paper discusses these past limitations, relates them to the combinatorial complexity (exponential explosion) of past algorithms, and relates this further to the static nature of classical logic. DL is a process-logic; its salient property is the evolution of vague representations into crisp ones. We first consider one aspect of PSS: situation learning from object perceptions. Next, DL is related to the PSS mechanisms of concepts, simulators, grounding, embodiment, productivity, binding, recursion, and to the mechanisms relating embodied-grounded and amodal symbols. We discuss the capability of DL for modeling cognition on multiple levels of abstraction. PSS is extended toward interaction between cognition and language. Experimental predictions of the theory are discussed. They might influence experimental psychology and impact future theoretical developments in cognitive science, including knowledge representation and the mechanisms of interaction between perception, cognition, and language. All mathematical equations are also discussed conceptually, so mathematical understanding is not required. Experimental evidence for DL and PSS in brain imaging is discussed, as well as future research directions.

Barsalou [

During the Cognitive Revolution in the middle of the last century, according to Barsalou, cognitive scientists were inspired by new forms of representation “based on developments in logic, linguistics, statistics, and computer science.” Amodal representations were adopted, such as feature lists, semantic networks, and frames [

Past and ongoing developments of computational implementations of PSS include [7-9]. Yet, computational models for PSS require new mathematical methods different from traditional artificial intelligence, pattern recognition, or connectionist methods. The reason is that the traditional methods encountered combinatorial complexity (CC), an irresolvable computational difficulty, when attempting to model complex systems. Cognitive modeling requires learning combinations of perceptual features and objects or events [10-14].

This article develops a realistic and scalable mathematical model of grounded symbols and formalization of PSS based on a new computational technique of dynamic logic, DL [15,16]. The developed mathematical formalism is quite general. We first concentrate on one example of PSS mechanism: a mathematical description of models and simulators for forming and enacting representations of situations (higher level symbols) from perceptions of objects (lower level symbols), and then we discuss its general applicability. In addition to simulators, we consider concepts, grounding, binding, dynamic aspect of PSS (DIPSS), abstract concepts, the mechanism of amodal symbols within PSS, and the role of logic.

Past mathematical difficulties are considered in Section 2. They are related to classical logic, and a new computational technique of dynamic logic (DL) is introduced, which overcomes past computational limitations. Whereas classical logic is a static logic of statements (e.g., “if A then B”), DL describes a process capable of modeling the main components of PSS, including simulators. Section 3 illustrates the important properties of DL. Section 4 illustrates how DL models essential mechanisms of PSS considering an example of learning situations from objects (a difficult problem due to its inherent combinatorial complexity). Section 5 discusses DL as a general mechanism of interacting bottom-up and top-down signals, applicable to all levels of cognitive processing. Section 6 continues this discussion concentrating specifically on DL modeling amodal vs perceptual symbols. Section 7 discusses experimental evidence confirming DL predictions of the mind mechanism, and formulates further predictions that could be tested experimentally in the near future. Section 8 describes future theoretical research as well as proposed verifiable experimental predictions of DL.

In modern neuroscience, a fundamental process in object perception is an interaction of bottom-up signals from sensory organs and top-down signals from the mind's internal representations (memories) of objects. During perception, the mind matches subsets of bottom-up signals corresponding to objects with representations of objects in the mind (the top-down signals). This produces object recognition; it activates brain signals leading to mental and behavioral responses [3,17-20]. This section briefly summarizes mathematical developments in artificial intelligence, pattern recognition, and other computational methods used in cognitive science for modeling brain-mind processes. The fundamental difficulties preventing mathematical modeling of perception, cognition, and PSS are discussed; then overcoming these difficulties using DL is discussed.

Mathematical modeling of the above recognition process has not been easy; a number of difficulties have been encountered during the past fifty years. These difficulties were summarized under the notion of combinatorial complexity (CC) [. The number of combinations to be evaluated can exceed 10^{100}, exceeding the number of all elementary particle events in the life of the Universe; no computer would ever be able to compute that many combinations.

CC was first identified in pattern recognition and classification research in the 1960s and was named “the curse of dimensionality” [

In the 1970s rule systems were proposed to solve the problem of learning complexity [23,24]. Minsky suggested that learning was a premature step in artificial intelligence; Newton "learned" Newtonian laws, while most scientists read them in books. Therefore, Minsky suggested, knowledge ought to be input into computers "ready made" for all situations, and artificial intelligence would apply these known rules. Rules would capture the required knowledge and eliminate the need for learning. Chomsky’s original ideas concerning mechanisms of language grammar related to deep structure [

To overcome these difficulties, model systems were proposed in the 1980s to combine the advantages of learning and rules by using adaptive models [10,11,26-32]. Available knowledge was to be summarized in models, and parameters of the models were to describe unknown aspects of concrete situations. Chomsky's idea was similar [

Perceptual symbols and amodal symbols, as described by PSS, differ not only in their representations in the brain, but also in their properties that are mathematically modeled in this paper. This mathematically fundamental difference and its relations to CC of matching bottom-up and top-down signals are discussed in this section.

It has been demonstrated that CC is related to the use of formal logic in algorithms and neural networks [11,12,35]. Logic serves as a foundation for many approaches to cognition and linguistics; it underlies most computational algorithms. But its influence extends far beyond, affecting cognitive scientists, psychologists, and linguists, including those not using complex mathematical algorithms for modeling the mind. Formal logic is more than 2000 years old, and it influences all of science. Most psychologists make a more or less conscious assumption that the mechanisms of logic serve as the basis of cognition. As discussed in Section 7, our minds are unconscious of their illogical foundations. Only the approximately logical part of the mind's mechanisms is accessible to consciousness. Although this is a minor part of the mind's operations, it fundamentally influences scientific intuition, which is unconsciously affected by the bias toward logic. Even when laboratory data drive our thinking away from logical mechanisms, it is difficult to overcome the logical bias [11,12,16,18,19,36-41].

Relationships between cognition and logic have been the source of a longstanding myth. The widely accepted story is that Aristotle founded logic as a fundamental mind mechanism, and that only during recent decades has science overcome this influence. I would like to emphasize the opposite side of this story. Aristotle assumed a close relationship between logic and language. He emphasized that logical statements should not be formulated too strictly and that language inherently contains the necessary degree of precision. According to Aristotle, logic serves not for thinking but for communicating already-made decisions [42,43]. The mechanism of the mind relating language, cognition, and the world Aristotle [

A contradiction between logic and language was emphasized by the founders of formal logic. In the 19^{th} century George Boole and the great logicians following him, including Gottlob Frege, Georg Cantor, David Hilbert, and Bertrand Russell (see [

Logic is fundamentally related to CC. CC turned out to be a finite-system manifestation of Gödel's theory [48-50]. When Gödelian theory is applied to finite systems (such as computers and the brain-mind), the result is CC instead of fundamental inconsistency. Algorithms matching bottom-up and top-down signals based on formal logic have to evaluate every variation in signals and their combinations as separate logical statements. Combinations of these variations cause CC.

This property of logic manifests in various algorithms in different ways. Rule systems are logical in a straightforward way, and the number of rules grows combinatorially. Pattern recognition algorithms and neural networks are related to logic in their learning procedures: every training sample is treated as a logical statement ("this is a chair"), resulting in CC of learning. Multivalued logic and fuzzy logic were proposed to overcome limitations related to logic [51,52]. Yet the mathematics of multivalued logic is in principle no different from formal logic [15,16]. Fuzzy logic uses logic to set a degree of fuzziness. Correspondingly, it encounters a difficulty related to the degree of fuzziness: if too much fuzziness is specified, the solution does not achieve the needed accuracy; if too little, it becomes similar to formal logic. If logic is used to find the appropriate fuzziness for every model at every processing step, then the result is CC. The mind has to make concrete decisions, for example one either enters a room or does not; this requires a computational procedure to move from a fuzzy state to a concrete one. But fuzzy logic does not have a formal procedure for this purpose; fuzzy systems treat this decision on an ad hoc logical basis.

Is logic still possible after Gödel’s proof of its incompleteness? The contemporary state of this field was reviewed in [

To summarize, various manifestations of CC are all related to formal logic and Gödel theory. Rule systems rely on formal logic in the most direct way. Even mathematical approaches specifically designed to counter the limitations of logic, such as fuzzy logic and the second wave of neural networks (developed after the 1980s), rely on logic at some algorithmic steps. Self-learning algorithms and neural networks rely on logic in their training or learning procedures: every training example is treated as a separate logical statement. Fuzzy logic systems rely on logic for setting degrees of fuzziness. CC of mathematical approaches to the mind is related to the fundamental inconsistency of logic. Therefore logical intuitions, leading early cognitive scientists to amodal brain mechanisms, could not realize their hopes for mathematical models of the brain-mind.

Why did the outstanding mathematicians of the 19^{th} and early 20^{th} centuries believe that logic is the foundation of the mind? Even more surprising is that this belief persisted after Gödel, whose theory has long been recognized as among the most fundamental mathematical results of the 20^{th} century. How is it possible that outstanding minds, including the founders of artificial intelligence, and many cognitive scientists and philosophers of mind insisted that logic and amodal symbols implementing logic in the mind are adequate and sufficient? The answer is in the "conscious bias". As we have mentioned above and discuss in detail in Section 7, non-logical operations represent more than 99.9% of the mind's functioning, but they are not accessible to consciousness [

We have already mentioned another aspect of logic relevant to PSS: logic lacks dynamics; it is about static statements such as "this is a chair". Classical logic is good at modeling structured statements and relations, yet it misses the dynamics of the mind and faces CC when attempting to match bottom-up and top-down signals. The essentially dynamic nature of the mind is not represented in the mathematical foundations of logic. Dynamic logic is a logic-process. It overcomes CC by automatically choosing the appropriate degree of fuzziness-vagueness for every concept of the mind at every moment. DL combines the advantages of logical structure and connectionist dynamics. This dynamics mathematically represents the process of Aristotelian forms, different from classical logic, and serves as a foundation for PSS concepts and simulators.

DL models perception and cognition as an interaction between bottom-up and top-down signals [12,15,16,35,38,39,41,42,55-62]. This section concentrates on the basic relationship between brain processes and the mathematics of DL. To concentrate on this relationship, we greatly simplify the discussion of brain structures. We discuss visual recognition of objects as if the retina and the visual cortex each consist of a single processing layer of neurons where recognition occurs (which is not true; the detailed relationship of the DL process to the brain is considered in the given references). Perception consists of the association-matching of bottom-up and top-down signals. Sources of top-down signals are mental representations, memories of objects created by previous simulators [

The DL processes along with concept-representations are mathematical models of the PSS simulators. The bottom-up signals, in this simplified discussion, are a field of neuronal synapse activations in the visual cortex. Sources of top-down signals are mental representation-concepts or, equivalently, model-simulators (for short, models; please notice this dual use of the word model: we use "models" for mental representation-simulators, which match-model patterns in bottom-up signals, and we use "models" for mathematical modeling of these mental processes). Each mental model-simulator projects a set of priming, top-down signals, representing the bottom-up signals expected from a particular object. Mathematical models of mental models-simulators characterize these mental models by parameters. Parameters describe object position, angles, lighting, etc. (In the case of learning situations considered later, parameters characterize the objects and relations making up a situation.) To summarize this highly simplified description of a visual system, the learning-perception process "matches" top-down and bottom-up activations by selecting the "best" mental models-simulators and their parameters and fitting them to the corresponding sets of bottom-up signals. This DL process mathematically models multiple simulators running in parallel, each producing a set of priming signals for various expected objects.

The "best" fit criteria between bottom-up and top-down signals were given in [12,15,16,35,62]. They are similar to probabilistic or information measures. In the first case they represent the likelihood that the given (observed) data, or bottom-up signals, correspond to representations-models (top-down signals) of particular objects. In the second case they represent the information contained in representations-models about the observed data (in other words, information in top-down signals about bottom-up signals). These similarities are maximized over the model parameters. Results can be interpreted correspondingly as a maximum likelihood that models-representations fit sensory signals, or as maximum information in models-representations about the bottom-up signals. Both similarity measures account for all expected models and for all combinations of signals and models. Correspondingly, a similarity contains a large number of items, a total of M^{N}, where M is the number of models and N is the number of signals; this huge number is the cause of the previously discussed combinatorial complexity.
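As a toy numerical sketch (my own illustration, not from the paper), the M^{N} count can be seen by enumerating all possible assignments of N signals to M models:

```python
from itertools import product

# Toy illustration: each of N bottom-up signals can be attributed to
# any of M models, so an exhaustive logical search must consider
# M**N distinct assignments (hypothetical small M and N).
M, N = 3, 4
assignments = list(product(range(M), repeat=N))
assert len(assignments) == M ** N  # 81 assignments even for tiny M, N
```

Even these tiny numbers produce 81 assignments; realistic M and N make exhaustive enumeration hopeless.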

Maximization of a similarity measure mathematically models a cognitive mechanism of an unconditional drive to improve the correspondence between bottom-up and top-down signals (representations-models). In biology and psychology it was discussed as curiosity, cognitive dissonance, or a need for knowledge since the 1950s [63,64]. This process involves knowledge-related emotions evaluating satisfaction of this drive for knowledge [12,15,16,36,38,41,56,60-62,65-67]. In computational intelligence it is even more ubiquitous, every mathematical learning procedure, algorithm, or neural network maximizes some similarity measure. In the process of learning, mental concept-models are constantly modified.

The DL learning process, let us repeat, consists in estimating parameters of concept-models (mental representations) and associating subsets of bottom-up signals with top-down signals originating from these models-concepts by maximizing a similarity. Although a similarity contains combinatorially many items, DL maximizes it without combinatorial complexity [11,12,15,16,35,38,39,48] as follows. First, vague-fuzzy association variables are defined, which give a measure of correspondence between each signal and each model. They are defined similarly to a posteriori Bayes probabilities; they range between 0 and 1, and as a result of learning they converge to the probabilities under certain conditions. This mathematical breakthrough led to solving many problems that could not have been solved previously [68-80].

After giving a short mathematical description of the DL process, we summarize it conceptually, so that understanding of the mathematics is not essential. DL is defined by a set of differential equations given in the above references; together with the models discussed later it gives a mathematical description of the PSS simulators. Here we summarize these equations, simplifying the visual system as if there were just one neural layer between the visual cortex and object recognition. Bottom-up signals {X(n)} are a field of neuronal synapse activations in the visual cortex. Here and below curly brackets {…} denote multiple signals, a field. Index n = 1,…,N enumerates neurons, and X(n) are the activation levels. Sources of top-down signals are representations or models {M_{m}(n)} indexed by m = 1,…,M. Each model M_{m}(n) projects a set of priming, top-down signals, representing the bottom-up signals X(n) expected from a particular object m. Models depend on parameters {S_{m}}, M_{m}(S_{m},n). Parameters characterize object position, angles, lighting, etc. (In the case of learning situations considered in Section 3, parameters characterize the objects and relations making up a situation.) We use n to enumerate the visual cortex neurons, X(n) are the "bottom-up" activation levels of these neurons coming from the retina, and M_{m}(n) are the "top-down" activation levels (priming) of the visual cortex neurons. The learning-perception process "matches" these top-down and bottom-up activations by selecting the "best" models and their parameters and the corresponding sets of bottom-up signals. This "best" fit is given by maximizing a similarity measure between bottom-up and top-down signals; it is designed so that it treats each object-model as a potential alternative for each subset of signals [12,13,15,16,35]

L({X}|{M}) = ∏_{n∈N} ∑_{m∈M} r(m) l(X(n)|M_{m}(n)); (1)

Here, l(X(n)|M_{m}(n)) (or simply l(n|m)) is called a conditional similarity between one signal X(n) and one model M_{m}(n). Parameters r(m) are proportional to the number of objects described by the model m. Expression (1) accounts for all combinations of signals and models in the following way. The sum ∑ ensures that any of the object-models can be considered (by the mind) as a source of signal X(n). The product ∏ ensures that all signals have a chance to be considered (if even one signal is not considered, the entire product is zero, and similarity L is 0; so for good similarity all signals have to be accounted for; this does not assume an exorbitant amount of attention to each minute detail: among the models there is a vague simple model for "everything else"). In a simple case, when all objects are perfectly recognized and separated from each other, there is just one object-model corresponding to each signal (all other l(n|m) = 0). In this simple case expression (1) contains just one item, a product of all non-zero l(n|m). In the general case, before objects are recognized, L contains a large number of combinations of models and signals; a product over N signals is taken of the sums over M models; this results in a total of M^{N} items; this huge number is the cause of the combinatorial complexity discussed previously.
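The product-of-sums structure of expression (1) can be sketched numerically; the following toy Python fragment (hypothetical numbers, with any non-negative conditional similarities l(n|m)) verifies that the compact form equals the expanded sum over all M^{N} assignments of signals to models:

```python
import numpy as np
from itertools import product

# Toy check of the structure of (1): prod_n sum_m r(m) l(n|m) equals
# the expanded sum over all M**N model assignments (distributivity).
# All numbers here are hypothetical.
rng = np.random.default_rng(0)
N, M = 5, 3
l = rng.random((N, M))        # l[n, m] stands for l(X(n) | M_m(n))
r = np.full(M, 1.0 / M)       # weights r(m)

L_compact = np.prod(l @ r)    # product over signals of sums over models
L_expanded = sum(
    np.prod([r[m] * l[n, m] for n, m in enumerate(assign)])
    for assign in product(range(M), repeat=N)
)
assert np.isclose(L_compact, L_expanded)
```

The point of DL is that the compact form is computable in O(N·M) operations, while the expanded form has M^{N} items.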

The DL learning process consists in estimating model parameters S_{m} and associating subsets of signals with concepts by maximizing the similarity (1). Although (1) contains combinatorially many items, DL maximizes it without combinatorial complexity [11,12,48,49]. First, fuzzy association variables f(m|n) are defined,

f(m|n) = r(m) l(n|m) / ∑_{m'∈M} r(m') l(n|m'). (2)

These variables give a measure of correspondence between signal X(n) and model M_{m} relative to all other models m'. They are defined similarly to a posteriori Bayes probabilities; they range between 0 and 1, and as a result of learning they converge to the probabilities under certain conditions.
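A minimal sketch of these Bayes-like association variables (hypothetical numbers; each f(m|n) is the normalization of r(m)l(n|m) over all models):

```python
import numpy as np

# Sketch of the fuzzy association variables: f(m|n) normalizes
# r(m) l(n|m) over all models m', analogously to a posteriori
# Bayes probabilities. All numbers are hypothetical.
rng = np.random.default_rng(1)
N, M = 4, 3
l = rng.random((N, M))             # conditional similarities l(n|m)
r = np.array([0.5, 0.3, 0.2])      # model weights r(m)

f = (r * l) / (r * l).sum(axis=1, keepdims=True)   # f[n, m] = f(m|n)
assert ((f >= 0) & (f <= 1)).all()                 # each f(m|n) in [0, 1]
assert np.allclose(f.sum(axis=1), 1.0)             # each signal fully distributed
```

Note that the normalization guarantees the range [0, 1] regardless of how widely the raw similarities l(n|m) vary.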

The DL process is defined by the following set of differential equations, Equation (3) below:

dS_{m}/dt = ∑_{n∈N} f(m|n) [∂ln l(n|m)/∂M_{m}] ∂M_{m}/∂S_{m}. (3)

The principles of DL can be adequately understood from the following conceptual description and examples. As a mathematical model of perception-cognition processes, DL is a process described by the differential equations given above; in particular, fuzzy association variables f associate bottom-up signals and top-down models-representations. Among the unique DL properties is an autonomous dependence of association variables on models-representations: in the processes of perception and cognition, as models improve and become similar to patterns in the bottom-up signals, the association variables become more selective, more similar to delta-functions. Whereas the initial association variables are vague and associate nearly all bottom-up signals with virtually any top-down model-representation, in the processes of perception and cognition the association variables become specific, "crisp," and associate only the appropriate signals. This we call the process "from vague to crisp." (The exact mathematical definition of crisp corresponds to values of f = 0 or 1; values of f between 0 and 1 correspond to various degrees of vagueness.)
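The "from vague to crisp" dynamics can be sketched with a deliberately simplified analogy (my own construction, not the paper's exact equations): two one-dimensional Gaussian models with adaptive means and annealed fuzziness are fit to two well-separated clusters; the association variables start nearly homogeneous and converge toward 0/1 as the model parameters improve:

```python
import numpy as np

# Minimal "vague to crisp" sketch (an analogy, not the paper's exact
# DL equations): two 1-D Gaussian models with adaptive means fit two
# clusters. Associations f(m|n) start nearly uniform (vague) and
# converge toward 0/1 (crisp) as parameters improve and fuzziness anneals.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-5, 1, 50), rng.normal(5, 1, 50)])
means = np.array([-0.5, 0.5])       # initial, nearly identical models
sigma = 4.0                         # start vague (large uncertainty)

for _ in range(30):
    # association step: fuzzy variables f(m|n)
    l = np.exp(-0.5 * ((data[:, None] - means) / sigma) ** 2)
    f = l / l.sum(axis=1, keepdims=True)
    # parameter step: update model means from current associations
    means = (f * data[:, None]).sum(axis=0) / f.sum(axis=0)
    sigma = max(1.0, sigma * 0.9)   # anneal fuzziness downward

crisp = np.abs(f - 0.5).min()       # distance from the vague value 0.5
assert crisp > 0.4                  # associations became nearly 0 or 1
assert abs(abs(means[0]) - 5) < 1 and abs(abs(means[1]) - 5) < 1
```

The annealing of sigma plays the role of the automatic choice of fuzziness: early iterations keep associations vague so no premature combinatorial commitment is made; later iterations sharpen them into near-logical decisions.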

DL processes mathematically model PSS simulators, not static amodal signals. Another unique aspect of DL is that it explains how logic appears in the human mind: how illogical dynamic PSS simulators give rise to classical logic, and what the role of amodal symbols is. This is discussed throughout the paper, and in specific detail in Section 6.

An essential aspect of DL, mentioned above, is that associations between models and data (top-down and bottom-up signals) are uncertain and dynamic; their uncertainty matches the uncertainty of the model parameters, and both change in time during perception and cognition processes. As the model parameters improve, the associations become crisp. In this way the DL model of simulator-processes avoids combinatorial complexity, because there is no need to consider separately various combinations of bottom-up and top-down signals. Instead, all combinations are accounted for in the DL simulator-processes. Let us repeat that, initially, the models do not match the data. The association variables are not the narrow logical variables 0, or 1, or nearly logical; instead they are wide functions (across top-down and bottom-up signals). In other words, they are vague; initially they take nearly homogeneous values across the data (across bottom-up and top-down signals); they associate all the representation-models (through simulator processes) with all the input signals [12,16,39].

Here we conceptually describe the DL process as applicable to visual perception, taking approximately 160 ms, according to the reference below. Gradually, the DL simulator-processes improve matching, models better fit data, the errors become smaller, the bell-shapes concentrate around relevant patterns in the data (objects), and the association variables tend to 1 for correctly matched signal patterns and models, and 0 for others. These 0 or 1 associations are logical decisions. In this way classical logic appears from vague states and illogical processes. Thus certain representations get associated with certain subsets of signals (objects are recognized and concepts formed logically or approximately logically) [

Here we illustrate the DL processes, modeling multiple simulators running in parallel as described above. In this example, DL searches for patterns in noise. Finding patterns below noise can be an exceedingly complex problem. If an exact pattern shape is not known and depends on unknown parameters, these parameters should be found by fitting the pattern model to the data. However, when the locations and orientations of patterns are not known, it is not clear which subset of the data points should be selected for fitting. A standard approach for solving this kind of problem, which has already been mentioned, is multiple hypotheses testing [

Several types of models are used in this example: parabolic models describing "smile" and "frown" patterns (unknown size, position, curvature, signal strength, and number of models), circular-blob models describing approximate patterns (unknown size, position, signal strength, and number of models), and a noise model (unknown strength). The exact mathematical description of these models is given in the reference cited above.

We consider an image size of 100 × 100 points (N = 10,000 bottom-up signals, corresponding to the number of receptors in an eye retina), and the true number of models is 4 (3 + noise), which is not known. Therefore, at least M = 5 models should be fit to the data, to decide that 4 fits best. The complexity of logical combinatorial search is M^{N} = 10^{5000}; this combinatorially large number is much larger than the size of the Universe, and the problem was considered unsolvable. DL reduced this complexity from 10^{5000} to about 10^{9}. By solving the CC problem DL was able to find the patterns under the strong noise. In terms of signal-to-noise ratio this example gives a 10,000% improvement over the previous state of the art. (In this example DL actually works better than the human visual system; the reason is that the human brain is not optimized for recognizing these types of patterns in noise.)

The main point of this example is that DL simulator-perception is a process "from vague to crisp," similar to visual system processes demonstrated in [

The previous section illustrated DL for recognition of simple objects in noise, a case complex and unsolvable for prior state-of-the-art algorithms, yet still too simple to be directly relevant for PSS. Here we consider the problem of situation learning, assuming that object recognition has been solved. In computational image recognition this is called "situational awareness," and it is a long-standing unsolved problem. The principal difficulty is that every situation includes many objects that are not essential for recognition of this specific situation; in fact there are many more "irrelevant" or "clutter" objects than relevant ones.

Let us dwell on this for a bit. Objects are spatially limited material things perceptible by the senses. A situation is a collection of contextually related objects that tend to appear together and are perceived as meaningful, e.g., an office, a dining room. The requirement for contextual relations and meanings makes the problem mathematically difficult. Learning contexts comes along with learning situations; it is reminiscent of the chicken-and-egg problem. We subliminally perceive many objects, most of which are irrelevant, e.g. a tiny scratch on a wall, which we learn to ignore. Combinations of even a limited number of objects exceed what is possible to learn in a single lifetime as meaningful situations and contexts (e.g. books on a shelf) from random sets of irrelevant objects (e.g. a scratch on a wall, a book, and a pattern of tree branches in a window). The presence of hundreds (or even dozens) of irrelevant objects makes a child's learning of mundane situations a mathematical mystery. In addition, a human constantly perceives large numbers of different objects and their combinations which do not correspond to anything worth learning; the human mind successfully learns to ignore them.

The most difficult part of learning-cognition for mathematical modeling is to learn which sets of objects are important for which situations (contexts). The key mathematical property of DL that made this solution possible, the same as in the previous section, is the process "from vague to crisp." Concrete crisp models-representations of situations are formed from vague models in the process of learning (or cognition-perception). We illustrate below how complex symbols, situations, are formed by situation-simulators from simpler perceptions, objects, which are simpler perceptual symbols, being formed by simulators at "lower" levels of the mind compared to the "higher" situation-simulators. Situation-simulators operate on PSS representations of situations, which are dynamic and vague assemblages of situations from imagery (and other modalities), bits and pieces along with some relations among them perceived at lower levels. These pieces and relations may come from different past perceptions, not necessarily from a single perceptual mode, and not necessarily stored in contiguous parts of the brain. The dynamic process of DL-PSS simulation, which assembles these bits into situations attempting to match those before the eyes, is mostly unconscious. We will discuss in detail in Section 6 that these are perceptual symbols as described in [

Here we consider a problem where an intelligent agent (a child) is learning to recognize certain situations in the environment, while it is assumed that the child has already learned to recognize objects. In real life a child learns to recognize situations, to some extent, in parallel with recognizing objects. But for simplicity of the illustrative examples and discussions below, we consider the case of objects being already known. For example, the situation "office" is characterized by the presence of a chair, a desk, a computer, a book, a bookshelf. The situation "playground" is characterized by the presence of a slide, a sandbox, etc. The principal difficulty is that many irrelevant objects are present in every situation.

We use D_{o} to denote the total number of objects that the child can recognize in the world (it is a large number). In every situation he or she perceives D_{p} objects. This is a much smaller number compared to D_{o}. Each situation is also characterized by the presence of D_{s} objects essential for this situation (D_{s} < D_{p}). Normally nonessential objects are present and D_{s} is therefore less than D_{p}. The sets of essential objects for different situations may overlap, with some objects being essential to more than one situation. We assume that each object is encountered in the scene only once. This is a minor and nonessential simplification, e.g. we may consider a set of similar objects as a new object.

A situation can be mathematically represented as a vector in the space of all objects, X_{n} = (x_{n1}, … x_{ni}, … x_{nDo}) [84,85]. If the value of x_{ni} is one, the object i is present in the situation n; if x_{ni} is zero, the corresponding object is not present. Since D_{o} is a large number, X_{n} is a large binary vector with most of its elements equal to zero. A situation model is characterized by parameters, a vector of probabilities, p_{m} = (p_{m1},… p_{mi},… p_{mDo}). Here p_{mi} is the probability of object i being part of the situation m. Thus a situation model contains D_{o} unknown parameters. Estimating these parameters constitutes learning. We would like to emphasize that although notations like x_{ni}, p_{mi} might look like amodal symbols, such an interpretation would be erroneous. The correct interpretation of notations in a mathematical model depends on what actual physical entities are referred to by the notations. These notations refer to neural signals, elements from which simulator-processes assemble symbols of a higher level. As discussed, for simplicity of presentation of the results, we assume that lower-level simulator-processes responsible for object recognition have already run their course and objects have been recognized at a lower level; therefore x_{ni} are 0 or 1. The given mathematical formulation could use dynamic signals x_{ni}, parts of object-recognition simulators. We remind the reader that the simulators of interest in this example are situations; in addition to x_{ni}, these simulators involve dynamic neural signals referred to by p_{mi}. These are constituent signals of the ongoing simulator processes at the considered level of situations, which learn to recognize situations, symbols at a higher level (relative to objects).
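A minimal sketch of this binary-vector representation (the object names here are hypothetical, chosen only for illustration):

```python
# Sketch of the representation in the text: a situation is a binary
# vector over all D_o known objects. Object names are hypothetical.
objects = ["chair", "desk", "computer", "book", "shelf", "slide", "sandbox", "scratch"]
D_o = len(objects)

def situation_vector(present):
    # x_ni = 1 if object i is present in situation n, else 0
    return [1 if obj in present else 0 for obj in objects]

# An "office" perception including an irrelevant object (a scratch):
office = situation_vector({"chair", "desk", "computer", "book", "scratch"})
assert office == [1, 1, 1, 1, 0, 0, 0, 1]
assert sum(office) < D_o   # most elements are zero when D_o is large
```

In a realistic setting D_{o} is very large, so X_{n} is extremely sparse; the model must discover which of the few nonzero entries are essential and which are clutter.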

The elements of vector p_{m} are modeled as independent (this is not essential; if the presence of various objects in a situation actually is correlated, this would simplify learning, e.g. perfect correlation would make it trivial). Correspondingly, the conditional probability of observing vector X_{n} in a situation m is then given by the standard formula for independent binary variables,

l(n|m) = ∏_{i=1…Do} p_{mi}^{x_{ni}} (1 − p_{mi})^{(1−x_{ni})}.
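With independent elements, this conditional probability is a product of Bernoulli terms, which can be sketched as follows (a minimal sketch; the function and variable names are our choices):

```python
import numpy as np

def likelihood(x, p):
    # l(n|m): probability of binary observation vector x under a situation model
    # with object probabilities p, assuming independent elements (Bernoulli product)
    return float(np.prod(np.where(x == 1, p, 1.0 - p)))

x = np.array([1, 0, 1, 0])            # toy observation: objects 0 and 2 present
p = np.array([0.9, 0.1, 0.8, 0.5])    # toy model probabilities p_mi
l_nm = likelihood(x, p)               # 0.9 * (1 - 0.1) * 0.8 * (1 - 0.5)
```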

Consider N perceptions a child was exposed to (N includes both real “situations” and “irrelevant” random ones); most perceptions were “irrelevant,” corresponding to observing random sets of objects, and M−1 were “real” situations, in which D_{s} objects were repeatedly present. We model all random observations by a single “noise” model; assuming that every object has an equal chance of being randomly observed in noise (which again is not essential), the probabilities for this noise model, m = 1, are p_{1i} = 0.5 for all i. Thus we define M possible sources for each of the N observed situations.

The total similarity for the above M models (M-1 “real” and 1 noise) is given by the same equation as similarity in the previous example [

For intuitive understanding we point out that these association variables differ from Equation (1) in that they are normalized by the denominator, the sum of l(m|n) for a given bottom-up signal over all active simulators m. Whereas l(m|n) can vary greatly in value, f(m|n) varies between 0 and 1. Also, Equation (5) contains parameters r(m), which are needed for the following reason: it is convenient to define the conditional probabilities (1) assuming the simulator m actually is present and active; therefore r(m) are needed to define the actual probability of the simulator m in the current process. These association variables (5) are defined similarly to the a posteriori Bayesian probabilities that a bottom-up signal n comes from a situation-simulator m. Yet they are not probabilities as long as parameters p_{mi} have wrong values. In the process of learning these parameters attain their correct values, and at the end of the DL-simulator processes the association variables can be interpreted as probabilities.
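The normalization described here can be sketched numerically (a sketch under the stated definitions; the array shapes and names are our choices):

```python
import numpy as np

def associations(L, r):
    # f(m|n) = r(m) l(n|m) / sum over m' of r(m') l(n|m')
    # L is an (M, N) array of likelihoods l(n|m); r is an (M,) array of r(m)
    num = r[:, None] * L
    return num / num.sum(axis=0, keepdims=True)

L = np.array([[0.3, 0.9],    # toy likelihoods for 2 simulators, 2 bottom-up signals
              [0.6, 0.1]])
r = np.array([0.5, 0.5])     # equal simulator probabilities r(m)
f = associations(L, r)
```

Each column of f sums to one, so each bottom-up signal distributes its association among the active simulators, whereas the raw l(m|n) values are unbounded.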

These association variables are used to update parameter values, which is the second part of the DL process. In this case parameter update equations are simple,

These equations have a very simple interpretation: they estimate parameters p_{mi} of the m-th simulator as weighted averages of the bottom-up signals x_{ni}. Note that the bottom-up signals “do not know” which situation they came from. The weights (f/Σf) are normalized association variables, associating data n with simulator m. These equations are easy to understand: if the object i never appears in a situation m, then at the end of the DL-simulator learning process f(m|n) = 0 and p_{mi} = 0, as it should be, even if x_{ni} are not 0 because object i appears in other situations. The role of the normalizing denominator (Σf) is also easy to understand: for example, if object i is actually present in situation m, then x_{ni} = 1 for each set of bottom-up signals n whenever situation m is observed. In this case, at the end of the DL-simulator process, f(m|n) = 1 for all such n, and Equation (3) yields p_{mi} = 1, as it should be.
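This weighted-average update can be sketched directly (a minimal sketch; names are ours, and the toy values are chosen to show the two limiting cases just discussed):

```python
import numpy as np

def update_p(f_m, X):
    # p_mi = sum_n f(m|n) x_ni / sum_n f(m|n): the f-weighted average
    # of the bottom-up signals; f_m is (N,), X is (N, D_o)
    return (f_m[:, None] * X).sum(axis=0) / f_m.sum()

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
f_m = np.array([1.0, 1.0, 0.0])   # simulator m fully associated with samples 0 and 1
p_m = update_p(f_m, X)            # object 0 always present -> 1.0; object 1 half the time -> 0.5
```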

We have not yet mentioned modeling relations among objects. Spatial, temporal, or structural connections, such as “to the left,” “on top,” or “connected,” can easily be added to the above DL formalism. Relations and corresponding markers (indicating which objects are related) are mathematically no different from objects, and can be considered as included in the above formulation. This mechanism is “flat” in the hierarchical structure of the brain, meaning that relations “reside” at the same level as the entities they relate. Alternatively, some relations may be realized in the brain hierarchically: relations could “reside” at a higher level, with markers implemented similarly to parametric models. Experimental data might help to find out which relations are “flat” and which are “hierarchical.” Other types of relations are principally hierarchical, e.g. objects-features, situations-objects, etc. We would also add that some relations are not “directly observable” as objects are; say, differentiating between “glued to” and “stuck to” might require knowledge of human actions or of how the world is organized. Prerequisites to some of this knowledge might be inborn [86,87]. We suggest that directly observable relations are learned as parts of a situation, similarly to objects, and this learning is modeled by the DL formalism described above. Relations that require human cultural knowledge may be learned with the help of language, as discussed later, and inborn mechanisms should be further elucidated experimentally. This discussion implies several predictions that could be experimentally tested: the existence of two types of relation mechanisms, flat and hierarchical; suggestions of which types of mechanisms are likely to be used in the brain for which types of relations; and suggestions of mechanisms conditioned by culture and language.

The above formulation, let us repeat, assumes that all the objects have already been recognized; still, it can be applied without any change to a real, continuously working brain with a multiplicity of concurrently running simulators at many levels, feeding each other. Also, the modality of objects (various sensory or motor mechanisms) requires no modifications; emotions can be included as well: some emotions are reducible to representations and are learned to be part of a situation similarly to objects; others involve entirely different mechanisms discussed later [60,88-92]. The bottom-up signals do not have to be definitely recognized objects; these signals can be sent before objects are fully recognized, while object simulators are still running and object representations are vague; this would be represented by x_{ni} values between 0 and 1. The bottom-up signals do not have to correspond to complete objects, but could recreate patterns of activations in sensorimotor brain areas associated with perception of objects; similarly, top-down signals corresponding to situations, p_{m}, correspond to patterns of activations recreating the experience of these situations. The presented formalization therefore is a general mechanism of simulators. A fundamental experimentally testable prediction of the developed theory is that top-down signals originate from vague representations, and the vagueness is determined by degrees of uncertainty of association between bottom-up signals and various higher-level representations.

This example considers the total number of recognizable objects equal to 1000 (D_{o} = 1000). The total number of objects perceived in a situation is set to 50 (D_{p} = 50). The number of essential objects is set to 10 (D_{s} = 10). The number of situations to learn (M−1) is set to 10. Note that the true identities of the objects are not important in this simulation, so we simply use object indexes varying from 1 to 1000 (an index points to neural signals corresponding to a specific object-simulator). The situation names are also not important, and we use situation indexes (an index points to neural signals corresponding to a specific situation-simulator). We would emphasize that the use of numbers for objects and situations, while it may seem consistent with amodal symbols, is in fact nothing but notation. We repeat that the principled differences between PSS and amodal systems are mechanisms in the brain and their modeling, not mathematical notations. Among these mechanisms are simulators, mathematically described by DL.

Let us repeat: amodal symbols are governed by classical logic, which is static and faces CC. DL is a process and overcomes CC. DL operates on PSS representations (models p_{m}), which are vague collections of objects (some of these objects could also be vague, not yet completely assembled representations). Another principled difference is the interaction between perception-based bottom-up and top-down neural fields X_{n} and M_{m}; indexes n and m are just mathematical shorthand for corresponding neural connections. In this paper we consider object perception and situation perception in different sections, but of course the real mind-brain operates continuously; “objects” in this section are neural signals sent to the situation-recognition brain area (and corresponding simulators) by excited neuron fields corresponding to models of recognized objects (partially, as described in Section 2; and as discussed, these signals are being sent before objects are fully recognized, while object simulators are still running).

This example uses simulated data generated by first randomly selecting D_{s} = 10 specific objects for each of the 10 groups of objects, allowing some overlap between the groups (in terms of specific objects). This selection is accomplished by setting the corresponding probabilities p_{mi} = 1. Next we add 40 more randomly selected objects to each group (corresponding to D_{p} = 50). We also generate 10 more random groups of 50 objects to model situations without specific objects (noise); this is of course equivalent to 1 group of 500 random objects. We generate N’ = 800 perceptions for each situation, resulting in N = 16000 perceptions (data samples, n = 1 … 16000), each represented by a 1000-dimensional vector X_{n}. These data are shown in
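The described data generation can be sketched as follows (a sketch of the procedure just described; the random seed and helper names are our choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
D_o, D_p, D_s, M_real, N_prime = 1000, 50, 10, 10, 800

# D_s "essential" objects per real situation (overlap between situations is allowed)
essential = [rng.choice(D_o, size=D_s, replace=False) for _ in range(M_real)]

def perceive(ess):
    # one perception: the essential objects plus random objects, D_p objects in total
    x = np.zeros(D_o)
    x[ess] = 1.0
    rest = np.setdiff1d(np.arange(D_o), ess)
    x[rng.choice(rest, size=D_p - len(ess), replace=False)] = 1.0
    return x

none = np.empty(0, dtype=int)   # noise perceptions have no essential objects
X = np.array([perceive(essential[m]) for m in range(M_real) for _ in range(N_prime)]
             + [perceive(none) for _ in range(M_real * N_prime)])
```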

Then the samples are randomly permuted, reflecting the randomness of real-life perceptual situations. Exhaustively testing all possible associations of the N samples with the 10 situations would require about 10^{N} = 10^{16000} inspections, which is of course impossible. This CC is the reason why the problem of learning situations has remained unsolved for decades. By overcoming CC, DL can solve this problem, as illustrated below.
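The scale of this combinatorial explosion is easy to check (a trivial sketch; the figures follow from the example's 16000 samples and 10 situations):

```python
import math

N, M = 16000, 10
# exhaustive association of N samples with M situations: M**N candidates;
# the number itself is unrepresentable as a float, but its order of magnitude is not
log10_candidates = N * math.log10(M)   # order of magnitude of the search space
```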

The DL algorithm is initiated similarly to Section 2 by defining 20 situational models (an arbitrary selection, given the actual 10 situations) and one random noise model, to give a total of M = 21 models (in Section 2.4,

The initialization and the iterations of the DL algorithm (the first 3 steps of solving DL equations) are illustrated in the corresponding figure in terms of the model vectors p_{m} for each of the 20 models. The vectors have 1000 elements corresponding to objects (vertical axes). The values of each vector element are shown in gray scale. The initial models assign nearly uniformly distributed probabilities to all objects. The horizontal axes are the model index, changing from 1 to 20. The noise model is not shown. As the algorithm progresses, situation grouping improves, and only the elements corresponding to repeating objects in “real” situations keep their high values; the other elements take low values. By the third iteration the 10 situations are identified by their corresponding models. The other 10 models converge to more or less random low-probability vectors.
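The iterative interplay of the association and parameter-update steps can be sketched end-to-end on a small synthetic problem (a toy sketch, not the paper's actual experiment; the problem sizes, seed, and clipping constants are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: 3 "real" situations, each defined by 4 essential objects among D_o = 30
D_o, D_s, M = 30, 4, 4
true_p = np.full((3, D_o), 0.1)
for m in range(3):
    true_p[m, rng.choice(D_o, size=D_s, replace=False)] = 1.0
X = np.vstack([rng.random((100, D_o)) < true_p[m] for m in range(3)]).astype(float)

# vague initial models: probabilities near 0.5, as in the nearly uniform initial state
p = np.clip(0.5 + 0.1 * rng.standard_normal((M, D_o)), 0.05, 0.95)
r = np.full(M, 1.0 / M)
for _ in range(20):
    # association step: f(m|n) from log-likelihoods of the Bernoulli situation models
    logL = X @ np.log(p).T + (1.0 - X) @ np.log(1.0 - p).T          # shape (N, M)
    f = r * np.exp(logL - logL.max(axis=1, keepdims=True))
    f /= f.sum(axis=1, keepdims=True)
    # update step: p_mi as the f-weighted average of bottom-up signals x_ni
    p = np.clip((f.T @ X) / f.sum(axis=0)[:, None], 1e-3, 1.0 - 1e-3)
    r = f.mean(axis=0)
```

On such well-separated data the model vectors typically sharpen from near-uniform to crisp within a few iterations, mirroring the vague-to-crisp behavior described above.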

This fast and accurate convergence can be seen from Figures 5 and 6. We measure the fitness of the models to the data by computing the sum-squared error, using the following equation:

E = Σ_{m∈{B}} Σ_{i=1…Do} (p_{mi} − p^{true}_{mi})^{2}.

In this equation the first summation is over the subset {B} containing the top 10 models that provide the lowest error (and correspondingly, the best fit to the 10 true models). In the real brain, of course, the best models would be added as needed, and the random samples would accumulate in the noise model automatically; as mentioned, DL can model this process, and the reason we did not model it is that it would be too cumbersome to present the results.
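One way to compute such a best-fitting-subset error is sketched below (our interpretation of the described measure; the function name and toy values are hypothetical):

```python
import numpy as np

def fit_error(p_learned, p_true):
    # squared errors between every learned and every true model vector
    errs = ((p_learned[:, None, :] - p_true[None, :, :]) ** 2).sum(axis=2)
    # for each true model, keep only the best-fitting (lowest-error) learned model
    return float(errs.min(axis=0).sum())

p_true = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
p_learned = np.array([[0.9, 0.1],     # fits true model 0
                      [0.1, 0.9],     # fits true model 1
                      [0.5, 0.5]])    # leftover vague model, excluded by the min
e = fit_error(p_learned, p_true)
```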


Here, f(m|n) for the true 10 models m is either 1 (for the N’ data samples from this model) or 0 (for others), and f(m’|n) are the computed associations; in the second line all 10 computed noise models are averaged together, corresponding to one true (random) noise model. The correct associations on the main diagonal in

Errors are plotted on a logarithmic scale, where the log of 0 is numerically non-defined; therefore all values have been bounded from below by 0.05 (a somewhat arbitrary limit).

Similarly to Section 2, learning of perceptual situation-symbols has been accomplished due to the DL process-simulator, which simulated internal model-representations of situations, M, to match patterns in bottom-up signals X (sets of lower-level perceptual object-symbols).

PSS grounds perception, cognition, and high-level symbol operation in modal symbols, which are ultimately grounded in the corresponding brain systems. The previous section provides an initial development of a formal mathematical description suitable for PSS: the DL process “from vague-to-crisp” models PSS simulators. We have considered just one subsystem of PSS, a mechanism of learning, formation, and recognition of situations from the objects making up the situations. (More generally, the formalized mechanism of simulators includes recognition of situations by recreating patterns of activations in sensorimotor brain areas, from the objects, relations, and actions making up the situations.) The mind’s representations of situations are symbol-concepts of a higher level of abstractness than the symbol-objects making them up. The proposed mathematical formalism can be advanced to “higher” levels of more and more abstract concepts. Such an application to more abstract ideas, however, may require an additional grounding in language [42,90,92,93,96-101], as we briefly consider in the next section.

The proposed mathematical formalism can be similarly applied at a lower level of recognizing objects as constructed from their parts; mathematical techniques of Sections 2 and 3 can be combined to implement this PSS object recognition idea as described in [

The developed DL theory, by modeling the simulators, also mathematically models the productivity of the mind's concept-simulator system. The simulated situations and other concepts are used not only in the process of matching bottom-up and top-down signals for learning and recognizing representations, but also in motor actions, and in the processes of imagination and planning.

The presented examples are steps toward a general solution of the binding problem discussed in [

Now we discuss other relationships between the mathematical DL procedures of the previous sections and the fundamental ideas of PSS. Section 2 concentrated on the principal mathematical difficulty experienced by all previous attempts to solve the problem of complex symbol formation from less complex symbols: combinatorial complexity (CC). CC was resolved by using DL, a mathematical theory in which learning begins with vague (nonspecific) symbol-concepts, and in the process of learning symbol-concepts become concrete and specific. Learning could refer to a child’s learning, which might take days or months, or to everyday perception and cognition, taking approximately 1/6^{th} of a second (in the latter case learning refers to the fact that every specific realization of a concept in the world differs in some respects from any previous occurrence, therefore learning-adaptation is always required; in terms of PSS, a simulator always has to re-assemble the concept). In the case of learning situations as compositions of objects, the initial vague state of each situation-symbol is a nearly random and vague collection of objects, while the final learned situation consists of a crisp collection of a few objects specific to this situation. This specific property of the DL process “from vague-to-crisp” is a prediction that can be experimentally tested, and we return to it later. In the learning process random irrelevant objects are “filtered out”: their probability of belonging to a concept-situation is reduced to zero, while the probabilities of relevant objects, making up a specific situation, are increased to values characteristic of these objects actually being present in this situation. The relation of this DL process to PSS is now considered. First we address concepts and their development in the brain. According to [

DL implements this aspect of PSS theory in a most straightforward way. Concept-situations in DL are collections of objects (symbol-models at lower levels, which are neurally connected to neural fields of object-images). As objects are perceptual entities-symbols in the brain, concept-situations are collections of perceptual symbols. In this way situations are perceptual symbols of a higher order of complexity than object-symbols; they are grounded in perceptual object-symbols (images), and in addition, their learning is grounded in perception of images of situations. A PSS mathematical formalization of abstract concepts [

According to PSS, concepts are developed in the brain by forming collections of correlated features [105,106]. This is explicitly implemented in the DL process described in Section 3. The developed mathematical representation corresponds to multimodal and distributed representation in the brain. It has been suggested that a mathematical set or collection is implemented in the brain by a population of conjunctive neurons [

DL processes are mathematical models of PSS simulators. DL symbol-situations are not static collections of objects but dynamic processes. In the process of learning they “interpret individuals as tokens of the type” [

The same DL mathematical procedure can apply to perception of a real situation in the world as well as an imagined situation in the mind. This is the essence of imagination. Models of situations (probabilities of various objects belonging to a situation, and object attributes, such as their locations) can depend on time; in this way they are parts of simulators accomplishing cognition of situations evolving in time. If “situations” and “time” pertain to the mind’s imaginations, the simulators implement the imagination-thinking process, or planning.

Usually we perceive-understand a surrounding situation while at the same time thinking and planning future actions and imagining consequences. This corresponds to running multiple simulators in parallel. Some simulators support perception-cognition of the surrounding situations as well as ongoing actions; they are mathematically modeled by DL processes that have converged to matching internal representations (types) to specific subsets of external sensor signals (tokens). Other simulators simulate imagined situations and actions related to perceptions, cognitions, and actions, produce plans, etc.

The DL mathematical models described here correspond to what [

DL is a general model of interacting bottom-up and top-down signals throughout the hierarchy-heterarchy of the mind-brain, including abstract concepts. The DL mathematical analysis suggests that modeling the process of learning abstract concepts has to go beyond the PSS analysis in [

In Section 3 situation representations have been assembled from object representations. This addresses interaction between top-down and bottom-up signals in two adjacent layers of the mind hierarchy. The mathematical description in Section 3 addresses top-down and bottom-up signals and representations without explicit emphasis on their referring to objects or situations. Accordingly, we would emphasize here that the mathematical formulation in Section 3 equally addresses interaction between any two adjacent layers in the entire hierarchy of the mind-brain, including high-level abstract concepts. A fundamental question of embodiment is discussed now.

Barsalou assumed that higher-level abstract concepts remain grounded-embodied in PSS since they are based on lower-level grounded concepts, and so on down the hierarchy to perceptions directly grounded in sensory-motor signals. The DL modeling suggests that this aspect of the PSS theory has to be revisited. First, each higher level is vaguer than a lower level; several levels on top of each other would result in representations too vaguely related to sensory-motor signals to be grounded in them with any reliability. Second, the Section 3 example is impressive in its numerical complexity, which significantly exceeds anything that has been computationally demonstrated previously; we would like to emphasize again that this is due to overcoming the difficulty of CC. Still, statistically, learning of situations was based on these situations being present in the data with statistically sufficient information to distinguish them from each other and from noise. In real life, however, humans learn complex abstract concepts, such as “system,” “rationality,” or “survival,” without statistically sufficient information being experienced. This is possible due to language.

Language, together with cultural knowledge, is learned at all levels of the hierarchy of the mind-brain from the surrounding language. This is possible because language models exist “ready-made” in the surrounding language. Language can be learned without real-life experience; therefore children can talk about much of cultural content by the age of five or seven. At this age children can talk about many abstract ideas which they cannot yet adequately use in real life. This corresponds to a significant difference between language and cognitive representations, and their different locations in the brain. Language concepts are grounded in the surrounding language at all hierarchical levels. But learning the corresponding cognitive concepts grounded in life experience takes an entire life. Learning language, like learning cognition, can be modeled by DL. Linguists consider words to be learned by memorizing them [

A fundamental difference of language from cognition is embodiment-grounding. Let us repeat: language is grounded in direct experience (of talking, reading) at all levels of the hierarchy, whereas cognition is grounded in direct perceptions only at the bottom of the hierarchy. Higher abstract levels of cognition are grounded in language. A detailed theory of interaction between cognition and language is considered in [16,37,43,83,93,94,100-108,110,112-117]. It leads to a number of verifiable experimental predictions, which we summarize in Section 8. The main mechanism of interaction between language and cognition is the dual model, which suggests that every mental model-representation has two neurally connected parts, a language model and a cognitive model. Language models are learned by simulator processes, similar to PSS simulators; however, “perception” in the case of language refers to perception of language facts. Through neural connections between the two parts of each model, the early-acquired abstract language models guide the development of abstract cognitive models in correspondence with cultural experience stored in language. The dual model leads to the dual hierarchy illustrated in

The dual model mechanism explains how language affects cognition. It follows that language affects culture and its evolution [56,88,108,118,119].

Any mathematical notation may look like an amodal symbol, so in this section we discuss the roles of amodal vs. perceptual symbols in DL. This requires clarification of the word symbol. We touch on related philosophical and semiotic discussions and relate them to the mathematics of DL and to PSS. For brevity we limit discussions to matters of general interest, emphasizing connections between DL, perceptual, and amodal symbols; extended discussions of symbols can be found in [16,43,83,120]. We also summarize here related discussions scattered throughout the paper.

“Symbol is the most misused word in our culture” [

In scientific understanding of symbols and semiotics, the two functions, understanding the language and understanding the world, have often been perceived as identical. This tendency was strengthened by considering logical rules to be the mechanism of both language and cognition. According to Russell [

DL and PSS explain how the mind constructs symbols, which have psychological values and are not reducible to arbitrary logical amodal signs, yet are intimately related to them. In Section 3 we have considered objects as learned and fixed. This way of modeling objects indeed is amenable to interpreting them as amodal symbols-signs. Yet we have to remember that these are but the final states of previous simulator processes, perceptual symbols. Every perceptual symbol-simulator has a finite dynamic life, and then it becomes a static symbol-sign. It could be stored in memory, or participate in initiating new dynamic perceptual symbols-simulators. This infinite ongoing dynamics of the mind-brain ties together static signs and dynamic symbols. It grounds symbol processes in the perceptual signals that originate them; in turn, when symbol-processes reach their final static states-signs, these become perceptually grounded in the symbols that created them. We become consciously aware of static sign-states, express them in language, and operate with them logically. Then, outside of the mind-brain dynamics, they could be transformed into amodal logical signs, like marks on paper. Dynamic processes-symbols-simulators are usually not available to consciousness. These PSS processes involving static and dynamic states are mathematically modeled by DL in Section 3 and further discussed in Section 4.

To summarize, in this paper we follow a tradition using the word sign for an arbitrary, amodal, static, unmotivated notation (unmotivated means, in particular, unemotional). We use the word symbol for the PSS and DL processes-simulators; these are dynamic processes, connecting the unconscious to the conscious, and they are motivationally loaded with emotions. As discussed in Section 2, DL processes are motivated toward increasing knowledge, and they are loaded with knowledge-related emotions, even in the absence of any other motivation and emotion. These knowledge-related emotions have been called aesthetic emotions since Kant. They are foundations of higher cognitive abilities, including abilities for the beautiful and sublime, and they are related to musical emotions. More detailed discussions can be found in [11-13,15,16,36,38,40-42,54,66,83,88,89,93,95,96,100,101,126-138].

DL mathematical models (in Section 3) use mathematical notations, which could be taken for amodal symbols. Such an interpretation would be erroneous. The meanings and interpretations of mathematical notations in a model depend not on their appearance, but on what is modeled. Let us repeat: any mathematical notation taken out of the modeling context is a notation, a static sign. In DL model-processes these signs are used to designate neuronal signals, dynamic entities evolving “from vague to crisp” and mathematically modeling processes of PSS simulators-symbols. Upon convergence of DL-PSS simulator processes, the results are approximately static entities, approximately logical, less grounded and more amodal.

Grounded dynamic symbol-processes as well as amodal static symbols governed by classical logic can both be modeled by DL. DL operates on a non-logical type of PSS representations, which are vague combinations of lower-level representations. These lower-level representations are not necessarily complete images or events in their entirety, but could include bits and pieces of various sensor-motor modalities, memory states, as well as vague dynamic states from concurrently running simulators, the DL processes of on-going perception-cognition (in Section 3, for simplicity of presentation, we assumed that the lower-level object-simulators have already run their course and reached static states; however, the same mathematical formalism can model simulators running in parallel on multiple hierarchical levels). The mind-brain is not a strict hierarchy; same-level and higher-level representations could be involved along with lower levels. DL models processes-simulators, which operate on PSS representations. These representations are vague and incomplete, and DL processes assemble and concretize them. As described in several references by Barsalou, the bits and pieces from which these representations are assembled could include mental imagery as well as other components, including multiple sensory, motor, and emotional modalities; these bits and pieces are mostly inaccessible to consciousness during the process dynamics.

DL also explains how logic and the ability to operate amodal symbols originate in the mind from illogical operations of PSS: mental states approximating amodal symbols and classical logic appear at the end of the DL process-simulators. At this moment they become conscious static representations and lose that component of their emotional-motivational modality which is associated with the need for knowledge (to qualify as amodal, these mental states should have no sources of modality, including emotional modality). The developed DL formalization of PSS therefore suggests using the word signs for amodal mental states as well as for amodal static logical constructs outside of the mind, including mathematical notations, and reserving symbols for perceptually grounded motivational cognitive processes in the mind-brain. Memory states, to the extent they are static entities, are signs in this terminology. Logical statements and mathematical signs are perceived and cognized due to PSS simulator symbol-processes and become signs after being understood. Perceptual symbols, through simulator processes, tie together static and dynamic states in the mind. Dynamic states are mostly outside of consciousness, while static states might be available to consciousness.

Neuroimaging experiments demonstrated that DL is a valid model for visual perception [

From sensory data the brain predicts expected perception and cognition. This includes predictions of complex information, such as situations and social interactions [139,140]. Predictions are initiated by gist information rapidly extracted from sensory data. At the “lower” object level this gist information is a vague image of an object. At higher levels “the representation of gist information is yet to be defined.” The model presented here defines this higher-level gist information as vague collections of vague objects, with objects relevant to a specific situation having just slightly higher probabilities than irrelevant ones. The model is also consistent with the hypothesis in [

The DL mathematical description of PSS should be addressed throughout the mind hierarchy, from features and objects “below situations” in the hierarchy to abstract models and simulators at higher levels “above situations.” Modeling across the mind's modalities will be addressed, including diverse modalities, symbolic functions, conceptual combinations, and predication. Modeling features and objects would have to account for suggestions that perception of sensory features is partly inborn [

The developed theory provides solutions to classical problems of conceptual relations, binding, and recursion. Binding is a mechanism connecting events into a meaningful “whole” (or larger-scale events). The DL model developed here specifies two types of binding mechanisms, “flat” and “hierarchical,” and suggests which mechanisms are likely to be used for various relations. Our model also suggests the existence of binding mechanisms conditioned by culture and language. Recursion has been postulated to be a fundamental mechanism in cognition and language [

Predicted properties of higher-level simulators will be addressed in experimental research [19,139]. These include a prediction that early predictive stages of situation simulations are vague. Whereas vague predictions of objects resemble low-spatial-frequency object imagery [

Another topic discussed in [

Experimental and theoretical future research will address interaction between language and cognition. Language is acquired from the surrounding language rather than from embodied experience; language therefore is more closely aligned with amodal symbols than with perceptual symbols. According to the developed theory, higher abstract concepts could be more strongly grounded in language than in perception; using language, the mind may operate with abstract concepts as with amodal symbols, and therefore have limited embodied understanding, grounded in experience, of how abstract concepts relate to the world. Higher-level concepts may be less grounded in perception and experience than in language. The developed theory suggests several testable hypotheses: (i) the dual model, postulating separate cognitive and language mental representations; (ii) neural connections between cognitive and language mental representations; (iii) language mental representations guiding acquisition of cognitive representations in ontological development; (iv) abstract concepts being more grounded in language than in experience; and (v) this shift from grounding in perception and experience to grounding in language progressing up the hierarchy of abstractness, while grounding in perception and experience increases with age. These make a fruitful field for future experimental research.

The suggested dual model of interaction between language and cognition bears on language evolution, and future research should address theoretical and experimental tests of this connection between evolution of languages, cognition, and cultures [42,61,88,97,112-117,134, 143-148].

Emotions in language and cognition have been addressed in [88,149]. Future research would explore the roles of emotions (i) in language-cognition interaction, (ii) in symbol grounding, (iii) the role of aesthetic and musical emotions in cognition [36,54,128,131,134-136,150], and (iv) emotions of cognitive dissonances [151,152].