Artificial Dasein: Solving the Frame Problem with Incremental Contextualization
Kole Harvey
Independent Scholar, Limerick, Ireland.
DOI: 10.4236/oalib.1104535

Abstract

In this paper, we give an account of a rationally behaving agent—an artificial Dasein—which can perceive the world in terms of relevance to its own goals. The way we achieve this is through a process of incremental contextualization of goals and constraints that range from the purely conceptual and abstract to the well-defined and physical. The model described in this paper combines a conceptual hierarchy with a schema structure, and leads to an account of practical reasoning which relies on two novel ideas: the recursive selection of increasingly contextualized subgoals, and the tractable determination of behavioral consequences through simulation. The present account seeks to provide an outline for developing an agent which does not suffer from the frame problem due to the way in which it incrementally contextualizes its goals until they can be achieved unreflectively by matching them to pre-learned schemas. We believe that this account can lead to a form of artificial intelligence more powerful than traditional attempts based on formal logic.

Harvey, K. (2018) Artificial Dasein: Solving the Frame Problem with Incremental Contextualization. Open Access Library Journal, 5, 1-17. doi: 10.4236/oalib.1104535.

1. Introduction

AI has traditionally encountered problems of scale when dealing with the complexities of the real world. In the past, this has taken the form of the “commonsense knowledge problem”, which concerns the lack of everyday facts held by an artificial agent, and the “frame problem”, which concerns how to know which of the stored facts are relevant in any given situation. Past approaches to AI based on formal logic have suffered a “combinatorial explosion” when confronting these problems, as it is intractable to deduce the innumerable logical consequents of the myriad propositions pertaining to any particular state of the world.

In our approach, we start with the vague idea of a goal state and successively contextualize it until it can be partially or fully “deployed” into physical behavior. In this way, the relevance determined from the goal is always at the origin of behavior selection, and this relevance can be used to focus solely on features of the environment relevant to the goal. In essence, we seek to give the agent a Heideggerian “world”, from which the relevance of any particular aspect is self-evident in practice. Like Dreyfus, we believe that Heidegger’s account of relevance may hold the key to solving the frame problem [1]. The planning process posited here begins in a concept-heavy fashion, whereas action at the physical layer is always maximally contextualized and automatically executed by sensorimotor schemas. This dichotomy, between treating things as more objective or context-free at the more “rational” level of thought and treating things just as they are in the world at the physical level of behavior, corresponds to Heidegger’s notions of “objective presence” (Vorhandenheit) (Heidegger 1996, Int.2.26) and “at hand” (Zuhandenheit) (Heidegger 1996, 1.3.70), respectively. It is in this sense of being-in-the-world (Heidegger 1996, 1.2.53) as it acts that we may call such an agent an artificial Dasein, since for Dasein the relevance of the world is “always already disclosed” (Heidegger 1996, 1.2.58) [2]. Further, instead of giving the agent a database of symbolic facts, we rely on a novel combination of dynamic logic concepts [3] and schemas to provide a more realistic notion of declarative knowledge as part of an “embodied” store of sensorimotor schemas. Accessing facts is done through “simulation” of an embodied schema, and so always occurs in either a real or virtual context. In this way, facts are always implicitly connected to the “referential totality” in practice, and never interpreted as isolated forms in a meaningless symbolic space.

The present model seeks to fulfill the following objectives:

・ To provide an account of a system capable of practical reasoning, that is, carrying out complex tasks involving abstract tools and/or sources of information in order to reach a goal.

・ To be able to solve one or more goals that are defined in a vague fashion.

・ To be able to detect unexpected events and form rational, adaptive responses to them based on internal planning and external infotropic behavior.

・ To do the above while avoiding the combinatorial explosion of formal logical approaches to AI.

It is our hope that the current model can lead to a more thorough investigation into the potential of such an artificial Dasein, which can think and act in the real world with a cognitive fluency that brings it closer to human being than calculator.

2. Concepts in Practice

As we experience the world as infants, the “blooming, buzzing confusion” (James 2007, 462) [4] of sensory information is ultimately broken down into concepts, which enable the grouping of unlike things as identical for the purposes of a given task, ignoring any irrelevant differences. Experience is, as James calls it, “the immediate flux of life which furnishes the material to our later reflection with its conceptual categories… a that which is not yet any definite what, tho’ ready to be all sorts of whats” (James 2003, 93) [5].

We focus on a mathematical formalization of Aristotelian dynamic forms called dynamic logic [3], which posits a model of concepts with a degree of fuzziness that can be dynamically altered, such that concepts become successively discriminated, or “crisp”, with experience. This mechanism avoids the combinatorial problems associated with finding an appropriate degree of fuzziness in fuzzy logic. Further, these concepts can form a hierarchy, with the lower levels more perceptual in nature, and the higher levels made up of relationships, situations, or generalities. We now discuss each of the aforementioned requirements in light of this model.
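
To make this crispening dynamic concrete, the following is a minimal Python sketch of our reading of such a concept; it is illustrative only, not Perlovsky’s formalism. A concept is modeled as a prototype feature vector plus a fuzziness parameter: graded membership falls off with distance from the prototype, and the fuzziness shrinks with experience. All names and constants are our own assumptions.

```python
import math

class DynamicConcept:
    """A concept as a prototype plus an adjustable degree of fuzziness."""

    def __init__(self, n_features, initial_fuzziness=10.0):
        self.prototype = [0.0] * n_features  # running mean of observed features
        self.fuzziness = initial_fuzziness   # large = vague, small = "crisp"
        self.n_observations = 0

    def similarity(self, features):
        """Graded ("fuzzy") membership of an observation, in (0, 1]."""
        d2 = sum((f - p) ** 2 for f, p in zip(features, self.prototype))
        return math.exp(-d2 / (2.0 * self.fuzziness ** 2))

    def observe(self, features, crispen_rate=0.95):
        """Pull the prototype toward the observation and sharpen the concept,
        so that with experience it responds only to closer matches."""
        self.n_observations += 1
        lr = 1.0 / self.n_observations
        self.prototype = [p + lr * (f - p)
                          for p, f in zip(self.prototype, features)]
        self.fuzziness = max(0.5, self.fuzziness * crispen_rate)
```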

2.1. Concept Formation

Formation of concepts can take on a natural progression reminiscent of the human infant [6]. First, concrete objects grounded in direct experience are learned at the lowest levels of the concept hierarchy. As experience with these objects increases, they attain a crisper state, and so respond to (are activated in the presence of) more specific combinations of experienced features. This approach is similar to Dreyfus’s idea of an expert, who can see the world in terms of more nuanced or differentiated concepts after much experience with a field such as chess [7].

The above process is the reverse of some theories of induction, which start with maximally discriminative exemplars and lead to more general theories over time [8]. In the present model, induction instead takes place by learning concepts higher up the hierarchy; while there is a bias for higher concepts to be fuzzier in general, this is not necessarily the case. After sufficient experience with a more abstract concept, its representation can still become crisply differentiated; however, instead of being based on perceptual features, it is activated in the presence of specific relationships between other concepts, or of situations in general (themselves collections of other concepts and the relations between them).

2.2. From Concepts to Goals

How do we utilize previously learned concepts? Knowledge can be expressed in behavior either by using it to make plans/subgoals or by responding directly to affordances for action in the environment. Thus, in order to provide a coherent account, we must show that the idea of concepts as presently described is sufficient for translating knowledge into action. It is important to stress that our account seeks a solution that is qualitatively different from mere “symbol crunching”, inasmuch as when concepts are used in practice they are already coded in terms of significance (upholding the idea that the agent should be able to directly perceive relevance in the world).

To avoid combinatorial explosion, it is necessary to possess vague concepts about situations and goals that do not completely determine their content. How, then, do we know when a situation corresponds to the current goal? Anything that can be made a goal is something that has been experienced some number of times in the past, and so will be embodied to some degree as a concept. As with other “situation concepts”, the goal concept will be amenable to graded membership (or, in the parlance of dynamic logic, “fuzzy similarity”). Thus a similarity function can tell us to what extent a particular situation matches a particular goal concept.

We seek to combine our account of concepts with ideomotor theory, which describes how action can be selected by thinking of its goal [9]. Here we can see a compatible approach if we think of the initial goal as more of a “set of guidelines” than as a static objective. A plan developed before action can reference the vague concept which represents the goal situation; as the plan is unpacked, this vague initial plan can then be incrementally contextualized based on external feedback. This also allows for online modification of the plan through continual comparison of the goal requirements against what is predicted to follow from the current plan.
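
Building on the DynamicConcept sketch above, a hypothetical helper can make this online check explicit: the current plan is retained only while its predicted outcome still counts as a graded match to the (possibly vague) goal concept. The function names and threshold are illustrative assumptions.

```python
def plan_remains_adequate(goal_similarity, predicted_outcome, threshold=0.7):
    """Fuzzy check of whether what the current plan is predicted to produce
    still counts as reaching the (vague) goal concept.

    goal_similarity: callable mapping a feature vector to graded membership,
    e.g. DynamicConcept.similarity above. The threshold is illustrative.
    """
    return goal_similarity(predicted_outcome) >= threshold

# Usage: revise the plan online whenever its predicted end state drifts.
goal = DynamicConcept(n_features=2, initial_fuzziness=3.0)
goal.observe([1.0, 1.0])
if not plan_remains_adequate(goal.similarity, [4.0, -2.0]):
    pass  # trigger incremental recontextualization of the plan
```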

3. Types of Knowledge

3.1. A Tale of Two Knowledge Types

We now extend our model to explain how knowledge can be represented in an agent. Here we focus on an account predicated on the use of “sensorimotor schemas”, which can act as forward and inverse models in the motor control sense [10] [11]. Representing knowledge in a bodily format makes sense from an action-oriented perspective, since knowledge must ultimately interface with the world through the agent’s body. This “bodily memory”, as Bergson calls it, is thus “made up of the sum of the sensorimotor systems organized by habit” (Bergson 2011, 197) [12].

The versatility of schemas cannot be overstated; for example, we have given a previous account of how an agent could acquire new skills through learning schemas in a real-life context akin to a developing infant [13]. Pezzulo [14] presents an elegant way to utilize schemas, either “online” as procedural knowledge (e.g. models for grabbing a door knob), or “offline”, decoupled from their original context [15], to represent declarative knowledge (e.g. the temperature of the door knob). This conversion of procedural to declarative knowledge is compatible with Heidegger’s account of “thematization”, or freeing things in the world so that they become objectively present: “freeing beings encountered within the world in such a way that they can ‘project’ themselves back upon pure discovery, that is, they can become objects. Thematization objectifies.” (Heidegger 1996, 2.4.363). It is only by abstracting or objectifying our personal relation towards entities that we can then adopt the complementary “objective presence” perspective, focusing on objects and their substances, as well as the relations between objects as opposed to their relations with ourselves.

Further, such simulations can be engaged at varying levels of “depth”, ranging from effector-independent “task space” simulations to individual action simulations [16] [17]. This provides the agent with a means to quickly hypothesize the outcomes of a plan at a shallow level of detail and only spend time contemplating its specifics once it is known to be a sensible approach in general, which can greatly reduce planning time. An agent’s schemas are used to predict the outcomes of actions (i.e. proprioceptive and exteroceptive content) by simulating them in an online or offline fashion. Since such simulators can also cover purely exteroceptive content [18], this allows the agent to deduce the effects of actions in the environment in a causative manner, and to predict external events in the environment that are not caused by itself. These schemas, then, represent pure sensorimotor knowledge of the world. Our conceptual structure is tied to the schema structure in an intricate way so as to afford lookup of both procedural and declarative knowledge through simulation during the process of reasoning. It is only by doing so that an internal concept hierarchy can actually be used in practice, which demands a sensorimotor interface with the world.
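
A minimal Python sketch of such a schema, following our reading of the paired forward/inverse models of [10] [11]: the same object can be driven online (the inverse model proposes an action) or offline (the forward model predicts an outcome without moving the body). The toy one-dimensional “reach” instance and all names are assumptions for illustration.

```python
class SensorimotorSchema:
    """Pairs a forward model (state, action -> predicted state) with an
    inverse model (state, goal state -> action)."""

    def __init__(self, name, forward_model, inverse_model):
        self.name = name
        self.forward = forward_model
        self.inverse = inverse_model
        self.accuracy = 0.5  # running prediction accuracy, revised in use

    def act(self, state, goal_state):
        """Online use: the action expected to bring state toward the goal."""
        return self.inverse(state, goal_state)

    def simulate(self, state, action):
        """Offline use: predict the outcome without physical execution."""
        return self.forward(state, action)

# Toy 1-D "reach": state and goal are positions, action is a bounded step.
reach = SensorimotorSchema(
    "reach",
    forward_model=lambda s, a: s + a,
    inverse_model=lambda s, g: max(-1.0, min(1.0, g - s)),
)
print(reach.simulate(0.0, reach.act(0.0, 0.4)))  # offline rehearsal -> 0.4
```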

3.2. Ideomotor Concepts, Schema-Based Behavior

Following ideomotor theory, we maintain the premise that ideas can lead to behavior. In the present treatment, ideas are always represented as concepts of varying levels of crispness. As such, a general approach would be to allow any concept to be treated as a “goal state” towards which behavior is directed. But we also know that schemas are sensorimotor in nature, so how can they be made compatible with the sort of abstract, vague concepts that make up high-level goals, such that their inverse models can convert such a goal into a high-level behavior? This would be highly desirable to achieve, as it is often the case that we do not have a fully contextualized goal in mind before starting to act, instead relying on interaction with the current environment to determine the shape our abstract goal ultimately takes in practice.

Here we can refer to several different works which extend the abilities of sensorimotor schemas. First, we refer to the work of Schubotz [18], who introduces multimodal schemas which can be used for proprioceptive or exteroceptive (as well as other modality) prediction, either together or independently, and which allow schemas to predict external events as well as those undertaken by the agent’s own body. Second, we refer to the work of Barsalou in showing how even abstract ideas can be ultimately grounded in bodily experience [19], and how particularly abstract ideas are thought to require “situating contexts” to be interpreted in (what we would call “contextualization” in this paper). Third, at higher levels of the motor hierarchy, schemas can work with effector-independent variables [16], which would be a basic requirement for allowing schemas to deal with abstract goals in terms of higher-level parameters such as “temperature”, which are invariant with respect to which body appendage is used to test for them. Finally, we know that schemas can be used to convert between procedural and declarative knowledge, and so it is feasible that goals can be set in terms of declarative facts such as “refilling the fridge”.

By outfitting the agent’s schemas with these additional capabilities, it can freely use any degree of abstraction to plan its behavior, depending on what is learned to be effective. The present account seeks to break away from the bias of assuming that every action is well planned and thought out beforehand, and instead allows behavior to be optimized in a more balanced fashion, with action taking precedence over planning whenever it is deemed to be the more adaptive response. For example, when prior information is scarce, it is preferable to act first in order to tease information from the environment. Further, in very well-known environments it is often preferable to launch a habit (executed as unreflective “at hand” behavior) that is known to work well and wait for it either to succeed or to result in unexpected effects which can be dealt with on a case-by-case basis.

3.3. Schema-Based Simulation

As outlined in [14] , the inputs and outputs of schemas can be redirected such that the schemas become simulators which are run in an offline fashion. Detached from real world input and output, they can simulate virtual conditions without having the agent actually perform any movements physically. But to avoid an infinite regress of homunculus-like “higher controllers”, we must propose a mechanistic way in which the schemas can be set up and used as simulators in the context of actual behavior. We posit that simulation itself can be thought of as a behavior that can be selected and executed as a cognitive alternative to engaging in physical behavior.

Concepts are also amenable to simulation. By using their original grounded features to set the parameters of the schema structure, the sensorimotor schemas will act as they would in the presence of a real-world instance of the replayed concept. For example, if the concept of a mug is thought of, the correct grasp parameters for gripping the mug can be loaded into the gripping schema, in line with studies on the automatic evocation of motor programs upon object perception [20] [21], a form of grounded cognition [22]. Vague concepts must be contextualized before they can be replayed. This can be done by imagining a preexisting relevant context in which to engage the simulation, for the purposes of contextualizing the concept and thus allowing for a particular observation of it. This resembles Barsalou’s idea of “situated conceptualization” [23], which points out that simulation of any particular concept leads to the automatic simulation of related concepts, or what Heidegger would call “the totality of equipment”.
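
As a toy illustration of loading grasp parameters from a replayed concept into a gripping schema, consider the following sketch; the mug features, values, and parameter names are entirely hypothetical.

```python
# Grounded features stored with the concept of a mug (hypothetical values).
MUG_CONCEPT = {"name": "mug", "grip_aperture_cm": 8.5, "grip_force_n": 4.0}

def parameterize_gripping_schema(concept):
    """Offline replay: set the gripping schema's parameters from the
    concept's grounded features, as if a real mug were being perceived."""
    return {
        "aperture_cm": concept["grip_aperture_cm"],
        "force_n": concept["grip_force_n"],
        "source": "simulation",  # mark as virtual rather than perceptual
    }

print(parameterize_gripping_schema(MUG_CONCEPT))
```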

The idea of concepts being a superposition of multiple possibilities until they are observed in context has been described mathematically as the “actualization of potentiality” using principles from quantum mechanics by Aerts [24] [25], and the ability of these models to accurately recreate pertinent psychological effects lends credence to the idea that all concepts are contextualized in practice. We employ the successive contextualization of concepts as part of the main reasoning process outlined in this paper, as abstract goals are combined with online information to result in a chain of more basic or immediate behaviors that are amenable to execution with pre-learned “habitual” schemas.

3.4. The Working Memory Figure

A rational agent must have access to a short-term store which can record relevant aspects of recent experience and can be used as a “temporary scratchpad” with which to compare and contrast alternatives or to make implicit expectations explicit as parts of a conscious plan of action. Another requirement for short-term storage is to allow the agent to engage in recursive bouts of reasoning, suspending some higher goals while it attends to intermediate and short-term goals, as we describe in more detail in the next section. It is of note that this store is not effectively unlimited, as the mind seems to be with regard to past experience. Further, there is a limit on how many levels of recursion our working memory can engage in [26]. Here we posit a flat figure to serve the role of the short-term store, thus bypassing the limits of recursion. The proposed working memory figure holds onto the state of the entire behavioral process while matters of narrower focus are attended to. Simultaneously, the figure can be updated based on new information encountered during the execution of shorter-term goals. This entails that upon completion of subtasks, instead of popping an out-of-date goal from a “stack”, the agent can reevaluate what is most suitable given the updated context. Due to the way in which hierarchical goals are incrementally contextualized in our approach, there is no large penalty for this sort of constant reevaluation of pertinent information, as it remains sufficiently localized. In fact, we hypothesize that this should instead speed up the reasoning process, as no time is wasted on irrelevant or out-of-date aspects of the environment.
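
A minimal sketch of the working memory figure as a data structure, assuming a flat dictionary of concepts plus a predicted-transition graph; the reevaluate method stands in for the stack-pop of classical planners, rescoring stored concepts against the current context. All names are illustrative.

```python
class WorkingMemoryFigure:
    """Flat short-term store: concepts plus a predicted-transition graph,
    used in place of a recursion stack."""

    def __init__(self):
        self.concepts = {}     # name -> goal-relevant content (a dict)
        self.transitions = {}  # (src, behavior) -> (dst, confidence)

    def record(self, name, content):
        self.concepts[name] = content

    def record_transition(self, src, behavior, dst, confidence):
        """Store a simulated outcome as an explicit, graded contingency."""
        self.transitions[(src, behavior)] = (dst, confidence)

    def reevaluate(self, score_fn):
        """On completing a subtask, rescore every stored concept against
        the *updated* context instead of popping a possibly stale goal."""
        return max(self.concepts, key=lambda n: score_fn(self.concepts[n]))
```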

Each time a simulation occurs, its outcome and prediction confidence can be stored in the figure as a (sensorimotor) contingency. By recording outcomes on an internal figure, the predicted transition graph can be made explicit. Each outcome can then compete based on a function of its accuracy and applicability to the goal(s) at hand, eventually allowing the agent to settle on one or more compatible winners that will be selected for execution, similar to the affordance competition hypothesis [27]. After execution, the actual outcome is observed and the schema’s accuracy revised by comparing its prediction to that outcome; this accuracy then plays a part in the competitiveness of the schema in future selections, as outlined in [14].
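
The accuracy revision and outcome competition might be sketched as follows, reusing the accuracy field of the SensorimotorSchema sketch above; the exponential moving average and the product scoring rule are our illustrative choices, not prescriptions from [14].

```python
def revise_accuracy(schema, predicted, actual, rate=0.1):
    """After execution, nudge the schema's running accuracy toward how well
    its prediction matched the observed outcome (illustrative 1-D error)."""
    match = max(0.0, 1.0 - abs(predicted - actual))
    schema.accuracy += rate * (match - schema.accuracy)

def contingency_score(confidence, goal_applicability):
    """Recorded outcomes compete on prediction confidence combined with
    applicability to the goal(s) at hand; a simple product is one choice."""
    return confidence * goal_applicability
```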

3.5. Updating Beliefs and Predicting the Future

It is often the case that when a certain event happens (say, I observe a carton of milk going into the garbage), I can immediately know its relevant consequences (if I open the fridge, there will be no milk inside). How is it possible to have this sort of immediate knowledge without facing the combinatorial problem of predicting every outcome of an event? In other words, how does an agent update its “mental state” (its set of declarative facts about the world) based on common notions of cause and effect? Such a set of beliefs must be updated immediately, be about relevant aspects of the situation, and allow for uncertainty.

The present account handles this as follows. Upon perception of the relevant conceptual aspects of the environment, either an appropriate situation concept is activated or infotropic behavior attempts to discern the situation further, as shown in Figure 1. The behavioral setting [28] (in this case, being in a kitchen) can heavily bias which infotropic behaviors in particular are accessed (such as imagining whether milk was recently purchased), and avoids the problem of searching in irrelevant memory or physical locations for clues (such as looking under the sink for milk). Once the situation concept has been determined, the agent is free to make explicit any of its beliefs pertaining to that situation by simulating it from a particular perspective and transforming implicitly coded knowledge into declarative fact via internal infotropic behavior such as “internal saccades” [29]. We discuss this process at length in Section 4.3 (Perspective Taking through Simulation).
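
A sketch of this discernment step, assuming situation concepts expose the graded similarity measure from earlier; the setting-indexed table of infotropic behaviors is entirely hypothetical, but shows how a behavioral setting restricts where the agent looks for clues.

```python
def infotropic_behaviors_for(setting):
    """Hypothetical lookup: each behavioral setting suggests its own
    information-seeking behaviors (and excludes irrelevant ones)."""
    table = {"kitchen": ["open the fridge", "recall recent milk purchases"]}
    return table.get(setting, ["survey the surroundings"])

def discern_situation(percepts, situation_concepts, setting, threshold=0.7):
    """Activate the best-matching situation concept, or fall back to
    setting-biased infotropic behavior when no match is strong enough."""
    score, best = max(
        ((c.similarity(percepts), c) for c in situation_concepts),
        key=lambda pair: pair[0],
    )
    if score >= threshold:
        return ("situation", best)
    return ("explore", infotropic_behaviors_for(setting))
```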

3.6. Between the Abstract and the Real

To comprehend the environment is to match a set of schemas to it, such that its opportunities for action become evident, allowing the agent to compare these actions against one another (see Figure 2) in a competition process which takes into account the present goals, bodily state, and background context. We believe that this process of comprehension and competition adequately serves the role of a pre-reflective “motor intentionality” as proposed by Merleau-Ponty (Merleau-Ponty 2013, p. 127) [30] .

The environment can also be recognized as an instance of a particular class of event, location, etc. In fact, any arbitrary subset of features pertaining to the current environment may be matched to a known concept. A main goal of the present account is to avoid such arbitrary judgments, and so we have focused on incrementally reducing the degrees of freedom of the agent in tandem with the execution of behavior. By doing so we coax the world to reveal its relevant aspects to the agent. It is important to differentiate our account from the idea of “matching the context to a frame”, which by its very name clearly falls prey to the frame problem. By instead fuzzy-matching a learned situational concept to the current situation, we seek to implicitly reveal the relevant aspects of the world, rather than logically deduce a set of definite facts. This use of vague situational concepts can be thought of as corresponding to Merleau-Ponty’s “intentional arc”, which “projects around us our past, our future, our human milieu, our physical situation, our ideological situation, and our moral situation.” (Merleau-Ponty 2013, p. 157). The aforementioned relevant features are then made explicit through the process we discuss next, referred to as “practical reasoning”, in which the goal is incrementally transcribed into definite motor behavior.

Figure 1. Upon perceiving a milk carton going into the garbage, the predictive model of opening the fridge results in a sensory simulation of no longer seeing milk inside or the declarative fact of being “out of milk”.

Figure 2. Schema matching. (1) The environment is not perceived as distinct objects, but as a totality of equipment that is taken in as a whole; (2) In the case that the goal relies on use of the computer, the computer’s on-switch will be more readily matched to a “switching” action (note that it is the target of the action, the switch, that is brought to attention by matching a motor schema to it, and not the entire computer); (3) In the case that a hammer is required in the task, or if it may be used in a non-canonical fashion as a substitute for something else (like a door stop), the hammer will be brought into attention by matching a gripping schema to it.

4. Practical Reasoning

4.1. Overview of Reasoning

The main novelty we propose in our account of reasoning is to extend the idea of ideomotor association. Instead of using it in the context of well-defined goals with well-defined behaviors, we make use of the present hybrid conceptual-sensorimotor model to deal with more complex, vague, or dynamic cases. We do this by recursively contextualizing the current goal in order to successively restrict the behaviors available to the agent, simultaneously entertaining multiple hypotheses about the world and about the actions that can be taken, until an appropriate behavior is found that can be run unreflectively by delegation to an automatic low-level schema.

An agent will first try to match a high-level habit schema to the situation. When the set of habitual behavioral schemas is not applicable, the agent can instead begin to break down the world, seeing not full “totalities of equipment” but individual affordances. From this, the agent in partially novel circumstances can still rely on general principles such as naive physics (a form of exteroceptive prediction) or simulated heeding of affordances in order to conceive of the utility of each available action, and even produce non-canonical uses for known instruments. It is through this process that the agent may learn how to use a cup as a pencil-holder, or a dish as a paperweight. The breakdown of wholes into subparts results in setting a new subgoal, directed towards resolving the current epistemological or practical barrier which the agent cannot solve with any of its premade schemas. Once the subgoal is set, the agent can then recurse one level deeper, starting the process of behavior selection again on a more primitive set of stimuli and concepts, restricted in both space and time compared to the previous behavioral level. Once this subgoal has been completed, the agent can then continue at the level of behavior engaged in before recursing. An example of this procedure is outlined in Figure 3.
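
The recursion just described can be condensed into a short sketch; the three injected callables stand in for machinery described elsewhere in the paper (fuzzy habit matching, subgoal formation, and unreflective execution) and are assumptions rather than a committed interface.

```python
def contextualize(goal, context, habit_match, make_subgoal, execute, depth=0):
    """Recursive incremental contextualization (Section 4.1).

    habit_match(goal, context) -> schema or None   (fuzzy habit matching)
    make_subgoal(goal, context) -> a more contextualized subgoal
    execute(schema, context)   -> new context after unreflective execution
    """
    if depth > 10:  # guard: the goal never grounded in automatic behavior
        raise RuntimeError("incremental contextualization did not converge")
    schema = habit_match(goal, context)
    if schema is not None:          # directly achievable: "at hand" coping
        return execute(schema, context)
    subgoal = make_subgoal(goal, context)
    context = contextualize(subgoal, context, habit_match,
                            make_subgoal, execute, depth + 1)
    # the subgoal is done; resume the suspended goal in the updated context
    return contextualize(goal, context, habit_match,
                         make_subgoal, execute, depth + 1)
```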

4.2. Goal Selection

Higher level goals will tend to be vaguer and resemble more of a loose collection of requirements, while lower level goals should resemble more distinct concepts that deal with material aspects of the world. But at each level of the decision process, the same machinery that deals with concrete behaviors can be recycled to also choose between abstract behaviors, in line with other accounts of “neural recycling” [31] .

One of the key principles of the present account is that behavior selection should be done indirectly, by the successive addition of contextual constraints, rather than by intense deliberation among an astronomical number of combinations. Thus, we can define the process of behavioral selection as taking a top-down goal as input and producing subgoals as output to the lower levels. Physical behavior only ever occurs after the matching of an unreflective or “automatic” sensorimotor schema, and so the selection processes outside of bottom-level schema matching have a primary objective of converting the complex world into a set of automatically achievable sub-behaviors. Through incremental contextualization, the reasoning process thus draws upon past knowledge (the usual location of certain tools), the current state of the body (remaining energy levels), and other contextual requirements (not wasting resources). Explicit and implicit objectives must be combined in a tractable way with the information given by the present environment, as well as past episodic knowledge about similar situations and future expectations, in order to solve the task intelligently and avoid combinatorial overhead.

Figure 3. Incremental goal contextualization. (1)-(3) The high-level goals Make Coffee, Cleaning, and Make Dinner compete; (5) (11)-(13) The actual format each goal would take in practice is simulated in order to generate accurate evaluations of the effort and affect involved. Here it is assumed Make Coffee wins; (6)-(8) The subgoals Make Pod Coffee, Make Instant Coffee, and Make Grounds Coffee are simulated in a similar manner. Make Instant Coffee wins; (8)-(10) The same process continues at deeper levels of contextualization.

Invoking the ideomotor principle, the current goal is sent to the set of schemas at the current level so that their inverse models can generate a set of behaviors which can compete against one another. Next, the contextualized effects of selecting each of these behaviors can be simulated from a specific perspective utilizing the forward models of the schema set. Notice that this process is qualitatively different from breaking the goal up into a set of logical facts and deducing their consequents through formal logical rules. It takes on a more “experiential” character, which is more powerful than pure logical deduction because it is focused on aspects of the situation directly related to the known goals, or known from experience to be relevant.

Even when placed in an unfamiliar environment, the agent can still attempt to use its general knowledge about recognized aspects of it to deduce a more accurate assessment of effort and goal appropriateness. This will also take place via simulation, except that the simulation will be of a vaguer concept, from which specific declarative knowledge within the sphere of recognition is used to extrapolate new facts to work with. It is important that simulation can be employed in both known and unknown contexts, maximally utilizing whatever facts are available, so that when presented with somewhat novel scenarios the agent does not freeze up completely.

Ultimately, the consideration of possible behaviors will result in a “pull” of some states over others in terms of desirability. Once a set of mutually compatible hypothesized goal states has built up a sufficient lead over its competitors, it will capture the focus of attention and be output from this level (along with its associated constraints). In this way, the output is temporarily locked in, but it is a “soft lock”, as the output merely takes on a hysteretic bias that makes it harder for competitors to replace it. Sufficiently more demanding or urgent goals can override it at any point if they are strong enough (for example, a burning pot roast can override and interrupt the behavior of washing dishes).
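
A sketch of this competition with a hysteretic “soft lock”, under the assumption that candidate goal states arrive with scalar desirability scores; the bias and override margin are illustrative constants.

```python
def select_with_soft_lock(candidates, current, bias=0.2, margin=0.5):
    """Competition with a hysteretic 'soft lock': the currently selected
    goal state receives a bias, and a competitor must beat it by a clear
    margin to take over."""
    scored = dict(candidates)
    if current in scored:
        scored[current] += bias  # hysteresis favoring the incumbent
    winner = max(scored, key=scored.get)
    if current in scored and winner != current:
        if scored[winner] - scored[current] < margin:
            return current       # challenger not demanding enough
    return winner

# A sufficiently urgent goal overrides the incumbent at any point:
print(select_with_soft_lock({"wash dishes": 0.6, "save pot roast": 2.0},
                            current="wash dishes"))  # -> "save pot roast"
```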

4.3. Perspective Taking through Simulation

In order to simulate a scenario, I must change my perspective. Armed with a set of predictive models and the ability to operate them offline under intentional control, an agent can temporarily utilize its predictive powers while inhibiting the real inputs and outputs of those models, instead providing a hypothesized scenario as a concept and investigating the simulated outcome. Treating the simulated situation as a real one affords the agent the ability to recycle much of the machinery used to deal with the real world, such as the focusing of attention on particular features through internal saccades [29], and the generation of indications for possible actions based on perceived affordances. This allows the agent to simulate the outcome of executing any action in the environment as well as to predict what new affordances it generates [32], allowing for the comparison of actions in multiple dimensions and on multiple timescales. Further, the parallel application of schemas rids the agent of the necessity to check each and every feature of the environment against vast knowledge stores, thus avoiding the frame problem.

Not all of the data generated during a simulation is relevant, and so only that necessary for task achievement should be focused on. Further, the results generated from the simulation must be stored in a coherent way such that they are not treated as reality but more of a prediction, the confidence of which should be based on the accuracy of the schemas used to construct the simulation. To do this, the working memory figure is used to store results as concepts (thus implicitly storing all their content related to the goal), and the transition graph of the working memory figure stores the confidence related to each concept. Thus at any point the agent can reflect on the concepts stored in the working memory figure in order to make their goal-relevant information explicit. This is a particularly powerful feature in a dynamic context in which the goal constantly changes its form, as instead of running the full simulation again the agent can simply perform internal infotropic behavior with respect to a result concept stored in the working memory figure in order to test its relevance.

4.4. From Reasoning to Execution

As goals are successively contextualized, they will ultimately be broken down into subgoals which directly match automatic schemas. These schemas can then be deployed as packets of automatic behavior. The sequence of behaviors to take is recorded as a path drawn in the working memory figure, and any such path may initiate execution of the planned behavior. Depending on the personality of the agent and contextual aspects such as urgency, this plan can be revised until it satisfies certain constraints or until action cannot be delayed further. Also, it is not necessary to have a full path drawn from start to finish (nor is this likely, except in well-known conditions). Instead, the agent can choose to execute partial plans in order to observe their outcomes and guide the construction of the plan in tandem with the real environment. An example of this is given in Figure 4.
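
Partial-plan execution might be sketched as follows: a possibly incomplete path from the working memory figure is executed step by step, with observation after each step guiding extension of the plan. Every callable is an illustrative stand-in, and termination depends on replan eventually producing a path that reaches the goal.

```python
def run_partial_plan(path, execute, observe, goal_reached, replan):
    """Execute a possibly incomplete path, observing after each step, and
    extend the plan in tandem with the real environment when it runs out."""
    state = observe()
    for behavior in path:
        if goal_reached(state):
            return state
        execute(behavior)
        state = observe()
    if goal_reached(state):
        return state
    # path exhausted short of the goal: let the observed outcome guide
    # construction of the next stretch of the plan
    return run_partial_plan(replan(state), execute, observe,
                            goal_reached, replan)
```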

In very familiar situations, the reasoning process finishes almost as soon as it starts, as a high level habitual behavior is selected and deployed without the need for further deliberation (in the working memory figure this would constitute a single behavior which leads from the start state to the goal state).

During automatic behavior, the agent’s schemas highlight specific affordances, allowing the agent to gravitate to that which is known to be relevant without having to explicitly contemplate each and every function of any particular object, or to differentiate between options at a cognitive level. Instead, the agent essentially responds to the “totality of equipment”, each action resulting in succession from something prior and being directed toward a future purpose―a for-the-sake-of-which (Heidegger 1996, 1.3.84). As the agent’s behavior is expressed at the physical level, the world becomes disclosed to the agent in terms of its use. As is the case with Heidegger’s account of “at hand” behavior, violated expectations during execution of an automatic schema can still result in control being returned to the “supervisory” layer of the agent which returns to reasoning as a means of planning a way around the obstacle.

Fully contextualized and expressed, the agent’s behavior is both commonsensical and adaptive, or as some might even say, “logical”. However, this logic comes not at the cost of intractable calculation, but rather from the work done throughout the agent’s development in building a set of schemas and concepts which break down the complexity of the environment, while retaining, and in fact being directed towards, the relevance of the environment to the agent itself. That is to say, we claim it is the internal hierarchy of schemas and concepts that converts exterior objective reality into a personal “world” for the agent as artificial Dasein, the relevance of which is found a priori in practice, and which thus requires no intractable calculation at the level of physical behavior.

Figure 4. Reasoning with the working memory figure.

5. Conclusions

In this paper, we have given an account of a rationally behaving agent, or artificial Dasein, which can engage with the world in terms of relevance to itself. The way we achieve this is through a process of incremental contextualization of goals and constraints that range from the purely conceptual and abstract to the well-defined and physical. The agent is capable of doing “what makes sense” in any given situation based on its past experience and stores of commonsense knowledge in the form of a schema structure which is accessible through a hierarchy of concepts. Where information is lacking, the agent can interact with the environment based on more basic schemas to reveal hidden structure.

The model described in this paper employs the ideomotor principle along with the complementary ideas of the recursive selection of increasingly contextualized subgoals and the tractable prediction of behavioral consequences through simulation. We believe that combining these capabilities will lead to a form of artificial intelligence more powerful than traditional attempts based on formal logic.

By sensibly using knowledge applicable to the current situation, and efficiently determining when more experimental behavior is required, the agent can maximize its past experience without being stuck in a deterministic rut and also engage in bouts of reasoning without falling into a combinatorial trap. We strongly believe the combination of planning through simulation and unreflective coping behavior as outlined in this paper will prove to be the key to solving the frame problem.

The practical significance of developing agents which do not incur the frame problem is evident, as they would be capable of solving real-world problems in the complex social-material world, which relies heavily on nuance and “unspoken rules” to navigate. While the current proposal presents the agent’s reasoning skills in terms of a relatively simple reasoning task, we believe that the structures presented provide an adequate springboard for developing skills in more complex scenarios that require intimate knowledge of the social milieu. Specifically, we believe that recognizing familiar contexts will prove key to solving problems in a social arena. Further, we believe that in order to make problems tractable, it will be necessary to reduce them to behaviors which can be completed through chains of unreflective action, actions which in our account can be processed automatically by matching to low-level schemas. The fact that the combination of “mental” planning with unreflective physical action is precisely how the human being solves problems in the real world gives us confidence that such an approach would be successful if developed to its logical ends. This paper is a first step towards those ends.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Dreyfus, H. (2001) Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian. 1-40.
[2] Heidegger, M. (1996) Being and Time: A Translation of Sein und Zeit. SUNY Press.
[3] Perlovsky, L.I. (2006) Symbols: Integrated Cognition and Language.
[4] James, W. (2007) The Principles of Psychology. Cosimo.
[5] James, W. (2003) Essays in Radical Empiricism. Dover Publications.
[6] Clark, E.V. (1999) Acquisition in the Course of Conversation. Studies in the Linguistic Sciences (Forum Lectures from the 1999 Linguistics Institute), 29, 1-18.
[7] Dreyfus, H.L. (2002) Intelligence without Representation—Merleau-Ponty’s Critique of Mental Representation. The Relevance of Phenomenology to Scientific Explanation. Phenomenology and the Cognitive Sciences, 1, 367-383.
https://doi.org/10.1023/A:1021351606209
[8] Tenenbaum, J.B., Griffiths, T.L. and Kemp, C. (2006) Theory-Based Bayesian Models of Inductive Learning and Reasoning. Trends in Cognitive Sciences, 10, 309-318.
https://doi.org/10.1016/j.tics.2006.05.009
[9] Greenwald, A.G. (1970) Sensory Feedback Mechanisms in Performance Control: With Special Reference to the Ideo-Motor Mechanism. Psychological Review, 77, 73-99.
https://doi.org/10.1037/h0028689
[10] Wolpert, D.M. and Kawato, M. (1998) Multiple Paired Forward and Inverse Models for Motor Control. Neural Networks, 11, 1317-1329.
https://doi.org/10.1016/S0893-6080(98)00066-5
[11] Haruno, M., Wolpert, D.M. and Kawato, M. (2001) MOSAIC Model for Sensorimotor Learning and Control. Neural Computation, 13, 2201-2220.
https://doi.org/10.1162/089976601750541778
[12] Bergson, H. (2011) Matter and Memory. Martino Fine Books.
[13] Harvey, K. (2018) An Open-Ended Approach to Piagetian Development of Adaptive Behavior. OALib, 5, 1-33.
https://doi.org/10.4236/oalib.1104434
[14] Pezzulo, G. (2011) Grounding Procedural and Declarative Knowledge in Sensorimotor Anticipation. Mind Lang., 26, 78-114.
https://doi.org/10.1111/j.1468-0017.2010.01411.x
[15] Grush, R. (2004) The Emulation Theory of Representation: Motor Control, Imagery, and Perception. Behavioral and Brain Sciences, 27, 377-442.
[16] Schmidt, R.A. (1975) A Schema Theory of Discrete Motor Skill Learning. Psychological Review, 82, 225-260.
https://doi.org/10.1037/h0076770
[17] Jamone, L., Natale, L., Hashimoto, K., Sandini, G. and Takanishi, A. (2011) Learning Task Space Control through Goal Directed Exploration. 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, 7-11 December 2011, 702-708.
[18] Schubotz, R.I. (2007) Prediction of External Events with Our Motor System: Towards a New Framework. Trends in Cognitive Sciences, 11, 211-218.
[19] Barsalou, L.W. and Wiemer-Hastings, K. (2005) Situating Abstract Concepts. In: Pecher, D. and Zwaan, R.A., Eds., Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking, Cambridge University Press, Cambridge, 129-163.
https://doi.org/10.1017/CBO9780511499968.007
[20] Borghi, A.M., Bonfiglioli, C., Lugli, L., Ricciardelli, P., Rubichi, S. and Nicoletti, R. (2007) Are Visual Stimuli Sufficient to Evoke Motor Information? Studies with Hand Primes. Neuroscience Letters, 411, 17-21.
https://doi.org/10.1016/j.neulet.2006.10.003
[21] McBride, J., Boy, F., Husain, M. and Sumner, P. (2012) Automatic Motor Activation in the Executive Control of Action. Frontiers in Human Neuroscience, 6, 82.
[22] Barsalou, L.W. (2008) Grounded Cognition. Annual Review of Psychology, 59, 617-645.
https://doi.org/10.1146/annurev.psych.59.103006.093639
[23] Barsalou, L.W. (2009) Simulation, Situated Conceptualization, and Prediction. Philosophical Transactions of the Royal Society B, 364, 1281-1289.
[24] Gabora, L., Rosch, E. and Aerts, D. (2008) Toward an Ecological Theory of Concepts. Ecological Psychology, 20, 84-116.
[25] Aerts, D. (2009) Quantum Structure in Cognition. Journal of Mathematical Psychology, 53, 314-348.
https://doi.org/10.1016/j.jmp.2009.04.005
[26] Read, D.W. (2008) Working Memory: A Cognitive Limit to Non-Human Primate Recursive Thinking Prior to Hominid Evolution. Evolutionary Psychology, 6, 676-714.
https://doi.org/10.1177/147470490800600413
[27] Cisek, P. (2007) Cortical Mechanisms of Action Selection: The Affordance Competition Hypothesis. Philosophical Transactions of the Royal Society B, 362, 1585-1599.
https://doi.org/10.1098/rstb.2007.2054
[28] Barker, R.G. (1968) Ecological Psychology. Stanford University Press, Stanford.
[29] Melcher, D. and Kowler, E. (2001) Visual Scene Memory and the Guidance of Saccadic Eye Movements. Vision Research, 41, 3597-3611.
https://doi.org/10.1016/S0042-6989(01)00203-6
[30] Merleau-Ponty, M. (2013) Phénoménologie de la perception. Gallimard éditions.
[31] Badets, A. and Osiurak, F. (2017) The Ideomotor Recycling Theory for Tool Use, Language, and Foresight. Experimental Brain Research, 235, 365-377.
https://doi.org/10.1007/s00221-016-4812-4
[32] Pezzulo, G., Barca, L., Bocconi, A.L. and Borghi, A.M. (2010) When Affordances Climb into Your Mind: Advantages of Motor Simulation in a Memory Task Performed by Novice and Expert Rock Climbers. Brain and Cognition, 73, 68-73.
https://doi.org/10.1016/j.bandc.2010.03.002
