Open Journal of Philosophy
Vol. 08, No. 02 (2018), Article ID: 83615, 28 pages
10.4236/ojpp.2018.82010

The Experience-Dependent Dynamics of Human Consciousness

Birgitta Dresp-Langley

CNRS UMR 7357, University of Strasbourg, Strasbourg, France

Copyright © 2018 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 1, 2018; Accepted: April 6, 2018; Published: April 9, 2018

ABSTRACT

By reviewing much of the neurobiology of consciousness, this article highlights some major reasons why a successful emulation of the dynamics of human consciousness by artificial intelligence is unlikely. The analysis leads to the conclusion that human consciousness is epigenetically determined and experience- and context-dependent at the individual level. It is subject to changes in time that are essentially unpredictable. If cracking the code to human consciousness were possible, the result would most likely have to consist of a temporal pattern code simulating long-distance signal reverberation and the de-correlation of spatial signal contents from temporal signals. In the light of the massive evidence for complex interactions between implicit (non-conscious) and explicit (conscious) contents of representation, the code would have to be capable of making implicit (non-conscious) processes explicit. It would have to be capable of a progressively less arbitrary selection of temporal activity patterns in a continuously developing neural network structure identical to that of the human brain, from the synaptic level to that of higher cognitive functions. The code’s activation thresholds would depend on specific temporal signal coincidence probabilities, would vary considerably with time and across individual experience, and would therefore require dynamically adaptive computations capable of emulating the properties of individual human experience. No known machine or neural network learning approach has such potential.

Keywords:

Consciousness, Human, Time, Temporal Neural Signatures, Experience-Dependent Brain Learning

1. Introduction

In the current context where research aimed at creating artificial intelligence capable of emulating the properties of human consciousness is proliferating, a deeper reflection on the essentially epigenetically determined and experience-dependent dynamics of human consciousness has become a timely and necessary endeavour. The astonishing plasticity of the human brain enables life-long learning at all functional levels, from the synapse to higher cognitive processes; this plasticity is itself determined by time and context, and is driven by experience-dependent epigenetic and environmental factors in complex interactive ways. No machine learning algorithm has so far succeeded in reproducing these dynamics in all their complexity. This article reviews previous attempts to “crack the code” to human consciousness in the light of what is known about the neurobiology of conscious behaviour. The conclusions from this analysis highlight why it is unlikely that any machine will ever be able to successfully emulate the dynamics of human consciousness in all their complexity.

1.1. Conscious Behaviour at the Tip of the Iceberg

Approaches where a specific conscious behaviour is considered as an indicator of consciousness ( Lashley, 1956 ) generally consist of having human observers perform specific tasks that require focussed attention or selective memory retrieval. Experimental efforts in that direction include, for example, the experiments by Dehaene et al. ( Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006 ), where consciousness was approached in terms of what the authors called “conscious report”. On the assumption that a human subject is phenomenally conscious when some critical event is reliably reported, it was argued that consciousness can be defined in terms of “access of information to conscious report”. Such a restriction of phenomenal consciousness to processes that enable information to access a certain level of conscious representation is grounded in Block’s concept of access consciousness (e.g. Block, 1995 ). Examining the conscious report of a human observer to unravel the mechanisms of consciousness, or the access of information to consciousness, raises several critical questions that remain to be answered ( Buzsaki, 2007 ; Dennett, 1991, 2001 ). Does information that is made accessible to conscious report have to correspond to ongoing or past, to real or imagined events? Does the conscious experience that is subject to conscious report occur well before, immediately before, or during the report? How long would it be expected to last thereafter?

In their search for the neural correlates of consciousness, Crick & Koch (1995) , whose work received much attention and praise, employed a working model in terms of “subliminal”, “preconscious”, and “conscious perception” which adopts a taxonomy proposed earlier by Kihlstrom (1987) . Crick and Koch (1995) then claimed that top-down attentive selection is the key to conscious perception. Subsequently, phenomena such as change blindness (e.g. Silverman & Mack, 2006 ), where human observers are unable to detect important changes in briefly presented visual scenes disrupted by blinks, flashes or other visual masks just before the changes occur, were interpreted in terms of “preconscious perception” on the basis of the argument that observers fail to report what they actually see because they believe that what is there is what they have seen just before ( Dehaene, Sergent, & Changeux, 2003 ). Such a belief would then block the attentive selection process that would otherwise enable the new information contained in a new visual scene (e.g. Delorme et al., 2004 ) to access the level of conscious perception. But is studying the neural correlates of conscious perception and selective visual attention sufficient for understanding the mechanisms that produce consciousness in the first place? While some seem to agree that it would be important to work out an experimental paradigm in which consciousness is not confounded with the changes in overt or covert behaviours it may engender in stimulus-response designs, most have hitherto failed to avoid this trap (e.g. Feinstein, Stein, Castillo, & Paulus, 2004 ). Thus, the difficulty of linking conscious behaviour to the mechanisms that generate consciousness in the brain appears to be a major limiting factor.

1.2. Picturing the Conscious Brain

At the same time, technological progress in the imaging sciences enabled scientists to visualize which parts of the brain are or are not activated when a human subject is or is not attentively (“consciously”) performing a behavioural task (see Rees et al., 2002 , for a review). The new technologies raised high hopes that functional brain imaging correlated with behavioural designs would help to link the mental to the physical ( Feigl, 1958 ) and, ultimately, overcome this limiting factor. Pictures of a conscious brain were believed to unravel the origins of consciousness. As in the fourteenth century, when physicians attempted to identify the locus of the human soul in the body, the advent of modern functional imaging techniques led to the pursuit of a localization of consciousness in the brain, which quickly became the pet subject of a small industry in science. While rapid technological progress, promoting the development of imaging and electrophysiological techniques, had indeed made it possible to correlate cognitive function with increasingly precisely located neural activities and interactions in specific brain areas, such correlations did not, however, lead us any further towards an understanding of human consciousness, or of how the brain may be capable of producing it. Some observations suggested that conscious activity would correlate with occipital neural activity, while others seemed to point toward a correlation between conscious mental events and late parieto-frontal activity (e.g. Driver & Vuilleumier, 2001 ; Dehaene et al., 2006 ). Even though some kind of sense may be read into these largely disparate data, the fact remains that the much-expected breakthrough that would have allowed us to determine the functional locus of consciousness in the human brain on the basis of brain images has not happened.

1.3. Consciousness and the Theatre Metaphor

As pointed out already more than a century ago by William James (1890) , consciousness encompasses far more than being able to effectively attend to, perceive, and describe stimuli. Baars (1993, 1997) referred to phenomenal consciousness as the theatre of the mind, which is reminiscent of writings from the first book (part 4, section 6) of A Treatise of Human Nature (1740) in which the Scottish philosopher David Hume compared phenomenal consciousness to a theatre with a scene of complex events where various different sensations and perceptions make their successive appearance in the course of time:

“The mind is a kind of theatre, where several perceptions successively make their appearance; pass, repass, glide away, and mingle in an infinite variety of postures and sensations. There is properly neither simplicity in it at one time, nor identity in different, whatever natural propension we may have to imagine that simplicity and identity. The comparison of the theatre must not mislead us. They are the successive perceptions only, that constitute the mind; nor have we the most distant notion of the places where these scenes are represented, or of the materials of which it is composed.”

Hume’s phenomenal description of successive feelings or sensations appearing as sequences in time is embedded in some contemporary views of consciousness. The neurobiologist Ramachandran, for example, discussed the concept of “Self” in relation to the concept of “consciousness”, and emphasized that phenomenal consciousness encompasses hardly more than sequences of many distinct perceptions and sensations. Moreover, these are not necessarily related to ongoing external events or stimuli ( Natsoulas, 1983 ; Ramachandran, 1998 ; Bieberich, 2002 ).

1.4. Lucid Dreaming: Dress Rehearsal in the Theatre of Consciousness

Understanding conscious imagination and creative thinking, or the striking similarities between object descriptions resulting from conscious perception and from pure imagination (e.g. Kosslyn, 1994, 1999 ; Kosslyn et al., 2001 ) requires going beyond studying actively and consciously behaving observers. When we dream intensely, we are not attentive to stimuli, but we are phenomenally conscious ( Schwartz, 2003 ), and sometimes we may be able to access and report these phenomenal data several hours later, when we recount our dreams over breakfast. LaBerge (1990) believes that, to the neuronal functions that produce consciousness, dreaming of perceiving and doing is equivalent to perceiving and doing. Thus, in line with Hume’s or Baars’ theatre metaphors, our dreams would be the dress rehearsals in the theatre of our conscious mind. Such a view is supported by evidence for a functional equivalence of psycho-physiological correlates of consciousness in active wakeful observers and during lucid dreaming, which occurs in REM sleep phases. Lucid dreaming and equivalent wakeful activities are measured in terms of relatively short EEG signal epochs indicating a specific activation level of the central nervous system (e.g. LaBerge, 1990 ). Other work on anesthetized patients (e.g. Drover et al., 2002 ) suggests that different levels of consciousness during anaesthesia are reliably predicted by an invariant set of changes in quantitative EEG analysis. It seems regrettable that these and similar lines of research have not received more attention from brain theories of consciousness.

1.5. Conditions on the Logic of Explanation and Occam’s Razor

The time has come to step back and recapitulate what we have and have not learnt from the past forty years of consciousness studies (in this respect, see also the earlier insights from Buzsaki, 2007 ). From the strict viewpoint of the theory of explanation, no evidence that would allow us to link the phenomenon of consciousness to the brain has been made available by science up to now. This sobering conclusion stems directly from one of the ground conditions for the structure and logic of scientific explanation, which is given in the Hempel-Oppenheim, or H-O model ( Hempel & Oppenheim, 1948 ): the nature of the explanandum, or what is to be explained, must be adequately derived from the explanans, or explanation given, and an explanans can be considered adequate only in regard to the particular expression of, or dependent variable relating to, the explanandum. Consider, for example, a study of conscious perception or selective attention, both of which are aspects, related or not, of the explanandum (consciousness). An explanans is derived from reports of conscious perception, possibly correlated with some specific activity in the brain. The explanans in this case is adequate for the particular process of conscious perception probed by the study task, generally revealed by statistically significant differences in the average numerical values of the dependent variable, but not for the explanandum as such. Along the same line of reasoning, the observed neural correlate may adequately reflect brain activity related to the particular process of conscious perception highlighted by the behavioural data, but is an inadequate neural correlate of consciousness as such.

Another limiting condition to the scientific explanation of consciousness is imposed by Occam’s razor. The latter describes what is commonly called the law of parsimony (lex parsimoniae), a principle of logic that is both ethically and pragmatically grounded in the philosophy of science of the English cleric William of Occam (14th century: “entia non sunt multiplicanda praeter necessitatem”). It states that the explanation of a phenomenon should resort to as few “entities” (mechanisms, processes, laws) as possible. Likewise, an entire model or system of explanation should make as few assumptions as possible. Since the logic of explanation is tightly linked to the definition of the explanandum itself (e.g. Hempel & Oppenheim, 1948 ), a scientific explanation of consciousness would have the virtue of proposing the fewest possible model assumptions, with the least functional characteristics required to account for how conscious brain activity, enabling conscious experience, would be produced. The definition of such activity itself would have to be as simple as possible. These considerations, together with points raised earlier in the introduction, lead us to the conscious state notion as the most parsimonious definition of consciousness. The simplest possible postulate, that there is a specific brain state, the conscious state, most adequately defines the explanandum. The fewest mechanisms necessary for its genesis in the brain adequately define the explanans.

2. Information Processing and the Conscious Brain State Notion

The notion of a conscious state and how it may lead to an operational approach to the problem of consciousness was discussed twenty years ago by Tononi & Edelman (1998) . Their conscious state concept encompasses an earlier one proposed by von der Malsburg (1997) in terms of a continuous process with a limited duration. The idea here is that conscious states are neither identical nor reducible to states of awareness or vigilance ( Nagel, 1974 ; Milner, 1995 ; Humphrey, 2000 ; Nielsen & Stenstrom, 2005 ). While they may involve cognitive processes such as memory (e.g. Cowan, Elliott, Saults, Morey, Mattox, Hismjatullina, & Conway, 2005 ; Lin, Osan, & Tsien, 2006 ), attention ( Posner, 1994 ; Raz & Buhle, 2006 ), conscious perception (e.g. Crick & Koch, 2000 ; Dehaene et al., 2006 ), or volition ( Grossberg, 1999 ; Dehaene et al., 2006 ), these would only be possible expressions of a conscious state and not to be confounded with the state as such. Rather, a conscious state would correspond to a specific functional state of the brain (e.g. Klausberger et al., 2003 ) that enables the experience of phenomenal consciousness. John (2002) argued that the most probable invariant level of neural activity or coherent interaction among brain regions that can be measured when a person is in a conscious state would be the best possible approximation of the NCC (neural correlates of consciousness), or what he called the “conscious ground state of the brain”. Earlier studies concerned with the functional characteristics of conscious and non-conscious information processing, decision making, and action (for detailed reviews see Kihlstrom, 1987 , Dehaene & Naccache, 2001 , or Buzsaki, 2007 ) point, indeed, towards approaching consciousness in terms of a brain state or “conscious state” that would result from functional properties of neural circuitry ( DeCharms & Zador, 2000 ; Lennie, 2003 ; Bullock et al., 2005 ). A conscious state appears to have two major functional characteristics: 1) a limited information processing capacity (e.g. Schneider & Shiffrin, 1977 ; Shiffrin & Schneider, 1977 ; Shiffrin, 2003 ; Ramsey et al., 2004 ) and 2) a unique representational content for a limited and relatively short duration (e.g. Duncan, 1980 ; Mangan, 2003 ; LeDoux, 2002 ; Dietrich, 2003 ). The content of a conscious state would be steadily updated through non-conscious processes, which constitute by far the largest part of all brain activity (e.g. Velmans, 1991 ; Gray, 2002 ; Pockett, 2004 ).

Conscious information processing relies mainly on serial processing, which allows for only a very limited amount of information to be dealt with in a given time span. Most people cannot consciously follow two ideas at the same time, or consciously execute two even simple, simultaneous tasks (e.g. Cherry, 1953 ; Baars, 1998 ). Conscious “seriality” undeniably constrains any possible theory of consciousness ( Pockett, 1999 ; Seth & Baars, 2005 ; Edelman, 2003 ). Non-conscious activity, on the other hand, is largely based on massively parallel processing and can therefore handle far more information (e.g. Mesulam, 1990 ; Hochstein & Ahissar, 2002 ; Mangan, 2003 ; Dietrich, 2003 ). The function of serialization in terms of an ordered list of conscious events (e.g. Page & Norris, 1998 ; Seth et al., 2006 ), discussed already half a century ago by Lashley (1951) , is linked to the hypothesis that an event or piece of information, once made conscious, would become selectively available to other processes related to thought and speech production.
This function of making non-conscious information accessible to the active mind is an important achievement of brain evolution. The limited capacity of conscious processes, on the other hand, represents a major functional constraint, revealed by psychophysical data, which include the data on change blindness mentioned earlier, and more recent observations on change detection ( Triesch, Ballard, Hayhoe, & Sullivan, 2003 ). These have shown that observers detect sudden specific changes in visual scenes only when, and only just in time as, they need the specific information to solve a given problem. The limited capacity of a conscious state entails that it must entirely rely on working memory, which can handle the “magic” number of about 7 representations (e.g. Oberly, 1928 ; Miller, 1956 and more recently Parkin, 1999 or Vogel, Woodman, & Luck, 2001 ). Such a limitation severely constrains the top-down processes that can effectively operate within the temporal window of a conscious state. As proposed earlier by Mangan (2003) , the pre-conscious processes at the fringe of consciousness may provide some kind of buffer, which both compensates for and regulates the limited conscious capacity. The processing capacity of the non-conscious, in contrast, may be estimated within a range of at least 10⁷ bits, knowing that the optic nerve transfers 10⁸ bits per second as stated by Koch (1997) , which is vastly more than working memory can deal with. The limitations of conscious processing are defined in terms of the representational content that is authorized to invade a conscious state at a given time. Such content would be retrieved selectively from non-conscious long-term memory, where it is stored as an integrated representation. Representations are defined, as in Churchland (2002) , in terms of patterns of activity across groups of neurons which carry information. A fully integrated representation would then consist of a unique activity pattern defined by a unique temporal sequence.
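
To make the capacity gap just described concrete, the following sketch contrasts a serial conscious channel, limited to roughly 7 ± 2 working-memory chunks per conscious state of a few hundred milliseconds, with the roughly 10⁸ bits per second entering through the optic nerve (Koch, 1997). It is an order-of-magnitude illustration only; the bits-per-chunk figure and the function names are assumptions introduced here, not values from the cited literature.

```python
# Toy comparison of conscious (serial, capacity-limited) versus
# non-conscious (parallel, high-bandwidth) information throughput.
# The numbers are the order-of-magnitude estimates quoted in the text,
# except BITS_PER_ITEM, which is an illustrative assumption.

WORKING_MEMORY_ITEMS = 7          # Miller's "magic number" 7 +/- 2
BITS_PER_ITEM = 3                 # illustrative assumption: ~3 bits per chunk
CONSCIOUS_STATE_DURATION_S = 0.3  # ~300 ms per conscious state (see Section 2.1)

OPTIC_NERVE_BITS_PER_S = 1e8      # Koch (1997): ~10^8 bits/s through the optic nerve

def conscious_throughput_bits_per_s() -> float:
    """Upper bound on serial conscious throughput under the stated assumptions."""
    return WORKING_MEMORY_ITEMS * BITS_PER_ITEM / CONSCIOUS_STATE_DURATION_S

if __name__ == "__main__":
    serial = conscious_throughput_bits_per_s()
    ratio = OPTIC_NERVE_BITS_PER_S / serial
    print(f"Conscious (serial) throughput : ~{serial:.0f} bits/s")
    print(f"Optic nerve (parallel) input  : ~{OPTIC_NERVE_BITS_PER_S:.0e} bits/s")
    print(f"Non-conscious/conscious ratio : ~{ratio:.1e}")
```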

Certain theoretical approaches to neural signal exchanges in the brain, such as the Lisman-Idiart-Jensen model ( Lisman & Idiart, 1995 ; Jensen et al., 1996 ; Jensen & Lisman, 1996a, 1996b, 1996c ; Lisman, 1998 ; Jensen & Lisman, 1998 ; Jensen, 2005 ), point towards the selective retrieval of temporal activity patterns as the most parsimonious explanation for conscious brain states. These approaches exploit the functional properties of working memory to explain how a temporal pattern code may activate and maintain a conscious brain state. Inspired by some of the experimental data and theoretical arguments discussed above, the Lisman-Idiart-Jensen model consists of a working memory with a maximum processing capacity of 7 ± 2 items. Each such item is represented by the firing of a cell assembly (the so-called “coding assembly”) during one gamma period ( Herrmann, Munk, & Engel, 2004 ), the whole phenomenon occurring within a theta period composed of about 7 gamma cycles. Specific model accounts for the slope of the so-called Sternberg curve (38 ms per item) for working memory time spans were developed on the basis of this approach (for more details, see Jensen & Lisman, 1998, 2005 ). Başar (1998) and Başar et al. (2000) considered the cognitive transfer activities to be based on oscillations at specific temporal frequencies ( Guttman, Gilroy, & Blake, 2007 ). These would be combined like the letters of an alphabet to deliver a temporal code for conscious brain activity, measurable through wavelet analysis of the EEG or of event-related potentials (ERP). The functional identity of the neurons delivering the code is deemed irrelevant; what matters is only the timing of the signal sequences, together with the nested functional hierarchy of spatiotemporal patterns produced by neuronal assemblies and operational modules and their intrinsic dynamics. Subsequently, Fingelkurts & Fingelkurts (2001, 2008) analyzed and modelled event-specific EEG signal sequences further in that direction. Their models highlight what they called “the nested hierarchy” of unconscious and conscious processes, where higher levels are physically composed of lower levels ( Maccaferri & Lacaille, 2003 ; Fellin & Carmignoto, 2004 ; Fields, 2004 ; Machens, Romo, & Brody, 2005 ) and where there is no central control of the system as a whole. This results in extremely weak constraint of higher processing levels in the brain on lower (non-conscious) ones, and therefore represents a major limiting condition for artificial intelligence and machine learning algorithms. There are no known criteria for integrating non-explicit (i.e. non-conscious) contents into the procedural command chains of machine learning algorithms, not even when they involve essentially unsupervised adaptive neural network learning.
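
The gamma-within-theta nesting just described reduces to a small back-of-the-envelope computation. The sketch below, assuming a ~40 Hz gamma rhythm and a ~6 Hz theta rhythm (illustrative values within the published ranges), recovers a capacity of about seven item slots and the ~38 ms per item Sternberg slope the model was designed to reproduce; the helper names are invented here.

```python
# Sketch of the Lisman-Idiart-Jensen working-memory nesting:
# each stored item fires in one gamma sub-cycle of a theta cycle,
# so capacity ~ gamma frequency / theta frequency (~7 items).

THETA_HZ = 6.0      # theta rhythm, roughly 4-8 Hz (illustrative value)
GAMMA_HZ = 40.0     # gamma rhythm, roughly 30-80 Hz (illustrative value)

def nested_capacity(theta_hz: float = THETA_HZ, gamma_hz: float = GAMMA_HZ) -> int:
    """Number of gamma cycles (candidate item slots) per theta cycle."""
    return int(round(gamma_hz / theta_hz))

def sternberg_scan_time_ms(n_items: int, ms_per_item: float = 38.0) -> float:
    """Serial scanning time predicted for n items at ~38 ms per item
    (the Sternberg slope the model was designed to reproduce)."""
    return n_items * ms_per_item

if __name__ == "__main__":
    slots = nested_capacity()
    print(f"Gamma cycles per theta cycle (item slots): {slots}")
    for n in range(1, slots + 1):
        print(f"  {n} items -> predicted scan time ~{sternberg_scan_time_ms(n):.0f} ms")
```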

2.1. Temporal Sequencing of the Neural Signatures of Conscious States

Whenever a unique combination of temporal signal sequences in the human brain attains some critical activity threshold, a unique conscious state could be generated, and regenerated whenever that signature is retrieved again, either by the same set of neurons or by any other set capable of producing it. Such neural timing for conscious state access would rely on simultaneous supra-threshold activation of sets of cells within dedicated neural circuits in various, arbitrarily but not necessarily randomly determined loci of the brain. The intrinsic topology that determines which single cell of a given circuit produces which spike pattern of a given temporal signature is, therefore, independent of the topological functional organization of the brain.

This idea that a conscious brain state is triggered by temporal signals of cells that are arbitrarily associated with any other functional properties of cells suggests a way of thinking about a neural code for consciousness that is radically different from that offered by most current approaches. It has the considerable functional advantage that, should some subsets of coding cells be destroyed, other subsets could still deliver the code elsewhere in the brain. Such a functional assumption is justified in the light of evidence for a considerable plasticity of functional brain organization (e.g. Wall, Xu, & Wang, 2002 ). The relevance of functional plasticity will be discussed later here in greater detail.

Just as the temporal signal sequence or activity pattern of any single coding cell is determined by its firing activity across a certain length of time ( Van Rullen & Thorpe, 2001 ), the temporal signature of a conscious state would also be linked to its duration, the so-called “psychological moment” ( Pöppel & Logothetis, 1986 ; von der Malsburg, 1999 ; Tononi & Edelman, 1998 ), with variations in the limited dynamic range of a few hundreds of milliseconds. This estimate is established on the grounds of a considerable body of psychophysical and neurobiological data (e.g. Lehmann et al., 1987 ; Lestienne & Strehler, 1988 ; Thorpe & Imbert, 1989 ; Crick & Koch, 1990 ; Potter, 1993 ; Gray, 1995 ; Pascual-Marqui et al., 1995 ; Taylor, 2002 ; Koenig & Lehmann, 1996 ; Lehmann et al., 1998 ; von der Malsburg, 1999 ; Bressler & Kelso, 2001 ; Chun & Marois, 2002 ). Work by Libet (1993, 2003, 2004) , for example, has shown that a time minimum of about 500 ms is required for a near-threshold stimulus to produce a conscious perceptual experience. In order to analyze neural patterns in terms of the temporal codes they deliver, the duration of a conscious state is to be divided into critical time windows, or “bins”, the length of which would be limited by the accuracy of neuronal timing, or the lower limit of biophysics. Such a time window, or “bin”, is expressed through the parameter Δt, which would represent the sum of standard deviations for the time delay of synaptic transmission including the duration of the refractory period. An average estimate of 6 ms for this parameter appears reasonable in light of the data available ( Bair, 1999 ). Helekar (1999) based his calculations of a temporal code on an average duration of 3 ms for Δt, operating under the hypothesis of an average estimate of only 30 ms for the state duration, expressed in terms of the parameter t. An average estimate of 6 ms for Δt is consistent with bin durations proposed by Shastri & Ajjanagadde (1993) , Moore & King (1999) , or Rieke et al. (1997) . Others ( Singer, 2000 ) have suggested bin durations of up to, but no longer than, 10 ms. Interspike intervals and integration times of cortical neurons display a similar dynamic range ( Eggermont, 1998 ). Under the simple assumption that within each such “bin” there is either a signal or no signal, derived from McCulloch & Pitts’ (1943) seminal work on information transmission in neural networks, the information content of each bin is 1 bit. On the basis of an average duration of 300 ms for a given conscious state, which seems more realistic than the 30 ms state duration suggested by Helekar, a 6 ms duration for a critical time window or “bin” within that state, and with a deterministic signal being generated during each “bin”, the information content of such a conscious state would be 300/6 = 50 bits. A similar computation of the maximum quantity of information conveyed by a duration t with a number of temporal windows identified by a given Δt was proposed by MacKay & McCulloch (1952) . Considering equal probabilities for activity (signal) and non-activity (no signal) within each “bin”, a conscious state of a duration of 300 ms would then generate 61 bits of content (for Δt = 6 ms). This theoretical approach is detailed in Rieke et al. (1997) , who pointed out that the neuronal systems under study approach the theoretical limit of information transmission.
The figures given above may be compared with estimates of the number of visual prototypes held in memory given by Tsotsos (1990) , which correspond to information contents of 17 to 23 bits. Similar time-based estimates were suggested later by Thorpe et al. (2001) and VanRullen et al. (2005) . Approaches in terms of dynamic analyses of correlated oscillations in cortical areas at various frequencies (e.g. Bassett et al., 2006 ) and functional interactions between gamma and theta oscillations in different structures of the brain (e.g. Axmacher et al., 2006 ) are consistent with the estimates given here. How such purely temporal functional aspects of an immense variety of neural signals produce a temporal code for conscious state access was discussed in greater detail earlier ( Dresp-Langley & Durup, 2009 ). It can be understood as a result of the properties of reverberant neural circuits in the brain, functionally identified previously in neurobiology ( Llinás et al., 1998 ; Steriade, 1997 ; Pollen, 1999 ; Llinás & Ribary, 2001 ; VanRullen & Koch, 2003 ; Lamme, 2004, 2006 ).
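
The bin arithmetic described above can be spelled out in a few lines. The sketch below reproduces the 300/6 = 50 bins and the 50-bit figure for one binary event per bin; the ~61-bit figure quoted in the text follows from the different MacKay & McCulloch (1952) counting, which is not reproduced here. Parameter names are illustrative.

```python
import math

# Arithmetic sketch of the "bin" computation described above:
# a conscious state of duration T (ms) is divided into bins of width dt (ms),
# each bin carrying at most one binary event (signal / no signal).

def n_bins(state_ms: float = 300.0, bin_ms: float = 6.0) -> int:
    """Number of temporal windows ('bins') in one conscious state."""
    return int(state_ms // bin_ms)

def bits_binary_bins(state_ms: float = 300.0, bin_ms: float = 6.0,
                     p_signal: float = 0.5) -> float:
    """Information content if each bin independently carries a signal with
    probability p_signal (exactly 1 bit per bin when p_signal = 0.5)."""
    p = p_signal
    h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return n_bins(state_ms, bin_ms) * h

if __name__ == "__main__":
    print(f"Bins per 300 ms state (dt = 6 ms): {n_bins()}")               # 50
    print(f"Bits at p = 0.5 per bin          : {bits_binary_bins():.0f}")  # 50
    # The ~61-bit figure quoted in the text follows from the different
    # MacKay & McCulloch (1952) counting, which is not reproduced here.
```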

2.2. Reverberation, Long-Distance Propagation, and Signal De-Correlation in the Brain

The reverberant circuits or loops thus far identified in the brain appear to have their own intrinsic topology (e.g. Abeles et al., 1993 ; Edelman, 1993 ; Crick, 1994 ; Grossberg, 1999 ; Constantinidis et al., 2002 ; Lau & Bi, 2005 ; Dehaene et al., 2006 ). Reverberant neural activity was found in thalamo-cortical ( Llinás et al., 1998 ; Llinás & Ribary, 2001 ; VanRullen & Koch, 2003 ) as well as in cortico-cortical pathways ( Steriade, 1997 ; Pollen, 1999 ; Lamme, 2004, 2006 ). Reverberant neural activity as such is a purely temporal process that generates feed-back loops in the brain, referred to by some in terms of “re-entrant circuits” ( Edelman, 1989, 1993 ; Tononi et al., 1992, 1998 ; Tononi & Edelman, 1998, 2000 ; Edelman & Tononi, 2000 ; Fuster, 2000 ; Prinz, 2000 ; Di Lollo et al., 2000 ; Klimesch et al., 1997 ; Edelman, 2003 ; Robertson, 2003 ; Koch & Crick, 2000 ; Crick & Koch, 2003 ). Reverberation is an important functional property of the brain ( Lamme & Roelfsma, 2000 ) because without it, the conscious execution of focussed action would be difficult, if not impossible (e.g. Lamme, 2006 ).

Dehaene et al. (2006) argued that conscious perception and report would rely on the extension of local brain activation to higher association cortices that are interconnected by long-distance connections and form a reverberating neuronal circuit extending across distant perceptual areas. Reverberation would allow holding information on-line for durations that are unrelated to the duration of a given stimulus and long enough to enable the rapid propagation of information through different brain systems. In their view, conscious information processing in the brain is associated with the parieto-frontal pathways of the brain ( Frith & Dolan, 1996 ), which are protected from fast fluctuations in sensory signals and which would allow information sharing across a broad variety of cognitive processes. While it is straightforward to agree with Dehaene et al.’s postulate that conscious information processing would be enabled on the basis of signal reverberation and propagation across long-distance connections in the brain, it is less clear how the complex cross-talk between neural signals necessary to generate information sharing across a broad variety of non-conscious cognitive processes could be implemented into a code for consciousness. If this were possible, the conscious brain would be able to sort out a seemingly infinite number of different signals from multi-channel cross-talk to generate stable, reliable, unifying and reportable conscious state access. In other words, the whole chain of interactive processes could in principle be made explicit.

Instead, what is happening is that the brain most likely relies on signal de-correlation, possibly enabled through long-distance reverberation and based on some critical internal threshold that is not known. De-correlation of temporal from spatial messages for capacity-limited representation within consciousness would clarify how a stable and precise brain code for conscious state access can be generated in light of the largely plastic and diffuse spatial functional organization of the brain. De-correlation has become an important concept in neural network theory and in systems theory in general. It describes a mechanism that reduces crosstalk between multi-channel signals in a system like the brain, while preserving other critical signal properties. Theoretical work by Lazar, Pipa, & Triesch (2007) on interactions between spike-timing-dependent and intrinsic synaptic plasticity in recurrent neural networks for the dynamic genesis of specific sequences or series of temporal activity patterns may point towards new ways of investigating the temporal signatures of conscious states.
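
De-correlation in the sense used here can be illustrated with a standard whitening transform, which removes linear cross-talk between channels while leaving each channel's signal content recoverable. The sketch below is a generic signal-processing recipe, not a model of any identified brain mechanism; all names and parameters are assumptions introduced for illustration.

```python
import numpy as np

# Illustration of signal de-correlation: a whitening transform removes the
# linear correlations (cross-talk) between channels so that the channel
# covariance becomes approximately the identity. This is a generic
# signal-processing sketch, not a model of any specific brain mechanism.

def decorrelate(signals: np.ndarray) -> np.ndarray:
    """signals: array of shape (n_channels, n_samples). Returns whitened
    channels whose covariance matrix is approximately the identity."""
    centered = signals - signals.mean(axis=1, keepdims=True)
    cov = np.cov(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-12)) @ eigvecs.T
    return whitener @ centered

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(2000)                  # common source -> cross-talk
    x = np.vstack([shared + 0.3 * rng.standard_normal(2000),
                   shared + 0.3 * rng.standard_normal(2000)])
    print("correlation before:", np.corrcoef(x)[0, 1].round(3))
    print("correlation after :", np.corrcoef(decorrelate(x))[0, 1].round(3))
```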

3. The Plasticity of Spatial Functional Brain Organization

Sensory, somatosensory, and proprioceptive signals may instantly be integrated into the immediate data of a unified conscious experience, eliciting what psychophysicists call sensations. The integration of such a variety of signals into brain representations ( Revonsuo, 2000 ; Holmgren et al., 2003 ), however, relies on non-conscious mechanisms, which have to be sufficiently adaptable ( Lewis, 1983 ; Edelman, Baars, & Seth, 2005 ) and display a certain functional plasticity to enable the continuous updating of representations as a function of changes. Such changes are imposed on our brains day by day by new situations and experiences. For representations to be made available to consciousness, there has to be some permanently reliable, unifying “tag” which ensures stable access across time. Grossberg (1999) referred to this problem as the “plasticity-versus-stability dilemma”. While such adaptive updating quite satisfactorily accounts for non-conscious information processing by the brain, it has not helped clarify through which mechanism non-conscious brain representations would be made available to consciousness. The need for a mechanism of neural integration that explains how non-conscious representations are delivered to consciousness is highlighted further by some neurological data, such as ERP and functional imaging data on neurological patients with unilateral neglect or extinction after unilateral brain damage. Such patients are unaware of objects or events that take place on the contralesional side of physical space. Depending on how far their parietal lesion extends to the occipital or temporal cortex, a more or less important amount of non-conscious perceptual processing is found to be preserved (see Driver & Vuilleumier, 2001 , for a review). This suggests that localized brain damage of the parietal lobe affects the mechanisms that enable specific perceptual representations to access the conscious state level but does not affect the perceptual representations as such.

Other neurological observations severely challenge the idea that function should be fixed in specific loci. The “phantom limb” syndrome (e.g. Ramachandran, Rogers-Ramachandran, & Cobb, 1995 ; Ramachandran, 1998 ), for example, reveals an extraordinary plasticity of topological functional brain organization. The phantom limb syndrome is a phenomenon that was already mentioned in writings by Paré and Descartes, and described in greater detail by Guéniot (1868) . It has been repeatedly observed in hundreds of case studies since. After arm amputation, patients often experience sensations of pain in the limb that is no longer there, and experimental data show that a third of such patients systematically refer stimulations of the face to the phantom limb, with a topographically organized map for the individual fingers of a hand. On the basis of similar evidence for massive changes in somatotopic maps after digit amputation and other experimental data showing that several years after dorsal rhizotomy in adult monkeys, a region corresponding to the hand in the cortical somatotopic map of the primate’s brain is activated by stimuli delivered to the face ( Merzenich et al., 1984 ), Ramachandran and his colleagues proposed their “remapping hypothesis” (e.g. Ramachandran, Rogers-Ramachandran, & Stewart, 1992 ). The latter clarifies how spatial and topological representations are referred to other loci in the brain through massive cortical re-organization. The findings reported by Ramachandran and others deliver compelling evidence that, despite dramatic changes in non-conscious topology, representations remain available to conscious state access and can still be experienced in terms of sensations such as pain, cold, digging or rubbing. This is most likely so because the temporal signatures of these representations persist in the brain. In the light of Dehaene et al.’s (2006) long-distance propagation hypothesis, it can be assumed that the neural signatures for conscious state access propagate well beyond local sensory and somatosensory areas which receive and process input from a given part of the body such as an arm or a leg. Long-distance propagation and reverberation would then lead to the consolidation of the temporal signatures of conscious sensations, resonating across the whole brain. The signatures can then reach critical threshold activation levels even when stimulus input to specific local sensory areas is no longer delivered.

3.1. The Temporal “Brain Coherence Index” and Coincidence Detection

In his “neurophysics of consciousness”, John (2001, 2002) suggested that a conscious state may be identified with a brain state where information is represented by levels of coherence among multiple brain regions, revealed through coherent temporal firing patterns that deviate significantly from random fluctuations. This assumption is consistent with the idea of a stable and perennial temporal code for conscious state access despite spatial remapping or cortical re-organization. Empirical support for John’s theory comes from evidence for a tight link between electroencephalographic activity in the gamma range defined by temporal firing rates between 40 and 80 Hz (i.e. the so-called “40-Hz” or “phase-locked” gamma oscillations) and conscious states (e.g. Engel et al., 1992 ). This “coherence index”, with its characteristic phase-locking at 40 Hz, was found to change with increasing sedation in anaesthesia, independent of the type of anaesthetic used ( Stockmanns et al., 2000 ). Decreasing temporal frequencies were reported when doses of a given anaesthetic were increased. Moreover, the characteristic phase-locking at 40 Hz displays coherence not only across brain regions during focussed arousal, but also during REM sleep, when the subject is dreaming ( Llinás & Ribary, 1993 ). Coherence disappears during dreamless, deep slow-wave sleep, which is consistent with the findings reported on deeply anesthetized patients. The fact that the temporal coherence index of a conscious state is produced during focussed arousal as well as during dreaming in REM sleep phases is fully consistent with the idea (e.g. LaBerge, 1990 ) that dreams and conscious imagination represent functionally equivalent conscious states.
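
One common way to quantify the kind of gamma-band coherence referred to above is the phase-locking value between two band-passed signals. The sketch below, using a Butterworth filter and the Hilbert transform from SciPy, is a generic analysis recipe rather than the specific index computed in the cited studies; the 35-45 Hz band and all signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Generic sketch of a gamma-band (around 40 Hz) phase-locking measure
# between two signals, of the kind used to quantify cross-regional
# coherence. A standard analysis recipe, not the exact index computed
# in the studies cited above.

def phase_locking_value(x, y, fs, band=(35.0, 45.0)):
    """Mean resultant length of the phase difference in the given band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))

if __name__ == "__main__":
    fs = 500.0
    t = np.arange(0, 4.0, 1 / fs)
    rng = np.random.default_rng(1)
    gamma = np.sin(2 * np.pi * 40 * t)                  # shared 40 Hz component
    x = gamma + 0.5 * rng.standard_normal(t.size)
    y = gamma + 0.5 * rng.standard_normal(t.size)
    noise = rng.standard_normal(t.size)
    print("PLV (phase-locked pair):", round(phase_locking_value(x, y, fs), 3))
    print("PLV (unrelated pair)   :", round(phase_locking_value(x, noise, fs), 3))
```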

The phase-locking at the critical temporal frequency would be achieved through intra-cortical reverberation, enabled by what John terms a digital event within a hybrid system ( John, 2001, 2002 ). This hybrid system, the brain, establishes arbitrary but non-random departures from different loci or topological maps. The latter may undergo functional re-organization, yet the temporal code for conscious state access remains intact. This would lead to cortico-thalamic feedback loops, or resonance loops, which generate the temporal signatures of conscious states on the basis of a statistical computation of non-conscious memory events coinciding in time. Potential mechanisms explaining how such memory events are read out by non-conscious processes in the brain were discussed by Grossberg in his Adaptive Resonance Theory (ART) ( Grossberg, 1975, 1999 ).

3.2. Adaptive Resonance Theory and Grossberg’s Dilemma

Originally, Adaptive Resonance Theory (ART) was conceived as a theory of brain learning to explain how the brain generates and updates representations of continuously changing physical environments ( Grossberg, 1975 ). More recently, ART was extended to account for related phenomena such as attention, intention or volition. According to Grossberg (1999) , the link between these three lies in the fact that intentions lead the system to focus attention on potentially relevant internal or external events. These foci of attention would lead to new representations when the system (the brain) is able to validate and integrate them into resonant states, which would include, according to Grossberg, the conscious states of the brain. According to the theory, all conscious states would be resonant states, triggered either by external or internal events and mediated by either attention or volition. This claim as such, however, does not explain how non-conscious representations would become available to consciousness. In the present analysis, this can be seen as a direct consequence of the fact that the theory fails to separate spatial from temporal coding and thereby fails to resolve Grossberg’s dilemma, i.e. the stability-versus-plasticity dilemma, at the level of the transition from non-conscious representation to conscious access. Nonetheless, Grossberg’s adaptive resonance theory plausibly explains how the brain ensures the continuous updating of non-conscious representations through a mechanism termed top-down matching, which produces so-called resonant brain states.
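
The top-down matching cycle that produces resonant states, introduced here and detailed in the next paragraph, can be caricatured with an ART-1-style sketch: a bottom-up input is compared to stored top-down prototypes, and a match exceeding a vigilance threshold yields a "resonant" category, otherwise a new category is created. This is a deliberately minimal illustration of the matching idea, not Grossberg's full model; the function and parameter names are invented here.

```python
import numpy as np

# Minimal ART-1-style sketch of top-down matching: a bottom-up binary input
# is compared to stored prototypes; a match ratio above the vigilance
# threshold produces a "resonant" state, otherwise a new category is created.
# Illustrates the matching/resonance idea only, not Grossberg's full model.

def art_match(input_vec, prototypes, vigilance=0.75):
    """Return (category_index, resonant) for the input, creating a new
    prototype when no stored one satisfies the vigilance criterion."""
    x = np.asarray(input_vec, dtype=bool)
    for i, proto in enumerate(prototypes):
        overlap = np.logical_and(x, proto).sum() / max(x.sum(), 1)
        if overlap >= vigilance:                      # top-down match succeeds
            prototypes[i] = np.logical_and(x, proto)  # refine the prototype
            return i, True
    prototypes.append(x.copy())                       # mismatch: new category
    return len(prototypes) - 1, False

if __name__ == "__main__":
    protos = []
    for pattern in ([1, 1, 0, 0, 1], [1, 1, 0, 0, 0], [0, 0, 1, 1, 1]):
        idx, resonant = art_match(pattern, protos)
        print(f"input {pattern} -> category {idx}, resonant={resonant}")
```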

A resonant brain state would be achieved through the repeated matching of external or internal events in short-term or working memory to internal events activating top-down representations. According to the theory, the brain is continuously confronted with ongoing internal or external representations (bottom-up) and therefore has to continuously generate probabilistic hypotheses to determine what all these transitory events are most likely to be and whether they are relevant. This involves matching the ongoing representations to representations stored in long-term memory (top-down). Coincidence of bottom-up representations and top-down representations (top-down matches) would produce so-called matching signals, or coincidence signals, which, when repeatedly generated, lead to resonant states in the brain. The representations generated through top-down matching would be, according to Grossberg, coded topologically in the “What” and “Where” processing streams of the brain (see Grossberg, 1999 for an extensive review of relevant physiological data), and what he calls “the resonant code” is therefore tightly linked to functional topological organization. The question of how non-consciously encoded topological information would be made available to consciousness is left unanswered. The considerations and arguments here lead us to summarize the following:

1) only non-conscious brain processes have enough capacity to integrate signals originating from various functionally specific sensory areas across both time and space;

2) temporal signatures of conscious states are likely to be generated and consolidated in reverberating inter-connected neural circuits that extend across long distances and well beyond functionally specific topology;

3) activation of a temporal signature that may trigger a conscious state depends on statistically determined temporal coincidence of activity patterns related to non-conscious memory events that cannot be made explicit;

4) temporal signatures have to be, at some stage, de-correlated from related signals or messages originating from the brain’s spatial maps.

The circuitry generating the temporal signatures would require an intrinsic and essentially arbitrary but not necessarily random topology in terms of “which cell fires first”. This intrinsic topology would be determined by temporal resonance principles only. While there is no empirically based description of resonators receiving, amplifying and transmitting time-patterned messages in the brain, it is nevertheless certain that a large number of physical and biophysical phenomena can be plausibly and parsimoniously explained on the basis of resonance principles or mechanisms. Also, it makes good sense that evolution would have produced brains capable of resonance. Biological resonators, in contrast to “ordinary” resonance devices designed by humans, would have highly sophisticated operating principles, given that hundreds of functionally different kinds of cells exist in the brain. On the other hand, there is no reason why resonators in the brain would have to function with a high level of precision, provided they operate according to some redundancy principle and the whole ensemble of cells producing a conscious resonance state behaves in a statistically predictable way. The idea that neural signal sequences could form a specific biophysical key that activates, maintains, and inactivates a conscious brain state like an electronic lock would open and close the door to a safe ( Dresp-Langley & Durup, 2009 ) is inevitably a simplification of reality. However, given the known temporal properties of conscious information processing summarized here above, we can reasonably formulate the hypothesis that the neural circuitry of the brain is capable of generating messages corresponding to variable specific contents (representations) and variable durations. In the same way as bar codes provide the key to an almost infinite variety of things, such temporal sequences could provide the keys that open the doors to conscious experience.
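
The bar-code metaphor can be caricatured in a few lines: if a conscious content were keyed by a coarse-grained temporal pattern, conscious access would amount to a lookup tolerant of small timing jitter. The sketch below is a toy illustration of the metaphor only; the bin width matches the ~6 ms window discussed earlier, and all contents and names are invented.

```python
# Toy illustration of the "bar code" metaphor: a conscious content is keyed
# by a coarse-grained temporal spike pattern, and retrieval is a lookup that
# tolerates within-bin timing jitter. Purely metaphorical; all names invented.

BIN_MS = 6.0  # coarse-graining window, matching the ~6 ms bin discussed earlier

def signature(spike_times_ms):
    """Quantize spike times into 6 ms bins to form a jitter-tolerant key."""
    return tuple(int(t // BIN_MS) for t in sorted(spike_times_ms))

contents = {
    signature([12.0, 30.5, 90.2, 151.0]): "smell of coffee",
    signature([5.0, 47.9, 48.5, 200.3]): "sound of rain",
}

if __name__ == "__main__":
    probe = [12.9, 31.0, 91.1, 150.2]   # same pattern, jittered by about 1 ms
    print(contents.get(signature(probe), "no conscious access (sub-threshold)"))
```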

4. From Temporal Activity Patterns to a “Consciousness Code”?

A meaningful temporal brain code could be generated spontaneously at any given moment in early brain development, then eventually be reproduced and consolidated during brain learning. Consolidation would then be a result of repeated reverberation in cortical memory circuits, leading to resonance states which correspond to more or less specific conscious states. Once a resonance circuit is formed, it could generate conscious state access at any given moment in time provided there is a statistically significant temporal coincidence between activity patterns in long-term memory. As long as this threshold of statistically significant coincidence is not attained, the representations processed in the resonant circuitry would remain non-conscious or pre-conscious. A brain or system operating on the basis of such purely temporal resonance principles would have to work in a specific way. All principal resonant neurons would have been primed during brain development to preferentially process statistically significant signals. Thus activated, principal resonant neurons would send signals along all delay paths originating from them, and all those receiving a signal coinciding with the next input signal would remain activated. The connections between principal resonant neurons of the circuit would thereby be potentiated, as in the classical Hebbian model. Simultaneously, signals travelling from initially activated neurons to connected cells with overly long delay paths would be cancelled. Thus, once a given substructure of a resonant network is potentiated along all of its edges, it would reverberate temporally coinciding signals while amplifying more and more the potentiation of the resonant connections. Now, let us consider the example of a simple sensorimotor task, which can be performed either consciously or non-consciously. Obviously, the message sent by the sensory system has to be decoded by the motor system. This would happen via non-conscious signal exchanges generating cross-talk between multiple channels across different functional levels ( Nelson, 2002 ; Ransom, Behar, & Nedergaard, 2003 ; Nedergaard, Ransom, & Goldmann, 2003 ; Volterra & Meldolesi, 2005 ; Yamazaki et al., 2005 ). A conscious state, where the content of the representations activated by such crosstalk in the brain becomes subjectively experienced data of consciousness, would only be triggered if the temporal coincidence between signals reverberating within resonant circuitry generates levels of potentiation beyond a given statistical threshold. How neuronal circuits would be able to learn statistical temporal information embedded in distributed patterns of activity was discussed by Gütig & Sompolinsky (2006) . Such resonant circuits would be inter-connected across large distances in the brain and would develop all over the cortex during lifespan brain learning. Their intrinsic topology would, as explained above, not be related to cortical maps reflecting the spatial functional organization of the brain. Like time-dependent resonance itself, the selection of the critical temporal firing patterns that constitute the access code for conscious states would rely on purely statistical criteria, leading to fewer and fewer consolidated patterns for increasingly complex and integrated signal coincidences as our brain learns and develops. When we are born, all brain activity is more or less arbitrary, though not necessarily random.
During brain development, temporal activity patterns elicited by events in biophysical time (t) ranging from 30 to approximately 500 ms (as explained above) will be linked to particular conscious experiences in a decreasingly arbitrary manner as frequently occurring codes are progressively consolidated through a process of developmental selection.
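
The consolidation process just described, i.e. repeated potentiation of connections whose signals coincide within the critical time window until a statistical threshold is crossed, can be sketched with a simple Hebbian coincidence rule. The window, learning rate, and threshold below are illustrative assumptions introduced here, not empirically established values.

```python
import numpy as np

# Sketch of coincidence-driven Hebbian consolidation as described above:
# a connection is potentiated whenever pre- and postsynaptic spikes fall
# within the critical time window, and the linkage is treated as
# "consolidated" (non-arbitrary) once its weight exceeds a threshold that
# chance coincidences are unlikely to reach. All parameters are illustrative.

WINDOW_MS = 6.0                # coincidence window (the "bin" of Section 2.1)
LEARNING_RATE = 0.05
CONSOLIDATION_THRESHOLD = 1.0

def coincidences(pre_ms, post_ms, window=WINDOW_MS):
    """Count pre/post spike pairs closer together than the critical window."""
    pre, post = np.asarray(pre_ms), np.asarray(post_ms)
    return int((np.abs(pre[:, None] - post[None, :]) < window).sum())

def consolidate(weight, pre_ms, post_ms):
    """One Hebbian update; returns (new_weight, consolidated?)."""
    weight += LEARNING_RATE * coincidences(pre_ms, post_ms)
    return weight, weight >= CONSOLIDATION_THRESHOLD

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    w = 0.0
    for trial in range(10):
        pre = np.sort(rng.uniform(0, 300, 8))       # spikes within one 300 ms state
        post = pre + rng.normal(0, 2.0, 8)          # correlated spikes, ~2 ms jitter
        w, done = consolidate(w, pre, post)
        print(f"trial {trial}: weight = {w:.2f}, consolidated = {done}")
```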

4.1. The “Nature versus Nurture” Problem

Helekar (1999) daringly proposed a genetically determined linkage, which flies in the face of a large body of work suggesting that brain processes are highly plastic and experience-dependent, and which may explain why his work did not receive much consideration from the neuroscience communities. A linkage of subjective experience and specific temporal brain activities that would be innate and genetically determined leaves, however, the question of a mechanism for consciousness unanswered. Yet again, we find ourselves confronted with theoretical reasoning in terms of some kind of obscure superstructure. Helekar’s “elementary experience-coding temporal activity patterns” are conceived in terms of some pre-programmed designated subset of neural firing patterns belonging to the set of all possible temporal patterns that can be generated by the brain. His original hypothesis states that only those patterns that are members of the designated subset would give rise to conscious experiences upon their repeated occurrence. The repeated occurrence of ordinary patterns, which he calls non-coding patterns, would not produce conscious experience. The problem with this reasoning is that the contents we may consciously experience are also represented non-consciously in the brain. Helekar’s assumption that the subjective nature of phenomenal consciousness per se is genetically determined thus leads us right back to the old “nature versus nurture” problem raised by brain scientists since the 1950s and, ultimately, to the question raised at the beginning of this essay: “what exactly is phenomenal consciousness” ( Gray, 1971 ; Dennett, 1991 ; Rosenthal, 1986 ; Roth, 2000 ; Zeman, 2001 ; Rosenthal, 2002 )? Thus, one gets trapped in circular reasoning when invoking a genetic programme for consciousness. There is the far more likely hypothesis of a progressive, experience-dependent and increasingly non-arbitrary linkage of the contents of individual conscious states and their temporal signatures on the basis of non-conscious developmental processes and brain learning. Once a given temporal signature has been arbitrarily linked to a conscious state, it remains potentially available as a “brain hypothesis”, which is then either progressively consolidated, or not. Only once it is consolidated does the linkage of code to content become non-arbitrary, or deterministic. The progressive consolidation of linkages over time and as a function of context clearly happens without awareness, through non-explicit brain processes that operate outside the domain of our consciousness, and most likely through some form of repeated matching of currently ongoing representations to stored representations in long-term memory.

4.2. Implementing the “Consciousness Code”: The Final Limit

Multiple sensory, somatosensory, and proprioceptive signals may instantly be integrated into the immediate data of a unified individual conscious experience. The integration of these data, however, relies, as pointed out in an earlier review ( Dresp-Langley, 2012 ) and here above, on non-conscious mechanisms. These have to be sufficiently adaptable and must display a considerable functional plasticity to enable the continuous updating of representations as a function of changes with context, time, the ageing of the circuitry, and so on. To achieve this, the human brain is likely to rely on a great deal of redundancy in what Fingelkurts & Fingelkurts (2014) call its nested functional hierarchy, most of which is not and will never be made explicit. New information is imposed on our brains day by day by new situations and experiences, and for making all this novelty available to consciousness, there would have to be some permanently reliable, unifying “tag” which ensures stable access across time. Some have suggested that such “tagging” happens in the parieto-frontal pathways of the brain, which are protected from fast fluctuations in sensory signals and which would allow information sharing across a broad variety of cognitive processes on the basis of signal reverberation and propagation across long-distance connections in the brain. However, it is impossible to conceive how this complex cross-talk between neural signals and the information sharing across a broad variety of non-conscious processes could be implemented into a code for consciousness. If this were possible, the conscious brain would be able to sort out a seemingly infinite number of different signals from multi-channel cross-talk and, more importantly, it would be able to access the whole command chain of non-conscious processes this involves. Only then could the chain of commands be made explicit and implemented into AI.

Implementing a code into a machine that would emulate human consciousness in all its complexity has become the final limit of our scientific endeavours and, in the light of what is discussed here above, a limit we are unlikely ever to be able to cross. Consciousness represents, in an individual’s daily phenomenal experience, what was (past), what is right now (present), and what will be (future), as discussed earlier in the light of contemporary neuroscience of consciousness ( Dresp-Langley & Durup, 2012 ; Fingelkurts & Fingelkurts, 2014 ). Human consciousness has most likely evolved from the ability to be aware of, to remember, and to predict temporal order and change in nature, an ability which already exists in some animal species ( Dresp-Langley & Durup, 2012 ). In humans, the limits of this capacity are pushed further and are determined by the limits in functional plasticity of resonant brain mechanisms, some of which have been identified in neuroscience as shown here above. The conscious Self of the individual, as a result of non-conscious development and individual experience across time and context, is the ultimate expression of this evolution.

5. Conclusion

In the light of the complex interactions between implicit (non-conscious) and explicit (conscious) contents of representation, emulating human consciousness through artificial intelligence would imply that it is possible to make implicit (non-conscious) brain processes explicit by algorithm. The latter would have to be capable of a progressively less arbitrary selection of temporal signatures in a continuously developing neural network structure identical to that of the human brain, from the synaptic level to that of higher cognitive functions. This would involve dynamically adaptive computations capable of emulating the properties of individual human experience in all their complexity, including sensations such as pleasure and pain, and feelings and moods such as sadness or happiness. Moreover, the computations would have to be able to represent the past, present and future of complex event chains stored in the system’s long-term memory, irrespective of immediate spatial representation. No machine or neural network learning approach known at present has such potential.

Cite this paper

Dresp-Langley, B. (2018). The Experience-Dependent Dynamics of Human Consciousness. Open Journal of Philosophy, 8, 116-143. https://doi.org/10.4236/ojpp.2018.82010

References

  1. 1. Abeles, M., Bergman, H., Margalit, E., & Vaadia, E. (1993). Spatiotemporal Firing Patterns in the Frontal Cortex of Behaving Monkeys. Journal of Neurophysiology, 70, 1629-1638. https://doi.org/10.1152/jn.1993.70.4.1629 [Paper reference 1]

  2. 2. Axmacher, N., Mormann, F., Fernandez, G., Elger, C. E., & Fell, J. (2006). Memory Formation by Neuronal Synchronization. Brain Research Reviews, 52, 170-182. https://doi.org/10.1016/j.brainresrev.2006.01.007 [Paper reference 1]

  3. 3. Baars, B. J. (1993). How Does a Serial, Integrated and Very Limited Stream of Consciousness Emerge from a Nervous System That Is Mostly Unconscious, Distributed, Parallel and of Enormous Capacity? In Experimental and Theoretical Studies of Consciousness (Ciba Foundation Symposium 174, pp. 282-303). Chichester, NY: Wiley. [Paper reference 1]

  4. 4. Baars, B. J. (1997). In the Theater of Consciousness. New York and Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195102659.001.1

  5. 5. Baars, B. J. (1998). Metaphors of Consciousness and Attention in the Brain. Trends in Neurosciences, 21, 58-62. https://doi.org/10.1016/S0166-2236(97)01171-5

  6. 6. Bair, W. (1999). Spike Timing in the Mammalian Visual System. Current Opinion in Neurobiology, 9, 447-453. https://doi.org/10.1016/S0959-4388(99)80067-1 [Paper reference 1]

  7. 7. Başar, E. (1998). Brain Functions and Oscillations, I. Brain Oscillations: Principles and Approaches. Berlin: Springer. https://doi.org/10.1007/978-3-642-72192-2 [Paper reference 2]

  8. 8. Başar, E., Başar-Eroglu, C., Karakaş, S., & Schürmann, M. (2000). Brain Oscillations in Perception and Memory. International Journal of Psychophysiology, 35, 95-124. https://doi.org/10.1016/S0167-8760(99)00047-1 [Paper reference 1]

  9. 9. Bassett, D. S., Meyer-Lindenberg, A., Achard, S., Duke, T., & Bullmore, E. (2006). Adaptive Reconfiguration of Fractal Small-World Human Brain Functional Networks. Proceedings of the National Academy of Sciences of the USA, 103, 19518-19523. https://doi.org/10.1073/pnas.0606005103 [Paper reference 1]

  10. 10. Bieberich, E. (2002). Recurrent Fractal Neural Networks: A Strategy for the Exchange of Local and Global Information Processing in the Brain. BioSystems, 66, 145-164. https://doi.org/10.1016/S0303-2647(02)00040-0 [Paper reference 1]

  11. 11. Block, N. (1995). On a Confusion about a Function of Consciousness. Behavioral and Brain Sciences, 18, 227-287. https://doi.org/10.1017/S0140525X00038188 [Paper reference 1]

  12. 12. Bressler, S. L., & Kelso, J. A. S. (2001). Cortical Coordination Dynamics and Cognition. Trends in Cognitive Sciences, 5, 26-36. https://doi.org/10.1016/S1364-6613(00)01564-3 [Paper reference 1]

  13. 13. Bullock, T. H., Bennett, M. V. L., Johnston, D., Josephson, R., Marder, E., & Fields, R. D. (2005). The Neuron Doctrine, Redux. Science, 310, 791-793. https://doi.org/10.1126/science.1114394 [Paper reference 1]

  14. 14. Buzsaki, G. (2007). The Structure of Consciousness. Nature, 446, 267. https://doi.org/10.1038/446267a [Paper reference 3]

  15. 15. Cherry, E. C. (1953). Some Experiments on the Recognition of Speech, with One and Two Ears. Journal of the Acoustical Society of America, 25, 975-979. https://doi.org/10.1121/1.1907229 [Paper reference 1]

  16. 16. Chun, M. M., & Marois, R. (2002). The Dark Side of Visual Attention. Current Opinion in Neurobiology, 12, 184-189. https://doi.org/10.1016/S0959-4388(02)00309-4 [Paper reference 1]

  17. 17. Churchland, P. S. (2002). Brain-Wise. Studies in Neurophilosophy. Cambridge, MA: MIT Press. [Paper reference 1]

  18. 18. Constantinidis, C., Williams, G. V., & Goldman-Rakic, P. S. (2002). A Role for Inhibition in Shaping the Temporal Flow of Information in Prefrontal Cortex. Nature Neuroscience, 5, 175-180. https://doi.org/10.1038/nn799 [Paper reference 1]

  19. 19. Cowan, N., Elliott, E. M., Saults, J. S., Morey, C. C., Mattox, S., Hismjatullina, A., & Conway, A. R. A. (2005). On the Capacity of Attention: Its Estimation and Its Role in Working Memory and Cognitive Aptitudes. Cognitive Psychology, 51, 42-100. https://doi.org/10.1016/j.cogpsych.2004.12.001 [Paper reference 1]

  20. 20. Crick, F. (1994). The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Simon and Schuster. [Paper reference 1]

  21. 21. Crick, F., & Koch, C. (1990). Towards a Neurobiological Theory of Consciousness. Seminars in Neuroscience, 2, 263-275. [Paper reference 1]

  22. 22. Crick, F., & Koch, C. (1995). Are We Aware of Neural Activity in Primary Visual Cortex? Nature, 375, 121-123. https://doi.org/10.1038/375121a0 [Paper reference 2]

  23. 23. Crick, F., & Koch, C. (2000). The Unconscious Homunculus. Neuro-Psychoanalysis, 2, 3-11. https://doi.org/10.1080/15294145.2000.10773273 [Paper reference 1]

  24. 24. Crick, F., & Koch, C. (2003). A Framework for Consciousness. Nature Neuroscience, 6, 119-126. https://doi.org/10.1038/nn0203-119 [Paper reference 1]

  25. 25. DeCharms, R. C., & Zador, A. (2000). Neural Representations and the Cortical Code. Annual Review of Neuroscience, 23, 613-647. https://doi.org/10.1146/annurev.neuro.23.1.613 [Paper reference 1]

  26. 26. Dehaene, S., & Naccache, L. (2001). Towards a Cognitive Neuroscience of Consciousness: Basic Evidence and a Workspace Framework. Cognition, 79, 1-37. https://doi.org/10.1016/S0010-0277(00)00123-2 [Paper reference 1]

  27. 27. Dehaene, S., Changeux, J. P., Naccache, L., Sackur, J., & Sergent C. (2006). Conscious, Preconscious and Subliminal Processing: A Testable Taxonomy. Trends in Cognitive Science, 10, 204-211. https://doi.org/10.1016/j.tics.2006.03.007 [Paper reference 7]

  28. 28. Dehaene, S., Sergent, C., & Changeux, J. P. (2003). A Neuronal Network Model Linking Subjective Reports and Objective Physiological Data during Conscious Perception. Proceedings of the National Academy of Sciences of the USA, 100, 8520-8525. https://doi.org/10.1073/pnas.1332574100 [Paper reference 1]

  29. 29. Delorme, A., Rousselet, G. A., Mace, M. J., & Fabre-Thorpe, M. (2004) Interaction of Top-Down and Bottom-Up Processing in the Fast Visual Analysis of Natural Scenes. Cognitive Brain Research, 19, 103-113. https://doi.org/10.1016/j.cogbrainres.2003.11.010 [Paper reference 1]

  30. 30. Dennett, D. (2001). Are We Explaining Consciousness Yet? Cognition, 79, 221-237. https://doi.org/10.1016/S0010-0277(00)00130-X

  31. 31. Dennett, D. C. (1991). Consciousness Explained. Boston, MA: Little, Brown. [Paper reference 2]

  32. 32. Di Lollo, V., Enns, J. T., & Rensink, R. A. (2000). Competition for Consciousness among Visual Events: The Psychophysics of Re-Entrant Visual Processes. Journal of Experimental Psychology: General, 129, 481-507. https://doi.org/10.1037/0096-3445.129.4.481 [Paper reference 1]

  33. 33. Dietrich, A. (2003). Functional Neuroanatomy of Altered States of Consciousness: The Transient Hypofrontality Hypothesis. Consciousness and Cognition, 12, 231-256. https://doi.org/10.1016/S1053-8100(02)00046-6 [Paper reference 2]

  34. 34. Dresp-Langley, B. (2012). Why the Brain Knows More than We Do: Non-Conscious Representations and Their Role in the Construction of Conscious Experience. Brain Sciences, 2, 1-21. https://doi.org/10.3390/brainsci2010001 [Paper reference 3]

  35. 35. Dresp-Langley, B., & Durup, J. (2009). A Plastic Temporal Code for Conscious State Generation in the Brain. Neural Plasticity, 2009, Article ID 482696. https://doi.org/10.1155/2009/482696 [Paper reference 2]

  36. 36. Dresp-Langley, B., & Durup, J. (2012). Does Consciousness Exist Independently of Present Time and Present Time Independently of Consciousness? Open Journal of Philosophy, 2, 45-49. https://doi.org/10.4236/ojpp.2012.21007

  37. 37. Driver, J., & Vuilleumier, P. (2001). Perceptual Awareness and Its Loss in Unilateral Neglect and Extinction. Cognition, 79, 39-88. https://doi.org/10.1016/S0010-0277(00)00124-4 [Paper reference 2]

  38. 38. Drover, D. R., Lemmens, H. J., Pierce, E. T., Plourde, G., Ornstein, E., Prichep, L. S., Chabot, R. J., & Gugino, L. (2002). Patient State Index (PSI): Titration of Delivery and Recovery from Propofol, Alfentanil, and Nitrous Oxide Anesthesia. Anesthesiology, 97, 82-89. https://doi.org/10.1097/00000542-200207000-00012 [Paper reference 1]

  39. 39. Duncan, J. (1980). The Locus of Interference in the Perception of Simultaneous Stimuli. Psychological Review, 87, 272-300. https://doi.org/10.1037/0033-295X.87.3.272 [Paper reference 1]

  40. 40. Edelman, D. B., Baars, B. J., & Seth, A. K. (2005). Identifying Hallmarks of Consciousness in Non-Mammalian Species. Consciousness and Cognition, 14, 169-187. https://doi.org/10.1016/j.concog.2004.09.001 [Paper reference 1]

  41. 41. Edelman, G. M. (1989). The Remembered Present. New York: Basic Books. [Paper reference 1]

  42. 42. Edelman, G. M. (1993). Neural Darwinism: Selection of Re-Entrant Signalling in Higher Brain Function. Neuron, 10, 115-125. https://doi.org/10.1016/0896-6273(93)90304-A [Paper reference 1]

  43. 43. Edelman, G. M. (2003). Naturalizing Consciousness: A Theoretical Framework. Proceedings of the National Academy of Sciences of the USA, 100, 5520-5524. https://doi.org/10.1073/pnas.0931349100 [Paper reference 2]

  44. 44. Edelman, G. M., & Tononi, G. (2000). Re-Entry and the Dynamic Core: Neural Correlates of Conscious Experience. In T. Metzinger (Ed.), Neural Correlates of Consciousness: Empirical and Conceptual Questions (pp. 139-151). Cambridge, MA: MIT Press. [Paper reference 1]

  45. 45. Eggermont, J. J. (1998). Is There a Neural Code? Neuroscience and Biobehavioral Reviews, 22, 355-370. https://doi.org/10.1016/S0149-7634(97)00021-3 [Paper reference 1]

  46. 46. Engel, A., Konig, P., Kreiter, A., Schillen, T., & Singer, W. (1992). Temporal Coding in the Visual Cortex: New Vistas on Integration in the Nervous System. Trends in Neurosciences, 15, 218-226. https://doi.org/10.1016/0166-2236(92)90039-B [Paper reference 1]

  47. 47. Feigl, H. (1958) The “Mental” and the “Physical”. In H. Feigl, M. Scriven, & G. Maxwell (Eds.), Concepts, Theories and the Mind-Body Problem, Minneapolis, Minnesota Studies in the Philosophy of Science (Vol. 2). Minneapolis: University of Minnesota Press. [Paper reference 1]

  48. 48. Feinstein, J. S., Stein, M. B., Castillo, G. N., & Paulus, M. P. (2004). From Sensory Processes to Conscious Perception. Consciousness and Cognition, 13, 323-335. https://doi.org/10.1016/j.concog.2003.10.004 [Paper reference 1]

  49. 49. Fellin, T., & Carmignoto, G. (2004). Neurone-to-Astrocyte Signalling in the Brain Represents a Distinct Multifunctional Unit. Journal of Physiology, 559, 3-15. https://doi.org/10.1113/jphysiol.2004.063214 [Paper reference 1]

  50. 50. Fields, R. D. (2004). The Other Half of the Brain. Scientific American, 290, 54-61. https://doi.org/10.1038/scientificamerican0404-54 [Paper reference 1]

  51. 51. Fingelkurts, A. A, & Fingelkurts, A. A. (2001). Operational Architectonics of the Human Brain Biopotential Field: Towards Solving the Mind-Brain Problem. Brain and Mind, 2, 261-296. https://doi.org/10.1023/A:1014427822738 [Paper reference 1]

  52. 52. Fingelkurts, A. A., & Fingelkurts, A. A. (2008). Brain-Mind Operational Architectonics Imaging: Technical and Methodological Aspects. The Open Neuroimaging Journal, 2, 73-93.

  53. 53. Fingelkurts, A. A., & Fingelkurts, A. A. (2014). Present Moment, Past, and Future: Mental Kaleidoscope. Frontiers in Psychology, 5, 395. https://doi.org/10.3389/fpsyg.2014.00395 [Paper reference 2]

  54. 54. Frith, C., & Dolan, R. (1996). The Role of the Prefrontal Cortex in Higher Cognitive Functions. Cognitive Brain Research, 5, 175-181. https://doi.org/10.1016/S0926-6410(96)00054-7 [Paper reference 1]

  55. 55. Fuster, J. M. (2000). Cortical Dynamics of Memory. International Journal of Psychophysiology, 35, 155-164. https://doi.org/10.1016/S0167-8760(99)00050-1 [Paper reference 1]

  56. 56. Gray, J. A. (1971). The Mind-Brain Identity Theory as a Scientific Hypothesis. Philosophical Quarterly, 21, 247-252. https://doi.org/10.2307/2218130 [Paper reference 1]

  57. 57. Gray, J. A. (1995). Consciousness and Its (Dis)Contents. Behavioral and Brain Sciences, 18, 703-722. https://doi.org/10.1017/S0140525X00040693 [Paper reference 1]

  58. 58. Gray, J. A. (2002). To Thine Own Synapses Be True? Nature Neuroscience, 5, 1115. https://doi.org/10.1038/nn1102-1115 [Paper reference 1]

  59. 59. Grossberg, S. (1975) A Neural Model of Attention, Reinforcement and Discrimination Learning. International Review of Neurobiology, 18, 263-327. https://doi.org/10.1016/S0074-7742(08)60037-9 [Paper reference 2]

  60. 60. Grossberg, S. (1999). The Link between Brain Learning, Attention, and Consciousness. Consciousness & Cognition, 8, 1-44. https://doi.org/10.1006/ccog.1998.0372 [Paper reference 5]

  61. 61. Gütig, R., & Sompolinsky, H. (2006). The Tempotron: A Neuron That Learns Spike Timing-Based Decisions. Nature Neuroscience, 9, 420-428. https://doi.org/10.1038/nn1643 [Paper reference 1]

  62. 62. Guttman, S. E., Gilroy, L. A., & Blake, R. (2007). Spatial Grouping in Human Vision: Temporal Structure Trumps Temporal Synchrony. Vision Research, 47, 219-230. https://doi.org/10.1016/j.visres.2006.09.012 [Paper reference 1]

  63. 63. Helekar, S. A. (1999). On the Possibility of Universal Neural Coding of Subjective Experience. Consciousness and Cognition, 8, 423-446. https://doi.org/10.1006/ccog.1998.0377 [Paper reference 2]

  64. 64. Hempel, C. G., & Oppenheim, P. (1948). Studies in the Logic of Explanation. Philosophy of Science, 15, 135-175. https://doi.org/10.1086/286983 [Paper reference 2]

  65. 65. Herrmann, C. S., Munk, M. H., & Engel, A. K. (2004). Cognitive Functions of Gamma-Band Activity: Memory Match and Utilisation. Trends in Cognitive Sciences, 8, 347-355. https://doi.org/10.1016/j.tics.2004.06.006 [Paper reference 1]

  66. 66. Hochstein, S., & Ahissar, M. (2002). View from the Top: Hierarchies and Reverse Hierarchies in the Visual System. Neuron, 36, 791-804. https://doi.org/10.1016/S0896-6273(02)01091-7 [Paper reference 1]

  67. 67. Holmgren, C., Harkany, T., Svennenfors, B., & Zilberter, Y. (2003). Pyramidal Cell Communication within Local Networks in Layer 2/3 of Rat Neocortex. Journal of Physiology, 551, 139-153. https://doi.org/10.1113/jphysiol.2003.044784 [Paper reference 1]

  68. 68. Hume, D. (1740). A Treatise of Human Nature. Oxford: Oxford University Press.

  69. 69. Humphrey, N. (2000). How to Solve the Mind-Body Problem. Journal of Consciousness Studies, 7, 5-112. [Paper reference 1]

  70. 70. Guéniot, T. (1868). D’une hallucination du toucher (hétérotopie subjective des extrémités) particulière à certains amputés. Journal de Physiologie de l'Homme et des Animaux, 4, 416-418. [Paper reference 1]

  71. 71. James, W. (1890). Principles of Psychology. New York: Holt. https://doi.org/10.1037/11059-000 [Paper reference 1]

  72. 72. Jensen, O. (2005). Reading the Hippocampal Code by Theta Phase-Locking. Trends in Cognitive Sciences, 9, 551-553. https://doi.org/10.1016/j.tics.2005.10.003 [Paper reference 1]

  73. 73. Jensen, O., & Lisman, J. E. (1996a). Novel Lists of 7 ± 2 Known Items Can Be Reliably Stored in an Oscillatory Short-Term Memory Network: Interaction with Long-Term Memory. Learning and Memory, 3, 257-263. https://doi.org/10.1101/lm.3.2-3.257 [Paper reference 1]

  74. 74. Jensen, O., & Lisman, J. E. (1996b). Theta/Gamma Networks with Slow NMDA Channels Learn Sequences and Encode Episodic Memory: Role of NMDA Channels in Recall. Learning and Memory, 3, 264-278. https://doi.org/10.1101/lm.3.2-3.264

  75. 75. Jensen, O., & Lisman, J. E. (1996c). Hippocampal Region CA3 Predicts Memory Sequences: Accounting for the Phase Precession of Place Cells. Learning and Memory, 3, 279-287. https://doi.org/10.1101/lm.3.2-3.279

  76. 76. Jensen, O., & Lisman, J. E. (1998). An Oscillatory Short-Term Memory Model Can Account for Data on the Sternberg Task. Journal of Neuroscience, 18, 10688-10699. [Paper reference 1]

  77. 77. Jensen, O., & Lisman, J. E. (2005). Hippocampal Sequence-Encoding Driven by a Cortical Multi-Item Working Memory Buffer. Trends in Neuroscience, 28, 67-72. https://doi.org/10.1016/j.tins.2004.12.001

  78. 78. Jensen, O., Idiart, M. A. P., & Lisman, J. E. (1996). Physiologically Realistic Formation of Autoassociative Memory in Networks with Theta/Gamma Oscillations—Role of Fast NMDA Channels. Learning and Memory, 3, 243-256. https://doi.org/10.1101/lm.3.2-3.243 [Paper reference 1]

  79. 79. John, E. R. (2001). A Field Theory of Consciousness. Consciousness and Cognition, 10, 184-258. https://doi.org/10.1006/ccog.2001.0508 [Paper reference 2]

  80. 80. John, E. R. (2002). The Neurophysics of Consciousness. Brain Research Reviews, 39, 1-28. https://doi.org/10.1016/S0165-0173(02)00142-X [Paper reference 1]

  81. 81. Kihlstrom, J. F. (1987). The Cognitive Unconscious. Science, 237, 1445-1452. https://doi.org/10.1126/science.3629249 [Paper reference 2]

  82. 82. Klausberger, T., Magill, P. J., Marton, L. F., Roberts, J. D. B., Cobden, P. M., Buzsaki, G., & Somogyi, P. (2003). Brain-State- and Cell-Type-Specific Firing of Hippocampal Interneurons In Vivo. Nature, 421, 844-848. https://doi.org/10.1038/nature01374 [Paper reference 1]

  83. 83. Klimesch, W., Doppelmayr, M., Yonelinas, A., Kroll, N. E. A., Lazzara, M., Röhm, D., & Koch, C. (1997). Computation and the Single Neuron. Nature, 385, 207-210. https://doi.org/10.1038/385207a0 [Paper reference 2]

  84. 84. Koch, C., & Crick, F. (2000). Some Thoughts on Consciousness and Neuroscience. In M. S. Gazzaniga (Ed.), The New Cognition Neurosciences (2nd ed., pp. 1285-1294). Cambridge, MA: MIT Press. [Paper reference 1]

  85. 85. Koenig, T., & Lehmann, D. (1996). Microstates in Language-Related Brain Potential Maps Show Noun-Verb Differences. Brain and Language, 53, 169-182. https://doi.org/10.1006/brln.1996.0043 [Paper reference 1]

  86. 86. Kosslyn, S. M. (1994). Image and Brain: The Resolution of the Imagery Debate. Cambridge, MA: MIT Press. [Paper reference 1]

  87. 87. Kosslyn, S. M. (1999). If Neuroimaging Is the Answer, What Is the Question? Philosophical Transactions of the Royal Society of London B, 354, 1283-1294. https://doi.org/10.1098/rstb.1999.0479

  88. 88. Kosslyn, S. M., Ganis, G., & Thompson, W. L. (2001). Neural Foundations of Imagery. Nature Reviews Neuroscience, 2, 635-642. https://doi.org/10.1038/35090055 [Paper reference 1]

  89. 89. LaBerge, S. (1990). Lucid Dreaming: Psychophysiological Studies of Consciousness during REM Sleep. In R. R. Bootzen, J. F. Kihlstrom, & D. L. Schacter (Eds.), Sleep and Cognition. Washington, DC: APA Press. https://doi.org/10.1037/10499-008 [Paper reference 3]

  90. 90. Lamme, V. A. F. (2004). Separate Neural Definitions of Visual Consciousness and Visual Attention: A Case for Phenomenal Awareness. Neural Networks, 17, 861-872. https://doi.org/10.1016/j.neunet.2004.02.005 [Paper reference 2]

  91. 91. Lamme, V. A. F. (2006). Towards a True Neural Stance in Consciousness. Trends in Cognitive Sciences, 10, 494-501. https://doi.org/10.1016/j.tics.2006.09.001 [Paper reference 1]

  92. 92. Lamme, V. A., & Roelfsema, P. R. (2000). The Distinct Modes of Vision Offered by Feedforward and Recurrent Processing. Trends in Neurosciences, 23, 571-579. https://doi.org/10.1016/S0166-2236(00)01657-X [Paper reference 1]

  93. 93. Lashley, K. (1951). The Problem of Serial Order in Behavior. In L. A. Jeffress (Ed.), Cerebral Mechanisms in Behavior (pp. 112-136). New York: Wiley. [Paper reference 1]

  94. 94. Lashley, K. S. (1956). Cerebral Organisation and Behavior. In The Brain and Human Behavior. Proceedings of the Association for Nervous and Mental Diseases (pp. 1-18). New York: Hafner. [Paper reference 1]

  95. 95. Lau, P.-M., & Bi, G.-Q. (2005). Synaptic Mechanisms of Persistent Reverberatory Activity in Neuronal Networks. Proceedings of the National Academy of Sciences of the USA, 102, 10333-10338. https://doi.org/10.1073/pnas.0500717102 [Paper reference 1]

  96. 96. Lazar, A., Pipa, G., & Triesch, J. (2007). Fading Memory and Time Series Prediction in Recurrent Networks with Different Forms of Plasticity. Neural Networks, 20, 312-322. https://doi.org/10.1016/j.neunet.2007.04.020 [Paper reference 1]

  97. 97. LeDoux, J. (2002). Synaptic Self. How Our Brains Become Who We Are. New York: Macmillan. [Paper reference 1]

  98. 98. Lehmann, D., Ozaki, H., & Pal, I. (1987). EEG Alpha Map Series: Brain Microstates by Space Oriented Adaptive Segmentation. Electroencephalography and Clinical Neurophysiology, 67, 271-288. https://doi.org/10.1016/0013-4694(87)90025-3 [Paper reference 1]

  99. 99. Lehmann, D., Strik, W. K., Henggeler, B., Koenig, T., & Koukkou, M. (1998). Brain Electric Microstates and Momentary Conscious Mind States as Building Blocks of Spontaneous Thinking. I. Visual Imagery and Abstract Thoughts. International Journal of Psychophysiology, 29, 1-11. https://doi.org/10.1016/S0167-8760(97)00098-6 [Paper reference 1]

  100. 100. Lennie, P. (2003). The Cost of Cortical Computation. Current Biology, 13, 493-497. https://doi.org/10.1016/S0960-9822(03)00135-0 [Paper reference 1]

  101. 101. Lestienne, R., & Strehler, B. L. (1988). Differences between Monkey Visual Cortex Cells in Triplet and Ghost Doublets Informational Symbol Relationships. Biological Cybernetics, 59, 337-352. https://doi.org/10.1007/BF00332924 [Paper reference 1]

  102. 102. Lewis, D. (1983). Mad Pain and Martian Pain. In D. Lewis (Ed.), Philosophical Papers (Vol. 1). Oxford: Oxford University Press. https://doi.org/10.1093/0195032047.003.0009 [Paper reference 1]

  103. 103. Libet, B. (1993). The Neural Time Factor in Conscious and Unconscious Events. In Experimental and Theoretical Studies of Consciousness (pp. 282-303). Chichester, NY: Wiley. [Paper reference 1]

  104. 104. Libet, B. (2003). Timing of Conscious Experience. Consciousness and Cognition, 12, 321-331. https://doi.org/10.1016/S1053-8100(03)00048-5

  105. 105. Libet, B. (2004). Mind Time. Cambridge, MA: Harvard University Press.

  106. 106. Lin, L., Osan, R., & Tsien, J. Z. (2006). Organizing Principles of Real-Time Memory Encoding: Neural Clique Assemblies and Universal Neural Codes. Trends in Neuroscience, 29, 48-57. https://doi.org/10.1016/j.tins.2005.11.004 [Paper reference 1]

  107. 107. Lisman, J. E. (1998). What Makes the Brain’s Tickers Tock? Nature, 394, 132-133. https://doi.org/10.1038/28061 [Paper reference 1]

  108. 108. Lisman, J. E., & Idiart, M. A. P. (1995). Storage of 7 ± 2 Short-Term Memories in Oscillatory Subcycles. Science, 267, 1512-1515. https://doi.org/10.1126/science.7878473 [Paper reference 1]

  109. 109. Llinás, R., & Ribary, U. (1993). Coherent 40-Hz Oscillation Characterizes Dream States in Humans. Proceedings of the National Academy of Science USA, 90, 2078-2081. https://doi.org/10.1073/pnas.90.5.2078 [Paper reference 1]

  110. 110. Llinás, R., & Ribary, U. (2001). Consciousness and the Brain: The Thalamocortical Dialogue in Health and Disease. Annals of the New York Academy of Sciences, 929, 166-175. https://doi.org/10.1111/j.1749-6632.2001.tb05715.x [Paper reference 2]

  111. 111. Llinás, R., Ribary, U., Contreras, D., & Pedroarena, C. (1998). The Neuronal Basis for Consciousness. Philosophical Transactions of the Royal Society of London B, 353, 1841-1849. https://doi.org/10.1098/rstb.1998.0336 [Paper reference 2]

  112. 112. Maccaferri, G., & Lacaille, J.-C. (2003). Interneuron Diversity Series: Hippocampal Interaction Classification—Making Things as Simple as Possible, Not Simpler. Trends in Neurosciences, 26, 564-571. https://doi.org/10.1016/j.tins.2003.08.002 [Paper reference 1]

  113. 113. Machens, C. K., Romo, R., & Brody, C. D. (2005). Flexible Control of Mutual Inhibition: A Neural Model of Two-Interval Discrimination. Science, 307, 1121-1124. https://doi.org/10.1126/science.1104171 [Paper reference 1]

  114. 114. MacKay, D. M., & McCulloch, W. S. (1952). The Limiting Information Capacity of a Neuronal Link. Bulletin of Mathematical Biophysics, 14, 127-135. https://doi.org/10.1007/BF02477711 [Paper reference 1]

  115. 115. Mangan, B. (2003). The Conscious “Fringe”: Bringing William James Up to Date. In B. J. Baars, W. P. Banks, & J. B. Newman (Eds.), Essential Sources in the Scientific Study of Consciousness (pp. 741-759). Cambridge, MA: MIT Press. [Paper reference 3]

  116. 116. McCulloch, W. S., & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, 115-133. https://doi.org/10.1007/BF02478259 [Paper reference 1]

  117. 117. Merzenich, M. M., Nelson, R. J., Stryker, M. S., Cynader, M. S., Schoppmann, A., & Zook, J. M. (1984). Somatosensory Cortical Map Changes Following Digit Amputation in Adult Monkeys. Journal of Comparative Neurology, 224, 591-605. https://doi.org/10.1002/cne.902240408 [Paper reference 1]

  118. 118. Mesulam, M. M. (1990). Large-Scale Neuro-Cognitive Networks and Distributed Processing for Attention, Language, and Memory. Annals of Neurology, 28, 597-613. https://doi.org/10.1002/ana.410280502 [Paper reference 1]

  119. 119. Miller, G. A. (1956). The Magic Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review, 63, 81-97. https://doi.org/10.1037/h0043158 [Paper reference 1]

  120. 120. Milner, A. D. (1995). Cerebral Correlates of Visual Awareness. Neuropsychologia, 33, 1117-1130. https://doi.org/10.1016/0028-3932(95)00052-5 [Paper reference 1]

  121. 121. Moore, D. R., & King, A. J. (1999). Auditory Perception: The Near and Far of Sound Localization. Current Biology, 9, 361-363. https://doi.org/10.1016/S0960-9822(99)80227-9 [Paper reference 1]

  122. 122. Nagel, T. (1974). What Is It like to Be a Bat? The Philosophical Review, 83, 435-450. https://doi.org/10.2307/2183914 [Paper reference 1]

  123. 123. Natsoulas, T. (1983) Concepts of Consciousness. Journal of Mind and Behavior, 4, 13-59. [Paper reference 1]

  124. 124. Nedergaard, M., Ransom, B., & Goldman, S. A. (2003). New Roles for Astrocytes: Redefining the Functional Architecture of the Brain. Trends in Neurosciences, 26, 523-530. https://doi.org/10.1016/j.tins.2003.08.008 [Paper reference 1]

  125. 125. Nelson, S. B. (2002). Cortical Microcircuits: Diverse or Canonical. Neuron, 36, 19-27. https://doi.org/10.1016/S0896-6273(02)00944-3 [Paper reference 1]

  126. 126. Newman, J., & Baars, B. J. (1993). A Neural Attentional Model for Access to Consciousness: A Global Workspace Perspective. Concepts in Neuroscience, 4, 255-290.

  127. 127. Nielsen, T. A., & Stenstrom, P. (2005). What Are the Memory Sources of Dreaming? Nature, 437, 1286-1289. https://doi.org/10.1038/nature04288 [Paper reference 1]

  128. 128. Oberly, H. S. (1928). A Comparison of the Spans of Attention and Memory. American Journal of Psychology, 40, 295-302. https://doi.org/10.2307/1414490 [Paper reference 1]

  129. 129. Page, M. P. A., & Norris, D. (1998). The Primacy Model: A New Model of Immediate Serial Recall. Psychological Review, 105, 761-781. https://doi.org/10.1037/0033-295X.105.4.761-781 [Paper reference 1]

  130. 130. Parkin, A. J. (1999). Human Memory. Current Biology, 9, 582-585. https://doi.org/10.1016/S0960-9822(99)80378-9 [Paper reference 1]

  131. 131. Pascual-Marqui, R. D., Michel, C. M., & Lehmann, D. (1995). Segmentation of Brain Electrical Activity into Microstates: Model Estimation and Validation. IEEE Transactions on Biomedical Engineering, 42, 658-665. https://doi.org/10.1109/10.391164 [Paper reference 1]

  132. 132. Pockett, S. (1999). Anesthesia and the Electrophysiology of Auditory Consciousness. Consciousness & Cognition, 8, 45-61. https://doi.org/10.1006/ccog.1998.0373 [Paper reference 1]

  133. 133. Pockett, S. (2004). Does Consciousness Cause Behaviour? Journal of Consciousness Studies, 11, 3-4. [Paper reference 1]

  134. 134. Pollen, D. (1999). On the Neural Correlates of Visual Perception. Cerebral Cortex, 9, 4-19. https://doi.org/10.1093/cercor/9.1.4 [Paper reference 2]

  135. 135. Pöppel, E., & Logothetis, N. (1986). Neuronal Oscillations in the Human Brain. Naturwissenschaften, 73, 267-268. https://doi.org/10.1007/BF00367781 [Paper reference 1]

  136. 136. Posner, M. I. (1994). Attention: The Mechanisms of Consciousness. Proceedings of the National Academy of Sciences of the USA, 91, 7398-7403. https://doi.org/10.1073/pnas.91.16.7398 [Paper reference 1]

  137. 137. Potter, M. C. (1993). Very Short-Term Conceptual Memory. Memory & Cognition, 21, 156-161. https://doi.org/10.3758/BF03202727 [Paper reference 1]

  138. 138. Prinz, J. (2000). A Neurofunctional Theory of Visual Consciousness. Consciousness and Cognition, 9, 243-259. https://doi.org/10.1006/ccog.2000.0442 [Paper reference 1]

  139. 139. Ramachandran, V. S. (1998) Consciousness and Body Image: Lessons from Phantom Limbs, Capgras Syndrome and Pain Asymbolia. Philosophical Transactions of the Royal Society of London B, 353, 1851-1859. https://doi.org/10.1098/rstb.1998.0337 [Paper reference 2]

  140. 140. Ramachandran, V. S., Rogers-Ramachandran, D., & Cobb, S. (1995) Touching the Phantom Limb. Nature, 377, 489-490. https://doi.org/10.1038/377489a0 [Paper reference 1]

  141. 141. Ramachandran, V. S., Rogers-Ramachandran, D., & Stewart, M. (1992) Perceptual Correlates of Massive Cortical Reorganization. Science, 258, 1159-1160. https://doi.org/10.1126/science.1439826 [Paper reference 1]

  142. 142. Ramsey, N. F., Jansma, J. M., Jager, G., Van Raalten, T., & Kahn, R. S. (2004). Neurophysiological Functions in Human Information Processing Capacity. Brain, 127, 517-525. https://doi.org/10.1093/brain/awh060 [Paper reference 1]

  143. 143. Ransom, B., Behar, T., & Nedergaard, M. (2003). New Roles for Astrocytes (Stars at Last). Trends in Neurosciences, 26, 520-522. https://doi.org/10.1016/j.tins.2003.08.006 [Paper reference 1]

  144. 144. Raz, A., & Buhle, J. (2006). Typologies of Attentional Networks. Nature Reviews Neuroscience, 7, 367-379. https://doi.org/10.1038/nrn1903 [Paper reference 1]

  145. 145. Rees, G., Wojciulik, E., Clarke, K., Husain, M., Frith, C., & Driver, J. (2002). Neural Correlates of Conscious and Unconscious Vision in Parietal Extinction. Neurocase, 8, 387-393. [Paper reference 1]

  146. 146. Revonsuo, A. (2000). Prospects for a Scientific Research Program on Consciousness. In T. Metzinger (Ed.), Neural Correlates of Consciousness: Empirical and Conceptual Questions (pp. 57-75). Cambridge, MA: MIT Press. [Paper reference 1]

  147. 147. Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1997). Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press. [Paper reference 2]

  148. 148. Robertson, L. C. (2003). Binding, Spatial Attention and Perceptual Awareness. Nature Reviews Neuroscience, 4, 93-102. https://doi.org/10.1038/nrn1030 [Paper reference 1]

  149. 149. Rosenthal, D. M. (1986). Two Concepts of Consciousness. Philosophical Studies, 49, 329-359. https://doi.org/10.1007/BF00355521 [Paper reference 1]

  150. 150. Rosenthal, D. M. (2002). How Many Kinds of Consciousness? Consciousness and Cognition, 11, 653-665. https://doi.org/10.1016/S1053-8100(02)00017-X [Paper reference 1]

  151. 151. Roth, G. (2000). The Evolution and Ontogeny of Consciousness. In T. Metzinger (Ed.), Neural Correlates of Consciousness: Empirical and Conceptual Questions (pp. 77-97). Cambridge, MA: MIT Press. [Paper reference 1]

  152. 152. Schneider, W., & Shiffrin, R. M. (1977). Controlled and Automatic Human Information Processing: 1. Detection, Search, and Attention. Psychological Review, 84, 1-66. https://doi.org/10.1037/0033-295X.84.1.1 [Paper reference 1]

  153. 153. Schwartz, S. (2003). Are Life Episodes Replayed during Dreaming? Trends in Cognitive Sciences, 7, 325-327. https://doi.org/10.1016/S1364-6613(03)00162-1 [Paper reference 1]

  154. 154. Seth, A. K., & Baars, B. J. (2005). Neural Darwinism and Consciousness. Consciousness and Cognition, 14, 140-168. https://doi.org/10.1016/j.concog.2004.08.008 [Paper reference 1]

  155. 155. Seth, A. K., Izhikevich, E., Reeke, G. N., & Edelman, G. M. (2006). Theories and Measures of Consciousness: An Extended Framework. Proceedings of the National Academy of Sciences of the USA, 103, 10799-10804. https://doi.org/10.1073/pnas.0604347103 [Paper reference 1]

  156. 156. Shastri, L., & Ajjanagadde, V. (1993). From Simple Association to Systematic Reasoning: A Connectionist Representation of Rules, Variables, and Dynamic Bindings Using Temporal Synchrony. Behavioral and Brain Sciences, 16, 417-494. https://doi.org/10.1017/S0140525X00030910 [Paper reference 1]

  157. 157. Shiffrin, R. M. (2003). Attention, Automatism, and Consciousness. In B. J. Baars, W. P. Banks, & J. B. Newman (Eds.), Essential Sources in the Scientific Study of Consciousness (pp. 631-642). Cambridge, MA: MIT Press. [Paper reference 1]

  158. 158. Shiffrin, R. M., & Schneider, W. (1977). Controlled and Automatic Human Information Processing: 2. Perceptual Learning, Automatic Attending, and a General Theory. Psychological Review, 84, 127-190. https://doi.org/10.1037/0033-295X.84.2.127 [Paper reference 1]

  159. 159. Silverman, M., & Mack, A. (2006) Priming by Change Blindness: When It Does and Does Not Occur. Consciousness & Cognition, 15, 409-422. https://doi.org/10.1016/j.concog.2005.08.003 [Paper reference 1]

  160. 160. Singer, W. (2000). Phenomenal Awareness and Consciousness from a Neurobiological Perspective. In T. Metzinger (Ed.), Neural Correlates of Consciousness: Empirical and Conceptual Questions (pp. 121-137). Cambridge, MA: MIT Press. [Paper reference 1]

  161. 161. Steriade, M. (1997). Synchronized Activities of Coupled Oscillators in the Cerebral Cortex and Thalamus at Different Levels of Vigilance. Cerebral Cortex, 7, 583-604. https://doi.org/10.1093/cercor/7.6.583 [Paper reference 2]

  162. 162. Stockmanns, G., Kochs, E., Nahm, W., Thornton, C., & Kalkmann, C. J. (2000). Automatic Analysis of Auditory Evoked Potentials by Means of Wavelet Analysis. In D. C. Jordan, D. J. A. Vaughan, & D. E. F. Newton (Eds.), Memory and Awareness in Anaesthesia IV (pp. 117-131). London: Imperial College Press. https://doi.org/10.1142/9781848160231_0010 [Paper reference 1]

  163. 163. Taylor, J. G. (2002). Paying Attention to Consciousness. Trends in Cognitive Sciences, 6, 206-210. https://doi.org/10.1016/S1364-6613(02)01890-9 [Paper reference 1]

  164. 164. Thorpe, S. J., & Imbert, M. (1989). Biological Constraints on Connectionist Models. In R. Pfeifer, Z. Schreter, & F. Fogelman-Soulié (Eds.), Connectionism in Perspective (pp. 63-92). Amsterdam: Elsevier. [Paper reference 1]

  165. 165. Thorpe, S., Delorme, A., & Van Rullen, R. (2001). Spike-Based Strategies for Rapid Processing. Neural Networks, 14, 715-725. https://doi.org/10.1016/S0893-6080(01)00083-1 [Paper reference 1]

  166. 166. Tononi, G., & Edelman, G. M. (1998). Consciousness and Complexity. Science, 282, 1846-1851. https://doi.org/10.1126/science.282.5395.1846 [Paper reference 3]

  167. 167. Tononi, G., Sporns, O., & Edelman, G. M. (1992). Re-Entry and the Problem of Integrating Multiple Cortical Areas: Simulation of Dynamic Integration in the Visual System. Cerebral Cortex, 2, 310-335. https://doi.org/10.1093/cercor/2.4.310 [Paper reference 1]

  168. 168. Treisman, A. (1998). Feature Binding, Attention and Object Perception. Philosophical Transactions of the Royal Society of London B, 353, 1295-1306. https://doi.org/10.1098/rstb.1998.0284

  169. 169. Triesch, J., Ballard, D. H., Hayhoe, M. M., & Sullivan, B. T. (2003). What You See Is What You Need. Journal of Vision, 3, 86-94. https://doi.org/10.1167/3.1.9 [Paper reference 1]

  170. 170. Tsotsos, J. K. (1990). Analyzing Vision at the Complexity Level. Behavioral and Brain Sciences, 13, 423-445. https://doi.org/10.1017/S0140525X00079577 [Paper reference 1]

  171. 171. Van Rullen, R., & Thorpe, S. J. (2001). Rate Coding versus Temporal Order Coding: What the Retinal Ganglion Cells Tell the Visual Cortex. Neural Computation, 13, 1255-1283. https://doi.org/10.1162/08997660152002852 [Paper reference 1]

  172. 172. VanRullen, R., & Koch, C. (2003). Is Perception Discrete or Continuous? Trends in Cognitive Sciences, 7, 207-213. https://doi.org/10.1016/S1364-6613(03)00095-0 [Paper reference 2]

  173. 173. VanRullen, R., Guyonneau, R., & Thorpe, S. J. (2005). Spike Times Make Sense. Trends in Neurosciences, 28, 1-4. https://doi.org/10.1016/j.tins.2004.10.010 [Paper reference 1]

  174. 174. Velmans, M. (1991). Is Human Information Processing Conscious? Behavioral and Brain Sciences, 14, 651-669. https://doi.org/10.1017/S0140525X00071776 [Paper reference 1]

  175. 175. Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of Features, Conjunctions, and Objects in Visual Working Memory. Journal of Experimental Psychology: Human Perception and Performance, 27, 92-114. https://doi.org/10.1037/0096-1523.27.1.92 [Paper reference 1]

  176. 176. Volterra, A., & Meldolesi, J. (2005). Astrocytes, from Brain Glue to Communication Elements: The Revolution Continues. Nature Reviews Neuroscience, 6, 626-640. https://doi.org/10.1038/nrn1722 [Paper reference 2]

  177. 177. von der Malsburg, C. (1997). The Coherence Definition of Consciousness. In M. Ito, Y. Miyashita, & E. T. Rolls (Eds.), Cognition, Computation and Consciousness (pp. 193-204). Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198524144.003.0013 [Paper reference 1]

  178. 178. von der Malsburg, C. (1999). The What and Why of Binding: The Modeler’s Perspective. Neuron, 24, 95-104. https://doi.org/10.1016/S0896-6273(00)80825-9 [Paper reference 2]

  179. 179. Wall, J. T., Xu, J., & Wang, X. (2002). Human Brain Plasticity: An Emerging View of the Multiple Substrates and Mechanisms That Cause Cortical Changes and Related Sensory Dysfunction after Injuries of Sensory Inputs from the Body. Brain Research Reviews, 39, 181-215. https://doi.org/10.1016/S0165-0173(02)00192-3 [Paper reference 1]

  180. 180. Yamazaki, Y., Hozumi, Y., Kaneko, K., Li, J., Fujii, S., Miyakawa, H., Kudo, Y., & Kato, H. (2005). Direct Evidence for Mutual Interactions between Perineuronal Astrocytes and Interneurons in the CA1 Region of the Rat Hippocampus. Neuroscience, 134, 791-802. https://doi.org/10.1016/j.neuroscience.2005.04.024 [Paper reference 1]

  181. 181. Zeman, A. (2001). Consciousness. Brain, 124, 1263-1289. https://doi.org/10.1093/brain/124.7.1263 [Paper reference 1]