Integrating Various Neural Features Based on the Mechanism of Intricate Balance and Ongoing Activity: A Unified Neural Account Underlying and Corresponding to Mental Phenomena

T.-W. Lee, G. Tramontano, World Journal of Neuroscience

In recent decades, brain science has been enriched by both empirical and computational approaches. Interesting emerging neural features include power-law distributions, chaotic behavior, self-organized criticality, variance-based approaches, neuronal avalanches, difference-based and sparse coding, optimized information transfer, maximized dynamic range for information processing, and the reproducibility of evoked spatio-temporal motifs in spontaneous activity. These intriguing findings can be largely categorized into two classes: complexity and regularity. This article highlights that the above-mentioned properties, although they look diverse and unrelated, may actually be rooted in a common foundation: excitatory and inhibitory balance (EIB) and ongoing activity (OA). To be clear, descriptions and observations of neural features are phenomena or epiphenomena, while EIB-OA is the underlying mechanism. EIB is maintained in a dynamic manner and may possess regional specificity; importantly, EIB is organized along the boundary of phase transition, which has been called criticality, bifurcation, or the edge of chaos. OA is composed of spontaneous organized activity, physiological noise, non-physiological noise, and the interaction between OA and evoked activities. Based on EIB-OA, the brain may accommodate the properties of both chaos and regularity. We propose "virtual brain space" to bridge brain dynamics and mental space, and the "code driving complexity hypothesis" to integrate regularity and complexity. The functional implications of oscillation and the energy consumption of the brain are also discussed.


The Mechanism of Excitatory-Inhibitory Balance and Ongoing Activity (EIB-OA)

Neural Activity and Noise
As long as there is life, the brain is always active, whether in sleep or even in a comatose state [4]. The appearance of brain activity seems complicated at whatever scale it is examined. There are many ways to index neuronal/neural activity, such as neuronal membrane potential and conductance, the opening and closing of ion channels, ion flow into and out of cells, cascades of biochemical reactions, cortico-electric potentials and current flow, oxygen consumption, glucose metabolism, blood flow, etc. The methods of electrophysiology and optical imaging (e.g., calcium imaging) are frequently adopted to quantify neuronal activity.
At the level of neuronal tissue, intermittent spikes may occur spontaneously without external stimuli, and both chaotic and regular components have been identified. The spontaneous activity of neuronal tissue in vivo is apparently stochastic, with high values of the Fano factor and of the coefficient of variation of inter-spike intervals, but may actually contain reproducible spatio-temporal motifs [5] [6] [7]. At the level of the neural population in the resting state, where quasi-irregular automatic activity dominates, EEG contains oscillations with specific spectral peaks (delta, theta, alpha, beta and gamma) and fMRI comprises low-frequency fluctuation (<0.1 Hz) organized into modular structures [8] [9] [10] [11]; evidence from large-scale networks likewise indicates that brain dynamics host both irregularity and regularity. There are several hypotheses for the origin of neural oscillation. Endogenous pacemakers in the thalamus or cortex may generate electrical rhythms (multi-generators) [12] [13], which may propagate to the remainder of the cortex. Interacting neural nodes of the cortex and thalamus may either receive white noise as input and produce brain waves, or generate rhythms out of non-linearly coupled dynamics [14] [15] [16] [17] [18]. A network organized by fast recurrent excitation followed by slower feedback inhibition seems particularly ready to give rise to oscillations [19]. The above origins are neither mutually exclusive nor an exhaustive account of the observed rhythmic neural activities. However, a critical question remains: how can the prominent neuronal stochasticity be reconciled with the observed population rhythmicity?
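The two spike-train statistics mentioned above can be made concrete with a small sketch. The pure-Python example below (rate, duration, and bin width are hypothetical choices, not values from any study) computes the Fano factor of spike counts and the coefficient of variation (CV) of inter-spike intervals for a simulated Poisson spike train; both metrics are close to 1 for Poisson-like firing, and larger values signal the "apparently stochastic" irregularity discussed here.

```python
import random
import statistics

def spike_stats(spike_times, window=1.0):
    """Fano factor of binned spike counts and CV of inter-spike intervals."""
    # Bin spikes into fixed windows and compute count variability.
    t_max = max(spike_times)
    n_bins = int(t_max / window)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / window)
        if b < n_bins:
            counts[b] += 1
    fano = statistics.variance(counts) / statistics.mean(counts)
    # Inter-spike intervals (ISIs) and their coefficient of variation.
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    cv = statistics.stdev(isis) / statistics.mean(isis)
    return fano, cv

# A homogeneous Poisson spike train: both Fano factor and CV are near 1.
random.seed(0)
rate = 20.0            # spikes per second (illustrative)
t, spikes = 0.0, []
while t < 200.0:       # 200 s of simulated activity
    t += random.expovariate(rate)
    spikes.append(t)

fano, cv = spike_stats(spikes)
print(round(fano, 2), round(cv, 2))   # both close to 1 for Poisson firing
```

Values well above 1 for either metric would indicate super-Poisson variability, e.g., burst firing.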
Although it is possible that some neurons or coupled neurons may fire regularly like a clock, cortical spiking, as stated above, generally looks stochastic. Brunel and Wang have made an insightful contribution to this issue by showing that the frequency of neural network oscillation is determined by synaptic and membrane properties, independent of the neuronal firing rate [20]. A similar dissociation between irregular single-neuron firing and regular population rhythm is also noticed in the cortico-electrical spectrum.
Conventional neuroimaging research has centered on the "regularity" aspect of the brain-behavior relationship, such as the peak waves N100 and P300. For research adopting the chaos approach (more detail in Section 3), phase plots of EEG/LFP show consistent spatio-temporal patterns [22] [23]. EEG can be decomposed into 1/f pink noise (the fractal component, introduced in Section 3) and spectral peaks (the harmonic component). Chaos does not mean something messy or random; in fact, there is organized structure embedded in it. Chaos and randomness (pure noise) can be differentiated by mathematical metrics, such as a finite correlation dimension for the former, which does not converge for the latter. When the "regular" part of EEG is appraised closely, it appears far from perfectly periodic. Analyses of the correlation dimension and the Lyapunov exponent, two common indices of chaos, demonstrated that alpha oscillation, one of the most prominent EEG constituents, is chaotic to a certain degree [24]. From the evidence summarized above, at either the neuronal or the neural level, the distinction between regularity and chaos in brain dynamics seems somewhat blurred. Synaptic plasticity adds a further twist: spike-timing-dependent plasticity (STDP) adjusts synaptic strength according to the relative timing of pre- and post-synaptic spikes within a limited time window. The network firing pattern could thus be altered, and a recurrent network endowed with STDP at local synapses could self-organize into a chaotic manifestation [25]. Another class of regularity that may interact with the chaotic property of the brain is the neuronal/neural code (Section 7).
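As a didactic illustration of the chaos indices mentioned above, the largest Lyapunov exponent can be estimated for the logistic map, a standard one-dimensional chaotic system. This is a toy sketch, not an EEG analysis: a positive exponent indicates sensitive dependence on initial conditions (chaos), a negative one indicates a stable periodic regime.

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=10000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(n_transient):          # discard transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n_iter

print(round(lyapunov_logistic(4.0), 3))   # ~0.693 = ln 2: chaotic regime
print(round(lyapunov_logistic(3.2), 3))   # negative: stable period-2 cycle
```

Applying the same idea to real EEG requires phase-space reconstruction (delay embedding), which is beyond this sketch.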
OA and evoked activity (EA) used to be treated as different brain activities, depending upon whether the organism is exposed to external stimuli. Previous research adds "resting" before the imaging modality to describe OA, e.g., resting fMRI or sometimes resting EEG. It is arguable that the brain may not be idling in the absence of direct sensory stimulation and behavioral output, since OA may still contain evoked components from mind wandering, emotional intrusion, automatic memory retrieval, and commands to or feedback from visceral organs. The distinction between OA and EA is accordingly not absolute. This theoretical article tentatively assumes that OA comprises four components: spontaneous organized activity, physiological noise, non-physiological noise, and the interaction effect with internal/external stimuli. "Spontaneous organized activity" refers to repeated spatio-temporal structures, either regular or chaotic, which generally (but not always) have higher power/amplitude, i.e., spikes, action potentials or oscillations.
It will become clear later (Section 6) that part of the spontaneous organized activity of OA is actually a replica of EA (e.g., neuronal/neural codes), and their relationship is far more intimate than previously thought [26] [27] [28]. Physiological noise denotes stochastic activity that may facilitate signal transmission via several mechanisms, e.g., stochastic resonance and stochastic synchrony [29], whereas the non-physiological counterpart follows the traditional sense of noise that has a detrimental impact on information transfer and processing. Conceptually, sporadic spikes which do not contribute to psycho-physiological function should be regarded as non-physiological noise. Although the neuron is a threshold machine obeying the all-or-none law, it should not be overlooked that noise may still operate when there is no action potential. Noise, whether physiological or not, may change the conductance, capacitance, and membrane potential of a neuron without firing it, i.e., affect the hidden state of neurons, which is different from the noise of electronic devices, where circuit parameters are relatively stable. Subthreshold noise may result from many sources, such as thermodynamic noise at the early sensory stage, biochemical noise of cellular machinery, and background activity caused by pre-synaptic bombardment that does not reach firing threshold. Through the modulation of the phase and resonance of neurons, the influence of noise is carried over during signal transmission. Subthreshold noise may benefit signal transfer and is thus probably physiological. The rationale supporting the distinction between physiological and non-physiological noise will be discussed in more detail in Section 5 (Function and characteristics of OA).
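Stochastic resonance, one of the mechanisms by which physiological noise may facilitate transmission, can be sketched with a toy threshold detector: a subthreshold periodic signal is never detected without noise, while a moderate amount of noise lets it cross threshold preferentially near its peaks. All parameters below are illustrative assumptions, not neural measurements.

```python
import math
import random

def detections(noise_sd, threshold=1.0, amp=0.8, n=2000, seed=1):
    """Count threshold crossings of a subthreshold sinusoid plus Gaussian noise."""
    rng = random.Random(seed)
    hits = 0
    for i in range(n):
        signal = amp * math.sin(2 * math.pi * i / 50)   # peak 0.8 < threshold 1.0
        if signal + rng.gauss(0, noise_sd) > threshold:
            hits += 1
    return hits

# Without noise the subthreshold signal never reaches threshold; with moderate
# noise, crossings occur and cluster around the signal peaks.
print(detections(0.0), detections(0.3))
```

With very large noise the crossings would no longer track the signal phase, which is why the benefit peaks at an intermediate noise level.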
Theoretical research has suggested that random noise may drive an interacting network to generate synchronous oscillation. It is notable that the oscillatory behaviors of the network are prone to occur at a certain "critical" point (or zone) of the network parameters. Through synaptic plasticity, the brain dynamics may shift along the critical boundary to become more or less chaotic. Together, noise and spontaneous neural activity, and chaos and regularity (information transmission), can be bridged by EIB at the critical state. The reproducible spatio-temporal pattern of the spontaneous organized activity of OA is endorsed by structured EIB, which may carry out information storage/retrieval, representation of neuronal/neural codes, and provision of immediate access to the sensorimotor repertoire. These issues will be elaborated in later sections. Across different hierarchies and scales, every system has its own noise, which may come from the system itself or from the observer's measurement, and most frequently from both. Measurement noise is not of interest in this theoretical article and can be categorized into the non-physiological noise of OA. We will see that signal variability that used to be attributed to noise may be physiologically meaningful and is conceptually relevant to entropy (Section 6).

Excitatory and Inhibitory Equilibrium
Brain statistics show that 1 mm³ of mouse cortex contains 10⁵ neurons, 10⁸ synapses, and 4 km of axon [30]. It was found that white matter volume scales approximately as the 4/3 power of gray matter volume across fifty-nine mammalian species, indicating that there must be wiring principles to which mammalian brains conform [31]. Intense local computation is endorsed by the high density of axons and dendrites, which constitute around 60 percent of gray matter [32]. The fact that local connections are denser than distant ones may contribute to the property of small-worldness. On average, each individual neuron can integrate information from thousands of other neurons [33] [34] and send its activity back to the network. Upon this hardwired substrate, excitatory and inhibitory mechanisms work hand in hand to construct neural codes and to process and transfer neural information. The two cardinal forces keep dynamic balance at all times, whether in resting or evoked states [35]-[41], similar to the core concept of Yin-Yang theory in oriental philosophy, frequently symbolized as Taiji, see Figure 5 [42]. Yin and Yang respectively denote negative and positive drives; they not only oppose but also complement each other. The "opposing" side results in quenching and energy saving, whereas the "complementing" side leads to genesis.
It has been summarized that the wiring of neural bundles can be largely categorized into two subsystems, namely informational and modulatory, with the latter adjusting the characteristics of the former [43]. The informational pathway is organized among rapid-conducting (50 m/sec), well-myelinated, large neurons in a hierarchical or heterarchical manner, where neural excitation is mainly mediated by glutamate, and neural inhibition by two fast neurotransmitters, glycine and gamma-aminobutyric acid (GABA). In contrast, the modulatory pathway is distributed diffusely by poorly myelinated neurons with thin axons and slow conduction velocity (0.5 m/sec). Unlike the informational subsystem, which uses amino acids to gate ion channels (also voltage-gated), neurotransmitters of the modulatory subsystem act on G-protein-coupled receptors via monoamines (e.g., norepinephrine, acetylcholine, dopamine, serotonin) or peptides (e.g., substance P, endorphins). From the tissue organization perspective, neuro-inhibition is mainly executed by inhibitory interneurons.
In addition, neuronal suppression may also work through post-synaptic hyperpolarization, synaptic depression, spike-frequency adaptation, elevation of the spiking threshold, changes in membrane conductance, and negative auto-feedback (e.g., via autoreceptors). Referring to the theory of Yin-Yang, the counteraction between excitation and inhibition in the informational network is evident, but what is the "genesis" aspect of EIB? We believe that it is the rich brain dynamics emerging around the banks of the EIB boundary. The versatile brain dynamics would be dampened toward monotonic patterns in over-excitation conditions, such as epilepsy, or in over-inhibition conditions, such as deep anesthesia induced by GABA agonists.
Around 75 percent of neocortical neurons are excitatory pyramidal neurons, while inhibitory interneurons comprise the remaining one fourth. There must be organization and/or mechanisms to compensate for the relatively smaller number of interneurons to achieve a delicate balance. For example, the firing rates and synaptic strengths of inhibitory interneurons are higher than those of excitatory neurons, and the depression of inhibitory synapses due to sustained activation is less significant [44]. A neural network can thus be modeled as an array of recurrently connected excitatory neurons plus a common inhibitory neuron [45].
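The compensation described above amounts to matching aggregate drives: a smaller population can balance a larger one if its firing rate (and/or synaptic strength) is proportionally higher. A minimal arithmetic sketch, with invented numbers chosen only to make the 3:1 count ratio explicit:

```python
# Aggregate drive = number of neurons x firing rate x synaptic strength.
# All numbers below are illustrative, not measured values.
n_exc, rate_exc, w_exc = 75, 5.0, 1.0      # many excitatory cells, modest rate
n_inh, rate_inh, w_inh = 25, 15.0, 1.0     # fewer interneurons, 3x higher rate

drive_exc = n_exc * rate_exc * w_exc
drive_inh = n_inh * rate_inh * w_inh
print(drive_exc, drive_inh, drive_exc == drive_inh)   # balanced despite 3:1 counts
```

In real tissue the compensation is shared between rate, synaptic strength, and the weaker depression of inhibitory synapses, rather than carried by rate alone.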
Distant excitatory input may activate local excitatory and then inhibitory neural populations to reach EIB, and the balance is attained in both the spatial and temporal domains.
The origin of short-term depression/adaptation can be either local (e.g., synaptic depression, spike-frequency adaptation) or distant, through the thalamo-cortical pathway [46]. Recent evidence highlights that neuronal excitation and inhibition may cover the same receptive fields, such as in acoustic tonal and visual orientation tuning [41] [47]. Examination of cellular conductance revealed that the inhibitory mechanism may possess the same preferred property (e.g., orientation) and tuning width as its excitatory counterpart. Furthermore, organized in correspondence with the visual receptive fields, inhibitory interneurons may contribute to the distinguishing push-pull phenomenon of striate cortical simple cells, where excitation and inhibition can be separately elicited within discrete subregions [48] [49]. The above evidence suggests that the spatial organization of EIB is tissue-specific, which in turn may lay the foundation of pertinent physiological functions.
As to the temporal domain, the mechanism of EIB may also present tissue specificity [49]. The temporal relationship between excitatory input and balanced inhibition is precise in the auditory cortex, with the latter suppressing the former within around 4 msec, during which spikes occur. The consequence of this lag for neuronal dynamics is not trivial. Wehr and Zador performed computational simulations using an integrate-and-fire model to explore the effects of continuing versus delayed balanced inhibition [41]. EIB with no delay (on average) and with a brief delay respectively generated irregular firing and highly transient spiking, fitting the observations in visual and auditory cortices. Since closely located neurons receive similar inputs, to investigate the dynamics of EIB, Okun and Lampl recorded 47 pairs of nearby neurons in the barrel cortices of lightly anesthetized rats and used a dual recording technique to disentangle excitation and inhibition. The authors found highly synchronized excitatory and inhibitory inputs to single neurons in both spontaneous and sensory-evoked (multi-whisker deflection) conditions, with the inhibitory inputs lagging their excitatory counterparts by several milliseconds [40]. A mild delay of neural inhibition was also noticed in an earlier report on the prefrontal cortex [39], where excitatory conductance augmented with increasing neural activity, and the inhibitory conductance followed proportionally. Accordingly, EIB is spatially and temporally balanced (possibly with a millisecond-scale lag), maintained via various pathways, specifically organized for different brain regions, and reached from the very first processing unit, the neuron. The mechanism of EIB may avoid noise accumulation in the central nervous system [50], control the gain of excitation [37], and prevent unnecessary firing due to saturation.
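The effect of delayed balanced inhibition can be sketched with a toy leaky integrate-and-fire neuron, loosely in the spirit of the simulation described above (all parameters are assumptions for illustration, not those of Wehr and Zador): with zero delay the balanced inhibition cancels the excitatory step instantly and no spikes occur, whereas a brief lag opens a transient window for spiking.

```python
def lif_response(inh_delay_steps, n_steps=300, dt=0.1):
    """Leaky integrate-and-fire neuron receiving a step of excitation at
    step 50, with equal-strength (balanced) inhibition arriving after a
    delay given in time steps (dt = 0.1 ms, so 40 steps = 4 ms)."""
    tau, v_th, v_reset = 10.0, 1.0, 0.0   # time constant (ms), threshold, reset
    g = 0.5                                # drive amplitude (arbitrary units)
    v, spikes = 0.0, []
    for step in range(n_steps):
        exc = g if step >= 50 else 0.0
        inh = g if step >= 50 + inh_delay_steps else 0.0   # balanced inhibition
        v += (-v + (exc - inh) * tau) / tau * dt           # leaky integration
        if v >= v_th:
            spikes.append(step)
            v = v_reset
    return spikes

# Zero delay: inhibition cancels excitation immediately -> no spikes.
# A ~4 ms lag: a brief transient window allows highly transient spiking.
print(len(lif_response(0)), len(lif_response(40)))
```

This qualitatively reproduces the contrast between sustained cancellation and transient spiking; irregular firing under zero average delay additionally requires fluctuating inputs, which this deterministic sketch omits.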
At the large-scale network level, excitatory and inhibitory balance can be achieved through bundled hard-wiring, for example, DeLong and Wichmann's loop of the basal ganglia, mutual suppression between neocortical and limbic compartments, and the excitatory-inhibitory interaction between the ventromedial prefrontal cortex and the raphe nuclei [51] [52]. The consequence of a loss of EIB in large-scale networks can be serious. Imbalance in the cortico-limbic interaction may underlie the characteristic pathology of major depressive disorder (MDD), i.e., hypofrontality and limbic hyperactivity [52], see Figure 3.
EIB is not a static steadiness and, importantly, may be organized and structured adaptively at criticality to allow several signature dynamic features to emerge, such as oscillation and chaos, which may facilitate information processing [53] [54] [55] [56] [57]. The design of EIB guarantees that mild disruption of the equilibrium by either excitatory or inhibitory input imposes a non-trivial influence on the neuron, allowing the tracking of changes and laying the foundation for difference-based coding (Section 7). The perturbation may not be enough to fire a neuron but will substantially affect the membrane background activity and hence the timing of firing, which is one of the sources of irregularity and/or chaos in vivo [38] [56]. Inspiring numerical research by van Vreeswijk and colleagues demonstrated that networks with balanced excitation and inhibition can generate irregular, chaotic-like firing together with the rapid response to external perturbation that has been reported by quite a few neuroimaging studies [58].

Figure 3. Highlight of a network interacting pattern related to MDD: reciprocal suppression between dorsal and ventral compartments. Blue: inhibition; red: excitation. Left: short red arrows indicate various excitatory inputs to the dorsal and ventral compartments of the brain. Blue arrows show mutual suppression as an interacting mechanism between the two compartments. Right: in homeostasis, balance is reached and their sizes (an abstract representation of metabolic level or engaged state) are approximately equal, with mild perturbation allowed. When the breaking point is reached and the balance is broken, as in MDD, the size of the ventral compartment enlarges, and so does its negative influence on the dorsal compartment; conversely, the size of the dorsal compartment shrinks and its impact on the ventral limbic system is reduced. The overall consequence evolves into a state/attractor of hypofrontality and limbic hyperactivity.
Although traditional research used to adopt recurrent excitatory-excitatory interaction to explain neuronal oscillation, recent advances have confirmed the fundamental role of the inhibitory mechanism in rhythmogenesis [19].

Criticality and EIB-OA
In physics, criticality generally refers to a state of equilibrium with the potential for phase transition, like the coexistence of ice and water. In a nuclear power plant, criticality is a balanced state in which the production and loss of neutrons are equivalent, so that the fission chain reaction of uranium is maintained and under control. The above two examples hint at a research focus of great interest common to many disciplines, i.e., the equilibrium boundary between several possible states. The concept of phase transition can be re-formulated in network terms as the dynamics between total randomness and rigid order (the edge of chaos), or as dynamics with several concurrent meta-stable states, which is usually accompanied by drastic change and complicated, diverse responses to perturbation. The denotations of equilibrium and stability in this article are different, because EIB could be minimally or weakly stable, allowing various possible trajectories to travel around.

Criticality and OA
It is important to note that at criticality, oscillation may occur naturally. Ghosh et al. demonstrated in a large-scale brain network model that noise and conduction delays operating near the critical point enable the exploration of the brain's dynamic repertoire seen in resting-state fluctuations.

Self-Organized Criticality (SOC), EIB and Neuronal Avalanches
For non-linear dynamic systems, SOC describes the property that the critical point is also an attractor toward which the dynamics prefer to travel. SOC was first proposed by Bak, Tang and Wiesenfeld, and is thus also named the BTW model [63].
The authors used a pendulum array as a thought experiment to highlight how a minimally stable system would evolve. They constructed cellular automata obeying simple rules to illustrate that, although still debated, SOC is a good candidate to explain the 1/f noise frequently encountered in nature. Nowadays, it is generally agreed that SOC has a close relationship with cardinal characteristics of chaos (the relevance to neuroscience will be discussed in Section 3), such as fractals, scale-invariance, 1/f noise, and power-law distribution. The concept of SOC has been extended to many other fields, such as earthquakes, economics, epidemics, forest fires, solar physics, superconduction, ecology, sociology, and neuroscience. In contrast to traditional criticality, which demands finely tuned details, a welcome trait of SOC regarding its application to neurobiology is that the parameters of the model can be varied widely without affecting the emergence of complex behaviors. This property of SOC implies that continuously varying biological underpinnings may engender relatively consistent neuro-electric features, and hence mental life.
To explain SOC, a metaphorical "sand pile" is commonly adopted to demonstrate the critical and unstable condition: sprinkle sand grains on a growing pile, and through repeated collapses the sand-pile system may ultimately reach a certain "equilibrium state" where the next grain sprinkled onto it could cause a landslide or "avalanche". Initial empirical research on SOC accordingly inclined toward real granular matter, and an SOC system is conceived to have accumulating as well as dissipating forces that keep the system "balanced" near the critical point. Beggs and Plenz are among the pioneers who postulated the existence of SOC in neural tissue, dubbed neuronal avalanches [64]. In their classical work, organotypic slices of rat somatosensory cortex were prepared and LFP was recorded. A neuronal avalanche was defined as a spatial pattern of sharp LFP peaks preceded and followed by blank frames, i.e., no activity for at least one time bin. The sizes of such avalanches were found to follow a power-law distribution, with a branching parameter close to one. It is imperative to note that power law is closely related to self-similarity across different scales and is thus an indicator of chaos (or sometimes complexity). Criticality is frequently associated with chaotic features, whereas conversely, power law (chaos) is not sufficient to imply criticality [73]. The branching parameter likewise does not guarantee criticality; rather, it implies EIB when equal to one. We believe that the concept of SOC is insightful and could be one of the main manifestations rooted in neural equilibrium.
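The sand-pile metaphor can be simulated directly with the classic BTW cellular automaton. A minimal sketch (grid size, grain count, and seed are arbitrary choices): a cell holding four or more grains topples, sending one grain to each neighbour, and the chain of topplings triggered by one dropped grain is an avalanche.

```python
import random

def sandpile_avalanches(size=20, n_grains=5000, seed=7):
    """Drop grains at random on a grid; cells with 4+ grains topple, sending
    one grain to each of the four neighbours (grains fall off the edge).
    Returns the avalanche size (number of topplings) for each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(n_grains):
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:            # may have been queued twice
                continue
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        sizes.append(topples)
    return sizes

sizes = sandpile_avalanches()
big = [s for s in sizes if s > 0]
# After a transient, avalanche sizes span many scales (heavy-tailed),
# the hallmark of self-organized criticality.
print(max(big), round(sum(big) / len(big), 2))
```

Plotting a histogram of `big` on log-log axes would show the approximately straight line characteristic of a power-law size distribution.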

Complexity/Chaos and EIB-OA
A linear system with finite dimension is never chaotic. Chaos is a mathematical discipline that explores dynamic systems modeled with (few) deterministic (differential) equations. The chaotic trajectory evolving with time may aggregate to show interesting topology, such as an attractor; we resort to a double scroll as an illustration in Figure 4 [77]. It is evident that the spatio-temporal patterns of chaos can be correlated (even though not identical), and it is possible to transit between different wings (states) [22] [23] [75]. Conventionally, chaos is defined as "aperiodic" and "bounded" dynamics in a "deterministic" system that is "sensitive to its initial condition" (the butterfly effect), four terms in total (pp. 27-28) [78]. The four criteria of a chaotic system do describe some features of brain dynamics, but not satisfactorily. For example, brain dynamics may contain a periodic portion in the spectral domain. The criterion of a "deterministic system" may be substantiated by the neural architecture, responsive mechanics, self-balancing (negative) feedback, self-reinforcing (positive) feedback and so on, extremely complicated indeed and modifiable online via synaptic plasticity. Besides, it is generally believed that the brain possesses noise and stochastic processes (e.g., Poisson-like spike trains). There are many interesting features of chaos theory not covered by this theoretical article, including period doubling, the Cantor set, the Poincaré map, strange attractors, and the relevance to entropy. However, measurements spanning the neuronal, neural, psychological and behavioral domains do show certain features of chaos, although they do not completely obey the definition.
It is obvious that "chaos" does not imply that the brain works in a random or crazy way. The idea of an attractor can be extended to an attractor neural network, where the spontaneous dynamics may settle into one of several possible firing patterns, which may eventually destabilize and shift to another pattern (attractor) either automatically or under the effect of noise. The perturbation of an external stimulus may destabilize the network so that the dynamics leave the previous state and detour to the appropriate wing for the duration of the central effect of that stimulus. Fractal geometry is often used to capture the geometry of chaos.
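The attractor-network idea can be sketched with a minimal Hopfield-style network: a pattern stored by a Hebbian rule becomes an attractor, and a noisy cue settles back into it under repeated updates. This is a textbook toy model used only to illustrate the concept, not a claim about cortical implementation.

```python
import random

def train(patterns):
    """Hebbian weights for a Hopfield-style attractor network (+/-1 units)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, n_sweeps=5):
    """Deterministic asynchronous updates; the state settles into an attractor."""
    s = list(state)
    for _ in range(n_sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

random.seed(2)
pattern = [random.choice([-1, 1]) for _ in range(40)]
w = train([pattern])
noisy = list(pattern)
for i in random.sample(range(40), 8):      # corrupt 20% of the units
    noisy[i] = -noisy[i]
print(recall(w, noisy) == pattern)         # the noisy cue falls into the attractor
```

The basin-of-attraction behavior shown here is the network analogue of the "settle into one of several possible firing patterns" described above; storing several patterns creates several competing attractors.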
Fractal objects permeate our world, even if at first glance they may look irregular, such as landscapes, clouds, trees, rivers, lightning, the branches of tracheal tubes, blood vessels and neuronal dendrites [79]. Some fractal objects are artificial, such as the Koch snowflake, the Sierpinski triangle, the dragon curve, the Pythagoras tree and the Cantor set [80]. Self-similarity across scales is an important source of scale-invariance and power-law distribution and may contribute to long-range dependence/correlation in the spatial and temporal domains. Self-similarity and power law can be viewed as identical in many systems; the former is the description, and the latter is the mathematical form. The equivalence between self-similarity, scale-invariance and power law is tenable, but none of them promises criticality.
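Self-similarity and its fractal (non-integer) dimension can be demonstrated on the Cantor set by box counting; the estimate approaches the analytic value ln 2 / ln 3 ≈ 0.631 and, being scale-invariant, is roughly the same at every box size. A small sketch:

```python
import math

def cantor_points(depth):
    """Left endpoints of the middle-third Cantor set at a given depth."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt += [(a, a + third), (b - third, b)]   # keep outer thirds
        intervals = nxt
    return [a for a, _ in intervals]

def box_dimension(points, eps):
    """Box-counting estimate: dimension ~ log(N(eps)) / log(1/eps)."""
    # Small offset guards against float rounding at box edges.
    boxes = {int(x / eps + 1e-9) for x in points}
    return math.log(len(boxes)) / math.log(1 / eps)

pts = cantor_points(10)
for k in (4, 6, 8):
    eps = 3.0 ** -k                # box sizes matching the 1/3 construction
    print(round(box_dimension(pts, eps), 3))   # ~0.631 = ln 2 / ln 3 at each scale
```

The same box-counting procedure, applied to a smooth curve, would return a dimension of 1; the non-integer value is what marks the set as fractal.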
Complexity theory developed from chaos theory. Chaos is a mathematical fact, while the precise definition of complexity is yet to be delineated. A complex system contains many non-linearly interacting, interdependent elements. In addition, a complex system spans different levels, each with specific structures and organization rules, with the system at each scale made up of the constituents of the finer scale. A complex system usually involves the interplay between chaotic and non-chaotic components. One of the most fascinating phenomena of a complex system is its emergent collective behavior, which may be hard to predict from an understanding of its lower-level constituents. For example, with increasing scale, neuro-activities may manifest as neuronal spikes/avalanches, LFP/oscillation of neural tissue, and EEG/BOLD fluctuation in brain areas. The emergent property is relevant to the concept of self-organization, which can arise out of simple rules like differential equations and automata. Another notable difference between fractals and complex systems lies in the fact that fractal mathematics can describe the structure of static geometry, whereas a complex system must evolve with time. For living organisms, the "complex adaptive system" was proposed to account for a complex system that may alter itself to adapt to a changing environment and, conversely, may also change the environment to suit itself. In this article, the term chaos is used in its mathematical sense, and the term complexity denotes a holistic and realistic aspect of brain dynamics, which comprises the chaos/fractal component but is not limited to it.

Virtual Brain Space and Mental Space
Many charming ideas of chaos/complexity theory have been taken up in neuroscience research. Complexity theory is welcomed for many reasons. The most intriguing one, we believe, is that complexity theory is a potential explanation, or at least an admirable endeavor, to fill the gap between anatomical space and mental space; we name the bridge "virtual brain space", explained below. Mapping brain function to anatomical location, dubbed functional localization or the locationist account, has pushed forward substantial progress; however, it can be abused into a form of reductionism, for example, taking enhanced neural activity in the amygdala as equivalent to fear, or that in the striate cortex as vision. The potential fallacy of positioning a psychological function at a certain place in the brain has been challenged: keeping time is the task of a clock, but it is unconvincing to attribute that function to a particular gear [81]. Nevertheless, it seems appropriate to assume that there must be correspondence between the brain (biology) and psychology.
Given that major categories of psychological function have their respective materialistic implementation in specific brain regions, the next question is: what is the brain feature corresponding to ever-changing psychological content, in a real-time manner? A natural candidate, no doubt, is brain dynamics. Dynamics indicates something happening in the stream of time, which is a common place shared by the brain and the mind of living organisms. A theory is desperately needed to describe the brain space whose characteristics are instantly homologous to the psychological content occurring in mental space. When collapsing the temporal dimension, the correspondent structures between brain space and mental space should be conspicuous. From the perspective of complexity theory, the attractors, meta-stable states, and trajectories can be respectively projected onto specific psychological entities/states, psychological possibilities, and the routes of conscious flow; these topics will be introduced later. Complexity/chaos enables the brain to be a hermeneutic device [82]. In this article, "psychological" points to a specific capability, while "mental" is a broader term, including the states in which no specific psychological function is carried out.
Is there empirical evidence mediating the relationship between brain and mental spaces? Empirical support may come from the results of multivariate analyses in functional brain imaging research. Take a general fMRI study adopting pattern-based multivariate analysis of facial recognition as an example. Recovering the spatially distributed pattern into the space with dimension N+1 (N voxels plus time) shall disclose that the trajectory of the cluster's dynamics indeed gravitates toward and lingers around the sub-space spanned by the N voxels with higher weights (or beta values) for facial recognition. Activated clusters/blobs for distinct classes of experimental paradigms may imply differentially preferred sub-spaces, and these sub-spaces may index the "location" corresponding to the associated psychological function or state in the virtual brain space [86].
It is noteworthy that the raw neural/BOLD signals may not be the actual entity in the virtual brain space. The dynamics of the N voxels may warrant certain unknown function(s) to convert them to better match the mental world. Accordingly, "functional localization" should not be restricted to the concrete anatomical space made of neural tissue but instead should be examined in the brain space where the characteristics of the dynamics are conceptual and multi-dimensional; that is why we regard the space as "virtual". What MVPA and RSA decode is not only a pattern but, perhaps more importantly, the indicated "psychological spot" in virtual brain space. Understanding the distinction between anatomical space and virtual brain space is a premise to appreciating why a theory capable of summarizing dynamics is so desirable.
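The flavor of pattern-based decoding (MVPA) can be conveyed with a toy nearest-centroid classifier on synthetic "voxel" patterns. Everything below is simulated: the two conditions are arbitrary labels, and the voxel count, noise level, and trial numbers are invented for illustration.

```python
import random

def centroid(vectors):
    """Mean pattern across trials (one value per voxel)."""
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def nearest(x, centroids):
    """Label of the centroid with the smallest squared distance to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

rng = random.Random(3)
n_vox = 30
# Two synthetic conditions (e.g., faces vs houses): a fixed mean voxel
# pattern plus trial-by-trial Gaussian noise. Entirely simulated data.
mean_a = [rng.gauss(0, 1) for _ in range(n_vox)]
mean_b = [rng.gauss(0, 1) for _ in range(n_vox)]
def trial(mean):
    return [m + rng.gauss(0, 0.8) for m in mean]

train_a = [trial(mean_a) for _ in range(20)]
train_b = [trial(mean_b) for _ in range(20)]
cents = {"A": centroid(train_a), "B": centroid(train_b)}

test = [("A", trial(mean_a)) for _ in range(25)] + \
       [("B", trial(mean_b)) for _ in range(25)]
acc = sum(nearest(x, cents) == lab for lab, x in test) / len(test)
print(acc)   # well above the 0.5 chance level
```

Decoding accuracy above chance indicates that the distributed pattern, rather than any single voxel, carries the condition information; in the terms used here, the two conditions occupy distinguishable sub-spaces.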
At this point, it is proper to contrast our virtual brain space with reflections on the brain-mind issue from other disciplines [87]. Psychology largely stems from studying the content of consciousness. Based on recent meta-analyses, some researchers have begun to consider "psychological primitives" that are not consciously accessible but may underlie psychological functions and may have better correspondence with neural events. Here, we would like to distinguish that above the conscious level, psychology has its neural correspondence in the virtual brain space, whereas below the conscious level, neuronal/neural features are the candidates for psychological primitives. Decades of endeavor to localize brain functions in neuroscience are certainly successful in some sense but are destined to be insufficient, since the validity of mapping "below-horizon" neural features to "above-horizon" psychology is questionable [87]; see Figure 5 for explication. In the virtual brain space, psychological function or mental state may manifest as attractors, and "meaning" may be stored in the dynamic orbit.

Figure 5. The correspondence between psychological primitives and neuronal/neural features, and that between psychological function and complexity topology in virtual brain space. The symbol Taiji is situated in the middle of the figure to indicate EIB-OA. Taiji is a core concept of the oriental philosophy Taoism, which emphasizes balance between, and genesis from, positive and negative forces/potentials. Solid bi-directional arrows and curved arrows respectively represent correspondence and unknown transformation. The horizontal gray bar distinguishes between above and below consciousness.
Within a particular brain region and the correspondent sub-space in the virtual brain space, say the fusiform face area, slight deviations in the spatial distribution of neural activities may account for the capability to accommodate tremendous within-category variability, e.g., to recognize so many different faces. Similarly, different odors may display different configurations of electrical fields in the olfactory system [22] [23]. This is reminiscent of the fact that small differences in the initial condition (within-category difference) would be amplified in a chaotic system. The variation could be grasped by pattern-based multivariate analyses if the resolution of the neuroimaging tool is high enough. The explanatory power of complexity theory has been applied to learning, memory, motion detection and so on [88] [89]. Across several brain regions, and hence the confluence of several sub-spaces of virtual brain space, complexity theory provides a natural platform to assess multi-modal integration and contextual effects, and to allow a single brain region to participate in many different tasks [90]. It seems hard to explain the sustained visual awareness of the background/environment that is out of the here-and-now focus/attention. Lingering in certain constructed attractors in virtual brain space may provide a feasible account of this stable background existence in our visual world. Studying the topology of neuropsychiatric conditions in virtual brain space shall be informative since it provides a global dynamic pattern that is more relevant to (abnormal) mental phenomena. The fronto-limbic dysregulation of MDD, as shown in Figure 3, may manifest as inflated limbic and shrunken frontal attractors in virtual brain space (after proper transformation), which is hard to capture by the measured regional neural features (locationist account).
Although the transformation of brain dynamics to virtual brain space is theoretical for now, there are several constraints in building up a computational simulation that are worthy of consideration: 1) sensitivity to initial conditions

Chaos across Different Scales and Modalities
Power law is frequently used as a probe to suggest chaos or a complexity system [91]. The well-known 1/f EEG spectrum is also an instance of power-law distribution. Kitzbichler et al. applied the Hilbert transformation to resting brain signals (fMRI and MEG) and then calculated local and global synchronization indices across different scales [92]. The authors discovered that both the distributions of inter-regional phase-locking intervals and the global synchronization index conformed to power-law scaling, regardless of imaging modality.
Scale-invariance of functional connectivity was also observed in EEG, with the power-law exponents of global synchronization differing between spectra and the lower frequencies exhibiting steeper slopes [93]. The probability distributions of EEG power over a wide-range spectrum and of the dwell times of different states were skewed toward the right-hand tail, as has also been noticed in complex systems [74] [76]. Although 1/f scaling is not applicable to neuro-electric activities of spectral peaks, the amplitude modulation of alpha oscillation and the auto-correlation of alpha, mu, and beta frequencies still obeyed power law [66]. The abundance of self-similarity in the central nervous system may underlie the observed scaling laws in cognition, such as in perception, action, memory, and linguistics [94].
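To make the notion of a power-law exponent concrete, the sketch below (a toy illustration of our own, not any cited study's pipeline; all names are hypothetical) fits the slope of a synthetic 1/f² spectrum in log-log coordinates, the same operation used when comparing exponents across frequency bands:

```python
import math

def fit_power_law_exponent(freqs, powers):
    """Least-squares slope of log(power) vs log(frequency).

    For a 1/f^a spectrum the fitted slope approximates -a.
    """
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in powers]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic 1/f^2 spectrum (a "steeper slope", as reported for lower bands).
freqs = [f / 10 for f in range(10, 400)]   # 1.0 ... 39.9 Hz
powers = [1.0 / f ** 2 for f in freqs]
print(fit_power_law_exponent(freqs, powers))  # ≈ -2.0
```

Real spectra, of course, carry estimation noise and spectral peaks, so the exponent is usually fitted only over peak-free frequency ranges.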
From the aspect of neuroanatomy, the dendritic tree bears a fractal structure [95] [96]. It was reported that the hippocampal CA3 network exhibits scale-free topology in which the distribution of the output links per neuron decays as a power law [97]. Using wide-sense self-similarity as an indicator (retaining the exponential functional form across different scales), fractal geometry was revealed in segmented gray matter, with a dimension around 2.80 [98]. Accordingly, part of the emergent chaotic property of neuro-activities could originate from the underlying neuro-architecture. The complexity features of the brain thus have various origins, and so does the power-law distribution. From the above discussion, complexity/chaos may originate from interacting networks, criticality/SOC (Section 2), EIB (Section 1.2, Section 3) and even the fractal geometry of the underlying neuro-anatomical architecture.

EIB and Complexity/Chaos, and Criticality
Exquisite theoretical research on neural models has shown that networks with recurrent structure and the constraint of EIB can exhibit chaotic behavior [53]. This seems counterintuitive, but the underlying concept has been introduced in the discussion of complexity (collective vs. individual); it is akin to the fact that neural tissue may oscillate faster than the spiking frequency of an individual neuron, see Figure 2.
The brain is different from other chaotic/complexity systems because of the constraints of EIB, which may be further self-organized at criticality [1] [2] [74].
Although it is largely unknown how criticality is achieved, EIB, chaos and criticality have each obtained empirical as well as theoretical support. We surmise that EIB is wired at criticality, which enables the trajectory to be very flexible in the virtual brain space. Under the critical condition and in response to exteroceptive/interoceptive stimulation, the neuro-dynamics in virtual brain space shall engage in and depart from an attractor easily (transition between different phases), corresponding to the execution and withdrawal of various psychological representations or functions. The flexibility is endorsed by the sensitivity of a chaotic system to slight differences in the initial condition. We would like to emphasize that power law, complexity, chaos, and inferred criticality are phenomena, and EIB may be the mechanism.
It is noteworthy that the brain has an outstanding character that most complexity systems do not capture, that is, top-down modulation. A distinguishing emblem of a complexity system is the collective, emergent global pattern arising out of locally interacting components. The collective dynamics of the brain, on one hand, is constructed from its constituents but, on the other hand, may conversely modulate the behavior of the elements at lower hierarchy, even down to the most fundamental processing unit, the neuron. Well-established instantaneous top-down modulation includes LTP on spikes, up- and down-states on cortical excitability, global oscillatory patterns (awake, aroused, relaxed and sleep stages) on neuronal dynamics, attention (large-scale network) on neuronal/neural response to perception, and so on. This kind of interaction is in contrast to the inter-regional cross-talk or the phase-amplitude relationship between different spectra at roughly the same hierarchy [99] [100]. Namely, brain interaction occurs not only within but also across hierarchies. The bi-directional interaction, also substantiated by EIB, may be a distinctive hallmark of the brain as a unique category of complexity system. The availability of top-down modulation may provide dynamic context to synchronize its constituents and may underlie real-time psychological function, self, and consciousness. The intriguing bi-directional complexity could be the neural foundation for an organism to be a "unity" [101].

Noise and Complexity/Chaos
Spontaneous organized activity of OA projected to virtual brain space may represent the possible itineraries of the complexity system as discussed above, while noise of OA may shape the realization of the trajectory. For example, to hop to other possible attractors, noise may detour the route toward an unstable direction normal to the path hanging onto the extant attractor [102]. Introducing noise in the KIII model of the olfactory bulb would not induce or suppress attractors but may stabilize the aperiodic orbits, where stability was defined by the centroid and standard deviation (less than two) of the spatial distribution from sequential samplings [91]. Dendritic noise has likewise been incorporated into network models [86]. To sum up, noise may play opposite roles in engaging in and disengaging from attractors. When the noise direction points away from or toward the center of an attractor, the trajectory to the attractor is respectively destabilized or stabilized.
Although noise itself may not be enough to induce chaos, synaptic noise was suggested to be able to tune the degree of complexity of neural activities, i.e., shuttling between chaos and regularity [102] [103]. From the complexity perspective, adaptive noise (physiological) shall help the brain settle into proper attractors to fit the survival demands of an organism.
Since the properties of complexity/chaos and statistical randomness are both described in the brain, the apparent randomness may be the product of the two sources. Some researchers have argued that true randomness does not exist in the macroscopic world and that a stochastic model is a convenient low-dimensional approximation of high-dimensional chaos [103]. Incorporating stochastic randomness is only a simplified modeling strategy to handle the unexplained component in measurement, which is then called "noise". Obviously, part of the noise belongs to the recording machine and is hence non-physiological. Since each neuron has connections with thousands of other neurons, it is also suggested that the large number of signaling events and the massive interaction in the neural system may have an averaging effect.

Information Transfer at EIB
Information can be defined in various ways. Among them, information theory (or communication theory) developed by Shannon is canonical and has motivated substantial progress in many disciplines, such as thermal physics, statistics, engineering, evolution, biology, computational neuroscience, and coding and data analysis [106]. One of the key elements of information theory is entropy, a quantity of uncertainty based on the distribution of selected random variables.
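As a concrete illustration of Shannon entropy as a quantity of uncertainty over a distribution, a minimal sketch (illustrative only; the function name is ours):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Entropy in bits of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform distribution over 4 symbols carries 2 bits of uncertainty;
# a deterministic source carries none.
print(shannon_entropy(['a', 'b', 'c', 'd']))  # 2.0
print(shannon_entropy(['a', 'a', 'a', 'a']))  # -0.0, i.e., 0 bits
```

In neural applications the "samples" are typically binned spike counts or discretized LFP amplitudes, and the entropy value then depends on the chosen bin size, which is why studies report it across several binnings.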
Shannon's information theory may be of limited application in other fields [107]; nevertheless, it is still versatile nowadays. Freeman made an interesting contrast between artificial and biological networks: what is processed by the former is bits, symbols, and information, whereas what is processed by the latter is flows, patterns and meaning [108]. This contrast is insightful.

Dynamic Range of Neural Activity at EIB-SOC
Experimental work has demonstrated that at EIB-SOC the dynamic range of input processing is maximized. Shew et al. studied organotypic cultures of rat somatosensory cortex on microelectrode arrays [109]. The authors adjusted the excitatory and inhibitory balance by applying antagonists of fast glutamatergic (AP5/DNQX) and GABAergic (PTX) synaptic transmission. For both spontaneous and stimulus-evoked activities, the former and the latter agents would respectively reduce and increase the amplitude of LFPs. In either case, the slope of the power law deviated from that of the condition without drug administration. It is interesting to note that the range of stimuli to which the system responded significantly shrank with either AP5/DNQX or PTX. Their findings implied that at the original equilibrium state, the dynamic range of the cortical network was optimized. Under exposure to AP5/DNQX, but not PTX, the probability of cluster size still seemed to conform to power law (Figure 2A, p.15597) [109], which echoed the query that power law alone may not guarantee criticality. Nevertheless, it is reasonable to infer that disruption of EIB also compromised criticality, given that neural EIB is believed to be situated at SOC. When the response to perturbation revealed drastic change (criticality), the dynamic range of the network was maximized. A similar conclusion was also reached by Kinouchi and Copelli, who proposed that the sensitivity to external stimuli at criticality may underlie the amazing human sensory capability to encode external information spanning several orders of magnitude [111].
In addition to the stimulus-response profile, it is also possible to define dynamic range by the number of meta-stable states, which can be quantified as the cluster number of spatio-temporal patterns. Haldeman and Beggs found that for a large network, and across different degrees of connection per neural node, the number of meta-stable states was maximized at a branching parameter equal to one (EIB) [112]. Deco and Jirsa's network model adjusted the inter-regional coupling strength until the network began to disclose a drastic "phase transition" [2], and chose that state as criticality. The authors found that within a particular "range" of global coupling strengths, the system possessed multiple attractors and higher entropy values. The property of a "range", instead of a particular value, of the parameter is welcome since it is conceptually like SOC, where the system behavior is robust to mild parameter change. Furthermore, the simulated neural dynamics was transformed to BOLD signals via the Balloon-Windkessel model [113], and around the critical point the similarity between the functional connectivity maps generated by theoretical and empirical approaches reached its optimum.

Maximization of Information Transfer at EIB-SOC
Evidence suggests that not only the dynamic range of information processing but also the amount of information transfer is maximized at EIB-SOC. Extending previous research of pharmacological intervention (Section 5.1) on neural tissue culture, Shew et al. applied information theory to investigate the capacity of information transmission while EIB was disturbed [114]. The spontaneous activities measured by the microelectrode array showed maximal information capacity at the original equilibrium state, and the administration of either AP5/DNQX or PTX would decrease the entropy value. The optimality of entropy was present across several different bin sizes and recording durations. In addition to the resting state, the authors also studied the evoked responses of neural tissues. One electrode was selected to deliver electric shocks, and the averaged pair-wise mutual information was calculated as an index of information transmission. Here, the pair-wise mutual information can be viewed as the degree of similarity of the evoked neural responses between two recording sites. Again, optimized information transmission was reached at the original EIB.
In the classical paper on neuronal avalanches, Beggs and Plenz designed a multi-layered feed-forward network to explore the impact of the branching parameter on information transmission, i.e., the mutual information between input and output [64]. The computational simulation results implied that when the branching parameter equaled one, indicating EIB, information transmission was maximized. Reduction in the branching parameter would enhance the stability of the network but at the cost of sub-optimal information processing. Regarding the optimized information transfer at EIB, the results of computational and experimental approaches thus converge. It is noteworthy that equilibrium does not mean stability; on the contrary, equilibrium frequently indicates meta-stability.
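The branching-parameter argument can be made tangible with a toy branching process (our own illustrative sketch, not the Beggs-Plenz model itself): each active unit contacts 10 targets, each of which fires with probability σ/10, so the expected offspring per unit is σ. Sub-critical cascades die out quickly, super-critical ones run away, and σ = 1 (the EIB analogue) yields heavy-tailed avalanche sizes in between:

```python
import random

def avalanche_size(sigma, rng, max_size=2000):
    """Total number of activations in a branching process seeded by one
    active unit: each active unit contacts 10 targets, each firing with
    probability sigma/10 (expected offspring per unit = sigma)."""
    active, total = 1, 1
    while active and total < max_size:
        nxt = sum(1 for _ in range(active * 10) if rng.random() < sigma / 10)
        total += nxt
        active = nxt
    return min(total, max_size)  # cap mimics a finite network

rng = random.Random(42)
mean_size = {}
for sigma in (0.5, 1.0, 1.5):  # sub-critical, critical (EIB), super-critical
    sizes = [avalanche_size(sigma, rng) for _ in range(200)]
    mean_size[sigma] = sum(sizes) / len(sizes)
    print(sigma, mean_size[sigma])
# sigma < 1: brief cascades; sigma = 1: heavy-tailed avalanche sizes;
# sigma > 1: runaway activity capped only by network size.
```

At σ = 1 the size distribution approaches a power law with exponent near -3/2, the signature reported for cortical avalanches.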
Conceptually, stability can be quantified by the amount of energy that is required to perturb a system to leave an attractor. EIB is organized at criticality where certain equipoise between stability and instability is reached so that fidelity of information is retained but flexible trajectories/states/attractors are also allowed to engage for the forthcoming signal processing.
This paramount phenomenon reviewed in this section has been examined in various input-driven adaptive models, such as cellular automata, Boolean networks under selection pressure, and recurrent neural networks with real-time computation [119]. It is not an exception but seems quite common that meta-stability can be beneficial for a system. Bak and Chialvo constructed an adaptive network with its connections pruned based on simple rules: trafficking through strong connections and reducing connection strength in case of error [120]. The authors noticed that although the dynamics of the network were minimally stable, flexibility, adaptability and learning/unlearning were nicely balanced so that the system could handle complicated non-linear tasks, even when contaminated by noise. It has long been noticed that a complexity system may emulate the functions of perception and memory [86]. At criticality, it was demonstrated that an adaptive network may also learn logical rules, even "exclusive OR", which was difficult to model in previous research [121]. Churchland et al., who examined twenty datasets of extra- and intra-cellular recordings, disclosed that the onset of stimuli consistently quenched neural variability [122]. The reduction of variability (indicated by the Fano factor) to external stimuli was such a general property of cortex that it prevailed over different modalities (membrane potentials or spikes), stimulus categories, brain regions and states (awake, behaving, or anesthetized). Biyu He, who investigated the interaction between OA and external stimuli (target detection) using fMRI, also found that the volume of activity space in the post-stimulus condition shrank compared with that in the pre-stimulus condition [123] (see also Churchland [55]). It is noticeable that fixation on one attractor is a simplified scenario to account for the phenomenon of reduced variance to external stimuli.
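The Fano factor mentioned above is simply the variance-to-mean ratio of trial-wise spike counts. The hypothetical sketch below (our own toy scenario, not the cited datasets) shows how stimulus onset can "quench" variability: pre-stimulus counts are drawn with a wandering rate (Fano > 1), post-stimulus counts with a fixed rate (Fano near 1, the Poisson value):

```python
import random

def fano_factor(counts):
    """Variance-to-mean ratio of trial-wise spike counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

def poisson(lam, rng):
    """Poisson-distributed count via Knuth's multiplication algorithm."""
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
# Pre-stimulus: the firing rate itself wanders from trial to trial
# (doubly stochastic), inflating variance beyond the Poisson value.
pre = [poisson(rng.uniform(5, 15), rng) for _ in range(5000)]
# Post-stimulus: the stimulus "quenches" the rate fluctuation.
post = [poisson(10, rng) for _ in range(5000)]
print(fano_factor(pre), fano_factor(post))  # pre well above 1, post near 1
```

The mean count is identical in both conditions; only the trial-to-trial variability drops, which is exactly what a mean-based analysis would miss.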

Reduction of Variability and Enhancement of Fidelity at EIB
External input itself may suppress chaos in a recurrent network and hence lead to variance reduction [131]. Another theoretical work based on the integrate-and-fire model noticed that EIB with a mild delay in suppression may also decrease the temporal variability of neural output to external stimuli [41] [134]. Actually, baseline OA may also contain and modulate EA (see Section 6.3); in other words, their interaction is bi-directional.
According to our definition, the interactive effect between OA and EA belongs to the fourth component of OA. The above cited research mainly focuses on the neural dynamics within restricted temporal range. With development from childhood to adulthood, increased baseline variability (in contrast to reduced variability to external stimuli) is associated with maturation of brain and better behavioral performance, which will be discussed in more detail later (Section 7).

Function and Characteristics of OA
We decompose OA into spontaneous organized neural activity (e.g., oscillation and chaos in EEG/MEG and in fMRI; neuronal spikes and subthreshold membrane dynamics at the microscopic scale), physiologically beneficial randomness (physiological noise), non-physiological noise, and the interactive effects from exteroceptive and interoceptive stimuli. This section addresses the physiological function of OA, particularly physiological noise and spontaneous organized neural activity. It will become clear that there is no clear-cut boundary between regularity and complexity, noise and signal, or EA and OA.

Stochastic Resonance and Physiological Randomness
We distinguish biologically relevant noise from irrelevant noise. It seems counterintuitive to assume that noise could be beneficial in terms of computation. In a non-linear threshold system, however, stochastic resonance (SR) happens (in its most primitive form) when a weak/subthreshold periodic signal that is normally undetectable can be boosted and detected by adding broadband noise to that signal; the signal then usually appears as a spectral peak at the signal's frequency against the noise-floor background, hence the term "resonance". The phenomenon of SR was first discovered in a study of climatic oscillation and then spread to many other disciplines, such as physics, chemistry, engineering, electronics, lasers, ecology, psychophysics, cell biology and neuronal physiology [135] [136] [137]. With theoretical and experimental extension, SR has been applied to describe any phenomenon in a non-linear system where the presence of noise is better for output signal quality than its absence, i.e., noise benefit [137]; see Figure 6.
In neurons and the brain, the phenomenon of SR has gained support from two research lines. First, together with subthreshold stimuli, externally added noise may enhance sensory information processing and perception [136]. In these studies, though, the administered random fluctuation is not naturally occurring in the central nervous system but is part of the external input. The second branch of evidence comes from biomedical research, such as the exploitation of SR in cochlear implants to improve hearing [138]. The biological utility of SR, i.e., that neurons make use of internally generated physiological noise to enhance information transfer, is thus only indirectly inferred. Nevertheless, we believe that the brain has evolved to utilize some random noise for proper functioning. As to the source of noise in the nervous system, Faisal et al. provide an excellent review from molecular to macroscopic and from sensory to motor levels, which further links neural noise to behavioral variability [50].

Figure 6. Illustration of stochastic resonance. Plot (a) shows that input of a weak signal into a system does not generate observable spectrum change. The right subplot is the spectral distribution, with arbitrary units of logarithmic power (ordinate) and frequency (abscissa). Plot (b) shows that input of the signal plus noise makes the weak signal detectable, as a spectral peak.
Previous models of SR derived from other disciplines may lack biological appropriateness. The manifestation, property, and function of noise in neural system are different from traditional SR in physics and engineering fields. To reconcile theoretical and experimental neuroscience, McDonnell advocates using the term "stochastic facilitation" to replace SR in biological research [29]. It is interesting to note that noise induced enhancement of signal processing depends upon the fact that the parameters of the non-linear model are "sub-optimal" [139]. Engineers usually improve the performance of a model by updating the parameters rather than adding white noise. A natural question ensuing is why the evolutionary force does not shape the brain to adopt the best neural parameters as engineers do? A likely answer is that universal optimization is never achievable since the challenge the organisms need to face every day is extremely diverse and varied, which may keep on changing with time and life cycle. Fixed and "best" model parameters in some conditions may endanger the organisms in other situations. The advantage of SR in the brain may thus represent a compromised strategy between adaptation and flexibility, not an optimal model for every scenario but still the best strategy of survival. Although contradictory to traditional engineering dogma, it is increasingly acknowledged that noise and meta-stability may benefit an adaptive system.
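The most primitive form of SR described in this section can be reproduced in a few lines (a toy threshold detector of our own; all parameters are arbitrary): a subthreshold sinusoid never crosses threshold on its own, but with added Gaussian noise the crossings cluster at the signal's peaks, so the periodicity becomes detectable:

```python
import math
import random

def threshold_detections(amplitude, noise_sd, threshold=1.0, n=10_000, seed=1):
    """Count threshold crossings of a subthreshold sinusoid plus noise,
    separately near the peaks and near the troughs of the signal."""
    rng = random.Random(seed)
    at_peak = at_trough = 0
    for i in range(n):
        phase = 2 * math.pi * i / 100
        x = amplitude * math.sin(phase) + rng.gauss(0, noise_sd)
        if x > threshold:
            if math.sin(phase) > 0.5:
                at_peak += 1
            elif math.sin(phase) < -0.5:
                at_trough += 1
    return at_peak, at_trough

# Subthreshold signal (amplitude 0.5 < threshold 1.0): silent without noise.
print(threshold_detections(0.5, 0.0))  # (0, 0)
peak, trough = threshold_detections(0.5, 0.4)
print(peak, trough)  # crossings cluster at signal peaks: periodicity recovered
```

Too much noise would eventually bury the signal again, which is why SR curves show an optimal intermediate noise intensity.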

Synaptic Noise May Facilitate Signal Transfer
Empirical evidence has pointed out that background synaptic activity may shape the probability and variability of responses to stimuli [140] [141] [142]. For cortical neurons, the background "noise", i.e., the fluctuation in membrane conductance and potential, has been simulated by injecting fluctuating current into the neuron soma in rat brain slices to mimic background synaptic activity [141] [149] [150]. The authors of these studies discovered that the power of neurons to detect a transient current pulse was greatly enhanced in the presence of spontaneous background activity. A similar conclusion was also reported by an independent research group: the capability to detect small stimuli was dramatically increased with spontaneous fluctuation of membrane potential and conductance [140]. Shu et al. differentiated "up" and "down" states of neurons by conductance, spiking rate, and the degree of depolarization [140]. The "up" state can be simulated by certain depolarization plus added Gaussian noise, and the noise may increase the spike response to an applied small current, similar to SR. Faure et al. demonstrated that what had been called "synaptic noise" indeed contained periodic components, which reflected the behavior of pre-synaptic interneurons and enhanced the transmission efficacy of oscillatory temporal patterns [151].
Again, taking neuronal spikes in vivo as stochastic or random does not mean that neuronal firing is governed by no rule. It is a convenient way to accommodate the complicated structure of pre-synaptic bombardment. Although it is debatable whether the mechanism of improving signal processing is different from the SR used in the engineering field, these exquisite studies have nevertheless provided the important insight that synaptic background activity (the second type of OA) facilitates signal transmission.

Exploration of Dynamic Repertoire
From the perspective of fractal geometry, OA may empower a system to transit between attractors, as already introduced in Section 4.4. This sub-section highlights some far-reaching empirical evidence to disclose the relevance of OA to EA.
It is well acknowledged that the spontaneous organized activity of OA is not purely stochastic randomness but contains repeated and coherent spatio-temporal patterns. These reverberating motifs have been verified in vitro and in vivo by different research groups, may correspond to information storage or implicit retrieval, and may reflect the neuronal/neural codes [5] [6] [7]. The robust and stereotyped reactivation may occur with a precision of milliseconds [5].
At the large-scale level, based on the functional connectivity maps derived from resting fMRI or PET, there are modular structures compatible with major psychological functions [9] [11]. Since the resting dynamics is organized according to psycho-physiological domains, it is intuitive to assume that the intrinsic/spontaneous brain activities are not idling (or just at rest) but may be relevant to normal brain functioning. Tsodyks et al. combined optical imaging and single-unit recording to establish the relationship between population activities and single-neuron spikes to visual stimuli, and then to explore the relationship between evoked and spontaneous neural activities [26]. A related developmental study compared evoked and spontaneous activities in visual cortex across ages [28]. The authors disclosed that with increasing age the Kullback-Leibler divergence between activities evoked by natural scenes and spontaneous activities (exposed to nothing) drastically decreased, and the similarity of their frequency distributions increased. Their results indicated that spontaneous cortical activities may come from an internal model optimized to represent the environment, which improves with maturation. The above two elegant studies provide strong evidence that the neural features of evoked responses are actually embedded in the spontaneous/intrinsic brain activities (the first class of OA). Luczak et al. explored the firing patterns of neuron populations (tens of neurons) in rat auditory cortex over different conditions, i.e., tones, natural sounds, and the resting condition [27]. They discovered that the contours of the various stimulus categories (different frequencies of tones and different natural sounds) were actually subsets within the contour of spontaneous neural events. In other words, auditory-evoked responses lay within the realm outlined by spontaneous activities.
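The Kullback-Leibler divergence used in the developmental comparison above can be illustrated with toy distributions (our own made-up numbers, for intuition only): as the "spontaneous" distribution comes to mirror the "evoked" one, the divergence falls toward zero:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) in bits for discrete distributions over the same bins.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy "evoked" vs "spontaneous" activity-pattern distributions over 4 bins.
evoked      = [0.40, 0.30, 0.20, 0.10]
spont_young = [0.25, 0.25, 0.25, 0.25]  # flat prior: poor internal model
spont_adult = [0.38, 0.31, 0.21, 0.10]  # closely matches evoked statistics
print(kl_divergence(evoked, spont_young))  # larger divergence
print(kl_divergence(evoked, spont_adult))  # near zero
```

D_KL is zero only when the two distributions coincide, so a shrinking divergence with maturation is precisely the signature of spontaneous activity converging on the statistics of evoked responses.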
Replay of EA in OA during sleep is supposed to reflect the memory consolidation process, which is not only regional but also shows inter-areal coherence, such as between visual cortex and hippocampus [152]. To sum up, spontaneous brain dynamics comprise neural activations of specific evoked events, i.e., resting is similar to activation, and the past reappears in the present. The intimate relationship between resting and activation states is thus established across different observation levels, from the neuronal [26], to the neural population [27] [28], and to the large-scale network [9] [10] [11], and across different states (from awake to sleep) [152]. The highly preserved patterns may result from the re-emergence of neuronal/neural codes that may be enabled by structured and intricate EIB (discussed in Section 7).
Given the abundant information carried by OA, it is reasonable to assume that its characteristics may have strong psychological, biological and clinical implications. That is exactly the case, and a few examples are listed below. It was found that baseline activities may play a substantial role in fluctuating conscious experience. Higher baseline activities in the medial thalamus and fronto-parietal region and lower analogs in the default mode network may facilitate the awareness of somatosensory stimulation [153]. The baseline undulation of the anterior cingulate area may also predict the perceived degree of pain. Similarly, evidence suggested that enhanced pre-stimulus intrinsic activities in the fusiform face area may bias the perception of Rubin's vase-face picture toward face [154]. The spectral power and functional connectivity of spontaneous cortico-electrical dynamics may explain several fundamental mental capabilities, such as intelligence and target detection [155] [156]. Conversely, perceptual learning may modify the covariance structure of resting dynamics [157]. The impact of intrinsic-state research on the psychology field is brewing, and their bi-directional relationship is increasingly appreciated [101] [158].

Variability-Oriented Approach
The research team at the Rotman Research Institute of Baycrest, Toronto has produced a series of influential works on the issue of brain signal variability. The material cited in this section mainly honors their contribution.

Variability of Central Nervous System
Both variance and entropy are frequently used to represent the degree of uncertainty and variability. The optimum of entropy does not guarantee the optimum of variance. Nevertheless, for a Gaussian distribution there is a monotonic relationship between variance and entropy, and thus they can be regarded as equivalent. Given the contribution of EIB-OA to information transfer as highlighted above (e.g., variability reduction in Section 5.3), it is reasonable to infer that the variability-oriented approach is informative in neuroscience, which, however, has long been under-appreciated. A major proportion of previous neuroimaging studies have resorted to mean-based measures (e.g., GLM) instead of variance- (or standard-deviation-) based counterparts. The neurobiological rationale behind the variability-oriented approach has been described in Section 5.3.
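The claimed equivalence holds because the differential entropy of a Gaussian depends on its variance alone, H = ½ ln(2πeσ²), so entropy is a strictly increasing function of σ and the two indices rank signals identically. A two-line check (illustrative only):

```python
import math

def gaussian_entropy(sd):
    """Differential entropy (nats) of a Gaussian: 0.5 * ln(2*pi*e*sd^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sd ** 2)

# Entropy grows monotonically with the standard deviation, so for Gaussian
# signals a variance-based index and an entropy-based index agree in ordering.
for sd in (0.5, 1.0, 2.0, 4.0):
    print(sd, round(gaussian_entropy(sd), 3))
```

For non-Gaussian signals the two measures can dissociate, which is one motivation for reporting both in variability studies.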
With massive interaction as a core feature, multivariate brain informatics is not completely independent across locations, a property also termed "spatial correlation". To meet stringent multiple-comparison correction, GLM is often combined with other mathematical tools such as random field theory [159] or Monte Carlo simulation [160] [163]. Garrett and colleagues compared the results of mean-based and standard-deviation-based analyses and disclosed that the variability-oriented approach not only showed different (partially overlapping) spatial patterns but may also provide higher predictive power than its mean-based analogue [162]. In detail, younger, faster, and more consistent performers exhibited higher brain variability across cognitive tasks of perceptual matching, attention cueing, and delayed match-to-sample [163], indicating that brain variability is functional (not merely useless noise) and that increased variability in the central nervous system may underlie neural efficiency and may further reduce behavioral variability, concordant with its connection to entropy and echoing the psycho-physiological function of OA.

Variability in Behavior and Its Relevance to the Brain
From the perspective of development, behavioral performance improves in terms of accuracy, speed, and reduced trial-to-trial variability. McIntosh et al. found that variability in the EEG signal increased with maturation [164], similar to the finding of the fMRI study by Garrett et al. [162]. In addition, brain variability was negatively correlated with intra-subject variability in reaction time and positively correlated with accuracy. Further research replicated that variability not only increases with maturation but that its association with task is region specific [165]. During development, enhanced neural variability may indicate a broader repertoire of mental attractors or microstates. By contrast, increased intra-individual behavioral variability and progressive impairment in performance accompany aging; a possible cause could be decreased connectivity and, hence, reduced neural variability.
Accordingly, the relationship between age and behavioral variability across lifespan can be characterized by a U-shaped function, while that between age and brain variability could be an inverted-U curve [166] [167]. Increased variability in performance has been observed in various mental disorders, such as dementia, traumatic brain injury, and attention deficit hyperactivity disorder [166]. A recent report supported that greater variability in the elderly brain is associated with better memory and fluid intelligence [168].
To examine the origin of the increased complexity of brain dynamics in development, Vakorin et al. used conditional entropy and mutual information to represent local and distributed variability, respectively [169]. The authors found that developmental change was accompanied by reduced local information processing and enhanced global information transfer, implying that inter-regional interaction and distributed networks may underlie the observed change in brain variability during maturation. Two caveats are worth mentioning here. First, not every aspect of neural variability implies information processing.
For example, when considering latency variability in ERPs and reaction time, the inverse relationship between brain signal variability and behavioral variability no longer exists. Second, in conditions of increased neural noise such as neuropsychiatric disorders (at least in some regions) [166], the components of brain signal variability are not necessarily "physiological", and the variability-oriented approach might lead to equivocal conclusions: the patients may show lower physiological but higher non-physiological variability (the 2nd and 3rd kinds of OA), whereas healthy controls may exhibit the opposite trend.
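The distinction between local variability (entropy of a single regional signal) and distributed variability (shared information between regions) can be illustrated with simple histogram-based estimators. The sketch below is a minimal illustration, not the method of Vakorin et al.; the bin count, signal lengths, and the shared-driver construction are assumptions for demonstration only.

```python
import numpy as np

def entropy(x, bins=16):
    """Histogram estimate of Shannon entropy (bits) of a 1-D signal."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (bits) between two signals,
    via I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    return hx + hy - hxy

rng = np.random.default_rng(0)
shared = rng.normal(size=5000)                 # common driver (distributed component)
region_a = shared + 0.5 * rng.normal(size=5000)
region_b = shared + 0.5 * rng.normal(size=5000)

print(entropy(region_a))                       # local variability of one region
print(mutual_information(region_a, region_b))  # distributed coupling between regions
```

Because the two simulated regions share a common driver, their mutual information is well above the near-zero value obtained for independent signals, while each region retains its own local entropy.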

Sparse Coding and Difference-Based Coding and EIB
Although criticality and complexity are the core concepts of this account, the coding strategies discussed below, namely sparse coding and difference-based coding, are likewise rooted in EIB.

Sparse Coding
How does the brain read (decode) the encoded neural codes to generate percept?
It seems reasonable to assume that the neural codes must carry the same amount of information as the given percept. In accord, some research applied non-linear filters to neuronal spikes to reconstruct waveforms that resemble the physical features of external stimuli [170]. Another coding possibility is a Bayesian probabilistic model (i.e., posterior probability), which may require fewer "bits" of input to make an inference and is thus more efficient, since the perceptual system has been tuned to naturalistic stimuli through evolution and development (i.e., the prior) [28] [171]. This explains why human cognition may conjecture the whole from the part and may make decisions under sub-optimal conditions or uncertainty [172]. However, recent evidence has suggested sparse coding. On one hand, given the limited metabolic budget, the permissible level of traffic in the brain is quite low, not allowing massive amounts of fully loaded information flow to travel [173] [174]. On the other hand, single-unit recording reveals that the structure of neuronal transients can be much more frugal than previously thought [175] [176]. In auditory cortex, neurons may even show binary responsiveness regardless of the duration and frequency of tone pips [177]. Here, "binary" means either one or zero spikes to each trial of auditory input (binary coding). Binary coding is the most basic form of sparse coding, equipped with low variability and high fidelity.
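The degree of sparseness can be quantified. The sketch below uses the Treves-Rolls sparseness measure, a standard index in the sparse-coding literature (not a measure from the studies cited above); the two response profiles are invented for illustration.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness: values near 1 indicate a dense, uniform
    response profile; values near 0 indicate a highly sparse one."""
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)

# A near-binary neuron: at most one spike per trial, as described for
# auditory cortex responses to tone pips (hypothetical trial counts)
binary_like = np.array([0, 1, 0, 0, 1, 0, 0, 0, 0, 1], dtype=float)
# A dense-rate neuron responding on every trial
dense = np.array([4, 5, 6, 5, 4, 6, 5, 5, 4, 6], dtype=float)

print(treves_rolls_sparseness(binary_like))  # low value -> sparse code
print(treves_rolls_sparseness(dense))        # close to 1 -> dense code
```

The binary-like profile yields a sparseness of 0.3 versus roughly 0.98 for the dense profile, capturing numerically the frugality of binary coding.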
Strong evidence of sparse coding comes from vision research, even in the early stage of visual system [178]. It seems counter-intuitive that sparse codes may register the time varying, vast amount information flood about the environment.
However, this coding strategy has gained support from theoretical, computational and experimental perspectives in different sensory modalities, the motor system, and higher cognitive functions (e.g., associative memory and hippocampus) [179] [180]. Sparse coding seems to be a general coding strategy in the brain, with the degree of sparseness increasing from lower to higher processing areas. It is interesting to note that shorter coding sequences may unite to form longer repetitive motifs (cortical songs), whereas the subsets of a particular spatio-temporal pattern may organize themselves to form other patterns [5]. The re-combination of spiking sequences may have to do with the coding of binding, interaction, or context effects, and importantly, may greatly enrich the manifestation of neuronal/neural codes, see Figure 7.
For sparse coding, the pros are obviously its efficiency, and the cons are its limitation in representing the details of the internal and external world. Is sparse code inadequate to correspond to the varied and complicated reality? The answer could be no. The copious, profuse, and affluent resting/intrinsic/spontaneous brain dynamics is proposed to be the neural baseline with which neural codes interact.
Together with the complexity theory of brain, especially the attractors embedded in resting dynamics [26] [27] [28], sparse coding may just initiate the neural trajectory to fix on one attractor to reach a psychological certainty (may also shape the original complexity structure to some extent) [131], not necessarily having to carry every detail given. We name it "code driving complexity hypothesis" [181].
Since chaotic dynamics is sensitive to initial conditions, a neuronal/neural code differs from a random perturbation in that it may guide the trajectory toward a pertinent attractor. In this regard, a neural code can be viewed as a facilitator, messenger, or stabilizer, which is supported by theoretical work on interconnected oscillators in which an external cue may stabilize and create an "attentive" state [88]. The above scenario is endorsed indirectly by the phase-reset model of ERPs, and by event-related synchronization and desynchronization phenomena in the cortex [134]. Sections 5 to 7 touch upon the intimate relationship between chaos/stochasticity and regularity (information, entropy). Again, the underlying mechanism is EIB-OA, which serves as a common ground for the versatile dynamics to occur. Unlike Bayesian brain theory, optimality is not coerced in sparse coding and complexity theory, which may conceptually accommodate wider bio-psycho-social variables and situations, such as making errors, the framing effect, behaving according to a sub-optimal choice, etc. [183] [184]. A previous simulation study of a chaotic system revealed that once the learning rule has been established, incomplete external stimuli can still be recognized [89]; the conclusion was drawn by sending incomplete input to the network and obtaining output similar to the case of complete external stimuli. EIB-OA thus also provides a potential alternative account for trace-elicited memory (a case of incomplete input).
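Recognition from incomplete input can be illustrated with a toy attractor network. The sketch below is a minimal Hopfield-style model, not the network of [89]; the stored patterns, network size, and synchronous update rule are illustrative assumptions.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product weights for a Hopfield-style attractor network."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)        # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Deterministic synchronous updates; the state falls into a stored attractor."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1, -1)
    return s

# Two orthogonal +-1 patterns stored as attractors
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                   [1,  1, 1,  1, -1, -1, -1, -1]])
w = train_hopfield(stored.astype(float))

cue = stored[0].astype(float)
cue[:3] = 0.0                       # incomplete input: first three units unknown
print(recall(w, cue))               # converges to the complete stored pattern
```

Even with part of the cue blanked out, the network dynamics settle onto the full stored pattern, mirroring the reported result that incomplete stimuli yield output similar to complete ones.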

Difference-Based Coding
The evidence for difference-based coding as a general coding principle stems from reward research. Midbrain dopamine neurons encode the difference between anticipation and reward, which is further modulated by their temporal interval [185]. Difference-based coding applies not only in the temporal but also in the spatial domain [186]. The neuronal membrane has been modeled as an "integral operator" that may integrate various pre-synaptic bombardments and local biochemical events [1] [3] [130]. It seems that there also exists a "differential operator" in a neuron, which registers the differences between "now" and "expected", between "here" and "neighbors", and then converts the differences into neuronal/neural signals.
What is the underlying EIB mechanism of the differential operator? Previous research on the temporal precision between excitatory and inhibitory inputs provides a vital clue [40] [41]. The timely quenching of excitation by inhibition within several milliseconds behaves just like a differential operator. Inhibitory interneurons are local, while excitatory inputs can be local or distant.
When the inhibitory inputs carry information about the local state, such as anticipation, and the distant excitatory inputs register external perturbation, such as an actual reward, the consequent neuronal spike would issue the difference between them, which is exactly what difference-based coding refers to and what the empirical data suggest.
Similarly, when the inhibitory inputs carry movement information of nearby neurons (local) and the distant excitatory input denotes the motoric goal, the resultant neuronal activity would guide the neuron to be coherent with the global aim, again via the differential operator [186]. It is imperative to note that the purpose of wired EIB is not limited to balancing excitation and inhibition or attenuating excitation by inhibition. Through the dynamic balancing of excitatory and inhibitory inputs across spatio-temporal domains, information is processed and difference-based coding is fulfilled.
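The differential operator described above can be sketched as a toy neuron whose output is the rectified gap between distant excitation (the delivered reward) and local inhibition (the learned anticipation). This is a minimal caricature of reward-prediction-error coding, not a biophysical model; the trial values and the threshold parameter are assumptions.

```python
import numpy as np

def difference_neuron(excitatory, inhibitory, threshold=0.0):
    """Toy 'differential operator': excitation carries the distant signal
    (e.g., actual reward), inhibition carries the local state (e.g.,
    anticipation); the output is the rectified difference."""
    drive = np.asarray(excitatory) - np.asarray(inhibitory)
    return np.maximum(drive - threshold, 0.0)

reward      = np.array([1.0, 1.0, 1.0, 0.0])  # delivered reward per trial
anticipated = np.array([0.0, 0.5, 1.0, 1.0])  # learned expectation per trial

print(difference_neuron(reward, anticipated))
# unexpected reward -> strong response; fully predicted reward -> silence
```

Note that the rectification discards the negative branch (reward omission), which real dopamine neurons signal as a dip below baseline; a signed version would capture that as well.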
It is interesting that the temporal scales of EIB and the neuronal avalanche are both situated at the level of several milliseconds; a lag of about 4 msec has been proposed [41]. The coincidence is not fortuitous, because the temporal lag between excitation and inhibition is expected to be the "life span" of the pattern in a neuronal avalanche. It has been suggested that 25 - 50 msec is a unit of psychological time [86] [187], whereas molecular events, neuronal codes and patterns in neuronal avalanches seem too brief to match psychological functioning, suggesting that the collective dynamics in virtual brain space (attractor or itinerary) is a better candidate to correspond to psychology in mental space [188]. Figure 8 summarizes the contents of EIB-OA in previous sections.

Luxurious Energy Expenditure? Coding, Chaos, Efficiency, and Economy
The human brain weighs around 2.50 to 3.25 pounds (about 2 percent of body weight), yet it consumes 20 percent of the body's oxygen and 25 percent of its glucose utilization in adults [189]. The brain maintains a high metabolic level across varying mental activities [190]; metabolism is relatively constant in resting wakefulness and is reduced by only 15 percent in sleep [191] [192] [193]. Why does the human brain demand so much energy even at rest?
In Section 6.3, it was pointed out that the resting state may actually contain a replica of EA. Further, the brain has evolved to present OA all the time. It is imperative to note that OA is not a privilege of vertebrates; it also exists in the brains of arthropods, such as insects [194]. Aplysia ganglia may generate stable 10 Hz activity. OA in the central nervous system is a fundamental phenomenon of life.
Keeping the active structure handy in virtual brain space is crucial for consciousness, self-function, adaptation, instant reaction, coping with challenges, and for learning through plasticity to consolidate. A substantial portion of brain energy spent in OA fuels the itinerary and repertoire in the virtual brain space.
Figure 8. A summary of EIB-OA and its characteristics.

With its resonance with the organism's internal state and inclination [126] [128] [195] [196], OA (directed by coding) may help the organism engage in the preferred or advantageous system route and, hence, response profile. On the contrary, without OA the brain would become monotonous, and adaptation and flexibility would be limited.
Efficiency and economy are guiding principles of physiology. It is estimated that the human brain has roughly 10^12 neurons and 10^15 synaptic connections between them, engages 10^24 elementary molecules/second in brain activity, and receives 10^8 - 10^12 bits/second of information from the environment [197]. Although the brain expends disproportionate energy relative to its weight, it has organized itself to be economic. Only around 15 - 20 watts are consumed by the human brain, whereas a modern chip of comparable intricacy could require 10 million watts [45]. In other words, "luxurious" is superficial; the brain has already evolved to be stringent on energy expenditure while still remaining efficient. Put another way, the brain must be efficient to be economic without sacrificing reliability, which is substantiated by several mechanisms, such as the thresholding machinery in neurons, the quenching of excitation within milliseconds, and (sparse) code-driving complexity. Based on the organized EIB-OA, the brain may simultaneously take care of contradictory facets, e.g., efficiency and economy, using a limited budget in real time. The mechanisms by which the brain saves energy are depicted in Figure 9.

Figure 9. Upper right: brain dynamics is always active. Illustration of 3 mechanisms that the brain uses to save energy. (a) The neuron is designed as a thresholding machinery; (b) EIB allows the neurons to fire intensely in a brief time span (before the excitation is quenched by inhibition); upper: the firing of neurons; lower: red and blue respectively indicate excitatory and inhibitory inputs; (c) The design of (sparse) code-driving complexity may save energy expenditure; blue lines at the top and bottom imply the amount of energy consumption, which is boosted at the source and target nodes but remains frugal during information travel between the nodes.

Conclusion
Balance between excitatory and inhibitory forces is an elementary building block of many, if not all, physiological phenomena, such as hormonal and autonomic regulation. In the brain, EIB is constructed at different levels of information processing, from the microscopic to the large scale. Under the constraint of EIB at (self-organized) criticality, the OA of neuronal/neural tissue may engender the phenomena of complexity, power-law distribution, meta-stable equilibrium, multi-stable states, maximized dynamic range, optimized information transfer, difference-based and sparse coding, neuronal avalanches, both efficiency and economy, and reproducibility of evoked spatio-temporal motifs/patterns. As a complex system, the brain is unique in its intensive interacting features: regional, inter-areal, cross-spectrum, cross-hierarchy and, particularly, its top-down modulation. It is noteworthy that the chaotic component of OA may contain prominent structure, while the regular component of OA may possess some characteristics of chaos. The distinction between regularity and chaos may sometimes be arbitrary, and the two can be bridged by several mechanisms, such as STDP and the "code driving complexity" mechanism. It is desirable for empirical neuroscience to examine the excitatory and inhibitory designs at different scales and regions to better understand EIB.