Investigating Neural Representation of Finger-Movement Directions Using Electroencephalography Independent Components

Abstract

There are few EEG studies on finger-movement directions because ocular artifacts also convey directional information, which makes it difficult to separate the contribution of brain EEG from that of the ocular artifacts. To overcome this issue, we designed an experiment in which the temporal dynamics and spatial information of EEG are evaluated together to improve the performance of brain-computer interfaces (BCIs) for classifying finger-movement directions. Six volunteers participated in the study. We examined their EEG using decoding analyses. Independent components (ICs) that represented brain-source signals classified the directions of the finger movements at rates higher than chance level. The weight analyses of the classifiers revealed that maximal classification performance occurred at latencies prior to the onset of finger movements. The weight analyses also revealed the relevant cortical areas, including the right lingual gyrus, left posterior cingulate gyrus, left inferior temporal gyrus, and right precuneus, indicating the involvement of visuospatial processing. We concluded that combining the spatial distribution and temporal dynamics of scalp EEG may improve BCI performance.

Share and Cite:

Tellache, M. , Kambara, H. , Koike, Y. , Miyakoshi, M. and Yoshimura, N. (2021) Investigating Neural Representation of Finger-Movement Directions Using Electroencephalography Independent Components. Journal of Biomedical Science and Engineering, 14, 240-265. doi: 10.4236/jbise.2021.146021.

1. INTRODUCTION

Our hands are involved in essential aspects of our daily lives, and loss of their function has a disastrous effect on the individual. Prosthetic limbs aim to restore the normal function of the missing body part. Brain-machine interfaces (BMI) can be used to better control prostheses, especially when electromyography (EMG) signals are insufficient [1]. The best methods of control so far involve surgically implanting electrodes to receive neuronal commands, but they come with the inherent risk associated with surgery, uncertainty about long-term stability, and the need for highly trained personnel [2]. Recent invasive electrocorticography (ECoG) studies, whose participants were epilepsy patients, have focused mainly on the sensorimotor cortex to decode different movements. In [3], three hand gestures and finger tapping movements were classified from electrodes over the somatosensory areas with a classification accuracy of 96.5% using high-density sensorimotor coverage. Another study [4] aimed to classify four different articulators and four different tongue-movement directions using a small area in the sensorimotor cortex, achieving 92% and 85% classification accuracy, respectively, with only a 1 cm² patch of the sensorimotor cortex. In [5], the authors focused on rapid gesture decoding and achieved 80% accuracy in classifying three hand gestures within 0.5 s of detecting the movement; the most important electrodes were near the central sulcus. An interesting finding was reported in [6], where contralateral representations of finger movements in the sensorimotor cortex were driven by both active movement and sensory stimulation, whereas ipsilateral representations were mainly engaged during active movement.

Non-invasive methods for controlling prostheses are becoming more popular. In particular, electroencephalography (EEG) has been utilized to evaluate whether motor commands can be reliably decoded. EEG records brain activity at a high temporal resolution, which can be exploited to extract movement-related information. EEG sensors are wearable and hence suitable for rehabilitation and for building flexible BMI [7].

On the other hand, the use of EEG to classify movement directions for controlling prostheses is often controversial due to the contamination of EEG with motion artifacts, especially eye movements [8]. The primary concern is whether the contaminated signals carry information representing visual processing or merely eye movements. Electrophysiological findings suggest that visuospatial areas could be useful for extracting information about movement direction, since neurons in the visual cortex exhibit preferences for stimulus properties such as orientation [9] and direction (directional tuning). Previous studies have shown that many brain regions are involved in motor planning and motor control: a functional magnetic resonance imaging (fMRI) study [10] showed the involvement of the primary sensorimotor area, medial frontal gyrus, and middle frontal gyrus during hand movement. Another fMRI study [11] showed that the dorsal anterior cingulate cortex modulates the supplementary motor area (SMA) during unimanual motor behavior. A further fMRI study of finger tapping showed the involvement of the contralateral primary sensorimotor cortex, SMA, ipsilateral anterior cerebellum, and occipital cortices. EEG recorded from the scalp may also reflect information from these brain regions. The lingual gyrus and calcarine sulcus have been shown to produce stronger signals during goal-oriented limb movements than during stimulus detection without motor movements [12].

Independent component (IC) analysis (ICA) has been widely used to unmix signals of interest from signals of non-interest in EEG [13]. In a previous study, hand-movement direction was decoded using EEG signals that had been cleaned of artifacts using ICA [14], and binary movement direction (left vs. right) could be predicted from ICs in the posterior parietal cortex [15]. Furthermore, ICA-derived scalp topographies can be used to localize the active cortical sources, since they are well fit by a single equivalent dipole model with low residual variance, as shown in [16]. Therefore, if we address the inherent problems of EEG (i.e., noise contamination and limited spatial resolution), we could identify and extract signals representing visual processing information that can be combined with commonly used decoding methods based on motor-related areas [17] to build more robust BMI. However, this approach has not been fully investigated yet.

In this study, we further investigated the effectiveness of ICs associated with the planning and execution of index-finger movement in classifying an increased number of movement directions, without restricting the analysis to predefined brain regions. By using an experimental paradigm and a labeling method in the decoding analysis that dissociate intrinsic and extrinsic coordinate-frame information, we extracted EEG signals containing extrinsic information and selected ICs that represent brain activity only. We performed an eight-direction classification analysis using the selected ICs and identified the brain areas contributing to the classification, and the timing of their contributions, through a clustering analysis using ICs from all the participants.

2. MATERIALS AND METHODS

2.1. Participants

Six healthy participants (two females and four males; mean age 40.67 years, SD = 7.23) performed the experiment. The study protocol was approved by the ethics committee of the University of California, San Diego (Approval No. 14353) and was carried out in accordance with the Declaration of Helsinki. Written informed consent was obtained from each participant before the experiment.

2.2. Experiment

The participants sat on a chair with their forearm and wrist supported. They were asked to move their index finger to one of eight directions shown on a screen in front of them without moving their arm. The eight possible target positions lay on a circle with a 10 cm radius, with consecutive positions 45˚ apart, as shown in Figure 1. The index finger rested at the center of a touchpad for 2 seconds, after which the target was shown for the next 2 seconds. Using the touchpad, the participants were asked to move a red cursor to the target in one motion and wait, even if the cursor did not reach the target. When the target disappeared, the participants returned their index finger to the center of the touchpad. The cursor positions were saved at a sampling rate of 30 Hz. Each participant performed 40 sessions, and each session comprised 32 trials. Every 10 sessions the elbow angle was changed (either extended or bent at 90˚). Each target could therefore be reached by two different finger movements. For example, to reach target 4 with the elbow extended, the index finger had to be flexed, whereas it had to be adducted to reach the same target when the elbow was bent at 90˚. This experimental design realizes the separation of intrinsic and extrinsic coordinate system information [10,11].

Figure 1. Experimental design. Each trial consisted of 4 seconds: 2 seconds of rest after which the cue is shown for 2 seconds. The participant moves the cursor in one motion and waits for the cue to disappear, and then returns to the start position in the middle. For half the trials, the elbow was extended and for the other half, the elbow was at 90 degrees.

When we move our body to interact with objects, the brain calculates and transforms the positions of external objects and internal body parts using different coordinate frames. The intrinsic frame's reference is within the body or muscle; it describes the action of the body, body part, or muscle (extension, abduction, and so on). The extrinsic frame's reference is a point outside the body; it describes the movement relative to the external environment. A study on primates [18] showed that neurons in the motor cortices may play a role in transforming between the two coordinate frames. Taking that into account, an experiment was designed to separate the effects of the two coordinate frames and study them individually [18]. This experimental design was later applied to a human study in which a decoding method was used to dissociate the information of the two coordinate frames [19].

2.3. EEG & EMG Acquisition

EEG and EMG were acquired using a Biosemi ActiveTwo amplifier system (Biosemi, Amsterdam, Netherlands). EEG was recorded with 128 channels according to Biosemi's equiradial layout. To identify muscle activity onset, EMG sensors were placed over the right extensor indicis and flexor digitorum. The signals were acquired at 2048 Hz. The positions of the EEG channels, along with the nasion and the left and right pre-auricular points, were measured using a posture functional capacity evaluation system. Electrooculography (EOG) signals were recorded using four electrodes to confirm, by comparing the EOG signals with the ICs, that the ICs associated with eye movements sufficiently extracted eye-movement components from the EEG. Two electrodes placed on the sides of the eyes recorded horizontal EOG, while an electrode on the right side of the forehead and another placed under the right eye recorded vertical EOG.

2.4. MRI Image Acquisition and Preprocessing

We acquired an anatomical MRI image for each participant using a General Electric (GE) Discovery MR750 3.0 T scanner equipped with a 32-channel receiver coil. A sagittal image was acquired using a T1-weighted spoiled gradient recalled sequence (TR = 8.132 ms; TE = 3.192 ms; FA = 8˚; FOV = 256 × 256 mm; matrix size = 256 × 256; 172 slices; slice thickness = 1.2 mm). Each participant's MRI image was used as a custom head model image for DIPFIT and was normalized to the standard MNI brain template using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/).

2.5. EEG Preprocessing

The EEGLAB toolbox (version 2019.1, MATLAB R2017b) was used for preprocessing. To remove baseline drift, the data were filtered with a basic FIR high-pass filter (cutoff frequency 0.5 Hz, passband edge 1 Hz, transition bandwidth 1 Hz). To remove line noise at 60 Hz, the data were filtered using a basic FIR low-pass filter (cutoff frequency 45 Hz, passband edge 40 Hz, transition bandwidth 10 Hz). The filtered data were re-referenced to the average and down-sampled to 512 Hz. EEG signals from the 128 channels were then manually checked, and a channel was removed if it exceeded the threshold of ±500 µV for more than 20% of the recording time; one channel from Participant Three was removed. The data were then divided into epochs of three seconds, with one second before the onset of the visual cue (when the target appeared) and two seconds after onset. To decompose the EEG signal into its source components, independent component analysis was performed using adaptive mixture ICA (AMICA) [20]. The obtained ICs are temporally maximally independent, which makes ICA particularly useful for unmixing eye-movement artifacts from brain EEG [21]. EMG and cursor onsets were computed from the EMG and cursor movement data.
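The preprocessing was performed in EEGLAB; purely as an illustration, the sketch below reproduces an analogous sequence of steps in Python with MNE-Python, using extended Infomax as a stand-in for AMICA (which has no MNE equivalent). The file name, stimulus channel, and event code are assumptions, not values from the study.

```python
import mne

# Hypothetical Biosemi recording; the file name and event code are assumed.
raw = mne.io.read_raw_bdf("participant01.bdf", preload=True)
raw.pick_types(eeg=True, stim=True)

# 1 Hz high-pass (drift removal) and 40 Hz low-pass (60 Hz line noise falls in the stopband).
raw.filter(l_freq=1.0, h_freq=40.0, fir_design="firwin")

# Average reference and down-sampling to 512 Hz, as described above.
raw.set_eeg_reference("average", projection=False)
raw.resample(512)

# Epochs from -1 s to +2 s around the visual cue onset (event code 1 assumed).
events = mne.find_events(raw, stim_channel="Status")
epochs = mne.Epochs(raw, events, event_id={"cue": 1}, tmin=-1.0, tmax=2.0,
                    baseline=None, preload=True)

# ICA decomposition; AMICA is not available in MNE, so extended Infomax is
# used here only as a stand-in for the authors' AMICA decomposition.
ica = mne.preprocessing.ICA(method="infomax", fit_params=dict(extended=True),
                            random_state=0, max_iter="auto")
ica.fit(epochs)
```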

2.6. Selection of Independent Components

To separate the ICs representing brain activity from non-brain ICs (e.g., channel noise, muscle, and eye-movement artifacts), we used two EEGLAB plugins for artifact detection, ICLabel [22] and MARA [23], and the EEGLAB plugin DIPFIT for source localization. ICLabel is an automated classifier of EEG independent components. It classifies ICs into seven classes (brain, muscle, eye artifact, channel noise, line noise, heart, other) using a set of standard measures (scalp topography, median power spectral density, autocorrelation function, dipole location, etc.). Its artificial neural network (ANN) was trained on more than 200,000 ICs originating from over 6000 EEG recordings from different experiments, labeled by experts and by crowd labeling. MARA (Multiple Artifact Rejection Algorithm) is a binary classifier for rejecting ICs representing artifacts. It was trained on 1290 expert-labeled ICs and relies on six features from the spatial, spectral, and temporal domains, which makes it suitable for handling eye artifacts, muscular artifacts, and channel noise. DIPFIT is a source localization tool that approximates an active cortical patch by a single dipole that would generate the same scalp map; this is valid because IC sources have been shown to be dipolar [16]. For better localization, it was important to build an accurate forward electrical head model for each participant, which was achieved by using the individual MRI images.

We visualized and evaluated each IC individually. First, we collected a set of measures for each IC: the class probabilities generated by ICLabel (brain, muscle, eye artifact, channel noise, line noise, heart, other), the artifact probability from MARA, the event-related potential (ERP) of the epoched IC time series, the power spectrum of the IC, the location of the equivalent dipole obtained using DIPFIT, and the residual variance (RV) between the scalp topography theoretically projected from the estimated dipole and the actual scalp topography obtained by ICA. We then used those measures to choose ICs that represent brain activity. In particular, an IC was considered likely to represent brain activity if it was classified as “brain” with a probability higher than 70% by ICLabel and had an artifact probability of less than 50% according to MARA. Other indicators we used were peaks in the IC power spectrum between 5 Hz and 30 Hz (mostly around 10 Hz), an equivalent dipole located within the brain, and a residual variance of less than 30%, as in [24].
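The selection itself reduces to a conjunction of thresholds on the per-IC measures. The minimal sketch below illustrates that rule in Python, assuming the ICLabel probabilities, MARA artifact probabilities, dipole coordinates, residual variances, and spectral peak frequencies have already been exported from EEGLAB; all field names are hypothetical.

```python
def is_brain_ic(ic, inside_brain):
    """ic: dict of per-IC measures exported from EEGLAB (field names assumed);
    inside_brain: callable that returns True if an MNI coordinate lies inside the brain."""
    return (
        ic["iclabel_brain_prob"] > 0.70            # ICLabel "brain" class probability
        and ic["mara_artifact_prob"] < 0.50        # MARA artifact probability
        and ic["residual_variance"] < 0.30         # DIPFIT dipole-fit quality
        and inside_brain(ic["dipole_mni_xyz"])     # equivalent dipole within the brain
        and 5.0 <= ic["spectral_peak_hz"] <= 30.0  # physiological spectral peak
    )

# Example usage (all_ics would be the exported list of per-IC measure dicts):
# selected = [ic for ic in all_ics if is_brain_ic(ic, inside_brain)]
```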

2.7. Eight Finger-Motion Direction Classification Using Independent Components

To decode finger-movement directions, we applied sparse logistic regression with Laplace approximation (SLR) using the Sparse Logistic Regression toolbox ver. 1.51 (https://bicr.atr.jp//~oyamashi/SLR_WEB.html) [25]. Part of the computations was performed on the Neuroscience Gateway (NSG) high-performance computing resources [26]. The merit of this algorithm is that it automatically selects the important features. SLR was applied in [19,25,27] for classifying both EEG and fMRI data. Here we applied SLR to the time series of the selected ICs to classify the eight finger-movement directions and to extract the temporal transition of the classification contributions of the different brain regions. To extract information specific to the extrinsic coordinate frame, we assigned the same label to trials with the same target, regardless of elbow position. For example, trials in which the finger moved to target 4 were labeled the same regardless of elbow angle. To extract information specific to the intrinsic coordinate frame, we assigned the same label to trials in which the participant performed the same finger motion. For example, trials where the finger moved to target 4 with the elbow extended and trials where the finger moved to target 6 with the elbow bent both correspond to finger extension and were thus given the same label in classification.
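To make the two labeling schemes concrete, the toy functions below illustrate one possible implementation, under the assumption that bending the elbow by 90˚ shifts the target reached by a given finger motion by two 45˚ steps around the circle; this fixed two-position offset is our illustration, consistent with the target-4/target-6 example above but not stated explicitly by the study.

```python
def extrinsic_label(target: int, elbow_bent: bool) -> int:
    # Same screen target -> same label, regardless of elbow posture.
    return target

def intrinsic_label(target: int, elbow_bent: bool) -> int:
    # Map bent-elbow trials back onto the finger motion they share with the
    # extended-elbow posture (assumed two-position shift on the 8-target circle,
    # so target 6 with the elbow bent gets the same label as target 4 extended).
    if not elbow_bent:
        return target
    return (target - 1 - 2) % 8 + 1

# intrinsic_label(6, True) == intrinsic_label(4, False) == 4
```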

The time series of the selected ICs for each participant were down-sampled to 64 Hz considering the computational cost of SLR. Only the one second after cue onset was kept, making each IC a set of 64 features. The total number of features for each participant was therefore 64 × (number of selected ICs). The total number of trials was 1280, except for Participant One (1278 trials). Each finger direction had 160 trials (80 with the elbow extended and 80 with the elbow at 90˚).
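As a sketch of this feature construction (the array layout of the epoched IC activations is an assumption based on the description above):

```python
import numpy as np
from scipy.signal import resample_poly

def build_features(ic_epochs, fs=512, fs_out=64):
    """ic_epochs: (n_trials, n_ics, n_samples) IC activations epoched from
    -1 s to +2 s around the cue at 512 Hz (layout assumed from the text)."""
    onset = fs                                        # cue onset after 1 s of pre-cue data
    post = ic_epochs[:, :, onset:onset + fs]          # keep only the 1 s after onset
    down = resample_poly(post, fs_out, fs, axis=-1)   # 512 Hz -> 64 Hz: 64 samples per IC
    return down.reshape(down.shape[0], -1)            # (n_trials, n_ics * 64), IC-major order
```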

We applied the one-versus-rest SLR algorithm with Laplace approximation and performed leave-one-out cross-validation in which one bin of trials was left out at a time. For each participant, eight classifiers were trained and tested. The dataset of each participant was divided into 160 bins, each containing eight trials, one per direction without repeats. Each bin was used once as the test set and 159 times in the training set. The most important features in the classification were selected automatically. All the weight matrices from the cross-validation were used to evaluate the importance of each feature.
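The SLR toolbox is MATLAB-based; as a rough Python sketch of the same cross-validation scheme, the example below substitutes L1-regularized one-versus-rest logistic regression (scikit-learn) for SLR, so it only approximates SLR's Bayesian automatic relevance determination through the L1 sparsity penalty. The bin index serves as the cross-validation group, giving 160 folds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.multiclass import OneVsRestClassifier

def cross_validated_accuracy(X, y, groups):
    """X: (n_trials, n_ics * 64) features, y: direction labels 1..8,
    groups: bin index per trial (each bin holds one trial per direction)."""
    clf = OneVsRestClassifier(
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
    accs, weights = [], []
    for train, test in LeaveOneGroupOut().split(X, y, groups):
        clf.fit(X[train], y[train])
        accs.append(clf.score(X[test], y[test]))
        # One weight vector per direction classifier; nonzero entries are the
        # features (IC x time point) retained in this cross-validation run.
        weights.append(np.vstack([e.coef_.ravel() for e in clf.estimators_]))
    return float(np.mean(accs)), np.stack(weights)   # weights: (n_runs, 8, n_features)
```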

2.8. Evaluation of IC Contributions to the Classification

To evaluate the contribution of each IC to the classification, we defined a metric that quantifies the importance of each data point in each IC's time series. The importance of a data point is the number of times that data point was selected by the eight direction classifiers, averaged over all the cross-validation runs. Let $IC_n(t)$ denote the time series of the $n$-th IC; the importance $\mathrm{Imp}_n(t_0)$ of the time point $IC_n(t_0)$ is defined as

$$\mathrm{Imp}_n(t_0) = \frac{1}{m} \sum_{i=1}^{m} k_i \quad (1)$$

where $m$ is the number of cross-validation runs and $k_i$ is the number of classifiers that selected $IC_n(t_0)$ in cross-validation run $i$. The importance of an IC is then the mean of the importance of its data points, and the contribution of an IC is the ratio between its importance and the total importance of all ICs.
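Given the weight matrices from all cross-validation runs (stacked as in the hypothetical sketch of Section 2.7, with features in IC-major order), Equation (1) and the IC contributions can be computed as follows; the array layout is an assumption.

```python
import numpy as np

def ic_importance(weights, n_ics, n_times=64):
    """weights: (n_runs, 8, n_ics * n_times) classifier coefficients."""
    chosen = weights != 0                           # feature selected by a classifier
    k = chosen.sum(axis=1)                          # classifiers per feature, per run
    imp = k.mean(axis=0).reshape(n_ics, n_times)    # Eq. (1), averaged over CV runs
    ic_imp = imp.mean(axis=1)                       # importance of each IC
    contribution = ic_imp / ic_imp.sum()            # share of the total importance
    return imp, contribution
```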

2.9. Clustering Analysis

Brains and scalps come in all shapes and sizes, and for EEG studies this means we cannot assume that the same EEG channel records the same information from two different participants. Localizing the ICs in the brain allows us to identify ICs that belong to the same brain region in different participants. However, this alone may not be enough to conclude that those ICs carry the same information. For this reason, we performed a clustering analysis of the ICs using multiple features: the ERP (0 s to 1 s, low-pass filtered at 40 Hz), the scalp topography, the dipole location, and the dipole moment. Due to the computational cost of clustering, principal component analysis (PCA) was used to reduce the dimensionality of the features. The number of features after PCA reduction was 10 for the ERP, 10 for the scalp topography, 3 for the dipole location, and 1 for the dipole moment. Dipole locations were assigned a weight of 3 since they contribute only 3 features. Three distance-based machine learning algorithms in EEGLAB (k-means, neural network, and affinity propagation) were used to cluster the ICs. The number of clusters is an open parameter and needs to be determined empirically. Affinity propagation automatically set the number of clusters to 4; due to the low spatial resolution of this result, the algorithm was not investigated further. We required each cluster to contain ICs from at least two participants, which limited the maximum number of clusters to 16. Using k-means and neural networks, we varied the number of clusters from 4 to 16 to evaluate the stability of the results and confirmed that they were stable from 12 to 16 clusters. Finally, to balance spatial resolution against the number of unique participants within each cluster, we set the number of clusters to 14 using the neural-network method. The centroid of each cluster was assigned to the closest brain region in the AAL atlas [28].
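Clustering was carried out with EEGLAB's STUDY tools; purely as an illustration of the PCA reduction and feature weighting described above, a scikit-learn equivalent might look like the following (the input array shapes are assumptions).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cluster_ics(erp, topo, dip_pos, dip_mom, n_clusters=14, seed=0):
    """erp: (n_ics, n_samples), topo: (n_ics, n_channels),
    dip_pos: (n_ics, 3) MNI coordinates, dip_mom: (n_ics, 3) dipole moments."""
    z = lambda a: StandardScaler().fit_transform(a)
    features = np.hstack([
        PCA(n_components=10).fit_transform(z(erp)),      # 10 ERP components
        PCA(n_components=10).fit_transform(z(topo)),     # 10 scalp-map components
        3.0 * z(dip_pos),                                # dipole location, weight 3
        PCA(n_components=1).fit_transform(z(dip_mom)),   # 1 dipole-moment component
    ])
    return KMeans(n_clusters=n_clusters, n_init=20,
                  random_state=seed).fit_predict(features)
```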

To visualize the temporal transition of the contribution of different brain regions to the classification, we computed the importance of each data point of each cluster by averaging the importance of that data point across all the ICs constituting the cluster. For a cluster $k$ containing $n$ ICs, the importance at time point $t$ is

$$\mathrm{Cluster}_k(t) = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Imp}_i(t) \quad (2)$$

The importance of each cluster is then the mean of the importance of all its data points, and the contribution of a cluster is the ratio between its importance and the total importance of all the clusters. The statistical significance of the data points of the clusters was evaluated using a paired t-test, and correction for multiple comparisons was performed by controlling the false discovery rate with the Benjamini-Hochberg procedure.
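A sketch of Equation (2) and of the FDR-corrected test follows. Because the text does not specify what the paired t-test compares, the example assumes a pairing of importance values obtained with true labels against those obtained with permuted labels; that pairing is our assumption, not the authors' stated procedure.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

def cluster_importance(imp_per_ic, member_idx):
    """imp_per_ic: (n_ics, n_times) importance values from Eq. (1);
    member_idx: indices of the ICs belonging to one cluster."""
    return imp_per_ic[member_idx].mean(axis=0)          # Eq. (2)

def fdr_significant_timepoints(real, null, alpha=0.05):
    """real, null: (n_pairs, n_times) paired importance values (pairing assumed,
    e.g. true labels vs. permuted labels for the same ICs)."""
    pvals = np.array([ttest_rel(real[:, t], null[:, t]).pvalue
                      for t in range(real.shape[1])])
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return reject                                        # True where significant after FDR
```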

3. RESULTS

3.1. Independent Components Decomposition

We obtained 128 ICs for each participant except Participant Three, who had 127 ICs because we removed a noisy channel. Of the 767 ICs, only 84 were kept after rejecting non-brain ICs and ICs dominated by noise. Figure 2(a), Figure 2(b), Figure 2(f), and Figure 2(g) show an example of an IC that represents brain activity and the properties used for the selection.

Figure 2. Example of an accepted IC and its properties. (a) The scalp map and the probability computed by ICLabel, (b) Epoched trials with ERP, (f) Power spectrum, (g) Dipole location using DIPFIT, and the residual variance. (c) Number of ICs per residual variance. (d) Number of ICs per artifact probability range computed using MARA. (e) Number of ICs per brain probability range computed using ICLabel.

We selected ICs whose equivalent dipoles were located within the brain and that showed a visible ERP and a power spectrum with frequency peaks in physiologically reasonable EEG bands (from 5 Hz to 30 Hz, mostly around 10 Hz). Specifically, the EEGLAB plugin ICLabel estimates how likely the IC is to represent brain activity, as shown in Figure 2(a). Figure 2(b) shows the average of the IC time series across all trials, in which a visible ERP response can be observed. The activity power spectrum in Figure 2(f) shows a peak at 11 Hz, and the DIPFIT result in Figure 2(g) shows that the equivalent dipole is located within the brain and has a low residual variance (1.4%), meaning that the projected topography of the model equivalent dipole accounts for 98.6% of the variance in the scalp topography. Figure 2(c) shows the number of ICs within each residual variance range for all participants. Figure 2(e) shows the number of selected ICs per probability range assigned by ICLabel. In addition, the artifact probabilities of the selected ICs computed using MARA are shown in Figure 2(d). For every IC, we checked the six measures mentioned above and decided whether the IC would be included in the classification phase.
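For reference, residual variance as reported here is, in its common formulation, the fraction of scalp-map variance left unexplained by the dipole projection; a minimal sketch of that definition (our restatement, not code from the study):

```python
import numpy as np

def residual_variance(measured_topo, dipole_topo):
    """measured_topo: IC scalp map from ICA; dipole_topo: scalp projection of the
    fitted equivalent dipole. An RV of 0.014 means the dipole model accounts
    for 98.6% of the scalp-map variance."""
    residual = measured_topo - dipole_topo
    return np.sum(residual ** 2) / np.sum(measured_topo ** 2)
```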

3.2. Eight-Class Motion-Direction Classification Results Using Individually Selected ICs

The classification results in the extrinsic and intrinsic frames are shown in Table 1. To evaluate the statistical significance of the classification results, a non-parametric permutation test was applied. For each participant, the classification accuracy of a model trained on the same data with randomly permuted labels was calculated in the same leave-one-out trial cross-validation manner. This was repeated 5000 times. Supplementary Figure S1 shows the histograms of the average classification accuracies of the 5000 repeats. The p-values of the real classification accuracies were computed and are shown in Table 1. All participants showed classification accuracies significantly higher than chance level.
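The permutation test can be expressed compactly by reusing the hypothetical cross_validated_accuracy helper sketched in Section 2.7; note that 5000 repeats of a full leave-one-out cross-validation are computationally heavy, which is one reason such computations are typically offloaded to HPC resources such as NSG.

```python
import numpy as np

def permutation_pvalue(X, y, groups, observed_acc, n_perm=5000, seed=0):
    """Non-parametric permutation test of the classification accuracy."""
    rng = np.random.default_rng(seed)
    null_acc = np.empty(n_perm)
    for i in range(n_perm):
        y_shuffled = rng.permutation(y)              # break the label-EEG association
        null_acc[i], _ = cross_validated_accuracy(X, y_shuffled, groups)
    # Add-one correction keeps the estimated p-value away from exactly zero.
    return (1 + np.sum(null_acc >= observed_acc)) / (1 + n_perm)
```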

Figure 3 shows the ten most important data points of each IC of Participant One, who had the highest classification accuracy.

Figure 3. The ten most important time points for each IC for Participant One. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms). The dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Table 1. Classification accuracy in the extrinsic and intrinsic frames. Real labels, extrinsic (intrinsic): classification accuracy using the real labels, with the standard deviation (SD) from leave-one-out-trial cross-validation, in the extrinsic (intrinsic) frame. Random labels, extrinsic (intrinsic): average classification accuracy over 5000 repeats of the permutation test using randomly permuted labels in the extrinsic (intrinsic) frame, with the standard deviation (SD) from leave-one-out-trial cross-validation. p-values were obtained from the non-parametric permutation test.

The light-gray shaded area is the time period between the cue onset (t = 0 ms, target appearance) and the EMG onset (around t = 220 ms). The dark-gray shaded area is the time period between the EMG onset (around t = 220 ms) and the cursor onset, i.e., when the cursor started moving (around t = 450 ms). In Participant One, ICs in the left middle occipital gyrus (12.21%), the right lingual gyrus (10.37%, 8.92%), the left lingual gyrus (9.27%), and the left posterior cingulate gyrus (6.94%) showed a higher contribution to the classification than the other ICs. Results from the other participants are included in Supplementary Figures S3-S7. The contributions and AAL-atlas locations of the selected ICs are summarized in Supplementary Table S1.

3.3. Contribution Analysis Using IC Clustering

We obtained 14 clusters (Clst 1 to Clst 14), with four to nine ICs per cluster. The centroid of each cluster was located in the AAL atlas: Clst 1: right lingual, Clst 2: right lingual, Clst 3: left anterior cingulate and para-cingulate gyri, Clst 4: left precentral gyrus, Clst 5: left supplementary motor area (SMA), Clst 6: left inferior temporal gyrus, Clst 7: right precuneus, Clst 8: left inferior parietal gyrus, Clst 9: right postcentral gyrus, Clst 10: right insula, Clst 11: left superior occipital gyrus, Clst 12: left posterior cingulate gyrus, Clst 13: left paracentral lobule, and Clst 14: right superior frontal gyrus. Supplementary Table S2 shows the MNI coordinates of the clusters' centroids and their corresponding locations in the AAL atlas, along with the ICs constituting each cluster and the participants they were derived from. Four clusters had ICs from five participants (Clsts 1, 2, 11, and 13), seven clusters from four participants (Clsts 3, 5, 6, 9, 10, 12, and 14), two from three participants (Clsts 7 and 8), and only one from two participants (Clst 4). Supplementary Figures S8-S11 show the constituting ICs of each cluster in blue, with the centroid of the cluster indicated in green; the brain regions corresponding to the centroids are indicated. For 12 clusters, the centroid was in the same brain region as most ICs of the cluster. For Clst 10 (right insula) and Clst 13 (left paracentral lobule), most ICs were located in brain areas different from the cluster's centroid (6 out of 6 for Clst 10 and 4 out of 5 for Clst 13).

Figure 4(a) and Figure 4(c) show the ten most important time points of each cluster in the extrinsic and intrinsic frames. All the data points in Figure 4 are statistically significant. The average contribution of each cluster to the classification is indicated next to its name in percentages. Figure 4(a) and Figure 4(c) show that the importance of the clusters changes over time according to the movement phase. Here we focus on two time periods: before the EMG onset (from 0 ms to 220 ms, the light-gray shaded area) and between the EMG onset and cursor onset (from 220 ms to 450 ms, the dark-gray shaded area). To better identify the critical time periods in the classification, every four consecutive data points were averaged in Figure 4(b) and Figure 4(d). For the extrinsic frame, ten clusters had their highest contribution at t3, from 125 ms to 187.5 ms, before EMG onset. The most contributive cluster was Clst 2: right lingual, followed by Clst 12: left posterior cingulate gyrus, Clst 6: left inferior temporal gyrus, Clst 11: left superior occipital gyrus, Clst 7: right precuneus, Clst 5: left SMA, Clst 14: right superior frontal gyrus, Clst 13: left paracentral lobule, Clst 9: right postcentral gyrus, and Clst 8: left inferior parietal gyrus, in descending order. Clst 3: left anterior cingulate gyrus was most contributive around the EMG onset, at t4 from 187.5 ms to 250 ms. Clst 1: right lingual and Clst 4: left precentral gyrus were more contributive after EMG onset and before cursor onset, at t5 from 250 ms to 312.5 ms, and Clst 10: right insula was most contributive around the cursor onset, at t7 from 375 ms to 437.5 ms. For the intrinsic frame, six clusters had their highest contribution before EMG onset: Clst 2: right lingual at t2; Clst 6: left inferior temporal gyrus, Clst 11: left superior occipital gyrus, Clst 12: left posterior cingulate, and Clst 7: right precuneus at t3; and Clst 8: left inferior parietal gyrus at t1, in descending order of contribution. The notable differences between the contributing clusters of the two coordinate frames are that Clst 5: left SMA, Clst 4: left precentral gyrus, and Clst 14: right superior frontal gyrus contributed more in the intrinsic frame, while Clst 12: left posterior cingulate contributed less in the intrinsic frame. The timing of the contributions in the intrinsic frame was more scattered, from t1 = [0 ms to 62.5 ms] to t11 = [875 ms to 937 ms].

4. DISCUSSION

This study suggests that we could predict eight finger-movement directions from EEG-derived ICs.

Figure 4. (a) and (c) The ten most important data points of each cluster in the extrinsic (a) and intrinsic (c) frames. The size and color of each data point reflect the number of times the data point was selected by the classifiers, averaged over the cross-validation runs. The clusters are arranged according to their contribution to the classification, which is indicated in percentages next to each cluster's name. (b) and (d) Contribution of each cluster in the extrinsic (b) and intrinsic (d) frames according to the average of each four consecutive data points. Each dot represents a time period of 62.5 ms; dots circled in red represent the peak values. t1 = [0 ms to 62.5 ms], t2 = [62.5 ms to 125 ms], t3 = [125 ms to 187.5 ms], t4 = [187.5 ms to 250 ms], t5 = [250 ms to 312.5 ms], t6 = [312.5 ms to 375 ms], t7 = [375 ms to 437.5 ms], t8 = [562.5 ms to 625 ms], t9 = [687.5 ms to 750 ms], t10 = [812.5 ms to 875 ms], t11 = [875 ms to 937 ms]. The unit of the color bars is the number of classifiers that selected the data point, averaged over the cross-validation runs.

The ICs are located in the lingual gyrus, posterior cingulate gyrus, inferior temporal gyrus, superior occipital gyrus, precuneus, supplementary motor area (SMA), and precentral gyrus. The classification accuracy was higher when the trials were labeled according to the extrinsic frame than when they were labeled according to the intrinsic frame, suggesting that the selected ICs carry information about motor control and visuospatial processing more than information about finger muscle and joint activation.

First, we used ICA to unmix the EEG signals into temporally maximally independent signals (ICs). This technique is physiologically reasonable [16,21] because artifacts and brain signals are temporally independent. This is especially important for removing eye-movement artifacts, as demonstrated in [29] and further confirmed here by comparing the ERP of EOG signals recorded with separate electrodes and the ERP of the ICs classified as eye artifacts, as shown in Supplementary Figure S12. We then studied the properties of each IC separately, aided by two automated methods, to identify ICs that reflect brain activity only. The time series of the selected ICs were then used to classify eight finger-movement directions using a sparse algorithm (SLR) that automatically selected the important time points for the classification. In this study, we were particularly interested in how the brain controls the finger to navigate the environment, which is related to the extrinsic frame, as opposed to the action executed by the finger (flexion, extension, etc.), which is related to the intrinsic frame. To account for the relatively small number of participants, individual MRI images were obtained and a non-parametric permutation test with 5000 repeats was applied to check the significance of the classification results. For all the participants, the selected ICs achieved a classification accuracy higher than that achieved by classifiers trained on the same data with random labels (p-values < 1.4e−03), suggesting that the selected ICs carry information about finger-movement direction. Moreover, the average classification accuracy in the extrinsic coordinate frame (47.97%) was higher than that in the intrinsic frame (24.53%), suggesting that the selected ICs are biased towards the extrinsic frame and may carry information related to visuospatial processing rather than finger muscle and joint information during the movement.

To investigate the common patterns in the ICs between participants, we performed a clustering analysis where ICs from different participants that had similar properties were grouped into clusters. We then studied the spatial and temporal aspects of each cluster. The spatial aspect was defined by the position of the cluster’s centroid in the AAL atlas. For the temporal aspect, we defined a measure to quantify the importance of each time data point in the cluster.

Temporally, the period from 125 ms to 187.5 ms, just before the EMG onset at 220 ms, contributed the most to the classification in the extrinsic frame, which is physiologically reasonable. Time points of ten clusters during that period were used the most for classification, as shown in Figure 4(b) (Clst 2: Lingual R, Clst 12: Cingulum Post L, Clst 6: Temporal Inf L, Clst 11: Occipital Sup L, Clst 7: Precuneus R, Clst 5: SMA L, Clst 14: Frontal Sup R, Clst 13: Paracentral L, Clst 9: Postcentral R, and Clst 8: Parietal Inf L, in descending order).

For the motor planning phase, the two clusters that contributed the most were located in the right lingual gyrus (Clst 1 and Clst 2), together accounting for 20.87% of the contribution to the classification and belonging to right Brodmann areas BA18 and BA19. Both areas are involved in visuospatial information processing [30 - 32]. An fMRI study also showed that the lingual gyrus produces stronger signals during goal-oriented limb movements than during stimulus detection without motor movement [12]. We can also observe the contribution of the dorsal and ventral visual pathways [33]: starting from Clst 1 and Clst 2 (right lingual), the ventral stream leads to Clst 6 (left inferior temporal gyrus), which is involved in recognizing patterns [33], and the dorsal stream leads to Clst 11 (left superior occipital gyrus) and Clst 8 (left inferior parietal gyrus), the latter of which is involved in visually guided motion such as grasping [34]. In addition, the activation of Clst 7 (right precuneus) may be due to its role in motor coordination, which requires shifting attention when making movements [35].

For the motor execution phase, the high contribution of Clst 12: left posterior cingulate can be attributed to its involvement in the task-negative network and the control of self-determined finger movements [36]. Clst 3: left anterior cingulate was also involved due to its role in maintaining stimulus timing in motor control [37], modulating unimanual motor behavior, shifting attention between different locations in space [38], and its projections to the SMA and the primary motor area (M1), which is in accordance with our findings since both Clst 5: left SMA and Clst 4: left precentral gyrus were active. Clst 5: left SMA was active before Clst 4: left precentral gyrus, which is consistent with [39], where the SMA was shown to activate before M1 in externally cued tasks. The SMA also interacts with the cingulate cortex in the preparation of self-generated actions [40]. Clst 4: left precentral gyrus was most contributive at t5 (250 ms to 312.5 ms), after EMG onset and before cursor onset, which is reasonable. The contribution of Clst 14: right superior frontal gyrus can be attributed to its involvement in a variety of functions connected to our task; for instance, it is involved in finger proprioception, visuospatial and visuomotor attention, and motor sequencing and planning [10]. Clst 9: right postcentral gyrus is involved in the sense of touch (the index finger rested on the touchpad); it is also involved in finger proprioception and voluntary hand movement [41].

The notable difference between the two coordinate frames is that Clst 5: left SMA and Clst 4: left precentral gyrus became more contributive in the intrinsic frame, while Clst 12: left posterior cingulate gyrus became less contributive. This suggests that the left SMA and left precentral gyrus are more biased towards the intrinsic frame, while the left posterior cingulate gyrus is more biased towards the extrinsic frame. There is evidence that M1 is more biased towards the intrinsic frame [27,42 - 44], which agrees with our findings; however, to the best of our knowledge, a bias of the SMA towards the intrinsic frame has not been reported before. The bias of the posterior cingulate gyrus towards the extrinsic frame can be attributed to its role in visuospatial processing [45].

Concerning Clst 10: right insula and Clst 13: left paracentral lobule, since the constituting ICs and the centroids did not fall into the same brain regions, we could not use these results to draw any conclusions; this is another limitation of this study. The left paracentral lobule, in particular, is concerned with motor and sensory innervation of the lower limbs and the regulation of physiological functions, which is irrelevant to our experimental paradigm. To investigate the representations further, it might be acceptable to manually adjust the clustering results based on brain function; this might be necessary when developing a practical BMI in the future.

The individual differences in classification accuracy may be due to differences in the ICA decomposition between participants. The number of selected ICs does not seem to determine the classification accuracy: Participant One, with 16 ICs, performed better than Participant Four, with 9 ICs, yet Participant Four performed better than Participant Five (19 ICs) and Participant Six (13 ICs). A noticeable difference between Participant One and Participant Six is the location of the selected ICs and their contributions. For Participant One, the dipoles of three ICs were located in the lingual gyrus, accounting for 28.56% of the classification contribution, whereas only one IC of Participant Six had a dipole located in the lingual gyrus, accounting for 9.06% of the contribution. Another noticeable difference is that Participant One had one IC with a dipole located in the precentral gyrus, which is directly involved in finger motion, while none of Participant Six's ICs had a dipole in that location. A future study could give a better understanding of these individual differences.

To evaluate the possibility of using the classifier for BCI control, the confusion matrices of all participants (Supplementary Figure S13) were analyzed to detect patterns in the false positives. Supplementary Figure S2 summarizes the false-positive cases for each direction across participants. False positives adjacent to the desired direction accounted for 27.83% of all classifications. If predictions within (−45˚, 45˚) of the desired direction are counted as correct, the average classification accuracy across all participants for the eight finger-movement directions would be 75.8%, which is a reasonable rate for many real-world applications.
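The ±45˚ accuracy follows directly from a confusion matrix by counting, in addition to the diagonal, the two circularly adjacent predicted directions; a minimal sketch:

```python
import numpy as np

def within_45deg_accuracy(cm):
    """cm: 8x8 confusion matrix (rows = true direction, columns = predicted)."""
    n = cm.shape[0]
    # Count the true direction plus its two circular neighbors (+/- 45 degrees).
    hits = sum(cm[i, i] + cm[i, (i - 1) % n] + cm[i, (i + 1) % n]
               for i in range(n))
    return hits / cm.sum()
```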

Ideally, it might have been better to add a condition in which the participants only observed the visual cue without moving their finger, to confirm the results of the ICs in visuospatial and occipital regions. However, we did not ask the participants to perform this task, to make it easier for them to sustain concentration and to avoid fatigue that would affect task performance. Instead, we used the established ICA-based approach to address the issue of eye-movement artifacts. This means that the current study cannot completely dissociate the effect of presenting the visual cue from the visuospatial processing. However, it is natural that visuospatial processes involve occipito-parietal regions, and their involvement does not by itself mean that the observed brain activity is due to the visual cue presentation. ICA decomposition tends to find more ICs in the occipital regions than anywhere else because ICA is biased towards large synchronous cortical patches, which is the case for the occipital lobe: humans have a large visual cortex, with large functional subregions, occupying a large proportion of the cortical surface area [46]. We did find clusters situated in motor-related areas, such as the left precentral gyrus (Clst 4) and the left SMA (Clst 5). Although we cannot dissociate the visual, visuospatial, and motor control components from scalp EEG, as discussed in [14], we eliminated the external effect of the eye movements.

5. CONCLUSION

This study aimed to investigate the neural representation of finger-movement directions through classification analysis using EEG independent components. We found that widely distributed brain regions are active during finger movements. The selected independent components were more biased towards the extrinsic coordinate frame. Independent components from occipital areas involved in visuospatial processing contributed the most to classifying finger movements before EMG onset. This finding could be used to classify movement directions in conjunction with information from motor-related areas. Furthermore, our results suggest that the contribution of the supplementary motor area (SMA) is biased towards the intrinsic coordinate frame rather than the extrinsic coordinate frame, complementing previous evidence of such a bias in the primary motor cortex. Our approach using classification analysis suggests that EEG from occipital regions can carry visuospatial information and may be used to classify movement directions.

ACKNOWLEDGEMENTS

We thank Dr. Scott Makeig for arranging and supervising the collaborative project.

FUNDING

This work is supported by the Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers from JSPS, The Swartz Foundation (Old Field, NY) through Swartz Center for Computational Neuroscience in The University of California San Diego. This work was also supported in part by JST PRESTO (Precursory Research for Embryonic Science and Technology) (grant number JPMJPR17JA).

SUPPLEMENTARY MATERIALS

Figure S1. Results of the Non-parametric permutation test for accuracies in the extrinsic and intrinsic coordinate frames. Histograms of the null hypothesis H0: classification accuracy is due to random chance. Accuracies were calculated in a leave-one-trial-out cross-validation. The process was repeated 5000 times. Each classifier was trained with the specific participant’s data using randomly permuted labels.

Figure S2. False positives for each direction across participants. The red arrow represents the true direction. The length of the red arrow represents the number of true positives. The actual number is indicated near the arrow. Blue arrows represent false positives and their directions. Their length reflects the number of false-positive trials. Notice the false positives are concentrated in the adjacent directions (−45˚, 45˚).

Figure S3. The ten most important time points for each IC for Participant Two. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms); the dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Figure S4. The ten most important time points for each IC for Participant Three. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms); the dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Figure S5. The ten most important time points for each IC for Participant Four. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms); the dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Figure S6. The ten most important time points for each IC for Participant Five. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms); the dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Figure S7. The ten most important time points for each IC for Participant Six. The most important time point is circled in red and the next two in black. The size and color of each time point represent the number of times that time point was chosen by the classifiers, averaged over the cross-validation runs. The number next to each IC is the percentage contribution of the IC. The light-gray shaded area is the time period before EMG onset (0 ms to 220 ms); the dark-gray shaded area is the time period between EMG onset and cursor onset (220 ms to 450 ms).

Figure S8. Position of clusters centroids (in green) and their constituent ICs (in blue), left lobe, medial side. Medial cortex diagram case courtesy of Assoc Prof Frank Gaillard, Radiopaedia.org, rID: 47208.

Figure S9. Position of clusters centroids (in green) and their constituent ICs (in blue), right lobe, medial side. Medial cortex diagram case courtesy of Assoc Prof Frank Gaillard, Radiopaedia.org, rID: 47208.

Figure S10. Position of clusters centroids (in green) and their constituent ICs (in blue), left lobe lateral view. Lateral surface diagram case courtesy of Assoc Prof Frank Gaillard, Radiopaedia.org, rID: 46670.

Figure S11. Position of clusters centroids (in green) and their constituent ICs (in blue), right lobe lateral view. Lateral surface diagram case courtesy of Assoc Prof Frank Gaillard, Radiopaedia.org, rID: 46670. Insula diagram case courtesy of Assoc Prof Frank Gaillard, Radiopaedia.org, rID: 46846.

Figure S12. (a) ERP of the vertical EOG of Participant Five; (b) ERP of independent component 1 of Participant Five extracted with ICA.

Table S1. Approximate brain locations in the AAL atlas of the selected ICs and their average contribution to the classification (%).

Note: Brain locations are derived from the AAL atlas. Abbreviations: R: right, L: left, Inf: inferior, Mid: middle, Sup: superior, Ant: anterior, Post: posterior, Supp: supplementary, Cingulum Ant: anterior cingulate and para-cingulate gyri.

Table S2. The clusters with their constituting ICs and the participants they were derived from.

Note: Brain locations are derived from the AAL atlas. Below each cluster's name are the MNI coordinates of the cluster's centroid. Abbreviations: R: right, L: left, Inf: inferior, Mid: middle, Sup: superior, Ant: anterior, Post: posterior, Supp: supplementary, Cingulum Ant: anterior cingulate and para-cingulate gyri.

Figure S13. Confusion matrices of all participants.

Conflicts of Interest

The authors declare no conflict of interest.

References

[1] Balasubramanian, S., Garcia-Cossio, E., Birbaumer, N., Burdet, E. and Ramos-Murguialday, A. (2018) Is EMG a Viable Alternative to BCI for Detecting Movement Intention in Severe Stroke? IEEE Transactions on Biomedical Engineering, 65, 2790-2797.
https://doi.org/10.1109/TBME.2018.2817688
[2] Shain, W., et al. (2003) Controlling Cellular Reactive Responses around Neural Prosthetic Devices Using Peripheral and Local Intervention Strategies. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 11, 186-188.
https://doi.org/10.1109/TNSRE.2003.814800
[3] Gruenwald, J., Znobishchev, A., Kapeller, C., Kamada, K., Scharinger, J. and Guger, C. (2019) Time-Variant Linear Discriminant Analysis Improves Hand Gesture and Finger Movement Decoding for Invasive Brain-Computer Interfaces. Frontiers in Neuroscience, 13, 901.
https://doi.org/10.3389/fnins.2019.00901
[4] Salari, E., Freudenburg, Z.V., Branco, M.P., Aarnoutse, E.J., Vansteensel, M.J. and Ramsey, N.F. (2019) Classification of Articulator Movements and Movement Direction from Sensorimotor Cortex Activity. Scientific Reports, 9, Article No. 14165.
https://doi.org/10.1038/s41598-019-50834-5
[5] Pan, G., et al. (2018) Rapid Decoding of Hand Gestures in Electrocorticography Using Recurrent Neural Networks. Frontiers in Neuroscience, 12, 555.
https://doi.org/10.3389/fnins.2018.00555
[6] Berlot, E., Prichard, G., O’Reilly, J., Ejaz, N. and Diedrichsen, J. (2019) Ipsilateral Finger Representations in the Sensorimotor Cortex Are Driven by Active Movement Processes, Not Passive Sensory Input. Journal of Neurophysiology, 121, 418-426.
https://doi.org/10.1152/jn.00439.2018
[7] McFarland, D.J., Sarnacki, W.A. and Wolpaw, J.R. (2010) Electroencephalographic (EEG) Control of Three-Dimensional Movement. Journal of Neural Engineering, 7, Article ID: 036007.
https://doi.org/10.1088/1741-2560/7/3/036007
[8] Jung, T.-P., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E. and Sejnowski, T.J. (2000) Removal of Eye Activity Artifacts from Visual Event-Related Potentials in Normal and Clinical Subjects. Clinical Neurophysiology, 111, 1745-1758.
https://doi.org/10.1016/S1388-2457(00)00386-2
[9] Nauhaus, I., Benucci, A., Carandini, M. and Ringach, D.L. (2008) Neuronal Selectivity and Local Map Structure in Visual Cortex. Neuron, 57, 673-679.
https://doi.org/10.1016/j.neuron.2008.01.020
[10] Nakata, H., Domoto, R., Mizuguchi, N., Sakamoto, K. and Kanosue, K. (2019) Negative BOLD Responses during Hand and Foot Movements: An fMRI Study. PLoS ONE, 14, e0215736.
https://doi.org/10.1371/journal.pone.0215736
[11] Asemi, A., Ramaseshan, K., Burgess, A., Diwadkar, V.A. and Bressler, S.L. (2015) Dorsal Anterior Cingulate Cortex Modulates Supplementary Motor Area in Coordinated Unimanual Motor Behavior. Frontiers in Human Neuroscience, 9, 309.
https://doi.org/10.3389/fnhum.2015.00309
[12] Astafiev, S.V., Stanley, C.M., Shulman, G.L. and Corbetta, M. (2004) Extrastriate Body Area in Human Occipital Cortex Responds to the Performance of Motor Actions. Nature Neuroscience, 7, 542-548.
https://doi.org/10.1038/nn1241
[13] Jung, T.P., et al. (2000) Removing Electroencephalographic Artifacts by Blind Source Separation. Psychophysiology, 37, 163-178.
https://doi.org/10.1111/1469-8986.3720163
[14] Tanaka, H., Miyakoshi, M. and Makeig, S. (2018) Dynamics of Directional Tuning and Reference Frames in Humans: A High-Density EEG Study. Scientific Reports, 8, Article No. 8205.
https://doi.org/10.1038/s41598-018-26609-9
[15] Wang, Y. and Makeig, S. (2009) Predicting Intended Movement Direction Using EEG from Human Posterior Parietal Cortex. International Conference on Foundations of Augmented Cognition, San Diego, 19-24 July 2009, 437-446.
https://doi.org/10.1007/978-3-642-02812-0_52
[16] Delorme, A., Palmer, J., Onton, J., Oostenveld, R. and Makeig, S. (2012) Independent EEG Sources Are Dipolar. PLoS ONE, 7, e30135.
https://doi.org/10.1371/journal.pone.0030135
[17] Jawad Khan, M., Hong, M.J. and Hong, K.S. (2014) Decoding of Four Movement Directions Using Hybrid NIRS-EEG Brain-Computer Interface. Frontiers in Human Neuroscience, 8, Article 244.
https://doi.org/10.3389/fnhum.2014.00244
[18] Kakei, S., Hoffman, D.S. and Strick, P.L. (2003) Sensorimotor Transformations in Cortical Motor Areas. Neuroscience Research, 46, 1-10.
https://doi.org/10.1016/S0168-0102(03)00031-2
[19] Yoshimura, N., Tsuda, H., Kawase, T., Kambara, H. and Koike, Y. (2017) Decoding Finger Movement in Humans Using Synergy of EEG Cortical Current Signals. Scientific Reports, 7, Article No. 11382.
https://doi.org/10.1038/s41598-017-09770-5
[20] Palmer, J., Kreutz-Delgado, K. and Makeig, S. (2011) AMICA: An Adaptive Mixture of Independent Component Analyzers with Shared Components. Tech. Report, Swartz Center for Computational Neuroscience, San Diego, 1-15.
http://dsp.ucsd.edu/~kreutz/Publications/palmer2011AMICA.pdf
[21] Onton, J. and Makeig, S. (2006) Information-Based Modeling of Event-Related Brain Dynamics. Progress in Brain Research, 159, 99-120.
https://doi.org/10.1016/S0079-6123(06)59007-7
[22] Pion-Tonachini, L., Kreutz-Delgado, K. and Makeig, S. (2019) ICLabel: An Automated Electroencephalographic Independent Component Classifier, Dataset, and Website. Neuroimage, 198, 181-197.
https://doi.org/10.1016/j.neuroimage.2019.05.026
[23] Winkler, I., Haufe, S. and Tangermann, M. (2011) Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals. Behavioral and Brain Functions, 7, 30.
https://doi.org/10.1186/1744-9081-7-30
[24] Miyakoshi, M., Kanayama, N., Iidaka, T. and Ohira, H. (2010) EEG Evidence of Face-Specific Visual Self-Representation. Neuroimage, 50, 1666-1675.
https://doi.org/10.1016/j.neuroimage.2010.01.030
[25] Yamashita, O., Sato, M., Yoshioka, T., Tong, F. and Kamitani, Y. (2008) Sparse Estimation Automatically Selects Voxels Relevant for the Decoding of fMRI Activity Patterns. Neuroimage, 42, 1414-1429.
https://doi.org/10.1016/j.neuroimage.2008.05.050
[26] Sivagnanam, S., et al. (2013) Introducing the Neuroscience Gateway. CEUR Workshop Proceedings, Zurich, Switzerland, 3-5 June 2013, 993.
http://ceur-ws.org/Vol-993/paper10.pdf.
[27] Yoshimura, N., et al. (2014) Dissociable Neural Representations of Wrist Motor Coordinate Frames in Human Motor Cortices. Neuroimage, 97, 53-61.
https://doi.org/10.1016/j.neuroimage.2014.04.046
[28] Rolls, E.T., Huang, C.C., Lin, C.P., Feng, J. and Joliot, M. (2020) Automated Anatomical Labelling Atlas 3. Neuroimage, 206, Article ID: 116189.
https://doi.org/10.1016/j.neuroimage.2019.116189
[29] Mennes, M., Wouters, H., Vanrumste, B., Lagae, L. and Stiers, P. (2010) Validation of ICA as a Tool to Remove Eye Movement Artifacts from EEG/ERP. Psychophysiology, 47, 1142-1150.
https://doi.org/10.1111/j.1469-8986.2010.01015.x
[30] Waberski, T.D., Gobbelé, R., Lamberty, K., Buchner, H., Marshall, J.C. and Fink, G.R. (2008) Timing of Visuo-Spatial Information Processing: Electrical Source Imaging Related to Line Bisection Judgements. Neuropsychologia, 46, 1201-1210.
https://doi.org/10.1016/j.neuropsychologia.2007.10.024
[31] Fortin, A., Ptito, A., Faubert, J. and Ptito, M. (2002) Cortical Areas Mediating Stereopsis in the Human Brain: A PET Study. Neuroreport, 13, 895-898.
https://doi.org/10.1097/00001756-200205070-00032
[32] Mechelli, A., Humphreys, G.W., Mayall, K., Olson, A. and Price, C.J. (2000) Differential Effects of Word Length and Visual Contrast in the Fusiform and Lingual Gyri during Reading. Proceedings of the Royal Society B: Biological Sciences, 267, 1909-1913.
https://doi.org/10.1098/rspb.2000.1229
[33] Sheth, B.R. and Young, R. (2016) Two Visual Pathways in Primates Based on Sampling of Space: Exploitation and Exploration of Visual Information. Frontiers in Integrative Neuroscience, 10, 37.
https://doi.org/10.3389/fnint.2016.00037
[34] Stark, A. and Zohary, E. (2008) Parietal Mapping of Visuomotor Transformations during Human Tool Grasping. Cerebral Cortex, 18, 2358-2368.
https://doi.org/10.1093/cercor/bhm260
[35] Wenderoth, N., Debaere, F., Sunaert, S. and Swinnen, S.P. (2005) The Role of Anterior Cingulate Cortex and Precuneus in the Coordination of Motor Behaviour. European Journal of Neuroscience, 22, 235-246.
https://doi.org/10.1111/j.1460-9568.2005.04176.x
[36] Schubert, T., von Cramon, D.Y., Niendorf, T., Pollmann, S. and Bublak, P. (1998) Cortical Areas and the Control of Self-Determined Finger Movements: An fMRI Study. Neuroreport, 9, 3171-3176.
https://doi.org/10.1097/00001756-199810050-00009
[37] Bubb, E.J., Metzler-Baddeley, C. and Aggleton, J.P. (2018) The Cingulum Bundle: Anatomy, Function, and Dysfunction. Neuroscience & Biobehavioral Reviews, 92, 104-127.
https://doi.org/10.1016/j.neubiorev.2018.05.008
[38] Sturm, W., et al. (2006) Spatial Attention: More than Intrinsic Alerting? Experimental Brain Research, 171, 16-25.
https://doi.org/10.1007/s00221-005-0253-1
[39] Weilke, F., et al. (2001) Time-Resolved fMRI of Activation Patterns in M1 and SMA during Complex Voluntary Movement. Journal of Neurophysiology, 85, 1858-1863.
https://doi.org/10.1152/jn.2001.85.5.1858
[40] Nguyen, V.T., Breakspear, M. and Cunnington, R. (2014) Reciprocal Interactions of the SMA and Cingulate Cortex Sustain Premovement Activity for Voluntary Actions. Journal of Neuroscience, 34, 16397-16407.
https://doi.org/10.1523/JNEUROSCI.2571-14.2014
[41] Carey, L.M., Abbott, D.F., Egan, G.F. and Donnan, G.A. (2008) Reproducible Activation in BA2, 1 and 3b Associated with Texture Discrimination in Healthy Volunteers over Time. Neuroimage, 39, 40-51.
https://doi.org/10.1016/j.neuroimage.2007.08.026
[42] Toxopeus, C.M., de Jong, B.M., Valsan, G., Conway, B.A., Leenders, K.L. and Maurits, N.M. (2011) Direction of Movement Is Encoded in the Human Primary Motor Cortex. PLoS ONE, 6, e27838.
https://doi.org/10.1371/journal.pone.0027838
[43] Kakei, S., Hoffman, D.S. and Strick, P.L. (1999) Muscle and Movement Representations in the Primary Motor Cortex. Science, 285, 2136-2139.
https://doi.org/10.1126/science.285.5436.2136
[44] Cheney, P.D., Fetz, E.E. and Palmer, S.S. (1985) Patterns of Facilitation and Suppression of Antagonist Forelimb Muscles from Motor Cortex Sites in the Awake Monkey. Journal of Neurophysiology, 53, 805-820.
https://doi.org/10.1152/jn.1985.53.3.805
[45] Vermetten, E., Charney, D.S. and Bremner, J.D. (2002) Anxiety. In: Encyclopedia of the Human Brain, Vol. 1, Elsevier, Amsterdam, 159-180.
https://doi.org/10.1016/B0-12-227210-2/00028-5
[46] Nunez, P.L. and Srinivasan, R. (2009) Electric Fields of the Brain: The Neurophysics of EEG.
https://www.doi.org/10.1093/acprof:oso/9780195050387.001.0001
