TITLE:
The Influence of Prior Sensory Context on Meditative Neural States
AUTHORS:
Shenhan Qiu
KEYWORDS:
Meditation, EEG, Machine Learning, LDA, SVM, Random Forest, Interpretability, Drowsiness, Consumer-Grade EEG, Methodological Comparison
JOURNAL NAME:
Journal of Behavioral and Brain Science, Vol. 15, No. 12, December 22, 2025
ABSTRACT: Meditation offers a controlled behavioral context for probing attention, arousal, and self-regulation. Rather than positioning the present work as a discovery of novel neural signatures, we analyze consumer-grade EEG recordings with the explicit aim of comparing linear and non-linear classification approaches for distinguishing meditation, alert rest, and drowsy rest. We evaluate Linear Discriminant Analysis (LDA), Support Vector Machines (SVM), and Random Forests (RF) under identical preprocessing and cross-validation protocols, and we report not only accuracy, F1, and area under the Receiver Operating Characteristic curve (ROC-AUC), but also interpretability metrics (feature salience stability; alignment with canonical EEG bands) and robustness (between-participant generalization; sensitivity to artifact handling). Across participants, the non-linear models (RF and SVM) yield higher predictive performance than LDA, whereas LDA provides transparent, physiologically interpretable weightings linking increased alpha/theta power and reduced beta activity to meditative engagement. We further analyze how prior context—e.g., participants’ recent sleep, baseline arousal, and prior mindfulness exposure—modulates model performance and feature stability, showing that uncontrolled contextual variability can inflate apparent “neural markers.” We release code and a reproducible analysis workflow to encourage methodological transparency when consumer-grade EEG is used for meditation research. Our findings suggest that model choice materially shapes conclusions: non-linear models improve classification, while linear models clarify mechanisms. We therefore recommend a two-tiered workflow: 1) non-linear screening for sensitivity; 2) linear confirmatory modeling for interpretability, coupled with explicit contextual covariates. This reframing follows reviewer guidance to present the work as a methodological comparison rather than a claim of new causal neural correlates.
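The two-tiered workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the study's released code: the band-power features and class structure are synthetic stand-ins (here only two classes, rest vs. meditation, with elevated theta/alpha and reduced beta in the meditation class, as the abstract reports), and the specific model settings are assumptions.

```python
# Sketch of the two-tiered workflow: non-linear screening (SVM, RF) plus a
# linear confirmatory model (LDA) whose weights map onto canonical EEG bands.
# All data below are synthetic stand-ins, not the study's recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60  # epochs per class
bands = ["delta", "theta", "alpha", "beta", "gamma"]
# Rest: flat band-power profile. Meditation: higher theta/alpha, lower beta.
rest = rng.normal(loc=[1.0, 1.0, 1.0, 1.0, 1.0], scale=0.4, size=(n, 5))
med = rng.normal(loc=[1.0, 1.6, 1.6, 0.6, 1.0], scale=0.4, size=(n, 5))
X = np.vstack([rest, med])
y = np.array([0] * n + [1] * n)

# Tier 1: screen all models under one shared cross-validation protocol.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    results[name] = scores.mean()
    print(f"{name}: mean CV accuracy = {results[name]:.3f}")

# Tier 2: refit LDA on all epochs and inspect its per-band weights for a
# physiologically interpretable read-out of the decision boundary.
lda = LinearDiscriminantAnalysis().fit(X, y)
for band, w in zip(bands, lda.coef_[0]):
    print(f"{band}: {w:+.2f}")
```

On this synthetic data the LDA weights come out positive for theta and alpha and negative for beta, mirroring the direction of the group differences that were built into the stand-in features; in a real analysis the contextual covariates the abstract mentions (recent sleep, baseline arousal, prior mindfulness exposure) would enter the model alongside the band powers.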