The Microsoft KINECT: A Novel Tool for Psycholinguistic Research

PP. 291-301
DOI: 10.4236/ojml.2015.53026

ABSTRACT

The Microsoft KINECT is a 3D sensing device originally developed for the XBOX. It opens up many exciting new opportunities for conducting experimental research on human behavior. We investigated some of these possibilities within the field of psycholinguistics (specifically, language production) by creating software in C# that allows the KINECT to be used in a typical psycholinguistic experimental setting. The results of a naming experiment using this software confirmed that the KINECT can measure the effect of a robust psycholinguistic variable (word frequency) on naming latencies. Although the current version of the software is able to measure the psycholinguistic variables of interest, we also discuss several points on which it can still be improved. The main aim of this paper is to make the software freely available for assessment and use by the psycholinguistic community, and to illustrate the KINECT as a potentially valuable tool for investigating human behavior, especially in the field of psycholinguistics.
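The paper's software itself is written in C# against the KINECT SDK; as a language-neutral illustration of the core idea behind measuring a naming latency — scanning an audio buffer for the first window whose energy crosses a threshold (voice onset) — the following is a minimal Python sketch. The function name, threshold, and window size are assumptions for illustration, not the authors' actual parameters.

```python
# Hypothetical sketch: estimate a naming latency (voice onset) from a mono
# audio buffer by finding the first fixed-size window whose RMS amplitude
# exceeds a threshold. Real voice keys (and the paper's C# software) are
# more sophisticated, e.g. to avoid phonetic onset biases (Kessler et al., 2002).

def voice_onset_ms(samples, sample_rate, threshold=0.02, window=256):
    """Return the estimated onset latency in milliseconds, or None if no onset is found."""
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        rms = (sum(x * x for x in chunk) / window) ** 0.5  # root-mean-square amplitude
        if rms >= threshold:
            return 1000.0 * start / sample_rate  # convert sample index to ms
    return None
```

A simple amplitude threshold like this is known to be biased by the manner of articulation of the word-initial phoneme (soft fricatives trigger later than plosives), which is one reason speech-analysis-based measurement is an active concern in naming research.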

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

Verdonschot, R. , Guillemaud, H. , Rabenarivo, H. and Tamaoka, K. (2015) The Microsoft KINECT: A Novel Tool for Psycholinguistic Research. Open Journal of Modern Linguistics, 5, 291-301. doi: 10.4236/ojml.2015.53026.


Copyright © 2019 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.