Information Fusion for Process Acquisition in the Operating Room

Abstract

The recognition of surgical processes in the operating room is an emerging research field in medical engineering. We present the design and implementation of an instrument localization system based on information fusion strategies that enhance its recognition power. The system was implemented using RFID technology. It monitored the presence of surgical tools at the interventional site and on the instrument tray, and combined the measured information by applying redundant, complementary, and cooperative information fusion strategies to obtain a more comprehensive model of the current situation. An evaluation study showed a correct classification rate of 97% for the system.
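To illustrate the three fusion strategies named in the abstract, the following is a minimal sketch, not the authors' implementation: it assumes hypothetical Boolean RFID detections from multiple antennas per zone, fuses redundant readings within a zone by majority vote, combines the two complementary zones (interventional site and instrument tray) into a location label, and uses both fused estimates cooperatively to classify the instrument's state. All function names and the zone model are illustrative assumptions.

```python
def redundant_fusion(readings):
    """Redundant fusion: majority vote over antenna readings
    covering the SAME zone (True = tag detected). Illustrative only."""
    return sum(readings) > len(readings) / 2

def complementary_fusion(at_site, on_tray):
    """Complementary fusion: the two zones observe DIFFERENT parts of
    the scene; combining them yields a fuller picture than either alone."""
    if at_site and not on_tray:
        return "in use"
    if on_tray and not at_site:
        return "on tray"
    if at_site and on_tray:
        return "conflict"  # both zones report the tag -> needs resolution
    return "unknown"       # neither zone detects the tag

def locate(site_readings, tray_readings):
    """Cooperative fusion: use both fused zone estimates together
    to derive a state neither sensor group could report on its own."""
    return complementary_fusion(redundant_fusion(site_readings),
                                redundant_fusion(tray_readings))
```

For example, `locate([True, True, False], [False, False, False])` classifies the instrument as "in use": the site antennas out-vote the single dissenting reading, while the tray antennas report no tag.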

Share and Cite:

Neumuth, T. and Meißner, C. (2012) Information Fusion for Process Acquisition in the Operating Room. Open Journal of Applied Sciences, 2, 195-198. doi: 10.4236/ojapps.2012.24B044.

Conflicts of Interest

The authors declare no conflicts of interest.


Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.