By Constantinos Loukas
Simulation-based education is a training and feedback method in which learners practice a task repeatedly on a simulated model until reaching a predefined competency level. Training is performed in a lifelike environment, with feedback obtained either from external observers (experts) or from the simulation system itself, based on validated assessment metrics. In minimally invasive surgery, the instruments and training models with which the user interacts may be real (physical reality simulation) or virtual (virtual reality simulation). As opposed to the traditional model, in simulation training "permission to fail" can be built into the learning process without jeopardizing patient safety. Other major advantages include risk-free practice on complex and rare case scenarios, constructive feedback, individualized training, and objective assessment.
Despite the significant assets of reality simulation, a major issue is the assessment process: the traditional model requires careful review of the recorded video, a lengthy process that is also prone to errors due to reviewer fatigue, given that most tasks require several trials to master. Recently, there has been an influx of computational tools enabling automated performance analysis and assessment. Hand motion analysis systems constitute the majority of these tools, whereby specialized sensors are attached either to the surgeon's hand or to the instrument handle (see Fig. 1). A well-established assessment metric is the hand motion trajectory. This is usually obtained by means of a motion analysis system equipped with electromechanical or infrared sensors. The signal emitted (or reflected) by the sensors is acquired by a receiver placed at a fixed position near the system, thus providing kinematic measurements of the instruments in real time. More advanced systems utilize multisensory information, including force and torque signals, or specialized sensor-gloves that capture hand gestures during the performance of a laparoscopic task. The signal data obtained are modeled with advanced computational techniques, such as Hidden Markov and Multivariate Autoregressive Models, in order to generate an assessment index based on the data collected. These methods allow association with quantifiable parameters that correlate with surgical experience.
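Two of the quantifiable parameters commonly derived from a recorded tip trajectory are total path length and economy of motion. The following minimal sketch computes both from a sampled 3-D trajectory; the function names and the list-of-tuples input format are illustrative assumptions, not part of any particular tracking system's API.

```python
import math

def path_length(traj):
    # traj: list of (x, y, z) instrument-tip positions sampled over time.
    # Total distance travelled is the sum of distances between
    # consecutive samples.
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def economy_of_motion(traj):
    # Straight-line displacement from start to end, divided by the
    # total path length. A value of 1.0 means a perfectly direct path;
    # lower values indicate wasted movement.
    straight = math.dist(traj[0], traj[-1])
    total = path_length(traj)
    return straight / total if total else 1.0
```

In practice, shorter path lengths and economy-of-motion values closer to 1.0 tend to correlate with greater surgical experience, which is what makes such kinematic summaries useful as assessment indices.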
An alternative, yet more challenging, methodology for obtaining kinematic information is based purely on the visual information captured by the endoscopic camera, implying a sensorless training environment that provides greater flexibility to the trainee. The literature contains a small number of systems that attempt to detect and track the laparoscopic instruments. The instruments are first detected using, for example, edge or color information, sometimes with the aid of a color marker, and then tracked in subsequent frames. Visual tracking of objects of interest in the surgical simulation space opens a great range of opportunities for the development of hybrid training methods such as Augmented Reality simulation, whereby physical and virtual objects are mixed, allowing users to interact with virtual models using real surgical instruments. Recent research has revealed the potential advantages of this method, such as realistic haptic feedback, objective assessment of performance, high-quality visualizations, and great flexibility in the development of training scenarios.
Medical Physics Lab-Simulation Center, Medical School
University of Athens
President of the Virtual Reality Medical Institute (VRMI), Brussels, Belgium; Executive VP of the Virtual Reality Medical Center (VRMC), based in San Diego and Los Angeles, California; CEO of the Interactive Media Institute, a 501(c)(3) non-profit; Clinical Instructor in the Department of Psychiatry at UCSD; Founder of the CyberPsychology, CyberTherapy, & Social Networking Conference; Visiting Professor at Catholic University Milan.