Dr. Ryan P. McMahan
The University of Texas at Dallas
Monday, March 25, 2019
3:30PM – 4:30PM – HEC 450
Dr. Ryan P. McMahan is an Associate Professor of Computer Science and of Arts, Technology, and Emerging Communication at The University of Texas at Dallas. He directs the Future Immersive Virtual Environments (FIVE) Lab, which focuses on using immersive technologies, such as virtual reality (VR) and 3D interactions, along with human-computer interaction (HCI) methodologies, to facilitate and enhance education and training through cyberlearning. Dr. McMahan is a National Science Foundation (NSF) CAREER Award winner and a recipient of a Google Internet of Things (IoT) Technology Research Award. He co-authored 3D User Interfaces: Theory and Practice, Second Edition, and his research is highly cited. He is a recipient of the 2016 UT Dallas Provost's Award for Faculty Excellence in Undergraduate Research Mentoring and of an Erik Jonsson School Outstanding Faculty Teaching Award. Dr. McMahan also serves as an Associate Editor for the International Journal of Human-Computer Studies and as an advisory board member for MyndVR, LLC. He received his Ph.D. in Computer Science and Applications from Virginia Tech in 2011.
Robotic operating room (ROR) surgeons have many opportunities for training and practice. However, non-surgeon ROR team members often receive limited training and rarely have opportunities to practice their skills. Consumer virtual reality (VR) technologies afford unique opportunities to remedy these issues, including support for cognitive, gross-motor, and team skills development; unlimited virtual supplies; and deployable solutions.
In this talk, I present the evolution of VR applications focused on training ROR skills. I discuss several research challenges, including mesh simplification of high-poly manufacturer models, dense interaction spaces for manipulating the virtual robot, and 3D interactions with complex inverse kinematics (IK). I present the results of a formative evaluation of our initial VR training solution involving actual ROR team members. I then describe how we addressed the critical usability and training issues identified with improved IK interactions, a new design methodology for VR training solutions, and animated feedforward techniques. Finally, I conclude with current and future research plans that leverage the developed training solution as a testbed.
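To give a flavor of the inverse kinematics (IK) problems mentioned above: even the simplest articulated mechanism, a two-link planar arm, requires solving for joint angles that place the end effector at a target point. The sketch below is purely illustrative of that general class of problem; it is not drawn from the talk's actual VR robot model, and the function name and parameters are hypothetical.

```python
import math

def solve_two_link_ik(x, y, l1, l2):
    """Analytic IK for a two-link planar arm (illustrative example only).

    Given a target (x, y) and link lengths l1, l2, return joint angles
    (theta1, theta2) in radians, or None if the target is unreachable.
    Uses the standard law-of-cosines ("elbow-up") solution.
    """
    r_sq = x * x + y * y
    # cos(theta2) from the law of cosines
    d = (r_sq - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if d < -1.0 or d > 1.0:
        return None  # target outside the reachable workspace
    theta2 = math.atan2(math.sqrt(1.0 - d * d), d)
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2)
    )
    return theta1, theta2

def forward_kinematics(theta1, theta2, l1, l2):
    """End-effector position for the same arm; useful to verify IK."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

A surgical robot arm has many more joints and constraints, so closed-form solutions like this rarely exist and iterative or hybrid IK solvers are used instead, which is part of what makes the 3D interaction work described in the talk challenging.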