The HoloLens’ Potential Impact on Neurosurgery

The DiVE recently acquired the Microsoft HoloLens, a new device that provides a mixed reality experience and opens new opportunities for creativity and innovation.

Because it combines the real world with the virtual world, the HoloLens has the potential to expand and advance many professional fields. The interdisciplinary nature of this technology is highlighted by the development of an application for neurosurgery.

Shervin Rahimpour, M.D., a third-year neurosurgery resident at Duke Hospital, and Andrew Cutler, M.D., a second-year neurosurgery resident at Duke Hospital, teamed with faculty mentor and neurosurgeon Dr. Patrick Codd to develop an application that they hope will enhance brain navigation in surgery. As neurosurgeons, Rahimpour and Cutler are familiar with the challenges of operating on the brain.

Rahimpour and Cutler were brainstorming ways to increase the accuracy and ease of neurosurgical procedures around the time the HoloLens was first being advertised and discussed. They saw its potential to enhance neurosurgery and decided to explore the idea.

Rahimpour notes that neurosurgeons base much of what they do during bedside procedures on “landmarks of the head,” which is not as precise as it could be. To address this, the team aims to create a virtual, patient-specific map of the brain. The map will then be projected through the HoloLens onto the patient’s head, and this overlay will provide a more accurate navigational system for the brain. Accurately overlaying the virtual map on the patient’s head is anticipated to be difficult, but not impossible.
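
For readers curious how such an overlay might be aligned, here is a minimal, hypothetical Python sketch of one standard approach, landmark-based rigid registration (the Kabsch algorithm): paired landmarks identified on the virtual map and measured on the patient’s head yield the rotation and translation that place the map over the head. The landmark coordinates below are invented for illustration; this is not the team’s actual HoloLens code.

```python
# Hypothetical sketch: rigid registration of a patient-specific model to the
# patient's head from paired anatomical landmarks (Kabsch algorithm).
# Illustrative only; this is not the neurosurgery team's actual code.
import numpy as np

def register_landmarks(model_pts, head_pts):
    """Return rotation R and translation t mapping model landmarks onto the
    corresponding landmarks measured on the patient's head."""
    model_pts = np.asarray(model_pts, dtype=float)
    head_pts = np.asarray(head_pts, dtype=float)

    # Center both point sets on their centroids.
    mc, hc = model_pts.mean(axis=0), head_pts.mean(axis=0)
    A, B = model_pts - mc, head_pts - hc

    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    t = hc - R @ mc
    return R, t

# Example with three invented landmarks (e.g., nasion and both tragi) given in
# the model's coordinates and in the headset's world coordinates (meters).
model = [[0.00, 0.09, 0.00], [-0.07, 0.00, 0.00], [0.07, 0.00, 0.00]]
head = [[0.52, 1.61, 0.30], [0.45, 1.52, 0.31], [0.59, 1.52, 0.29]]
R, t = register_landmarks(model, head)
print(R, t)  # apply R and t to every vertex of the virtual map before display
```

Once R and t are known, the entire patient-specific map can be transformed into the headset’s world frame so the hologram sits on the patient’s head.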

One procedure that the application will enhance is the bedside placement of an external ventricular drain. With current approaches, the accuracy of this procedure is limited and highly user-dependent; the virtual map projected through the HoloLens should improve it. Using the application for this procedure is expected to serve as a proof of concept for the project. Once it is proven effective, Rahimpour and Cutler hope the application can be extended to other neurosurgical procedures in need of an augmented reality-based navigation system.

The DiVE is happy to partner with Rahimpour and Cutler in developing this application. We are excited to see the implications that this new navigational system will have in the future.


New Equipment: HoloLens

In addition to the Oculus Rift (CV1), the DiVE has recently acquired the Microsoft HoloLens. The HoloLens is a device that provides a mixed reality experience.

Mixed reality is different from virtual reality and augmented reality. Virtual reality offers a completely immersive experience: when a user looks around a virtual world, the view adjusts just as it would in the real world, convincing the brain that it is somewhere it’s not. Augmented reality adds digital information on top of the user’s view of the real world.

Mixed reality is a blend of the real and the virtual world; it combines virtual reality and augmented reality. It allows virtual objects to coexist with real objects, anchoring them to positions in space so that they interact with other virtual objects, real objects, and the user much as real objects would. For example, from the perspective of the person wearing a mixed reality headset, a virtual ball could be hidden under a real table. To read more about the differences between virtual reality, augmented reality, and mixed reality, see Recode’s article “Choose Your Reality: Virtual, Augmented or Mixed.”

Microsoft describes the HoloLens as “the first fully self-contained, holographic computer, enabling you to interact with high-definition holograms in your world.” It lets the user experience a mixed reality world in which the virtual coexists with the real, opening opportunities for collaboration and creativity that virtual reality and augmented reality alone don’t provide.

The HoloLens lets users work on a virtual object with their hands and see their hands at work. Other users can collaborate on the same object, whether or not they are physically present in the room.

The HoloLens employs a Holographic Processing Unit (HPU), built from custom silicon, that processes data from the device’s many sensors at extremely high speed; Windows 10, which includes the Windows Holographic mixed reality platform, reads those sensors correctly and in real time. Holographic, high-definition lenses use advanced optical projection to create multi-dimensional, full-color images with low latency. The audio components track where the head is so that sound seems to come from the holograms themselves, reinforcing the sense that something real is there.

Because mixed reality opens new possibilities for creation and collaboration, the HoloLens offers a wide range of opportunities in research, business, and science. The DiVE is excited to take advantage of this new technology.

For more information on mixed reality and the Windows HoloLens, explore Microsoft’s page about the HoloLens.

New Equipment: Oculus Rift (CV1)

The DiVE is on top of the latest virtual reality gadgets and gizmos. Earlier this year, Oculus released the newest head-mounted virtual reality display: the Oculus Rift (CV1). We are so excited to announce that we have acquired the Rift!

In contrast to the DiVE’s 6-sided, CAVE-type system, virtual reality head-mounted displays do not allow one to directly see one’s hands and body while in the virtual world, which leads to a slightly disembodied experience. However, this does provide interesting opportunities for alternative embodiment (e.g. you could look down and see a dragon body and dragon wings!).

The Oculus Rift (CV1) comes with four different pieces: a position tracker, a headset, an Xbox One controller, and an integrated audio system. The position tracker looks like a classy microphone and houses the Constellation tracking system, which monitors infrared LEDs embedded in the headset. LEDs on both the front and the back of the headset allow for 360-degree tracking.
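
As a rough illustration of the idea behind this kind of optical tracking (and not Oculus’ own code), the Python sketch below estimates a headset pose from the image positions of a handful of known LED locations, using OpenCV’s solvePnP as a stand-in solver. The LED layout, pixel detections, and camera intrinsics are all invented for the example.

```python
# Hypothetical constellation-style tracking: estimate the headset pose from
# the 2D image positions of known infrared LEDs (not Oculus' implementation).
import numpy as np
import cv2

# LED positions in the headset's own coordinate frame (meters, invented).
led_model = np.array([
    [-0.08,  0.04,  0.00],
    [ 0.08,  0.04,  0.00],
    [-0.08, -0.04,  0.00],
    [ 0.08, -0.04,  0.00],
    [ 0.00,  0.06, -0.03],
    [ 0.00, -0.06, -0.03],
], dtype=np.float64)

# Where those LEDs were detected in the tracker camera's image (pixels).
led_image = np.array([
    [256.0, 272.0],
    [384.0, 272.0],
    [256.0, 208.0],
    [384.0, 208.0],
    [320.0, 289.5],
    [320.0, 190.5],
], dtype=np.float64)

# Pinhole intrinsics for the tracker camera (focal length, principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume no lens distortion for this sketch

# Solve for the headset's rotation and translation relative to the camera.
ok, rvec, tvec = cv2.solvePnP(led_model, led_image, K, dist)
print(ok, rvec.ravel(), tvec.ravel())  # here tvec comes out roughly 1 m ahead
```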

The headset is light and adjustable. Two lenses magnify and focus the displays, of which there are two, running at a combined resolution of 2160 x 1200. The Adjacent Reality Tracker is a key part of the headset: it is what enables the system to take into account the many small movements of the head and adjust the rendered view accordingly.
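
To give a feel for what adjusting to small head movements involves, here is a toy Python sketch of the general idea: integrate the tracker’s angular rates into a yaw and pitch, then derive the direction the camera should look. The Rift’s actual sensor fusion is far more sophisticated; the rates and time step here are invented.

```python
# Toy sketch of head-tracking updates (illustrative only): integrate angular
# rates from the tracker into yaw/pitch and compute the new view direction.
import numpy as np

def update_orientation(yaw, pitch, yaw_rate, pitch_rate, dt):
    """Integrate angular rates (radians per second) over a time step dt."""
    return yaw + yaw_rate * dt, pitch + pitch_rate * dt

def view_direction(yaw, pitch):
    """Unit vector the camera looks along for the given yaw and pitch."""
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

# A small head turn to the right over one short tracking step.
yaw, pitch = update_orientation(0.0, 0.0, yaw_rate=0.5, pitch_rate=0.0, dt=0.01)
print(view_direction(yaw, pitch))
```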

The Rift’s audio system provides an immersive sonic experience in which sound seems to come from all directions. It does this by combining the Head-Related Transfer Function (HRTF) with the Rift’s head tracking; the HRTF is a set of data describing how a sound changes depending on the direction it arrives from. The result is a smooth and immersive sound system. The included headphones are detachable, so users can substitute their own if they prefer.
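
For the curious, here is a simplified, hypothetical Python sketch of the general idea behind combining an HRTF with head tracking: rotate the source direction into head coordinates using the tracked pose, pick the measured HRTF nearest that direction, and convolve the sound with the left- and right-ear impulse responses. The tiny HRTF bank below is a toy stand-in, not Oculus’ audio pipeline.

```python
# Illustrative HRTF spatialization sketch (not Oculus' implementation).
import numpy as np

def spatialize(mono, source_dir_world, head_rotation, hrtf_bank):
    """mono: 1-D audio samples; source_dir_world: unit vector to the source;
    head_rotation: 3x3 matrix from head tracking; hrtf_bank: dict mapping an
    azimuth in degrees to a (left_ir, right_ir) pair of impulse responses."""
    # Rotate the source direction into head coordinates using the tracked pose.
    d = head_rotation.T @ source_dir_world

    # Reduce to an azimuth angle and pick the nearest measured HRTF.
    azimuth = np.degrees(np.arctan2(d[0], d[2]))
    nearest = min(hrtf_bank, key=lambda a: abs(a - azimuth))
    left_ir, right_ir = hrtf_bank[nearest]

    # Convolving with each ear's impulse response applies the direction cues.
    left = np.convolve(mono, left_ir)
    right = np.convolve(mono, right_ir)
    return np.stack([left, right], axis=1)

# Toy bank of made-up impulse responses: a source to the listener's right
# arrives earlier and louder at the right ear, later and quieter at the left.
bank = {
    -90.0: (np.array([0.9, 0.0, 0.0]), np.array([0.0, 0.0, 0.3])),  # left
      0.0: (np.array([0.7, 0.0, 0.0]), np.array([0.7, 0.0, 0.0])),  # ahead
     90.0: (np.array([0.0, 0.0, 0.3]), np.array([0.9, 0.0, 0.0])),  # right
}
tone = np.sin(np.linspace(0.0, 2.0 * np.pi * 440.0, 4800))
stereo = spatialize(tone, np.array([1.0, 0.0, 0.0]), np.eye(3), bank)
print(stereo.shape)  # (4802, 2): the tone now favors the right channel
```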

Oculus’ partnership with Microsoft has enabled Oculus to provide an Xbox One controller with every Rift. The controller adds an interactive component to the virtual reality experience, enabling the user to play a part in the world.

Compatibility with Windows 10 is built into the Rift, which enables developers to create new worlds through the Microsoft ecosystem. Doing so requires a computer that meets a list of requirements, making it “Oculus Ready.”

The DiVE is very excited for the research and education opportunities that the Oculus Rift (CV1) will enable. We also are enjoying exploring the new technology by playing applications such as Lucky’s Tale!

For more information on the Oculus Rift, read “How Oculus Rift works: Everything you need to know about the VR sensation” and Oculus’ Page on the Rift.

DiVE Featured on EdTech

Reprinted with permission from EdTech: Focus on Higher Education

Virtual reality provides amazing opportunities in the fields of teaching, learning, and research.  Jacquelyn Bengfort focuses on these possibilities in her article “Virtual Reality Facilitates Higher Ed Research and Teaches High-Risk Skills” on EdTech, a publication focused on technology and education.

Bengfort discusses how virtual reality can enhance education by acting as a simulator; it enables professors to “bring the world to their students.” Duke is on the cutting edge of this educational opportunity with its recent renovations to the DiVE.

Virtual reality also provides opportunities for interdisciplinary research. The DiVE brings together a wide variety of professionals to collaborate on projects.

In addition to focusing on the DiVE, Bengfort highlights virtual reality’s role at the California State University Maritime Academy, where it helps train students for sea, and at the Harvard Business School, where virtual reality provides students with a virtual classroom in HBX Live.

Read Bengfort’s article at http://www.edtechmagazine.com/higher/article/2016/05/virtual-reality-facilitates-higher-ed-research-and-teaches-high-risk-skills.

Congratulations to the Duke Team for an Honorable Mention at the 2016 IEEE VR Conference

A team of Duke researchers, David Zielinski, Hrishikesh Rao, Nick Potter, Lawrence Appelbaum, and Regis Kopper, represented Duke at the 2016 IEEE VR Conference, which took place March 19-23 in Greenville, SC. This premier international conference and exhibition features some of the most innovative research, brightest minds, and top companies in virtual reality technology.

The Duke team presented their latest research as a poster, “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments.” Out of 84 poster presentations, the team won the honorable mention for best poster award, placing our Duke researchers at the cutting edge of virtual reality research. A big congratulations to them!


Their presentation, which was also featured as a full paper at the Symposium on 3D User Interfaces, covered recent research analyzing a visual display technique for low frame rate virtual environments called low persistence (LP). Especially interesting is how it differs from the low frame rate high persistence (HP) technique. In the HP technique, the same rendered frame gets repeated a number of times until a new frame is generated, a process we all see when running complex games on slow computers. With the LP technique, when a frame is generated, rather than showing it repeatedly, black frames are shown while waiting for the next frame to be generated. To learn more about the LP technique, researchers at Duke evaluated user learning and performance during a target acquisition task similar to a shotgun trap shooting simulation, in which the user has to acquire targets moving along several different trajectories. The results suggest that the LP technique may be just as useful as the HP technique: the LP condition approaches high frame rate performance for certain classes of target trajectories, and user learning was similar in the LP and high frame rate conditions.
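
To make the contrast concrete, here is a tiny Python sketch of the two presentation strategies (an illustration of the idea only, not the experimental system). It assumes a display refreshing four times for every new rendered frame.

```python
# Illustrative comparison of high persistence (HP) vs. low persistence (LP)
# presentation when the renderer produces one frame per four display refreshes.

def high_persistence(frames, repeats=4):
    """HP: each rendered frame is shown repeatedly until the next one is ready."""
    shown = []
    for frame in frames:
        shown.extend([frame] * repeats)
    return shown

def low_persistence(frames, repeats=4):
    """LP: each rendered frame is shown once, then black frames fill the gap."""
    shown = []
    for frame in frames:
        shown.extend([frame] + ["black"] * (repeats - 1))
    return shown

rendered = ["F0", "F1", "F2"]
print(high_persistence(rendered))  # ['F0', 'F0', 'F0', 'F0', 'F1', ...]
print(low_persistence(rendered))   # ['F0', 'black', 'black', 'black', 'F1', ...]
```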

For more information, check out the poster abstract and the full paper.

Team of Duke researchers featured at this year’s IEEE Virtual Reality Conference


The IEEE Virtual Reality Conference is the premier meeting in its field. This year it will be held in Greenville, SC, featuring recent developments in virtual reality technology and drawing academics, researchers, industry representatives, and VR enthusiasts. Our own David Zielinski will represent Duke, presenting his recent paper, “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments,” co-authored with Hrishikesh Rao, Nick Potter, Marc Sommer, Lawrence Appelbaum, and Regis Kopper, at the IEEE Symposium on 3D User Interfaces, co-located with the Virtual Reality Conference. In addition, a team of researchers including Leonardo Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, and Marcio Pinho from PUCRS/Brazil, and MEMS professor and DiVE director Regis Kopper, entered the 7th annual IEEE 3DUI contest, which will be judged live at the Symposium. The purpose of the contest is to promote creative solutions to challenging 3DUI problems, and the Duke team’s submission, the Collaborative Hybrid Virtual Environment, does just that.

Zielinski’s paper analyzes a visual display technique for low frame rate virtual environments called low persistence (LP). Especially interesting is how it differs from the low frame rate high persistence (HP) technique. In the HP technique, one frame of fresh content is repeated a number of times until the system produces the next frame, causing the break in motion perception we usually see when trying to play a video game on a slow computer. With the LP technique, the fresh content is shown once, and instead of repeating it while the next frame is being generated, black frames are inserted, effectively causing a stroboscopic effect. To learn more about the LP technique, researchers at Duke evaluated user learning and performance during a target acquisition task similar to a shotgun trap shooting simulation, in which the user has to acquire targets moving along several different trajectories. The results showed that the LP technique performs as well as the HP technique: the LP condition approaches high frame rate performance for certain classes of target trajectories, and user learning was similar in the LP and high frame rate conditions. A future area of research is to investigate in what situations the LP technique can have performance or experience benefits over traditional low frame rate simulations.

The Collaborative Hybrid Virtual Environment project entered in the 3DUI contest is a system in which a single virtual object is manipulated simultaneously by two users performing different operations, such as scaling, rotating, and translating. It tested which point of view, exocentric or egocentric, is better for each operation, and how the degrees of freedom should be divided between the two users to complete a task most efficiently. The exocentric view is one where the user stands at a distance from the object, while the egocentric view is one where the user takes the object’s perspective. If the two users have the same view, they complete the task almost identically, which is not much different from one person completing it alone. By giving the two users different perspectives, complex operations can be performed more efficiently.
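
As a purely hypothetical sketch of what splitting degrees of freedom between two users might look like (not the contest system itself), the Python snippet below composes one user’s translation and rotation with the other user’s uniform scale into a single object transform.

```python
# Hypothetical degree-of-freedom split for collaborative manipulation:
# user A controls translation and rotation, user B controls uniform scale.
import numpy as np

def compose_transform(translation_a, rotation_a, scale_b):
    """Build a single 4x4 object transform from the two users' inputs."""
    T = np.eye(4)
    T[:3, :3] = rotation_a * scale_b  # rotation scaled uniformly by user B
    T[:3, 3] = translation_a          # placement chosen by user A
    return T

# User A places the object 2 m ahead and turns it 90 degrees about the y-axis;
# user B, working from an exocentric viewpoint, doubles its size.
rot_y_90 = np.array([[ 0.0, 0.0, 1.0],
                     [ 0.0, 1.0, 0.0],
                     [-1.0, 0.0, 0.0]])
transform = compose_transform(np.array([0.0, 0.0, 2.0]), rot_y_90, 2.0)
print(transform)
```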