The HoloLens’ Potential Impact on Neurosurgery

The DiVE recently acquired the Microsoft HoloLens, a new device that provides a mixed reality experience and opens new opportunities for creativity and innovation.

The HoloLens has the potential to expand and advance different professional fields because of its ability to combine the real world with the virtual world. The interdisciplinary nature of this technology is highlighted in the development of an application for neurosurgery.

Shervin Rahimpour, M.D., a third-year neurosurgery resident at Duke Hospital, and Andrew Cutler, M.D., a second-year neurosurgery resident at Duke Hospital, teamed with faculty mentor and neurosurgeon Dr. Patrick Codd to develop an application that they hope will enhance brain navigation in surgery. As neurosurgeons, Rahimpour and Cutler are familiar with the challenges of operating on the brain.

Rahimpour and Cutler were brainstorming about how to increase the accuracy and ease of neurosurgical procedures around the time when the HoloLens was being advertised and discussed. They saw the potential of the HoloLens to enhance neurosurgery and decided to explore the idea.

Rahimpour comments that neurosurgeons base much of what they do during bedside procedures on “landmarks of the head,” which is not as precise as it could be. To address this issue, they aim to create a virtual, patient-specific map of the brain. The map will then be projected through the HoloLens onto the patient’s head, providing a more accurate navigational system for the brain. Accurately overlaying the virtual map on the brain is anticipated to be difficult, but not impossible.

One procedure that the application will enhance is the bedside placement of an external ventricular drain. With current approaches, the accuracy of this procedure is insufficient and highly user-dependent. The virtual map projected through the HoloLens should increase the procedure’s accuracy, and its use in this procedure is expected to serve as a proof of concept for the project. Once the application is proven effective, Rahimpour and Cutler hope it can be extended to other neurosurgical procedures in need of an augmented reality-based navigation system.

The DiVE is happy to partner with Rahimpour and Cutler in developing this application. We are excited to see the implications that this new navigational system will have in the future.

 

New Equipment: HoloLens

In addition to the Oculus Rift (CV1), the DiVE has recently acquired the Microsoft HoloLens. The HoloLens is a device that provides a mixed reality experience.

Mixed reality is different from virtual reality and augmented reality. Virtual reality offers a completely immersive experience; when a virtual reality user looks around a virtual reality world, their view is adjusted as it would be in the real world. The user’s brain is convinced that it’s somewhere it’s not. Augmented reality adds digital information on top of the user’s view of the real world.

Mixed reality is a blend between the real and the virtual world; it combines virtual reality and augmented reality. It allows virtual objects to coexist with real objects: it anchors virtual objects to positions in space so that they can interact with other virtual objects, real objects, and the user much as real objects would. For example, from the perspective of the person wearing mixed reality lenses, a virtual ball could be hidden under a real table. To read more about the differences between virtual reality, augmented reality, and mixed reality, visit Recode’s article “Choose Your Reality: Virtual, Augmented or Mixed.”

Microsoft describes the HoloLens as “the first fully self-contained, holographic computer, enabling you to interact with high-definition holograms in your world.” It allows the user to experience a mixed reality world, allowing the virtual to coexist with the real. It opens new opportunities for collaboration and creativity that virtual reality and augmented reality don’t provide.

The HoloLens enables users to use their hands to work on a virtual object and to see their hands at work. Additionally, other users are able to collaborate on the object, whether they are physically present in the actual location or not.

The HoloLens employs a Holographic Processing Unit (HPU), made of custom silicon, that processes data coming from the device’s many sensors at extremely high speed; Windows 10, which includes the Windows holographic mixed reality platform, reads those sensors correctly and in real time. Holographic, high-definition lenses create multi-dimensional, full-color images with low latency through advanced optical projection. The audio components track where the head is so that virtual sounds appear to come from fixed points in the room, convincing the brain that something real is there.

Because mixed reality opens new possibilities for creation and collaboration, the HoloLens offers a variety of opportunities across research, business, and science. The DiVE is excited to take advantage of this new technology.

For more information on mixed reality and the Windows HoloLens, explore Microsoft’s page about the HoloLens.

New Equipment: Oculus Rift (CV1)

The DiVE is on top of the latest virtual reality gadgets and gizmos. Earlier this year, Oculus released the newest head-mounted virtual reality display: the Oculus Rift (CV1). We are so excited to announce that we have acquired the Rift!

In contrast to the DiVE’s six-sided, CAVE-type system, virtual reality head-mounted displays do not allow one to directly see one’s hands and body while in the virtual world, which leads to a slightly disembodied experience. However, this does provide interesting opportunities for alternative embodiment (e.g., you could look down and see a dragon body and dragon wings!).

The Oculus Rift (CV1) comes with four different pieces: a position tracker, a headset, an Xbox One controller, and an integrated audio system. The position tracker looks like a classy microphone and houses the Constellation tracking system, which monitors infrared LEDs in the headset. LEDs on both the front and the back of the headset allow tracking through a full 360 degrees.

The headset is light and adjustable, and its two lenses magnify the displays while keeping the image sharp. The Adjacent Reality Tracker is a key part of the headset: it enables the system to take into account the many small movements of the head and adjust the rendered view accordingly. The headset’s two displays run at a combined resolution of 2160 x 1200.

The Rift’s audio system provides an immersive sonic experience; sound seems to come from all directions. It does this by combining the Head-Related Transfer Function (HRTF) with the Oculus’ head tracking. The HRTF references data describing how a sound changes as it arrives from different directions. The result is a smooth and immersive sound system. The headset’s headphones are detachable, so users who prefer their own headphones can swap them in.
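To illustrate the basic mechanism, here is a minimal Python sketch of HRTF-style spatialization driven by head tracking. It is an illustration only, not Oculus’ actual implementation; the hrir_bank lookup and its toy contents are hypothetical stand-ins for a measured HRTF data set.

    import numpy as np

    def spatialize(mono, source_dir, head_yaw, hrir_bank):
        """Render a mono signal as two-channel audio for a tracked head.

        mono       -- 1-D numpy array of audio samples
        source_dir -- direction of the sound source in world space (degrees)
        head_yaw   -- current head orientation from the tracker (degrees)
        hrir_bank  -- hypothetical dict mapping a direction (degrees) to a
                      (left, right) pair of equal-length impulse responses
        """
        # Head tracking: express the source direction relative to the
        # listener, so the sound stays put in the world as the head turns.
        relative = (source_dir - head_yaw) % 360

        # Pick the nearest measured direction (real systems interpolate).
        nearest = min(hrir_bank,
                      key=lambda d: min(abs(d - relative), 360 - abs(d - relative)))
        left_ir, right_ir = hrir_bank[nearest]

        # Applying the HRTF is a convolution with each ear's impulse
        # response, which bakes in the direction-dependent cues.
        return np.stack([np.convolve(mono, left_ir),
                         np.convolve(mono, right_ir)])

    # Toy bank: a source at 90 degrees reaches the right ear more strongly.
    bank = {0: (np.array([1.0]), np.array([1.0])),
            90: (np.array([0.3]), np.array([1.0])),
            270: (np.array([1.0]), np.array([0.3]))}
    stereo = spatialize(np.array([1.0, 0.5, 0.25]), 90, 0, bank)

Real systems interpolate between measured directions and update the head pose every frame, but the core idea is the same: the tracked head orientation selects the direction-dependent filters.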

Oculus’ partnership with Microsoft has enabled Oculus to provide an Xbox One controller with every Rift. The controller adds an interactive component to the virtual reality experience, enabling the user to play a part in the world.

Compatibility with Windows 10 is built into the Rift, which enables developers to create new worlds through the Microsoft system. Doing so requires a computer that meets a list of hardware requirements: it must be “Oculus Ready.”

The DiVE is very excited about the research and education opportunities that the Oculus Rift (CV1) will enable. We are also enjoying exploring the new technology by playing applications such as Lucky’s Tale!

For more information on the Oculus Rift, read “How Oculus Rift works: Everything you need to know about the VR sensation” and Oculus’ Page on the Rift.

DiVE Featured on EdTech

Reprinted with permission from EdTech: Focus on Higher Education

Virtual reality provides amazing opportunities in the fields of teaching, learning, and research. Jacquelyn Bengfort focuses on these possibilities in her article “Virtual Reality Facilitates Higher Ed Research and Teaches High-Risk Skills” on EdTech, a publication focused on technology and education.

Bengfort discusses how virtual reality can enhance education by acting as a simulator; it enables professors to “bring the world to their students.” Duke is on the cutting edge of this educational opportunity with its recent renovations to the DiVE.

Virtual reality also provides opportunities for interdisciplinary research. The DiVE brings together a wide variety of professionals to collaborate on projects.

In addition to focusing on the DiVE, Bengfort highlights virtual reality’s role at the California State University Maritime Academy, where it helps train students for sea, and at the Harvard Business School, where it provides students with a virtual classroom in HBX Live.

Read Bengfort’s article at http://www.edtechmagazine.com/higher/article/2016/05/virtual-reality-facilitates-higher-ed-research-and-teaches-high-risk-skills.

Congratulations to the Duke team for an Honorable Mention at the 2016 IEEE VR Conference

A team of Duke researchers, David Zielinski, Hrishikesh Rao, Nick Potter, Lawrence Appelbaum, and Regis Kopper, represented Duke at the 2016 IEEE VR Conference, which took place March 19-23 in Greenville, SC. This year was the 26th year this premier international conference and exhibition took place, featuring some of the most innovative research, brightest minds, and top companies in virtual reality technology.

The Duke team presented their latest research as a poster on “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments.” Out of 84 poster presentations, the team won the honorable mention for the best poster award. This places our team of Duke researchers at the forefront of cutting-edge virtual reality research. A big congratulations to them!


Their presentation, which was also featured as a full paper at the Symposium on 3D User Interfaces, covered recent research analyzing a visual display technique for low frame rate virtual environments called low persistence (LP), and in particular how it compares with the low frame rate high persistence (HP) technique. In the HP technique, the same rendered frame is repeated a number of times until a new frame is generated, a process we have all seen when running complex games on slow computers. With the LP technique, rather than showing a generated frame multiple times, a black frame is shown while waiting for the next new frame. To learn more about the LP technique, the researchers evaluated user learning and performance during a target acquisition task, similar to a shotgun trap shooting simulation, in which the user has to acquire targets moving along several different trajectories. The results showed that the LP technique may be just as useful as HP: the LP condition approaches high frame rate performance within certain classes of target trajectories, and user learning was similar under the LP and high frame rate conditions.

For more information, check out the poster abstract and the full paper.

Team of Duke researchers featured at this year’s IEEE Virtual Reality Conference


The IEEE Virtual Reality Conference is the premier meeting on its topic. This year it will be held in Greenville, SC, featuring recent developments in virtual reality technology and attended by academics, researchers, industry representatives, and VR enthusiasts. Our own David Zielinski will represent Duke, presenting his recent paper, “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments,” co-authored with Hrishikesh Rao, Nick Potter, Marc Sommer, Lawrence Appelbaum, and Regis Kopper, at the IEEE Symposium on 3D User Interfaces, co-located with the Virtual Reality conference. In addition, a team of researchers including Leonardo Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, and Marcio Pinho from PUCRS/Brazil, and MEMS professor and DiVE director Regis Kopper, entered the 7th annual IEEE 3DUI contest, which will be judged live at the Symposium. The purpose of the contest is to promote creative solutions to challenging 3DUI problems, and the Duke team’s submission, the Collaborative Hybrid Virtual Environment, does just that.

Zielinski’s paper analyzes a visual display technique for low frame rate virtual environments called low persistence (LP), and in particular how it differs from the low frame rate high persistence (HP) technique. In the HP technique, one frame of fresh content is repeated a number of times until the system produces the next frame, causing the break in motion perception we usually see when trying to play a video game on a slow computer. With the LP technique, the fresh content is shown only once; instead of repeating it while the next frame is being generated, black frames are inserted, effectively causing a stroboscopic effect. To learn more about the LP technique, researchers at Duke evaluated user learning and performance during a target acquisition task. This task is similar to a shotgun trap shooting simulation, in which the user has to acquire targets moving along several different trajectories. The results showed that the LP technique performs as well as HP: the LP condition approaches high frame rate performance within certain classes of target trajectories, and user learning was similar under the LP and high frame rate conditions. A future area of research is to investigate in what situations the LP technique can offer performance or experience benefits over traditional low frame rate simulations.
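To make the contrast concrete, here is a minimal Python sketch of the two strategies for a display that refreshes faster than the renderer can produce frames. The frame labels and function names are illustrative only; this is a sketch of the idea, not the actual display driver logic.

    def high_persistence(frames, repeat):
        """HP: each rendered frame is held on screen for `repeat` refreshes."""
        shown = []
        for frame in frames:
            shown.extend([frame] * repeat)
        return shown

    def low_persistence(frames, repeat, black="BLACK"):
        """LP: each rendered frame is shown once; black frames fill the gap."""
        shown = []
        for frame in frames:
            shown.append(frame)
            shown.extend([black] * (repeat - 1))
        return shown

    # A 60 Hz display with a 15 fps renderer: each new frame spans 4 refreshes.
    print(high_persistence(["F0", "F1"], 4))  # F0 F0 F0 F0 F1 F1 F1 F1
    print(low_persistence(["F0", "F1"], 4))   # F0 BLACK BLACK BLACK F1 BLACK ...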

The Collaborative Hybrid Virtual Environment project entered into the 3DUI contest is a system in which a single virtual object is manipulated simultaneously by two users performing different operations, such as scaling, rotating, and translating. It tested which point of view, exocentric or egocentric, is better for each operation, and how the degrees of freedom should be divided between the two users to complete a task most efficiently. The exocentric view is one where the user stands at a given distance from the object, while the egocentric view is one where the user takes the object’s perspective. When the two users share the same view, they complete the task almost identically, which is not much different from one person completing it alone. By giving the two users different perspectives, complex operations can be performed more efficiently.
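As a rough sketch of how a split of the degrees of freedom across two users might be composed into a single object transform, consider the Python snippet below. The particular assignment, rotation from the egocentric user and placement and scale from the exocentric user, is an illustrative assumption rather than the team’s actual design.

    import numpy as np

    def yaw_matrix(deg):
        """3x3 rotation about the vertical axis."""
        r = np.radians(deg)
        c, s = np.cos(r), np.sin(r)
        return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

    def combine(rotation_deg, translation, scale):
        """Compose one 4x4 object transform from inputs supplied by two
        collaborating users: here, rotation from the egocentric user and
        placement/scale from the exocentric user (hypothetical split)."""
        m = np.eye(4)
        m[:3, :3] = yaw_matrix(rotation_deg) * scale  # rotate, then scale
        m[:3, 3] = translation                        # place in the scene
        return m

    # User A (egocentric) rotates; user B (exocentric) places and scales.
    transform = combine(rotation_deg=45, translation=[1.0, 0.0, -2.0], scale=2.0)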

The Friday, April 17th Visualization Forum will discuss the DiVE’s upgrades


April 17th · Noon – 1pm
D106 LSRC · West Campus

Open to the public

Regis Kopper & David Zielinski · Duke DiVE

The DiVE (Duke immersive Virtual Environment) came online at Duke in 2005. Thanks to an NSF instrumentation grant, we have recently completed a large hardware upgrade of the DiVE. We will discuss the new upgrades and how they will improve the user experience. We will then discuss several of the projects we have presented at conferences in the last year (landscape archeology, training fidelity, visual persistence, and sonification cues), along with several ongoing projects.

DiVE Officially re-opens!

The DiVE officially re-opened on Tuesday, April 14th! We would like to thank everyone who came to its opening and look forward to seeing all the new faces that will be coming from here on out.

DiVE Re-opens on April 14th

 The DiVE’s new face-lift


For the last six months, the Duke immersive Virtual Environment has undergone its first significant set of renovations. The newly remodeled DiVE will officially open to the public on Tuesday, April 14th at 4:00pm in the CIEMAS building, room 1617A. Thanks to a Major Research Instrumentation Award from the National Science Foundation, the DiVE’s renovations will provide industry-leading service for years to come. We are confident that the recent installations will augment our collective sense of reality beyond that which we might generally encounter. For one, the newly installed system generates 1920 x 1920 pixels on each wall (versus the original resolution of 1050 x 1050 pixels). With more than three times the number of pixels (1920² / 1050² ≈ 3.3), we will surely notice a greater amount of detail than we would have before.

These improvements are made possible by the newly installed Christie WU7K-M projectors[1]. To generate a higher-resolution image, each wall has two projectors working in unison. The projectors simultaneously generate the same image, blending the overlap zone between them to increase the output resolution. The DiVE supports multiple platforms—including C, C++ (through CCG), MATLAB, AVIZO, and Unity—which minimizes future compatibility issues.
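The idea behind blending the overlap zone can be sketched in a few lines of Python. One projector fades out while the other fades in across the shared band so that the summed light output stays constant; the ramp shape and the gamma value below are illustrative assumptions, not Christie’s actual blending algorithm.

    import numpy as np

    def edge_blend_ramps(overlap, gamma=2.2):
        """Pixel intensity ramps across a two-projector overlap zone.

        After the display applies its gamma curve, the left projector's
        light falls off exactly as the right projector's rises, so the
        seam carries constant total brightness.
        """
        t = np.linspace(0.0, 1.0, overlap)   # position across the zone
        left = (1.0 - t) ** (1.0 / gamma)    # left projector fades out
        right = t ** (1.0 / gamma)           # right projector fades in
        return left, right

    left, right = edge_blend_ramps(overlap=200)
    # Light output is linear in intensity**gamma, and the two ramps sum to 1.
    assert np.allclose(left**2.2 + right**2.2, 1.0)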

In addition, the newly installed projectors allow us to precisely align the seams of each corner, making breaks at the corners very difficult to detect. As a result of these upgrades, students and faculty can now make out finer details of projects that were undetectable before, and we can use the DiVE as a visual tool for scientific data with greater confidence in our empirical observations.

[1] For projector specs: http://www.christiedigital.com/en-us/business/products/projectors/3-chip-dlp/m-series/Pages/christie-WU7K-M-DLP-Digital-Projector.aspx

Congrats to this year’s FIP International Year of Light Student Winners!

On March 8th, 2015, the DiVE was showcased for the first time since the recent renovations at the International Year of Light Conference. We would like to congratulate this year’s FIP International Year of Light Student Winners: Kapila Wijayarante (1st Place Winner), Sam Migirditch (2nd Place Winner), and Niranjan Sridhar (3rd Place Winner)!

We would also like to take the time to thank all participants. Your efforts have helped us raise global awareness of how light-based technologies offer solutions to global challenges in energy, education, agriculture, and health.

 

DiVE on NPR – Virtual Reality Opens New Worlds

from http://wunc.org/post/virtual-reality-duke-opens-new-worlds

Researchers at Duke University are using a virtual reality center to test experiments that aren’t feasible in the real world.

It’s called the Duke Immersive Virtual Environment, or the DIVE, for short. In reality, it’s a cube. Six sides. You get inside. Images are projected on each wall. With the help of special goggles, the images become an immersive 3-D world. A special wand allows you to interact with the world.

It has applications for everything from psychiatry, to the mining industry, and even creative writing.

One of the simulations is of a kitchen. Whoever enters is asked to find a pair of keys. The sound in the room increases in intensity and volume as the simulation continues. A car starts beeping, a pot boils over and creepy violins play in the background.

Regis Kopper is the director of the DIVE and part of the Pratt School of Engineering. He says the kitchen simulation is a good example of how the DIVE can be used for psychological experimentation.

“This environment was used in an experiment exactly to examine how people would get stressed in a virtual environment. There are actually no keys here,” he says. “The idea is as you hear all this background noise… the idea was to see if people would start getting stressed by examining the skin conductance, the heart rate and all that.”

The great thing about the DIVE is programmability. No matter what your specialty, you can probably come up with a use for it. Kopper says researchers are working on adapting a mining simulation for the DIVE. Mining’s a pretty dangerous profession, and it’s safer to train in a virtual environment than the real one.

There are even applications for something like creative writing. David Zielinski, software engineer for the DIVE, says that MFA students at Duke use it to create virtual worlds.

“That’s another example of the cross disciplinary work we’ve done,” he said. “You can create realistic training environments in here, but you can also be more imaginative and not be limited to creating structures that can even feasibly exist.”

There is another draw to the DIVE. It can help attract people to science. You can’t find these things everywhere. Kopper says there are only about 10 to 15 in the United States. Every Thursday, the DIVE holds an open house where anybody can come.

The DIVE has a variety of practical applications, but it’s also just plain fun. Most tours end with a ride on the virtual roller coaster. It’s thrilling, and, other than the lack of wind rushing past, almost as good as the real thing.

zSpace

Date: April 16th, 2013
Time: 10:45am-noon
Location: CIEMAS Building, Room 1131

  • Discuss ongoing and potential research in the area of 3D immersive technology.
  • Be on the forefront of a major shift in human-computer interaction.
  • Discover how zSpace revolutionizes the way people learn, play and create.
  • Learn how zSpace transforms ordinary PCs into powerful workstations.

new art works in virtual reality

Friday, Feb 8 4-7pm

DiVE
CIEMAS/Fitzpatrick
Engineering Quad, West Campus

The Duke immersive Virtual Environment & the MFA in Experimental and Documentary Arts at Duke University present two new art works in Virtual Reality.

artist talks, immersive experiences, light refreshments

Did You Mean? 
Elizabeth Landesberg, Peter Lisignoli, Laura Dogget in collaboration with David Zielinski
Did You Mean? explores the implications of using the word “word” to imply a single vessel or container of meaning. It invites DiVE participants to play around inside language itself, moving through the multiplicities of possibilities for meaning in a single word, and especially in translation. In Sanskrit, any number of words can be combined using hyphens and still be considered a single modifier, the longest of which has 54 component words joined by hyphens. Participants can select certain elements of this word with the wand and see and/or hear how they are usually defined on their own. Selecting a part of the word with the wand triggers a sound and/or an image, which build to a cacophony of sometimes meaningful, sometimes jarring collisions.


BIDDIE’S BIG BENDER
Laurenn McCubbin and Marika Borgeson in collaboration with David Zielinski
An exploration of the sounds, spaces, and visuals of Las Vegas, through the eyes of a senior citizen looking to have a good time in Sin City.

RSVP on Facebook

Models and techniques that improve and augment interactive visualization at the Friday Visualization Forum

Friday January 25, 2013

Regis Kopper will discuss “Models and techniques that improve and augment interactive visualization” at the Friday Visualization Forum in LSRC D106. Lunch will be provided.

Immersive virtual reality offers the possibility for high fidelity understanding of 3D visualizations. The interaction with components of the 3D visualization, however, is challenging. The complexity of the display coupled with the unconstrained interaction space makes the interaction in such an environment a non-trivial endeavor. In this talk, I will go over my career research arc, from touch generation and multiscale navigation in virtual environments to models for understanding and techniques that improve ray-casting, the basic 3D interaction metaphor. I will also discuss my postdoctoral work on interaction between humans and virtual humans with the goal of interpersonal training. I will finish by presenting and discussing the prospects for continuing research at the Duke immersive Virtual Environment (DiVE) as a unique multidisciplinary research tool.

BME Open House

November 9th, 2012

Do you regularly work with 3D structures? Do you need to understand the relationship between the structures and how they might change over time? The DiVE (Duke Immersive Virtual Environment) is an ideal facility to help you gain an intuitive understanding of your structured data.

The DiVE promotes the use of visualization and virtual reality technologies for improved understanding of scientific data and human cognition.

The new DiVE director, Regis Kopper, has substantial experience applying virtual reality solutions to medical problems.  We invite you to meet Regis and attend a DiVE event just for you.

We are planning an afternoon at the DiVE for Biomedical Engineering on Friday, November 9th from 4:00-5:30. You will have the opportunity to view demos of various medical structures in the DiVE, including the 3D Brain and various molecules. If you have a structure you would like to see, please send us your data file by Nov 5th and we will attempt to view it. Regis Kopper will be available to explain the advantages of importing your structures into an immersive virtual environment.

We are excited about meeting and working with your group.

Cheers!

The DiVE Staff