Team of Duke researchers featured at this year’s IEEE Virtual Reality Conference


The IEEE Virtual Reality Conference is the premier meeting in its field. This year it will be held in Greenville, SC, showcasing recent developments in virtual reality technology to academics, researchers, industry representatives, and VR enthusiasts. Our own David Zielinski will represent Duke and present his recent paper, “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments,” co-authored with Hrishikesh Rao, Nick Potter, Marc Sommer, Lawrence Appelbaum, and Regis Kopper, at the IEEE Symposium on 3D User Interfaces, co-located with the Virtual Reality conference. In addition, a team of researchers including Leonardo Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli and Marcio Pinho from PUCRS/Brazil, together with MEMS professor and DiVE director Regis Kopper, entered the 7th annual IEEE 3DUI contest, which will be judged live at the Symposium. The purpose of the contest is to promote creative solutions to challenging 3DUI problems, and the Duke team’s Collaborative Hybrid Virtual Environment submission does just that.

Zielinski’s paper analyzes a visual display technique for low frame rate virtual environments called low persistence (LP), and in particular how it compares to the conventional low frame rate high persistence (HP) technique. With HP, each freshly rendered frame is repeated a number of times until the system produces the next frame, causing the break in motion perception we usually see when trying to play a video game on a slow computer. With LP, the fresh content is shown only once; instead of repeating it while the next frame is being generated, black frames are inserted, effectively creating a stroboscopic effect. To learn more about the LP technique, researchers at Duke evaluated user learning and performance during a target acquisition task similar to shotgun trap shooting, in which the user has to acquire targets moving along several different trajectories. The results showed that the LP technique performs as well as the HP technique, that the LP condition approaches high frame rate performance for certain classes of target trajectories, and that user learning was similar in the LP and high frame rate conditions. A future area of research is to investigate in which situations the LP technique can offer performance or experience benefits over traditional low frame rate simulations.
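
To make the difference concrete, here is a minimal sketch (our own illustration, not code from the paper; the 60 Hz display and 15 Hz simulation rates are assumptions) of what a display would present on each refresh under the two techniques:

```python
# Minimal sketch (not the authors' code): how a 60 Hz display might present
# content when the simulation only renders new frames at 15 Hz.
# "HP" repeats the last rendered frame until a new one is ready;
# "LP" shows each fresh frame once and inserts black frames in between.

DISPLAY_HZ = 60        # assumed display refresh rate
RENDER_HZ = 15         # assumed (low) simulation frame rate
REFRESHES_PER_FRAME = DISPLAY_HZ // RENDER_HZ   # 4 refreshes per rendered frame


def display_schedule(num_rendered_frames: int, mode: str) -> list:
    """Return the content shown on each display refresh for "HP" or "LP"."""
    schedule = []
    for frame in range(num_rendered_frames):
        for refresh in range(REFRESHES_PER_FRAME):
            if mode == "HP" or refresh == 0:
                schedule.append(f"frame {frame}")   # fresh (or repeated) content
            else:
                schedule.append("black")            # inserted black frame
    return schedule


if __name__ == "__main__":
    print("HP:", display_schedule(2, "HP"))
    # HP: ['frame 0', 'frame 0', 'frame 0', 'frame 0', 'frame 1', ...]
    print("LP:", display_schedule(2, "LP"))
    # LP: ['frame 0', 'black', 'black', 'black', 'frame 1', ...]
```

In both cases the simulation produces new content equally often; only what fills the gaps between fresh frames differs, which is exactly the variable the study isolates.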

The Collaborative Hybrid Virtual Environment project entered into the 3DUI contest is a system in which a single virtual object is manipulated simultaneously by two users performing different operations, such as scaling, rotating, and translating. The team tested which point of view, exocentric or egocentric, is better suited to each operation, and how the degrees of freedom should be divided between the two users to complete a task most efficiently. In the exocentric view the user stands at a distance from the object, while in the egocentric view the user sees the scene from the object’s perspective. If the two users share the same view, they can each complete the task in nearly the same way, which would not differ much from one person completing it alone. By giving the two users different perspectives and complementary degrees of freedom, complex operations can be performed more efficiently.
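
As a rough illustration of that degree-of-freedom split, the sketch below is hypothetical code, not the contest entry itself; the particular assignment (the exocentric user controls translation, the egocentric user controls rotation and scale) is an assumption made only for the example:

```python
# Hypothetical sketch of splitting one shared object's degrees of freedom
# between two collaborating users. User A (exocentric view) moves the object;
# user B (egocentric view) rotates and scales it. Each frame, both inputs are
# merged into a single shared pose, so neither user can override the other's axes.

from dataclasses import dataclass


@dataclass
class ObjectPose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw_degrees: float = 0.0
    scale: float = 1.0


def apply_collaborative_input(pose: ObjectPose,
                              translation_delta: tuple,
                              yaw_delta: float,
                              scale_factor: float) -> ObjectPose:
    """Merge both users' per-frame inputs into one shared object pose."""
    dx, dy, dz = translation_delta          # from the exocentric user
    return ObjectPose(pose.x + dx, pose.y + dy, pose.z + dz,
                      pose.yaw_degrees + yaw_delta,   # from the egocentric user
                      pose.scale * scale_factor)      # from the egocentric user


# Example frame: user A nudges the object forward while user B rotates and enlarges it.
pose = ObjectPose()
pose = apply_collaborative_input(pose, (0.0, 0.0, -0.1), yaw_delta=5.0, scale_factor=1.02)
print(pose)
```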

Friday’s Visualization Forum (April 17th) will discuss the DiVE’s upgrades


April 17th · Noon – 1pm
D106 LSRC · West Campus

Open to the public

Regis Kopper & David Zielinski · Duke DiVE

The DiVE (Duke immersive Virtual Environment) came online at Duke in 2005. Thanks to an NSF instrumentation grant, we have recently completed a large hardware upgrade of the DiVE. We will discuss the new upgrades and how they will improve the user experience. We will then discuss several of the projects we have presented at conferences in the last year (landscape archeology, training fidelity, visual persistence, and sonification cues), along with several ongoing projects.

DiVE Officially re-opens!

The DiVE officially re-opened on Tuesday, April 14th! We would like to thank everyone who came to its opening and look forward to seeing all the new faces that will be coming from here on out.

DiVE Re-opens on April 14th

 The DiVE’s new face-lift


For the last six months, the Duke immersive Virtual Environment has undergone its first significant set of renovations. The newly remodeled DiVE will officially open to the public on Tuesday, April 14th at 4:00pm in the CIEMAS building, room 1617A. Thanks to a Major Research Instrumentation Award from the National Science Foundation, the DiVE’s renovations will provide industry-leading service for years to come. We are confident that the recent installations will augment our collective sense of reality beyond that which we might generally encounter. For one, the newly installed system generates 1920 x 1920 pixels on each wall (versus the original resolution of 1050 x 1050 pixels). With more than three times the number of pixels per wall (roughly 3.7 million versus 1.1 million), we will surely notice a greater amount of detail than we would have before.

These improvements come from the newly installed Christie WU7K-M projectors[1]. In order to generate a higher-resolution image, each wall has two projectors working in unison: each covers part of the wall, and their output is blended across the overlap zone between them to produce a single, higher-resolution image. The DiVE supports multiple platforms, including C, C++ (through CCG), MATLAB, AVIZO, and Unity, which minimizes future compatibility issues.
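
For readers curious about the blending step, here is a minimal sketch of the general edge-blending idea (an illustration of the common technique, not the DiVE’s or Christie’s actual calibration code): each projector’s brightness is ramped down across the shared overlap zone so that the combined intensity stays constant.

```python
# Minimal sketch (assumed, generic edge blending) of how two projectors sharing
# one wall can be blended: inside the overlap zone, the left projector's weight
# ramps from 1 to 0 while the right projector's ramps from 0 to 1, so the two
# weights always sum to 1 and no visible seam or bright band appears.

def blend_weight(x: float, overlap_start: float, overlap_end: float, left: bool) -> float:
    """Brightness multiplier for one projector at horizontal position x (0..1 across the wall)."""
    if x <= overlap_start:
        return 1.0 if left else 0.0
    if x >= overlap_end:
        return 0.0 if left else 1.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    return 1.0 - t if left else t


# Example: a 10% overlap centered on the wall.
for x in (0.2, 0.45, 0.5, 0.55, 0.8):
    w_left = blend_weight(x, 0.45, 0.55, left=True)
    w_right = blend_weight(x, 0.45, 0.55, left=False)
    print(f"x={x:.2f}  left={w_left:.2f}  right={w_right:.2f}  sum={w_left + w_right:.2f}")
```

Real systems typically use a smoother (gamma-corrected) ramp than this linear one, but the principle is the same.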

In addition, the newly installed projectors allow us to precisely align the seams at each corner, making breaks between walls much harder to detect. As a result of these upgrades, students and faculty can now make out finer details of projects that were undetectable before, and we can use the DiVE as a visual tool for scientific data with more confidence in our empirical observations.

[1] For projector specs: http://www.christiedigital.com/en-us/business/products/projectors/3-chip-dlp/m-series/Pages/christie-WU7K-M-DLP-Digital-Projector.aspx

Congrats to this year’s FIP International Year of Light Student Winners!

On March 8th, 2015, the DiVE was showcased for the first time since the recent renovations at the International Year of Light Conference. We would like to congratulate this year’s FIP International Year of Light Student Winners: Kapila Wijayarante (1st Place Winner), Sam Migirditch (2nd Place Winner), and Niranjan Sridhar (3rd Place Winner)!

We would also like to take the time to thank all participants. Your efforts have helped us raise global awareness of how light-based technologies offer solutions to global challenges in energy, education, agriculture, and health.

 

DiVE on NPR – Virtual Reality Opens New Worlds

from http://wunc.org/post/virtual-reality-duke-opens-new-worlds

Researchers at Duke University are using a virtual reality center to test experiments that aren’t feasible in the real world.

It’s called the Duke Immersive Virtual Environment, or the DIVE, for short. In reality, it’s a cube. Six sides. You get inside. Images are projected on each wall. With the help of special goggles, the images become an immersive 3-D world. A special wand allows you to interact with the world.

It has applications for everything from psychiatry, to the mining industry, and even creative writing.

One of the simulations is of a kitchen. Whoever enters is asked to find a pair of keys. The sound in the room increases in intensity and volume as the simulation continues. A car starts beeping, a pot boils over and creepy violins play in the background.

Regis Kopper is the director of the DIVE and part of the Pratt School of Engineering. He says the kitchen simulation is a good example of how the DIVE can be used for psychological experimentation.

“This environment was used in an experiment exactly to examine how people would get stressed in a virtual environment. There are actually no keys here,” he says. “The idea is as you hear all this background noise… the idea was to see if people would start getting stressed by examining the skin conductance, the heart rate and all that.”

The great thing about the DIVE is programmability. No matter what your specialty, you can probably come up with a use for it. Kopper says researchers are working on adapting a mining simulation for the DIVE. Mining’s a pretty dangerous profession, and it’s safer to train in a virtual environment than the real one.

There are even applications for something like creative writing. David Zielinski, software engineer for the DIVE, says that MFA students at Duke use it to create virtual worlds.

“That’s another example of the cross disciplinary work we’ve done,” he said. “You can create realistic training environments in here, but you can also be more imaginative and not be limited to creating structures that can even feasibly exist.”

There is another draw to the DIVE. It can help attract people to science. You can’t find these things everywhere. Kopper says there are only about 10 to 15 in the United States. Every Thursday, the DIVE holds an open house where anybody can come.

The DIVE has a variety of practical applications, but it’s also just plain fun. Most tours end with a ride on the virtual roller coaster. It’s thrilling, and, other than the lack of wind rushing past, almost as good as the real thing.