Team of Duke researchers featured at this year’s IEEE Virtual Reality Conference


The IEEE Virtual Reality Conference, the premier meeting in its field, will be held this year in Greenville, SC, featuring recent developments in virtual reality technology and drawing academics, researchers, industry representatives, and VR enthusiasts. Our own David Zielinski will represent Duke, presenting his recent paper, “Evaluating the Effects of Image Persistence on Dynamic Target Acquisition in Low Frame Rate Virtual Environments,” co-authored with Hrishikesh Rao, Nick Potter, Marc Sommer, Lawrence Appelbaum, and Regis Kopper, at the IEEE Symposium on 3D User Interfaces, co-located with the Virtual Reality conference. In addition, a team of researchers including Leonardo Soares, Thomas Volpato de Oliveira, Vicenzo Abichequer Sangalli, and Marcio Pinho from PUCRS/Brazil, along with MEMS professor and DiVE director Regis Kopper, entered the IEEE 3DUI 7th annual contest, which will be judged live at the Symposium. The contest promotes creative solutions to challenging 3DUI problems, and the Duke team’s submission, the Collaborative Hybrid Virtual Environment, does just that.

Zielinski’s paper analyzes a visual display technique for low frame rate virtual environments called low persistence (LP). Of particular interest is how it compares to the low frame rate high persistence (HP) technique. With HP, one frame of fresh content is repeated a number of times until the system produces the next frame, causing the break in motion perception we usually see when trying to play a video game on a slow computer. With LP, the fresh content is shown only once, and black frames are inserted while the next frame is being generated, effectively producing a stroboscopic effect. To learn more about the LP technique, researchers at Duke evaluated user learning and performance during a target acquisition task, similar to shotgun trap shooting, in which the user had to acquire targets moving along several different trajectories. The results showed that the LP technique performs as well as HP. The LP condition even approached high frame rate performance within certain classes of target trajectories, and user learning was similar under LP and the high frame rate system. A future area of research is to investigate in which situations the LP technique can offer performance or experience benefits over traditional low frame rate simulations.
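The difference between the two techniques can be sketched as frame scheduling. This is a minimal illustration, not code from the paper: the refresh rate, render rate, and frame labels below are all assumed values chosen for clarity.

```python
# Illustrative sketch of HP vs. LP frame scheduling (rates are assumptions):
# with a 60 Hz display and rendering at 15 fps, each rendered frame spans
# 4 refresh intervals. HP repeats the stale frame until the next one is
# ready; LP shows it once and fills the remaining refreshes with black,
# producing the stroboscopic effect described above.

DISPLAY_HZ = 60
RENDER_FPS = 15
REPEATS = DISPLAY_HZ // RENDER_FPS  # display refreshes per rendered frame

def high_persistence(frames):
    """HP: repeat each rendered frame until the next frame is produced."""
    return [f for f in frames for _ in range(REPEATS)]

def low_persistence(frames, black="BLACK"):
    """LP: show each rendered frame once, then insert black frames."""
    out = []
    for f in frames:
        out.append(f)
        out.extend([black] * (REPEATS - 1))
    return out

rendered = ["F0", "F1"]
print(high_persistence(rendered))
# ['F0', 'F0', 'F0', 'F0', 'F1', 'F1', 'F1', 'F1']
print(low_persistence(rendered))
# ['F0', 'BLACK', 'BLACK', 'BLACK', 'F1', 'BLACK', 'BLACK', 'BLACK']
```

Both schedules occupy the same number of display refreshes; only what fills the gap between fresh frames differs.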

The Collaborative Hybrid Virtual Environment project entered into the 3DUI contest is a system in which a single virtual object is manipulated simultaneously by two users performing different operations, such as scaling, rotating, and translating. It tested which point of view, exocentric or egocentric, is better suited to each operation, and how the degrees of freedom should be divided between the two users to complete a task most efficiently. In the exocentric view the user stands at a distance from the object, while in the egocentric view the user sees from the object’s perspective. When the two users share the same view, they tend to perform the task almost identically, which is not much different from one person completing it alone. By giving the two users different perspectives, complex operations can be performed more efficiently.
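The idea of dividing degrees of freedom between two users can be sketched as composing their partial inputs into one object pose. This is a hypothetical 2D illustration, not the project's actual implementation: the function names and the particular split (one user rotates, the other translates) are assumptions chosen for simplicity.

```python
# Hypothetical sketch of a two-user DOF split: user A controls rotation,
# user B controls translation, and their inputs are composed into a
# single 2D object pose each frame. All names are illustrative.
import math

def compose_pose(angle_deg, translation):
    """Combine user A's rotation and user B's translation into one pose."""
    a = math.radians(angle_deg)
    return {
        "rotation": ((math.cos(a), -math.sin(a)),
                     (math.sin(a),  math.cos(a))),
        "translation": translation,
    }

def apply_pose(pose, point):
    """Transform a point of the shared object by the composed pose."""
    (r00, r01), (r10, r11) = pose["rotation"]
    x, y = point
    tx, ty = pose["translation"]
    return (r00 * x + r01 * y + tx, r10 * x + r11 * y + ty)

# User A rotates the object 90 degrees; user B moves it by (1, 0).
pose = compose_pose(90.0, (1.0, 0.0))
corner = apply_pose(pose, (1.0, 0.0))  # the point (1, 0) ends up near (1, 1)
```

Because neither user has to supply every degree of freedom, each can focus on the operation their viewpoint is best suited for.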