360-Video Multiuser Analysis Tool

A 360-video multiuser analysis tool is currently being developed at the lab as a teleconferencing platform: multiple people in multiple places can watch the same video together in virtual reality. The application currently runs on the Oculus Rift and uses Oculus Touch controllers for annotations, seen in blue in the photo above. The development team, consisting of Mitchell Berger, Isaac Andersen, and Rayan Tofique, is working toward pause-and-play functionality as well as other networking and UI components. They see one potential use of the application as a teaching tool in museums and other interactive learning environments. The featured photo is a screenshot of the interface, including purple annotations.

The Effects of Rest Frames on Simulator Sickness Reduction

Master’s student Zekun Cao is currently working on a project that addresses simulator sickness, a very common problem in virtual reality. VR sickness occurs, like most forms of motion sickness, when visual, vestibular (inner ear), and muscular cues do not match up the way they normally would in the real world. Take a flight simulator, for example. While flying an actual plane, a person would feel and see the plane turning, rolling, and pitching, and their muscles would react to the movement accordingly. In virtual reality, the only motion cue received is visual, and this separation between what is felt and what is seen is what makes users sick. This project investigates the effects of rest frames on simulator sickness reduction. In Zekun’s model, the rest frame takes the form of a mask that appears in the user’s field of view. The hypothesis was that this fixed reference point would help users cope with simulator sickness, and a user study supported it. Participants were asked to complete six laps of a circular course in a simulated environment and to report a sickness rating from 1 to 10 at six different points along the circle. With the rest frame, subjects completed more laps and reported lower sickness scores on average.
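
The core idea of a rest frame is straightforward to prototype in an engine such as Unity (which other DiVE projects use). The sketch below is a minimal illustration with hypothetical object names, not Zekun’s actual study code: parenting the mask geometry to the head-tracked camera keeps it fixed in the user’s field of view no matter how the simulated world moves.

```csharp
using UnityEngine;

// Illustrative sketch only: keeps a "rest frame" mask fixed in the user's
// field of view by parenting it to the head-tracked camera. Names such as
// maskPrefab are hypothetical, not from the actual study code.
public class RestFrameMask : MonoBehaviour
{
    public GameObject maskPrefab;   // hypothetical vignette/mask geometry
    public float distance = 0.5f;   // meters in front of the eyes

    void Start()
    {
        // Instantiate the mask as a child of the camera so it inherits
        // all head motion and therefore never moves relative to the view.
        GameObject mask = Instantiate(maskPrefab, Camera.main.transform);
        mask.transform.localPosition = new Vector3(0f, 0f, distance);
        mask.transform.localRotation = Quaternion.identity;
    }
}
```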

Digital Cities and Cyberarchaeology

Digital Cities and Cyberarchaeology is a Bass Connections project currently housed at the DiVE. The project is a collaboration among individuals across multiple fields of study, including Classical Studies; Art, Art History and Visual Studies; remote sensing; and 3D technologies, with the goal of recreating an archaeological dig site in the Etruscan city of Vulci, Italy, in virtual reality. The recreation is built from spatial and environmental data gathered by an archaeological team on site; it will be compatible with Oculus Rift and HTC Vive headsets and will allow simultaneous use in hybrid systems. Researchers have conducted user testing to vet the project, focusing on ease of use and availability of data for the project’s target demographic: researchers who will use the product to visualize their work. Since the work is product based, the results are only as good as users say they are, and according to the research team, feedback so far has been positive.

Specimen Box

Virtual reality is becoming better and better at imitating the real world as graphics and other technologies improve. Yet one of the barriers that keeps us from being fully immersed is the need for handheld wands, or other tracked devices, to aid interaction. The Specimen Box project explores new interaction techniques that may be more natural in world-fixed displays, such as the CAVE-type system here at the Duke Immersive Virtual Environment.

The Specimen Box is a clear physical box with a tracking component which, when in the environment, appears to house virtual content. Users can hold the box and manipulate the content inside. The user feels the weight of a physical object in their hands, resulting in less cognitive dissonance between what interacting with an object should feel like and what it actually feels like in virtual reality. Researchers hypothesized that this interaction technique would increase the speed at which individuals can complete manipulation tasks.
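
In engine terms, the illusion can be achieved by copying the tracked pose of the physical box onto the virtual contents every frame. The Unity-style sketch below illustrates the pattern with placeholder names; it is not the project’s actual rendering code.

```csharp
using UnityEngine;

// Illustrative sketch: lock virtual contents to the tracked physical box.
// "boxTracker" stands in for whatever transform the tracking system
// updates each frame (e.g., a tracker node); the name is hypothetical.
public class SpecimenBoxContents : MonoBehaviour
{
    public Transform boxTracker;    // pose of the physical box, per frame

    void LateUpdate()
    {
        // Copy the tracked pose so the virtual specimen appears to sit
        // inside the physical box the user is holding.
        transform.SetPositionAndRotation(boxTracker.position,
                                         boxTracker.rotation);
    }
}
```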

A recent user study asked subjects to read the colors written on each side of a virtual box projected inside the Specimen Box. In the easiest trials, the font color matched the word that was written (e.g., “green” was written in green). In the harder trials, the font color did not match the word. Over repeated trials, the study found that participants’ response times were faster when manipulating the physical box than when manipulating a purely virtual one.
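
For readers unfamiliar with the task design, the congruent/incongruent manipulation is easy to express in code. The plain C# fragment below is a hypothetical illustration of how such Stroop-style trials could be generated and timed; it is not the study’s actual software.

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch of Stroop-style trial generation and timing;
// not the study's actual code.
class StroopTrial
{
    static readonly string[] Colors = { "red", "green", "blue", "yellow" };
    static readonly Random Rng = new Random();

    // Returns (word, fontColor). In congruent trials they match;
    // in incongruent trials the font color is deliberately different.
    static (string word, string fontColor) NextTrial(bool congruent)
    {
        string word = Colors[Rng.Next(Colors.Length)];
        if (congruent) return (word, word);

        string font;
        do { font = Colors[Rng.Next(Colors.Length)]; } while (font == word);
        return (word, font);
    }

    static void Main()
    {
        var (word, font) = NextTrial(congruent: false);
        var watch = Stopwatch.StartNew();
        Console.WriteLine($"Name the FONT color: '{word}' shown in {font}");
        Console.ReadLine();   // stand-in for the participant's response
        Console.WriteLine($"Response time: {watch.ElapsedMilliseconds} ms");
    }
}
```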

Escape Room

Senior Meng’en Huang developed a virtual reality application, Escape Room, as an independent study project. The goal of her independent study was to explore the equipment and applications of a virtual environment. As a Computer Science and Visual Media Studies double major, Huang’s interest in virtual reality stems from its ability to merge her interests.

Developed in Unity and connected to the DiVE through MiddleVR, Escape Room combines trendy entertainment with cutting-edge technology. It presents the user with a variety of puzzles that must be solved before the room can be escaped, just like a real escape room.

In a real life escape room, a group of people are locked in a room and given many tasks to complete before being allowed to escape. The participants have access to hints if they are having trouble. Escape rooms combine both the social and the physical, enabling people to be entertained without a screen.

Escape rooms have been present in Asia, Europe, and on the West Coast for years. Now they are an increasingly popular form of entertainment in many large United States cities. For more information on escape rooms, read Jessica Contrera’s article in the Washington Post called “It’s no puzzle why ‘escape room’ adventures are so popular.”

The application that Huang created is a virtual version of these escape rooms. It takes place in an old wooden house inhabited by a soul who wants to be loved and understood. The soul communicates through a mask in the wall, and the player may only leave the room once the soul’s requests are met.

The house has an eerie feel to it. It has no door and is decorated with floating candles, a mirror, and unintelligible clocks and paintings. When the player attempts to examine something closely, he or she is pushed away. Through exploration and experimentation, the player gradually figures out how to move things around the room.

There is a risk involved in playing this game: if a player pushes something out of the room, that object can never reenter. Huang says that if this happens the player “may lose the chance to satisfy the lonely soul and will have to accompany him forever.”
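
A rule like this is typically implemented with a boundary trigger volume. The Unity-style sketch below is a hypothetical illustration of the lockout mechanic, not Huang’s actual source:

```csharp
using UnityEngine;

// Hypothetical sketch of the "lost forever" rule, not the game's code.
// Attach to a trigger collider that encloses the whole room; any rigidbody
// that fully leaves the volume is frozen so it can never be brought back.
public class RoomBoundary : MonoBehaviour
{
    void OnTriggerExit(Collider other)
    {
        Rigidbody body = other.attachedRigidbody;
        if (body != null)
        {
            body.isKinematic = true;   // stop all further physics motion
            Debug.Log(other.name + " has left the room for good.");
        }
    }
}
```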

The DiVE wishes Meng’en the best in her post-graduation adventures!

Wayfinding by Audio Cues in Virtual Environments

Undergraduate Computer Science major Ayana Burkins hopes to evaluate the use of 3D spatial sound as a wayfinding aid in virtual environments. Users navigate through mall-like mazes with various target locations, each of which may contain a localized audio cue, and perform several wayfinding tasks. We hypothesize that users will be able to perform the tasks faster and more accurately in an environment with spatial sound than in one without.
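
In engine terms, a localized audio cue is a fully spatialized 3D sound source placed at a target location; the listener’s tracked head position then determines the perceived direction and distance. A minimal Unity-style sketch follows (hypothetical names, not the study’s actual code):

```csharp
using UnityEngine;

// Hypothetical sketch: attach to a target location to emit a looping,
// fully spatialized audio cue the user can home in on.
[RequireComponent(typeof(AudioSource))]
public class AudioBeacon : MonoBehaviour
{
    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1f;                  // 1 = fully 3D spatialized
        src.rolloffMode = AudioRolloffMode.Logarithmic;
        src.loop = true;
        src.Play();                             // cue localized at target
    }
}
```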

Walk Again Project

The first ceremonial kick of the World Cup game (Brazil 2014) may be made by a paralyzed teenager, who, flanked by the two contending soccer teams, will saunter onto the pitch clad in a robotic body suit.

Led by Miguel Nicolelis, the Walk Again Project is a nonprofit, international collaboration among the Duke University Center for Neuroengineering, the Technical University of Munich, the Swiss Federal Institute of Technology in Lausanne, the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil, the University of California, Davis, the University of Kentucky, and Regis Kopper of the Duke Immersive Virtual Environment.

The project started with research from the Nicolelis lab using hair-thin, flexible sensors, known as microwires, that have been implanted into the brains of rats and monkeys. These flexible electrical prongs can detect minute electrical signals, or action potentials, generated by hundreds of individual neurons distributed throughout the animals’ frontal and parietal cortices—the regions that define a vast brain circuit responsible for the generation of voluntary movements.

Now, with further advancements, the candidate teenage kicker will be trained in virtual reality to control technology that will eventually allow them to kick the ball at the World Cup. They will do this by wearing a non-invasive headpiece that detects brain waves.

More information about the Walk Again Project can be found in this Washington Post article.

Civil Engineering

As part of completing their engineering curriculum requirements, seniors take Integrated Structural Design and/or Integrated Environmental Design, in which they interact as employees of a fictitious company, Overture. The courses are interwoven with Architectural Engineering II (Overture’s “architectural division”), taken by juniors and seniors pursuing an architectural engineering certificate. Student “employees” work within their division and then collaborate in cross-divisional teams, tackling aspects of real-world engineering projects. Students typically collaborate on the design of 50,000 square feet of campus research space. To kick off the project, a site development roundtable is conducted with local multidisciplinary professionals. Students are responsible for different facets of the project, ultimately working as teams toward the most cost-effective, code- and standards-compliant design. As project deliverables, students develop documentation and reports supporting their design assumptions, in a manner sufficient for bidding.


These immersive tools allow students an experiential opportunity that reveals the impact and interrelationship of technical and aesthetic decisions made during the design process. The Duke Immersive Virtual Environment (DiVE) provides exactly this kind of kinesthetic learning experience for Overture students. The six-sided virtual theater provides an immersive and fully interactive spatial experience of each Overture team’s design solutions.


Çatalhöyük

Çatalhöyük @ DiVE was featured on Duke Today as well as on Duke’s YouTube channel.

The Çatalhöyük @ DiVE project seeks to make available to the Duke community and the general public a 1:1-scale, immersive representation of a mud brick Neolithic house (Building 89) that has been under excavation at the archaeological site of Çatalhöyük, in Central Anatolia, Turkey, since 2011. Building 89 is a well-preserved mud brick house belonging to Çatalhöyük, an outstanding example of a prehistoric settlement that was inhabited for over 1,500 years by a society of early farmers, builders, and artists.

The site is one of the first urban centers in the world (circa 7400 BC) and features some of the first examples of wall paintings and mural art. These spectacular artworks provide a clear view of life and ritual scenes from 9,000 years ago, making Çatalhöyük universally recognized as a milestone in our understanding of the origins of agriculture and civilization. The digital documentation process for Building 89 has produced a rich set of multimodal information, including terrestrial laser scanning data, image-based 3D models, GIS, drawings, pictures, stereo videos, and metadata.


Thanks to the immersive visualization at the DiVE, archaeologists and students can now experience and study this Neolithic house as if they were actually on site in Turkey, without leaving Duke’s West Campus. In addition, the immersive experience of Building 89 at the DiVE is enhanced by an in-context stratigraphic layers menu, volumetric visualization of the excavated areas, shaders, first-hand interaction with the models, and a virtual anastylosis that shows how the building looked before being abandoned.


ML2VR – Providing MATLAB Users an Easy Transition to Virtual Reality and Immersive Interactivity

David J. Zielinski and other collaborators have developed a software system that integrates easily with MATLAB scripts to provide the capability to view visualizations and interact with them in virtual reality (VR) systems. We call this system “ML2VR” and expect it will introduce more users to VR by enabling a large population of MATLAB programmers to transition easily to immersive systems.

IEEE VR Conference

Evaluating Display Fidelity and Interaction Fidelity in a Virtual Reality Game

Immersive virtual reality (VR) allows us to achieve very high levels of fidelity. This study evaluated display fidelity and interaction fidelity independently, at extremely high and low levels, for a VR first-person shooter (FPS) game. The goal was to gain a better understanding of the effects of fidelity on the user in a complex, performance-intensive context. The results of the study indicate that both display and interaction fidelity significantly affect strategy and performance, as well as subjective judgments of presence, engagement, and usability. In particular, performance results were strongly in favor of two conditions: low-display, low-interaction fidelity (representative of traditional FPS games) and high-display, high-interaction fidelity (similar to the real world).

Comparison of Virtual Environments using a Stressful Task

The 3D Kitchen is an interactive kitchen scene in which every object (drawers, dishes, food) can be picked up and moved in the world with realistic physics. The kitchen has been used as a basis for several cognitive psychology experiments. Most recently, researcher Kwanguk Kim conducted a “Comparison of Virtual Environments using a Stressful Task.” In this study, fifty-three participants were asked to perform a modified Stroop task to investigate the effects of different virtual environment technologies on emotional arousal and task performance. The participants were examined for their reactions to both low- and high-stress conditions in three virtual environment systems: a desktop system, a head-mounted display (HMD), and the DiVE. Results were measured based on self-reported emotional arousal and valence, skin conductance, task performance, presence, and simulator sickness. The researchers found that the DiVE, the one fully immersive system, induced the highest sense of presence, while the HMD system elicited the most simulator sickness.

Field Goals – Evaluating the Impact of Virtual Reality in Electromagnetics Education

Field Goals is a game developed for the Duke Immersive Virtual Environment (DiVE) that allows for 3D visualization of and interaction with electromagnetic fields and forces, neither of which is easily visualized in everyday life. In each game level, the user controls a charged particle and must hit a target while taking into account the effects of the various electric and/or magnetic fields present in the level. These fields affect the trajectory of the particle, and Field Goals uses the DiVE’s 3D visualization capabilities to show the full path of the particle as it travels through the level. The levels are fully customizable: subsequent users can easily change the number of fields, as well as their strength, color, and position, without diving into the source code.
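
The physics behind each level is the Lorentz force, F = q(E + v × B). The sketch below shows, in hypothetical Unity-style code rather than the game’s actual source, how a particle’s trajectory could be stepped forward each physics frame under uniform fields:

```csharp
using UnityEngine;

// Hypothetical sketch of the physics behind Field Goals: integrate the
// Lorentz force F = q(E + v x B) with a semi-implicit Euler step.
// Assumes uniform fields; all names and defaults are illustrative.
public class ChargedParticle : MonoBehaviour
{
    public float charge = 1f;               // q (arbitrary units)
    public float mass = 1f;                 // m (arbitrary units)
    public Vector3 eField = Vector3.zero;   // uniform electric field
    public Vector3 bField = Vector3.up;     // uniform magnetic field
    Vector3 velocity;

    void FixedUpdate()
    {
        // a = (q/m) * (E + v x B)
        Vector3 accel = (charge / mass) *
                        (eField + Vector3.Cross(velocity, bField));
        velocity += accel * Time.fixedDeltaTime;        // update v first
        transform.position += velocity * Time.fixedDeltaTime;
    }
}
```

Exposing the charge, mass, and field vectors as public parameters echoes the game’s design goal of letting users reconfigure levels without touching source code.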

Three-Dimensional Reconstruction of Premature Infant Retinal Vessels

Babies born prematurely are prone to several diseases. One of them is retinopathy of prematurity, a leading cause of blindness due to retinal vessel immaturity. These fragile vessels can be imaged with optical coherence tomography, a non-invasive imaging tool. Using volume-rendering software, we then produced three-dimensional reconstructions of normal and abnormal vessels in these infants. These findings have helped clinicians understand, from another perspective, the dynamic changes occurring in this serious disease affecting the eyes of these babies.


Super-KAVE

Located under Japan’s Mount Ikenoyama, the Super-Kamiokande (or “Super-K”) neutrino detector is used to study neutrino particle physics. The Super-K detector consists of a cylindrical stainless steel tank (41.4m tall and 39.3m in diameter) holding 50,000 tons of water and 13,031 photomultiplier tubes (PMTs). To view the data captured by these sensors, many physicists use 2D visualization tools which present the data color-coded on a deconstructed representation of the cylinder. Unfortunately, this deconstructed visualization makes it difficult for physicists to fully visualize patterns of neutrino interactions. To address this, we have developed a novel virtual reality (VR) application called “Super-KAVE”, which uses a CAVE to immerse users in a life-size representation of the Super-K detector. Super-KAVE displays the collocation of photon sensors and their color-coded data, provides a new visualization technique for neutrino-interaction patterns, and supports transitioning between data events. In this paper, we describe in detail the Super-K detector and its data, discuss the design and implementation of our Super-KAVE application, and report on its expected uses.


Super-KAVE is a visualization application for datasets from the Super-Kamiokande neutrino detector in Japan.  Given a dataset produced by either simulation or the actual detector, we can provide a full-scale, explorable, immersive visualization of the results.

The data originates in a FORTRAN library, which we first parse with a FORTRAN script. Our OpenGL system, backed by the Syzygy parallel rendering library, then reads in the data file and places the user inside a full-scale representation of the 40-meter-tall cylindrical detector. There the user can fly around via standard DiVE controls and access numerous visualization options through a hierarchical menu system.

Super-KAVE was largely based on, and inspired by, an existing FORTRAN Super-K visualization application called Superscan, which is used by our physics colleagues.


A master feature list is as follows:

  • Full-scale wireframe representation of the detector (40 meters tall, 40 meters in diameter)
  • Disk representations of the inner photodetector PMTs, colored by either charge or time data and scaled by charge data (see the sketch after this list).
  • Disk representations contained within a square boundary, representing the outer photodetector PMTs, also colored by charge or time data and scaled by charge data.
  • Colored lines drawn on the walls, representing the intersection of each Cherenkov cone with the walls. Color is based on the type of particle producing the Cherenkov radiation.
  • A spherical object representing the neutrino interaction vertex.
  • Lines between the neutrino vertex and the wall intersections, representing the Cherenkov cone itself.
  • A hierarchical menu allowing control over:
    • Which Cherenkov cones are being drawn
    • Whether detectors are colored by CHARGE or TIME
    • Which event is being shown
    • Whether the outer detector PMTs are being shown.
  • A tablet display granting the user:
    • The current system hand location (as positional feedback)
    • The current event number out of the total number of events
    • Current coloring mode
    • Whether the outer detector is visible or not.
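
As a rough illustration of the PMT coloring and scaling described above, the sketch below maps a normalized charge (or time) value onto a cool-to-warm color ramp and scales each disk by charge. It is written as hypothetical C# for clarity; the actual system is built on OpenGL and Syzygy, and this is not its code.

```csharp
using UnityEngine;

// Hypothetical sketch of the PMT coloring described above: normalize a
// PMT's charge (or hit time) into [0,1], map it onto a cool-to-warm
// color ramp, and scale the disk by charge. Not the Syzygy/OpenGL code.
public static class PmtColoring
{
    public static Color ColorFor(float value, float min, float max)
    {
        float t = Mathf.InverseLerp(min, max, value); // normalize to [0,1]
        return Color.Lerp(Color.blue, Color.red, t);  // cool -> warm ramp
    }

    public static float DiskScale(float charge, float maxCharge,
                                  float maxRadius = 0.35f)
    {
        // Larger charge -> larger disk, capped below the PMT spacing.
        return maxRadius * Mathf.Sqrt(Mathf.Clamp01(charge / maxCharge));
    }
}
```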

Nicotine – Smoking and How It Changes the Brain

Developed with SEDAPA funding from the National Institute on Drug Abuse.

Travel into the avatar’s brain to the “reward pathway.” There, you will interact with nicotine molecules to learn how smoking changes receptors for nicotine on the neurons that provide pleasurable feelings. You’ll take a ride along the reward pathway... woo-hoo! It’s the next best thing to “being there.”

To watch a video of the experience, click here.

Did You Mean?

Did You Mean? explores the implications of using the word “word” to imply a single vessel or container of meaning. It invites DiVE participants to play around inside language itself, moving through the multiplicity of possible meanings in a single word, especially in translation. In Sanskrit, any number of words can be combined using hyphens and still be considered a single modifier; the longest such compound here has 54 component words joined by hyphens. Participants can select certain elements of this word with the wand and see and/or hear how they are usually defined on their own. Selecting a part of the word with the wand triggers a sound and/or an image, and these build to a cacophony of sometimes meaningful, sometimes jarring collisions.
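
Interactions like this are commonly built as a ray cast from the tracked wand. The Unity-style sketch below is a hypothetical illustration of the pattern, with placeholder names rather than the installation’s actual code: when the wand’s ray hits a word segment, that segment plays its associated sound.

```csharp
using UnityEngine;

// Hypothetical sketch of wand selection: ray cast from the tracked wand;
// a hit word segment plays its own sound. Placeholder names throughout.
public class WandSelector : MonoBehaviour
{
    public Transform wand;   // the tracked wand's transform

    void Update()
    {
        if (Physics.Raycast(wand.position, wand.forward, out RaycastHit hit))
        {
            // Each selectable word segment carries its own AudioSource.
            var sound = hit.collider.GetComponent<AudioSource>();
            if (sound != null && !sound.isPlaying)
                sound.Play();
        }
    }
}
```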

Biddie’s Big Bender

This project is an exploration of the sounds, spaces, and visuals of Las Vegas, through the eyes of a senior citizen looking to have a good time in Sin City.

La Villa Di Livia

La Villa Di Livia was created to let users explore a Roman villa, both in its current state of ruins and in a hypothetical reconstruction of the space. The ruins are recreated from field research in which Professor Forte collects data from archaeological and ancient landscapes using laser scanning, photomodelling, photogrammetry, differential global positioning systems, spatial technologies, movies, and traditional archaeological documentation.

Solomon’s Temple

For a course in the Divinity School, Professor Anathea Portier-Young brought students into the DiVE to discuss a virtual reconstruction of Solomon’s Temple. She paired the immersive experience with her students’ reading of the books of Kings and Chronicles, and timed it to coincide with a lecture on Chronicles that focused on worship. The project allowed students to physically and visually experience the religious environment of an otherwise textually encountered site (only the foundations remain today).

Class discussion used the virtual experience to reflect on the textual descriptions of the Temple, how the Temple functioned as a symbol in biblical literature, and how the literary accounts may have functioned as a virtual experience for ancient Jews living outside of Israel. By situating the model topographically in the Jerusalem environs, students have the pilgrimage experience of ascending to the imposing Temple complex, with a focus on experiencing it as a religious space. A reliable virtual model allows students to explore and experience ancient reality and what it means to be in an ancient Temple space, an experience that cannot be had in any modern context.