What’s Next For VR Audio
Where VR Audio is At
Before the age of VR, the 2D video story was not influenced by the end-user’s interaction. Spatial resolution for audio improved simply by adding more speakers around the end-user’s flat rectangular screen. The biggest hurdle for immersion was instead the ‘present room effect’: one could never be fully inside the story because the virtual world was limited by the screen size. This is now a different story, because the presence of the real world is blocked out by wearing an HMD and headphones. VR certainly helps the content consumer be completely transported to a different world, but it still delivers different levels of immersion depending on the type of content.
In 360 video or 3DOF-type content, the world is already pre-rendered. The three-dimensional space is projected onto a spherical world that you can look at from any direction at will, but cannot walk around in; your position remains fixed in one spot. This is why Ambisonics, a way of recording and reproducing 3D sound as a snapshot of the sound field, became such a popular audio format for 360 videos. Just like the 360 video itself, this spherical audio format can easily be rotated to reflect head orientation (yaw, pitch, and roll). However, Ambisonics is limited to 360-type content, where the end-user is fixed in one position. Increasing the order of Ambisonics does not add interactivity or enable 6DOF; it merely increases the spatial resolution. Think of it as how increasing pixel resolution doesn’t turn a 360 video into a walkable one.
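To make the two points above concrete (this is an illustration, not from the article): an order-N Ambisonics sound field needs (N+1)² channels, and a yaw head rotation is just a small matrix applied to the B-format channels. The sketch below assumes FuMa-style channel ordering (W, X, Y, Z) with X pointing forward, Y left, and Z up; the function names are my own.

```python
import math

def ambisonic_channels(order: int) -> int:
    """Number of full-sphere Ambisonics channels for a given order."""
    return (order + 1) ** 2

def rotate_foa_yaw(w: float, x: float, y: float, z: float, yaw_rad: float):
    """Rotate one first-order B-format sample set about the vertical axis.
    W (omnidirectional) and Z (up) are unaffected by a pure yaw rotation."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return w, x * c - y * s, x * s + y * c, z

# First order needs 4 channels; raising the order only adds resolution.
print(ambisonic_channels(1), ambisonic_channels(2), ambisonic_channels(3))
```

Note that the rotation touches only the playback-side mix: no matter how high the order, the listener’s position inside the recorded snapshot cannot change.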
Meanwhile, full VR or 6DOF content is rendered in real time while the user interacts with and moves around the scene. This requires the objects in the scene to be controlled individually, rather than as a chunk of pre-configured video and audio. When each sound source is delivered to the playback side as an individual object signal, it can truly reflect both the environment and the way the user is interacting within it. This full control capability of object-based audio can also be used in 2D or 360 video, but its potential is best realized in full VR.
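Here is a minimal, hypothetical sketch of the object-based idea (not any engine’s real API): each object is a mono signal plus positional metadata, and the playback side re-spatializes it from the listener’s current pose, here with toy inverse-distance attenuation and constant-power stereo panning. The class and function names are made up for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One sound source delivered as an individual signal plus metadata."""
    samples: list   # mono source signal
    x: float        # position in metres; listener at the origin, x = forward
    y: float        # y = left

def render_object(obj: AudioObject, listener_yaw: float):
    """Toy spatialization: attenuate by distance, then constant-power pan
    from the source azimuth relative to the listener's heading."""
    azimuth = math.atan2(obj.y, obj.x) - listener_yaw
    distance = max(1.0, math.hypot(obj.x, obj.y))
    gain = 1.0 / distance                   # inverse-distance law
    pan = (math.sin(azimuth) + 1.0) / 2.0   # 0 = fully right, 1 = fully left
    left = [s * gain * math.sqrt(pan) for s in obj.samples]
    right = [s * gain * math.sqrt(1.0 - pan) for s in obj.samples]
    return left, right
```

The point is that when the user turns or walks, only the metadata (positions, listener pose) changes; the same source signals are simply re-rendered, which is exactly what a baked 360 mix cannot do.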
VR Audio Moving Forward
While more and more VR content is being made in the full VR format, the audio industry is barely catching up with Ambisonics signals for 360 videos. Second-order Ambisonics already requires a minimum of 9 channels, and higher-order Ambisonics is not feasible in many cases because network bandwidth is limited on mobile, not to mention the constrained processing power allocated for audio.
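To see why bandwidth bites, consider the raw PCM bitrate of an Ambisonics stream before any codec is applied. The sample rate and bit depth below are illustrative assumptions, not figures from the article.

```python
def raw_pcm_bitrate_mbps(order: int, sample_rate: int = 48_000,
                         bit_depth: int = 16) -> float:
    """Uncompressed bitrate in Mbps for an order-N Ambisonics stream."""
    channels = (order + 1) ** 2
    return channels * sample_rate * bit_depth / 1e6

for order in (1, 2, 3):
    print(f"order {order}: {raw_pcm_bitrate_mbps(order)} Mbps raw")
```

Even with compression bringing these numbers down by an order of magnitude, every step up in Ambisonics order more than doubles or triples the channel count, which is the squeeze the paragraph above describes.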
Some might argue that personalized audio is the most important challenge going forward. Until capturing exact anthropometric information requires far fewer resources than it does now, customization for each person’s ear shape and head size will remain the last step toward perfection. Luckily, four out of five people can already feel immersed in a VR scene with a general binaural rendering process. What needs to be figured out in the foreseeable future is how to deliver interactive 3D audio without compromising content quality, from creators to consumers and across multiple platforms. Once best practices are determined and a recommended workflow is set, standardizing those practices should follow to improve interoperability.
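At its core, the general binaural rendering mentioned above amounts to convolving each source signal with a left/right head-related impulse response (HRIR) pair for its direction; personalization would swap in HRIRs measured for the individual listener. The sketch below uses made-up two-tap HRIRs purely for illustration; real HRIRs run to hundreds of taps per direction.

```python
def convolve(signal, ir):
    """Direct-form FIR convolution (equivalent to numpy.convolve, mode='full')."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binaural_render(mono, hrir_left, hrir_right):
    """Generic binaural rendering: filter the mono source with the
    HRIR pair for its direction to produce a headphone signal."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs for a source on the left: louder and earlier in the left ear.
left_ear, right_ear = binaural_render([1.0, 0.0], [0.9, 0.1], [0.0, 0.4])
```

The interaural level and time differences baked into the HRIR pair are what let a generic dataset work well enough for most listeners, while ear-shape-dependent spectral cues are where personalization would pay off.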
The Future of VR Audio – 3 Trends to Track This Year
2017 has pushed the VR industry forward in countless ways, including the recognition of audio as an absolutely critical element in VR experiences. Here are a few trending efforts that creators are using to push the envelope with sound.
The industry will embrace object-based audio for every kind of experience.
Object-based audio gives more creative freedom to content creators, since it is easier to apply post-production effects to a single sound: think of it as a single raw element as opposed to a big, messy sound glob. In addition, object-based audio works perfectly for 6DOF (six-degrees-of-freedom) VR content, which is rapidly growing in popularity. 6DOF content is just like a game: the character moves around the space in every direction and has the agency to interact with objects in the environment. When the character does either of these things, the sound needs to change accordingly. Because it is better at pinpointing sound and reflecting changes during gameplay, object-based audio has already been used in 3D game engines for quite some time. As more 6DOF content is built on game engines, it is plausible that more audio engineers will have to learn to mix and master sound in game engines rather than in their traditional digital audio workstations.
Quality VR content will be published as more players embrace spatial audio.
Out of the many reasons that have kept content publishing platforms from adopting spatial audio, the primary one has been the absence of a dedicated VR audio format along with a compatible renderer. These limits on publishing platforms have discouraged content creators from fully embracing spatial audio in their productions. Still, as sound meets with increased appreciation, renderers and players will eventually have to support it. When Vimeo launched Vimeo 360 in March to support 360 content, a huge share of user requests involved a spatial audio feature, and the official help page states that Vimeo plans to support spatial audio in the near future. Smaller players and platforms will follow the path laid out by Facebook, YouTube, and Vimeo. As user standards for VR content quality continue to rise, adoption of spatial audio will race to keep pace. With more emphasis being put on object-based audio, Ambisonics alone will be phased out as the standard format of the future.
Creators will push beyond post-production to create new listening experiences.
Sound is not just a storytelling cue that can be used to encourage VR users to look in a certain direction. In some new use cases, you can choose to hear certain sounds over others within the same experience. One recorded 360 video, for example, lets users hear the instrument they are looking at more clearly than the other instruments placed all around them. This new wave of sound won’t just be an evolution of current techniques; in many cases, it will give way to revolutionary new forms of entertainment. The virtual canvas for artists expands 360 degrees horizontally and vertically, beyond the physical dimensions of a real-life stage. Musicians will now be able to play with “virtual location” alongside their traditional considerations of pitch, loudness, and timing. They are also learning how to exploit human auditory perception to influence these experiences at an even deeper level. Some psychoacoustic principles that you have already experienced in real life can be taken advantage of in VR to make each experience different at the individual level. While there is a whole lot to consider in that realm, our collective knowledge about it continues to grow.
2018.07.05
Gaudio Lab Upgrades Its Spatial Audio Solution “WORKS”
Virtual Reality Reporter has featured Gaudio Lab’s announcement of the launch of the upgraded Works software. As stated in the news, Works is expected to provide an even more intuitive workflow with powerful new features in this new version.
Works can be used seamlessly as an AAX plugin for Pro Tools. It allows creators to accurately place object sounds in the virtual environment. Each sound source then carries specific positional metadata, which goes through a process called binaural rendering. When content built with Works is played on an HMD, sound objects change according to the user’s interactions, accurately synchronizing what they see with what they hear. Its new features include a built-in volume fader, an expandable window, and timbre preservation. It also supports output format monitoring, which lets you hear the difference in sound quality between Ambisonics and GA5.
How to use Works: visit the Gaudio Lab website and request Works for free. Your VR creation life cannot be any easier.
2018.08.16