BINAURAL RENDERING
VR Sound Is Mobile and Personal
The need to transform 3D sounds into two-channel signals

Delivering 3D sound through traditional physical speaker placements is difficult and inefficient for virtual reality experiences. Overcoming this challenge is why so many industry veterans have turned to headphones as the medium of choice for bringing quality audio to the virtual realm. Drawing on decades of audio experience, the G’Audio team produces the appropriate two-channel signals by applying well-defined Head-Related Transfer Function (HRTF) data within its rendering algorithms.
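At its core, binaural rendering reduces to filtering a source signal with a pair of HRTF-derived impulse responses, one per ear. The sketch below illustrates that idea in Python; the function name, placeholder HRIRs, and sample rate are ours for illustration only and do not reflect G’Audio’s actual implementation.

# A minimal sketch of binaural rendering: convolve a mono source with
# left/right head-related impulse responses (HRIRs). Placeholder data,
# not G'Audio's implementation.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear HRIRs to get a 2-channel signal."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Usage: a 1-second 440 Hz tone rendered through placeholder HRIRs.
fs = 48000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
hrir_l = np.random.randn(256) * 0.01  # placeholder; real HRIRs are measured
hrir_r = np.random.randn(256) * 0.01
stereo = render_binaural(tone, hrir_l, hrir_r)

In practice the two impulse responses come from a measured HRTF database, so the interaural time and level differences they encode are what make the tone appear to arrive from a particular direction.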

G’Audio’s Proven Binaural Rendering Technology
Immersive and interactive sound with minimal processing

G’Audio Lab’s binaural rendering technology was adopted into the MPEG-H 3D Audio international standard (ISO/IEC 23008-3) by the ISO/IEC committee in 2014. This decision was based on its ability to deliver excellent sound quality while requiring minimal processing power. The technology has enabled a software-driven approach across all of G’Audio’s production and rendering solutions, freeing users from expensive specialized headphones or additional hardware.

Binaural Rendering Meets Interactivity
An unmatched level of realism isn’t rocket science for us

Binaural rendering technology that places sounds in different directions around the listener has been around for a long time, but until now it has responded only to the movement of the sound source. For the new era of VR, G’Audio Lab has created tools that also respond to the interaction and head orientation of VR headset users. Localization accuracy matters most in six-degrees-of-freedom (6DoF) settings: as participants move freely about a scene and examine it from various angles, the sound continues to match the viewing experience.
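To make head tracking concrete, a 6DoF renderer must convert a source’s world position into a direction and distance relative to the listener’s current head pose. The following sketch shows one way to do that in 2D, assuming yaw-only rotation; the names are illustrative and are not part of G’Audio’s tools.

# A sketch of how head tracking can feed binaural rendering in 6DoF:
# the listener's position and yaw come from the headset, and the source
# direction is recomputed relative to the rotated head.
import numpy as np

def relative_direction(source_pos, listener_pos, listener_yaw_rad):
    """Return azimuth (rad) and distance of the source in head coordinates."""
    dx, dy = np.asarray(source_pos) - np.asarray(listener_pos)
    world_azimuth = np.arctan2(dy, dx)          # direction in world coordinates
    azimuth = world_azimuth - listener_yaw_rad  # rotate into head coordinates
    distance = np.hypot(dx, dy)
    return azimuth, distance

# A listener at the origin facing 90 degrees hears a source at (1, 0)
# arriving from the right (azimuth -90 degrees in head coordinates).
az, d = relative_direction((1.0, 0.0), (0.0, 0.0), np.pi / 2)

A full renderer would use 3D positions and a complete rotation (yaw, pitch, roll), but the principle is the same: the source direction is re-expressed in head coordinates every time the tracker updates.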

How Binaural Rendering Is Used for VR
Interactive 3D Audio for Lifelike VR Experiences

Content built with G’Audio’s production solutions synchronizes the user’s viewing and listening perceptions through real-time calculation of the constantly moving positions of sources in the virtual world. The process begins by tracking the position of a moving source and continually computing its direction and distance relative to the listener; the sound is then adjusted as the user’s head orientation or position changes. Finally, this positional metadata is rendered through HRTF data, which models the human hearing mechanism, to determine how the source should sound at each ear.
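Put together, one plausible per-block rendering loop looks like the sketch below: compute the relative azimuth and distance, apply a gain for distance, pick a measured HRIR pair for that direction, and convolve. The HRIR dictionary, the 15-degree grid, and the inverse-distance attenuation are simplifying assumptions for illustration, not the actual renderer.

# A sketch of the per-frame pipeline described above, under simplifying
# assumptions: HRIRs indexed by rounded azimuth, 1/distance gain, and
# block-wise convolution.
import numpy as np
from scipy.signal import fftconvolve

def render_frame(block, source_pos, listener_pos, listener_yaw, hrir_db):
    """Render one audio block binaurally for the current tracked positions."""
    dx, dy = np.asarray(source_pos) - np.asarray(listener_pos)
    azimuth = np.degrees(np.arctan2(dy, dx) - listener_yaw)
    distance = max(np.hypot(dx, dy), 0.1)  # clamp to avoid unbounded gain
    gain = 1.0 / distance                  # simple inverse-distance law
    # Pick the nearest measured HRIR pair (e.g., measured every 15 degrees).
    key = int(round(azimuth / 15.0) * 15) % 360
    hrir_l, hrir_r = hrir_db[key]
    left = gain * fftconvolve(block, hrir_l)[:len(block)]
    right = gain * fftconvolve(block, hrir_r)[:len(block)]
    return np.stack([left, right], axis=-1)

A production renderer would interpolate between neighboring HRIRs rather than snapping to the nearest grid point, and would cross-fade filters between blocks to avoid audible discontinuities as the listener moves.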