Thirteenth Floor and Gaudio form alliance to provide a cinematic VR platform for 5G mobile networks
The partnership targets a premium user experience by combining immersive spatial audio with high-quality 360° video delivered over 5G networks
BERKELEY, CA – October 16, 2018 – Cinematic VR production company and technology developer the Thirteenth Floor has joined forces with next-generation audio solutions expert Gaudio to provide a premium-level VR experience. The two companies first introduced their partnership on September 28 in Seoul. The Thirteenth Floor has licensed Gaudio’s Sol VR360 SDK, and the two development teams are embarking on a mission to build THere, a VR content service platform targeting 5G networks. The limitations of current 4G LTE mobile networks have forced VR360 producers and developers to compromise on quality, but with the advent of 5G, the industry can finally begin to realize the potential of this emerging immersive medium.
The Thirteenth Floor began as a VR technology startup in 2015 and is now well-known for its unique, proprietary 360-degree video shooting and drone shooting systems. Gaudio is an award-winning audio technology startup with R&D based in Seoul focusing on the development of solutions for streaming media in addition to audio for cinematic VR.
The two companies plan to launch the THere platform in 2019, as telecoms begin to roll out 5G services to major markets. By taking advantage of the larger 5G pipe, Thirteenth Floor and Gaudio will deliver hyper-realistic immersive experiences to viewers featuring a new level of quality. The experience will enable completely synchronized immersive video and spatial audio that responds instantly to viewer head-tracking, suspending disbelief, reducing motion sickness, and opening up the potential to transport audiences into other realities.
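At its core, the head-tracked spatial audio described above depends on one geometric step: as the viewer turns, every sound source is counter-rotated by the head orientation so it stays anchored in the virtual scene. The following is a minimal illustrative sketch of that step only; it is not Gaudio's Sol VR360 implementation, and the function name and coordinate conventions are invented for this example:

```python
import math

# Illustrative sketch of the basic head-tracking step in spatial audio
# rendering (NOT Gaudio's Sol VR360 code; names and conventions are
# assumptions). Coordinates: x = front, y = left; yaw > 0 means the
# listener's head turns left.

def world_to_head(source_xy, head_yaw_rad):
    """Rotate a source position from world coordinates into the
    listener's head frame. A binaural renderer would then select HRTF
    filters for the resulting angle, re-evaluated every frame from the
    headset's head-tracking data."""
    x, y = source_xy
    c, s = math.cos(-head_yaw_rad), math.sin(-head_yaw_rad)
    return (c * x - s * y, s * x + c * y)
```

Turn the head 90° to the left and a source directly ahead at (1, 0) lands at (0, -1) in the head frame, i.e., at the listener's right ear. Performing this update with very low latency is what keeps the spatial audio locked to head-tracked video.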
“With the potential of 5G now in focus, developing a new VR platform in collaboration with a leading audio technology company like Gaudio represents a meaningful strategic alliance,” said Jung-woo Park, CEO of Thirteenth Floor. Park added, “We are aiming to deliver the ultimate user experience in cinematic VR, the one viewers have been waiting for.”
“We are very excited to showcase our next-gen audio solutions in combination with the Thirteenth Floor’s innovative content production and video technologies,” said Henney Oh, CEO of Gaudio. “This is an opportunity to lead the way into premium VR on 5G and take advantage of 100x the bandwidth compared to 4G LTE.”
In addition to cinematic VR, the two companies also plan to develop immersive audio/video technologies for location-based VR theme parks and VR cinema experiences that are growing in popularity, as well as livestreaming VR for sports and live events.
About Gaudio
Gaudio is a next-generation audio technology company dedicated to enhancing the listening experience for XR and streaming media. Gaudio is customizing solutions for forward-looking companies like Honda and Naver, and partners like Tiledmedia and NexStreaming. With seven audio Ph.D.s leading R&D, the team has invented numerous patented technologies and excels in algorithm development with deep knowledge in acoustics, psychoacoustics, and signal processing. Winner of the AMD Studios VR Awards 2017 Innovative VR Company of the Year, and with active membership in standards-developing organizations including MPEG and 3GPP, Gaudio is building the next-generation playground for audio innovation. For more information, visit Gaudio Lab.
About Thirteenth Floor
Thirteenth Floor produces world-class cinematic VR content using a variety of innovative and proprietary technologies for clients including RedBull, Hyundai, Samsung, and Coca-Cola. The team comprises VR experts with experience as executive producer, art director, and VFX director at companies such as CJ Corporation, SBS, and Nexon. Now in its third year, Thirteenth Floor is building on its core IP to develop a new VR content platform. For more information, visit Thirteenth Floor.
Audio technology company announces software solution that smooths variations in loudness and provides continuity of perceived sound levels for OTT video and music streaming platforms

BERKELEY, CA – September 12, 2018 – Gaudio, a next-generation audio technology company dedicated to enhancing the listening experience for XR and streaming media, today launched its Sol Loudness SDK for streaming media. Starting today, video and music streaming platforms can leverage Gaudio’s Sol Loudness SDK to provide a smoother audio experience for subscribers.

The popularity of over-the-top (OTT) video is growing rapidly, and streaming media consumption is increasing across multiple devices, including smartphones, smart TVs, and game consoles. Over the past few years, streaming media companies have prioritized the seamless transmission of high-quality video and music, along with improved UX. Audio quality and audio features have not received the same level of attention, resulting in inconsistencies in the listening experience across devices. Emerging audio technologies, such as Gaudio’s Sol Loudness SDK, solve this by smoothing variations in loudness and providing continuity of the perceived sound level between streaming content for end-users.

Henney Oh, CEO and Co-founder of Gaudio, said: “At Gaudio, we are dedicated to improving the end-user listening experience. Our loudness management solution addresses common subscriber complaints and solves volume inconsistencies between content programming. A smoother audio experience leads to increased subscriber satisfaction, and that leads to improved retention.”

Alexis Macklin, Analyst at Greenlight Insights, said: “With 62% of U.S. adults streaming TV and movies each week, streaming services have revolutionized the way consumers listen to and watch media. Unfortunately, this mass appeal has yet to translate to consistent audio quality. The streaming market is set to be disrupted by new solutions focused on setting higher standards in streaming audio quality. Loudness management will be central to the features that creators and publishers will need to deliver as a growing number of consumers expect consistent audio quality across streaming platforms.”

Gaudio’s SDK product line includes spatial audio rendering for 360° video and loudness management for streaming media. Other loudness management software solutions today employ a legacy “file-based” approach, a pre-processing pipeline that destructively modifies the original content. The Sol Loudness SDK instead uses a server-client architecture in which the server side performs loudness measurement and generates metadata that the client side uses to normalize content to the target loudness setting. Advantages of the server-client approach include (1) the ability to set loudness targets per platform, end-user device, or even listening environment, and (2) that the original content is never modified, avoiding compression artifacts.

Naver Corporation, a South Korea-based internet content service company, is one of the first companies to integrate the Sol Loudness SDK for their streaming media services. For more information on Gaudio technology solutions, visit Gaudio Lab.

About Gaudio
Gaudio is a next-generation audio technology company dedicated to enhancing the listening experience for XR and streaming media. Gaudio is customizing solutions for forward-looking companies like Honda and Naver, and partners like Tiledmedia and NexStreaming. With seven audio Ph.D.s leading R&D, the team has invented numerous patented technologies and excels in algorithm development with deep knowledge in acoustics, psychoacoustics, and signal processing. Winner of the AMD Studios VR Awards 2017 Innovative VR Company of the Year, and with active membership in standards-developing organizations including MPEG and 3GPP, Gaudio is building the next-generation playground for audio innovation. For more information, visit Gaudio Lab.

2018.10.11
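The server-client loudness workflow in the press release above can be sketched in a few lines. This is not the Sol Loudness SDK API; every name here is invented for illustration, and a simple RMS level in dBFS stands in for a true ITU-R BS.1770 integrated-loudness measurement (LUFS):

```python
import math

# Hypothetical sketch of a server-client loudness-normalization flow.
# Server side: measure once, emit metadata. Client side: apply one gain
# at playback. The original audio is never re-encoded, which is what
# avoids the compression artifacts of "file-based" normalization.

def measure_loudness_db(samples):
    """Server side: measure content loudness (RMS in dBFS, a crude
    stand-in for a BS.1770 integrated-loudness measurement)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def make_metadata(samples):
    """Server side: generate loudness metadata; audio stays untouched."""
    return {"loudness_db": measure_loudness_db(samples)}

def normalize(samples, metadata, target_db=-16.0):
    """Client side: apply a single gain so playback hits the target.
    Because the target is chosen at the client, it can differ per
    platform, device, or listening environment."""
    gain = 10.0 ** ((target_db - metadata["loudness_db"]) / 20.0)
    return [s * gain for s in samples]
```

With this split, a loud trailer and a quiet drama are both rendered at the same client-side target level, which is the "continuity of the perceived sound level" the release describes.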
The Audio Voice: Hearables and Augmented Reality
By J. Martins, Editor-in-Chief

You might have noticed that in my previous The Audio Voice write-up about Headphone Technology and Markets I didn’t use the term “hearables,” since that would take the discussion to a whole other level. That was a deliberate choice, given that I really wanted to address the topic separately. On the consumer electronics side, we have in-ears and truly wireless stereo earbuds, and on the other side of the equation, we have the audiology and medical market, with hearing aids now expanding into Personal Sound Amplification Products (PSAPs) and hearing enhancers that soon will be available over the counter. But hearables are (or should be) different. How different?

Curiously, the same day we sent out that edition of The Audio Voice with my editorial on the topic, we received a press release from an American company, ZVOX Audio. The company, which has so far worked in the field of voice intelligibility for TV sound and soundbars, decided to be among the first to grab the OTC opportunities and design an affordable solution purposely built for voice enhancement, targeting a very specific market of older people. It is a curious example of the diversity of approaches this segment will see in the following months.

The Jabra Elite 65t can be considered state-of-the-art true wireless stereo earbuds struggling to become hearables, benefiting from the extensive experience in wireless audio and hearing aids of the GN Group, owner of Jabra and ReSound, among other brands.

The term “hearables” has so far been associated with the evolution of consumer-oriented in-ears toward something that could be health-related, not necessarily medical. As consultant Nick Hunn wrote in his report “The Market for Hearable Devices 2016-2020,” published in November 2016, “The real hearables revolution began in 2014 when two European companies launched crowdfunding campaigns for earbuds. In Sweden, Earin acquired funding for a pair of Bluetooth earbuds that would stream audio. Approximately 1,500 km farther south in Munich, another startup, Bragi, raised the unprecedented sum of $3.39 million for a far more ambitious hearable device – the Dash.”

As Hunn explains, and audioXpress reported at the time, the Dash could stream music like Earin, but it “could also store and play music without the presence of a phone, as well as housing a host of biometric sensors, which would feed back data to a range of fitness applications.”

Things have evolved from there, and many industry analysts have frequently noted that many of the subsequent TWS crowdfunded projects were based on the assumption that they were “the first to have the idea,” when in fact multiple companies, including some of the largest audiology and consumer electronics manufacturers, had been exploring the field and held important patents on key aspects of technology implementation and design. Not surprisingly, after Bragi successfully launched its Dash Pro “hearables,” the company also decided to collaborate with the leading US audiology company Starkey, launching The Dash Pro tailored by Starkey Hearing Technologies. Bragi itself holds an important patent portfolio in this area, and the company makes no assumption that there will be confusion between the two markets, as Bragi’s founder and CEO, Nikolaj Hviid, stated in the interview I did with him at CES 2018. In that interview, Nikolaj even expressed the wish that medically oriented hearing aid companies would enter the “hearing enhancement” space, proudly stating, “Hearing aids are not a luxury item, because it is a tool for those people. They need it and it’s very important to them. But there’s this huge unsatisfied group of people that have an issue that they need to deal with. I even see many people that would buy our product, will eventually go and buy a hearing aid. Because they will realize that what we can do for them is OK, but that they actually need more. What we do will also help make hearing aids more accessible.”

That’s why it’s important to address the market for hearables from its own intrinsic perspective and key selling arguments, outside of the “hearing enhancement” applications. As Hunn clarified in the aforementioned 2016 report, “Previously I defined a hearable as any device that included wireless connectivity, as the differentiating factor between wired and wireless headphones. That included wireless stereo headphones and mono Bluetooth earpieces, but excluded most hearing aids which had no wireless connection to a phone. In just two years, the picture has become far more complex. When I coined the word ‘hearables’ at the beginning of 2014, the wireless headphone market was still niche, and no one had considered sound isolation, audio curation, or translation as real consumer opportunities. All of those are now in development or already shipping. So now I’m considering anything that fits in or on an ear that contains a wireless link, whether that’s for audio, or remote control of audio augmentation.”

And he states, “…the real innovation in hearables will come from other earbud developers, not least because of their willingness to add biometrics. The intimate, relatively isolated contact that earbuds provide, along with the stabilizing effect on balance from the semi-circular canals in our ears, means that the ear is one of the best locations for sensing many physiological parameters. Whilst some of the biometrics will not be applicable to headphones, some will be, and we will see them incorporated in new headphone designs.”

“The applications being considered are more diverse than what we’ve seen so far with other wearables,” he adds, mentioning specifically “the rise of voice communications for Internet of Voice (IoV) applications” with voice detection and processing, while detailing the enormous potential of Audio Curation and Augmented Hearing, together with Hearing Protection and Isolation, as the main drivers for market development and expansion of hearables.

As always, Apple gets the fundamentals right first. With the AirPods, Apple focused on what people needed from wireless earbuds and made it work 100%. They didn’t overpromise, kept it simple, and delivered. The AirPods get consistently great reviews from users. No wonder Apple commands the segment and owns 83% of this market.

Augmented Hearing was precisely the topic of a presentation I attended at the IBC 2018 show, given by Gaudio Lab VP of Business Development Adam Levenson and titled “Augmented Reality Audio: The Next Generation of Hearables.” In this session, Levenson presented his forward-looking glimpse into the future of Augmented Reality (AR) Audio. As a starting point, Levenson asked the audience a fundamental question: What are hearables? He defines them as products at the crossroads of headphone and hearing aid technologies and Augmented Reality (AR).
In his presentation, he referenced Poppy Crum, Chief Scientist at Dolby, who said hearables sit at the “convergence between entertainment, lifestyle, and hearing health,” and David Cannington, co-founder of Nuheara, who said a hearable is a device to “control the elements of your physical environment,” to “orchestrate your soundscape,” and to provide an “on-the-fly personalized hearing experience.”

Levenson went on to detail that, apart from the basic functions of music streaming and input-level adjustment from connected devices, hearables should support audio enhancement features, with personalization based on a hearing test, noise cancellation, and noise reduction. More important, hearables should have “smart capabilities” and support speech amplification, listening directivity, translation, voice assistants, and biometric tracking – all things that we’ve seen in multiple products so far.

In the session, Levenson also said, “Imagine having the ability to adjust the mix of your daily listening experience with a technology that reduces ambient noise, amplifies and focuses speech, adjusts EQ to match your unique hearing profile, and interactively layers music with your voice assistant. This is AR Audio, and the latest hearables on the market are already tapping into this potential. But this is just the beginning. The future of AR Audio is Artificial Intelligence (AI), advanced DSP, and procedural sound.

“Machine learning will teach AR Audio systems to recognize the sound of a dog barking, a jackhammer, a sports bar crowd, and many more common sound events. With a constantly expanding database of recognized sounds, AI will power adaptive and precise noise cancellation. An understanding of language and accent will enable enhanced speech intelligibility. Intelligent DSP systems will smooth variations in volume levels, and situational awareness will allow us to zoom in and focus on specific sound sources. In the near future, procedurally generated sound effects will attach to virtual objects and respond to physics.”

Levenson’s company, Gaudio Lab, is developing and licensing technology in this domain, and it recently announced a related and interesting SDK targeting loudness management for streaming services, from OTT video to music streaming platforms. The company has a strong R&D background in the development of spatial audio technologies, and its founders co-invented and developed the MPEG-H 3D Audio Binaural Renderer that is now part of an ISO/IEC international standard.

Since I was unaware of the company and its work, I was intrigued by the topics addressed in the IBC session, and I met Levenson after his talk for a brief interview, where he shared his conviction that “the software solution for hearables is extremely complex.” Levenson believes that hearables hold a lot of potential for augmented reality (AR), and his company is looking at offering unique solutions from an engineering perspective. “I think the whole area of AR in audio is inevitable. How it’s going to look, how this is going to become a consumer product is still very much unknown.”

“It’s a difficult challenge and it requires a lot of knowledge. Like active noise cancellation. Consumers are now familiar with it. We use it and we love it. That tech is out there, but unless you are licensing from Bose, you are going to have to create it, and it is complex. And to make it work on a hearable device, with a very specific chipset, is not an easy challenge. You need DSP knowledge, spatial audio knowledge, machine learning knowledge; it’s a really wide range of skills and knowledge that you have to have in order to address the software problem for hearables.”

“When you look at startups, companies like Nuheara and Bragi, they are struggling with noise cancellation. There’s latency and as a result you get phasing. But what these guys are doing is amazing. They still have not conquered the latency issue, and even Bose is having problems with their hearable product. And battery life… And the form factors. No one is there yet. Even companies like Starkey, which are doing amazing work for hearing aids, are trying to put this amazingly complex software solution together. Because all of us are missing this piece or that piece. But someone is going to do it!”

I fully agree with Levenson when he states that hearables should evolve toward the “Augmented Reality Audio” approach, powered by machine learning (ML) algorithms, digital signal processing (DSP), and binaural rendering. In his opinion, this will enable new features like Situational Responsiveness, Focused Listening, Selective Hearing, and positioning of Virtual Objects. And this should happen in automatic response to “environment classification,” not be something manually adjusted by the user.

I believe we will see multiple companies exploring this hearables potential, from both the consumer electronics side and the audiology leaders. As Bragi has shown with its Dash Pro and Starkey Hearing Technologies has announced with its Livio AI platform, this will be both a multipurpose and completely new augmented concept. The result of a multi-year engineering effort combining artificial intelligence (AI) with advancements in sensors and hearing technology, Starkey’s new Livio AI platform combines tracking of brain and body health with advanced environmental detection to create what the company calls “Hearing Reality technology.” When Starkey’s President Brandon Sawalich says, “we are not in the hearing aids business, we are in the communication business,” that is a clear sign of where things are heading. The Livio AI platform exploits all the embedded processing power and AI capabilities, going as far as to pitch new language-translation possibilities. Starkey calls the concept “healthables,” and they talk about a new “hearing reality.”

In the end, I just hope these companies prepare themselves to deal with the “reality” of not knowing exactly how consumers are going to react to all the exciting new possibilities. If only they could use AI for that…

2018.10.25