
Integrating Audio AI SDK with WebRTC (1): A Look Inside WebRTC's Audio Pipeline

2023.06.26 by Dewey Yoon


(Writer: Jack Noh)

 

Curious About WebRTC?

 

The MDN documentation describes WebRTC (Web Real-Time Communication) as follows.
(MDN is a standard reference that anyone doing web development will encounter sooner or later.)

 

WebRTC (Web Real-Time Communication) is a technology enabling web applications and sites to capture and freely stream audio or video media between browsers, eliminating the need for an intermediary. In addition, it permits the exchange of arbitrary data. The series of standards composing WebRTC facilitate end-to-end data sharing and video conferencing, all without requiring plugins or the installation of third-party software.

 

In short, WebRTC is a technology that lets your browser communicate in real time with nothing more than an internet connection, with no extra software to install. Services built on WebRTC include Google Meet, a video conferencing service, and Discord, a voice communication service. (The technology also drew considerable attention during the Covid-19 pandemic!) As an open-source project and a web standard, WebRTC's source code can be viewed and modified via the following link.

 

 

Understanding WebRTC's Audio Pipeline

 

WebRTC is a comprehensive multimedia technology, covering audio, video, and data streams. In this article, I'll focus on WebRTC's audio technology.

 

If you've used a WebRTC-based video conferencing or voice call web application (e.g., Google Meet), you may be curious how its audio pipeline is structured. The pipeline can be divided into two streams: 1) the stream that captures voice data from the microphone device and transmits it to the other party, and 2) the stream that receives the other party's voice data and outputs it through the speaker. These are referred to as the Near-end Stream (sending the microphone input signal to the other party) and the Far-end Stream (outputting the audio data received from the other party through the speaker), respectively. Each stream consists of five steps, which we'll look at below.

 

1) Near-end Stream (Transmitting the Microphone Input Signal to the Other Party)

  1. Audio signals are received from the microphone device. (ADM, Audio Device Module)
  2. Enhancements are applied to the input audio signal to augment call quality. (APM, Audio Processing Module)
  3. If there are other audio signals (e.g., file streams) to be concurrently transmitted, they are integrated using an Audio Mixer.
  4. The audio signal is subsequently encoded. (ACM, Audio Coding Module)
  5. The signal is converted into RTP packets and dispatched through UDP Transport. (Sending)

2) Far-end Stream (Playing the Audio Data Received from the Other Party through the Speaker)

  1. Audio data in the form of RTP packets is received from the connected peers (there may be several). (Receiving)
  2. Each RTP packet is decoded. (ACM, Audio Coding Module)
  3. The multiple decoded streams are merged into a single stream by the Audio Mixer.
  4. Enhancements are applied to the output audio signal to augment call quality. (APM, Audio Processing Module)
  5. The audio signal is eventually outputted through the speaker device. (ADM, Audio Device Module)

The module responsible for each stage is noted in parentheses at the end of each step above. WebRTC is modularized to this degree for every part of the process.

 

Here are more detailed explanations for each module:

  • ADM (Audio Device Module): This interfaces with the audio input/output hardware, handling the capture and rendering of audio signals. It's implemented with platform-specific APIs (Windows, macOS, etc.).
  • APM (Audio Processing Module): This comprises a set of audio signal processing filters designed to boost call quality. It's primarily employed on the client end.
  • Audio Mixer: This consolidates multiple audio streams.
  • ACM (Audio Coding Module): This executes the encoding/decoding of audio for transmission/reception.

The aforementioned process can be visualized as shown in the following diagram.

 

 

As previously described, the audio pipeline in WebRTC is notably modular, with its functionalities neatly divided.
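
To make this modular structure a little more concrete, here is a minimal C++ sketch of how the APM stage is typically driven for one 10 ms frame: the far-end signal passes through ProcessReverseStream() before playback, and the near-end (microphone) signal through ProcessStream(). This is an illustration assuming a recent WebRTC branch; header paths, the builder name, and the config fields vary between versions, so it should not be read as the exact production call path.

```cpp
#include <cstddef>
#include <cstdint>

#include "api/scoped_refptr.h"
#include "modules/audio_processing/include/audio_processing.h"

int main() {
  // Create the APM and enable a few of its built-in submodules.
  rtc::scoped_refptr<webrtc::AudioProcessing> apm =
      webrtc::AudioProcessingBuilder().Create();
  webrtc::AudioProcessing::Config cfg;
  cfg.high_pass_filter.enabled = true;
  cfg.echo_canceller.enabled = true;    // acoustic echo cancellation
  cfg.gain_controller2.enabled = true;  // automatic gain control
  apm->ApplyConfig(cfg);

  constexpr int kSampleRateHz = 48000;
  constexpr size_t kNumChannels = 1;
  constexpr size_t kSamplesPer10Ms = kSampleRateHz / 100;  // one 10 ms frame
  const webrtc::StreamConfig stream_cfg(kSampleRateHz, kNumChannels);

  int16_t far_frame[kSamplesPer10Ms] = {};   // decoded audio about to be played
  int16_t near_frame[kSamplesPer10Ms] = {};  // audio just captured from the mic

  // Far-end stream: analyze/process the signal headed for the speaker.
  apm->ProcessReverseStream(far_frame, stream_cfg, stream_cfg, far_frame);

  // Near-end stream: report the capture/render delay, then process the
  // microphone signal before it is mixed, encoded, and sent.
  apm->set_stream_delay_ms(30);  // illustrative value
  apm->ProcessStream(near_frame, stream_cfg, stream_cfg, near_frame);
  return 0;
}
```

In a real client the ADM drives these calls from its render and capture threads; the sketch only shows the ordering that matters to the APM (the far-end frame first, then the delay report, then the near-end frame).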

 

 

Enhancing WebRTC Audio Quality with Gaudio SDK

 

Gaudio Lab houses several impressive and practical audio SDKs, such as GSA (Gaudio Spatial Audio), GSMO (Gaudio Sol Music One), and LM1 (a volume normalization standard based on TTA). The idea of developing applications or services using these SDKs, thus delivering superior auditory experiences to users, is indeed captivating.

 

(Did you know?) Gaudio Lab has an SDK that fits seamlessly with WebRTC: GSEP-LD, an AI-based noise reduction SDK. It runs in real time with minimal computational load (and delivers world-class performance!)

 

Ambient noise is a common source of discomfort in video conferences. To address it, WebRTC includes a noise suppression filter based on signal processing. (In fact, WebRTC already contains several other filters besides noise suppression to improve call quality!) This noise suppression filter is part of the APM (Audio Processing Module) mentioned earlier.
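
For reference, the built-in noise suppressor is just another switch on the APM's configuration. A minimal sketch of enabling it might look like the following; the field and enum names follow recent WebRTC branches and may differ in older ones, and `apm` is assumed to be an already-created AudioProcessing instance as in the earlier sketch.

```cpp
// `apm` is assumed to be an existing webrtc::AudioProcessing instance.
webrtc::AudioProcessing::Config cfg = apm->GetConfig();
cfg.noise_suppression.enabled = true;
cfg.noise_suppression.level =
    webrtc::AudioProcessing::Config::NoiseSuppressor::kHigh;  // kLow .. kVeryHigh
apm->ApplyConfig(cfg);
```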

 

Imagine the potential improvement if we replaced this conventional signal-processing-based noise suppression filter with Gaudio Lab's AI-driven one. However eager we might be to swap the existing filter out for GSEP-LD right away, it pays to proceed with caution: hasty integration (or replacement) in such a complex, large-scale project can cause problems, as the following considerations suggest:

 

  • Does GSEP-LD's standalone performance hold up? → Its baseline quality needs to be verified first.
  • Could there be any side effects with the existing signal processing-based filters? → It is advisable to check the effects while managing other filters in WebRTC.
  • Does the optimal point of integration align with the location of the existing noise reduction filter? → Various points of integration should be tested.
  • Can performance be guaranteed across diverse user environments? → This requires a wide range of experimental data and consideration of different platform-specific settings.
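
To make the replacement idea above a bit more tangible, here is a purely hypothetical sketch of how an external AI denoiser could be wrapped as an APM-style submodule. GSEP-LD's actual API is not shown in this article, so the Denoiser interface and the class below are invented for illustration; only the overall shape (a toggleable object that processes 10 ms frames somewhere in the near-end path) mirrors how WebRTC organizes its own submodules.

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <utility>

// Hypothetical stand-in for an external AI denoiser such as GSEP-LD.
// The real SDK's interface is not public in this article.
class Denoiser {
 public:
  virtual ~Denoiser() = default;
  // Denoise one 10 ms interleaved int16 frame in place.
  virtual void ProcessFrame(int16_t* audio, size_t samples_per_channel,
                            size_t num_channels) = 0;
};

// Hypothetical APM-style wrapper: holds the denoiser instance and its state,
// and is called once per 10 ms frame from the near-end processing path.
class AiNoiseSuppressorSubmodule {
 public:
  explicit AiNoiseSuppressorSubmodule(std::unique_ptr<Denoiser> denoiser)
      : denoiser_(std::move(denoiser)) {}

  void set_enabled(bool enabled) { enabled_ = enabled; }

  void Process(int16_t* audio, size_t samples_per_channel,
               size_t num_channels) {
    if (enabled_ && denoiser_) {
      denoiser_->ProcessFrame(audio, samples_per_channel, num_channels);
    }
  }

 private:
  std::unique_ptr<Denoiser> denoiser_;
  bool enabled_ = false;
};
```

Where such a submodule should actually sit (before the echo canceller, in place of the existing noise suppressor, or at the very end of the chain) is exactly the kind of question the testing environment discussed next is meant to answer.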

Diving in headfirst on enthusiasm alone, you can quickly be overwhelmed by the questions above, and effective integration only becomes harder. To avoid this, the first step is to build a 'robust testing environment'; the larger the project and the more interconnected its technologies, the more important this becomes.

Establishing a robust testing environment is not easy, however. This article has looked at WebRTC's audio technology; in the next article, I'll share how I built a solid testing environment within WebRTC with relatively little effort, which let me integrate GSEP-LD into the WebRTC audio pipeline with far greater confidence in its performance.

 

 

 
