The sound of liberation heard again after 80 years

2025.08.27 · by Dewey Yoon

 

Gwangbokjeol (Liberation Day): Daehan Doknip Manse (Long Live Korean Independence)

Binggrae’s [First Sound of Liberation] campaign brings back the cry of Daehan Doknip Manse (Long Live Korean Independence) with Korean AI technology.

 

Just six days after its release, Binggrae's First Sound of Liberation campaign surpassed 4 million views, quickly becoming a talking point. Marking the 80th anniversary of Korea's Liberation Day, the campaign recreated a sound of liberation that had never actually been heard, made possible by cutting-edge AI audio technology. Behind the recreation stood Gaudio Lab, a Korean Audio AI company. By combining historical research with advanced audio AI, the team revived the cries of liberation that history had recorded only in text, never in sound.

 

So, how was this unheard sound, lost for 80 years, finally brought back to life?

Recreating the Cry of Independence

On August 15, 1945, Japan announced its surrender. The next day, jubilant crowds filled the streets shouting “Daehan Doknip Manse”—a cry of independence that symbolized the nation’s freedom. Countless written records remain from that time, but no sound recordings exist.

 

This absence became the starting point for Binggrae and Innocean, who collaborated with Gaudio Lab to recreate what had once been impossible: the lost sound of Korea’s liberation. By restoring the voices of that day, the project not only gave sound to history but also honored the sacrifices of independence activists. It was more than a campaign—it was a meaningful attempt to revive the fading memory of liberation through hearing.

Gaudio Lab's AI Source Separation, GSEP

At the heart of the project was AI source separation. This technology extracts individual sounds from complex audio signals. Gaudio Lab’s AI source separation technology, GSEP, was used to remove noise from recorded audio, delivering the core sounds clearly and cleanly.
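GSEP itself is a proprietary neural system, but the general shape of mask-based source separation can be sketched in a few lines: transform the mixture into a time-frequency representation, estimate a soft mask for the target source, and invert both the masked and the residual spectrogram. In the sketch below the learned mask estimator is replaced by a hypothetical fixed low-pass mask; `separate_by_mask` and `lowpass_mask` are illustrative names, not Gaudio Lab APIs.

```python
import numpy as np
from scipy.signal import stft, istft

def separate_by_mask(mixture, sr, mask_fn):
    """Split a mono mixture into (target, residual) using a time-frequency mask.

    mask_fn maps a magnitude spectrogram to values in [0, 1]; systems like
    GSEP learn this mask with a neural network, here it is a simple stand-in.
    """
    f, t, Z = stft(mixture, fs=sr, nperseg=1024)
    mask = mask_fn(np.abs(Z))                      # soft mask in [0, 1]
    _, target = istft(Z * mask, fs=sr, nperseg=1024)
    _, residual = istft(Z * (1.0 - mask), fs=sr, nperseg=1024)
    return target, residual

def lowpass_mask(mag):
    """Toy mask: keep only the lowest 1/8 of the frequency bins."""
    mask = np.zeros_like(mag)
    mask[: mag.shape[0] // 8, :] = 1.0
    return mask

sr = 16000
tgrid = np.arange(sr) / sr
# Toy "mixture": a 220 Hz tone plus a quieter 4 kHz tone.
mix = np.sin(2 * np.pi * 220 * tgrid) + 0.5 * np.sin(2 * np.pi * 4000 * tgrid)
low, high = separate_by_mask(mix, sr, lowpass_mask)
```

Because the two masks sum to one, the separated stems add back up to the original mixture, a useful sanity check for any mask-based separator.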

 

Testimonies from independence activists’ descendants, historical references, and verified archival materials were collected. Much like preparing ingredients for a fine meal, GSEP processed these “sound ingredients” into clean, high-quality audio—the foundation for recreating history.

Bringing History to Life with Generative AI

To generate sounds that had never existed, Gaudio Lab's generative AI technology came into play. Binggrae and Innocean carefully curated factual details, ranging from historical documents to environmental conditions such as the temperature and humidity on that day. Based on these cues, they reconstructed everything: the rustle of clothes, footsteps in different kinds of shoes, the tram running on the street, and flags waving in the wind. Expert historians validated each step of the process, ensuring historical accuracy.

 

Generative AI then added human dimensions—age, gender, emotion—to recreate the voices of crowds shouting “manse.” Layered with the acoustic atmosphere of Seoul Station Plaza, the voices came alive with an authentic sense of immersion. Finally, audio upscaling technology was applied to transform lower-quality sounds into richer, more impactful experiences.
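The campaign's crowd voices were produced with generative voice synthesis, which cannot be reproduced here, but the layering step they feed into can be: many individually varied shouts, each with its own onset and level, mixed into one crowd bed. The sketch below is a minimal stand-in using a single synthetic clip; `layer_crowd` is a hypothetical helper, not part of any Gaudio Lab toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_crowd(voice, sr, n_voices=50, spread_s=1.0):
    """Mix many delayed, gain-varied copies of one shout into a crowd bed.

    A stand-in for the generative stage: the real campaign synthesized
    distinct voices (age, gender, emotion); here one clip is simply layered.
    """
    out_len = len(voice) + int(spread_s * sr)
    crowd = np.zeros(out_len)
    for _ in range(n_voices):
        start = rng.integers(0, int(spread_s * sr))   # random onset
        gain = rng.uniform(0.3, 1.0)                  # distance / loudness
        crowd[start:start + len(voice)] += gain * voice
    return crowd / np.max(np.abs(crowd))              # normalize peak to 1

sr = 16000
t = np.arange(int(0.4 * sr)) / sr
voice = np.sin(2 * np.pi * 180 * t) * np.hanning(len(t))  # toy "shout"
crowd = layer_crowd(voice, sr)
```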

Immersion Through Spatial Audio

Spatial audio was the finishing touch, giving listeners the sensation of truly standing amidst history. Gaudio Lab’s spatial audio technology, recognized as an international standard, applies binaural rendering to deliver a fully immersive 360-degree experience.

 

The creak of iron gates opening, the rush of crowds spilling onto the streets, the metallic clang of sheet iron, the fluttering of Taegeukgi (the Korean national flag): each was placed in a 3D soundscape. From the footsteps outside Seodaemun Prison to the swelling cheers at Seoul Station Plaza, the recreated scenes resonated with a realism that text alone could never convey.
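Gaudio Lab's binaural renderer uses far more sophisticated HRTF-based processing, but the two core cues that place a sound left or right, interaural time difference (ITD) and interaural level difference (ILD), are easy to illustrate. The sketch below pans a mono source by delaying and attenuating the far ear; `binaural_pan` and its constants are illustrative assumptions, not the production renderer.

```python
import numpy as np

def binaural_pan(signal, sr, azimuth_deg):
    """Place a mono source left/right using interaural time & level differences.

    A crude stand-in for binaural rendering: real renderers convolve with
    measured HRTFs, but ITD/ILD alone already give a clear sense of direction.
    """
    az = np.deg2rad(azimuth_deg)                  # -90 (left) .. +90 (right)
    itd_s = 0.00066 * np.sin(az)                  # up to ~0.66 ms head delay
    delay = int(round(abs(itd_s) * sr))
    ild = 10 ** (-3.0 * abs(np.sin(az)) / 20)     # up to ~3 dB far-ear loss
    near = np.concatenate([signal, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), signal]) * ild
    if azimuth_deg >= 0:                          # source toward the right
        left, right = far, near
    else:
        left, right = near, far
    return np.stack([left, right], axis=0)        # shape (2, n)

sr = 48000
t = np.arange(sr // 10) / sr
clang = np.sin(2 * np.pi * 1000 * t)              # toy metallic tone
stereo = binaural_pan(clang, sr, azimuth_deg=60)  # place toward the right
```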

More Than Sound: Capturing Emotion

Recreating the “Sound of Liberation” was never just about reproducing audio. It was about conveying the emotions of that moment—the heat in the air, the tears of joy, the overwhelming surge of voices rising together. Each sound was carefully refined and layered with AI technology to carry not just noise, but history, memory, and feeling.

 

For Gaudio Lab, it was a profound honor to take part in this meaningful project as a Korean Audio AI company. Bringing the first-ever “Sound of Liberation” to life and passing it down as a historical record for future generations was both a responsibility and a privilege.

 

And we are not stopping here. Gaudio Lab will continue striving to deliver new experiences through sound—bridging technology, history, and emotion.

How can broadcasters localize content faster?

Gaudio Studio Pro (GSP) is here to help anyone dealing with the challenges of conventional localization processes.

In today’s global media landscape, speed and scale matter more than ever. Broadcasters and distributors face mounting pressure to prepare films and shows for multiple regions at once—but traditional localization workflows are slow, fragmented, and often blocked by missing D/M/E tracks or music rights issues.

That’s why Gaudio Studio Pro (GSP) was built: a cloud-native, AI-powered localization platform that removes the barriers between great content and global audiences, with expert (human)-in-the-loop support for even better quality and reliability.

We’ve gathered the top questions broadcasters ask about GSP—and the answers that show how it can cut turnaround time by up to 90%, simplify copyright compliance, and unlock new revenue from both new and legacy titles.

Product Overview & Target Users

Q: What is Gaudio Studio Pro (GSP)?
GSP is a cloud-native content localization SaaS designed for the film and broadcast industries. It automates DME separation, dubbing, subtitle syncing, music replacement, and cue sheet generation—even when only the final master file is available (no D/M/E stems). With award-winning AI audio separation and copyright-cleared music replacement, GSP turns any film—old or new—into a distribution-ready asset for global markets. We also offer expert services from our audio post-production team, WAVELAB, to ensure quality and adapt localization to your project’s needs.

Q: Who is GSP for?
GSP is designed for studios, filmmakers, broadcasters, OTT platforms, distributors, and post-production teams who need to prepare content for international release. It’s also useful for archives and rights holders who want to revive legacy titles for new markets.

Q: Can smaller studios or indie filmmakers use GSP?
Absolutely. While GSP supports large broadcasters and distributors, it’s also designed to be accessible for everyone—from directors and engineers to translators and indie creators who want to prepare their content for global release. Its intuitive interface and automation reduce the need for technical expertise or large budgets.

Q: Has GSP been recognized in the industry?
Yes. GSP’s core technologies—AI Audio Separation, AI Music Recommendation, and loudness management—have won CES Innovation Awards (2023–2025). The loudness management technology is also an official ANSI/CTA standard, widely adopted across the industry.

Q: Is there a free trial version available?
Yes. We will soon open Gaudio Developers and provide access through an API. If you don’t have engineers on your team, please contact us and talk to our experts.

Q: Have you worked with overseas clients?
Yes. We are working and conducting PoCs with a number of Korean companies and several companies in the U.S., Japan, and Europe.

Key Features & Workflow

Q: What makes GSP different from traditional localization tools?
- All-in-one workflow: DME separation, dubbing, subtitles, music replacement, and cue sheets in one platform.
- Expert service from our audio post-production team, WAVELAB, for customers who want even better quality.
- AI-powered automation: faster turnaround (up to 90% reduction in localization time).
- Legal readiness: music replacement without copyright issues.
- Cloud-native collaboration (coming soon): real-time comments, multitrack editing, and version tracking.

Q: Can GSP be used for live broadcasting or only for pre-recorded content?
GSP is primarily designed for film, OTT, and broadcast post-production, so our current technology cannot handle live broadcasting scenarios. In the future, however, processing speed may improve enough to make it feasible.

Q: Can GSP be integrated into existing production pipelines?
Yes. GSP is built as a cloud-native SaaS, so teams can use it alongside existing video editing tools or dubbing studios. Its format-flexible export ensures compatibility with any distribution workflow.

Q: How does GSP support collaboration across teams?
GSP includes multitrack editing and version control, making it easier for translators, engineers, and producers to work together—even across time zones. Everyone works in the same project environment, eliminating file conflicts and delays.

Efficiency & Productivity

Q: How fast is localization with GSP?
By consolidating multiple fragmented tools into a single AI-powered tool, GSP can reduce localization time by up to 90%—cutting months of work down to days—without compromising quality. More specifically, without human-in-the-loop (i.e., AI-only processing), localizing one hour of content that once took a month now takes just one hour. When expert quality checks and edits are added, the same work that once took a month can now be completed in about three days.

Q: Is GSP scalable for large catalogs?
Yes. GSP was designed for enterprise-level scalability. Whether localizing a single short film or processing entire archives with hundreds of titles, it handles projects in parallel while keeping version control and collaboration centralized.

Q: How does GSP save costs compared to traditional localization?
- No need to rebuild missing D/M/E stems from scratch.
- Automated music copyright handling and cue sheet generation reduce manual labor.
- Faster turnaround (up to 90% time savings) means lower studio and staffing costs.
Overall, GSP lets studios localize more titles with fewer resources.

Q: What kind of revenue opportunities does GSP unlock?
By restoring and clearing rights for films that were previously stuck in archives, GSP allows studios and distributors to monetize dormant catalogs. It also shortens time-to-market for new releases, helping companies reach more global audiences faster. In addition, it empowers music artists to earn from their original creations: artists can easily register their music in the GSP library, while users discover fresh tracks through AI-driven recommendations—creating a new ecosystem that connects broadcasters and musicians.

Feature-Specific Questions

DME Separation

Q: How does GSP handle projects when only a master file is available?
Even without original dialogue, music, and effects (DME) stems, GSP’s AI audio separation engine extracts DME with studio-level fidelity. This makes it possible to restore, localize, and repackage films that were previously blocked from distribution.

Q: How accurate is GSP’s DME separation?
GSP uses Gaudio Lab’s proprietary AI stem separation model—one of the most advanced in the world. Its separation performance has been ranked No. 1 by MusicRadar, MusicTech, and LANDR. Providing studio-level fidelity, GSP extracts dialogue, music, and sound effects cleanly even from legacy or compressed master files.

Music Replacement

Q: How does GSP’s Music Replacement handle copyright issues?
Even a single copyrighted background track can block international release, and content distribution often stalls due to music rights issues. GSP alleviates them by recommending and replacing tracks from a globally licensed library of 110K+ human-made, high-quality tracks.

Q: Does GSP use AI-generated music for replacements?
No. GSP’s replacement library features 110K+ tracks composed by real artists, not generative AI. Its AI recommends and places the most fitting track, ensuring the emotional tone of the original BGM is preserved while maintaining full copyright clearance.

Q: How accurate is the AI in replacing the original music with suitable alternatives?
It depends on the genre you want to replace. For variety shows, replacement is essentially fully automated. For dramas, the numerical accuracy is similar, but the acceptance level or creative bar is much higher. Still, only about 10% of tracks may need human review.

Q: Can the AI adapt to different types of video content, like documentaries and commercials?
Commercials can certainly be processed with GSP, but given their short format and the need for a single strong track, advertisers often prefer to handpick music from recommended options. For documentaries, the results are even better than with variety shows. Since the number of music mixes is smaller and the takes are longer—typically 2–3 minutes per track—the replacement process is smoother and more consistent.

Q: Can the AI handle various music genres and cultural nuances?
Yes. For example, if the original music is reggae, the recommended replacements will usually come from the same genre, since tempo, instrumentation, and melody align.

Dubbing & Subtitle Synchronization

Q: How does AI dubbing in GSP work?
GSP’s AI dubbing faithfully replicates the original actor’s voice, tone, and timing in the target language. Powered by Gaudio Spatial Audio, a CES Innovation Award-winning technology, it even matches scene-specific room tone and spatial effects—so a voice in a poolside scene, for example, sounds naturally wet and reverberant, not like a dry studio recording.

Q: How does GSP automate subtitle synchronization?
GSP uses AI to automatically generate and synchronize subtitles with dialogue tracks. Even when only the master file is available, it aligns timing accurately—producing high-quality subtitles that comply with the strict timing guidelines of the largest video streaming platforms—reducing manual adjustment time and ensuring subtitles match the original speech and scene context.

Q: Can subtitle translations be customized?
Yes. Editors and translators can directly review and edit subtitles. Once the real-time collaboration feature is released, multiple team members will be able to refine translations simultaneously without any version conflicts.

Q: Does GSP support multiple languages simultaneously?
Absolutely. GSP can generate and manage subtitles in multiple target languages at once, streamlining international releases.

Cue Sheet Generation

Q: Can cue sheets be exported in standard formats?
Yes. GSP exports cue sheets in industry-standard formats compatible with broadcasters, OTT platforms, and regulatory requirements worldwide.

Localize your content fast with Gaudio Studio Pro

GSP is redefining how broadcasters, studios, and distributors bring stories to global audiences. By combining AI-driven efficiency with industry-standard compliance, it doesn’t just solve the challenges of traditional localization—it opens up new opportunities for revenue, scale, and creativity.

Whether you’re preparing a single indie film or scaling an entire archive, GSP helps you move faster, stay compliant, and connect with audiences worldwide.

👉 Ready to see how GSP can transform your workflow? Let’s make your content truly global.

✉️ Contact us for more information
📖 Read more about Music Replacement
📖 Read more about how GSP handles music copyright issues

2025.08.28
California Just Got Quieter — And That’s a Good Thing (Complies with SB576)

Ever been jolted out of your seat by a blaring ad right after your favorite show? You’re not alone. But if you’re in California, you can look forward to a future with fewer audio-induced jump scares.

Starting July 2026, streaming services that serve California residents will no longer be allowed to blast ads louder than the actual content. That’s thanks to a new law—Senate Bill 576 (SB 576)—recently signed by Governor Gavin Newsom. The law targets IP-based streaming services and sets a clear rule: keep your ads as quiet as the show they’re interrupting.*

*“A video streaming service that serves consumers residing in the state (CA) shall not transmit the audio of commercial advertisements louder than the video content the advertisements accompany.”

From TV to TikTok: Our Soundscape Has Shifted

Remember the days when loud commercials were just a TV thing? That got fixed years ago with the CALM Act, which standardized broadcast loudness levels. But as our viewing habits moved from cable boxes to smart TVs, tablets, and phones, the rules didn’t keep up. Streaming platforms weren’t held to the same standards—until now. As a result, we’ve all grown used to that jarring volume jump between content and ads. It’s been normalized, even though it’s never been okay.

Welcome to the Loudness War 💥

There’s actually a term for this—the Loudness War. It refers to the long-standing trend in media and music production where everyone tries to sound louder than the competition. From ’70s rock albums to the latest K-pop hits, volume has kept creeping up. Ironically, cranking up the loudness often reduces audio quality—but the pressure to stand out keeps pushing it higher. Now add ads to the mix—and with millions in ad dollars on the line, marketers want to make sure their message cuts through. The solution? Just turn up the volume. Louder. Louder still. Until, of course, your ears give up.

A Law Born from Sleepless Nights and Screaming Babies

This new bill wasn’t just inspired by science—it was inspired by sleep-deprived parents. Senator Thomas Umberg, who introduced SB 576, said he was inspired by stories like one from “baby Samantha”—a newborn who was finally rocked to sleep, only to be startled awake by a screaming streaming ad. Every parent who’s been there knows the pain.

“This bill was inspired by baby Samantha and every exhausted parent who’s finally gotten a baby to sleep, only to have a blaring streaming ad undo all that hard work.”
— Senator Thomas Umberg

We Need Laws That Match How We Watch

Streaming isn’t a trend anymore—it’s the norm. But while our devices and apps have evolved rapidly, legislation hasn’t kept pace. Unlike the broadcast world, there are no global loudness standards for streaming—and with countless apps, platforms, and playback environments, the user experience is inconsistent at best (and ear-splitting at worst). That’s why SB 576 matters. It acknowledges that volume isn’t just a feature—it’s a user experience issue, a health issue, even a parenting issue. And it’s time more states and countries followed suit.

Loudness Normalization: Not Just Nice to Have, but Must-Have

So what happens next? This new regulation serves as a gentle but important wake-up call for streaming platforms, content creators, and tech providers. It’s a reminder to be prepared with loudness normalization tools that ensure consistent audio across devices, content types, and formats.

Long before the new regulation came into focus, Gaudio Lab had already addressed this challenge. After years of in-depth research, we developed LM1 (Loudness Normalization 1)—a loudness normalization technology built on a side-chain metadata architecture within a server–client framework. The server stores the original audio asset exactly as it is, even in its compressed form, while extracting only the loudness metadata. Because this metadata is delivered separately as a side-chain, it remains independent of the codec used in streaming services. (more info) The actual loudness adjustment happens on the client side, allowing playback at different target levels tailored to each user’s device and listening environment. Even when program content and ads are managed on separate servers, each can generate and send its own metadata to the client app—which then aligns both to a common reference loudness level. In other words, LM1 already meets the requirements set by California’s SB 576.

Gaudio Lab has also enhanced LM1 for the live-streaming era and continues to refine it in collaboration with Korea’s leading content streaming platforms. It’s worth noting that Gaudio Lab’s loudness solution earned the CES 2023 Innovation Award, a testament to its technological leadership and real-world impact on the audio industry.

Hearing Is Believing — and Protecting

Here’s the bottom line: your ears aren’t infinite-use tools. Prolonged exposure to excessive volume causes real, irreversible hearing damage. In an age where earbuds are practically glued to our heads, we can’t afford to treat sound discomfort as “just part of the experience.” The passage of SB 576 is more than a policy shift—it’s a signal that audio UX matters. And with smarter regulation and tech innovation working together, we can build a media environment that’s easier on the ears—and kinder to the humans listening.

So, what’s next? Our next post will explore the technical foundations and practical applications of LM1, explaining how it can help streaming platforms comply with the new loudness regulation. (Link to the next blog post)
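At its simplest, the client-side alignment step described above reduces to one line of arithmetic: a gain that moves a stream from its measured loudness to the shared reference level. The sketch below assumes a -24 LUFS target (a common broadcast reference); `alignment_gain` is an illustrative helper, not LM1's actual API, and real loudness measurement (per ITU-R BS.1770) is considerably more involved.

```python
def alignment_gain(measured_lufs, target_lufs=-24.0):
    """Linear gain that brings a stream at measured_lufs to target_lufs.

    The player applies gain = 10^((target - measured) / 20) to each stream,
    so program and ads land at the same reference level regardless of how
    loudly each was mastered.
    """
    gain_db = target_lufs - measured_lufs
    return 10 ** (gain_db / 20)

# Program mastered at -24 LUFS, ad at -14 LUFS: the ad is turned down
# by 10 dB so both play back at the same level (what SB 576 asks for).
program_gain = alignment_gain(-24.0)   # 1.0 — already at target
ad_gain = alignment_gain(-14.0)        # ≈ 0.316 — roughly -10 dB
```

Because only metadata travels with each stream, the adjustment can happen entirely in the client app even when program and ads come from different servers.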

2025.10.28