California Just Got Quieter — And That’s a Good Thing (SB 576)

2025.10.28 · by Dewey Yoon

Ever been jolted out of your seat by a blaring ad right after your favorite show? You’re not alone. But if you’re in California, you can look forward to a future with fewer audio-induced jump scares.
 
Starting July 2026, streaming services that serve California residents will no longer be allowed to blast ads louder than the actual content. That’s thanks to a new law — Senate Bill 576 (SB 576) — recently signed by Governor Gavin Newsom. The law targets IP-based streaming services and sets a clear rule: keep your ads as quiet as the show they’re interrupting*.
*SB 576: a video streaming service that serves consumers residing in the state (California) shall not transmit the audio of commercial advertisements louder than the video content the advertisements accompany.
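To make the rule concrete, here is a minimal sketch of what a pre-flight compliance check could look like, using the open-source pyloudnorm library for ITU-R BS.1770-style loudness measurement. The file names and the check_ad_compliance helper are hypothetical, and SB 576 itself does not prescribe a specific measurement workflow; this is just the idea in code.

```python
# Hypothetical pre-flight check: is this ad louder than the program it interrupts?
import soundfile as sf
import pyloudnorm as pyln

def integrated_loudness(path: str) -> float:
    """Measure integrated loudness (LUFS) per ITU-R BS.1770."""
    data, rate = sf.read(path)   # float samples, shape (n,) or (n, channels)
    meter = pyln.Meter(rate)     # K-weighted BS.1770 meter
    return meter.integrated_loudness(data)

def check_ad_compliance(program_path: str, ad_path: str) -> bool:
    """True if the ad is no louder than the program it accompanies."""
    program_lufs = integrated_loudness(program_path)
    ad_lufs = integrated_loudness(ad_path)
    print(f"program: {program_lufs:.1f} LUFS, ad: {ad_lufs:.1f} LUFS")
    return ad_lufs <= program_lufs

# Hypothetical file names, for illustration only:
# check_ad_compliance("program.wav", "ad.wav")
```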
 

From TV to TikTok: Our Soundscape Has Shifted

 
Remember the days when loud commercials were just a TV thing? That got fixed years ago with the CALM Act (passed in 2010), which standardized broadcast loudness levels. But as our viewing habits moved from cable boxes to smart TVs, tablets, and phones, the rules didn’t keep up. Streaming platforms weren’t held to the same standards. Until now.
 
As a result, we’ve all grown used to that jarring volume jump between content and ads. It’s been normalized, even though it’s never been okay.
 

Welcome to the Loudness War 💥


There’s actually a term for this — The Loudness War. It refers to the long-standing trend in media and music production where everyone tries to sound louder than the competition. From ’70s rock albums to the latest K-pop hits, volume has kept creeping up. Ironically, cranking up the loudness often reduces audio quality — but the pressure to stand out keeps pushing it higher.
 
Now add ads to the mix — and with millions in ad dollars on the line, marketers want to make sure their message cuts through. The solution? Just turn up the volume. Louder. Louder still. Until, of course, your ears give up.
 


A Law Born from Sleepless Nights and Screaming Babies


This new bill wasn’t just inspired by science; it was born from sleep-deprived parents. Senator Thomas Umberg, who introduced SB 576, pointed to stories like that of “baby Samantha,” a newborn who was finally rocked to sleep, only to be startled awake by a screaming streaming ad. Every parent who’s been there knows the pain.


 “This bill was inspired by baby Samantha and every exhausted parent who’s finally gotten a baby to sleep, only to have a blaring streaming ad undo all that hard work.”
— Senator Thomas Umberg
 

 

We Need Laws That Match How We Watch


Streaming isn’t a trend anymore; it’s the norm. But while our devices and apps have evolved rapidly, legislation hasn’t kept pace. Unlike the broadcast world, streaming has no legally mandated loudness rules, and with countless apps, platforms, and playback environments, the user experience is inconsistent at best (and ear-splitting at worst).
 
That’s why SB 576 matters. It acknowledges that volume isn’t just a feature — it’s a user experience issue, a health issue, even a parenting issue. And it’s time more states and countries followed suit.
 

Loudness Normalization: Not Just a Nice-to-Have, but a Must-Have

 
So what happens next? This new regulation serves as a gentle but important wake-up call for streaming platforms, content creators, and tech providers. It’s a reminder to be prepared with loudness normalization tools that ensure consistent audio across devices, content types, and formats.
 
Long before the new regulation came into focus, Gaudio Lab had already addressed this challenge. After years of in-depth research, we developed LM1 (Loudness Normalization 1), a loudness normalization technology built on a side-chain metadata architecture within a server–client framework. The server keeps the original audio asset untouched, even in its compressed form, and extracts only the loudness metadata. Because this metadata is delivered separately as a side-chain, it stays independent of whatever codec the streaming service uses. (more info)
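To illustrate the server-side half of that architecture, here is a rough sketch of generating a tiny loudness sidecar per asset while leaving the encoded audio bytes untouched. The JSON field names are stand-ins invented for this example; the actual LM1 metadata layout is defined by its own specification.

```python
# Illustrative server-side step: measure each asset once and emit a small
# side-chain metadata file. The JSON fields are stand-ins, not the LM1 format.
import json
import soundfile as sf
import pyloudnorm as pyln

def generate_loudness_sidecar(audio_path: str, content_id: str, out_path: str) -> dict:
    """Measure an asset once and write a small side-chain metadata file."""
    data, rate = sf.read(audio_path)
    meter = pyln.Meter(rate)  # ITU-R BS.1770-based measurement
    sidecar = {
        "content_id": content_id,  # stand-in field name
        "integrated_loudness_lufs": round(meter.integrated_loudness(data), 2),
    }
    with open(out_path, "w") as f:
        json.dump(sidecar, f)  # a file like this is tiny; the audio is untouched
    return sidecar

# e.g. generate_loudness_sidecar("episode_01.wav", "ep01", "ep01.loudness.json")
```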
 
The actual loudness adjustment happens on the client side, allowing playback at different target levels tailored to each user’s device and listening environment. Even when program content and ads are managed on separate servers, each can generate and send its own metadata to the client app, which then aligns both to a common reference loudness level. In other words, LM1 already meets the requirements set by California’s SB 576. Gaudio Lab has also enhanced LM1 for the live-streaming era and continues to refine it in collaboration with Korea’s leading content streaming platforms.
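And a rough sketch of the client-side half, under the assumption of simple per-device targets (the −24 LUFS TV and −16 LUFS mobile values are common examples, not numbers mandated by SB 576): the client reads the measured loudness of the program and of each ad from their side-chain metadata and computes the gain that brings both to the same target.

```python
# Illustrative client-side step: align program and ad to one reference level.
import numpy as np

# Example per-device targets; SB 576 does not mandate specific numbers.
DEVICE_TARGETS_LUFS = {"tv": -24.0, "mobile": -16.0}

def gain_for(measured_lufs: float, target_lufs: float) -> float:
    """Linear gain that moves content from its measured loudness to the target."""
    gain_db = target_lufs - measured_lufs
    return 10.0 ** (gain_db / 20.0)

def normalize_block(pcm: np.ndarray, measured_lufs: float, device: str) -> np.ndarray:
    """Scale a block of float PCM; a real client would also peak-limit when boosting."""
    return pcm * gain_for(measured_lufs, DEVICE_TARGETS_LUFS[device])

# An ad measured at -14 LUFS is attenuated (~0.79x) while a program at -23 LUFS
# is boosted (~2.24x), so both land at the same -16 LUFS target on mobile:
ad_gain = gain_for(-14.0, DEVICE_TARGETS_LUFS["mobile"])
program_gain = gain_for(-23.0, DEVICE_TARGETS_LUFS["mobile"])
```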
 
It’s worth noting that Gaudio Lab’s loudness solution earned the CES 2023 Innovation Award, a testament to its technological leadership and real-world impact on the audio industry.

Hearing Is Believing — and Protecting

 
Here’s the bottom line: your ears aren’t infinite-use tools. Prolonged exposure to excessive volume causes real, irreversible hearing damage. In an age where earbuds are practically glued to our heads, we can’t afford to treat sound discomfort as “just part of the experience.”
 
The passage of SB 576 is more than a policy shift — it’s a signal that audio UX matters. And with smarter regulation and tech innovation working together, we can build a media environment that’s easier on the ears — and kinder to the humans listening.

 

So, what’s next?

Our next post will explore the technical foundations and practical applications of LM1, explaining how it can help streaming platforms comply with the new loudness regulation. (Link to the next blog post)

 
