Ensuring SB 576 Compliance in Streaming Ads with LM1 Loudness Metadata

2025.11.04ㆍ by Henney Oh

 

 

1. Introduction: SB 576 and the Challenge for Streaming Services


Beginning July 1, 2026, the State of California will enforce SB 576, which prohibits a “video streaming service” from transmitting the audio of commercial advertisements louder than the video content the advertisements accompany (related blog post).

 
This statute builds upon the federal Commercial Advertisement Loudness Mitigation Act (CALM Act) regime, which applies to television broadcast stations, cable operators, and multichannel video programming distributors but until now excluded internet-protocol streaming. 

Accordingly, streaming platforms (including those offering ad-supported tiers) that serve California consumers must ensure that any advertisement’s audio does not exceed the loudness of the program content with which it is associated. Complying at scale makes robust, automated loudness-management workflows essential.

 

 

2. Why LM1 is the Best Solution for SB 576 Compliance


SB 576 imposes a maximum loudness threshold on commercial advertisements in streaming environments: a video streaming service may not transmit the audio of commercial advertisements louder than the video content the advertisements accompany. 

 

For streaming operators, the key technical challenges include:

  • Ad content and program content may originate via different delivery paths or systems, leading to inconsistent loudness levels when stitched together.
  • Separate production of program and ad assets may result in incomplete or inconsistent loudness metadata.
  • The perceived loudness varies greatly depending on device type, application, and playback context.
  • Unlike linear broadcast, streaming workflows often include dynamic ad insertion (DAI/SSAI), adaptive bitrates, device-specific decoders, and heterogeneous inventory, complicating loudness control.

 

By adopting LM1’s metadata-based approach, streaming services can solve all of these issues efficiently:

  • The ad server generates LM1 loudness metadata for each ad and sends it as a side-chain along with the audio stream. The client compares this metadata against the program’s loudness level and adjusts dynamically in real time (see the sketch after this list).
  • The server no longer needs to remaster or re-encode ads, reducing operational complexity and cost.
  • On the client side, loudness normalization can be achieved simply by referencing LM1 metadata—no heavy computation required.
  • This ensures a consistent audio experience across all devices and platforms.
  • And since LM1 is standardized in ANSI/CTA-2075.1, it’s already a trusted and validated solution for OTT/video streaming loudness management.
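
To make the comparison concrete, here is a minimal Python sketch of the core idea. The function and field names are illustrative, not the actual LM1 wire format or SDK API: the client reads the measured loudness of the program and the incoming ad from the side-chain metadata and caps the ad’s playback gain so it can never exceed the program.

```python
# Minimal sketch of the compare-and-adjust step. Names and values are
# illustrative; the real LM1 format and client behavior are defined by
# ANSI/CTA-2075.1 and Gaudio Lab's implementation.

def ad_gain_db(program_lufs: float, ad_lufs: float) -> float:
    """Gain (dB) to apply to the ad so it never exceeds program loudness."""
    gain = program_lufs - ad_lufs
    return min(gain, 0.0)  # only attenuate; never boost an ad above its master

# Program measured at -24 LUFS, ad at -18 LUFS: attenuate the ad by 6 dB.
print(ad_gain_db(-24.0, -18.0))  # -6.0
```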

 
In short, LM1 simultaneously ensures SB 576 compliance, enhances user experience, and simplifies operations—a proven, standards-based solution ready for deployment.

 

 

3. Technical Background: What is LM1?

 

LM1 is a metadata-driven loudness normalization technology developed by Gaudio Lab, engineered to maintain consistent audio loudness across program and advertisement assets in broadcast, streaming, and on-demand environments.

 

Key Features

 

1. Standardized Technology

  • Established as the standard TTAK.KO-07.0146 (2020) by the Telecommunications Technology Association of Korea.
  • Incorporated into the standard ANSI/CTA-2075.1-2022: Loudness Standard for Over-the-Top Television and Online Video Distribution for Mobile and Fixed Devices – LM1.

2. Side-chain Metadata Structure

  • Generates very compact metadata (typically < 1 KB per asset) that runs as a side-chain alongside the audio stream.
  • Allows the original audio to remain untouched—no alteration of program/ad audio is required.
  • Enables real-time normalization in the client environment with minimal latency.

 

3. Accuracy & Performance

  • Uses the ITU-R BS.1770 standards (up to BS.1770-5) for loudness measurement.
  • Near-zero-latency processing (a peak-limiter delay of typically ≤ 1 ms applies only when limiting is engaged).
  • Supports advanced modes such as Dialogue (Anchor) Normalization, Transparent Mode, and Quality Secure Mode.
  • Ultra-low CPU load, optimized for both server and client environments.
  • Supports all device and application platforms: mobile, smart TV, web, automotive, and embedded systems.
  • Custom integration available for diverse workflows and applications.
  • Proven reliability—used daily by over 50 million end users worldwide.

 

4. How It Works

 

LM1 measures the loudness of each audio/video asset on the server side and generates side-chain metadata (LM1 Metadata). During playback, the client parses this metadata and applies real-time loudness normalization.

 

[Figure 1. LM1 Server–Client Architecture]

 

Metadata Generation

  • The LM1 Metadata Generator measures and analyzes each content item’s loudness (ITU-R BS.1770-based).

  • Generates a < 1 KB LM1 metadata file per content item (a minimal generation sketch follows below).
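
As an illustration of this step, the sketch below measures integrated loudness with the open-source pyloudnorm library (a BS.1770-4 implementation) and packs the result into a compact sidecar. The JSON payload and its field names are hypothetical stand-ins; the actual LM1 metadata format is defined by TTAK.KO-07.0146 and ANSI/CTA-2075.1.

```python
# Sketch of server-side metadata generation. The JSON payload below is a
# hypothetical stand-in for the real LM1 format.
import json

import soundfile as sf     # pip install soundfile
import pyloudnorm as pyln  # pip install pyloudnorm (BS.1770-4 meter)

def generate_lm1_sidecar(audio_path: str, asset_id: str) -> bytes:
    data, rate = sf.read(audio_path)              # decode PCM samples
    meter = pyln.Meter(rate)                      # K-weighted BS.1770 meter
    integrated = meter.integrated_loudness(data)  # gated loudness in LUFS
    payload = {
        "assetId": asset_id,                      # illustrative field names
        "integratedLoudnessLufs": round(integrated, 2),
    }
    return json.dumps(payload).encode("utf-8")    # comfortably under 1 KB

sidecar = generate_lm1_sidecar("ad_0001.wav", "ad_0001")
```

The same generation step applies to program assets when the service has not already normalized them to a fixed target.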

Transmission (Side-chain)

  • The LM1 metadata is transmitted to the client as a separate side-chain alongside the audio stream. It can be embedded as timed metadata appropriate to the delivery protocol — for example, as ID3 timed metadata in HLS streams or as an emsg (Event Message Box) in DASH streams (a packing sketch follows below).
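
For the DASH path, a version-0 emsg box is a plain ISO BMFF structure (ISO/IEC 23009-1), so packing the sidecar into one is straightforward. The sketch below is only an illustration; the scheme URI is hypothetical, and a real deployment would use whatever scheme its players are provisioned to recognize.

```python
# Sketch: wrap an LM1 sidecar payload in a version-0 DASH 'emsg' box.
# The scheme URI is hypothetical.
import struct

def build_emsg_v0(payload: bytes,
                  scheme: bytes = b"urn:example:lm1",  # hypothetical scheme
                  timescale: int = 90000,
                  pts_delta: int = 0,
                  duration: int = 0,
                  event_id: int = 1) -> bytes:
    body = (
        b"\x00\x00\x00\x00"   # FullBox header: version=0, flags=0
        + scheme + b"\x00"    # scheme_id_uri, null-terminated
        + b"\x00"             # value: empty string, null-terminated
        + struct.pack(">IIII", timescale, pts_delta, duration, event_id)
        + payload             # message_data: the LM1 sidecar bytes
    )
    return struct.pack(">I", 8 + len(body)) + b"emsg" + body

box = build_emsg_v0(b'{"integratedLoudnessLufs": -18.2}')
```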

Client Processing

  • The client decodes audio normally while the LM1 parser feeds data to the Loudness Normalizer.

  • Output targets are adjusted per device (e.g., TV = −24 LUFS, Mobile = −16 LUFS), ensuring a consistent listening experience across platforms (see the sketch below).
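
A sketch of this step, with illustrative names and values: the client picks the target for its device class and derives a gain for every segment, program or ad, from the parsed metadata.

```python
# Sketch of client-side normalization. Targets and names are illustrative.

DEVICE_TARGETS_LUFS = {"tv": -24.0, "mobile": -16.0}

def segment_gain_db(measured_lufs: float, device: str) -> float:
    """Gain that brings a program or ad segment to the device target."""
    return DEVICE_TARGETS_LUFS[device] - measured_lufs

# A -23 LUFS program and a -18 LUFS ad on mobile (-16 LUFS target):
print(segment_gain_db(-23.0, "mobile"))  # +7.0 dB
print(segment_gain_db(-18.0, "mobile"))  # +2.0 dB
```

Because program and ad segments are both driven to the same device target, an ad can no longer play back louder than the program it interrupts, which is precisely what SB 576 requires.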

 

The following diagram illustrates how LM1 can be integrated into a Server-Side Ad Insertion (SSAI) workflow to comply with SB 576. (The same concept also applies to client-side ad insertion (CSAI).)

 

 

[Figure 2. LM1 in an SSAI Environment]

 

 

Content Server (Program) - optional

  • Measures the loudness of the program audio and generates the corresponding LM1 metadata.

  • The metadata is then embedded into the audio stream (HLS or DASH) as Timed Metadata.

  • For services that have already normalized the program audio to a specific target loudness level, this process may be omitted.

 

Ad Server

  • Analyzes the loudness of each ad’s audio and generates the corresponding LM1 metadata.

  • Transmits that metadata to the SSAI server via a VAST Extension or a sidecar metadata format.

  • When delivering the ad content, it provides the SSAI server with a VAST XML that carries the LM1 metadata in the Extensions field, together with existing ad metadata such as URL, ID, duration, and tracking information (see the sketch below).
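
The sketch below shows one plausible way to attach the LM1 payload inside VAST’s standard Extensions element using Python’s xml.etree. The extension type string and the base64 encoding are illustrative choices, not a published schema.

```python
# Sketch: attach an LM1 sidecar to a VAST ad response. Assumes a
# namespace-free VAST document; the "LM1" type string is hypothetical.
import base64
import xml.etree.ElementTree as ET

def add_lm1_extension(vast_xml: str, lm1_payload: bytes) -> str:
    root = ET.fromstring(vast_xml)
    inline = root.find(".//InLine")          # first in-line ad in the document
    extensions = inline.find("Extensions")
    if extensions is None:
        extensions = ET.SubElement(inline, "Extensions")
    ext = ET.SubElement(extensions, "Extension", {"type": "LM1"})
    ext.text = base64.b64encode(lm1_payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")
```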

 

SSAI Server (Ad Assembler)

  • The SSAI server combines the program and ad streams into a single continuous playback stream.

  • Within this process, the Manifest Stitcher aligns program and ad segments based on the HLS or DASH manifest, converts the LM1 metadata received from the Ad Server into timed metadata appropriate to the protocol (e.g., ID3 or EXT-X-DATERANGE for HLS, emsg for DASH), and inserts it into the manifest at each ad insertion point (see the sketch below).
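
As a sketch of the HLS side: EXT-X-DISCONTINUITY and EXT-X-DATERANGE are standard playlist tags (RFC 8216), and DATERANGE permits client-defined X- attributes, which makes it a convenient carrier. The attribute name holding the LM1 payload below is hypothetical.

```python
# Sketch: tags a manifest stitcher might emit at an ad insertion point.
# The X-COM-EXAMPLE-LM1 attribute name is hypothetical.
import base64

def ad_break_tags(lm1_payload: bytes, break_id: str, start_date: str) -> list[str]:
    b64 = base64.b64encode(lm1_payload).decode("ascii")
    return [
        "#EXT-X-DISCONTINUITY",
        (f'#EXT-X-DATERANGE:ID="{break_id}",START-DATE="{start_date}",'
         f'X-COM-EXAMPLE-LM1="{b64}"'),
    ]

for tag in ad_break_tags(b'{"integratedLoudnessLufs": -18.2}',
                         "ad-break-1", "2026-07-01T00:00:00Z"):
    print(tag)
```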

 

Client (Player + LM1 SDK)

  • The client uses the LM1 Client SDK to read the LM1 metadata of both advertisements and, optionally, the program content.

  • The SDK analyzes this metadata in real time and automatically adjusts the loudness to maintain a consistent audio level across all playback segments (a purely illustrative wiring sketch follows).
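
Since the LM1 Client SDK’s API surface is not documented in this post, the wiring sketch below is purely hypothetical; it only illustrates where such an SDK sits in a player: parse the side-chain metadata as it arrives, then apply the derived gain to the output chain.

```python
# Purely hypothetical wiring -- every class and method here is a stand-in,
# not the actual LM1 Client SDK API.

class Lm1Normalizer:
    """Stand-in for the SDK: turns parsed metadata into an output gain."""
    def __init__(self, target_lufs: float):
        self.target_lufs = target_lufs

    def gain_db(self, segment_lufs: float) -> float:
        return self.target_lufs - segment_lufs

normalizer = Lm1Normalizer(target_lufs=-16.0)  # e.g., mobile target

def on_timed_metadata(segment_lufs: float, player) -> None:
    # Wired to the player's ID3/emsg callback; set_preamp_db is hypothetical.
    player.set_preamp_db(normalizer.gain_db(segment_lufs))
```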

 

This architecture enables SB 576-compliant loudness control even in dynamic ad insertion environments. Because LM1 metadata is codec-agnostic, it integrates seamlessly into existing SSAI pipelines without requiring infrastructure changes.

 

 

5. Why Side-chain Metadata Matters

 

  • Sends reference information only—no re-encoding or re-mastering needed for the video/audio asset.

  • Existing infrastructure requires minimal modification—metadata generation is lightweight and compatible.

  • Program and ad metadata are maintained separately, supporting independent workflows and broad vendor interoperability. This also enables a single ad asset to be used consistently across multiple content platforms that each define different target loudness levels, without the need for re-encoding or creating separate ad versions.

  • Codec-agnostic: operates seamlessly across AAC, AC-3, and other audio formats — even when multiple codecs coexist within the same program or advertising stream.

  • Metadata is generated directly from measured content, so the values faithfully reflect the actual asset rather than declared or assumed levels.

  • Already deployed across multiple device/app platforms, it is highly scalable and production-ready.

 

 

6. Wrap-Up

 

While SB 576 is a consumer-friendly regulation that improves viewer experience, it also introduces new technical challenges for streaming providers.

 

In fact, the Motion Picture Association and the Streaming Innovation Alliance, representing major platforms like Netflix and Disney, have already expressed concerns about the complexity of implementing the law in practice.

 

LM1 was specifically designed to address these industry-wide issues. When the standard was first defined in 2020, it was built to solve the structural problem of inconsistent loudness across broadcast and OTT ecosystems.

 

If you’re interested in learning more about how LM1 can help your platform prepare for SB 576, please contact us through the Contact Us link.

 

 
