
AI Music Replacement: Solve Music Copyright Issues & Go Global with Gaudio Lab

2025.01.24 by Hailey Moon

Case 1.

The Netflix documentary <Dear Jinri>, which delves into the untold story of the celebrity Sulli, encountered a critical issue during its production. The film aimed to incorporate a self-recorded video of Sulli, left on her phone, as a key element in the end credits. However, the video, much like a personal diary, also captured the background music La Vie en Rose by Edith Piaf. Due to unresolved copyright issues for the song, the filmmakers were unable to use this deeply emotional scene.

 

Case 2.

A popular South Korean variety show was exported to Taiwan, where it became a major success. However, complications arose when the music used in the show couldn't be cleared for copyright in Taiwan, forcing the production company to pay substantial royalties. This unexpected cost ended up surpassing the revenue earned from the show's export, creating a net financial loss.

 

Case 3.

A well-known vlogger encountered problems while trying to upload a video of their live experience at a soccer match. The stadium’s background music included copyrighted songs, which triggered YouTube’s Content ID system for copyright infringement. As a result, the video could not be uploaded as planned.

 

These cases highlight real-life inquiries received by Gaudio Lab, a company dedicated to solving diverse audio challenges. 

 

Whether for individual creators like YouTubers or professional broadcasters, producing video content often requires dealing with unexpected situations where music must be removed or replaced. And these examples are just the beginning.

 

Let’s explore how Gaudio Lab resolved these music copyright challenges.

 

 


(Photo = Still from Dear Jinri)

 

 

 

Why Replace Music?

 

The most common reason is to address music copyright issues.

 

Broadcast networks typically pay copyright fees for music used during the initial airing of a program. To elaborate, most networks pay a fixed fee to music copyright management organizations, which grants them unlimited rights to use songs managed by the organization—but only for broadcasts on their own channels.

 

However, when the same content is distributed to platforms like Netflix or FAST (Free Ad-Supported Streaming TV) channels, additional music licensing must be secured in each country where the content is streamed. This can incur substantial costs. Even if the content is already complete and ready to sell, licensing fees can easily exceed the revenue potential.

 

To navigate such copyright hurdles, creators have historically relied on the following options:

  • (1) Abandon exporting the content.
  • (2) Remove the affected portions entirely (impossible for variety shows with music throughout).
  • (3) Replace the music with copyright-free alternatives (a process called "video re-editing").

 

 

For option (3), the process has been entirely manual: editors use basic audio tools to isolate the music, search limited databases for similar alternatives, and seamlessly reinsert the replacements into the original video. This painstaking, labor-intensive process often takes two to three weeks for a single 60-minute video.

 

Individual creators like YouTubers face similar obstacles. When dealing with videos containing copyrighted music, they often (1) abandon uploading the video, (2) edit out the affected portions entirely, or (3), more recently, use music separation technology to remove the music while retaining other audio elements.

 

Music copyright is so crucial for platforms like YouTube that their Content ID management system detects copyrighted music in all uploads and suggests these same three options.

 

 

 

Gaudio Lab's Music Replacement: A Revolutionary AI Solution for Copyright Issues


Gaudio Lab’s latest AI-powered video audio editing solution, Gaudio Music Replacement, revolutionizes how these challenges are addressed. 

 

 

How does it differ from traditional manual methods?

 

 

First, users upload their video to Music Replacement, where the AI separates the audio into dialogue, music, and effects tracks (commonly referred to as DME). Gaudio Lab’s audio separation technology, GSEP, is widely recognized as best-in-class; at CES 2024, Gaudio Lab won a CES Innovation Award for its product Just Voice, which enables real-time voice isolation.
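GSEP’s internals aren’t public, but the separation step can be illustrated with a common technique from the source-separation literature: applying per-source soft masks to a mixture spectrogram. In a real system a neural network predicts the masks; in this minimal sketch the masks (and the tiny “spectrogram”) are made-up toy data so the example runs on its own.

```python
import numpy as np

def separate_dme(mix_spec: np.ndarray, masks: dict) -> dict:
    """Split a mixture spectrogram into stems by applying per-source soft masks.

    Soft masks are non-negative and sum to 1 at every time-frequency bin,
    so the stems always add back up to the original mixture.
    """
    total = sum(masks.values())
    assert np.allclose(total, 1.0), "masks must partition the mixture"
    return {name: mix_spec * m for name, m in masks.items()}

# Toy example: a 4x4 "spectrogram" split into dialogue/music/effects.
rng = np.random.default_rng(0)
mix = rng.random((4, 4))
raw = {s: rng.random((4, 4)) for s in ("dialogue", "music", "effects")}
norm = sum(raw.values())
masks = {s: r / norm for s, r in raw.items()}  # normalize so masks sum to 1

stems = separate_dme(mix, masks)
recon = stems["dialogue"] + stems["music"] + stems["effects"]
assert np.allclose(recon, mix)  # the three stems reconstruct the mix
```

The reconstruction property is what makes this family of approaches attractive for editing workflows: nothing is lost in the split, so any stem can be swapped out and the rest recombined.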

 

Next, the separated music track is analyzed by AI, which identifies individual pieces of music and divides them into segments. The system then uses a music recommendation engine to search its extensive music database for similar tracks. This database contains tens of thousands of songs across various genres, all cleared for global use. Unlike low-quality AI-generated music, these tracks are created and uploaded by professional artists worldwide, ensuring top-notch quality.
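The recommendation step can be sketched as a nearest-neighbor search over embedding vectors. The vectors below are hypothetical 2-D stand-ins for learned musical features (genre, mood, tempo, and so on); the production engine is certainly richer, but Euclidean distance over embeddings is the standard core idea.

```python
import numpy as np

def find_similar(query: np.ndarray, library: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k library embeddings closest to the query.

    Smaller Euclidean distance = more similar track, under the assumption
    that the embedding space places similar-sounding music close together.
    """
    dists = np.linalg.norm(library - query, axis=1)  # distance to each track
    return np.argsort(dists)[:k]                     # k nearest indices

# Toy library of 2-D "embeddings"; track 1 and 2 are near the query.
library = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])
query = np.array([1.0, 1.0])
print(find_similar(query, library, k=2))  # -> [1 2]
```

At real scale this brute-force scan would be replaced by an approximate nearest-neighbor index, but the ranking criterion stays the same.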

 

Finally, the AI seamlessly re-mixes the replaced music track with the original dialogue and effects tracks to produce the final edited video.
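A minimal sketch of the remix step, assuming a simple RMS gain-match: the replacement music is leveled to the original music stem before being summed with the dialogue and effects tracks. Production remixing also involves EQ, ducking under dialogue, and mastering; the function name and stems here are illustrative, not Gaudio Lab’s actual API.

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def remix(dialogue: np.ndarray, effects: np.ndarray,
          new_music: np.ndarray, original_music: np.ndarray) -> np.ndarray:
    """Level-match the replacement music to the original stem, then sum."""
    gain = rms(original_music) / max(rms(new_music), 1e-12)
    return dialogue + effects + gain * new_music

# Toy stems: silence for dialogue/effects, constant levels for music.
dialogue = np.zeros(8)
effects = np.zeros(8)
original_music = np.full(8, 0.5)
new_music = np.full(8, 0.25)  # quieter replacement track

final = remix(dialogue, effects, new_music, original_music)
assert np.allclose(final, 0.5)  # replacement boosted to the original level
```

Gain-matching the music stem preserves the original balance between music and dialogue, which is a large part of why a replacement can go unnoticed.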

 

 

 

With Music Replacement, creators no longer need to (1) abandon their content, (2) cut out affected portions, or (3) spend weeks and substantial resources on manual re-editing. Instead, they simply upload their video to Music Replacement, wait briefly, and… Boom! They receive a video where the original musical intent is preserved but all copyright issues are resolved. (Blind tests conducted with professionals found that many couldn’t distinguish the edited version from the original!)

 

The clients mentioned earlier have already adopted Music Replacement as a core solution. Exporting content internationally involves more than resolving music copyright issues—it often requires broader content localization efforts. Clients naturally request additional services like dubbing, subtitles, cue sheet generation, and video editing.

 

“I want to handle everything within Gaudio!”

 

As demand grows, Gaudio Lab’s research and product teams get busy adding new features, enhancing both convenience and performance. The name Music Replacement no longer fully captures the product’s capabilities, leading the product owner to contemplate a new name. 😅

 

Most of Music Replacement’s features are powered by AI. In our next post, we’ll dive deeper into the technical details of each process and showcase the convenient tools in the Gaudio Music Replacement Editor. Stay tuned!

 
