Music Industry Faces Challenges in Combatting AI’s Unauthorised Use of Content
The music industry is actively combating the theft and misuse of its content by generative AI, challenging platforms, legal systems, and lawmakers—but the battle is far from won.
Recently, Sony Music revealed that it had demanded the removal of 75,000 deepfakes—simulated images, songs, or videos that closely mimic real content—underscoring the scale of the problem.
Despite claims by the information security firm Pindrop that AI-generated music carries "telltale signs" and can be readily detected, such fakes remain rampant.
Pindrop, which specialises in voice analysis, said:
"Even when it sounds realistic, AI-generated songs often have subtle irregularities in frequency variation, rhythm and digital patterns that aren't present in human performances."
A quick search on platforms like YouTube or Spotify reveals examples such as a fabricated 2Pac rap about pizzas or an Ariana Grande cover of a K-pop song she never performed, highlighting the extent of AI’s intrusion into the music world.
Sam Duboff, Spotify's lead on policy organisation, expressed:
"We take that really seriously, and we're trying to work on new tools in that space to make that even better."
YouTube said it is "refining" its own ability to spot AI dupes, and could announce results in the coming weeks.
Jeremy Goldman, an analyst at Emarketer, noted:
"The bad actors were a little bit more aware sooner," leaving artists, labels and others in the music business "operating from a position of reactivity."
He added that he trusts YouTube to take the problem seriously:
"YouTube, with a multiple of billions of dollars per year, has a strong vested interest to solve this. You don't want the platform itself, if you're at YouTube, to devolve into, like, an AI nightmare."
Intellectual Property Rights at Risk
Beyond concerns over deepfakes, the music industry is increasingly alarmed by the unauthorised use of its content to train generative AI models like Suno, Udio, and Mubert.
Last year, several major labels filed a lawsuit in a New York federal court against the parent company of Udio, accusing it of using copyrighted sound recordings to develop its technology with the intent of attracting listeners, fans, and potential licensees of the copied material.
More than nine months later, the case has yet to move forward significantly, and a similar lawsuit against Suno, filed in Massachusetts, remains stalled.
At the heart of these legal battles is the concept of fair use, which allows limited use of copyrighted material without permission. If courts accept the AI companies' fair-use defence, it could significantly weaken intellectual property protections.
Joseph Fishman, a law professor at Vanderbilt University, stated:
"It's an area of genuine uncertainty."
However, any initial rulings may not be conclusive, as conflicting decisions from different courts could ultimately bring the issue before the Supreme Court.
In the interim, companies developing AI-generated music continue to use copyrighted works to train their models, prompting the question of whether the fight is already lost.
According to Fishman, it may be too early to judge the outcome: while many models are currently trained on protected content, these models evolve rapidly, and it remains uncertain whether court decisions will affect future versions.
Little Progress for Labels, Artists, and Producers in Legislation
In the legislative arena, labels, artists, and producers have faced limited success in addressing the challenges posed by AI.
Although several bills have been introduced in the US Congress, none have yielded significant outcomes.
A few states, particularly Tennessee—home to much of the influential country music industry—have enacted protective legislation, notably concerning deepfakes.
However, opposition from figures like Donald Trump, who has championed deregulation, especially in AI, poses another hurdle.
Notably, Meta has urged the administration to "clarify that the use of publicly available data to train models is unequivocally fair use."
Should Trump's White House adopt this stance, it could tip the scales in favour of AI developers and undermine the interests of music professionals, whatever the courts ultimately decide.
The situation is similarly complex in the UK, where the Labour government is considering overhauling laws to allow AI companies to use publicly available online content to train models unless rights holders explicitly opt out.
In response, over a thousand musicians, including Kate Bush and Annie Lennox, released an album titled Is This What We Want? in February, featuring silence recorded in various studios, as a protest against these efforts.
For analysts like Goldman, AI is likely to continue disrupting the music industry, particularly as long as the sector remains fragmented and disorganised in its response.
He lamented:
"The music industry is so fragmented. I think that that winds up doing it a disservice in terms of solving this thing."