The Sound of AI: Litigating the Future of Music

By James L. Walker Jr.

While workers in numerous industries contemplate their possible replacement by artificial intelligence, the Taylor Swifts of the world have reason for concern now that Xania Monet, a fully AI-generated artist, has debuted on a Billboard chart and earned a multimillion-dollar record deal.

AI is rapidly reshaping the music and entertainment industries, redefining norms and processes for the creation of music, the protection of voice and likeness rights, and the legal frameworks governing copyright. At the heart of this transformation lie two major innovations: generative AI music platforms and voice-cloning technologies. 

Generative AI platforms use machine learning models trained on vast catalogs of songs and recordings to produce original compositions that mimic human creativity. Voice-cloning technologies replicate vocal timbre, tone, and inflection with uncanny precision, enabling the creation — or imitation — of performances that sound indistinguishable from those of human artists. Both technologies have been the source of mounting concerns among music artists, singers, songwriters, and other creators, particularly in a challenging regulatory environment. 

Innovation Sparks Legal Storm

To safeguard artists’ work, major record labels such as Sony Music and Universal Music Group (UMG) are now deploying neural “fingerprinting” technology for detecting AI-derived infringement. In September 2025, Spotify announced that it had removed approximately 75 million AI-generated “spammy tracks” over a 12-month period. The company said its new spam filter will help prevent bad actors from generating royalties that otherwise could be distributed to professional artists and songwriters. 

It is clear that generative AI music systems have begun to disrupt traditional music composition, enabling hybrid workflows in which humans interact with machine learning models to produce coherent, commercially viable works. Xania Monet, for example, was created by poet Telisha “Nikki” Jones, who writes the lyrics and uses Suno, Inc.’s generative platform to produce the music.

These developments have prompted significant litigation. AI companies, including music generators Suno and Uncharted Labs (the company behind the AI platform Udio) as well as Anthropic, have faced multiple lawsuits. In June 2025, for example, independent artists filed a class action suit against Suno, alleging that its generative music model unlawfully scraped copyrighted works and produced unlicensed derivatives. In October 2025, a group of artists similarly sued both Suno and Uncharted Labs over the unlicensed use of the plaintiffs’ sound recordings and musical works to train their AI models. The plaintiffs also sought protection of their rights under the Illinois Biometric Information Privacy Act.

These actions followed suits brought in 2024 by the Recording Industry Association of America and its affiliated record labels against Suno and Udio for copyright infringement. In these cases, the battleground has been “fair use”: whether training on copyrighted recordings to generate competing music in the same marketplace is a transformative use or an actionable infringement. On October 29, 2025, UMG announced that it had settled its AI copyright infringement lawsuit against Udio. In addition to a compensatory settlement payment, the parties entered into license agreements for UMG’s recorded music and publishing catalogs.

In June 2025, federal courts weighed in through two closely watched cases in the U.S. District Court for the Northern District of California: Bartz v. Anthropic PBC and Kadrey v. Meta Platforms, Inc. In Bartz, Judge William Alsup held that model training using lawfully obtained books was “exceedingly transformative” and therefore fair use, likening it to a human reading and learning process. Nevertheless, the court required trial proceedings on Anthropic’s retention of millions of pirated works. In Kadrey, Judge Vince Chhabria similarly granted summary judgment for Meta, finding the training to be fair use largely because the plaintiffs had failed to demonstrate market harm from the model’s training activity.

In September 2025, Anthropic reached a $1.5 billion preliminary settlement with a certified class of authors. Given the outcome in Bartz, record labels sought to amend their case against Anthropic to focus on stream-ripping as piracy rather than on transformative use.

The U.S. Copyright Office has added complexity to the issue through Part 3 of its Report on Copyright and Artificial Intelligence, issued in May 2025. The office emphasized that high-volume copying for AI training “goes beyond established fair use boundaries” but declined to draw categorical lines, leaving courts to engage in case-by-case analysis.

Parallel developments abroad underscore the global reach of these disputes. In Germany, GEMA, the national music rights society, sued OpenAI in 2024 before the Munich Regional Court for alleged unauthorized use of lyrics and musical works to train its models. In November 2025, the court sided with GEMA, finding that the unlicensed use of those protected musical works violated German copyright law. This decision has been widely regarded as the first major European judicial ruling to address whether generative AI developers can be held liable for using copyrighted music to train their models without obtaining licenses. Similarly, Indian music publishers, including Saregama and T-Series, have brought litigation against OpenAI, alleging infringement of their catalogs in model training. 

Cumulatively, these cases highlight the unsettled state of copyright doctrine as applied to AI in the music industry. Plaintiffs are advancing hybrid theories of liability while defendants argue that training constitutes transformative fair use. As litigation proceeds, the pressure to reach licensing arrangements — similar to those reported between major labels and AI companies — will only intensify.

Voice Cloning, Deepfake Voice Modeling

AI-driven voice synthesis has evolved far beyond early text-to-speech engines and is now capable of replicating the subtleties of human performance with remarkable precision. Contemporary voice-cloning models analyze tone, rhythm, breath, and inflection, enabling digital reproductions that can simulate a singer’s style, accent, and emotional timbre. These tools have legitimate applications in dubbing, accessibility, and content localization.

However, AI also enables unauthorized impersonations, posthumous “performances,” and monetized voice reproductions that infringe upon both copyright and publicity rights. The proliferation of deepfake vocals — songs that use the synthetic voices of recognizable artists without consent — has triggered growing concern within the recording industry and urgent calls for regulation.

A dispute concerning Michael Jackson’s posthumous catalog helped set the stakes for future litigation. In Serova v. Sony Music Entertainment, a fan alleged that several tracks on Jackson’s 2010 posthumous album Michael were performed by an impersonator rather than Jackson himself and that marketing them as authentic violated California consumer protection laws. The California Supreme Court held that Sony’s marketing statements about authenticity could be actionable under state consumer protection laws. Sony and the Jackson estate later removed the contested songs from streaming platforms and settled the case, underscoring the risks of mislabeling posthumous works.

In 2024, Tennessee enacted the groundbreaking Ensuring Likeness Voice and Image Security (ELVIS) Act, expressly prohibiting the commercial use of an individual’s cloned voice or likeness without authorization. The law provides for both criminal penalties and civil remedies and extends liability to technology providers that distribute AI tools primarily designed for unauthorized replication. The ELVIS Act represents an expansion of traditional right-of-publicity protections, codifying a digital-age “voice right” for performers.

Similar regulatory discussions are underway abroad. In December 2024, the UK’s Department for Culture, Media, and Sport launched a consultation on AI and performer rights, assessing whether new rules should require consent for the training on or reproduction of vocal likenesses. Findings were expected at the end of 2025. Like the generative music lawsuits in the United States, these efforts aim to clarify whether the reproduction of a human voice, when generated by an AI model, constitutes a derivative work or an unauthorized performance under copyright law.

Meanwhile, organizations such as the Recording Academy and SAG-AFTRA have urged the inclusion of AI-specific contractual clauses in collective bargaining agreements to safeguard performers who want to maintain control over synthetic versions of their voices. 

The broader legal questions — how to balance innovation with identity protection, and whether cloning a voice for expressive use constitutes speech or theft — remain unresolved. As voice synthesis grows more sophisticated, the boundaries between homage, parody, and misappropriation will continue to blur, leaving artists and courts to grapple with what authenticity means in an age of algorithmic creativity. 

Considerations for Artists

Artists, songwriters, and other creatives who post their work freely online are the most vulnerable to AI scraping. It pays to protect one’s work before posting by registering the copyright. Creatives who have contracts can audit them to make sure there is an AI clause: language that defines AI training, derivative works, and voice cloning and that requires the artist’s consent before the catalog or likeness can be used for machine learning.

Lawsuits are important even if they don’t succeed: they create precedent and signal to tech companies that artists will not sit on the sidelines. Artists who believe their work has been stolen should document any measurable economic harm, such as streaming declines.

The Anthropic settlement shows that litigation has teeth, and future settlement funds may compensate those whose works were used in training. It remains to be seen whether courts will order model “unlearning” as an equitable remedy, but artists can start asking for it in settlement negotiations today.

James L. Walker Jr. is an entertainment attorney with more than 30 years of experience. He is the author of This Business of Urban Music and taught entertainment law for over a decade at the University of Connecticut School of Law and Boston College School of Law. He can be reached at www.walkerandassoc.com. Awa Nyambi (Howard University School of Law, class of 2025) contributed to this article.
