
London, December 02, 2025
Jorja Smith’s record label, FAMM, has condemned the unauthorized use of artificial intelligence to clone the singer’s voice on the viral dance track “I Run” by Haven, which gained popularity on TikTok before being removed from streaming platforms over copyright infringement and impersonation. The incident has triggered calls for urgent regulation of AI-generated music and raised serious questions about consent and ownership in the evolving digital landscape.
Incident Overview
In October 2025, an AI-generated song titled “I Run” by an artist named Haven surfaced on TikTok, quickly gaining viral status in the UK and US markets. The track notably featured a vocal performance that resembled Jorja Smith’s distinctive voice, but according to her label, no permission was granted for the use of her vocals. Following widespread attention and controversy, major streaming platforms removed the song due to copyright violations and the unauthorized voice cloning involved.
A second version of the song later appeared with newly recorded vocals, but FAMM likewise condemned it as infringing on Jorja Smith’s rights, underscoring the label’s firm stance against any unauthorized exploitation.
Label’s Statement and Legal Action
FAMM, the label representing Jorja Smith, has publicly denounced the release and asserted that the use of AI to clone Smith’s voice without consent constitutes a violation of intellectual property and personal rights. The label is pursuing compensation for the unauthorized use and is lobbying for clearer policies and stronger regulation around AI-generated content.
Highlighting broader concerns, FAMM emphasized that “this isn’t just about Jorja. It’s bigger than one artist or one song.” They warn that unchecked use of AI voice cloning could establish a damaging precedent, threatening artist autonomy and creative control across the music industry.
Industry Context and Legal Gray Areas
The case comes amid escalating debates about the role and limits of AI in music creation. Some AI companies argue that training models on existing copyrighted music qualifies as “fair use,” a legal defense that remains ambiguous and highly disputed. Currently, no comprehensive regulatory framework exists to govern the use of AI-generated voices in commercial music, leaving artists vulnerable to misuse and potential exploitation.
FAMM called for urgent, coordinated regulatory efforts to address these gaps, noting that AI technology is advancing faster than legislation can keep pace. The risk, they argue, is that without clear rules and transparency—such as proper labeling of AI-generated works—similar rights violations will become commonplace.
Significance and Wider Implications
This controversy marks a watershed moment in the debate over artist rights and the future of creative content in the age of AI. It raises critical questions about ownership, consent, and the ethical use of technology that can convincingly replicate human voices.
As AI tools become more accessible, the potential for unauthorized reproduction and exploitation grows, putting pressure on legal systems, industry players, and policymakers to rethink traditional intellectual property protections. The outcome of this dispute could shape how record labels, artists, and regulators approach AI-generated music, possibly influencing the development of global standards and protections for artists.
Ultimately, the Jorja Smith case illustrates the urgent need to balance innovation with respect for creative rights, establishing safeguards that protect artists’ voices—both literally and figuratively—from unauthorized copying or imitation in a rapidly changing digital environment.

