AI Voice Cloning: Who Owns Your Voice?
At this year’s Hay Festival, I experienced something deeply unsettling. A tool called Vocalize took a brief sample of my voice and used AI to generate convincing audio, making me say things I would never endorse. It was disturbingly accurate, mimicking my tone and cadence with eerie precision.
The purpose of the demonstration was clear—to highlight the dangers of unregulated AI voice cloning. But sitting there, listening to “my voice” say words I never spoke, I couldn’t help but feel that this wasn’t just a hypothetical concern. AI voice replication is no longer a futuristic concept—it’s happening now, and it’s raising urgent questions about ownership, consent, and creative control.
Vocalize uses AI to recreate my voice after being given just a few phrases in a Practice Room
As a voice actor (on a very small scale, compared to many), broadcaster, and podcast host, my voice is my craft, my livelihood, and an integral part of my identity. Whether narrating audiobooks, voicing adverts, or hosting radio shows, I rely on the uniqueness of my sound to connect with audiences. But as AI advances, voice artists—myself included—are faced with a new challenge: how do we maintain control over our own voices when AI can replicate them at scale?
The Vocalize demonstration made me realize just how easy it is for technology to take someone’s voice and manipulate it, often without consent. And real-world cases are proving this fear valid.
ScotRail’s AI-generated announcer, "Iona," sparked controversy when voiceover artist Gayanne Potter discovered that her voice had been used to train the AI without her permission. Despite her protests, ScotRail has refused to remove the AI voice, exposing a legal loophole: while copyright protects written and recorded works, UK law does not clearly safeguard a person’s voice from AI replication.
Similarly, Stephen Fry was horrified to learn that AI had cloned his voice from his Harry Potter audiobook recordings and used it in a historical documentary without his consent. He warned that AI voice cloning could be exploited for misinformation, political messaging, or even explicit content, all while impersonating real people.
For professionals who rely on their voices for their careers—actors, narrators, broadcasters, and creatives—the implications are staggering.
The legal landscape surrounding AI voice cloning remains deeply flawed. While intellectual property laws protect written works, music, and trademarks, voices exist in a grey area. Some pressing concerns include:
- Consent and ownership: Should individuals have legal rights over their own voice?
- Compensation: If AI-generated voices replace human performers, should the original artists be compensated?
- Misinformation and fraud: AI-generated voices can be used for scams, impersonation, and reputational damage.
Jurisdictions such as the United States and the European Union are beginning to explore regulations, but there is no universal framework to protect individuals from unauthorized AI voice use.
What This Means for Broadcasters, Voice Actors, and Creatives
For radio hosts, podcasters, and voice actors, AI voice cloning presents both opportunities and threats. While AI-generated voices could assist in content creation, allowing broadcasters to automate repetitive tasks, they also pose serious risks to creative integrity, compensation, and ethical boundaries.
Beyond creative industries, AI-generated voices are increasingly being used in fraud and deception. Cybercriminals have cloned voices of CEOs and government officials to orchestrate scams, exploiting the emotional familiarity of a trusted voice.
Where Do We Go From Here?
At Hay Festival, Vocalize made one thing clear—voice cloning isn’t just a distant possibility, it’s happening now. Should AI-generated voices always require explicit permission? Should voice cloning be licensed as intellectual property, just like an artist’s painting or a writer’s novel?
As someone whose voice is integral to my professional identity, I believe this conversation needs urgency. AI should enhance human creativity, not replace it or exploit it without consent. We need regulations that protect the integrity of our voices before AI speaks for us in ways we never intended.