Use of AI to Clone Voices for Creative Purposes

Context

∙ Recently, music composer A.R. Rahman used Artificial Intelligence (AI) software to recreate the voices of the late singers Bamba Bakya and Shahul Hameed.

About 

∙ A report by Market US has revealed that the global market for voice cloning applications stood at $1.2 billion in 2022 and is estimated to touch almost $5 billion by 2032, growing at a CAGR of over 15%.
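
Taking the reported figures at face value, the implied growth rate can be checked directly; the rough arithmetic below assumes the 2032 estimate is about $5 billion, which is consistent with a CAGR of roughly 15%.

$$\text{CAGR} = \left(\frac{5.0}{1.2}\right)^{1/10} - 1 \approx 0.153 \approx 15\%$$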

Voice cloning

∙ Voice cloning technology employs sophisticated AI algorithms to replicate the intricacies of human speech patterns. 

∙ This process hinges on training neural networks, a fundamental aspect of artificial intelligence, on extensive datasets of recorded speech (a minimal illustrative sketch appears at the end of this section).

∙ A host of such applications is available online; popular ones include Murf, Resemble and Speechify.

∙ Recently, former Pakistani Prime Minister Imran Khan’s political party used an AI-generated speech in the voice of the now-imprisoned leader in an attempt to rally votes.
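
The training principle described above can be illustrated with a minimal, self-contained PyTorch sketch: a speaker encoder compresses a short reference clip into a "voice embedding", and a synthesizer learns from recorded speech to reproduce that speaker's voice for new text. This is only a conceptual sketch with assumed shapes and a random stand-in dataset, not the pipeline used by Murf, Resemble, Speechify or any other tool named above.

```python
# Minimal sketch of the idea behind voice cloning: a speaker encoder turns a short
# reference clip into a "voice embedding", and a synthesizer is trained on recorded
# speech to reproduce the speaker's mel-spectrogram conditioned on that embedding.
# All layer sizes, shapes and the random "dataset" are hypothetical placeholders.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Compress a reference mel-spectrogram into a fixed-size voice embedding."""
    def __init__(self, n_mels=80, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, ref_mel):                  # (batch, frames, n_mels)
        _, hidden = self.rnn(ref_mel)
        return hidden[-1]                        # (batch, emb_dim)

class Synthesizer(nn.Module):
    """Predict target mel frames from text features plus the voice embedding."""
    def __init__(self, text_dim=64, emb_dim=128, n_mels=80):
        super().__init__()
        self.rnn = nn.GRU(text_dim + emb_dim, 256, batch_first=True)
        self.out = nn.Linear(256, n_mels)

    def forward(self, text_feats, voice_emb):    # (batch, steps, text_dim), (batch, emb_dim)
        cond = voice_emb.unsqueeze(1).expand(-1, text_feats.size(1), -1)
        h, _ = self.rnn(torch.cat([text_feats, cond], dim=-1))
        return self.out(h)                       # (batch, steps, n_mels)

encoder, synth = SpeakerEncoder(), Synthesizer()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(synth.parameters()), lr=1e-3)

# Stand-in "dataset": random tensors in place of real (text, reference clip, target speech) triples.
for step in range(100):
    text_feats = torch.randn(8, 50, 64)          # encoded text / phonemes
    ref_mel    = torch.randn(8, 120, 80)         # short reference clip of the target speaker
    target_mel = torch.randn(8, 50, 80)          # ground-truth speech for that text
    pred = synth(text_feats, encoder(ref_mel))
    loss = nn.functional.l1_loss(pred, target_mel)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time, a few seconds of a target speaker's audio would be passed through the encoder once, and the resulting embedding reused to synthesize arbitrary new sentences in that voice.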

Applications

∙ Preserving legacy: Can keep the voices of loved ones alive for future generations.

∙ Apple introduced a voice cloning feature in iOS 17 intended to help people who may be in danger of losing their voice to a degenerative disease.

∙ Personalized experiences: Custom virtual assistants, interactive storytelling, and more immersive digital interactions.

∙ Speech translation: Prominent tech companies also have a hand in the AI voice game. Recently, Meta launched SeamlessM4T, which can understand nearly 100 languages from speech or text and generate translations in real time.

∙ Accessibility: Can offer voice to those who have lost it or will lose it due to illness or disability.

∙ Song creation: YouTube, similarly, announced Dream Track, which allows creators to produce song clips featuring AI vocals, with permission from pop stars like Demi Lovato, Sia and John Legend.

∙ Creative applications: Enhancing storytelling, audio games, and immersive experiences.

Issues/Concerns

∙ Scams: In April 2023, a family in Arizona, U.S. was threatened into paying a ransom in a fake kidnapping scheme pulled off with an AI-cloned voice.

∙ Under-reporting: Many such cases go unreported, and only a few come to light.

∙ Fake news: Easy access to AI voice cloning tools has also spawned disinformation.

∙ In one instance, an AI-cloned voice of Harry Potter actress Emma Watson was allegedly made to read out a portion of Mein Kampf.

∙ Privacy and consent: Concerns about unauthorized recording and use of voices without consent need to be addressed.

∙ Ethical considerations: Potential for exploitation, manipulation, and emotional harm through impersonation and misuse.

∙ Social implications: Impact on identity, trust, and communication dynamics in the digital age.

∙ Hate speech: Recently, users have flocked to free AI voice cloning tools to generate hate speech in celebrities’ voices.

∙ In another, an AI-cloned voice of conservative political pundit Ben Shapiro was allegedly used to make racist comments about Democratic politician Alexandria Ocasio-Cortez.

India: a major target for AI voice clone scams

∙ A report titled ‘The Artificial Imposter’, published in May last year, revealed that 47% of surveyed Indians have either been a victim of an AI-generated voice scam or know someone who has fallen prey to one.

∙ This is almost twice the global average of 25%. In fact, India topped the list with the maximum number of victims of AI voice scams.

∙ In December, a Lucknow resident fell prey to a cyberattack that used AI to impersonate the voice of the victim’s relative, requesting the person to transfer a substantial amount through UPI.

∙ Indians have been found to be particularly vulnerable to scams of this nature.

∙ According to McAfee, 66% of Indian participants admitted that they would respond to a voice call or phone call that appeared to be from a friend or family member in urgent need of money.

∙ The report also shared that 86% of Indians share their voice data online or via voice notes at least once a week, which makes these tools potent.

Measures

∙ Regulatory frameworks: Robust legal and ethical guidelines are crucial to prevent misuse and protect privacy.

∙ The U.S. Federal Trade Commission is considering the adoption of a recently proposed Impersonation Rule that would help deter deceptive voice cloning.

∙ Technological safeguards: Watermarking and other authentication mechanisms can help identify and verify cloned voices (a toy watermarking sketch follows after this list).

∙ Public awareness and education: Educating the public about voice cloning technology and its potential risks is vital.

∙ The U.S. Federal Trade Commission has also launched a Voice Cloning Challenge, which asked the public to send in ideas to detect, evaluate and monitor cloned voices.

∙ Responsible development and application: Promoting ethical and transparent use of voice cloning for positive societal impact.
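
The watermarking safeguard mentioned above can be illustrated with a toy spread-spectrum scheme: a pseudo-random, low-amplitude pattern keyed by a secret seed is mixed into synthetic audio, and a correlation test later checks whether a clip carries that pattern. The seed, amplitude and threshold below are arbitrary assumptions chosen for a clear demonstration, not values from any real watermarking system.

```python
# Toy spread-spectrum audio watermark: embed a secret, low-amplitude noise
# pattern into generated audio and detect it later by correlation.
# Seed, amplitude and detection threshold are illustrative assumptions.
import numpy as np

SEED, STRENGTH, THRESHOLD = 1234, 0.02, 0.5

def watermark_pattern(n_samples: int) -> np.ndarray:
    rng = np.random.default_rng(SEED)            # the secret key is the RNG seed
    return rng.choice([-1.0, 1.0], size=n_samples)

def embed(audio: np.ndarray) -> np.ndarray:
    """Add the keyed pattern at a low amplitude."""
    return audio + STRENGTH * watermark_pattern(len(audio))

def detect(audio: np.ndarray) -> bool:
    """Correlate against the keyed pattern; score is ~1 if marked, ~0 otherwise."""
    pattern = watermark_pattern(len(audio))
    score = float(np.dot(audio, pattern)) / (STRENGTH * len(audio))
    return score > THRESHOLD

if __name__ == "__main__":
    clean = np.random.randn(160000)              # ten seconds of stand-in audio at 16 kHz
    marked = embed(clean)
    print("clean detected: ", detect(clean))     # expected: False
    print("marked detected:", detect(marked))    # expected: True
```

A production system would hide the pattern perceptually and survive compression and re-recording, but the basic idea is the same: only someone holding the key can reliably verify that a clip was machine-generated.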

Way Ahead

∙ The future of voice cloning hinges on responsible development and utilization, balancing its potential benefits with ethical considerations and safeguards to avoid its misuse.
