
AI-Driven Robocall Scams: How Fraudsters Are Cloning Voices to Deceive and Steal

Fraudsters are leveraging AI to enhance robocalls, using voice cloning to create convincing scams that mimic friends, loved ones, and business contacts. We spoke with Chris Drake, SVP, Corporate and Business Development at iconectiv, to explore these AI-driven tactics, their impact on digital security, and the steps being taken to combat them.

Chris Drake, SVP Corporate and Business Development, iconectiv

How are fraudsters incorporating AI into their robocall efforts?

Fraudsters are always looking for new ways to scam consumers and businesses alike. AI now allows them to execute robocalls on a larger scale and with more convincing tactics. Chief among these is the ability to clone someone’s voice from even a small sample of it. In fact, research from McAfee found that 52% of Americans share their voice online, expanding the opportunity for illegal voice cloning. As the name implies, these AI-generated voice clones are intended to make the victim think they’re talking to a friend, loved one, company executive, business partner or celebrity, when it’s actually a scammer.

What are some examples of AI-enhanced “deepfake” robocalls?

One example of an AI-enhanced deepfake robocall is a fraudster using an AI voice clone to pose as someone’s child. The fraudster might call the parent claiming to have kidnapped the child and demand a ransom for the child’s safe return, with the cloned voice serving as false proof that the child is indeed in the kidnapper’s custody.

Likewise, using an AI clone and social engineering, a fraudster could call an employee at a particular company and claim to be that person’s boss – demanding that they immediately withdraw funds from a corporate account. 

These tactics have been highly effective. Research from McAfee indicates that 77% of victims of AI-enabled scam calls said they lost money.

How are these scams affecting the security of everyone’s digital identity?

An individual’s digital identity consists of various data points, such as their name, physical address, email address, IP address, biometrics and phone number, among other signals.
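To make this concrete, the sketch below models those signals as a simple record. It is illustrative only; the field names and the idea of counting corroborating signals are assumptions for the example, not a description of any particular identity system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DigitalIdentity:
    """Illustrative bundle of the signals that make up a digital identity."""
    name: str
    phone_number: str                      # increasingly the anchor signal
    email_address: Optional[str] = None
    physical_address: Optional[str] = None
    ip_address: Optional[str] = None
    has_biometric_enrollment: bool = False

    def corroborating_signals(self) -> int:
        """Count how many optional signals are present to back up the identity."""
        optional = [self.email_address, self.physical_address, self.ip_address]
        return sum(1 for s in optional if s) + int(self.has_biometric_enrollment)
```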

These AI-enhanced scams are the newest way fraudsters are exploiting people’s digital identities. 

In particular, when illegal AI-generated calls are made to consumers, these attacks erode their trust in voice communications. Consumers, many of whom are already reluctant to pick up the phone, have a hard time trusting that the caller is actually from a particular business. Likewise, even with information like billing and email addresses, businesses face complexities when trying to confirm that their customers are who they say they are.

This abuse of communications networks threatens to impede the ecosystem’s ability to effectively keep people connected and commerce flowing.

What are service providers, regulators and telecom vendors doing to address this problem?

Regulators such as the FCC are consistently working with communications service providers and telecom vendors to understand how fraudsters are using AI to assist in their illegal activities and to mitigate the impact on consumers and businesses alike. In February 2024, for instance, the FCC issued a declaratory ruling confirming that unsolicited robocalls using AI-generated voices are illegal under the existing restrictions of the Telephone Consumer Protection Act (TCPA).

As a key player in the fight to combat the misuse and abuse of communications networks and to mitigate illegal robocalls and fraud, iconectiv offers solutions that help protect the integrity of the phone number, which has become the key digital identifier for a person or business. That’s because the phone number provides the convenience and simplicity that consumers demand, the reliable, verifiable data that businesses need and the global ubiquity that national registries cannot replicate.

More specifically, iconectiv provides authoritative phone numbering intelligence that can be used to verify the digital identities of consumers and businesses. This helps enterprises protect their brand and revenue while boosting consumer confidence in voice communications. It also helps government bodies protect residents, and helps legitimate businesses assure customers that the business is who it says it is when it calls.
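As a rough illustration of how a business might consume numbering intelligence, the sketch below checks an inbound caller against a registry record before trusting the call. Everything here is hypothetical: the record fields, the in-memory registry and the trust rule are assumptions for the example, not a description of iconectiv’s actual products or APIs.

```python
from dataclasses import dataclass

@dataclass
class NumberRecord:
    """Hypothetical result of a numbering-intelligence lookup."""
    assigned_to: str        # registered owner of the number
    is_active: bool         # number is currently in service
    recently_ported: bool   # a recent port can be a takeover signal

# Stand-in for an authoritative registry; a real deployment would query a service.
_EXAMPLE_REGISTRY = {
    "+12025550123": NumberRecord("Example Bank", is_active=True, recently_ported=False),
}

def should_trust_caller(phone_number: str, claimed_business: str) -> bool:
    """Trust the caller ID only when the registry record corroborates the claim."""
    record = _EXAMPLE_REGISTRY.get(phone_number)
    if record is None:
        return False  # unknown numbers get no benefit of the doubt
    return (
        record.is_active
        and not record.recently_ported
        and record.assigned_to.lower() == claimed_business.lower()
    )

print(should_trust_caller("+12025550123", "Example Bank"))   # True
print(should_trust_caller("+12025550123", "Another Store"))  # False: claim not corroborated
```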

What can individuals do to protect themselves?

Fraudsters use urgency, fear and intimidation to trick their victims. Now that fraudsters can clone voices using AI, it has become much harder to distinguish truth from fraud. As a general rule, people should not automatically trust that the person on the other end of the line is who they claim to be. Individuals should independently verify what they’re being told. If employees, for example, think their boss is calling with questionable requests, they should verify those claims with other members of the company to determine whether it might be a scam. Likewise, if someone believes a loved one is in trouble, they should hang up and call that person back, or contact other friends or family members, rather than acting on impulse.

Moreover, there are mechanisms in place to build consumer confidence in voice calls and text messages. For example, consumers looking to engage with their favorite stores, their bank or their pharmacy can opt in to receive text messages via trusted SMS short codes. These text messages, which come from 5- or 6-digit numbers, are much more reliable because businesses must go through a strict vetting process to obtain a short code, and consumers must opt in to receive the messages.
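Because short codes are simply 5- or 6-digit sender IDs, a messaging client can recognize them with a trivial check. The sketch below is a minimal illustration of that rule under the description above; it classifies the sender type only and says nothing about whether a given code actually passed vetting.

```python
def classify_sender(sender_id: str) -> str:
    """Classify an SMS sender ID; vetted short codes are 5- or 6-digit numbers."""
    digits = sender_id.lstrip("+")
    if digits.isdigit() and 5 <= len(digits) <= 6:
        return "short_code"       # issued through a vetting process; opt-in required
    if digits.isdigit():
        return "standard_number"  # ordinary full-length phone number
    return "alphanumeric"         # alphanumeric sender IDs; rules vary by country

# Example usage
print(classify_sender("727725"))        # -> short_code
print(classify_sender("+12025550123"))  # -> standard_number
```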
