A new wave of AI-powered phishing attacks has surfaced, leveraging deepfake technology to impersonate YouTube’s CEO, Neal Mohan, in an effort to compromise content creators’ accounts. The attack, which has alarmed cybersecurity experts and the platform’s creator community, highlights the growing sophistication of cybercriminal tactics in the age of artificial intelligence.
The Scam: AI-Generated Trust Manipulation
The phishing campaign begins with an email that appears to originate from an official YouTube address, notifying creators that a private video has been shared with them. When opened, the video features an AI-generated likeness of Neal Mohan, meticulously crafted to mimic his voice, appearance, and mannerisms.
In the fraudulent video, the deepfake Mohan claims that YouTube is implementing new monetization policies and instructs creators to take specific actions—such as clicking on links, entering login credentials on a fake website, or downloading malicious software.
This attack exploits the trust that content creators place in official communications from YouTube, making it an especially dangerous form of credential theft. Once an attacker gains access to an account, they can use it to spread misinformation, conduct additional phishing attempts, or monetize fraudulent activities.
YouTube’s Response: A Warning to Creators
In response to the growing threat, YouTube has issued a warning to its creator community, stating that it does not use private videos to disseminate official information.
“We’re aware that phishers have been sharing private videos to send false videos, including an AI-generated video of YouTube’s CEO Neal Mohan announcing changes in monetization. YouTube and its employees will never attempt to contact you or share information through a private video,” the company stated in an official announcement.
YouTube further urged users to treat any privately shared video claiming to be from the platform with extreme skepticism.
“If a video is shared privately with you claiming to be from YouTube, the video is a phishing scam. Do not click these links as the videos will likely lead to phishing sites that can install malware or steal your credentials,” Rob from YouTube cautioned.
Cybersecurity Experts Weigh In: The Growing Threat of Deepfakes
Security professionals warn that this attack marks a shift in phishing strategies, demonstrating how AI is making fraudulent schemes increasingly difficult to detect.
Gabrielle Hempel, Security Operations Strategist at Exabeam, emphasized the concerning evolution of deepfake attacks:
“A lot of the early deepfake attacks we have seen involved audio impersonation only or manipulated footage that already existed. This is a worrying development because it involves a fabricated video that is pretty convincing and really shows the lengths to which people are going to make phishing more effective.
Looking for inconsistencies in quality still seems to be the most effective way to spot deepfakes, although this is becoming harder as the technology gets better as well. Unnatural facial movements, words not matching the mouth, and background glitches are usually tell-tale signs.
The barrier to accessing tools that allow for sophisticated attacks like these is becoming so low. It is both easy and affordable to do this, which makes it fair game for really anyone. Detection is really struggling to keep up with these attacks. There’s no great solution that will do it without human eyes on the footage, and even that is becoming less reliable.”
Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4, noted that while the tactics are evolving, the core principles of social engineering remain unchanged:
“This latest phishing scam targeting YouTube creators is a reminder that social engineering tactics don’t need to be new—just more convincing. The use of deepfake videos of YouTube’s CEO isn’t groundbreaking; scammers have long exploited our trust in authority figures to manipulate emotions like curiosity or greed. What has changed is the ease and accessibility of AI, which makes these scams appear more polished and credible.
According to Egress (2024), 82% of phishing kits now include deepfake capabilities, democratizing this technology for any cybercriminal with the right motivation. This means low-effort scams can now look far more legitimate, making vigilance more important than ever.”
She added that the key to defense remains digital mindfulness and skepticism:
“The key defense remains the same: digital mindfulness and a zero-trust mindset. Pause before reacting impulsively, particularly if it triggers an emotion or existing bias, verify independently, and never assume legitimacy just because something looks real. AI may enhance deception, but our best defense is still critical thinking and security vigilance.”
How to Protect Your Account from Deepfake Phishing Scams
As AI-powered cyberattacks continue to advance, users must adopt proactive security measures to protect themselves from phishing scams:
Verify Communications: YouTube has confirmed that it will never share official information through private videos. Always double-check the source before engaging with such content.
Look for Deepfake Red Flags: Pay attention to unnatural facial movements, lip-syncing errors, or subtle distortions in deepfake videos.
Use Two-Factor Authentication (2FA): Enabling 2FA on the Google account behind your YouTube channel can prevent unauthorized access even if your login credentials are stolen.
Avoid Clicking Suspicious Links: If an email or message urges you to take immediate action, always verify the legitimacy of the source before clicking any links; checking where a link actually points (see the sketch after this list) is a quick first test.
Report Suspicious Content: If you receive a suspected phishing attempt, report it to YouTube or Google through their official channels.
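For creators comfortable with a short script, the Python sketch below illustrates the idea behind that link check: compare the host a link actually points to against a small allow-list of Google- and YouTube-owned domains before opening it. The TRUSTED_DOMAINS set and looks_official helper are hypothetical names introduced here for illustration; this is a minimal sanity check, not an official YouTube tool, and it cannot catch redirects or shortened URLs on its own.

```python
from urllib.parse import urlparse

# Hypothetical allow-list for illustration only; YouTube does not publish such a list.
TRUSTED_DOMAINS = {"youtube.com", "google.com"}

def looks_official(url: str) -> bool:
    """Return True only if the URL's host is a trusted domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

if __name__ == "__main__":
    for link in (
        "https://studio.youtube.com/channel/settings",
        "https://youtube-monetization-update.example.com/login",  # look-alike phishing domain
    ):
        verdict = "plausibly official" if looks_official(link) else "treat as phishing"
        print(f"{link} -> {verdict}")
```

Because attackers can hide destinations behind open redirects or URL shorteners, a host check like this supplements, rather than replaces, the verification and skepticism described above.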
The Future of AI-Powered Cyber Threats
This latest deepfake phishing attack serves as a stark reminder that AI is not only revolutionizing legitimate industries but also transforming cybercrime. As AI tools become more accessible, deepfake-driven scams will likely become more frequent, targeting a broader range of victims.
For now, the best line of defense remains vigilance, education, and the implementation of strong cybersecurity practices. In an era where seeing is no longer believing, skepticism is an essential survival skill in the digital world.