Introduction
Malicious actors are increasingly leveraging AI-generated SMS and voice messages, known respectively as smishing and vishing [3] [6] [8], to impersonate senior US government officials [5] [7]. This sophisticated campaign targets government employees and their contacts in order to extract sensitive information and gain unauthorized access to accounts.
Description
Malicious actors are increasingly employing AI-generated SMS and voice messages, known respectively as smishing and vishing [3] [6] [8], to impersonate senior US government officials [5] [7]. This ongoing campaign, which began in April 2025 [1] [7], targets current and former federal and state government employees [6] [7], as well as their contacts [4], with the aim of extracting sensitive personal information and gaining access to accounts. The FBI has issued a warning about these sophisticated tactics [2], advising individuals to be cautious and not to assume that messages claiming to come from trusted figures are authentic [7].
Deceptive messages often claim urgent business or security matters [1], building false trust before luring victims into clicking malicious links that move the conversation to another platform, where malware or fraudulent login pages may await. This can lead to the disclosure of sensitive information, including login credentials and multi-factor authentication codes [1]. The use of AI-powered voice cloning has surged [9], enabling criminals to create realistic impersonations of public figures from minimal audio samples [9]; its use in fraudulent schemes reportedly rose 442% between the first and second halves of 2024 [9]. Notable incidents include hackers impersonating IT help desk workers to gain access to devices [9], as well as red-team exercises in which AI voice spoofing was used to infiltrate internal networks [9]. Once access is obtained [6], attackers can exploit compromised accounts to target other officials or associates, extracting further sensitive information or financial resources [6].
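The lure pattern described above, urgency language paired with a link that redirects the conversation, lends itself to simple automated screening. The sketch below is purely illustrative: the keyword list, shortener domains, and function name are assumptions chosen for the example, not part of any FBI guidance, and real filtering systems use far richer signals.

```python
import re

# Illustrative heuristics only: urgency keywords and link-shortener domains
# loosely based on the lure tactics described above. Not an exhaustive list.
URGENCY_KEYWORDS = {"urgent", "immediately", "verify", "suspended", "security alert"}
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def smishing_risk_flags(message: str) -> list[str]:
    """Return human-readable warning flags for a suspicious SMS message."""
    flags = []
    lowered = message.lower()
    # Flag urgency/credential-lure keywords (sorted for deterministic output).
    for kw in sorted(URGENCY_KEYWORDS):
        if kw in lowered:
            flags.append(f"urgency keyword: {kw!r}")
    # Flag links that hide their true destination behind a URL shortener.
    for match in URL_RE.finditer(message):
        domain = match.group(1).lower()
        if domain in SHORTENER_DOMAINS:
            flags.append(f"shortened link: {domain}")
    return flags

# Example: flags the urgency keywords and the shortened link.
print(smishing_risk_flags(
    "URGENT: your account is suspended, verify at https://bit.ly/x1"
))
```

A benign message such as "See you at lunch" produces no flags; a screen like this is only a first-pass triage signal, since attackers can trivially vary wording and register fresh domains.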
Smishing techniques involve using software to generate phone numbers that are not linked to specific subscribers [8], allowing attackers to masquerade as trusted individuals [8]. Meanwhile, vishing increasingly utilizes AI-generated audio to convincingly impersonate well-known figures [8], enhancing the credibility of these schemes [8]. Both smishing and vishing share similarities with spear phishing [8], which targets individuals through email [8]. The FBI has previously highlighted the increasing use of generative AI in financial fraud schemes [4], which can create convincing text [4], images [4], audio [1] [4] [5] [7] [8] [9], and video to deceive victims [4].
Because limited protections exist against these technologies [7], vigilance, identity verification through official channels, and multifactor authentication are essential for securing accounts [7]. Users are advised to scrutinize phone numbers and message content, watch for unusual phrasing or distorted audio [5], and avoid clicking on suspicious links [5]. Establishing a secret passphrase with family members can also help confirm identities in future communications [5]. It is crucial for individuals to distinguish legitimate communications from AI-generated content [8], as the latter can closely mimic real voices [8]. When the authenticity of a communication is in doubt [8], individuals should consult security officials or the FBI for assistance [8]. The FBI emphasizes the need for vigilance against unsolicited messages [2], particularly those claiming to be from senior officials [2]: threat actors can spoof known phone numbers [2], complicating detection and lulling users into a false sense of security [2].
Conclusion
The rise of AI-generated smishing and vishing poses significant threats to information security, particularly for government officials and their associates [6]. As these technologies evolve, the potential for misuse in fraudulent schemes increases, necessitating heightened awareness and proactive measures. Individuals must remain vigilant, verify communications through official channels, and employ robust security practices to mitigate risks. The ongoing development of AI technologies underscores the need for continuous adaptation of security strategies to protect against emerging threats.
References
[1] https://www.cleveland.com/nation/2025/05/fbi-warns-of-scam-using-ai-to-impersonate-senior-us-officials.html
[2] https://hackread.com/fbi-warn-ai-voice-scams-impersonate-us-govt-officials/
[3] https://www.usatoday.com/story/news/politics/2025/05/16/fbi-ai-messages-officials-smishing-vishing/83666372007/
[4] https://www.nbclosangeles.com/news/business/money-report/fbi-warns-of-ai-voice-messages-impersonating-top-u-s-officials/3701991/
[5] https://interestingengineering.com/culture/fbi-warns-ai-hackers-impersonating-us-officials
[6] https://www.infosecurity-magazine.com/news/us-officials-impersonated-sms/
[7] https://cyberscoop.com/fbi-warns-of-ai-deepfake-phishing-impersonating-government-officials/
[8] https://www.ic3.gov/PSA/2025/PSA250515
[9] https://www.cybersecuritydive.com/news/fbi-us-officials-impersonated-text-ai-voice/748334/