
Flash Report: Deepfake Attacks Pose a Growing Threat to MFA


Key Findings

  • The use of synthetic media to bypass multi-factor authentication (MFA) systems is a growing threat to organizational security.
  • Attackers use social engineering techniques to collect data from users for generating realistic audio and video content.
  • AI-based deepfake technology can deceive MFA systems that rely on voice, face, or behavior recognition, enabling attackers to access sensitive information, accounts, and systems.
  • The use of social engineering tactics to collect data further emphasizes the need for continuous user education and awareness training.

Analyst Commentary

Threat actors have increasingly leveraged phishing techniques against which traditionally strong MFA offers little or no protection. These include adversary-in-the-middle (AiTM) techniques, MFA fatigue[1]—as popularized by Lapsus$—and OAuth consent phishing.[2] Phishing is now routinely used to bypass MFA in Microsoft 365: frameworks such as Evilginx2 steal login credentials and session cookies, granting attackers initial access while sidestepping MFA. While certificate-based authentication and FIDO2 can mitigate these attacks, many organizations still rely on time-based one-time passwords (TOTPs) and push notifications for MFA.[3] Attackers, who constantly innovate and evolve their tactics, exploit these gaps to compromise networks.
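To see why TOTP-based MFA is phishable while FIDO2 is not, it helps to recall what a TOTP code actually is: a short number derived only from a shared secret and the current clock (RFC 6238, built on the HOTP construction of RFC 4226). Anything a user can read off their authenticator, an AiTM proxy can relay in real time. The sketch below is a minimal, illustrative implementation using only the Python standard library—not any vendor's code:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian 8-byte counter,
    # then dynamic truncation to a short decimal code.
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30) -> str:
    # RFC 6238: TOTP is just HOTP keyed by the current 30-second window.
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // step)
```

Because the code is valid for the whole time window regardless of where it is typed, a phishing page can simply forward it to the real login service. FIDO2 resists this because the authenticator's signed assertion is cryptographically bound to the origin of the requesting site, so a proxy on a look-alike domain receives an assertion the legitimate service will reject.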

As technology advances, so does the sophistication of cyberattacks. In recent years, advances in artificial intelligence (AI) and machine learning have enabled threat actors to bypass MFA by generating synthetic media, often delivered through Adversary-in-the-Middle (AiTM) techniques. MFA has been widely adopted to prevent unauthorized access to sensitive information, accounts, and systems. However, attackers' use of AI-based deepfake technology introduces new risks for MFA systems that rely on biometric or behavioral data.

The creation of synthetic media to deceive biometric and behavioral MFA systems is a growing threat to organizational security. Synthetic media can fool systems that rely on voice, face, or behavior recognition, allowing attackers to access sensitive information, accounts, and systems despite MFA being in place. Attackers use social engineering techniques such as phishing and vishing to collect data from users, which is then used to generate realistic audio and video content for these attacks. This trend poses a significant threat to organizations and their data.

Recommendations

  • In response to this growing threat, ZeroFox recommends additional security measures, such as behavioral biometrics, which uses machine learning algorithms to analyze a user’s unique behavioral patterns to confirm their identity.
  • ZeroFox Intelligence recommends implementing phishing-resistant MFA methods that support Fast ID Online v2.0 (FIDO2) and certificate-based authentication in conjunction with a broader Zero Trust initiative.
  • Users are also advised to use strong passwords and regularly change them, avoid clicking on suspicious links, and report any suspected phishing or vishing attempts to their IT security team.
  • It is crucial for users to remain vigilant, stay up-to-date with emerging threats, and implement effective security measures to safeguard their personal and organizational data.
  • ZeroFox recommends remaining vigilant and denying MFA requests not triggered explicitly by logging in or requesting device enrollment. These requests are typically immediate and should not randomly appear throughout the day.
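The behavioral-biometrics recommendation above can be illustrated with a toy sketch. This is not ZeroFox's product or any specific vendor's algorithm—just the core idea, under the simplifying assumption that a user's typing rhythm is summarized by inter-keystroke intervals: enroll a baseline, then score how many standard deviations a new session's rhythm deviates from it.

```python
from statistics import mean, stdev

def anomaly_score(baseline: list[float], session: list[float]) -> float:
    # Compare a session's mean inter-keystroke interval (seconds)
    # against the user's enrolled baseline, in standard deviations.
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(session) - mu) / sigma

# Enrolled profile: intervals recorded during normal use
baseline = [0.18, 0.21, 0.19, 0.22, 0.20, 0.17, 0.23]

# A session with a markedly slower rhythm than the enrolled profile
score = anomaly_score(baseline, [0.45, 0.50, 0.48])
```

A high score suggests the typing rhythm does not match the enrolled user, and the session can be challenged or blocked. Production systems combine many such signals (keystroke dynamics, mouse movement, device posture) with far more robust models, but the principle is the same: authenticate the behavior, not just a replayable credential.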

Scope Note

ZeroFox Intelligence is derived from a variety of sources, including—but not limited to—curated open-source accesses, vetted social media, proprietary data sources, and direct access to threat actors and groups through covert communication channels. Information relied upon to complete any report cannot always be independently verified. As such, ZeroFox applies rigorous analytic standards and tradecraft in accordance with best practices and includes caveat language and source citations to clearly identify the veracity of our Intelligence reporting and substantiate our assessments and recommendations. All sources used in this particular Intelligence product were identified prior to 3:00 PM (EST) on February 14, 2023; per cyber hygiene best practices, caution is advised when clicking on any third-party links.

[1] MFA fatigue is a technique whereby threat actors use social engineering (frequently phishing) to obtain user credentials, then bombard the user's device with MFA push notifications in an attempt to get the user to approve one and thereby confirm their identity.

[2] hXXps://www.csoonline[.]com/article/3674156/multi-factor-authentication-fatigue-attacks-are-on-the-rise-how-to-defend-against-them.html

[3] hXXps://securityboulevard[.]com/2023/02/threat-actors-turn-to-aitm-to-bypass-mfa/

Tags: Artificial Intelligence, Deepfakes, Incident Response, Threat Intelligence