
Combating Deepfake Fraud is a Growing Challenge for Organizations

Christopher Eaton | January 10th, 2025

During a virtual conference call with organizational leaders from across the globe, employees of the renowned engineering firm Arup were victimized by an elaborate scam. Unbeknownst to attendees, threat actors had infiltrated the meeting and impersonated numerous individuals on the call. The leaders were fake. Other team members, invented. The Chief Financial Officer (CFO) was present in name only, as his image and voice were AI-generated. The CFO’s voice was a spot-on clone, dispelling any suspicion of a simple social engineering attempt. So convinced of the call’s legitimacy was one Hong Kong-based finance employee that they wired $25 million, in a string of transactions, to accounts established by the threat actors [1].

This is but one of many anecdotes demonstrating the rise of an emerging cyber threat: the deepfake.

What is a Deepfake?

A portmanteau of deep learning and fake, the term deepfake refers to a type of synthetic media (image, video, or audio) designed to seem legitimate. Often, but not always, deepfakes are used to manipulate or convince the target(s) that something false is, in fact, true. These invented media can mimic existing artifacts or be entirely new, authentic-appearing content. For instance, the Arup fraud involved altered video and audio of the CFO, making it appear he was saying things he never said. While deepfakes can be used for entertainment, educational, and creative purposes, they also pose significant risks of misinformation, fraud, and identity theft.

The above-mentioned deepfake is among the most complex examples, though there are more common forms people can expect to encounter regularly. For instance, one might receive a call from somebody who claims to be, and sounds remarkably like, a team leader. There might be a video call from someone claiming to be company security or IT, demanding personal information to initiate a fraudulent password change. Or one might be confronted with a deepfake video of the CEO giving an invented presentation, fake articles about organizational changes, or manipulated pictures. Each of these may be the culmination of, or the initial stage in, a more elaborate social engineering scheme.

Whatever the medium or reason, deepfakes are increasingly problematic for individuals and organizations.

The Challenges of Deepfakes

The rise of AI technology has been accompanied by broad application and rapid user adoption. For context, ChatGPT, an industry leader in generative AI technology, reached 100 million active users in two months, outpacing TikTok (nine months) and Instagram (2.5 years) [2]. Two years later, the hundreds of millions of individuals who engage with ChatGPT and similar software include malicious actors who use the technology to create and disseminate fraudulent content with relative ease and impressive realism.

Even those who do not engage with generative AI technologies are largely aware of its existence and potential negative impact. Every election now comes with warnings of potential deepfake-perpetrated fraud [3], and alerts about industry-specific threats seem ubiquitous [4]. Mere knowledge of dangerous deepfakes is often enough to erode trust in legitimate institutions and can leave users suspecting that everything is fake. This confusion and general skepticism are, in a sense, a tremendous opportunity for threat actors to exploit overwhelmed users.

One potential blind spot threat actors can exploit rides on the psychological phenomenon of confirmation bias: individuals tend to believe something is true either because they want it to be true or because it appears to be true [5]. Combined with the ease of impersonating authority figures in appearance, speech, or text, and the simplicity of inventing content, the average user is at a disadvantage. How are we supposed to know what is true and what is fake?

Confirmation bias is additionally problematic when combined with deepfakes because individuals are not good at distinguishing the real from the fraudulent (even though we like to think we are). According to research published by the University of Amsterdam, people express great confidence in their ability to identify deepfakes and avoid being convinced by them but, when confronted with them, cannot detect them with significant accuracy [6]. It is basically a coin flip. And whether we accurately detect a deepfake or not, additional research indicates our “attitudes and intentions” can be greatly impacted by content we know is fake [7, 8].

Deepfakes and the Cyber Landscape

Because users are often the proverbial ‘front line’ of an organization’s defense against cyberattacks, it is incumbent on every individual to become more educated on the threats and understand how to respond properly in the face of potential deepfake content. The first and simplest step is for organizations to mandate security awareness training that exposes users to threats, offers defensive insights, and reinforces positive behavior. As the Arup example shows, enhanced vigilance and layered security solutions are required, in addition to ongoing training, to properly combat deepfake media. The error of one person should not result in a $25 million loss.

Another troubling deepfake trend is that some AI programs have manipulated stolen credentials to bypass established protections, like biometric scans. In one such instance, an individual’s identification card was stolen and then altered, enabling threat actors to “use the falsified photo to bypass [the employee’s] institution’s biometric verification systems” [9]. As AI technology continues its rapid increase in sophistication, the avenues of fraud expand with it: retinal scans, facial recognition, voice confirmation.

With deepfakes accounting for more than 40% of all “fraud attempts across video biometrics,” it is imperative for private, public, and corporate institutions to coordinate in ways that limit the breadth and impact of deepfake threats [10].

One way this is already happening is through the inclusion of watermarks in AI-generated content. Google, whose AI software Gemini is used for both legitimate and deepfake purposes, is among those leading this charge [11]. This effort, though it still requires some user education, should limit the reach and impact of fraudulent media.
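
To make the underlying idea concrete, the following is a minimal, hypothetical sketch of invisible watermarking using least-significant-bit (LSB) embedding in Python. This is a toy illustration of the concept only: production systems such as Google’s SynthID rely on far more robust statistical techniques, and every name in this sketch (embed_watermark, extract_watermark, the stand-in pixel list) is invented for illustration.

```python
# Toy illustration of invisible watermarking via least-significant-bit (LSB)
# embedding. Real watermarking systems are far more robust; this sketch only
# demonstrates the concept of hiding a verifiable mark inside media data.

WATERMARK = b"AI"  # hypothetical provenance tag

def embed_watermark(pixels: list[int], mark: bytes = WATERMARK) -> list[int]:
    """Hide each bit of `mark` in the lowest bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the watermark")
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least-significant bit
    return out

def extract_watermark(pixels: list[int], length: int = len(WATERMARK)) -> bytes:
    """Reassemble the hidden bytes from the lowest bit of each pixel."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8 : b * 8 + 8]))
        for b in range(length)
    )

if __name__ == "__main__":
    image = [128, 64, 200, 13, 77, 91, 255, 0] * 4  # stand-in for pixel data
    marked = embed_watermark(image)
    assert extract_watermark(marked) == WATERMARK
    print("watermark detected:", extract_watermark(marked))
```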

Risk Management Through Layered Cybersecurity

While no cybersecurity solution is a panacea, adopting a layered approach can significantly improve an organization’s cyber hygiene. What layers are necessary? The first, which is worth mentioning again, is ongoing security awareness training for every person involved in an organization, including third-party vendors and contractors. To prevent wire fraud like that in the Arup incident, organizations should create funds-transfer processes that require in-person or other out-of-band verification before any money can move, as sketched below.
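
As a hedged sketch of what such a process control could look like, the example below models a transfer request that stays blocked until independent, out-of-band verifications are recorded. The names (TransferRequest, approve_out_of_band) and the two-verifier threshold are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """Illustrative funds-transfer request requiring out-of-band verification."""
    amount: float
    destination: str
    requested_by: str
    verified_by: set[str] = field(default_factory=set)

REQUIRED_VERIFIERS = 2  # assumed policy: requester plus two independent approvers

def approve_out_of_band(req: TransferRequest, verifier: str) -> None:
    """Record a verification performed over a separate, trusted channel
    (in person, or a call placed to a known number -- never the channel
    on which the request arrived)."""
    if verifier == req.requested_by:
        raise ValueError("requester cannot verify their own transfer")
    req.verified_by.add(verifier)

def execute_transfer(req: TransferRequest) -> str:
    """Refuse to move funds until enough independent verifications exist."""
    if len(req.verified_by) < REQUIRED_VERIFIERS:
        raise PermissionError(
            f"blocked: {len(req.verified_by)}/{REQUIRED_VERIFIERS} verifications"
        )
    return f"wired ${req.amount:,.2f} to {req.destination}"

if __name__ == "__main__":
    req = TransferRequest(25_000_000, "acct-HK-001", requested_by="finance_clerk")
    approve_out_of_band(req, "controller")
    approve_out_of_band(req, "cfo_direct_line")
    print(execute_transfer(req))
```

The design point is simply that approval runs over a channel the attacker does not control, so a convincing deepfake on the original call cannot, by itself, move money.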

Multi-factor authentication (MFA) is also necessary, as it protects against credential theft, which remains the single most common means of initial access in a data breach. Another layer is strict enforcement of a complex password policy requiring regular changes. Additionally, newer technologies like liveness detection software help organizations verify video, audio, or photographic evidence by “ensuring that users are dealing with genuine documents rather than digital imitations or photocopies” [12].
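
One-time-code MFA is well specified, so a concrete sketch is possible: the code below implements the standard TOTP algorithm (RFC 6238, HMAC-SHA1 variant) using only Python’s standard library. The shared secret here is a placeholder; real deployments provision a per-user secret and typically accept codes within a small clock-drift window, as shown.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float | None = None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current time step plus/minus `window` steps
    to tolerate clock drift between client and server."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret, now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )

if __name__ == "__main__":
    SECRET = b"placeholder-shared-secret"  # provisioned per user in practice
    code = totp(SECRET)
    print("code:", code, "valid:", verify(SECRET, code))
```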

Beyond these standard cybersecurity measures, contending with deepfakes may require more innovative practices. For instance, exposing employees to simulations involving deepfakes will not only raise awareness of the challenge but also provide real-world scenarios that make identification and proper response more likely. Recognizing the unusual speech patterns, unrealistic edits, and boundary artifacts that are hallmarks of deepfakes requires repeated exposure and practice.

All of these gaps and vulnerabilities can be uncovered and corrected through proactive cybersecurity assessments, including Incident Response Planning, Risk Quantification assessments, and regular Penetration Testing. Furthermore, organizations should work with experienced cybersecurity professionals to establish 24/7 Security Operations Center monitoring of all endpoints and email tenants to safeguard against fraud and unauthorized network access.

Unfortunately, deepfakes are here to stay. As the fraudulent media becomes increasingly sophisticated, it is crucial that individuals and organizations work proactively to safeguard operations and sensitive information. The Arup incident, while not an isolated event, serves as a stark reminder of the potential losses and disruptions that can arise from such threats. Act now to protect your organization and people from the deceptive dangers of deepfakes.

Sources

  1. Magramo, Kathleen. “British Engineering Giant Arup Revealed as $25 Million Deepfake Scam Victim | CNN Business.” CNN, 17 May 2024, https://www.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html.
  2. Hu, Krystal. “ChatGPT Sets Record for Fastest-Growing User Base – Analyst Note.” Reuters, 2 Feb. 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/.
  3. CBS News. “FBI Warns of Deepfake Videos Ahead of Election Day.” https://www.cbsnews.com/video/fbi-warns-of-deepfake-videos-ahead-of-election-day/.
  4. FinCEN. “FinCEN Issues Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” 13 Nov. 2024, https://www.fincen.gov/news/news-releases/fincen-issues-alert-fraud-schemes-involving-deepfake-media-targeting-financial.
  5. Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
  6. Köbis, N. C., Dolezalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11), Article 103364. https://pure.uva.nl/ws/files/67787899/1_s2.0_S2589004221013353_main.pdf
  7. OSF. https://osf.io/preprints/psyarxiv/4ms5a.
  8. Hughes, Sean. “Deepfakes Can Be Used to Hack the Human Mind.” Psychology Today, https://www.psychologytoday.com/us/blog/spontaneous-thoughts/202110/deepfakes-can-be-used-hack-the-human-mind.
  9. Winder, Davey. “Now AI Can Bypass Biometric Banking Security, Experts Warn.” Forbes, https://www.forbes.com/sites/daveywinder/2024/12/04/ai-bypasses-biometric-security-in-1385-million-financial-fraud-risk/.
  10. Entrust Cybersecurity Institute. 2025 Identity Fraud Report. https://www.entrust.com/sites/default/files/documentation/reports/2025-identity-fraud-report.pdf
  11. Shah, Agam. Google’s AI Watermarks Will Identify Deepfakes. 15 May 2024, https://www.darkreading.com/cloud-security/google-ai-watermarks-identify-deepfakes.
  12. Regula. Deepfake Trends 2024. https://static-content.regulaforensics.com/PDF-files/0831-Regula-Deepfake-Research-Report-Final-version.pdf
