MAS highlights three urgent threats for financial institutions in Singapore

The Monetary Authority of Singapore (MAS) has sounded the alarm on deepfake cyber risks in the financial sector. Its new information paper warns that AI-generated “deepfake” media have emerged as a growing threat to banks and financial institutions, already enabling fraud, defeating security controls, and spreading falsehoods with costly consequences. The report identifies three key threat areas: defeating biometric authentication, impersonation scams, and misinformation, each backed by real incidents that have cost institutions millions. MAS’s message is clear: financial institutions (FIs) must treat deepfake threats as a high priority and implement robust measures to counter them.

Deepfakes defeating biometric security

Biometric authentication systems are at risk. Deepfake technology can create highly realistic fake faces, voices, and even fingerprints capable of tricking the biometric recognition used in customer onboarding and login processes. This risk is not theoretical: recent incidents in Asia have proven it. For example, in August 2024 an Indonesian bank’s digital loan KYC process was duped by AI-generated fake photos, which fooled the system’s facial recognition into accepting a synthetic face as a live customer. That same year, attackers in Vietnam and Thailand combined malware with deepfakes to bypass several banks’ selfie authentication for online banking, successfully logging in as customers by mimicking their faces. These cases show how criminals are using deepfakes to impersonate legitimate users and bypass biometric checks, opening the door to illicit accounts, loans, and transactions.

MAS’s recommended defense: implement strong liveness detection in biometric verification systems. Traditional face or voice recognition can be fooled by a static fake image or a recording; liveness detection adds checks, such as randomized prompts to move or speak, to ensure a real, live person is present. Regular stress-testing of biometric systems against deepfake attack scenarios is also crucial to identify vulnerabilities before criminals do. By hardening biometric authentication with such measures, FIs can raise the hurdle for deepfake fraud attempts.
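
To make the idea concrete, below is a minimal Python sketch of the challenge-response pattern behind liveness detection. The challenge list, latency threshold, and verification interface are illustrative assumptions rather than anything prescribed in MAS’s paper; a production system would observe the user’s actual response through a biometric vendor’s SDK. The key design point is unpredictability: a pre-recorded deepfake clip cannot anticipate which action will be requested.

```python
import secrets

# Hypothetical challenge set; real systems use vendor-specific prompts.
CHALLENGES = ["turn_left", "turn_right", "blink_twice", "read_digits_aloud"]

def issue_liveness_challenge() -> str:
    """Pick an unpredictable challenge so a pre-recorded deepfake clip
    cannot anticipate the required action."""
    return secrets.choice(CHALLENGES)

def verify_liveness(observed_action: str, expected_action: str,
                    response_ms: int, max_latency_ms: int = 3000) -> bool:
    """A live user must perform the randomly chosen action within a short
    window; replayed or synthesized footage tends to fail one of these checks."""
    if observed_action != expected_action:
        return False  # wrong action: likely a replay or a static image
    if response_ms > max_latency_ms:
        return False  # too slow: media may have been generated on the fly
    return True

if __name__ == "__main__":
    challenge = issue_liveness_challenge()
    # In a real flow, observed_action comes from the camera/microphone pipeline.
    print(challenge, verify_liveness(observed_action=challenge,
                                     expected_action=challenge,
                                     response_ms=1200))
```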

Impersonation scams powered by deepfakes

The most high-impact deepfake attacks to date have come in the form of social engineering and impersonation scams. In these schemes, fraudsters use deepfake-generated video or audio to pose as trusted individuals, such as CEOs, senior executives, or clients, and trick employees into bypassing normal controls. MAS highlights that malicious actors can now appear in a video call or voicemail looking and sounding like an authority figure, exploiting human trust to initiate unauthorized actions. The consequences can be severe. One striking example occurred in Hong Kong: in January 2024, an employee of a multinational company was targeted with a deepfake video call featuring his company’s CFO. The fake CFO, joined by deepfaked colleagues, instructed him to transfer funds, and he paid out US$25 million to the fraudsters before the scam was uncovered.

Singapore has already faced a similar threat. In March 2025, a local finance director nearly lost half a million dollars after scammers impersonated the company’s CEO on a Zoom call. The finance director was contacted via WhatsApp and then joined what appeared to be a legitimate video conference with the CEO and other colleagues, except the “CEO” was a deepfake. Under urgent instructions, she initiated a transfer of about US$499,000 to a purported partner company. Only when the impostors immediately demanded a second, much larger transfer (US$1.4 million) did she grow suspicious, preventing further loss. Thanks to swift intervention by the bank and police, the initial funds were eventually recovered. This incident underscores how convincing deepfake scams can be, and how quickly large sums can be siphoned away if staff are pressured to skip verification steps.

Beyond financial fraud, deepfakes have also been used to impersonate job candidates and infiltrate organizations. In one 2024 case, a North Korean operative used an AI-generated face and phony credentials to get hired by a tech firm, only revealing himself when he tried to install malware on the company network. These scenarios, while less direct than a money transfer scam, demonstrate the broader insider threats posed by deepfake-enabled social engineering.

MAS’s guidance for FIs is to strengthen both human and technical controls against such impersonation scams. First, raise awareness among staff and customers: train employees to recognize deepfake tells (e.g. unnatural facial movements or audio artifacts) and to be alert to any unusual or urgent request, even if it appears to come “from” a familiar person. MAS recommends conducting regular internal phishing drills using simulated deepfake video or audio, so that employees get practice spotting a fake CEO voice or manipulated video before a real attack.

On the technical side, MAS advises deploying deepfake detection tools on employees’ devices and communication platforms to flag suspicious media in real time. Advanced software can analyze audio streams during calls, detect tell-tale inconsistencies, and alert the user if the voice might be fake. Such tools add an important layer of defense, especially as deepfake content becomes harder to recognize with the unaided eye or ear.
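
As a rough illustration of this kind of real-time flagging, the sketch below scores incoming audio chunks and raises an alert when a rolling average crosses a threshold. The score_voice_authenticity function, threshold, and window size are stand-in assumptions (the stub returns placeholder scores so the pipeline runs end to end); a real deployment would call a vendor’s detection model.

```python
from collections import deque

def score_voice_authenticity(chunk: list) -> float:
    """Return a 0..1 'synthetic voice' score for one audio chunk.
    Stub for the demo; a real deployment calls a vendor detection model."""
    return sum(chunk) / len(chunk)  # placeholder scoring, not real analysis

def monitor_call(chunks, alert_threshold: float = 0.8, window: int = 5):
    """Score each chunk and alert when the rolling average suggests the
    voice on the call may be synthetic."""
    recent = deque(maxlen=window)
    for i, chunk in enumerate(chunks):
        recent.append(score_voice_authenticity(chunk))
        rolling = sum(recent) / len(recent)
        if len(recent) == window and rolling >= alert_threshold:
            print(f"ALERT at chunk {i}: rolling synthetic score {rolling:.2f}")

if __name__ == "__main__":
    # Simulated call whose scores drift upward, as if a cloned voice joins mid-call.
    simulated_stream = [[min(1.0, 0.02 * i)] * 160 for i in range(50)]
    monitor_call(simulated_stream)
```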

Deepfake-fueled misinformation and market manipulation

Not all deepfake risks involve direct theft; some threaten the integrity of information and markets. Deepfakes can be weaponized to spread misinformation or disinformation that damages an institution’s reputation or even moves financial markets. MAS’s third key risk area highlights how synthetic media could be used to manipulate public perception, stock prices, or news narratives. For example, in May 2023 a fabricated image of an explosion at the Pentagon went viral on social media; even though it was quickly debunked, the fake briefly triggered a sell-off that caused a noticeable dip in the S&P 500 index. This incident showed that a single convincing deepfake or fake image, even if believed for only a few minutes, can have real financial consequences.

Closer to home, Singapore’s leaders have also been targeted by deepfake-driven scams. In March 2025, Prime Minister Lawrence Wong publicly warned that scammers were using AI-generated videos and images of him in online ads to promote cryptocurrency schemes. (Senior Minister Lee Hsien Loong was similarly impersonated in deepfake investment “opportunities” in late 2023.) These cases did not involve breaching any bank systems, but if not swiftly addressed they can erode public trust, both in the individuals targeted and in digital media generally. In the financial realm, a deepfake press release or fake executive comment could falsely move a stock price or prompt customers to pull funds, creating chaos before the truth is known.

MAS urges FIs to be proactive in countering deepfake misinformation. Firms should actively monitor for fake or unauthorized content involving their brand, executives, or customers across the internet. This means leveraging tools that scan social media, websites, and news outlets for potential deepfakes.
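
One way to picture such monitoring is a triage step that watches incoming posts for brand or executive mentions and queues any attached media for deepfake analysis. The watchlist entries, Post fields, and sample feed below are invented for illustration; a real pipeline would ingest from social media and news APIs.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative watchlist; a real one covers brands, executives, and products.
WATCHLIST = {"examplebank", "jane tan"}

@dataclass
class Post:
    source: str
    text: str
    media_url: Optional[str] = None

def triage(posts):
    """Flag posts that mention a watched name; queue any attached media
    for deepfake analysis, and route text-only hits to human review."""
    for post in posts:
        if any(term in post.text.lower() for term in WATCHLIST):
            if post.media_url:
                print(f"[QUEUE FOR DEEPFAKE ANALYSIS] {post.source}: {post.media_url}")
            else:
                print(f"[HUMAN REVIEW] {post.source}: {post.text[:60]}")

if __name__ == "__main__":
    triage([
        Post("social", "ExampleBank CEO Jane Tan announces a crypto giveaway!",
             media_url="https://example.com/clip.mp4"),
        Post("news", "Unrelated market commentary."),
    ])
```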

Equally important is being prepared to react fast. Incident response plans must be updated to include deepfake attack scenarios. MAS recommends that FIs establish clear protocols for handling a deepfake incident: how to investigate and validate the truth, how to report it to regulators and law enforcement, and how to communicate rapidly and transparently with stakeholders.
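
A minimal sketch of how such a protocol might be encoded so responders can walk it consistently, following the validate/report/communicate stages MAS outlines (the stage names and actions below are illustrative, not quoted from the paper):

```python
# Stage names and actions are illustrative, not quoted from the MAS paper.
DEEPFAKE_PLAYBOOK = [
    ("validate",    "Confirm authenticity out of band (known phone number, in person)."),
    ("contain",     "Freeze transactions tied to the suspect instruction; preserve the media as evidence."),
    ("report",      "Notify the regulator and law enforcement per local requirements."),
    ("communicate", "Issue a rapid, transparent statement to staff, customers, and media."),
]

def run_playbook(incident: str) -> None:
    """Walk responders through each stage; in practice each step would open
    tickets and page the relevant on-call teams."""
    print(f"Incident: {incident}")
    for stage, action in DEEPFAKE_PLAYBOOK:
        print(f"  [{stage.upper()}] {action}")

if __name__ == "__main__":
    run_playbook("Suspected deepfake CFO video call requesting a transfer")
```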

Key actions for CISOs and security leaders

MAS sets a high bar in its deepfake guidance, signaling that senior security leaders must take decisive steps to strengthen their defenses. Below are some of the critical actions MAS recommends for financial institutions:

  • Implement biometric liveness detection: Upgrade facial and voice recognition systems with liveness detection techniques to ensure a real, live person is present. This helps counter deepfake attempts to spoof biometric logins or onboarding using photos or recordings. Regularly test and update these controls against new deepfake tricks.
  • Deploy deepfake detection tools: Use advanced tools on corporate devices and communication channels to automatically detect and alert on manipulated audio and video content in real time. For example, integrate deepfake detection into the video-conferencing platforms and phone systems used for payment approvals or customer verification, so that suspicious video calls or voice instructions are flagged before they cause harm (a sketch of this gating idea follows the list).
  • Update and test incident response plans: Revise your incident response plans to include scenarios involving deepfake fraud or misinformation. Establish clear playbooks for quickly validating the authenticity of suspicious communications or viral news, and for escalating confirmed deepfake incidents to management, regulators, and law enforcement. Conduct drills or tabletop exercises focused on deepfake attack scenarios, and rehearse and update these playbooks regularly as deepfake technology evolves.
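
As flagged in the second bullet above, here is a minimal sketch of gating a call-initiated payment on both a detector verdict and an out-of-band confirmation; the threshold, score source, and callback flow are assumptions for illustration only.

```python
def approve_transfer(amount: float, call_synthetic_score: float,
                     callback_confirmed: bool,
                     score_threshold: float = 0.5) -> bool:
    """Release a call-initiated transfer only if the deepfake detector did
    not flag the call AND the requester was re-verified out of band."""
    if call_synthetic_score >= score_threshold:
        print("Blocked: the call was flagged as possibly synthetic.")
        return False
    if not callback_confirmed:
        print("Blocked: awaiting out-of-band confirmation from the requester.")
        return False
    print(f"Transfer of {amount:,.2f} released.")
    return True

if __name__ == "__main__":
    # Mirrors the scenario above: a flagged call never reaches payout.
    approve_transfer(499_000.00, call_synthetic_score=0.72,
                     callback_confirmed=False)
```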

MAS’s information paper makes it plain that deepfakes are not a distant, science-fiction threat; they are here now, and financial institutions must act with urgency. Senior leaders in Singapore’s financial sector should treat deepfake risk as a serious component of cyber risk management and operational resilience. That means investing in detection capabilities, shoring up processes, and fostering a vigilant culture. As deepfake tools become more accessible and convincing, the window for preventive action is closing. By heeding MAS’s guidance and proactively implementing the recommended controls, CISOs and their teams can stay ahead of this fast-evolving threat and protect their organizations’ integrity, customers, and assets. The time to act is now, before the next deepfake fraud or hoax strikes.

You can read the full report here.
