A faked video of a politician can cause public shock and outrage. A faked video call to a bank, on the other hand, can wipe out millions without leaving a trace beyond an empty account.
Political deepfakes seek to sway public opinion; financial deepfakes operate surgically, targeting a single person with the authority to approve transactions. Their invisibility makes them all the more dangerous, demanding technical, procedural, and organizational defenses.
Psychology as the weak point
The common root is human psychology. Most cyberattacks succeed not because they "break" technology, but because they manipulate people. Verizon's 2023 Data Breach Investigations Report (DBIR) found that 74% of breaches involved the human element: social engineering, misuse of access rights, or simple mistakes.
Deepfakes exploit this vulnerability by multiplying the persuasive power of image and sound. As a result, banks are no longer just defending servers; they are defending the very concept of reality. The appeal of deepfakes is that they are cheap, accessible, and effective.
This draws a wide cast of actors into play: state regimes looking to destabilize markets, terrorist groups seeking untraceable revenue, organized crime chasing profits through Scams-as-a-Service, and freelance hackers working on contract. Add to that activists using the technology to protest, or corporate spies trying to discredit competitors.
This complexity makes attribution nearly impossible: an attack can be planned in one country, executed from servers in another, financed elsewhere and targeted at a bank in a fourth jurisdiction.
Regulatory gaps and geopolitical asymmetries
The regulatory landscape is fragmented. The European Union is moving forward with the AI Act, China is enforcing strict labeling rules, and the US is approaching the issue at the state level, while countries in the Global South – such as Indonesia, India, and Brazil – still rely on general data-protection laws.
Attacks, however, know no borders. A massive cross-border scam can trigger liquidity crises, waves of withdrawals due to loss of confidence and market manipulation through false announcements.
Why is the Global South more exposed?
Banks in the Global North have advanced tools, such as biometric verification and threat-sharing networks (e.g., FS-ISAC). In contrast, in the South, the rapid spread of fintech services is not always accompanied by strong security or adequate regulatory frameworks.
Lower digital literacy increases vulnerability. The consequence is that financial systems in the Global South are at higher risk of systemic instability, with weaker resilience when trust is eroded.
Impacts beyond borders
Deepfake abuses in the South are not a local phenomenon. Loss of trust can lead to “digital bank runs,” halt progress in microfinance, skyrocket the cost of remittances, or trigger macroeconomic turbulence through fake central bank announcements. These shocks are quickly transmitted to the North, through portfolios, trade, and international banks.
From firewalls to guardrails
The response cannot be limited to national solutions. What is needed is a common framework of standards for deepfake detection and biometric verification, transnational cooperation through institutions such as the G20, BIS, and BRICS, and the adoption of a zero-trust approach by banks themselves.
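A zero-trust stance means a payment request is never trusted on the strength of a single channel, however convincing a video call may look. The sketch below is a simplified illustration of that policy (all names, thresholds, and channels are hypothetical, not any bank's actual system): a high-value transfer is approved only when multiple independent, pre-registered channels confirm it.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    channels_confirmed: set = field(default_factory=set)

# Hypothetical policy: high-value transfers require confirmation over
# at least these two independent, pre-registered channels.
REQUIRED_CHANNELS = {"callback_to_registered_number", "hardware_token"}
HIGH_VALUE_THRESHOLD = 10_000

def approve(req: PaymentRequest) -> bool:
    """Zero-trust rule: a video call alone never authorizes a transfer."""
    if req.amount < HIGH_VALUE_THRESHOLD:
        # Even small transfers need one out-of-band confirmation.
        return "callback_to_registered_number" in req.channels_confirmed
    # High-value: every required independent channel must confirm.
    return REQUIRED_CHANNELS.issubset(req.channels_confirmed)

# A deepfaked video call, on its own, confirms nothing:
req = PaymentRequest("cfo@bank.example", 2_000_000, {"video_call"})
print(approve(req))  # False
```

The point of the design is that compromising any single channel, including a flawless deepfake of the CFO on video, is never sufficient; the attacker would also have to compromise the registered phone line and the hardware token.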
At the same time, the so-called penta helix model – cooperation between government, industry, academia, civil society and the media – is crucial for building resilience. The media in particular has a role to play in educating the public and maintaining trust.
Technologies such as blockchain and digital IDs embedded in smart contracts can offer additional protection, creating immutable traces and reliable identity verification. Think tanks such as Brookings and Chatham House emphasize the need for transnational standards and intergovernmental solutions that take into account the geopolitical dimension.
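The "immutable traces" mentioned above come from the core data structure behind blockchain ledgers: a hash chain, in which each record commits to the hash of the one before it, so tampering with any entry invalidates everything after it. The following is a didactic sketch of that idea, not a production ledger:

```python
import hashlib
import json

GENESIS_HASH = "0" * 64

def add_record(chain, payload):
    """Append a record whose hash also covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = GENESIS_HASH
    for rec in chain:
        expected = hashlib.sha256(
            json.dumps({"payload": rec["payload"], "prev_hash": rec["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
add_record(chain, {"tx": "transfer", "amount": 500})
add_record(chain, {"tx": "transfer", "amount": 1200})
assert verify(chain)

chain[0]["payload"]["amount"] = 999_999   # rewrite history
assert not verify(chain)                  # tampering is detectable
```

Altering a past record changes its hash, which no longer matches the `prev_hash` stored in the next record, so the whole chain fails verification. This is what makes such a trace useful as evidence after a deepfake-enabled fraud.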
CPF (Creative Permutation Foresight) scenarios show how government choices can alter the trajectory of these risks. A wave of deepfake fraud could destabilize microfinance in India or Indonesia, cripple remittance flows in South Asia, or trigger a regulatory overreaction in Brazil that would slow innovation.
The point is that the threat is not just technological, but also depends on political choices and institutional readiness. Deepfakes are not just another form of cybercrime. They pose a structural threat to the very concept of trust in finance. The next crisis may not start with mortgages or sovereign debt, but with synthetic lies traveling at the speed of code.
The image that captures the danger is not of a boardroom, but of a farmer in India trying out a digital payments app for the first time.
When he falls victim to a convincing deepfake, he goes back to cash. If stories like this multiply, the promise of digital financial inclusion may collapse not because the technology has failed, but because trust has been lost.