
Today’s attacks can exploit vulnerabilities across digital channels in real time, convincingly mimic human behaviour, and even manipulate the very systems designed to detect them.
As these threats grow more sophisticated, traditional security measures are struggling to keep pace. Unless banks collaborate more closely by sharing incident intelligence, deepening behavioural insights and strengthening AI-driven defences, the risk to customers will rise sharply, forcing the financial sector to rethink how it protects accounts, transactions and trust.
“Rather than new types of fraud, what we’re seeing is a step‑change in how adaptable, realistic, and efficient existing attack patterns have become. Bots can learn, adapt, and personalise continuously once they’re interacting with live systems,” says Andries Maritz, principal product manager at banking and payment authentication company Entersekt.
Maritz says that in card‑not‑present (CNP) and 3‑D Secure (3DS) environments, the company is seeing significant growth in social engineering and BIN‑level attacks, which enumerate card numbers within a single bank identification number range. Criminals are finding ways around banks’ own authentication protocols, and the way attacks are executed has become significantly more sophisticated and co-ordinated.
Similarly, on the digital access side, there is clear and concerning growth in bot-driven and bot-assisted behaviour at login and during account creation.
“These aren’t simple scripted attacks anymore. The attacks have become far more detailed. Traditional identifiers are being spoofed much more accurately, and in many cases the behaviour being presented is increasingly human‑like,” he says.
Maritz says fraudsters are now deploying AI-driven dynamic attacks on bank systems that learn and adapt in real time.
These attacks begin by probing a bank’s authentication or transaction controls and observing the live feedback, such as a successful login or a flagged payment. That feedback lets attackers map out which defences are weakest or most predictable, then concentrate follow-up attempts on those vulnerabilities.
Automation amplifies this approach by enabling attackers to collect vast amounts of personal or contextual information at scale and craft hyper-targeted strikes on specific customers, rather than broad, generic campaigns. By constantly tweaking signals (such as device fingerprints or timing patterns), they can quietly slip past rule-based detectors.
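To make that probe-observe-adapt loop concrete, here is a minimal, deliberately abstract sketch. The static rule, its threshold, and the function names are invented for illustration; real attacks probe far richer controls than a single amount limit.

```python
# Hypothetical static rule: flag any transfer at or above a fixed
# amount. The attacker never sees FLAG_THRESHOLD, only the
# accept/flag outcome of each probe.
FLAG_THRESHOLD = 2_500.00

def rule_based_detector(amount: float) -> bool:
    """Return True if the static rule flags the transaction."""
    return amount >= FLAG_THRESHOLD

def adaptive_probe(low: float = 1.0, high: float = 10_000.0,
                   steps: int = 20) -> float:
    """Binary-search the accept/flag boundary using only live
    feedback -- the probe-observe-adapt loop described above."""
    for _ in range(steps):
        probe = (low + high) / 2
        if rule_based_detector(probe):
            high = probe   # flagged: the hidden limit is lower
        else:
            low = probe    # accepted: room to push higher
    return low             # largest amount seen to pass undetected

print(f"Attacker converges on ~{adaptive_probe():.2f}, "
      "just under the hidden threshold")
```

Twenty probes pin the hidden limit down to within a cent, which is why a fixed threshold offers so little protection once an attacker can observe outcomes.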
Most worryingly, Maritz says advanced AI agents are now replicating subtle human behaviours, including irregular typing speeds and mouse movements, making it increasingly difficult for banks to distinguish bots from genuine customers.
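A rough sketch of why that matters: one classic bot signal is the regularity of keystroke timing, and it collapses as soon as an agent injects human-like jitter. The function and the sample timings below are illustrative assumptions, not any vendor’s actual detector.

```python
import statistics

def keystroke_variability(timestamps_ms: list[float]) -> float:
    """Coefficient of variation of inter-keystroke gaps. Humans type
    with irregular rhythm (high value); naive scripts emit events at
    near-constant intervals (value near zero)."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

human_like = [0, 142, 260, 431, 520, 733]   # irregular rhythm
scripted   = [0, 100, 200, 300, 400, 500]   # metronomic bot

print(keystroke_variability(human_like))    # comparatively high (~0.33)
print(keystroke_variability(scripted))      # 0.0 -- trivially caught
# An AI agent that deliberately randomises its event timing erases
# this gap, which is exactly the shift Maritz describes.
```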
“Just like us, these machines are learning through each interaction. And, because they are acting at scale, their ability to improve becomes exponentially greater,” Maritz warns.
Maritz’s fears are not unfounded. Deloitte’s Center for Financial Services predicts that generative AI could drive fraud losses in the United States to $40bn by 2027, a compound annual growth rate of 32%.
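As a quick sanity check of that projection, assuming the 32% CAGR compounds over the four years from 2023 to 2027 (the article does not state Deloitte’s base year, so that is an assumption here):

```python
# Worked check of the quoted Deloitte projection. The 2023 base year
# and the four compounding periods are assumptions, not figures from
# the article above.
target_bn = 40.0   # projected US fraud losses by 2027
cagr = 0.32        # quoted compound annual growth rate
years = 4          # 2023 -> 2027, assumed

implied_base = target_bn / (1 + cagr) ** years
print(f"Implied 2023 baseline: ${implied_base:.1f}bn")   # ~$13.2bn

# The forward direction: what that growth rate does year by year.
for year in range(years + 1):
    print(2023 + year, f"${implied_base * (1 + cagr) ** year:.1f}bn")
```

An implied baseline of roughly $13bn means fraud losses tripling in four years.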
Despite this growth, companies still rely on manual validation, with nearly half (48%) of US organisations depending on human reviews and manual checks. With this approach, Maritz points out, banks simply cannot manage the volume of AI attacks now being experienced.
Maritz says the sophistication of the attacks has also made fraud reporting more complex.
“Some attacks impact account balances in real time, which can trigger alerts and customers reporting fraud within minutes. Others are far more subtle, like when an account might be compromised and controlled for weeks or months before any money is moved. That means the time between initial compromise and reported fraud can vary enormously,” says Maritz.
Despite these challenges, he is quick to commend banks on their response, confirming there has been significant investment in sophisticated tooling, anomaly detection, and mitigations, resulting in industry-wide progress.
However, Maritz points out that gaps in fraud detection often arise when data and signals sit isolated in separate systems, whether for regulatory, privacy, or architectural reasons. This siloing hinders a unified view of attacks that span channels and time.
This fragmentation extends across institutions, with mule activity and compromised identities ignoring bank boundaries.
Maritz says AI is also central to fighting the growing sophistication of dynamic fraud, itself now driven by advances in AI. Beyond producing a basic risk score, it can provide explainable models, rapid pattern detection via generative tools, and dynamic intelligence sharing, making it essential to trusted, adaptive defences against evolving fraud.
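What “explainable” can mean in practice, in a minimal sketch: a risk score whose per-feature contributions double as the explanation an analyst sees. The features, weights, and function here are invented for illustration and are not Entersekt’s model.

```python
# Illustrative, hand-weighted linear risk model. Each contribution is
# reported alongside the total, so the score explains itself.
WEIGHTS = {
    "new_device":        2.0,   # first time this device is seen
    "velocity_anomaly":  1.5,   # burst of logins or transactions
    "geo_mismatch":      1.2,   # IP location far from usual pattern
    "metronomic_typing": 2.5,   # bot-like keystroke rhythm
}

def score(signals: dict[str, float]) -> tuple[float, list[str]]:
    """Return a risk score plus a readable reason for each signal."""
    contributions = {k: WEIGHTS[k] * v for k, v in signals.items()}
    total = sum(contributions.values())
    reasons = [f"{k}: +{v:.2f}"
               for k, v in sorted(contributions.items(),
                                  key=lambda kv: -kv[1]) if v > 0]
    return total, reasons

risk, why = score({"new_device": 1, "velocity_anomaly": 0.8,
                   "geo_mismatch": 0, "metronomic_typing": 1})
print(f"risk={risk:.2f}")          # 5.70
for reason in why:
    print("  ", reason)            # analysts see why, not just a number
```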
Maritz says the role of the chief information security officer (CISO) is becoming more complex. Not only has it become more difficult for them to trust the raw signals they’re using to assess risk, but attacks now rarely start and end in a single system – often beginning outside the institution and moving across channels, partners, and platforms.
“Fraud prevention is a team sport, and building resilience means improving how intelligence is shared with both upstream and downstream systems. This means sharing incident information, behavioural insights, or signal quality improvements across internal and external systems,” he advises.
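One hypothetical shape such a shared signal could take; every field name below is an illustrative assumption, not a real interbank or ISO 20022 schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical incident-intelligence payload of the kind Maritz
# describes sharing across internal and external systems.
incident_signal = {
    "signal_type": "compromised_device_fingerprint",
    "observed_at": datetime.now(timezone.utc).isoformat(),
    "channel": "3ds_challenge",                 # where it was seen
    "indicator": {
        "fingerprint_hash": "sha256:<hash>",    # hashed, never raw PII
        "behaviour": "metronomic_typing",
    },
    "confidence": 0.87,                         # sender's signal quality
    "share_scope": ["internal", "consortium"],  # upstream + downstream
}

print(json.dumps(incident_signal, indent=2))
```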
Finally, Maritz warns CISOs that relying solely on fixed rules or static mitigations is no longer viable. “Systems need to support dynamic scoring, contextual risk conditions, and rapid feedback loops so that insights from one part of the ecosystem can inform defences everywhere else.
“Organisations must shift away from reacting to individual fraud events toward building adaptive, resilient systems that can keep pace as threats evolve.”