Banks and other payment service providers (PSPs) are missing key defensive weapons against authorised push payment (APP) scams. Existing AI-powered behaviour and pattern recognition solutions are not enough. More can be done to strengthen pre-payment screening so that scams are detected before payments are made and received. All of this matters now more than ever given the upcoming changes to mandatory reimbursement rules.
What's driving this need?
APP scams are proving to be the biggest and most urgent challenge for the banking and payments industries.
New regulations in the UK will force banks and PSPs to absorb all APP scam losses suffered by consumers, micro-entities, and charities.
The EU, Australia and other jurisdictions will likely follow suit.
Banks have always struggled with APP scams, but the rate of growth and loss value has significantly accelerated in recent years, thanks in part to the advent of real-time payment networks and the shift towards digital, remote transactions.
Today in the UK, APP scams account for 7% of total payment fraud cases but represent a whopping 40% of total fraud losses.
With losses totalling around half a billion pounds each year, and set to rise over the next five, banks and PSPs are understandably concerned that they will soon have to foot this enormous bill.
The urgent challenge for banks and PSPs, therefore, is to bolster their APP scam defences sufficiently to mitigate the risk of losses once the new rules come into effect on 7 October 2024.
What defence measures are available?
Stopping APP scams requires a multi-faceted approach. Broadly speaking, there are three types of solutions available.
Behavioural biometric, device and pattern recognition solutions. These AI-driven models are trained on billions of transactions to help spot anomalies and suspicious transaction patterns. They form the bedrock of fraud detection at any major bank or PSP. But on their own they are not enough to stop potential scams. Why?
New rules will require specific, contextualised warning messages for each transaction. It won't be enough to say "Payment is likely a fraud - not advised to proceed". Banks will need to explain why they believe a payment is fraudulent.
Some solutions capture data points like IP address, device fingerprint, adaptive multi-factor authentication signals and other biometric risk indicators. These are valuable, but not enough.
Banks and PSPs will require individualised data on the payee (if outbound) or payor (if inbound) and context of the transaction - something existing behavioural monitoring solutions aren't built to capture.
Data sharing. In an ideal world, the entire financial services industry would share information on fraudsters, mules and other bad actors so that each institution can be on the lookout for payments made to, or received from, these accounts.
There are some interesting initiatives forming at a domestic and global level on this, both from the private and public sector. For inbound payments this is a particularly appealing solution as it might help banks and PSPs stop crediting accounts of suspected mules.
But ultimately these initiatives are hamstrung by data protection laws and confidentiality restrictions that limit what banks can and can't share on suspicious individuals.
There are some workarounds, but it's all a grey area that lacks sufficient clarity for banks and PSPs to feel comfortable sharing all the information needed.
Pre-payment screening. For outbound payments, all banks and their customers have is confirmation of payee.
This is wholly inadequate for stopping APP scams, not least because it tells us nothing about the underlying payee account other than whether the name matches.
Anyone can buy fake ID credentials on the dark web and open a bank account with the same name.
Onboarding teams at banks and PSPs must frequently contend with this challenge, and the prospect of generative AI being weaponised against banks in this process renders confirmation of payee a tool that is no longer fit for purpose.
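To make that weakness concrete, here is a toy illustration of all that a bare name check gives you. This is a deliberate simplification for illustration, not the actual confirmation of payee service:

```python
def confirmation_of_payee_check(entered_name: str, account_name: str) -> str:
    """Toy name comparison: the only signal a bare name check yields."""
    if entered_name.strip().casefold() == account_name.strip().casefold():
        return "match"
    return "no match"

# A scammer who opened an account under the spoofed firm's name sails
# through: the check says nothing about the account's age or history.
print(confirmation_of_payee_check("Smith & Co Solicitors LLP",
                                  "Smith & Co Solicitors LLP"))  # -> match
```

A "match" result here tells the payor nothing except that two strings line up.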
What can be improved?
Behavioural, device and pattern monitoring solutions have impressive technology behind them and are getting better. AI will no doubt be the keystone of adapting to new scam tactics and behaviours, but these are expensive, long-term investments that take time to train and adapt. And AI alone can't stop APP scams.
Similarly, any legislative reform and guidance on loosening data sharing restrictions will take a long time to come to fruition.
The shortcomings of pre-payment screening can, however, be solved today.
For outbound payments:
We need a solution that goes far beyond confirmation of payee and gathers transaction and ID data on payees before payments are made. Think of this as early-warning radar that activates as payees are created. Data-driven insights on how long an account has been active, total money flowing in and out, payment volumes and so on can all help banks (and ultimately their payor customers) assess whether there are specific risks that raise the probability of the payee account belonging to a fraudster.
E.g. you're making a home deposit to your solicitor, but it's flagged that the account you're paying is 14 days old and has seen only £10 of payments pass through it. Would your solicitor's client account really show that kind of activity, or is this a scammer who has spoofed the firm and given you false bank details? This happens frequently.
By obtaining and analysing this data, banks can deliver tailored, specific warnings to payor customers about why a transaction may be fraudulent and whether they advise proceeding (a simple sketch of this logic follows below).
This would also discharge obligations under the new rules and plug the data gap that transaction monitoring and screening solutions have. It also enables the bank's customer to stop the fraud from happening in the first place.
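As a rough sketch of what such a payee-level assessment could look like, consider the following. The `PayeeProfile` fields, thresholds and warning texts are all hypothetical assumptions for illustration, not a description of any live system:

```python
from dataclasses import dataclass

@dataclass
class PayeeProfile:
    # Hypothetical payee-level signals a pre-payment screening
    # service might expose; field names are illustrative only.
    account_age_days: int
    total_inflow_gbp: float
    total_outflow_gbp: float
    name_match: bool  # confirmation-of-payee style name check

def assess_payee(profile: PayeeProfile, amount_gbp: float) -> list[str]:
    """Return specific, human-readable warnings for the payor.

    Thresholds here are invented: a real system would tune them on
    historical scam data and combine them with model scores.
    """
    warnings = []
    if not profile.name_match:
        warnings.append("The account name does not match the payee you entered.")
    if profile.account_age_days < 30:
        warnings.append(
            f"This account was opened only {profile.account_age_days} days ago."
        )
    turnover = profile.total_inflow_gbp + profile.total_outflow_gbp
    if amount_gbp > 10 * max(turnover, 1):
        warnings.append(
            f"Your payment of £{amount_gbp:,.2f} is far larger than the "
            f"£{turnover:,.2f} that has ever moved through this account."
        )
    return warnings

# The solicitor scenario above: a 14-day-old account with £10 of
# history receiving a £50,000 deposit triggers two specific warnings.
profile = PayeeProfile(account_age_days=14, total_inflow_gbp=10.0,
                       total_outflow_gbp=0.0, name_match=True)
for warning in assess_payee(profile, amount_gbp=50_000):
    print(warning)
```

Unlike a generic "this may be fraud" banner, each warning cites the specific signal behind it, which is exactly what contextualised warnings under the new rules demand.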
At Lucra, this is exactly the kind of major upgrade to confirmation of payee we’re already delivering to businesses, banks and payment firms.
For inbound payments:
We need to find a way to capture key characteristics of the payor, and the reasons for the payment, so that the receiving institution can assess the risk of crediting the account with the money being received.
E.g. if a payor is considered vulnerable, there is very little (if any) wiggle room for the receiving banks/PSPs in the chain to absolve themselves of liability if the payment turns out to be a scam. This greatly increases the risk of the payment and will undoubtedly require increased scrutiny, which in turn means more friction and longer waits before payments are credited.
Ideally, we could capture this data through the payment message itself. Pay.UK is leading efforts around a standardised API for enhanced data sharing between banks and PSPs. But it's difficult to see how such a solution can be developed, tested and deployed in time for 7 October.
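Purely to sketch the idea, an enhanced payment message might carry payor context alongside the usual details. The field names below are invented for illustration and do not reflect the Pay.UK specification, which is still being developed:

```python
# Hypothetical enhanced payment message: the standard fields plus payor
# context the receiving bank could use to risk-score the credit.
enhanced_payment_message = {
    "amount_gbp": 2500.00,
    "payor_account_id": "GB00XXXX12345678",   # placeholder identifiers
    "payee_account_id": "GB00YYYY87654321",
    # Payor context supplied by the sending bank:
    "payor_flags": {
        "is_vulnerable_customer": True,
        "account_age_days": 3650,
        "prior_payments_to_payee": 0,
    },
    "payment_purpose": "investment",  # payor-declared reason
}

def inbound_risk_decision(msg: dict) -> str:
    """Toy decision on an inbound credit using the payor context."""
    flags = msg["payor_flags"]
    if flags["is_vulnerable_customer"] and flags["prior_payments_to_payee"] == 0:
        return "hold"  # first-time payment from a vulnerable payor
    return "credit"

print(inbound_risk_decision(enhanced_payment_message))  # -> "hold"
```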
In either case, the whole process of course needs to be automated. The scale of real-time payments today is vast and set to rise; it is not remotely feasible for this to be a manual exercise. Any technology solution would need to deliver customisable rulesets that drive pre-determined workflows to raise alerts and hold suspicious payments (see the sketch below).
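A minimal sketch of such a ruleset, assuming a simple first-match-wins evaluation and invented payment fields and thresholds:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Action(Enum):
    ALLOW = "allow"
    ALERT = "alert"  # flag for review but let the payment proceed
    HOLD = "hold"    # park the payment pending investigation

@dataclass
class Payment:
    # Illustrative fields only.
    amount_gbp: float
    payee_account_age_days: int
    payor_is_vulnerable: bool

@dataclass
class Rule:
    name: str
    condition: Callable[[Payment], bool]
    action: Action

# A bank-configurable ruleset: evaluated in order, first match wins.
RULESET = [
    Rule("vulnerable payor, large amount",
         lambda p: p.payor_is_vulnerable and p.amount_gbp > 1_000,
         Action.HOLD),
    Rule("new payee account, large amount",
         lambda p: p.payee_account_age_days < 30 and p.amount_gbp > 5_000,
         Action.ALERT),
]

def screen(payment: Payment) -> Action:
    """Apply the ruleset to a payment, defaulting to ALLOW."""
    for rule in RULESET:
        if rule.condition(payment):
            return rule.action
    return Action.ALLOW

print(screen(Payment(50_000, payee_account_age_days=14,
                     payor_is_vulnerable=False)))  # -> Action.ALERT
```

Because the rules are data rather than hard-coded logic, each bank or PSP could tune conditions and actions to its own risk appetite without redeploying the system.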
Lucra is already delivering an outbound pre-payment screening solution with these kinds of capabilities. We are also actively speaking with banks and PSPs to architect a workable solution for inbound payments. For those interested to learn more or contribute ideas, please don’t hesitate to reach out.
Alan Schweber is the founder and CEO of Lucra. Previously, Alan was a debt finance lawyer at Kirkland & Ellis LLP.