Cybersecurity is of the utmost concern for financial institutions (FIs) of all types, from community credit unions to multibillion-dollar international banking conglomerates, as well as for everyday consumers. More than 2 million fraud reports were filed with the Federal Trade Commission in 2020, representing total losses of more than $3 billion. One survey found that 47 percent of businesses around the world reported being victimized by digital crime within the past two years, with losses totaling $42 billion.
Fraudsters are also growing more advanced in their tactics, leveraging sophisticated technologies like artificial intelligence (AI) and machine learning (ML) to deploy millions of attacks simultaneously. The overwhelming volume of attacks has put organizations on the back foot, scrambling to find countermeasures to the account takeovers (ATOs), phishing attacks and other schemes they face by the thousands every day.
The following Deep Dive explores how fraudsters target FIs and how these institutions are leveraging AI- and ML-based defenses of their own to fight back.
How Fraudsters Scam Banks And Businesses
Digital fraud rates have ramped up in many industries over the past year, with the financial sector seeing its biggest spike in March 2020. Experts estimate that anywhere from 79 percent to 90 percent of these attacks were ATOs, in which bad actors assume control of customer accounts for a variety of purposes, such as using stored payment information to make fraudulent purchases or stealing personal data to sell on dark web marketplaces.
Other fraudsters instead go through the front door, so to speak, by creating new bank accounts for fraudulent purposes. This can be a painstaking process when conducted manually, but studies have found that 100 percent of these fraudulent accounts harness AI or ML at some point in the process, whether during account creation itself or in malicious activities post-creation. CAPTCHAs, the traditional defense against automated account creation, stop most AI-created accounts but have a 2 percent failure rate. That is an unacceptable margin at the scale of AI bot attacks, through which thousands of fake accounts can slip past banks' defenses.
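A quick back-of-the-envelope calculation shows why a 2 percent failure rate matters at bot scale. The attack volume below is hypothetical, chosen only to illustrate the arithmetic:

```python
# A 2 percent CAPTCHA failure rate sounds small, but at bot scale it adds up.
# The signup volume here is a hypothetical figure for illustration only.
bot_signups = 1_000_000          # automated account-creation attempts
captcha_failure_rate = 0.02      # share of bot attempts the CAPTCHA misses

fake_accounts = int(bot_signups * captcha_failure_rate)
print(fake_accounts)             # 20000 fake accounts slip through
```

Even a defense that stops 98 percent of bots leaves tens of thousands of fraudulent accounts per million attempts, which is why FIs layer additional signals on top of CAPTCHAs.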
Other forms of cybercrime are as varied as they are dangerous. Thirty-five percent of FIs reported increases in botnet, phishing and ransomware attacks over the past year, 32 percent saw a rise in mobile malware, 30 percent experienced pandemic-related fraud and 29 percent said that threats originating from their own employees were on the rise. Human analysts risk being overwhelmed by the sheer volume of these digital hazards, prompting FIs to deploy AI- and ML-based systems that can match the scale of the threat.
AI And ML Flex Their Fraud-Fighting Prowess
Human analyst teams not only struggle to detect fraud as it happens — 45 percent of FIs say investigations take too long to complete — but also tend to mistakenly flag legitimate account creations and transactions as fraudulent. False-positive rates run as high as 90 percent at some banks, creating problems ranging from frustrating hurdles for customers trying to clear their names to outright customer abandonment.
FIs are looking to limit these obstacles by deploying AI and ML technologies at a larger scale, spending more than $217 billion on these technologies for fraud prevention and risk assessment. ML-based systems in particular have been shown to cut fraud investigation times by 70 percent and improve detection accuracy by 90 percent, with the share of fraud attempts detected reaching an impressive 95 percent.
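The scoring-and-triage workflow such ML systems automate can be illustrated with a minimal sketch. Everything here is hypothetical — the feature names, weights and thresholds are invented for illustration and do not come from any production system — but the shape is typical: a model turns transaction signals into a risk score, and thresholds route each transaction to auto-block, human review or approval:

```python
import math

# Hypothetical weights an ML model might learn for a few risk signals.
# Values are illustrative only, not from any real fraud system.
WEIGHTS = {
    "amount_zscore": 1.4,   # how far the amount deviates from the account's norm
    "new_device": 2.1,      # 1.0 if the session came from an unseen device
    "geo_velocity": 1.8,    # impossible-travel signal between logins
    "failed_logins": 0.9,   # recent failed login attempts, scaled
}
BIAS = -4.0  # keeps the base rate of flags low

def fraud_score(features: dict) -> float:
    """Logistic score in [0, 1]: a probability-like estimate of fraud."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def triage(features: dict, block_at: float = 0.9, review_at: float = 0.5) -> str:
    """Route a transaction: auto-block, send to a human analyst, or approve.
    The thresholds trade false positives against missed fraud."""
    score = fraud_score(features)
    if score >= block_at:
        return "block"
    if score >= review_at:
        return "review"
    return "approve"

# A routine purchase from a known device scores low...
print(triage({"amount_zscore": 0.2, "new_device": 0.0}))   # approve
# ...while an unusual amount from a new device with an
# impossible-travel signal scores high.
print(triage({"amount_zscore": 2.5, "new_device": 1.0,
              "geo_velocity": 1.5}))                        # block
```

Raising `review_at` reduces the false positives that frustrate customers at the cost of letting more borderline fraud through — the exact trade-off the accuracy figures above describe, and one the middle "review" tier exists to soften by reserving human analysts for ambiguous cases.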
This technology is still largely the domain of major FIs, however. More than 72 percent of banks with more than $100 billion in assets have an AI- or ML-based solution in place, yet only approximately 6 percent of all FIs can say the same. While 80 percent of fraud prevention experts say AI can reduce the success rate of payments fraud and almost 64 percent say AI is valuable for preventing fraud before it even occurs, nearly 46 percent of experts said they were concerned about AI's ability to operate in real time. Another 42 percent said that AI systems lack transparency, giving fraud prevention personnel little insight into how their systems detect fraud and thus few ways to improve them.
Like many technologies, AI and ML can be made more effective, transparent and cost-conscious if technology providers take steps to make them so. Fraudsters never stop refining their techniques, and fraud prevention experts cannot afford to, either.