The Rising Threat of AI-Driven Cyberattacks in Banking
As India pursues its goal of becoming a $1 trillion digital economy by 2030, that rapid growth has drawn intense scrutiny from cybercriminals worldwide. The Indian banking and financial services sector has always been a prime target because of the potential for huge monetary gains and access to extremely sensitive information. According to a recent Financial Stability Report from the RBI, the sector has faced more than 20,000 cyberattacks over the past two decades, resulting in losses of around $20 billion.
Some of the most severe cyberattacks India has faced in recent times have hit banking & finance:
- In 2016, Union Bank of India was hit by a spear-phishing campaign that led to an attack on its SWIFT systems. Attackers siphoned off close to $170 million, and Union Bank took over a year to recover the full amount with help from local and international authorities.
- Pune-based Cosmos Cooperative Bank wasn’t so fortunate in 2018, when cybercriminals infiltrated the bank’s ATM servers with malware. Once inside, they cloned thousands of Visa & RuPay debit cards and coordinated close to 12,000 withdrawals across 28 countries.
- More recently, over 300 small Indian banks were forced offline last year by a ransomware attack on tech service provider C-Edge Technologies. India has over 1,500 cooperative & regional rural banks, so roughly 1 in 5 banks faced a temporary shutdown because of the attack. Fortunately, NPCI quickly isolated these banks to contain the incident.
While the sector has been under constant attack over the years, there has been a serious uptick recently. According to Check Point’s Threat Intelligence Report, banking & financial institutions in India experienced an average of 2,525 cyberattacks per organization in the last six months of 2024, significantly higher than the global average of 1,674 attacks per organization. Nor is this confined to banking: according to a survey by Indusface, cyberattacks targeting India-based organizations doubled year-on-year, with 1.2 billion attacks in the third quarter of 2024, up from about 600 million in the same quarter of 2023.
The primary reason for this exponential rise is the growing use of AI in cyberattacks. Just as AI has become a key asset in every enterprise’s operations, it has also become a key weapon in the cybercriminal’s arsenal. AI in the form of LLMs, GPTs and other ML models has proved to be such a driving force for attackers for several reasons:
- Increased Frequency: Attackers used to spend a lot of time conducting reconnaissance and drafting convincing messages to harvest credentials. Now, AI chatbots can do the recon work in seconds and engage countless individuals simultaneously to find a way in.
- More Sophistication: Like all AI algorithms, the ones used by attackers learn and evolve over time. If they don’t crack your systems at the first go, they can adapt to avoid detection or create a pattern of attack that evades your systems.
- Lowered Barriers to Entry: Before the advent of AI, cybercriminals needed advanced technical skills and plenty of free time to conduct attacks. Now, AI lets them develop these attacks with greater ease & precision, all at the click of a button.
Emerging AI-Powered Cyber Threats in Banking
AI-Powered Phishing: A New Era of Fraud
Phishing remains the tactic cyberattackers prefer most, because it is much simpler to log in than to hack in. Historically, attackers needed advanced technical and social engineering skills to create convincing fake websites and emails, and even then these approaches were often limited in quality and variation.
AI can do each step of the phishing process better, in seconds, at the click of a button:
- It uses data scraping to gather and analyze information from public sources, including social media platforms like LinkedIn and Instagram, and uses that information to identify ideal high-value targets.
- It can then build a convincing persona, complete with an online presence, to carry out communications with the intended targets.
- It can invent a plausible, realistic scenario for approaching each target that is likely to capture their attention.
- Generative AI can then craft highly personalised, realistic messages across all relevant mediums (emails, SMS, WhatsApp messages, social media DMs), with subtle emotional triggers that push the victim to act. Audio & video media can also be created to drive action, which we’ll explore later.
Why is it dangerous for banks?
Banks have evolved over the years to facilitate seamless online transactions protected by trust factors like multi-factor authentication. AI-powered phishing can create highly realistic scenarios to bypass these authentication factors for both your customers and employees – whether it’s an email leading them to a fake page to change their password, or an automated call seemingly from the bank asking them to verify their credentials by entering them. Once cybercriminals gain access through these sophisticated AI phishing tactics, they can wreak havoc on your systems.
AI-Enhanced Cyber Attacks: Ransomware, Malware & Adversarial AI
If AI is not being used to harvest credentials from your customers and employees, it’s being used to attack your systems directly. AI can enhance ransomware and malware attacks at multiple phases: identifying vulnerabilities, advancing along attack paths, establishing backdoors, adapting or mutating ransomware files over time to avoid detection, and exfiltrating or tampering with data.
Additionally, like all other sectors, banking & financial services are increasingly incorporating AI into their own processes. Attackers could aim to disrupt the performance and accuracy of these systems with adversarial AI of their own:
- Poisoning attacks target the training data, injecting fake or misleading records into the training dataset to compromise the model’s accuracy or objectivity.
- Evasion attacks target a deployed model’s inputs, applying subtle changes to the data fed to the model that degrade its predictive capabilities (a minimal sketch follows this list).
- Model tampering involves unauthorized alterations to the model itself, compromising its ability to produce accurate outputs.
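To make the evasion idea concrete, here is a minimal, illustrative sketch in Python (NumPy only) of how a gradient-based perturbation can nudge a toy fraud-scoring model. The model, its weights and the feature values are invented assumptions for illustration; this is not any real banking system or attacker tool.

```python
# Illustrative sketch only: a toy logistic-regression "fraud score" and a
# fast-gradient-style evasion perturbation. All names and values are invented.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)                       # toy model weights over 8 transaction features
b = 0.0

def fraud_score(x):
    """Probability that a transaction is fraudulent under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.abs(rng.normal(size=8)) * np.sign(w)  # a transaction the toy model scores as risky
print("original score:", round(fraud_score(x), 3))

# Evasion: step each feature slightly against the gradient of the score so the
# same underlying transaction looks markedly less suspicious to the model.
epsilon = 0.5
grad = fraud_score(x) * (1 - fraud_score(x)) * w   # d(score)/dx for logistic regression
x_adv = x - epsilon * np.sign(grad)
print("perturbed score:", round(fraud_score(x_adv), 3))
```

Defending against this kind of manipulation is exactly why the controls discussed later, such as real-time analysis of model inputs & outputs, matter so much.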
Deepfakes: The Invisible Banking Threat
Deepfakes have gained a lot of prominence recently because of AI, but it is worth noting that they have been around for a while; the AI democratisation wave has simply made them easily accessible to the general public. Deepfakes are synthetic media in which a person’s face or voice is replaced almost seamlessly with that of another person, using machine learning, AI and computer graphics.
- A deepfake of your CEO saying something damaging, released publicly, could cause your stock to plummet.
- Deepfakes enable convincing impersonation, such as a deepfaked CEO calling an employee and asking them to transfer money to a fraudulent account.
- Deepfakes can power financial scams, such as fake videos that appear to authorize large wire transfers.
- Blackmail or extortion can be layered onto the phishing process through deepfakes of employees doing something embarrassing, forcing them to perform actions they would not normally take.
How Banks Can Defend Against AI-Powered Cyber Threats
Considering all these emerging AI-enabled threats, what can you as a bank do to successfully ward off these attacks? Your cybersecurity strategy should cover two critical aspects: securing your systems, and securing your stakeholders.
Securing Your Banking Systems Against AI Attacks
To protect your systems against AI-powered cyberattacks, here are some of the elements you can incorporate into your strategy:
- A comprehensive cybersecurity platform offering essential elements like continuous monitoring, intrusion detection and endpoint protections
- Layers of controls that offer overlapping protections, so that if one layer gets targeted or fails, no areas in your system are left unprotected
- Holistic security that extends across your entire ecosystem, including all your supply chain partners and vendors
- AI-native cybersecurity to analyze vast datasets and identify patterns – with real-time analysis of input & output data to protect against adversarial AI attacks
- An Incident Response Plan that effectively covers all stages from detection & analysis to containment, eradication & recovery
- Deepfake detection tools to flag manipulated audio & video
- Baselines for user behaviour that let you flag abnormal activity across your systems (a minimal sketch follows this list)
- Zero-trust access for all your stakeholders
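As an illustration of the behaviour-baselining item above, here is a minimal sketch using scikit-learn’s IsolationForest to learn what “normal” session activity looks like and flag outliers. The features, values and thresholds are invented assumptions, not a prescribed production design.

```python
# Illustrative sketch: baseline "normal" user-session behaviour, then flag anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical per-session features: [login_hour, transactions_per_session, avg_amount_inr]
baseline = np.column_stack([
    rng.normal(11, 2, size=1000),       # logins cluster around late morning
    rng.poisson(5, size=1000),          # a handful of transactions per session
    rng.normal(2000, 500, size=1000),   # modest average transaction amounts
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score two new sessions: one routine, one at 3 a.m. with a burst of large transfers
new_sessions = np.array([
    [11, 4, 1800],
    [3, 40, 95000],
])
print(detector.predict(new_sessions))   # 1 = fits the baseline, -1 = flag for review
```

In practice, signals like these would feed an existing fraud-monitoring or SIEM pipeline and be tuned on the bank’s own telemetry rather than synthetic data.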
Strengthening Stakeholder Awareness & Cyber Hygiene
Ultimately, even with the best cybersecurity controls in your systems, approaches like AI-powered phishing and deepfakes target your stakeholders directly and can trick them into initiating malicious actions. Effective employee & customer education is therefore essential to convert them from your biggest weakness into your greatest strength. Elements of this education include:
- Informing your employees about limiting the personal and sensitive information they share on social media & other online spaces, so as to avoid publicly revealing details that could be used to build a deepfake attack
- Asking employees to be mindful of who has access to recordings of their voice, especially when making business or personal calls
- Educating them on the telltale signs of deepfake technology (inconsistent facial expressions, unnatural blinking, unrealistic speech patterns)
- Setting up a helpline to give immediate assistance to stakeholders targeted by deepfakes and other social engineering attempts
- Informing them of best practices like not clicking suspicious links or downloads, and always verifying with reputable sources before completing an action
- Continuously sharing information on all kinds of emerging attacks and how to recognise them