Again and again, Hollywood has painted the same picture of the garden-variety hacker: a twentysomething lone wolf in a dark hoodie, sitting in his mother’s basement, perhaps trying to break into a celebrity’s Twitter account. The unfortunate reality, however, is that today’s hackers are far more organized and far less benign than this TV image suggests. Much like a lawful business, modern cyber-criminal operations are run as a profession: their budgets are well-defined, and most are in it for the money. It is therefore little surprise that the financial services industry — which manages trillions of dollars across complex digital infrastructures — has long represented the holy grail for these sophisticated criminals.
Breaking the Bank
Of course, big banks and leading insurers have come to understand the severity of the threats they face online. In 2018, for the second consecutive year, the financial services industry suffered the highest volume of cyber security incidents of any economic sector, with European banks in particular facing an average of 85 serious attempted breaches. But while many financial companies have responded by investing heavily in conventional cyber defenses, criminals are constantly generating never-before-seen attacks designed to bypass these traditional security tools, which rely on rules and signatures that can only spot previously known threats. The resulting cat-and-mouse game — wherein such tools are updated to detect the latest exploit, only to be compromised again by the next attack — has proven disastrous. Cyber-crime cost the financial industry an average of $18 million per firm in 2017, while producing an estimated $600 billion in losses worldwide.
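To see why signatures struggle with novel attacks, consider a minimal sketch (in Python, with entirely hypothetical fingerprint entries) of how a signature-based check works: a file is flagged only if its fingerprint already appears on a blocklist, so a genuinely new sample passes straight through until the database catches up.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints for *known* malware samples.
KNOWN_MALWARE_SIGNATURES = {
    "9f2c5e-placeholder-fingerprint-of-a-previously-catalogued-sample",
}

def is_flagged(file_bytes: bytes) -> bool:
    """Signature-based check: flag the file only if its hash is already known."""
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return fingerprint in KNOWN_MALWARE_SIGNATURES

# A brand-new strain has, by definition, a hash nobody has catalogued yet,
# so it sails past this check until the signature database is updated.
print(is_flagged(b"never-before-seen payload"))  # -> False
```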
To make matters worse, experts anticipate that AI-charged malware will soon be seen in the wild, a development which promises to transform this already damaging cat-and-mouse game into a full-blown AI arms race. Viewing cyber-crime as the quasi-business it has become makes the rationale for incorporating ‘narrow’ AI elements into cyber-attacks eminently clear. As with legitimate corporations across all industries, such AI could allow online threat actors to automate tasks, reach more prospective targets, and improve their criminal ‘conversion rates,’ with the ultimate effect of saving time and increasing profits. Most menacingly of all, this kind of ‘smart’ malware will make quick work of the weakest link in any company’s cyber defenses — its employees. With that in mind, here’s what an AI-powered cyber-attack on a major bank could look like:
Accessing the Virtual Vault
The devastating AI attack begins, innocently enough, with a single email. A stealthier variant of the familiar phishing email, this ‘spear phishing’ message has been crafted specifically to deceive a top executive at one of the largest banks on Wall Street. Often built on reconnaissance from social media, spear phishing campaigns are labor-intensive and costly — 20 times more expensive, in fact, than ordinary phishing campaigns. Yet thanks to their personalized nature, spear phishing emails are remarkably effective, producing 40 times the return of their boilerplate counterparts. And this particular email, ostensibly written by the company’s CMO about the launch of the bank’s newest ad, was actually authored by an AI toolkit that had learned to mimic the CMO’s writing style by observing her tweets. Indeed, a 2016 experiment showed that AI could already craft such messages as effectively as humans but eight times faster, a capability that is rapidly improving.
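The toolkits used in that experiment are more sophisticated than anything shown here, but the underlying idea can be illustrated with a toy sketch: learn the target’s word patterns from public text, then generate new sentences in the same style. The Markov-chain example below is purely illustrative, and the sample tweets are invented.

```python
import random
from collections import defaultdict

# Invented stand-ins for the public tweets an attacker might scrape.
sample_tweets = [
    "thrilled to announce the launch of our newest ad campaign next week",
    "huge thanks to the team for making our newest campaign launch possible",
    "excited to share a preview of the ad we are launching next quarter",
]

# Learn which words tend to follow which in the target's writing.
transitions = defaultdict(list)
for tweet in sample_tweets:
    words = tweet.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def mimic_sentence(seed: str, length: int = 12) -> str:
    """Generate text that statistically resembles the learned writing style."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(mimic_sentence("excited"))
```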
Having been successfully duped by the email, the unsuspecting executive downloads its attachment and infects his computer with a never-before-seen strain of malware, whose novelty lets it slip past the bank’s impressive array of signature-based security tools. The AI-equipped malware bides its time, sitting passively on the computer for several days to learn the executive’s typical online behavior. It then searches for vulnerabilities in the bank’s network by scanning only those devices with which the executive normally communicates, lowering the chance of the scan being flagged. Throughout this process, the malware uses contextual awareness to blend into the computer’s baseline operations, adapting its behavior on the fly to avoid setting off any alarms.
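To make the targeting logic concrete, here is a purely illustrative sketch of the selection step described above: restrict probing to hosts the infected machine already contacts routinely, so the traffic resembles business as usual. All hostnames and thresholds are invented, and this is not a description of any real malware’s code.

```python
from collections import Counter

# Invented log of hosts the infected machine was observed talking to
# during the quiet observation period (hostnames are hypothetical).
observed_connections = [
    "fileserver.corp.example", "mail.corp.example", "fileserver.corp.example",
    "sharepoint.corp.example", "fileserver.corp.example", "mail.corp.example",
]

# Only hosts the executive contacts routinely are considered "safe" to probe:
# traffic to them looks like business as usual to the monitoring stack.
connection_counts = Counter(observed_connections)
ROUTINE_THRESHOLD = 2  # hypothetical cutoff for "normal" contact frequency

routine_hosts = [host for host, count in connection_counts.items()
                 if count >= ROUTINE_THRESHOLD]

print(routine_hosts)  # only the frequently contacted hosts make the list
```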
After finding the most vulnerable attack vector, the malware proceeds to infiltrate the bank’s private network using its own ‘agency,’ rather than ‘phoning home’ to the criminals for new instructions. Once inside the network, it begins to siphon off electronic funds to an offshore bank account. These transfers occur over the course of several weeks, in relatively small amounts each time, the malware having learned to emulate the timing and size of the fees that the bank already pays to its third-party consultants and corporate partners. At every turn, AI helps the criminals preserve their anonymity, increase the attack’s subtlety, and ultimately commit a highly lucrative crime.
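The structuring trick here is essentially statistical: keep each fraudulent transfer inside the band of payments the bank already makes, so nothing stands out to a reviewer. A toy sketch of that sizing logic, using invented figures, might look like this:

```python
import random
import statistics

# Invented amounts (in dollars) of routine payments the bank already makes
# to consultants and partners, as the malware might observe them.
routine_payments = [18_400, 21_150, 19_800, 20_500, 22_300, 19_250]

mean_amount = statistics.mean(routine_payments)
stdev_amount = statistics.stdev(routine_payments)

def disguised_transfer_amount() -> float:
    """Draw an amount that falls inside the band of normal-looking payments."""
    amount = random.gauss(mean_amount, stdev_amount)
    # Clamp to the observed range so nothing looks unusually large or small.
    clamped = min(max(amount, min(routine_payments)), max(routine_payments))
    return round(clamped, 2)

# A handful of transfers spread over weeks, each sized like an ordinary fee.
print([disguised_transfer_amount() for _ in range(4)])
```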
Winning the Arms Race
Advancements in artificial intelligence have spawned formerly unimaginable innovations across the globe, and from chess to multiplication to reading comprehension, machines are increasingly able to exceed the limitations of their human engineers. Yet the double-edged sword of technological progress means that malicious actors have begun to exploit this ability, employing machine learning algorithms to wage both physical and cyber warfare unlike anything ever witnessed. To address such challenges, the only path forward is with AI itself.
Unlike human professionals, artificially intelligent cyber defenses can spot the minute differences between genuine employee behavior and nefarious AI mimicry at each stage of the cyber-attack lifecycle, from the initial spear phishing email to the concluding exfiltration attempt. Moreover, these AI security tools need not rely on rules and signatures to predefine tomorrow’s cyber-attack based on yesterday’s threats, allowing them to detect previously unknown exploits that traditional tools miss. As the business of cyber-crime pushes the envelope with more and more sophisticated attacks, it is incumbent upon the financial sector to respond by staying one step ahead in the AI arms race.
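At its core, this defensive approach flags deviations from each user’s own learned baseline rather than matching known signatures. The sketch below is a greatly simplified illustration of that principle (it does not represent any particular vendor’s product), with invented numbers standing in for real telemetry.

```python
import statistics

# Invented baseline: megabytes this employee's machine uploads externally per day.
baseline_daily_uploads_mb = [12.0, 9.5, 14.2, 11.8, 10.3, 13.1, 12.7]

mean_mb = statistics.mean(baseline_daily_uploads_mb)
stdev_mb = statistics.stdev(baseline_daily_uploads_mb)

def is_anomalous(todays_upload_mb: float, threshold: float = 3.0) -> bool:
    """Flag behavior that deviates sharply from this user's own learned baseline."""
    z_score = abs(todays_upload_mb - mean_mb) / stdev_mb
    return z_score > threshold

print(is_anomalous(12.5))   # an ordinary day -> False
print(is_anomalous(180.0))  # an exfiltration-sized spike -> True
```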
Learn how to defend your network in our upcoming webinar:
The Future of AI-Powered Cyber Defense for Financial Institutions
In the webinar, security industry expert Max Heinemeyer will analyze the most sophisticated cyber-threats of 2018, including insider attacks and fast-acting ransomware. The webinar will also outline expectations for 2019’s threat landscape, specifically as it pertains to the financial services sector, and detail how cyber AI tools have finally returned the defensive advantage to organizations around the globe.
Webinar Details:
Date: Thursday, January 17
Time: 10:00 a.m. EST (New York) / 3:00 p.m. GMT (London)