In mid-May we learned that Bangladesh Bank had lost a reported $81m when crooks managed to fraudulently redirect funds to shady recipients via the Swift international money transfer system. Then we found out it had happened again…and again…and again…and again. Here’s @issuemakerslab graphically illustrating the first four breaches, including what’s known about where the money went:
First analyses of the compromise:
BAE Systems Threat Research Blog published details of code they found matching characteristics of the February Bangladesh bank attack. Ars Technica then summarised those findings here: Billion dollar Bangladesh hack: SWIFT software hacked, no firewalls, $10 switches
The diagrams below (linking to the BAE blog post) tell a story of patient sophistication. However, as is often the case, that’s the technical how rather than the root causes, including the ones Ars Technica brutally highlights in its headline.
Institutions using the network must have existing banking relationships; SWIFT transactions do not actually send money but instead send payment orders that must then be settled by the institutions involved moving money between accounts.
SWIFT’s security stems from two major sources: notionally, it’s a private network, and most banks configure their accounts so that only certain transactions between particular parties are permitted.
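That relationship-based permissioning can be pictured as an allow-list over counterparty pairs and message types. A minimal sketch, assuming a simple lookup table (the BIC-style codes and the `ALLOWED` structure here are invented for illustration; MT103 and MT202 are real SWIFT message types for customer and institution transfers):

```python
# Hedged sketch: relationship-based filtering of SWIFT payment orders.
# The BIC-style codes and the ALLOWED table are hypothetical examples.

# Which message types each (sender, receiver) relationship may exchange.
ALLOWED = {
    ("BANKBDDH", "FEDNYUS3"): {"MT103", "MT202"},  # hypothetical BIC pair
}

def order_permitted(sender: str, receiver: str, msg_type: str) -> bool:
    """A payment order is only valid between pre-agreed counterparties,
    and only for the message types that relationship permits."""
    return msg_type in ALLOWED.get((sender, receiver), set())

print(order_permitted("BANKBDDH", "FEDNYUS3", "MT103"))  # True
print(order_permitted("BANKBDDH", "EVILBANK", "MT103"))  # False
```

The point of the sketch: an attacker with valid credentials inside such a scheme doesn’t need to defeat the permissioning at all, which is exactly the scenario described below.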
In the case of these breaches, all those bets were off, because attackers either had insider help to get privileged credentials, or they hacked the bank network and camped there waiting to spot and scoop credentials. All of the many checks and balances built into systems to prevent this kind of fraud were defeated in ways that suggest perpetrators had intimate knowledge of both banking practice and Swift systems:
- Inserting valid transactions into the Swift system to pay attacker-defined recipient accounts (how this was done hasn’t yet been reported)
- Using valid credentials to compromise the local Swift Alliance Software Server messaging system and Oracle database
- Listening for Swift messaging about target transactions.
- Amending transaction values after checking and/or amending available currency values.
- Submitting amended transactions for processing.
- Amending built-in integrity checks (altered code produces a default pass).
- Deleting transactions or replacing original values in databases.
- Replacing amended transaction values with original values before transaction reports are printed and submitted for manual checks.
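The integrity-check defeat in the list above (altered code produces a default pass) was reported by BAE as a tiny in-memory patch to a local validation routine. As a hedged illustration of the general technique, not the actual malware, here’s a toy model of how NOP-ing out a two-byte conditional jump makes every check “pass” (the byte values, offsets, and mini-“CPU” are invented for this sketch):

```python
# Illustrative sketch only: how a two-byte patch can neutralise a check.
# Byte values and offsets are hypothetical, not taken from the real malware.

# Pretend this is a fragment of a validation routine in x86 machine code:
routine = bytearray([0x85, 0xC0,   # TEST EAX, EAX (did validation succeed?)
                     0x75, 0x04])  # JNZ fail_path (branch on failure)

def validates(code: bytearray, result_ok: bool) -> bool:
    """Toy 'CPU': honour the conditional jump if it is still present."""
    if code[2] == 0x75:            # JNZ intact: report the real result
        return result_ok
    return True                    # jump NOP-ed out: everything 'passes'

assert validates(routine, False) is False   # a genuine failure is caught

# The reported trick: overwrite the conditional jump with NOPs (0x90),
# so execution always falls through to the success path.
routine[2:4] = b"\x90\x90"

assert validates(routine, False) is True    # failure now reads as success
```

Two bytes changed, and a control that everything downstream relies upon silently stops working, which is why the post stresses detection and response rather than any single preventative tool.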
When people with expertise like this are inside your network it is all about detection and response. And in this case, steps were taken to defeat technical and financial processes designed to do exactly that. In aggregate, it’s the kind of compromise that strikes fear into the heart of anyone trying to protect a business.
Which enterprise security tool can prevent this kind of breach?
The simple answer is none of them. There’s no single magic bullet for this. Problems reported so far on the host side equate to cracks in security fundamentals:
- The insider element already discussed
- Network architecture failings (especially perimeter controls, lack of segmentation, and related monitoring)
- Access administration and access/transaction logging and monitoring
- Questionable device quality/build/configuration
…not, in the main, linked to weaknesses in the Swift systems themselves.
The only mitigation is adequate expertise, budget, and backing to get foundation controls right…especially for your insider risk.
It seems likely there were one or more compromised staff members (or planted members of the criminal gang) at the targeted banks, and one or more individuals with either prolonged access to Swift’s core network or the means to study Swift systems elsewhere. Folk mainly agree these very specific attacks would not have been possible without that collusion. A ‘show and tell’ CBT isn’t going to combat that. Consider the money on offer: the $81m in Bangladesh was very nearly $1 BILLION. The only thing preventing that giant additional loss was a spelling mistake in one transaction.
What might make a difference is staff screening and user behavioural analytics (if deemed ethical and able to produce outputs that are accurate and actionable), coupled with expert-led training to help staff spot and resist social engineering, while keeping a close eye on social engineering campaigns run by threat actors operating in your sector.
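For the behavioural-analytics idea, here is the simplest possible form: baseline an operator’s historical payment amounts and flag large deviations. The z-score approach, the threshold, and the figures are illustrative assumptions, not a description of any real product:

```python
# Hedged sketch: a minimal user-behaviour baseline, flagging payment
# orders that deviate sharply from an operator's historical pattern.
# Threshold and z-score approach are illustrative choices only.
from statistics import mean, stdev

def flags_anomaly(history, amount, threshold=3.0):
    """True if `amount` is more than `threshold` standard deviations
    from this operator's historical payment amounts."""
    if len(history) < 2:
        return False          # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu   # flat history: any change is anomalous
    return abs(amount - mu) / sigma > threshold

# Usage: an operator who normally moves ~$10k suddenly submits $20m.
history = [9_500, 10_200, 11_000, 9_800, 10_500]
print(flags_anomaly(history, 10_700))      # in-pattern -> False
print(flags_anomaly(history, 20_000_000))  # wildly out -> True
```

Real tooling would baseline far more than amounts (counterparties, timing, message sequences), but even this toy version shows why the near-$1bn Bangladesh orders might have stood out against normal operator behaviour.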
The rest, unless firms and international law enforcers get far better at catching and punishing perpetrators (or you can offer more attractive rewards than attackers), is residual risk. Residual risk that cyber insurers may or may not let you transfer.
What does this mean for 3rd party security governance?
The first reports coincided with the 2nd instalment of a series of Infospectives posts on 3rd party security governance (based mainly on experience gained in the financial services sector). In it I defined a subset of largest 3rd parties requiring a distinct governance approach:
Big Suppliers of Generic Services or Products – e.g. credit card providers and BACS/Swift service providers – organisations likely to be on procurement’s radar as tactically, strategically, financially, or operationally important, providing things that are identical to the things provided to every other client.
and made this somewhat controversial statement:
Suppliers I’m arguing, after due diligence, you should automatically remove from future audit/assessment scope.
How can my advice (and reputation) still stand in the context of these recent breaches?
First, my justification for that apparent call to negligence:
Your duty is to make sure they don’t provide some bespoke element of service that would require on-going assessment. Then do due diligence…[but] what you cannot do, if you are taking their bog standard service or product (the same one everyone uses), is get them to change it.
…following this argument to its logical conclusion, why would you waste dramatic amounts of effort trying and failing (or occasionally trying and succeeding) to get an audit done, when the result will be a blanket refusal to change anything?
Now the critical extract from the 12th May New York Times report on these breaches:
In both cases, the core messaging system of Swift was not breached; rather, the criminals attacked the banks’ connections to the Swift network. Each bank is responsible for maintaining the security of its connection to Swift. Criminals have found ways to exploit loopholes in bank security to obtain login credentials and dispatch fraudulent Swift messages.
So this wasn’t, in the main, about Swift systems; it was about bank network security, database security, insider threats, and incredibly motivated, well-resourced, and able criminals.
Circling back to my seemingly suspect advice about 3rd party security governance: My recommendations start and stop in the context of controls operated by your suppliers. A fundamental part of due diligence is carefully mapping the demarcation between procedural and technical controls you are responsible for, and things they are contractually bound to provide. A demarcation that will be the crux of the blame game with these and possible future breaches, as this Wall Street Journal article vividly underlines:
Participants in the Swift venture will be pressuring the firm to scrutinise what part it may have played in these incidents, and anything that can be done to help prevent recurrence. But as consumers of Swift’s services, they have a far bigger challenge.
That’s ignoring the potential impact on all of the other individuals and organisations who do business with these breached and at-risk financial firms. Folk who have no control and no means to influence how this system is operated. For them, to quote my original post:
Where [after due diligence] you note a mismatch between supplier security and your requirements, you should articulate the associated risk to the business.
In terms of supplier selection and on-going governance, that then leaves the business with two choices:
- Accept the risks, or
- Choose another supplier who provides the controls you want
Image Credit: Copyright: fergregory / 123RF Stock Photo
Categories: Corporate Security