The interweb says it should be.
Every day, in every way, it’s getting FUDdier: Cyberterrorists, Cyberespionage, Cybercriminals, Cyberbogeymen. Cybercars, Cyberheating, Cybercyber.
By rights nothing should be working. All of our bank accounts should be empty, company websites should all be defaced or down, and national infrastructure should be in melted puddles all around us. But it’s not (or not just yet, if you are on the pessimistic side).
And stories about destructive behaviour from the so-called good guys don’t help. They can’t resist, right? Their natural conspiracy-theorising, activist, anti-capitalist urges can’t be quashed, so they’re bound to poke around too hard, find something they shouldn’t and betray you.
Sooo tempting to say there’s nothing to worry about if you are not doing anything wrong… but I shall resist.
Just recently it was Chris Roberts. He allegedly hacked a plane’s computer and took control of an engine thrust system mid-flight, making the plane change direction briefly and slightly. It was the most contentious of many things he did to raise awareness of exploitable aircraft vulnerabilities (and, many say, to raise awareness of his hacking prowess). In April he tweeted mid-flight with a picture showing he was into a plane system, that time proving he could get in but doing nothing once there. The FBI were waiting for him when he landed. The shocking headline about tinkering with engines came from the FBI affidavit summarising Chris’s interrogation. If it’s true, I cannot forgive it (nor can the majority of the online security community). However, there is broader context and a bigger debate.
That doesn’t change the fact that many folk have been talking to airlines about vulnerabilities for months. There are many, many researchers finding holes in systems and notifying their owners. While I’m not asking anyone to condone unauthorised compromise of live systems, there is a serious worldwide problem with vulnerabilities being ignored for convenience, cost or political reasons.
To play devil’s advocate a little, what reportedly happened is similar in some ways to an ‘experiment’ of mine with the company mainframe (more on that below). The result was a system outage that topped the corporate severity scale. Just like most researchers, I put my hand up straight away to say what had been done. The response, after the initial furore died down, was:
“You really shouldn’t have been able to do that. At least now we know about it we can fix it”
Of course no lives were at risk, the business only took a brief (if spectacular and international) hit, the media didn’t get to hear about it, and regulators, because the impact wasn’t material, didn’t need to be informed. BUT there was an impact… unlike the actions taken by the vast majority of responsible researchers, who disclose vulnerabilities to firms.
Bug bounty programmes are welcome, as are researchers (with checks to ensure the validity of claims and claimers). The problem is that lots of information the public would demand to see acted upon falls through foggy legal gaps. A big part of the antidote: BUILD RIGHT, BUILD ONCE.
Plausible deniability about serious software and system vulnerabilities is not an ethical or sustainable security strategy.
And who better to inform securing the systems of the future than those who understand the folk who will try hardest to break them? Folk who have been (and will continue to be) pushed from the White towards the Grey or Black Hat camp if we ignore well-intentioned advice.
Employ Hackers – Are You MAD?!
You are understandably concerned about employing or contracting the services of people with hacker skills. You need, while doing better with staff and physical security in general, to get over that. This TED talk, in my opinion, is a must-see for every CXO:
I’m not, and have never been, a hacker… oh wait:
- I cracked the password file on a client’s server. I had built it, then forgotten the master admin password.
- VERY early in my IT career, while working on a helpdesk, I played with the supposedly minimal mainframe access I was allowed. I tweaked a standard command to see what it did. The result was ‘interesting’. A third of our global mainframe processes went down.
- I’ve OSINTed (dug for all publicly available data about) interviewers, potential partners and folk who propose working together.
- I’ve downloaded, installed and tweaked code on devices used by my kids to keep them safe.
- I carefully conduct conversations with employers to extract information I need to hack any job I do (in the life-hacker sense).
What do you do to people, processes, networks, systems or software to get around ‘annoying’ controls and get the job done?
- Have you ever ignored corporate file transfer guidelines (maybe sticking things on a USB drive or on Dropbox) to move data fast?
- Have you ever pulled rank, or got someone else to wave the JFDI card, to bypass some kind of required security (a forgotten pass to get into the building, forgotten security questions for a password reset or something similar)?
- Have you ever plugged a wireless access point into the company network because seats are scarce and company WiFi sucks?
- Have you ever switched off AV or a firewall application (or strong-armed IT staff to do it for you), so you can install a programme they wouldn’t normally permit you to run?
- Have you ever shared passwords with a contractor, because their official access is taking too long to set up and they’re reminding you their time is money?
Those actions are comparable to things malicious actors would do to get into (and get data out of) your network and systems. They rarely do that CSI: Cyber thing where biometrics are faked or firewall configurations are changed to get past seemingly cast-iron defences. They look for the path of least resistance (and we leave a LOT of easily navigated paths).
My motivation was securing my clients and my family, raising awareness of good and bad security and furthering my education and career. If you listen hard to Keren, most hackers are motivated by the same things. Of course there are ‘bad apples’, as there are in any workforce, but think about this:
If your business is reasonably well secured, who can do more harm more easily? The staff, supplier or partner firm given permission to access buildings, systems and sensitive data, or the opportunist would-be exploiters trying to get in from outside?
Those with the expert means and motivation to get past good security controls and disrupt or destroy for disruption’s sake, deface or DoS for activist purposes, or compromise systems for profit or political gain are still in the minority. Against those folk, who better to look for worrying signs and sensible defences than people who know how that really works?