Corporate Data Protection & GDPR

Opinion: Morrisons, vicarious liability, and privacy risk management reality

On the face of it, organisations were just made liable for the nefarious data doings of any nasty individual they might have had the misfortune to employ…or of nice employees who just mess up. Even if organisations do nothing wrong and things happen in spite of ‘appropriate’ control, they might be vicariously liable.

In this attempt to unpick something murky in the world of data protection I try to answer the following questions:

  • What happened and what does vicarious mean in this liability context?
  • Is this liability for poor security?
  • What the heck does this mean for your operational security and data protection day job?
  • What will the liability likely be?
  • Why should Morrisons NOT own residual data handling risks?

and close with

  • Insurance limitations, being sensible about surveillance, and some control tyres you could kick

1. What happened and what vicarious means in this liability context

Here’s a link to the 5RB summary of findings in Various Claimants v WM Morrisons Supermarket, including a PDF of the full verdict, and below is the neatest and plainest English bit I could find on what vicarious liability actually means.

So how did all this come about? In a nutshell: IT bloke gets disciplined for a breach of company policy, IT bloke is (debatably justifiably) hacked off, IT bloke later gets asked to send the company HR database contents to a 3rd party (KPMG in this case), IT bloke does what was asked. He follows procedure with encrypted drives and even a hand-to-hand delivery, but also takes a copy, sticks it on his own laptop, then uploads the content to a file sharing site. He then anonymously shares access via a link and, posing as a concerned member of the web-browsing public, tips off multiple media outlets. The data was just about everything from the HR file of every staff member…all roughly 100,000 of them.


“But how the heck do we mitigate for rogue people being unpredictably bad!?”

They all, entirely understandably, cry.


In trying the initial case the court didn’t see the 1998 Data Protection Act (the interpretation in UK law of the 1995 EU Data Protection Directive) as the be-all and end-all benchmark for liability for this breach. Morrisons’ legal team fought valiantly to say it was. They argued the DPA 1998 was, in fact, a super-duper legal tool that ruled out the need for pesky benchmarking against other laws and standards, e.g. for misuse of information and breach of confidentiality. Very specifically they argued that it ruled out vicarious liability as a cause of action and strongly asserted that this was the intention of the DPA framers. Summarising their case: bearing in mind Morrisons was found to have had appropriate controls in place by DPA 1998 principle 7 standards, and Mr Skelton was found to be in the wrong and sentenced to 8 years in jail, how could they still be on the hook?

The first judge didn’t agree…nor did the second. They ruled that Morrisons couldn’t say vicarious liability was off the table, because had the framers intended to exclude it they would almost certainly have said so at the time. It is, after all, a far from trivial point of law, one that would more generally rule out recourse to whole swathes of other causes of action.

Of course the DPA 1998 has now been superseded by the DPA 2018, containing local legal elements of the directly applicable and far beefier GDPR, but you apply the law in force at the time things went wrong.

The solution suggested to manage vicarious liability for this kind of kamikaze insider risk? INSURANCE.

Below are scenes from Cyber Insurance HQ when that detail was revealed.

[Stock image – copyright Sergey Nivens]

Reportedly that is a ‘mad office party’. Gotta love stock images, but I digress. Most who follow me know how I feel about the accuracy and adequacy of cyber insurance risk estimates and premiums. For those who don’t, I gave my 60-odd cents here (I’m told by folk who should know it’s still pretty much on the money).

So, what to do? What does this mean for the day job? And is it really as aggressively anti-business, pro US-style litigiousness, and nonsensical as the prevailing headlines made it appear?


DISCLAIMER: Still not, as I make sure I say in all such posts, a lawyer. This revolves almost entirely around legal findings, so I was particularly keen to point that out and invite anyone reading to shout if anything is missing or wrong. I have worked hard to check facts and I’ve read a daft number of reports on this, but nope, no law degree. However I do have, unlike most of the lawyers doing a cracking job of picking the precedential (sic?) bits out of this, plenty of first-hand experience dealing with risk management at the security and data protection coal face.


2. Is this liability for poor security?

Nope. And this is the crux of concern. In this specific case Morrisons, according to the first judge in a finding unchallenged by the second, hit the lion’s share of good practice benchmarks in the DPA. The overarching DPA security benchmark is ‘appropriate’ control, taking into account the probability of local threats acting on local technical, procedural, or people-related vulnerabilities to produce particular impacts, allowing for the cost to implement decent mitigating controls (see the precise wording below). That, as I’ve said until I’m blue in the face, should be the basis of any good security strategy, but there’s no prescriptive measure of ‘secure’ in there. This is about context, and in this specific context the control was deemed sufficient.


Extract from original DPA 1998 text – Link to PDF

This fella was intent on causing problems. The risk of him doing so was argued to be ‘appropriately’ handled while he was following the day-job script from which Morrisons claimed they had no reason to think he would diverge. Then, when he did go rogue, they successfully defended the assertion that they detected and handled it in an appropriate way. Which brings us to some important points.

There are going to be many, many security peeps who pipe up with the many, many ways THEY would have monitored, managed, and mitigated for this particular guy, but they’ll do so with no knowledge of the actual state of security play on those particular days.

The court acknowledged that not everything that could be done was being done, but they also found that Morrisons had done an ‘appropriate’ amount based upon pre-existing understanding of risks and the time available between Skelton’s disciplinary hearing, discovery that he was seriously upset, and him committing the crimes. They further found, on the majority of points, that Morrisons could not reasonably have implemented additional controls in that time frame, and that most potential controls would only have had a slim chance of reliably and promptly preventing the wrongdoing, or detecting it in time to mitigate impact.

Is the firm directly (as opposed to vicariously) liable for not conducting 24/7/365 close quarters surveillance of this individual before or after the point he became extremely aggrieved? Are they liable for not having a technical solution that would have reliably prevented someone instructed to access the data and authorised to transfer it, from using the access, encryption keys, and equipment they were authorised to have?

The answer from all security and privacy practitioners worth their salt should be ‘no’, not least because some of that control would break other human rights related laws. That’s allowing for a whole bunch of debatable elements, depending on what portion of a very tight budget you think Morrisons should spend on security and the real-life security vs usability trade-off. The kind of debates in which the online security crew can stop seeing the pragmatic wood for the tech-possibility trees. Just like the legal crew can disappear down a warren-worth of rabbit holes with different contextual interpretations of law, regulation, and precedent.

Bottom line: no firm can 100% mitigate the risk of a breach, especially a skilled insider breach. Morrisons, in the opinion of the court, did enough, so we have to stop and draw a line under the question of appropriate technical and organisational control in the context of the DPA…actually, make that 2 lines, because I know the security crew will find it hard to let that go.



3. What the heck does this mean for your operational security and data protection day job?

That’s where this is doing folks’ heads in: even if Morrisons had implemented so much control that staff couldn’t actually do their jobs any more, or, conversely, granted every staff member god-like access to every system in their entire estate and switched off every way of logging what they did with it, even then security would be beside the point when it came to vicarious liability…until the court began to work out details of resulting damages. Something the 5,500-odd claimants in the Morrisons case are still waiting to find out.

So which bit of our security risk is this about? What can we actually do? The simple answer, in the first instance, is very little, except to continue work in progress to understand and manage exposure to damaging insider accidents and abuses. Another simple early answer is: DON’T BUY THE FIRST HASTILY REBRANDED VENDOR SOLUTION THAT USES THIS CASE AS MARKETING COPY.

Work to get a handle on the residual risk. How much can you do to further reduce that probability or impact without hobbling folks’ ability to do their job, or trampling over their rights? What portion of the risk will the board have to tolerate temporarily or in the medium term, or transfer to an insurer? (Note I said ‘tolerate’ not ‘accept’, because you should revisit what you can and can’t do about risks regularly.) Always keep in mind that this might differ from similar work you’ve done before, because it’s squarely focused on potential harm to the rights and freedoms of data subjects rather than just your bottom line.

In other words:


Whether ’tis nobler at the time to manage
the risk of outrageous breaches
Or to take out cover against insider troubles
And by insuring, curtail them?

Shakespeare, Hamlet, Act 3, Scene 1, butchered by Infospectives.


First there’s inherent privacy risk

The amount of risk to a data subject that exists when they hand information over to another legal entity, in this case Morrisons. That’s the risk bearing in mind the prevailing threat landscape and existing state of vulnerabilities, but not allowing for any mitigating controls e.g. in the absence of access restrictions, firewalls, network segregation, internal traffic monitoring, codes of conduct for staff, disciplinary procedures and sanctions, training, etc, etc.

Then there’s residual privacy risk

The inherent risk adjusted downwards by the preventative, detective, or impact limiting value of all of the variously layered and interconnected technical and procedural controls.

All set against a privacy risk appetite:

How much residual risk you, as an organisation, think you should bear on behalf of officers of your organisation and, in this case, on behalf of the data subjects you have responsibility to protect while their data remains under your direct or delegated control.
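To make those three moving parts concrete, here’s a deliberately crude sketch in Python. Everything in it is invented for illustration: the 1-to-5 scales, the scores, and the control effectiveness percentages are placeholder assumptions, a toy model of the arithmetic rather than a scoring methodology anyone should lift wholesale.

```python
# Toy model of inherent vs residual privacy risk. All figures are
# illustrative assumptions; real assessments need defensible local inputs.

INHERENT_LIKELIHOOD = 4   # 1 (rare) to 5 (almost certain), before controls
INHERENT_IMPACT = 5       # 1 (negligible harm to data subjects) to 5 (severe)

# Hypothetical proportion of likelihood each layered control removes.
controls = {
    "least-privilege access": 0.30,
    "database activity logging and alerting": 0.20,
    "network segregation": 0.15,
}

likelihood = INHERENT_LIKELIHOOD
for effectiveness in controls.values():
    likelihood *= (1 - effectiveness)  # each layer trims the remaining likelihood

inherent_risk = INHERENT_LIKELIHOOD * INHERENT_IMPACT
residual_risk = likelihood * INHERENT_IMPACT  # impact largely unchanged here

RISK_APPETITE = 8  # the score the board says it is willing to tolerate

print(f"Inherent risk score: {inherent_risk}")      # 20
print(f"Residual risk score: {residual_risk:.1f}")  # ~9.5
if residual_risk > RISK_APPETITE:
    print("Gap remains: tolerate (and revisit), treat further, or transfer.")
```

The interesting output isn’t the numbers, it’s the gap: whatever sits between residual risk and appetite is what the board has to consciously tolerate, treat further, or transfer.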

Those with legal and regulatory blinkers would prefer zero tolerance for such risks. That’s one of the hills that GDPR programmes die on, if they haven’t already fallen to bad scoping, finger-in-the-air requirements definition, or a missing RACI. Ah the RACI, that most elusive of things. Lack of which leaves everyone under the impression that information gathering, assessment, and remediation is somebody else’s job. It is the opposite of trivial to get all that right in a large, complex, and distributed organisation, while rationally allowing for the fact that non-compliance doesn’t necessarily equal an intolerable risk.

That planning challenge isn’t aided by the fact that security and data protection don’t have prescriptive enough control requirements to create an obvious plan and security consensus (PCI DSS payment card controls are about as far as we go), and who, if we’re sensible, really wants that? One size will never fit all organisational models and risk environments. So we all work to hit some modified set of industry recognised (and hopefully risk related) benchmarks for adequacy.

The UK Information Commissioner, the European Data Protection Board, and seasoned professionals will have good advice on minimum data protection and privacy controls, including linked purpose-specific legislation (e.g. for medical testing, child protection, fraud prevention, and surveillance); ISO 27001 provides a security related risk and control management framework; the NIS Directive is there for national infrastructure; and frameworks like NIST and Cyber Essentials give a smörgåsbord of other baseline controls.

Beyond those laws, regulations, selected standards, and locally defined ‘musts’, you’ll have some local process and technology-specific controls, plus progressive layers of rules and procedures. Things that can variably close the gap between the residual risk and a point a tolerable distance from your risk appetite. Though it’s still notoriously hard to evidence a causal link between the implementation of controls, control effectiveness, and real risk reduction value.

The best CISOs and DPOs are a decent distance up that risk management maturity curve, because without that view of cost and benefit you can’t resist calls to compromise ethical and compliance standards in favour of profit or rapid delivery. You’ll also struggle to counter cacophonous vendor noise.

As an example:

If the risk is physical theft of hard copy documents from desks, then the inherent risk is the full extent of harm that can be done if those documents fall into different types of more or less malicious and motivated hands. From a cleaner with a passing interest, or a chancer who blagged their way past the guard, to a targeted and well-planned theft ordered by a competitor, criminal, or even (though far less frequently than folk like to think) someone acting on behalf of a government somewhere.

The residual risk is whatever that impact and likelihood gets reduced to when you take layered perimeter controls, reception controls, internal door controls, clear desk policies, and other sensible measures into account. You will never completely mitigate the related risk while you have employees, cleaners, caterers, and any other warm body permitted to pass through the security perimeters on a daily basis. It just is what it is; the trick is minimising and scaling it.

Ditto with IT staff stealing your entire HR database. If you are going to let any staff have the access needed to shift that data, and respect their human right not to be surveilled 24/7, you will have substantial ongoing residual risk. That gap, the one between your residual risk and your risk appetite, is probably sitting somewhere distinctly south of where you wish it was, but that’s just how the privileged access cookie crumbles.

Can you identify and treat the root cause?

At some stage you just have to trust folk to do their job, while sticking that risk on your risk register to manage.

As part of that you might want to take a look at how staff are treated. The disciplinary events preceding the Morrisons breach caused much debate. Consensus seems to be that Skelton was treated pretty harshly, but his response was an unpredictably severe escalation. As a result some sections of the data protection community, people I greatly respect, have been very critical of the vicarious liability finding. They see it as an unnecessary stick to hit organisations with. An employee did something damaging under his own steam and was prosecuted. The firm did as much as was reasonable to guard against it and respond to it. End of story, they, like Morrisons, argue. They want to avoid disproportionately lining law firm (rather than data subject) pockets, and firms falling back into defensive positions that undo the privacy and security culture changes all of us fought so hard for.

[Chart from “Research and analysis to quantify the benefits arising from personal data rights under the GDPR”, a report to the Department for Culture, Media & Sport by London Economics, May 2017 – see the PDF]

As counterpoint to that: if Morrisons didn’t have ‘Data breach caused by disgruntled IT admin’ on their exec risk profile, they darn well should have. Try putting rights and wrongs on one side for a moment and jump straight to the fallout and initial response. I left one role as network security manager because a rogue admin retained access throughout a disciplinary hearing. It was quite simply impossible to do my job. When there’s a contentious HR issue related to one of these staff you need to seriously consider some gardening leave, but how long do you disadvantage someone based on a subjective judgement about the likelihood they will do it again, or, if they were falsely accused and poorly treated, the likelihood they will go rogue as revenge?

This isn’t a new challenge for HR and security departments. How would you manage the risk if the sales director had his knuckles rapped in such a way that it destroyed all loyalty to the firm? Do you leave him with the customer rolodex and the safe keys? You can’t completely mitigate that risk, especially if he doesn’t care whether he ever works again.

Most firms see technical staff, even privileged technical staff, as some kind of cannon fodder, but as we’ve repeatedly seen, the chaos they can both cause and prevent is just as weighty and far-reaching as the chaos that can be called down by carefully nurtured, and super-well-remunerated CXOs. Take a look at the recruitment controls, financial controls, carrots, and commensurate legal sticks you have in place to curb potential board-level fallout. Consider what you invest to succession plan and ensure capacity for delegation to avoid single critical points of failure. Now take a look at what you do with your super privileged IT and security crew. A false economy?

Is it all doom and gloom?

To take some of the edge off this rather dark post, do remember that trustworthiness and respect are things you can take to the bank. They strengthen relationships and in the current interpersonal climate respectful interactions can be notable as an exception.

There is ample and growing evidence for a return on privacy investment (I don’t think ROPI will fly as an acronym, but hey ho). If you can foster trust within your organisation, supply chain, and (as a natural result) in your customer base, there is little doubt it can be a competitive differentiator with potential to boost both brand loyalty and demand. Here’s some more on that (a link to the same PDF that gave me that bar chart) produced by London Economics for the Department for Culture, Media & Sport as part of framing the 2018 DPA.

Having said all of that, I certainly don’t pretend to have all the answers, so back to the post.

4. What will the liability likely be?

Bored yet? I know. I’m sorry. Welcome to my world. GRC is a noble sounding endeavour, but the journey can make you want to chew your own arm off, no matter how worthwhile the eventual prize.

This is where the bottom line finally and fully comes in. The court has said that Morrisons is the accountable custodian of that residual risk, remembering the meaningful portion of that is risk to data subjects. Risk of harm that might come to them in the short, medium, or long-term as a result of either harmful authorised processing, or a breach.

But how liable are they? That’s the big question that we and the 5,500-odd claimants in this case are waiting to see answered.

What price will the judge place on identity theft and related fraud? On virtual risk tipping over into threats to personal safety where data is leveraged to victimise? On the inconvenience and expense linked to a response whether or not a crime was committed, e.g. freezing accounts, having to prove your own identity, living in debt while investigations conclude? On destruction of personal or professional goodwill and credibility linked to compromised profiles or sites? On defending against secondary or opportunist attacks (e.g. phishing) from people who become aware of the breach or obtain data second or third hand? On tail effects which can last for many months, or even years? And on the psychological impact and distress linked to all of that?

However this pans out, it will be a step change in legal recourse for victims of data breaches, and a payday for specialist and variously less specialist law firms who will not be backward in coming forward with an offer to help.

5. Why should Morrisons NOT own residual data handling risks?

The risks associated with handling data, and with employing people to whom you give privileged and other risky access, are yours to manage on behalf of data subjects. People rely on you and expect you to do that. That’s the killer conclusion I’ve come to at great length. It’s a cost of doing business. You have a debt of care.

Finally, with this finding, big businesses will have to rethink the tactic of setting their giant-size legal teams on errant employees in an attempt to make sure liability sticks elsewhere. They will have to accept their share of liability for people getting harmed if they invited the perpetrator in and the actions can be linked in any relatively straightforward way to an employee doing their day job.

To benchmark your own appetite for this risk, perhaps consider another case, one that involved nearly every adult in America. Of course this is under jurisdiction of a totally different legal system, but the principle still stands.


Equifax ex-CEO blames breach on one person and a bad scanner

CNET


At congressional hearings Richard Smith, their former CEO, testified that he was responsible, and sorry, but not at fault. Instead the theft of 145.4 million detailed data records was pinned on a scanner that allegedly didn’t flag the Apache Struts vulnerability and the one IT guy allegedly responsible for making sure that didn’t happen.

Should that kind of failing, defended by a cadre of heavyweight lawyers as an aberration, an ad hoc incident, a rogue oversight, a sub-par individual, negate the need for a firm to grant those harmed some redress?

In this example it is easier to argue some direct liability for apparently negligent security: one guy in charge of vulnerability management for the whole of Equifax? But they have so far managed to swerve any federal sanctions, despite criticism from the congressional committee about how it was done.

Mr Smith did step down, but reportedly still received a multi-million dollar payout; other board members reportedly forewent their yearly bonus, but kept their other perks; and Equifax has since been hit with 240+ class action lawsuits. At least one subset of actions relates to the way they tried to ‘protect’ data subjects in the immediate aftermath: when responding to breach enquiries with the offer of 12 months’ free credit monitoring, Equifax failed to disclose that they owned TrustedID, the third party who would be doing the monitoring. A service that Equifax needed folk to use to help the company pursue one of its key strategic aims. Plus, to add insult to potential injury, they initially made cover conditional on signing away rights to pursue a class action in court.

With any meaningful federal sanction looking less and less likely in the current anti-regulatory climate, I think it’s fairly safe to conclude they have been deemed too big to fail. As intermediary data broker for a heinously tangled web of entities, the economic ripples of a meaningful federal sanction would have been much like that butterfly flapping its wings. And the impact of all those class actions? Even allowing for a recent law forcing many through arbitration rather than court, there might be more to see.

Do we want to encourage this kind of typically American legal culture in the UK? Can the model be modified to minimise flagrant legal opportunism in the class action space? Can we avoid chaos, but also avoid vested interests shielding companies and governments from the implications of their actions?

There’s no doubt a need to tread carefully. That’s one of the key reasons this was referred to a higher court: A collectively acknowledged need to balance appropriate liability in a space where potential harm to individuals and their inability to independently seek redress will only increase (e.g. with smart home, car, medical device, or other ‘AI knows best’ scenarios) vs the risk of financially weaponising the judicial system for Mr Skelton and others willing to suffer the consequences in pursuit of revenge.

6. Insurance limitations, being sensible about surveillance, and some control tyres you could kick

So how will this play out in the broader sense, other than as a source of insurer and vendor glee? One knee-jerk proposal was turning employment into a One Flew Over The Cuckoo’s Nest scenario, a.k.a. blanket pre-employment and on-the-job monitoring for mental health problems to help the organisation make decisions about employee fitness for employment purpose.

[Screenshot: a worthwhile thread for anyone considering preventative or detective control options that disadvantage a whole group regardless of wrongdoing.]

A wonderful idea, met with derision by just about everyone who understands the potential for discriminatory harm. That doesn’t mean that enhanced screening should be ruled out for the most sensitive roles and accesses, but sectioning your workforce into different mental health pools and attaching almost entirely prejudice-driven and irrelevant restrictions to each inexpertly benchmarked subset of staff…catch yourselves on. I discuss some aspects of a more reasonable physical security and insider threat perspective here, and below are some of the specific control tyres I would probably kick first.

  • Appropriately restricted access to buildings and systems to the least privilege needed to do the job.
  • 2+ eyes approval and frequent reviews for privileged god-like access to buildings and systems and something in line with the risk for the rest.
  • Minimising data kept and data transferred – Do you really need all of it? Do they?
  • Enforced password complexity and multi-factor authentication. Including policies to block browsers from storing credentials and to implement a corporate password safe, while at the same time loosening requirements to change credentials unless incidents highlight a need.
  • Limiting use of ports on desktops and laptops. Bearing in mind ways to do that are not all created equal and threat actors differ wildly in their ability to make their kit look like a permitted device, but all of this is about raising the bar not eliminating risk.
  • Database logging for activity as well as access.
  • Network/Gateway logging that can spot anomalous activity.
  • Non-negotiable mobile device encryption and remote tracking/deletion capability if used for work.
  • Creating a baseline for normal log output and benchmarks for anomalous and suspicious behaviour (a minimal sketch of this idea follows the list).
  • Segregating networks to increase control and limit incident impact.
  • Ensuring 3rd parties do the same where services warrant it.
  • Changing email culture so email is not used as a file store, data transfer solution, or way to send passwords for new accounts, encrypted attachments, and external media.
  • Locking down Office 365 – Learn how to find and configure security and privacy settings if you have to use MS Office /  Office 365.
  • Making staff aware that these controls and capabilities are in place, and explaining why they are there.
  • Logging ‘Disgruntled User’ and ‘Disgruntled IT Administrator’ as risks – These belong on your risk profile; if they are not already there, add them. This is where I feel Morrisons’ negligence lies. Accept that you have to manage those risks.
  • Incident scenarios that aren’t just about availability – Practice to understand what you can detect, how long that takes, how well you can investigate, and how you decide what to notify to the regulator and/or data subjects, and make sure you circle back to track the lessons learned.
  • Gardening leave for the duration of disciplinary hearings for staff with privileged access to your infrastructure. Ensure triggering that does not prejudice disciplinary outcomes.
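
On the log baselining point flagged in the list, below is a minimal sketch of the idea in Python. It assumes you can already extract a daily count of sensitive-record reads per account from database or gateway logs; the fourteen-day minimum history, the z-score threshold, and the account names are all invented for illustration, and a real deployment would need far more nuance (seasonality, role changes, shared accounts).

```python
# Illustrative anomaly flagging against a per-account activity baseline.
# Assumes daily counts of sensitive-record reads are already extracted
# from database or gateway logs; all thresholds here are assumptions.

from statistics import mean, stdev

def flag_anomalies(history: dict[str, list[int]],
                   today: dict[str, int],
                   z_threshold: float = 3.0) -> list[str]:
    """Return accounts whose activity today sits far outside their own baseline."""
    flagged = []
    for account, counts in history.items():
        if len(counts) < 14:  # not enough history for a meaningful baseline
            continue
        mu, sigma = mean(counts), stdev(counts)
        observed = today.get(account, 0)
        if sigma == 0:  # perfectly regular account: any increase stands out
            if observed > mu:
                flagged.append(account)
        elif (observed - mu) / sigma > z_threshold:
            flagged.append(account)
    return flagged

# An admin who normally touches ~40 records suddenly pulls 100,000.
history = {"it_admin": [38, 42, 40, 37, 45, 41, 39, 44, 40, 43, 38, 42, 41, 40]}
today = {"it_admin": 100_000}
print(flag_anomalies(history, today))  # -> ['it_admin']
```

Even something this crude would scream about an account that normally touches a few dozen HR records suddenly pulling a hundred thousand, which is exactly the kind of bar-raising, rather than risk-eliminating, control the list above is about.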

And if you are a small shop looking at this and thinking “Gee thanks, same old unachievable rubbish the big shots always recommend”, look again at the stuff that doesn’t need a tool, especially the access limitation and email / Office configuration, and consider how little work that would take to do well at a small scale. In that you have the far easier job. You know your staff, you trust your staff, you have direct relationships with your staff. That all reduces the residual risk of an insider breach and the work needed to mitigate it, but it doesn’t reduce the need to verify that people are doing the right thing. Trust but verify. Always.

In close protection everyone knows they can’t defend against every determined attacker. Especially someone with massive resources, lots of time, niche expertise, or total disregard for their own chance of survival. The same, in a less lethal context, is true of even the most diligent firm. But it is still your debt of care. The problems will all revolve around scaling that debt, and working out a fair price for that care. All in the interest of building a business case for control, means to challenge insurer defined premiums, and fostering the kind of data processing fairness and respect we all, if we’re honest, want to see.

Over time perhaps it’s more cost-effective to budget for breach notification (often the lion’s share of coverage) and reinvest the balance over predicted premiums? Everyone wins when it’s less likely you’re going to harm your staff or customers and more likely you won’t have to make a claim.

This example should be the next incident scenario you run. Especially if your incident response plan is any variation of shooting an IT scapegoat at dawn.

So that’s about it, except to say thank you for reading. Do take me up on that invitation to flag any inaccuracies or holes, and in the meantime, just like my lawyer friends, I will be watching this Morrisons liability space for that final verdict on compensation with ears pricked and bated breath.


Featured Image Credit: 123rf Stock Photo. Copyright: Vitaliy Vodolazskyy