
Friday, 23 Mar 2018

Facebook and Cambridge Analytica – It’s Pandora’s box, it’s open, but it’s been open for a while


Anyone with any knowledge of the goings-on in digital advertising, political campaign management, or (for that matter) military information operations will have been utterly unsurprised by the news over the last few days. It’s Pandora’s box, it’s open, but it’s been open for a while.

Includes:

  • An overview of Facebook and Cambridge Analytica news
  • An opinion on public and industry reactions
  • A look at the wider social media and data broking world, featuring NationBuilder, and China’s social credit machine.
  • The Silicon web extending out behind key players in the current drama
  • A brief look at future implications of all the data sharing in the context of surveillance and what we like to call ‘AI’.
  • Links to advice and guidance to reduce your social media and more general personal data footprint, now and in future.

Updated 18th October 2019: Following a Google exec admitting that people should probably warn visitors that their Nest is going to record them (links to privacy settings for Nest and Echo added at the very end), and Mark Zuckerberg trying to teach us all about free expression and the vital role that Facebook has to play.

Updated 9th September 2019: Following the Laurence Dodds investigation into political campaign services from NationBuilder in The Telegraph

Updated 26th June 2018: Added references to UK Government sharing student data and the Chinese social credit system.

Updated 29th March 2018: Following Buzzfeed publishing an internal Facebook memo underlining points here about data collection ethos.

Updated 27th March 2018: Following Chris Wylie’s 3-hour emergency evidence session with the UK Parliament’s DCMS Committee.


No harm, no foul? Nothing to hide, nothing to fear?

This is not a data breach, this is Facebook’s business model < currently the crux of the mainstream reporting. It slammed their share price (relatively speaking), prompted a mini exodus of business partners, REALLY upset some shareholders, and Zuck is under a very, very large microscope while in major damage limitation mode, BUT many commentators refuse to believe it will result in any medium-term impact.

From a leaked Facebook memo. Image links to the 29th March Buzzfeed piece that first published it. This links to the BBC’s write-up, including Andrew Bosworth’s response claiming that he was just playing devil’s advocate to prompt internal ethical debate.

Most recently Zuck stood up for free expression, or at least his version of the role Facebook played in helping us to express ourselves and the role he intends to play in future. A performance that set cats squarely amongst privacy pigeons, especially when he sold Facebook’s early days, when he was still fighting backlash about its ‘hot or not’ predecessor, as a tool to give people a voice to protest the Iraq War. Cunningly timed to coincide with the release of Official Secrets, a film about that same dark period in recent history.

What’s going on here is not a conspiracy, it’s not a network of Marvel villains, it’s just business. Lots and lots of mutually dependent money machines that feed, like the Monsters, Inc. power grid, on your laughter and screams.

Yes, Cambridge Analytica and all the other bodies using and abusing our virtual identities have many questions to answer about playing fast and loose with all the data they could grab, but you have to trace problems back to their source.

This was, to paraphrase Sean Parker (Facebook’s founding president), an experiment. A teenage boy’s hypothesis that you can, with awe-inspiring ease, use computers to exploit human vulnerabilities and generate power and profit in the process. An experiment that broke out of the lab and exploded the scale of traceable online activity long before most of us saw a smartphone.

Here that is again, straight from the horse’s mouth

Swathes of privacy activists and other ethically interested parties have been screaming into echo chambers for years about the data oligopolists and the utterly impossible privacy-defending task faced by the average Joe or Jane. Only a tiny handful have the means and inclination to delve into data sharing implications, and fewer still scratch the surface of lip service paid to consent and clarity.


The inherent disdain for the average user implied by all of this almost beggars belief, but that is the trap. We need to extricate ourselves from the visceral disgust and move as quickly as possible on to the practical question of decisive sanctions for proven wrongdoing and better next steps.


Max Schrems, a 30-year-old Austrian privacy activist, started challenging Facebook in court before he finished his law degree. He secured victories and Facebook made small changes, but, as you would expect, it takes a life-changing amount of money, energy, and time to do that. He’s currently fighting the accusation that he’s now a professional litigant and therefore unable to pursue cases and appeals under existing data protection and human rights law.

No matter your opinion of Schrems you should be glad he’s out there and still up for a fight, because without him you wouldn’t even be aware of a bunch of stuff you object to.

The same goes for Snowden. The headline above takes you to my look at Safe Harbor and data security. Safe Harbor being the inadequate EU:US data transfer and protection agreement that Europe relied upon to legally send your data stateside (probably best not to get onto my opinion of the equally self-certified and very selectively enforced replacement).

Whether you view Snowden as hero or villain, he helped us take a big step forward in grasping what the data collectors were capable of. In this context he and Schrems lead directly to Safe Harbor ending and the GDPR taking its final shape. After all, what Snowden leaked wouldn’t have been feasible without the ever expanding data oceans and global digital rivers feeding them.

Compliance with the letter of the scant law has done little to help folk navigate the Gorgon’s maze of privacy settings, the giant web of data sharing relationships, and the unstoppable tide of technical innovation in the data monetisation space. All set against a backdrop of an attention grabbing juggernaut, running down new means, via any feasible media, to fill the ever-widening maw of the data beasts.

Public and industry reactions vary wildly:

  • Crickets

From a whole metric something-tonne of folk who will never care how online social things work, just that they currently do.

  • ‘But they did nothing wrong!’

Mainly from folk with power or profit related skin in the game. For instance Facebook, Cambridge Analytica, other social media firms, social media scraping data brokers, marketing firms, adtech firms, political power brokers, human/signals/imagery/open source intelligence bodies, and recipients of products or services from any of the above.

  • ‘What the AF!?’

From folk who still, quite reasonably, don’t expect to be taken advantage of by businesses they deal with on or offline, so they missed all the previous ‘If you’re not paying, you’re the product’ memos.

  • ‘Meh’

Mainly from a cynical minority looking for the right emo angle on social media. Folk who allegedly stopped using Facebook aeons ago. Plus another subset of folk who tried REALLY hard to explain why they shunned Facebook to anyone who would listen and carried on trying for a VERY long while. Folk now exhausted by other people not paying attention and all out of outrage at abusive data shenanigans.

  • ‘Yeah. And?’

Slightly different to the ‘Meh’ crew. These folks haven’t given up. They’re already poised with thorough knowledge of pertinent law, regulation, historical quotes, memes, and research papers. They have technical and non-technical plans in progress or ready to go to make some incremental gains, and they’re happy to ride the newly towering media wave.

  • ‘YES! YES! SEEEE! I told you so!’

From folk desperate to make a difference who more or less worthily celebrate the validation and vindication this brings. Folk somewhere between ‘What the AF?!’ and ‘Yeah…’


Full disclosure: I’m a mild ‘Meh’ rehabilitated into a fairly perky ‘Yeah’ having exhausted myself early on doing the ‘Yes! Yes! SEEEE!’ (being looked at like a paranoid loony rather wore me down).

With age and experience I’ve learned to shave the sneer off the ‘Meh’ and internalise the ‘I told you so’. Working to bring about and influence change even when we’re not in the middle of a data FUBAR. That’s the real job.

That’s in contrast to the darkest end of the ‘Meh’ continuum. A small subset of folk who think those who ‘stupidly’ shared data without understanding implications deserve to be punished. Other related views might be:

  • Raising awareness of risks is a waste of time (the masses don’t care or are too dumb).
  • Simple privacy and security solutions are pointless (e.g. 2-factor authentication), unless they are practically perfect in every way (homage to Disney’s Mary, plus fun shell-related pun potential with Poppins).
  • People should learn to code and RTFM instead of asking good basic questions. If not they deserve to have their credibility in other areas crushed by being crowd-trolled as a newb idiot.

Some of those folk might end up seeing and caring that my father was right: you get more mainstream attention, management time, and therefore ability to influence real change if you leave folk with their cab fare home…

…as opposed to backing out of a room wearing a triumphant FU face and repeatedly and violently jabbing a double bird in their direction. A perspective Alex Stamos shares from his unenviable position as Facebook’s outgoing security chief, at the time of writing still there serving out his notice after resigning in December.
But I digress. That’s frustration about one aspect of both the problem and solution. I’m putting the cart before the crazy horse. As you probably now realise:

Of course it’s not just about Facebook and Cambridge Analytica


Let’s look around this world a little more, just to underline the fact that this was never just about Facebook. It’s linked to business as usual, or what passes as ‘usual’, in Silicon Valley. Incredibly bright folk driven to design transformative tech. Commercial sponsors keeping wunderkinds’ eyes on a siloed prize, while doing the grubbier work of dominating and staying ahead in an industry sector. The darker politicised corners where people seek information to trade for influence. Doing whatever it takes to get that done.

But not everyone has a bad agenda. Both social media and big data still have, and will continue to have, huge potential to change the world for the better < Yep, absolutely, but what’s hit the fan here is pretty much unchecked abuse of lots of power.

Incidentally, Napster creator and Facebook veteran Sean Parker (the one featured in the video at the start) is also on the board of NationBuilder, a 3DNA company. A business swimming in very similar waters to Cambridge Analytica. They provide solutions to help political and other campaigns collect and organise personal data in NationBuilder and associated 3rd party tools. They then provide the means to effectively analyse and target individuals with ads and other comms. And where does the data go? Their American cloud (I tried to follow the link to their Privacy Shield certificate, but I think it was broken).
In 2012 NationBuilder got Series A VC funding of $6.25m from Andreessen Horowitz (known as “a16z”). Among the big firms in the Andreessen Horowitz portfolio: Facebook. Along with hundreds of other household names you would associate with having lots and lots and lots of your data.

Maybe have a quick look at the apps linked to your Facebook profile. One of them might be NationBuilder. If you’re not a registered user running a campaign, business, or charity, then you’ve signed up via Facebook somewhere along the way.

The Trump and Brexit Leave campaigns used NationBuilder solutions (the link is to Amberhawk’s blog looking at data collection and protection in both Brexit camps), as did Democratic candidates, remainers, and just about any other organisation that wants to keep track of prospects and supporters.

Here’s Jim Gilliam, CEO, writing on the NationBuilder blog in 2011 about the Scottish National Party’s use of its tools. That was back before Brexit was even a twinkle in Mrs May’s eye, but I’m willing to bet they still have that data.


A website alone simply wouldn’t have done the job, says Mr Torrance: it’s like an island — the online community has to choose to go there. “What we have done through Facebook and Twitter is build a online distribution network, like a pyramid, with HQ at the top, and then party members, supporters, the public, all circulating information,”

Kirk Torrance, SNP’s then New Media Strategist, 2011


And here’s Toni Cowan-Brown from NationBuilder responding to Mark Zuckerberg’s statement on Thursday (starts at 2:48). She makes very reasonable and rational points about the error of Facebook’s ways and shared responsibility (between Facebook and us) to improve things…

All commercial consumers of Facebook data, including NationBuilder, want to see Zuckerberg share liability. Liability he’s always ducked by casting Facebook as a conduit for content vs a curator. A legal distinction historically applied to traditional telcos (here’s the Lawfare blog from 2017 examining that in more detail). A position that’s grown shaky for a number of social media magnates given the recent hate-related content moderation and bot pruning.

When all’s said and done NationBuilder is just another digital data processing and analysis tool. Software with a low-cost entry point (here’s a PCMag review where it’s clear it’s priced to expand use beyond early verticals), and an utterly reasonable business case for buying it. Plus, as far as I can see, there’s no wrongdoing by design. But like anything leveraging our social media presence it comes with loads of ignored or unforeseen data collection question marks and ongoing data management and data protection challenges. All opening the door to future data processing abuses in the wrong, or careless, hands.

Here’s the UK ICO with guidance on what it is and isn’t ok for campaigns to do with personal data (includes a link onwards to a more detailed PDF), but it doesn’t address more general issues with third party practice. Questions about data collection and handling by firms like NationBuilder and their numerous specialist third parties. That swings back to more general data protection requirements. Handily enough for NationBuilder, it’s up to those running a campaign to ensure that data processed for their specified purposes was obtained, retained, and shared in line with properly managed data subject expectations, but NationBuilder are not absolved of all liability. They remain directly liable for poor practice when carrying out their processing role, using data for secondary purposes not specified in their contract, or handling any data they were not explicitly asked to gather – but the chances of that being spotted and stopped without proper campaign due diligence are very slim.

That leaves one heck of a margin for error, as this thread and the accompanying article from investigative Telegraph reporter Laurence Dodds graphically highlight.
But none of this even begins to compete with the granddaddy of data mining operations: Palantir. It was founded by Peter Thiel and his partners back in 2003. That’s just before Peter got involved as a key investor in Facebook (he bought a 10.2% stake, but sold much of it in 2012, though he’s still on the board). All of which happened after he founded PayPal (established in 1998 as Confinity).

Palantir has more government and intelligence service contracts than you can shake a stick at, and too many countries are making sure new regulations like the GDPR, and surveillance laws like the UK IPAct, leave chunky exceptions for too much personal data collection, retention, sharing, and use (because immigration, because national and economic security…or not).
Another example of government data handed off to third parties (check out the work being done in the UK by Defend Digital Me for more on this).

The UK’s National Pupil Database, over 20 million records, including flags associated with mental health and social welfare (stuff that can label individuals indefinitely), is collected and shared by the government with public and private sector bodies. In a significant proportion of cases parents had no idea that was happening, or had no idea about the short-, medium-, and long-term implications. So choices, where parents or children of an appropriate age had one, were uninformed.

As far as most parents and carers knew there were anonymous statistics shared to help the government with policy formulation, yes/no boxes to tick about pictures and names being shared in brochures, on websites, or on social media, and some other ‘stuff’ to enable the school to, well, school. It wasn’t, to their knowledge, about sending bulk identifiable data sets to the tax office, giving payment system fingerprints to cloud vendors, or giving access to the immigration service, criminal justice department, or private ‘educational consultants’.

How does that measure up against your risk appetite? Not your belief in the need for national security and statistics to support social welfare and education strategy. Your tolerance for not being told about it, your inability to make a choice on their behalf, having no chance to confirm data was minimised and protected, and that the right oversight was in place for sharing and further re-use. Including a far from unimaginable leap to using data to hone analysis of characteristics that might indicate future criminal or antisocial behaviour.
On that last point, China has a stonking head start (the image above links to that article; this link is to a three-year-old BBC article on earlyish steps).

They have a massive and expanding implementation of ‘social credit’ scores used to ‘motivate’ ‘better’ behaviour by citizens (if this was a face-to-face chat I’d probably have air-quote-provoked RSI before finishing the conversation). That includes things like preferential pricing for ‘good’ citizens and limiting access to goods and services (e.g. broadband, travel, jobs, and government aid) for those who have been ‘bad’.
Who is benchmarking what a ‘good citizen’ looks like? How consistent is that? What recourse do people have to challenge it? Are overseers able to unpick the cause of a mistake, or a harmful but apparently valid algorithmic conclusion? Who might be benchmarking ‘good’ in future? What are their motivations for doing all this? Who else will this be shared with? What are their motivations? How far will manipulation go before the means stop being justified by the political, financial, religious, or other ideological ends?

Alarmist? Nope. In the absence of adequate checks and balances societies will find a level that best protects the political and financial interests of the people most invested in it, or the people with most influence over them. If that shifts too far from the least-bad tipping point for health, social, and economic wellbeing, and opposing influencers can’t financially outgun or otherwise mitigate the excesses of those in control, it falls back onto elected representatives and the rule of law to adjust the balance. But, when those vested interests and balancing bodies get too incestuous, accountability will begin to break down. Then ALL bets are off.

But back to the Silicon web…


This isn’t really about NationBuilder, Facebook, Cambridge Analytica, or even Palantir, it’s about the deathly interconnectivity of systems, the excessive concentration of data in too few very powerful hands, the ability to infinitely replicate our data, and failing to innovate means to protect at the same rate we have innovated means to exploit.


Incidentally, Marc Andreessen, one half of a16z, is a Silicon Valley legend. He was co-author of Mosaic, then co-founder of Netscape, and has his finger in more tech pies than you can imagine. He personally invested early in Facebook, but in Oct/Nov 2015 he sold about 73% of his personal shares, worth around $160m, over the course of 2 weeks.

If you feel like another little detour down the Silicon Valley family rabbit hole, Sean Parker was also heavily involved with the Founders Fund VC that Peter Thiel set up in 2005, though he exited in 2014. Founders Fund, unsurprisingly, was a major investor in Palantir, and Facebook, and PayPal. And lastly, just for fun, here’s a debate hosted by The Milken Institute between Marc Andreessen and Mr Thiel (if you enjoyed the series Halt and Catch Fire, these guys are the real-life deal…minus the women).

A big theme is the way regulation hobbles innovation. In some cases I’m not entirely disinclined to agree. I’ve seen unforgivable disruption caused in the name of audit, regulation, or governance, but that’s when it’s done badly. The perception that it’s bad comes from the opportunity cost, to a business versus an individual, of NOT getting it right and not having effective oversight. I can pretty much guarantee there would be a fundamental difference of opinion about the checks and minimum controls necessary.

Round and round and round it goes. Where it stops? Nobody knows


Summing that up: they all know each other, they are all immensely wealthy, they collectively wield enormous power, and almost all of that is down to data, our data.


Circling round again to the story of the day, the BIG legal questions are stacking up for Facebook, SCL, and Cambridge Analytica. Mainly around alleged UK and US political campaign improprieties, evidence tampering, and a number of potential data protection and human rights violations. Testimony and evidence also point us to Palantir and even (very allegedly and indirectly) Google involvement. Here’s something from the New York Times piece on that:


A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.


Why not check out the full testimony Chris Wylie gave on the morning of 27th March? It’s over 3 hours, but there’s hardly a dull moment. Then there are all of the documents submitted as evidence to the committee that are now available via the UK Parliament website. Carole Cadwalladr sums some of that up here (the pictured tweet links to the article):
There was no formal Palantir involvement as far as Chris knew, but in early 2013 Alexander Nix (the SCL director who became chief executive of Cambridge Analytica) and a Palantir executive discussed working together on election campaigns. Chris also reported direct communication he had with Alfredas Chmieliauskas (listed on LinkedIn as in business development for Palantir). He states there were a number of senior Palantir staff on-site ‘helping’ in the SCL days. Folk very interested in the data they were hoping to acquire with help from Mr Kosinski. The data they later succeeded in acquiring with help from Dr Kogan and his Facebook survey. They also reportedly helped develop the psychographic and psychometric profiling capabilities under the codename ‘Big Daddy’.


Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.

New York Times: ‘Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data’ 


Then there’s this:
https://www.youtube.com/watch?v=2GuHVZx4OwU
Even after all of this data dealing intrigue, the thing arguably prompting the most widespread rancour is that barefaced Facebook lie. It’s hard to watch that 2009 BBC interview without feeling angry, either on your own behalf, or on behalf of folk who decided to trust the message and pile ever more pieces of their personal life onto his platform.

The inherent disdain for those users implied by all of this almost beggars belief, but that is the trap. We need to extricate ourselves from the visceral disgust and move as quickly as possible on to the practical question of decisive sanctions for proven wrongdoing and better next steps.

This isn’t really about NationBuilder, Facebook, Cambridge Analytica, or even Palantir, it’s about the deathly interconnectivity of systems, the excessive concentration of data in too few very powerful hands, the ability to infinitely replicate our data, all while failing to innovate means to protect it at the same rate we have innovated means to exploit. Problems we need to collectively work to solve if we want fairness and transparency in the data wrangling wild west.

But nobody wants the internet to die, and without the business model this mess was built on, how do providers of services we like get paid? Do we subscribe? Do we agree a price for the data we share? Or have we been institutionalised to believe this only works if our data is put out of our control and sold, repeatedly, forever? Are we missing the fact there may be a new model to make this work, one folk would only turn attention to if we make it clear the current one is unacceptable by issuing sanctions with teeth and voting with our feet?
GDPR Article 5 Principles
The image links to searchable GDPR text
Most recently that has been thrown into stark relief by the General Data Protection Regulation. The European regulation that puts some power back into private hands with requirements for transparency about planned data use, clarity about data sharing, legal rights for individuals to get answers from organisations, and sanctions that may actually act as a deterrent. That’s my current focus. Grafting for incremental ethical and procedural changes to tip those scales back in a more respectful direction, but I occasionally have reason to ask myself…

…does it even matter?

Easy to say, hard to explore. As Maslow said, most of us have a portion of attention, time, and money that isn’t currently being used to survive. Attention, time, and money that will only get spent on products and services we hear about. Products and services we are encouraged to desire, or decide we require.

Advertisers are right, a portion of those purchases will be based on advertising concepts and copy we wouldn’t have seen if we’d been given a chance to opt out. The implication being that we are getting in the way of a necessary matchmaking service with our silly data protection rules.
Then there’s the fact that players in this data farming game are all incestuously linked. The chart above is from Thinknum Media (the numbers are millions), one of many, many firms analysing and monitoring what you do online every day. Looking for useful correlations between tracked online activities and activity logs slurped from your uniquely identified devices. Matching those identifiers back to rich Facebook data about you and your friends (because you logged onto another site with Facebook, or Liked/clicked a 3rd party site via your Facebook profile). Even if you didn’t go anywhere near Facebook, ubiquitous tracking will auction browsing snippets to the highest bidders, and a sufficient subset of those buyers have access to enough publicly available social media content to put it all together without you having to help. All adding up to a devastatingly powerful and persistent picture of you and your network.
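To make that identifier matching a little more concrete, here’s a minimal sketch of identity stitching. Every site, ID, and field name in it is made up, and real brokers use far messier probabilistic matching at vastly bigger scale, but the principle holds: any two records that share any identifier get merged into one profile, so a single Facebook login on one site can quietly tie a name to years of ‘anonymous’ browsing on others.

```python
# Toy identity stitching: all sites, IDs, and field names are hypothetical.
from collections import defaultdict

records = [
    {"cookie": "c1", "site": "news-site.example"},
    {"cookie": "c1", "device": "d9", "site": "shop.example"},
    {"device": "d9", "email_hash": "e7f3", "site": "app-login.example"},
    {"email_hash": "e7f3", "fb_profile": "jane.doe", "site": "fb-login.example"},
    {"cookie": "c42", "site": "unrelated.example"},  # shares nothing, stays separate
]

parent = {}

def find(x):
    # Union-find with path compression over identifier strings
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for i, rec in enumerate(records):
    ids = [f"{k}:{v}" for k, v in rec.items() if k != "site"]
    for a, b in zip(ids, ids[1:]):
        union(a, b)            # identifiers seen together belong to one person
    union(f"rec:{i}", ids[0])  # tie the record itself to its first identifier

profiles = defaultdict(list)
for i, rec in enumerate(records):
    profiles[find(f"rec:{i}")].append(rec["site"])

for sites in profiles.values():
    print("one person visited:", sites)
# -> the first four records collapse into a single profile; only c42 stays separate
```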

Correlation is not causation, but correlation is enough

Correlation is not causation. E.g. if 90% of those identified as extreme right-wingers vs 30% of those identified as far left-wing indicated – via millions of recorded clicks, posts, tweets, comments, survey responses, questions to Alexa, purchases, and likes – that they love butterflies, it doesn’t mean that loving butterflies has a high probability of turning you into a Nazi. But, as Christopher Wylie explained when he outed activity at Cambridge Analytica, the unimaginable quantity of captured information and data about interactions does dramatically improve the design and effectiveness of increasingly precise micro-targeting.
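To put rough numbers on the butterfly example (all of them invented, including the base rates), here’s a back-of-envelope Bayes sketch. The point is that a signal with zero causal meaning still shifts the odds enough to be worth money to a targeter:

```python
# Back-of-envelope Bayes on the made-up butterfly numbers above.
p_butterfly_given_right = 0.90   # 90% of 'extreme right-wingers' love butterflies
p_butterfly_given_left = 0.30    # 30% of 'far left-wingers' do
p_butterfly_given_other = 0.50   # assume everyone else is 50/50 (invented)

p_right, p_left, p_other = 0.05, 0.05, 0.90  # assumed base rates (invented)

# Total probability that a random person loves butterflies
p_butterfly = (p_butterfly_given_right * p_right
               + p_butterfly_given_left * p_left
               + p_butterfly_given_other * p_other)

# Bayes' rule: how likely is a butterfly lover to be an extreme right-winger?
p_right_given_butterfly = p_butterfly_given_right * p_right / p_butterfly

print(f"P(right-winger)                   = {p_right:.3f}")
print(f"P(right-winger | butterfly lover) = {p_right_given_butterfly:.3f}")
# ~0.088 vs 0.050: roughly a 1.8x lift from one innocuous, non-causal signal.
```

None of that proves butterflies cause anything. It just prices the signal, and stacking thousands of weak signals like it is exactly what makes the targeting sharp.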

That’s before you get into creating your own butterfly related content using lessons learned from natural interactions. Honing it to be the most right-winger seducing butterfly click-bait ever. That funnels a motivated audience, already in a fairly predictable mood, to the carefully positioned messages you really want to convey.

They then share your butterfly content and your linked core content with their social circles (folk at the more extreme edges of belief systems tend to be far more easily engaged and provoked to act). That act of liking and sharing lends credibility in the eyes of people who share their beliefs, and a subset who judge credibility on the quantity vs quality of approvers. Rinse and repeat with ramifications rippling outwards forever.

Now replace ‘love butterflies’ with whatever you’ve seen trigger the biggest reactions in the real world.

Tangled world wide webs

That means we are far more embroiled in the web of interconnected data collectors and processors than most of us will ever know. As shown in the detail of the Cambridge Analytica story, and as explained in more technical detail by Johnny Ryan (writer, digital historian, and author of ‘A History of the Internet and the Digital Future’).
Then followed by Pat Walshe (@privacymatters), a hugely skilled data protection professional with global experience. He very graphically illustrates how exercising your privacy rights with WhatsApp is very much easier said than done.
But the fact is, no matter your discomfort, some of this is a necessity, IF it’s under balanced control with proper consideration of longer-term and wider-reaching implications.

Hobbling the free flow of data in a poorly thought out or imbalanced way can create barriers to entry for new organisations and dangerously limit individual rights. The data held by the big hitters isn’t going to get deleted any time soon. The playing field isn’t level, and it can’t be levelled until another, more privacy-protected and personally protective, generation is given the means to assert their rights.

In the meantime we still need to enable fair competition, activism, education, social interaction, dissemination of health and welfare information, and ongoing equal access to the incredible stores of collective online knowledge. Fabulous things, necessary things, life-saving things that we can struggle to see and hear over the deafening and algorithmically prioritised noise created by hyper wealthy social and political influencers, plus all the other giant retailers, commodity rich players, and their social media friends. The sponsored version of their truth tailored to suit a virtual impression of us.

If we don’t get it right that data oligopoly will continue to bleed, as it has done in recent years, into the rest of the market and other aspects of our lives. Risking a self-fulfilling and insoluble imbalance. A largely unregulatable symbiosis of ‘too big to fail’ individuals and organisations.

But while we’re talking about oligopolies and unchecked power, we must also keep an eye on old money and old money’s mates in governments and global markets. There are very real reasons for the commodity brokers to demonise tech and it’s mainly old money that pulls mainstream media strings, so we should all take a breath and check our bias. Not forgetting the fact that traditional corporations have shedloads of our data too, either directly, or via organisations employed to protect their interests. And they are all spending an incredible amount of money trying to be Facebook, in terms of customer engagement, data collection, analysis, and targeting. The power there is in hands you never hear of, folk who have a permanent pass to wander down government halls, unlike the tech giants we currently love to hate.

So, how serious and how immediate is the risk of harm? How likely is measurable fallout from Facebook data shenanigans vs harm caused…for example…by subsidising fossil fuels to put cleaner energy out of the mainstream retail race, or selling debt until the markets die?

Through yet another lens, why would the government want to see Facebook and friends fail? It’s too rich a vein of intelligence, accessible without unobtainable amounts of public investment and pesky accountable oversight hoops. But that doesn’t mean they’d turn down better access to that data and more legal/regulatory leverage (telco neutrality is bothersome, data protection law is a downer, and encryption really is a giant pain in the arse).

It’s not so much heroes and villains as many shades of big money dominance grey.
Facilitating an introduction and social, commercial, public service, or security related exchange is social media’s functional raison d’être. We understand that is fundamentally dependent on firms having our data and that dependence creates potential for more balance, except most of the big boys are now publicly traded. As is typical, shareholder tails begin to furiously wag the business dog. That produces a pretty facile and short-termist numbers game (users and clicks, users and clicks) that no-one can rely on to cement an ethical response.

How do we unring this bell? How do we put this horse back in the stable from whence it bolted quite a long time ago? How do we do that transparently and under some semblance of sustainable mutual control?


“Why are you following me Alexa?”

“All the better to get to know you my dear”


Back to the question asked some moments ago: does all this really matter? The answer, in my completely honest opinion, is yes, or I wouldn’t be wasting my time writing this hoofing great tome of a post.

I believe this has marked another small upswing in general awareness of our privacy risks. An incremental increase in willingness to pay attention. It has also increased the vulnerability of the big data collection corporations to impactful criticism, and it’s done so two months before the EU begins enforcing a globally applicable legal instrument that leaves space to meaningfully take that criticism to court. But, and we have to be realistic here, it won’t change the business model any time soon. That will take market entrants who manage to make privacy and security by design desirable in their own competitive-advantage-creating right. A feature, not an overhead. As well as enforcement that proves it won’t give the giant firms a free pass. An attention grabbing proposition with sufficient power to change the paradigm.

No amount of headlines will make that happen. Not even if this does represent the beginning of Facebook’s twilight commercial years. Any relative downswing in Facebook’s fortunes will be more than balanced by an upswing in data acquisition operations elsewhere. Right now my bet is on smart-home IoT, where we’re just a tiny hop, skip, and jump from Alexa growing legs.

Are algorithms leading to AI, is AI really the antichrist, and what does my data really let folk do?

As everyone is aware, folk with very deep pockets and more than a passing acquaintance with tech caution that seeds being sown now will push computers permanently and catastrophically out of our control. Time is running out, they shout, to rein in the worst excesses of innovation for innovation’s sake. The lifeblood of that process? Data. Mainly personal data.

This is a nihilistic perspective that belongs squarely in the tinfoil hat camp…or does it?
Algorithms are becoming more complex and powerful all the time, but we are still a long way away from anything we could call intelligence. All the current hoohah about Cambridge Analytica focuses heavily on whether the analytics performed on the naughtily-acquired squillions of American Facebookers, and the targeting done with the results, actually caused any material change to the result of a certain US election.

The truth is, even with a looooong history of relevant psychological research done by both the government and marketing industry, no-one can be quantifiably and causally certain. Not least because of the sheer scale and depth of available data in this case and the overwhelming array of other calls on an individual’s attention during time online. Here’s a take on that from Pamela Rutledge, Ph.D., M.B.A., and Director of the Media Psychology Research Center.

There’s also a reality check to chuck in regarding the capability and intelligence of what we currently call ‘AI’. I’m not concerned that computers will skip from calculating to emoting any time soon, I’m concerned about the fallible building blocks we’re giving computers, both in terms of developer bias and flawed inputs. But most of all I’m concerned about the ethics of the vested interests paying and pushing for results.

Then there’s quantum computing – technology that AI arguably cannot exist and thrive without – which is getting far closer to broad commercial viability, but it’s still, in many cases, bafflingly slow.

So I choose to believe we have time and motivation to do the right thing, despite having massive commercial forces driving towards more use of these emotionally unintelligent and physiologically ignorant machines. Machines that will be equipped to make very rational judgements about the usefulness and efficiency of human designed things, including other humans.

A tough leap of faith when you witness this kind of exchange online. Developers so up to their necks in the commercial imperative, or personal excitement about progress, that they genuinely seem to be missing the implication wood for the innovation trees:


We have to decide if we are going to develop algorithms so that humans can understand exactly how decisions are made. If we do that it is going to be far more time consuming, complex, and costly. If we accept that we will soon lose the ability to unpick the precise steps that lead to a decision, and instead retain some higher level control, the performance gains will be immense.


I don’t know about you, but that scares the [insert preferred expressive term] out of me. Those folk will likely tell me I don’t ‘get’ data science and I don’t ‘get’ what algorithmic analysis and decision making actually means in the current digital context. They’ll tell me that it is naive and simplistic to want to reverse engineer the decisions about my employability, mental health, immigration status, self-driving transport safety, life-supporting medical device stability, or likelihood to commit a crime. No doubt I am missing some nuanced technical reality here, but as I said in a 2015 article:


Even though non-profit bodies like The Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute, are working hard on everyone’s behalf to keep up, I don’t think it’s enough.

The god of money has big guns. We need to make it mandatory for governments and commercial ventures to finance effective knowledge sharing with accountable overseers. We need to give bodies (like those above) teeth and a seat at the top table.

I’m not arguing innovators should lose their intellectual property, but are we comfortable that those with everything to gain from developments are viewing implications in the round? If there are concerns voiced from within, will they make it past their boards? That certainly hasn’t historically been the case with pleas for proper consideration of privacy and security in software development and IT change.

Someone without a vested interest must have the ability to apply statutory brakes, or a means to inform lawmakers and risk owners, so ethical understanding and controls can keep up.
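To make the trade-off in that developer exchange a bit more concrete, here’s a deliberately silly sketch (entirely synthetic, no real lending or scoring system implied) of the difference between a decision you can read back step by step to the person affected, and one where only a score falls out:

```python
# Toy contrast (illustrative only) between a decision you can unpick and one
# you can't. The "black box" is a stand-in for any model whose internal
# reasoning is impractical to trace, not any real algorithm.

def interpretable_decision(applicant):
    """Every step can be read back to the person affected."""
    trace = []
    if applicant["missed_payments"] > 2:
        trace.append("more than 2 missed payments -> decline")
        return "decline", trace
    trace.append("missed payments OK")
    if applicant["income"] < 20_000:
        trace.append("income under 20,000 -> refer to a human")
        return "refer", trace
    trace.append("income OK -> approve")
    return "approve", trace

def black_box_decision(applicant):
    """Stand-in for an opaque model: a score comes out, the 'why' does not."""
    score = 0.0
    for key, value in sorted(applicant.items()):
        # Entangled, meaningless feature interactions with no readable steps.
        # (Python even salts hash() per run, so this 'model' can answer
        # differently between runs, which is fitting for a black box.)
        score += (hash((key, value)) % 1000) / 1000.0
    return "approve" if score / len(applicant) > 0.5 else "decline"

applicant = {"missed_payments": 1, "income": 18_500, "postcode": "AB1"}

decision, trace = interpretable_decision(applicant)
print("interpretable:", decision)
for step in trace:
    print("  because:", step)

print("black box:", black_box_decision(applicant), "(no 'because' available)")
```

The second function isn’t intelligent, it’s just untraceable, and that’s the worry: ‘higher level control’ without any way to unpick an individual decision.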


Sure, I have more reading to do to get the finer points of contributory tech and economics, but they also have a way to go to understand what it takes to inject accountability and governance into a coherent and sustainable risk based framework. The urgency of this conversation, the possibility we’re unintentionally setting something in motion we won’t be able to roll back, makes it time worth investing. Time people like Zeynep Tufekci have already invested (I thoroughly agree with this take on her book):

Data gathering, surveillance and human rights:

Taking a step back and looking at the broader implications of big data collection and analysis (thinking about the IPAct and Palantir again), this is the abstract of an article by Paul Bernal (Senior Lecturer in privacy and human rights at the University of East Anglia) published in the Journal of Cyber Policy on 16th September 2016. It’s very much worth a read, as is Paul’s deliciously irreverent blog.

Since this was first published there have been ripples spreading inwards too. Ethical objections, voiced by key tech firm insiders, to work that can harm individuals, and multiplying mea culpas and regretful noises from ex-oligopolists.


From the letter sent by Microsoft staff – read the whole thing here

Nicole Wong – Former Legal Director at Twitter and former Deputy General Counsel at Google:


“We need to stop talking about this as a “breach” or a “leak” or a “TOS violation” and start thinking about it as a global supply chain problem in desperate need of human rights standards and diligence. #privacyrights are #humanrights”

Twitter (@nicolewong), 17th March 2018, in response to New York Times coverage of the Cambridge Analytica story


So it does seem (as stoically stated by the newly minted statue of Millicent Fawcett in Parliament Square) that courage, like hope, is contagious.
First Female Suffragist Millicent Fawcett Statue Unveiled In Parliament Square
Which needs to be compared and contrasted with the arguably complicit leniency of the EU Parliament in accepting Mark Zuckerberg’s May testimony. Allowing Facebook to take advantage of a widely criticised question format (asking everything up front, so Facebook responses could be delivered as a pre-prepared monologue). Responses that differed little from the generalist answers given previously to the US House Commerce Committee. Then there was his refusal, to date (despite being threatened with a summons that would kick in if he ever set foot on UK soil again), to meet in person with the UK Parliament or UK Information Commissioner.

That’s against the backdrop of an ongoing ICO investigation into Facebook, Cambridge Analytica, and up to 30 other organisations. Organisations like the Canadian firm AggregateIQ, an alleged SCL subsidiary, which was employed to perform campaign data analysis for Brexit leave groups implicated in alleged data misuse and electoral conduct failings.

A few more spaces to keep a close eye on.
All in all, and at great length, that constitutes stuff most would file under “Stop finding things to worry about”. The stuff friends and family roll their eyes at. But it does seem, based on the ongoing furore, far more folk will rightly be sparing far more time to at least think twice.


Some practical stuff

Which apps are connected to your social media accounts? Check and prune. From Reader’s Digest (yep, Reader’s Digest), a good user-friendly guide.

Facebook tips: If you’re not ready to delete Facebook (no-one is saying you have to), here’s some practical stuff to help you grasp data sharing implications and take back a bit of control. If you still just want to get rid of Facebook:
https://twitter.com/CNET/status/978134453531783168?s=20
More general social media privacy settings: From Europol

Google / Gmail privacy settings: An incredible amount of tracking starts with Gmail and Google. Know the settings you can change to minimise data shared, and delete search, voice, and location histories. I like the linked guide from BT.

Use a better browser and search engine that don’t let Google and everyone else track you: Brave, Startpage, DuckDuckGo, the Tor browser. There are lots of alternatives, but at a minimum don’t log on with your Gmail account to use the Chrome browser. You don’t need to, no matter how much they tell you to. The link is to a comparison between DuckDuckGo and Startpage.

Install an ad blocker: There are a bunch of sites that might complain if you install an ad blocker to go with your usual antivirus protection for browsing, but it’s the only way to see the host of third parties that implant targeted ads in websites and track you. You can then selectively permit things that make a positive difference to your browsing if you are happy to do so. There’s also malware that can arrive disguised as ads. In particular crypto miners that hijack the power of your computer to help someone else mine cryptocurrency like Bitcoin. The link is to Privacy Badger from the Electronic Frontier Foundation. It gives you the option to selectively disable ads and access to your browsing and device info. Nothing is 100% effective, but in my honest opinion, the transparency helps everyone.

For Android phone privacy, as far as that’s possible: For initial set-up create a throwaway Gmail account with made-up details that you lock down hard and don’t use for anything else.

Log onto social media sites via a browser wherever possible. It’s the apps that scoop up your location data and other social media usage info. Some of the other advice in the linked Lifehacker article is a bit more involved and depends on your risk appetite. E.g. not using fingerprints to lock your phone. The author is right, law enforcers can compel you to provide a fingerprint in the same way they can force you to submit to a physical search when they meet whatever benchmark for probable cause applies in a given situation. BUT, for me at least, I balance that risk against the fact I don’t get out much, and I use my password safe more because I don’t have to type a huge password on a tiny keyboard. You can also change these settings if you get into something that demands more privacy, before you travel abroad, or do more private stuff on another phone with tougher settings.

Email security (how much of your life is stored/organised here?): Avoid unwittingly signing up to, or getting infected by, things you don’t want. Something from Troy Hunt, Microsoft Regional Director and MVP, and creator of HaveIBeenPwned. I thoroughly recommend signing up to that free service.

Amazon Echo privacy settings: This is changing almost daily now the trust pendulum has swung so far away from default trust in these kinds of devices. The linked guide is pretty good at helping to minimise data gathered and retained, and this one from CNET covers similar ground.

Nest privacy settings: That’s Google’s own guide to Nest privacy settings. It focuses first on that question of eavesdropping on guests, then gives links to FAQs and your settings. Much of the other Google-related advice above will be relevant here too, as you are forced to have a Google account to make it work. This gives more perspective on the post-acquisition journey to full assimilation into the Google machine.
