Anyone with any knowledge of the goings-on in digital advertising, political campaign management, or (for that matter) military information operations, will have been utterly unsurprised by the news over the last few days. It’s Pandora’s box, it’s open, but it’s been open for a while.
- An overview of Facebook and Cambridge Analytica news
- An opinion on public and industry reactions
- A look at the wider social media and data broking world, featuring NationBuilder
- The Silicon web extending out behind key players in the current drama
- A brief look at future implications of all the data sharing, in the context of surveillance and what we like to call ‘AI’.
- Links to advice and guidance to reduce your social media and more general personal data footprint, now and in future.
Updated 29th March: following BuzzFeed’s publication of an internal Facebook memo that underlines points made here about data collection ethos, plus contracts and other documents published by The Guardian.
Updated 27th March: various details have been added following the three-hour emergency parliamentary committee session, including Palantir’s alleged involvement with Cambridge Analytica.
No harm, no foul? Nothing to hide, nothing to fear?
This is not a data breach, this is Facebook’s business model < currently the crux of the mainstream reporting. It slammed their share price (relatively speaking), prompted a mini exodus of business partners, REALLY upset some shareholders, and Zuck is under a very, very large microscope while in major damage limitation mode.
From a leaked Facebook memo. Image links to the 29th March BuzzFeed piece that first published it. This links to the BBC’s write-up, including Andrew Bosworth’s response claiming that he was just playing devil’s advocate to prompt internal ethical debate.
What’s going on is not a conspiracy, it’s not a network of Marvel villains, it’s just business. Lots and lots of mutually dependent money machines that feed, like the Monsters, Inc. power grid, on your laughter and screams.
Yes, Cambridge Analytica and all the other bodies using and abusing our virtual identities have many questions to answer about playing fast and loose with all the data they could grab, but you have to trace problems back to their source.
This is, quoting Sean Parker (Facebook’s founding president), an experiment. A teenage boy’s hypothesis that you can, with awe-inspiring ease, use computers to exploit human vulnerabilities and generate power and profit in the process. An experiment that broke out of the lab long before most of us saw a smartphone.
Here that is again, straight from the horse’s mouth:
Swathes of privacy activists and other ethically interested parties have been screaming into echo chambers for years about the data oligopolists and the utterly impossible privacy-defending task faced by the average Joe or Jane. Only a tiny handful have the means and inclination to delve into data sharing implications and fewer still scratch the surface of lip service paid to consent and clarity.
The inherent disdain for the average user implied by all of this almost beggars belief, but that is the trap. We need to extricate ourselves from the visceral disgust and move as quickly as possible on to the practical question of decisive sanctions for proven wrongdoing and better next steps.
Max Schrems, a 30-year-old Austrian privacy activist, started challenging Facebook in court before he finished his law degree. He secured victories and Facebook made small changes, but, as you would expect, it takes a life-changing amount of money, energy, and time to do that. He’s currently fighting an accusation that he’s now a ‘professional litigant’ and so unable to follow through with cases and appeals under existing data protection and human rights law.
No matter your opinion of Schrems you should be glad he’s out there and still up for a fight, because without him you wouldn’t even be aware of most of the stuff you object to.
The same goes for Snowden. The headline above takes you to my look at the role Snowden played in the context of Safe Harbor, the utterly inadequate EU-US data transfer and data protection agreement arguably destroyed by his revelations (probably best not to get onto my opinion of the equally self-certified and very selectively enforced Privacy Shield as its replacement). Whether you view him as hero or villain, we took a big step forward in grasping what the data collectors were capable of.
Compliance with the letter of the scant law has done little to help folk navigate the Gorgon’s maze of privacy settings, the giant web of data sharing relationships, and the unstoppable tide of technical innovation in the data monetisation space. All set against a backdrop of an attention-grabbing juggernaut, running down new means, via any feasible media, to fill the ever-widening maw of the data-guzzling beast.
Public and industry reactions vary wildly:
From a whole metric something-tonne of folk who will never care how online social things work, just that they currently do.
‘But they did nothing wrong!’
Mainly from folk with power or profit related skin in the game. For instance Facebook, Cambridge Analytica, other social media firms, social media scraping data brokers, marketing firms, adtech firms, political power brokers, human/signals/imagery/open source intelligence bodies and recipients of products or services from any of the above.
‘What the AF!?’
From the odd newly woke social media user who missed all the previous ‘If you’re not paying, you’re the product’ memos.
‘Meh’
Mainly from a cynical minority looking for the right emo angle on social media. Folk who allegedly stopped using Facebook aeons ago. Plus a subset of folk who tried REALLY hard to explain why to anyone who would listen and carried on trying for a VERY long while. Now exhausted by other people not paying attention and all out of outrage at abusive data shenanigans.
‘Yeah…’
Slightly different to the ‘Meh’ crew. These folks haven’t given up. They’re already poised with thorough knowledge of pertinent law, regulation, historical quotes, memes, and research papers. They have technical and non-technical plans in progress or ready to go, and they’re happy to ride the newly towering media wave.
‘YES! YES! SEEEE! I told you so!’
From folk desperate to make a difference who more or less worthily celebrate the validation and vindication this brings. Folk somewhere between ‘What the AF?!’ and ‘Yeah…’
Full disclosure: I’m a mild ‘Meh’ rehabilitated into a fairly perky ‘Yeah’ having exhausted myself early on doing the ‘Yes! Yes! SEEEE!’ (there’s only so much labelling as a tinfoil hat fan that a person can take).
With age and experience I’ve learned to shave the sneer off the ‘Meh’ and internalise the ‘I told you so’. I work to bring about and influence change even when we’re not in the middle of a data FUBAR. That’s the real job.
That’s in contrast to the darkest end of the ‘Meh’ continuum. A small subset of folk who think those who ‘stupidly’ shared data without understanding implications deserve to be punished. Other related views might be:
- Raising awareness of risks is a waste of time (the masses don’t care or are too dumb).
- Simple privacy and security solutions are pointless (e.g. two-factor authentication), unless they are practically perfect in every way (homage to Disney’s Mary Poppins, plus fun shell-related pun potential with ‘Poppins’).
- People should learn to code and RTFM instead of asking good basic questions. If not they deserve to have their credibility in other areas crushed by being crowd-trolled as a newb idiot.
Some of those folk might end up seeing and caring that my father was right: you get more mainstream attention, management time, and therefore ability to influence real change if you leave folk with their cab fare home…
…as opposed to backing out of a room wearing a triumphant FU face and repeatedly and violently jabbing a double bird in their direction. A perspective Alex Stamos shares from his unenviable position as Facebook’s ex-CISO, currently still there serving out notice after resigning in December.
But I digress. That’s frustration about one aspect of the solution. I’m putting the cart before the crazy horse. As you probably now realise:
It’s NOT just about Facebook and Cambridge Analytica
Let’s look around this world a little more, just to underline the fact that this was never just about Facebook. It’s linked to business as usual, or what passes for ‘usual’, in Silicon Valley: incredibly bright folk driven to design transformative tech, commercial giants keeping the wunderkind happily blinkered while they do the grubbier work of dominating and staying ahead in an industry sector, and the darker politicised corners, where people seek information to trade for influence, and do whatever it takes to get that done.
But not everyone has a bad agenda, and both social media and big data still have, and will continue to have, more potential to change the world for the better < Yep, absolutely, but what’s hit the fan here is pretty much unchecked abuse of lots of power.
Incidentally, Napster creator and Facebook veteran Sean Parker (the one featured in the video at the start), is also on the board of NationBuilder, a 3dna company. A business swimming in very similar waters to Cambridge Analytica. They provide solutions to help political and other campaigns collect and organise personal data in NationBuilder and associated 3rd party tools. They then provide means to effectively analyse and target individuals with ads and other comms. And where does the data go? Their American cloud (I tried to follow the link to their Privacy Shield certificate, but I think it was broken).
In 2012 NationBuilder got Series A VC funding of $6.25m from Andreessen Horowitz (known as “a16z”). Among the big firms in the Andreessen Horowitz portfolio: Facebook. Along with hundreds of other household names you would associate with having lots and lots and lots of your data.
Maybe have a quick look at the apps linked to your Facebook profile. One of them might be NationBuilder. If you’re not a registered user running your own campaign, business, or charity, you’ve signed up via Facebook somewhere along the way.
The Trump and Brexit Leave campaigns used NationBuilder solutions (the link is to Amberhawk’s blog looking at data collection and protection in both Brexit camps), so did Democratic candidates, remainers, and just about any other organisation who wants to keep track of prospects and supporters.
Here’s Jim Gilliam, CEO, writing on the NationBuilder blog in 2011 about the Scottish National Party’s use of its tools. It’s from before Brexit was even a twinkle in Mrs May’s eye, but I’m willing to bet they still have that data.
A website alone simply wouldn’t have done the job, says Mr Torrance: it’s like an island — the online community has to choose to go there. “What we have done through Facebook and Twitter is build an online distribution network, like a pyramid, with HQ at the top, and then party members, supporters, the public, all circulating information,”
Kirk Torrance, SNP’s then New Media Strategist, 2011
And here’s Toni Cowan-Brown from NationBuilder responding to Mark Zuckerberg’s statement on Thursday (starts at 2:48). She makes very reasonable and rational points about the error of Facebook’s ways and shared responsibility (between Facebook and us) to improve things…
…but all commercial consumers of Facebook data want to see Zuckerberg share liability. Liability he’s always ducked by casting Facebook as a conduit for content rather than a curator. A legal distinction historically applied to traditional telcos (here’s the Lawfare blog from 2017 examining that in more detail). A position that’s grown shaky for a number of social media magnates given the recent hate-related traffic moderation and bot pruning.
When all’s said and done NationBuilder is just software with a low-cost entry point (here’s a PCMag review where it’s clear it’s priced to expand use beyond early verticals), and an utterly reasonable business case for buying it. Plus, as far as I know, there’s no wrongdoing by design. But like anything leveraging our social media presence it comes with loads of ignored or unforeseen data collection question marks and ongoing data management and protection challenges. All opening the door to future data processing abuses in the wrong, or careless, hands.
But none of this even begins to compete with the granddaddy of data mining operations: Palantir (hat tip to Charles Stromeyer, investor and Harvard alum, for reminding me of the Thiel connections). Founded by Peter Thiel and his partners back in 2003. That’s just before Peter got involved as a key investor in Facebook (he bought a 10.2% stake, but sold much of it in 2012, though he’s still on the board), all after he founded PayPal (established in 1998 as Confinity).
Palantir has more government and intelligence service contracts than you can shake a stick at, and too many countries are making sure new regulations like the GDPR, and surveillance laws like the UK IPAct and the US Patriot Act leave chunky exceptions for too much government personal data collection and use, whether national security related or not.
This isn’t really about NationBuilder, Facebook, Cambridge Analytica, or even Palantir, it’s about the deathly interconnectivity of systems, the excessive concentration of data in too few very powerful hands, the ability to infinitely replicate our data, and failing to innovate means to protect at the same rate we have innovated means to exploit.
Incidentally, Marc Andreessen, one half of a16z, is a Silicon Valley legend. He was co-author of Mosaic, then co-founder of Netscape, and has his finger in more tech pies than you can imagine. He personally invested early in Facebook, but in Oct/Nov 2015 he sold about 73%, or $160m worth, of his personal shares over the course of 2 weeks.
If you feel like another little detour down the Silicon Valley family rabbit hole, Sean Parker was also heavily involved with the Founders Fund VC that Peter Thiel set up in 2005, though he exited in 2014. Founders Fund, unsurprisingly, was a major investor in Palantir, and Facebook, and Paypal. And lastly, just for fun, here’s a debate hosted by The Milken Institute between Marc A and Mr T (if you enjoyed the series Halt and Catch Fire, these guys are the real life deal…minus the women).
A big theme is the way regulation hobbles innovation. I’m not entirely un-inclined to agree because I’ve seen some horrifically poorly applied oversight kill off good progress, but I think we’d have significantly different opinions on the minimum controls you need around personal data, no matter how focused you are on failing fast.
Round and round and round it goes. Where it stops? Nobody knows
Summing that up: they all know each other, they are all immensely wealthy, they collectively wield enormous power, and almost all of that is down to data, our data.
Circling round again to the story of the day, the BIG legal questions are stacking up for Facebook, SCL, and Cambridge Analytica. Mainly around alleged UK and US political campaign improprieties, evidence tampering, and a number of potential data protection and human rights violations. Testimony and evidence also points us to Palantir and even, very allegedly and indirectly, Google involvement. Here’s something from the New York Times piece about the evolving response to that:
A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.
“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.
Why not check out the full testimony Chris Wylie gave on the morning of 27th March? It’s over 3 hours, but there’s hardly a dull moment. Then there are all of the documents submitted as evidence to the committee that are now available via the UK Parliament website. Carole Cadwalladr sums some of that up here (the pictured tweet links to the article):
There was no formal Palantir involvement as far as Chris knew, but in early 2013, Alexander Nix, the SCL director who became chief executive of Cambridge Analytica, and a Palantir executive discussed working together on election campaigns. Chris also reported direct communication he had with Alfredas Chmieliauskas (listed on LinkedIn as working in business development for Palantir). He states there were a number of senior Palantir staff on-site ‘helping’ in the SCL days. Folk who were very interested in the data they were hoping to acquire with help from Mr Kosinski. The data they later succeeded in acquiring with help from Dr Kogan and his Facebook survey. They also reportedly helped develop the psychographic and psychometric profiling capabilities under the codename ‘Big Daddy’.
Palantir said it had “never had a relationship with Cambridge Analytica, nor have we ever worked on any Cambridge Analytica data.” Later on Tuesday, Palantir revised its account, saying that Mr. Chmieliauskas was not acting on the company’s behalf when he advised Mr. Wylie on the Facebook data.
New York Times: ‘Peter Thiel Employee Helped Cambridge Analytica Before It Harvested Data’
Then there’s this:
Even after all of this data dealing intrigue, the thing arguably prompting the most widespread rancour is that barefaced Facebook lie. It’s hard to watch that 2009 BBC interview without feeling angry, either on your own behalf, or on behalf of the folk who decided to trust the message and pile ever more pieces of their personal life onto his platform.
The inherent disdain for those users implied by all of this almost beggars belief, but that is the trap. We need to extricate ourselves from the visceral disgust and move as quickly as possible on to the practical question of decisive sanctions for proven wrongdoing and better next steps.
This isn’t really about NationBuilder, Facebook, Cambridge Analytica, or even Palantir, it’s about the deathly interconnectivity of systems, the excessive concentration of data in too few very powerful hands, the ability to infinitely replicate our data, and failing to innovate means to protect at the same rate we have innovated means to exploit. Problems we need to collectively work to solve if we want fairness and transparency in the data wrangling wild west.
But nobody wants the internet to die, and without the business model this mess was built on, how do providers of services we like get paid? Do we subscribe? Do we agree a price for the data we share? Or have we been institutionalised to believe this only works if our data is put out of our control and sold, repeatedly, forever, to the highest bidder? Are we missing the fact there may be a new model to make this work? One folk would only turn attention to if we make it clear the current one is unacceptable, by issuing sanctions with teeth and voting with our feet.
Most recently that has been thrown into stark relief by the General Data Protection Regulation. The European regulation that puts some power back into private hands with requirements for transparency about planned data use, clarity about data sharing, legal rights for individuals to get answers from organisations, and sanctions that may actually act as a deterrent. That’s my current focus. Grafting for incremental ethical and procedural changes to tip those scales back in a more respectful direction, but I occasionally have reason to ask myself…
…does it even matter?
Easy to say, hard to explore. As Maslow said, most of us have a portion of attention, time, and money that isn’t currently being used just to survive. Attention, time, and money that will only get spent on products and services we hear about from people or places we trust. Products and services we are encouraged to desire, or decide we require. Advertisers are right: a portion of those acquisitions will be based on advertising concepts and copy we wouldn’t have seen if we’d opted out. The implication being that we are getting in the way of a necessary matchmaking service with our silly data protection rules.
Then there’s the fact that players in this data farming game are all incestuously linked. The chart above is from Thinknum Media (the numbers are millions), one of many, many firms analysing and monitoring what you do online every day. Looking for useful correlations between tracked online activities and activity logs slurped from your uniquely identified devices. Matching those identifiers back to rich Facebook data about you and your friends (because you logged onto another site with Facebook, or Liked/clicked a 3rd party site via your Facebook profile). All creating a devastatingly powerful and persistent picture of you and your network. The same is true for logging on with LinkedIn, Twitter, Google, etc.
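To make the identifier matching above concrete, here’s a tiny, hypothetical sketch of the join a broker can perform once a tracking cookie and a social login ID have been seen together. Every name, ID, and record here is invented for illustration; real brokers operate at vastly larger scale with many more identifier types.

```python
# Hypothetical sketch: joining an "anonymous" tracking log to rich social
# profile data via a shared identifier. All data below is invented.

# What a third party learns when you authenticate via a social login:
social_profiles = {
    "fb_1001": {"name": "Alex", "likes": ["butterflies", "hiking"],
                "friends": ["fb_1002", "fb_1003"]},
}

# What a tracker observes from the same browser, keyed only by a cookie:
tracking_log = [
    {"cookie": "c-9f3a", "url": "news.example/article", "ts": "2018-03-27T09:15"},
    {"cookie": "c-9f3a", "url": "shop.example/checkout", "ts": "2018-03-27T09:40"},
]

# The join table, created the moment the cookie and the social ID are seen
# together (clicking a Like button while logged in is enough):
id_graph = {"c-9f3a": "fb_1001"}

def enrich(events, id_graph, profiles):
    """Attach the social profile to every anonymous-looking event."""
    enriched = []
    for e in events:
        profile = profiles.get(id_graph.get(e["cookie"]), {})
        enriched.append({**e,
                         "identity": profile.get("name"),
                         "interests": profile.get("likes", []),
                         "network": profile.get("friends", [])})
    return enriched

for row in enrich(tracking_log, id_graph, social_profiles):
    print(row["url"], "->", row["identity"], row["interests"])
```

The point of the sketch is that neither dataset looks especially sensitive on its own; the persistent picture appears only once the identifiers are matched, and that match never needs your involvement or consent to happen.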
Correlation is not causation. E.g. if 90% of those identified as extreme right-wingers vs 30% identified as far left-wing indicated – via millions of recorded clicks, posts, tweets, comments, survey responses, questions to Alexa, purchases, and likes – that they love butterflies, it doesn’t mean that loving butterflies turns you into a Nazi. But, as Christopher Wylie explained when he outed activity at Cambridge Analytica, the unimaginable quantity of captured profiles and interactions does dramatically improve the design of campaign content and the effectiveness of targeting.
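The butterfly numbers above can be turned into a toy calculation. This is purely illustrative, with invented profiles, but it shows why a campaign doesn’t need causation: the measured correlation alone gives the targeting its lift.

```python
# Toy illustration of correlation-driven targeting. All numbers invented,
# mirroring the 90% vs 30% butterfly example in the text.

profiles = (
    [{"segment": "group_a", "loves_butterflies": True}] * 90
    + [{"segment": "group_a", "loves_butterflies": False}] * 10
    + [{"segment": "group_b", "loves_butterflies": True}] * 30
    + [{"segment": "group_b", "loves_butterflies": False}] * 70
)

def trait_rate(profiles, segment, trait):
    """Fraction of a segment exhibiting a trait."""
    seg = [p for p in profiles if p["segment"] == segment]
    return sum(p[trait] for p in seg) / len(seg)

rate_a = trait_rate(profiles, "group_a", "loves_butterflies")
rate_b = trait_rate(profiles, "group_b", "loves_butterflies")

# To a campaign the only thing that matters is the lift: butterfly content
# reaches group_a three times as often as group_b...
lift = rate_a / rate_b

# ...which says nothing about *why*. The correlation steers the targeting;
# it makes no claim that the trait causes the politics.
print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, lift: {lift:.1f}x")
```

Scale those 200 invented profiles up to tens of millions, with thousands of traits instead of one, and you get Wylie’s point: no single correlation proves anything, but collectively they make content design and audience selection dramatically more effective.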
That’s before you get into creating your own butterfly related content using lessons learned from natural interactions. Honing it to be the most right-winger seducing butterfly click-bait ever. That funnels a motivated audience, already in a fairly predictable mood, to the carefully positioned messages you really want to convey. They then share your butterfly content and your linked core content with their social circles (folk at the more extreme edges of belief systems tend to be far more easily engaged and provoked to act). That act of liking and sharing lends credibility in the eyes of people who share their beliefs, and a subset who judge credibility on the quantity vs quality of approvers. Rinse and repeat with ramifications rippling outwards forever.
Now replace ‘love butterflies’ with whatever you’ve seen trigger the biggest reactions in the real world.
Tangled world wide webs
That means we are far more embroiled in the web of interconnected data collectors and processors than most of us will ever know. As shown in the detail of the Cambridge Analytica story, and as explained in more technical detail by Johnny Ryan (Head of Ecosystem at @PageFair, writer and digital historian, and author of ‘A History of the Internet and the Digital Future’).
That’s followed by Pat Walshe (@privacymatters), a hugely skilled data protection professional with global experience, who very graphically illustrates how exercising your privacy rights with WhatsApp is very much easier said than done.
But the fact is, no matter your discomfort, this flow of data is a necessity, IF it is under balanced control with proper consideration of longer-term and wider-reaching implications.
Hobbling the free flow of data in a poorly thought out or imbalanced way can create barriers to entry for new organisations and dangerously limit individual rights. The data the big hitters have isn’t going to get deleted any time soon. The playing field isn’t level and it can’t be until another far more privacy protected and personally protective generation make their way online.
In the meantime we still need to enable fair competition, activism, education, social interaction, dissemination of health and welfare information, and ongoing equal access to the incredible stores of collective online knowledge. Fabulous things, necessary things, life-saving things that we can struggle to see and hear over the deafening and algorithmically prioritised noise created by hyper-wealthy social and political influencers, commodity-rich individuals and nations, giant retailers, and their social media friends. The sponsored version of their truth tailored to suit a virtual impression of us.
If we don’t get it right that data oligopoly will continue to bleed, as it has done in recent years, into the rest of the market and other aspects of our lives. Risking a self-fulfilling and insoluble imbalance. A largely unregulatable symbiosis of ‘too big to fail’ individuals and organisations.
But while we’re talking about oligopolies and unchecked power, we must also keep an eye on old money and old money’s mates in governments and global markets. There are very real reasons for the commodity brokers to demonise tech, and it’s mainly old money that pulls mainstream media strings, so we should all take a breath and check our bias. Not forgetting the fact that traditional corporations have shedloads of our data too, either directly, or via organisations employed to protect their interests. And they are all spending an incredible amount of money trying to be Facebook, in terms of data collection, analysis, and targeting. The power there is in hands you never hear of, folk who have a permanent pass to wander down government halls, unlike the tech giants we currently love to hate.
So, how much and how immediate is the harm likely from Facebook data shenanigans vs the harm caused, for example, by subsidising fossil fuels to put cleaner energy out of the mainstream retail race, or selling debt until the markets die?
Through yet another lens, why would the government want to see Facebook and friends fail? It is too rich a vein of intelligence, accessible without unobtainable amounts of public investment and without jumping through pesky accountable oversight hoops. But that doesn’t mean they’d turn down better access to that data and more legal/regulatory leverage (telco neutrality is bothersome, data protection law is a downer, and encryption really is a giant pain in the arse).
It’s not so much heroes and villains as many shades of big money dominance grey.
Facilitating an introduction and a social, commercial, public service, or security related exchange is social media’s functional raison d’être. We understand that is fundamentally dependent on firms having our data, and that dependence creates potential for more balance, except most of the big boys are now publicly traded. As is typical, shareholder tails begin to furiously wag the business dog. That produces a pretty facile and short-termist numbers game (users and clicks, users and clicks) that no-one can rely on to produce an ethical response.
How do we unring this bell? How do we put this horse back in the stable from whence it bolted quite a long time ago? How do we do that transparently and under some semblance of sustainable mutual control?
“Why are you following me Alexa?”
“All the better to get to know you my dear”
Back to the question asked some moments ago: does all this really matter? The answer, in my completely honest opinion, is yes, or I wouldn’t be wasting my time writing this hoofing great tome of a post.
I believe this has marked another small upswing in general awareness of our privacy risks. An incremental increase in willingness to pay attention. It has also increased the vulnerability of the big data collection corporations to impactful criticism, and it’s done so 2 months before the EU begins enforcing a globally applicable legal instrument that leaves space to meaningfully take that criticism to court. But, and we have to be realistic here, it won’t change the business model any time soon. That will take a market entrant who manages to make privacy and security by design desirable in its own competitive-advantage-creating right. A feature, not an overhead. All wrapped in an attention-grabbing proposition with sufficient power to change the paradigm.
No amount of headlines, sanctions, and fines will make that happen. Not even if this does represent the beginning of Facebook’s twilight commercial years. Any relative downswing in Facebook’s fortunes will be more than balanced by an upswing in data acquisition operations elsewhere. Right now my bet is on the smart home based IoT, where we’re just a tiny hop skip and jump from Alexa growing legs.
Are algorithms leading to AI, is AI really the antichrist, and what does my data really let folk do?
As everyone is aware, big hitters like Elon Musk, Bill Gates, and the late Stephen Hawking caution that seeds being sown now will push computers permanently and catastrophically out of our control. Time is running out, they shout, to rein in the worst excesses of innovation for innovation’s sake. The lifeblood of that process? Data. Mainly personal data.
This is a nihilistic perspective that belongs squarely in the tinfoil hat camp…or does it?
Algorithms are becoming more complex and powerful all the time, but we are still a long way from anything we could call intelligence. All the current hoohah about Cambridge Analytica focuses heavily on whether the analytics performed on the naughtily-acquired squillions of American Facebookers, and the targeting done with the results, actually caused any material change to the result of a certain US election.
The truth is, even with a looooong history of relevant psychological research done by both government and the marketing industry, no-one can be quantifiably and causally certain. Not least because of the sheer scale and depth of available data in this case, and the overwhelming array of other calls on an individual’s attention during time online. Here’s a take on that from Pamela Rutledge, Ph.D., M.B.A., Director of the Media Psychology Research Center.
There’s also a reality check to chuck in as regards how capable and intelligent AI supposedly is. I’m not concerned that computers will skip from calculating to emoting any time soon. I’m concerned about the fallible building blocks we’re giving computers, both in terms of developer bias and flawed inputs. But most of all I’m concerned about the ethics of the vested interests paying and pushing for results.
Then there’s quantum computing – technology that, arguably, AI cannot fully exist and thrive without – which is getting far closer to broad commercial viability, but is still, in many cases, bafflingly slow.
So I choose to believe we have time and motivation to do the right thing, despite having massive commercial forces driving towards more use of these emotionally unintelligent and physiologically ignorant machines. Machines that will be equipped to make very rational judgements about the usefulness and efficiency of human designed things, including other humans.
A tough leap of faith when you witness this kind of exchange online. Developers so up to their necks in the commercial imperative, or personal excitement about progress that they genuinely seem to be missing the implication wood for the innovation trees:
We have to decide if we are going to develop algorithms so that humans can understand exactly how decisions are made. If we do that, it is going to be far more time consuming, complex, and costly. If we accept that we will soon lose the ability to unpick the precise steps that lead to a decision, and instead retain some higher-level control, the performance gains will be immense.
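The tradeoff that exchange describes can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration: the applicant, the weights, and the threshold are all invented, and the ‘black box’ is just a stand-in for a model whose internals callers cannot inspect.

```python
# Minimal sketch of interpretable vs opaque decision-making.
# All names, weights, and numbers below are invented for illustration.

applicant = {"years_employed": 4, "late_payments": 1, "social_score": 0.62}

# Interpretable: explicit weights, and an audit trail of every contribution.
WEIGHTS = {"years_employed": 0.5, "late_payments": -1.0, "social_score": 2.0}

def transparent_decision(person):
    """Return the decision AND the per-factor breakdown behind it."""
    contributions = {k: WEIGHTS[k] * person[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score > 1.0, contributions  # every step can be unpicked and challenged

# Opaque: imagine millions of learned parameters in here; callers only
# ever see the final verdict, with no breakdown to contest.
def black_box_decision(person):
    ...  # stands in for an uninspectable learned model
    return True

approved, audit = transparent_decision(applicant)
print(approved, audit)
```

With the transparent version, a person refused a loan (or a job, or entry to a country) can see exactly which factor tipped the score and argue with it; with the opaque version there is nothing to argue with, which is precisely the worry voiced in the paragraphs that follow.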
I don’t know about you, but that scares the [insert preferred expressive term] out of me. Those folk will likely tell me I don’t ‘get’ data science and I don’t ‘get’ what algorithmic analysis and decision making actually means in the current digital context. They’ll tell me that it is naive and simplistic to want to reverse engineer the decisions about my employability, mental health, immigration status, likelihood to commit a crime. No doubt I am missing some nuanced technical reality here, but as I said in a 2015 article:
Even though non-profit bodies like The Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute, are working hard on everyone’s behalf to keep up, I don’t think it’s enough.
The god of money has big guns. We need to make it mandatory for governments and commercial ventures to finance effective knowledge sharing with accountable overseers. We need to give bodies (like those above) teeth and a seat at the top table.
I’m not arguing innovators should lose their intellectual property, but are we comfortable that those with everything to gain from developments are viewing the implications in the round? If there are concerns voiced from within, will they make it past their boards? That certainly hasn’t historically been the case with pleas for proper consideration of privacy and security in software development and IT change.
Someone without a vested interest must have the ability to apply statutory brakes, or a means to inform lawmakers and risk owners, so that ethical understanding and controls can keep up.
Sure, I have an enormous amount of extra reading to do before I stand toe to toe and argue the toss with these folk, but the urgency of this conversation, and the possibility that they’re unintentionally setting something in motion we won’t be able to roll back, make it time I’m willing to invest. Time people like Zeynep Tufekci have already invested (I thoroughly agree with this take on her book):
Data gathering, surveillance and human rights:
Taking a step back and looking at the broader implications of big data collection and analysis (thinking about the IPAct and Palantir again), this is the abstract of an article by Paul Bernal (a Senior Lecturer at the University of East Anglia specialising in privacy and human rights), published in the Journal of Cyber Policy on 16th September 2016. It’s very much worth a read, as is Paul’s deliciously irreverent blog.
Then there’s Mo Gawdat, former Chief Business Officer of Google X, Google’s far-future innovation arm. He’s issued a challenge I can get behind: spread some potentially exponential good cheer. But I’m more interested in why he decided to dedicate the rest of his life to this mission. Why does a man who worked at the bleeding edge of tech make such a fundamental change? It’s only 2 minutes, so why not watch and see.
So this is the stuff that’s usually filed under “stop finding things to worry about” – the stuff friends and family roll their eyes at. But it seems, based on the current furore, that far more folk will rightly be sparing far more time to at least think twice.
Some practical stuff
Which apps are connected to your social media accounts? Check and prune. From Reader’s Digest (yep, Reader’s Digest), a good, user-friendly guide.
Facebook tips: If you’re not ready to delete Facebook (no-one is saying you should), here’s some practical stuff to help you grasp data sharing implications and take back a bit of control.
If you still just want to get rid of Facebook:
More general social media privacy settings: From Europol
Google / Gmail privacy settings. An incredible amount of tracking starts with Gmail and Google. Know the settings you can change to minimise the data you share, and delete your search, voice, and location histories. I like this guide from BT.
Use a search engine and browser that don’t let Google and everyone else track you: Startpage, DuckDuckGo, the Tor Browser. There are lots of alternatives, but at a minimum don’t sign in to the Chrome browser with your Google account. You don’t need to, no matter how often they tell you to. The link is to a comparison between DuckDuckGo and Startpage.
Install an ad blocker. Some sites will complain if you install an ad blocker alongside your usual antivirus protection, but it’s the only way to see many of the third parties that implant targeted ads in websites and track you. There’s also malware that can arrive disguised as ads – in particular crypto miners, which hijack your computer’s processing power to mine cryptocurrency like Bitcoin for someone else. The link is to Privacy Badger from the Electronic Frontier Foundation. It gives you the option to selectively disable ads and access to your browsing and device info. Nothing is 100% effective, but in my honest opinion the transparency helps everyone.
For Android phone privacy, as far as that’s possible: for initial set-up, create a throwaway Gmail account with made-up details that you lock down hard and don’t use for anything else.
Some of the other advice in the linked Lifehacker article is more involved and depends on your risk appetite, e.g. not using fingerprints to lock your phone. The author is right: law enforcers can compel you to provide a fingerprint in the same way they can force you to submit to a physical search, once they meet whatever benchmark for probable cause applies in a given situation. BUT, for me at least, I balance that risk against the fact that I don’t get out much, and I use my password safe more because I don’t have to type a huge password on a tiny keyboard. You can also change these settings if you get into something that demands more privacy, before you travel abroad, or do your more private stuff on another phone with tougher settings.
Email security (how much of your life is stored and organised there?), to avoid unwittingly signing up for, or getting infected by, things you don’t want: from Troy Hunt, Microsoft Regional Director and MVP, and creator of HaveIBeenPwned. I thoroughly recommend signing up to that free service.
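For the more technically inclined: HaveIBeenPwned also publishes a free Pwned Passwords API that lets you check whether a password has appeared in a known breach without ever sending the password itself. It uses a k-anonymity scheme: you SHA-1 hash the password locally, send only the first 5 hex characters, and the service returns every breached hash suffix starting with that prefix for you to match locally. Here’s a minimal Python sketch of that scheme (function names are my own; the endpoint and response format are as publicly documented by the service):

```python
import hashlib
from urllib.request import urlopen


def hash_split(password: str) -> tuple:
    """SHA-1 the password locally, split the hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_range(range_body: str, suffix: str) -> int:
    """Scan the 'SUFFIX:COUNT' lines returned for a prefix; 0 means no known breach."""
    for line in range_body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0


def pwned_count(password: str) -> int:
    prefix, suffix = hash_split(password)
    # Only the 5-character hash prefix ever leaves your machine (k-anonymity).
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        return count_in_range(resp.read().decode("utf-8"), suffix)
```

The point of the design is that the service never learns which password (or even which full hash) you were checking – a nice example of privacy-respecting engineering from the same corner of the industry this whole post is grumbling about.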