
Opinion: Paying to play with our personal data – is it ok?

We’ve migrated from ‘Hot or Not?’ to being held virtually hostage by many of the digital platforms we rely on today. In the midst of that, a new processing paradigm has emerged: myriad startups want to pay to play with your personal data. Can this tackle ongoing issues with transparency, fairness, and data processing choice, and the more serious risks to human rights? And is there an even more damaging disconnect in the way we think about our digital selves?

These are the sections I’ve broken it down into. Some restate the position in different ways, in an attempt to find mental hooks for different people, and it’s at least a 10 minute read (you can’t now say you weren’t warned):

  • Personal data is indivisible from the people it describes
  • Have we got a hosting hostage crisis?
  • Can it ever be a fair and free exchange?
  • No-one owns personal data
  • What’s the problem with paying to play with data?
  • The new data brokers
  • Who will guard the guards?

The hot or not reference was prompted by Zuck waxing lyrical to Georgetown students about Facebook’s origin story. I’m not in the ‘Facebook is the embodiment of evil’ camp, but trying to pass the site off as a way to bond over objection to the Iraq war (cunningly timed to coincide with the release of the Official Secrets movie) was taking things a tad far. Especially since he was still fighting college sanctions for privacy violations around Facebook’s coed hotness rating predecessor.

Which neatly leads me on to the crux of this post: a deeply personal effort to make the following assertion feel real for as many people as possible, because I feel failure to do so is at the root of most of the pain. We can’t explore a new model for personal data use without acknowledging the nature of pre-existing weaknesses in business models and governance foundations.

Personal data is indivisible from the people it describes

Tough to sell when folk were just beginning to banter in MSN Groups over dialup, but we are way past the point now where the inherent value and interconnectedness of our aggregate selves can be in any doubt.

This isn’t aimed at the mom and pop shop, charity, or local school. This is aimed at the data oligopolists and their dependent supply chain. Those in both the public and private sector who drive and shape the model for the international data trade.

Cyber criminals and brokers know the value of a nicely packaged personal profile. The richer and more faithful the connection to your physical self, the higher the price that’s paid. Thousands of interested parties bid in real time for data points generated every time you browse. Social media influencers bank billions from monetising personally branded memes. Every start-up will have a per-user calculation behind its VC valuation, but we are not the cattle or chattel that implies.

We, as private individuals, have rights that attach to our person, both the physical entity and the extension of ourselves embodied online.

Then, on the other side of the value equation, there’s the impact on you and me. Folk will endlessly argue that we get value for our data sharing effort and attention: the capability to socially connect, the shopping convenience, the absorbing games, the research and development of more than just the next smart speaker, the undeniable ability to have a voice and find information on a global scale. But at what cost? Initially asked quietly in specialist corners. Drowned out in the mainstream by real excitement and marketing-generated noise.

I’ve sometimes struggled to make the concept of indivisibility resonate, because so many view personal data as a thing. Something it’s ok for others to own in return for access to their app, or as a free-for-all resource if intentionally or unintentionally posted somewhere public. Something we are selfish not to give away, without question, when folk wave the flags of national security and greater good.

This post outs those as dangerous fallacies, specifically in the context of my assertion that personal data cannot be separated, by any concept of ownership, from the physical and digital you. Below is my first attempt to make that feel real. I’m trying to channel the way AOC builds a challenge (love or loathe her, it’s very effective):

Her provocation came in a now deathly busy thread (Mark has been required to testify before Congress again), followed by my proposed response (just my side of the conversation):


Do you agree that people can’t be separated from the personal data they share?

Really? Anonymous, you say. Let’s put a pin in that one.


If not, why do you insist on the use of a real name?

That may be, but does the quality of identity verification and the quantity of real-life data you can link, affect how much advertisers and other data consumers will pay?

I see. So you do, in fact, agree.

Following on, do you agree that manipulation of personal data has potential to cause real harm?

You’re not sure? Ok, let’s flip this around. Do you agree that manipulation of personal data i.e. analysis to better target content, gathering data on reactions, then adding output to data sets for sale, has potential to impact what people will think and do?

Still not sure? Really? We should let your advertisers and political customers know.

So we’re not 100% sure how, or 100% sure how much, but we are 100% sure that impact happens. Just less willing to talk numbers for the inevitable negative side of that same coin.

Fine. Taking this a step further, how can you defend making people lend you their identities under a contract that few demonstrably understand? Almost more to the point, can you defend selling those identities to thousands of 3rd parties that most have never heard of, for uses that few expect?

You object to that characterisation? You say they get more than their fair share of compensation for the use you and 3rd / 4th / 5th /…../ 100th parties put them to? Especially if I allow for the greater societal good your revenue helps you do?

On that we might have to agree to disagree.

If you will indulge this perspective a little further: When will they have worked off their free service debt? If they do, will you voluntarily set them free? Can you defend this indentured labour on the basis of a per individual exchange, or the value an individual adds over time to the aggregate data whole? Do we adjust for the absence of communication options other than the platforms you own (and continue to acquire) and how hard you make it to leave?

You’ve historically absolved yourself of responsibility for the negative side of enabling individual and aggregate manipulation, a side with genocidal extremes. Shouldn’t that come into the fair exchange equation too?


In many ways, failure to see us in the data is a key international divide. Europe has a long history of actively treating privacy as a human right and focussing on real life harm. So has the US (contrary to popular belief), but in most recent American history data has generally been viewed as property, or product, owned by the firm that buys, finds, or farms it. The fuel for an AI revolution, the new oil, or the new toxic waste (an imperfect counter to that oil daftness). Then we move, at the other end of the data handling spectrum, to places such as China and North Korea, where the community vs the individual dominates social planning and those in power have a rabid appetite for data to drive compliance with the preferred shape of their centrally defined greater goods.

Wired, 7 June 2019, by Nicole Kobie

Across all models, in many organisations, you, as the data source, are of value to the extent that you can be milked for more, nudged to refresh a fuzzy mockery of consent, pointed at clients’ products, ads, and political messages, or squeezed to yield more contacts to whom they can do the same.

But the yields of targeted marketing – the original baseline justification for gathering ever greater detail by ever more fitness tracking, smart speaking, home securing, Bluetooth beaconing, virtually assisting, face recognising, location tracking, DNA sequencing means – are increasingly hard to sell as a justification to offset long-term personal, societal, political, environmental, and very basic financial costs.

So why the heck is this steamroller still allowed to drive at F1 speed over our privacy and digital autonomy?

Have we got a hosting hostage crisis?

If you follow my assertion that identifiable data is indivisible from the rest of our being, then there is no other conclusion to draw.

Historically we have been given little or no choice about purposes beyond the initial superficial exchange, e.g. you get to play my game, and we get to learn from the data you give us and what you do here, to help us improve the service and do some advertising. Suggesting that 10 seconds spent clicking through terms and conditions gives firms the right to use us in all the less obvious ways, without limit or pause, is utterly wrong.

We may be given the impression we can leave whenever we choose, but the reality is different. When data is dropped into massive buckets and reused for complex analysis, there is often no ability to strip it away. Either it technically risks breaking databases and linked dependent systems, or it has been shared down a supply chain until it is out of sight, out of mind, and out of control. Do you think Facebook knew the full list of 3rd parties that had access to copies of your and your friends’ data when news of Cambridge Analytica broke? I’m willing to bet my bottom dollar they didn’t. Did they know whether parties they dealt with were acting in line with their recently beefed up terms? Of course not! That isn’t specific to Facebook. Most barely pay attention to vendors on which they depend, let alone the data processing behaviour of all their downstream clients.

This is a significant point, one of the key pillars of the problem, and it is why we need to rethink control over the flow of our identities through the digital data supply chain.

But it’s more than that: incredible effort is also put into making our demand for service less elastic, less divisible from the fabric of individual and cultural habits that define the cadence of all of our days. Making it less likely that we will want, or feel able, to walk away.

We wake up and see reports on the quality of our sleep. Our devices log our morning run (the steps, heart rate, route) and capture images of us along the way. We log the contents of our breakfast, then ask our Echo about the weather; in the process she picks up a snippet of chat, a plan to get the morning-after pill (you inexplicably threw up yesterday and, after asking Alexa whether that can really impact your oral contraception, decided it was better to be safe than sorry today). That added a tasty reproductive snippet to your digital profile. It impacts the kind of real-time ads you see from winners of the behind-the-scenes data auction. While browsing you see vitamins here, life insurance there, but only subconsciously register the change. At least you aren’t in a country where such contraception is outlawed, where that kind of content gets flagged under state orders to safeguard foetal rights.

Next you do your online shopping. More data points are added to your supermarket loyalty lake (and oh my goodness is that a rich body of data), to your bank’s transaction database, and to any third parties they share with to help you track and visualise spending, or to help them manage their risks. Google gets a copy too, because it analyses your confirmation email (and all of your other emails) to add to the list it keeps of purchases made. Then there are those ubiquitous browsing brokers, who bet on the value of this intelligence and place ads wherever they think you’ll see them throughout your browsing day.

Then you log onto Facebook. Or rather you don’t log onto Facebook, you log onto eBay with your Facebook ID. A tsunami of cross-site broking and tracking activity is triggered. Data shared across sites as far as the eye can see.

Unpicking any substantial portion of that is painful. I’ve had a go. Not all of it: I’m still subject to a huge weight of tracking that I swallow for convenience and speed, or because they defeat even my tolerance for hunting down ways to leave. But I have worked to get rid of Facebook, Instagram, and Google (for my phone, the main bulk of email, and browsing). I spent 30 minutes individually unticking boxes next to eBay third party trackers and cookies, but couldn’t face searching round the sites of the 30 or so who didn’t even have a tick box, many of which weren’t even in English. It made me really angry. Why had they made it so monumentally hard for me to exercise my legal rights? (Of course I know why, but that feeling of disempowerment, of being disrespected, of being viewed as worthless, never quite goes away.)

I got rid of the proprietary apps and log on to social media accounts using a browser (Brave, with Startpage as my search engine), and I have paid to switch my domain to encrypted mail. But Google doesn’t much like that combo. I’m told I have to allow cookies in order to turn off whole rafts of the tracking. It’s that same feeling of being told I just have to suck it up, no matter what. I’m also increasingly buying second-hand electrical goods, because so much now comes integrated with ‘AI’. I nearly caved on smart lighting and thermostat control because I’m a sucker for warmth and interior design, but I get gladder all the time that I didn’t. This news about Nest was just the latest reason why.

My compromises are not perfect. I’ve given up some stuff I miss and I cause myself headaches (the tech does add value and fun, and reduces some friction when working), but to me what I gain isn’t worth the data I would give away (there are a bunch of links to advice about minimising your online footprint right at the end of this long read).

Most people’s appetite for data sharing is nowhere near the same. Most just don’t fret, and that’s fine, as long as they’ve been given the means to make an informed choice along the way. Or, as we should all increasingly conclude, informed choice sometimes isn’t feasible, even when it’s the best legal basis according to one of the new laws. Especially with devices bundled with internet connections, but no screens. Then we should be stopping and fundamentally redesigning those data-dependent transactions.

Think about 23andMe with their consumer DNA testing. People had the chance to opt out of 3rd party reuse of their genome and the data collected via supplementary questionnaires, but when news of the real nature of that reuse (in the context of the deal with GlaxoSmithKline) broke, it turned out many had missed the option, or felt fundamentally misled about implications and intent.

The lightning-fast viral spread of other privacy abuse news, stuff some toxic folk will say you should have known about, underlines that most people DON’T know what is being done, the risks attached to how it’s done, or all the third parties involved. I can’t think of anyone of my acquaintance outside the trade who would have grasped the concept of trackers and Real Time Bidding from any privacy notice they may have decided to read.
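
To make that last point concrete, here is a toy sketch of what a Real Time Bidding exchange does behind a single page load. It is illustrative pseudocode under invented names (BidRequest, run_auction, and the example bidders are all hypothetical, not any real exchange’s API), but it captures the one fact a privacy notice rarely manages to convey: your profile snippet is broadcast to every connected bidder, win or lose.

```python
# Toy sketch of Real Time Bidding (RTB). All names and numbers are invented
# for illustration; no real exchange exposes this exact API.
from dataclasses import dataclass, field

@dataclass
class BidRequest:
    """The profile snippet broadcast to bidders before an ad slot is filled."""
    user_id: str    # persistent cookie or device identifier
    url: str        # the page being read right now
    location: str   # geolocation, often finer-grained than people expect
    interests: list = field(default_factory=list)  # inferred audience segments

def run_auction(request, bidders):
    """Send the request to every bidder and take the highest bid.
    Every bidder receives (and can log) the profile, whether or not it wins."""
    bids = [(name, strategy(request)) for name, strategy in bidders]
    return max(bids, key=lambda bid: bid[1])

# Hundreds of demand-side platforms can be connected to a real exchange;
# each one gets the request in the ~100ms before the page finishes loading.
bidders = [
    ("dsp_a", lambda r: 0.42 if "pregnancy" in r.interests else 0.01),
    ("dsp_b", lambda r: 0.10),
]

winner = run_auction(
    BidRequest("cookie-123", "news-site.example", "London",
               ["running", "pregnancy"]),
    bidders,
)
print(winner)  # ('dsp_a', 0.42): sensitive segments raise the price
```

Note how the sensitive segment echoes the morning-after pill example above: it never had to be stated anywhere, only inferred, and it travels to every bidder regardless of which one wins the slot.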

When doing my semi-cull of excess connections, I encountered a fair amount of practical, psychological, and social resistance. Did I really want to leave? But my friends will miss me (many looked at me like I was nerdily nuts, though fewer do that now than used to). Think of all the news and deals. Think of all the memories you made. There’s no going back when you do this. Actually, on second thoughts, we’ll just set a timer to disable your account in a few days. Go away and think about it, then forget this little wobble. Oh, you came back. You really want to do this? Well, just follow these 7 steps and answer these 5 questions (with increasingly emotive language) and you can be on your way. Just in case you change your mind and find it awkward without us (we did our best to make sure you would), just log back on in the next 30 or so days. Your stuff won’t REALLY be deleted for a good while yet, and a whole raft of insights we derived, ‘anonymised’ stuff, and 3rd party copies we lost track of, will never go away.

But this is in the context of social media. Even though the work to keep you sharing is relentless and real, it can pale in comparison to similar initiatives by the state. There the cost of trying to extract yourself from the data lake, or the difficulty of trying to correct your stored profile, can exclude you from receipt of essential support and even, in extremes, kill you. I linked to some background on China above, but here’s a piece on the fallout from ‘exception errors’ in the Indian Aadhaar digital ID:

Can it ever be a fair and free exchange?

That unplugging experience is far easier now than it has historically been, at least for the biggest social media sites, but don’t get me started on the ties that bound Windows 10 to Bing and Cortana, or whatever they call her now.

In a post-Snowden, post-GDPR world we genuinely do have more choice, but in a super-fast broadband and 4G, soon to be 5G, mobile world (against a backdrop of data-hungry smart things in the IoT), the web of deathly interconnected small, medium, and large unseen players can make it almost impossible to get a real handle on the collective strings. I hate that feeling of being out of control of what is done to me, hence all the effort I made, but the lengths we are forced to go to just to see how we are being used, let alone to stop it, are why I ended up working in this privacy field. I was determined to do my small part to put people back in some semblance of control. Nowhere, arguably, is that picture more obscure than in the world of adtech.

You can ask for your account to be closed and data to be deleted…but beyond the obvious hosts, in the worlds of adtech and edtech, ask who?

Defend digital me works to clarify uses of UK pupil data and to demand transparency and appropriate control over sharing and use.

So much is cumulatively gained, but it’s become increasingly clear that rights have been ridden over to reach this point, and there’s a billion-dollar sponsored narrative that says there’s no other way.

All earth-changing innovation has a hand over fist growth phase, when promise is huge and everyone is loath to get in its way (right now that’s happening for ‘AI’). Then the balance between progress and pain becomes better defined and we step in to slow the worst excesses. That is always a tug of war between those seeing most return and those seeing most harm. Depending on the social zeitgeist, the balance of dominant politics, and economic drivers (in terms of money, materials, and markets), the first iteration of that process takes more or less time, and lands more or less in favour of those with most to lose (I’m not referring to those who might see a couple of mil shaved off their golden parachute; I’m thinking more of those who lose jobs, health, social safety nets, security, privacy, freedom, or even their lives).

To casual observers it can look like rules curb progress, and that has certainly sometimes been the case, but it is more often one of those sneaky correlations, the ones that power the data machine. “We see the sun rise when we wake up, so waking up must cause the sun to rise”. Correlations that mostly don’t pass the test when asked to prove cause and effect. Who do we think has most power to effect change? Governments, VCs, and massive corporations, or activists, charities, and everyday folk? I don’t think we can really argue that regulation equates to horribly harmful innovation brakes. It is often horribly hard to get a new concept off the ground and into the mainstream, but these things have a lifecycle, and it will always be too early to slow the hell-for-leather sprint forward in the opinion of someone with too much skin in the game.

But back to that hostage taking claim. In the main it’s maybe more akin to indentured labour, where lip service is paid to the existence of a voluntary transaction and the contract upon which it is based, but there’s evidence we could pivot back to a hostage scenario when political pressure is high, or times get lean.

Moving on to the newer model: some of these pay to play ideas bear a passing resemblance to casual, protection-free labour at some fraction of a minimum wage, but I’ve seen no proposals to tackle the risk of data hostage holding and the other roots of exploitation seen historically.

But that is only if you support my premise: that your personal data is indivisible from the real life you. If you don’t or can’t accept that then I truly think we’re screwed.

The new data broking paradigm, flipping us sixpence in return for variably well-defined use of endlessly replicable portions of our private selves, hides behind a beautiful sounding logic. A reinvention of dignity. A levelling of the data oligopolists’ playing fields. But, in reality, it sows all the same seeds, sustaining and widening the unjust divide between those who control the means of information production and everyone else. And just like the rest of the long history of securing labour to produce things for those with most power, those with least independent influence will be forced to concede the most in return.

I see no other way this can play out, but is the basic premise faulty? Is the person vs property point real?

No-one owns personal data

Ownership of personal data is something most have strong feelings about. Not, perhaps, in the context of nerdy spats between privacy pros, but definitely when the phone rings, the door gets knocked, or an email arrives in which a stranger asks for you by name.

“How did you get my name / address / number / email?!” we all immediately cry.

We very viscerally feel the indignation that someone, somewhere, gave a stranger permission to invade our home, or other personal space.

While we still deeply feel that transactions should be conducted on the basis of common decency and respect, there is a different and entirely detached conversation going on inside data dependent firms. Often ‘Data Owner’ is a formally defined role. Even when that is replaced by ‘Data Custodian’ or some less proprietorial term, the narrative will centre on data as a resource that everyone must work harder to monetise via marketing, machine learning, or selling it on to others with promises of the magic it will allow.

Much of that will come with some consideration of data protection, privacy and associated ethics, but that, almost invariably, is on the cost side of the balance sheet vs the profit sought by investors, market pundits, and other bodies who really pull the strings.

That isn’t universal. There are good people working incredibly hard to ensure proper consideration of risks on behalf of the people inside the data, but passion and diligence stand little chance in the face of a career-impacting battle for faster, cheaper, better, more. It can feel like little more than lip service is paid, unless it’s time to lobby for weaker laws, and much of that is due to us dehumanising the people used to power it all.

When I began to pivot decisively towards a purer focus on privacy the fundamental truth about data ownership smacked me between the eyes (or rather, some very smart privacy bodies did that for me).

Yep, I know, no-one owning data sounds like conceptual nonsense. If that’s how you feel, I feel your pain. It is not intuitive. But we don’t own it. We don’t own data that describes us any more than we own ourselves. We, as people in a free modern country, can’t be bought, sold, or forced to work for free. That’s why I was moved to write this post.

It’s become urgent because of the new conversation powered by millions in venture capital. The conversation about personal data as a possession, personal data as a commodity, personal data as a resource with tradable value. Not the old conversation that sounded just like it, but a new conversation framed around respect and fairness. Why, they all argue in slightly variable ways, shouldn’t YOU profit from YOUR data? After all, as we now know, everyone else has been busy selling and bartering it, or the insights derived from it.

Why indeed?!

I think it’s fair to say that most of us expect choices when it comes to our data and we’re none too happy about the high privacy price paid for services that purport to be free.

What’s the problem with paying to play with data?

To be very blunt about my view: we are screwed if we allow the new paradigm in personal data use to be paying for chunks of an online identity in such a way that people feel it’s once and done. A transaction completed. Data gone. Rights to complain gone along with it. It doesn’t have to be legally true; people just have to be encouraged to think that way.

We are doubly screwed when people inevitably start to rely upon it. This model will of course appeal to those in greater need. I’m not suggesting those people are stupid. Billions have been spent to better nudge wavering bodies, and there is a big subset of folk for whom the personal risk will no doubt be worth the returns, but at the margins the impact will be heaviest for those made most malleable by existing vulnerabilities, or made vulnerable by the need to access services that the data brokers or their customers control.

If people are nudged to rely on these transactions, whether it looks like data to rent or data to buy, it can never truly be an equitable exchange. Sometimes, if stopping and deleting data really is an option, it might be like selling an hour of your time, or, if handing over the history of a few random transactions, like selling a snippet of regrowable hair. But we could easily see an escalation to selling a kidney, or at least the genetic code that would enable construction of one, off the back of a rushed and friction-free electronic exchange, sweetened with a nice financial lure.

If they were going to get it anyway, why shouldn’t you get paid? But how does compensating people less than royally for the continuation of historical exploitation work out as a way to improve inequality today?

Giant firms paying, micro-transaction by micro-transaction, to aggregate more cumulative power than we could ever counter as the individual signatories to myriad agreements in which our rights are barely visibly signed away. In the absence of regulators that can enforce control before harms grow too large, and without other collective bargaining (the kind that doesn’t come with a vested data broking interest), how does it work out any other way?
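
A hypothetical back-of-envelope sketch of that asymmetry (every number here is invented, purely for illustration):

```python
# Back-of-envelope sketch of pay-to-play economics. Invented numbers only.
users = 50_000_000        # people enrolled by a hypothetical data broker
payout_per_user = 5.00    # assumed annual "fair share" paid per person, GBP
resale_multiple = 12      # assumed uplift once data is aggregated and resold

paid_out = users * payout_per_user
aggregate_value = paid_out * resale_multiple

print(f"Paid to individuals:  £{paid_out:,.0f}")         # £250,000,000
print(f"Broker-side value:    £{aggregate_value:,.0f}")  # £3,000,000,000
print(f"Kept by intermediaries: {1 - 1/resale_multiple:.0%}")  # 92%
```

Each individual sees a fiver; the intermediaries and their customers sit on the aggregate. And the aggregate, not the fiver, is where the power to profile, nudge, and exclude actually lives.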

Really, NO-ONE should own personal data

Surely, spinning this through a slightly different lens, this all revolves around the data being mine. Something no-one else can use without telling me, except in extremes. Something I get to change my mind about unless there’s a really good and transparent reason not to.

In effect that’s true, at least in the EU, but more to the point, in fact the WHOLE point: The data IS you.

It isn’t your data, it doesn’t belong to you, you can’t transfer ownership of it, you can’t give it away, because it is indivisible from the living, breathing human being that is YOU.

People using your data are using YOU.

Taking that to its final logical conclusion, do we want to condone holding people against their will? Do we give people a pass if what they hold is less than a physical whole? Is it ok to reserve rights to use and manipulate someone’s laugh, their walk, their face, their sense of humour, their sexuality, their genes, when the understanding of implications is at best superficial and limited to the most immediate proposed use? Do we ignore the fact that a decision to stay is more like Stockholm Syndrome than informed choice?

Still got that furrowed brow? Perhaps jump to the more obvious end of the personal data spectrum when trying to anchor it in your mind.

DNA donation

This is an easier ask. Some more or less microscopic trace of you, placed in a test tube and centrifuged until those blurry bands summarise the past, present, and future of your ancestral line. THAT is DEFINITELY a bit of you. The impact of DNA compromise or misuse will fall directly on you and the people genetically related to you. It can’t be uncoupled from the rest of the person you are. You can’t get your double helix replaced.

So let’s take that as our starting point.

DNA Payday Inc.* says:


“Hang on, you PAID Acme Ancestral*!? YOU paid THEM to do genetic analysis of your genome, then they got paid THOUSANDS for your genetic data and the survey results (phenotype data) you provided to put medical and lifestyle context around it!?”

*similarity to any actual companies is purely coincidental

“WE want to give you some of that money back. Why should they get to use your data, DNA profile, and derived insights for free? In fact, not for free: you actually PAID for the privilege of being commoditised.”


You can probably see where this is going.

The new data brokers

There’s the payoff. They get to be intermediaries, taking a cut of the money someone will pay to use your data. That might be their share of a service fee paid for your specific data set, for data aggregated from multiple individuals, or for insights derived from related analysis. Often the promise, beyond payment, is that they will sustain scrupulous transparency, effectively anonymise data where feasible, and facilitate efficient enforcement of your various data subject rights.

ZDNet, 10 July 2019, by Danny Palmer

The model is largely the same as for the new ‘Data Trusts’ pitched by the government as a solution to fair sharing of data between bodies doing research and other work for the public good. An ethical clearing house, if you will. Think about the most recent stories of NHS medical data being farmed out in bulk to tech firms as part of public-private AI-ish endeavours.

Effectively, through their scale and influence, they demand the respect, transparency, and control that you, as a private person, have hardly ever been afforded by the likes of Facebook, Google, Amazon, Microsoft, Apple etc etc etc.

Hallelujah hey. About time we got to demand our data protection rights and a fair price for what we have historically been cornered or nudged to give away.

But…on the other hand…

Does the carefully defined greater national or societal good (never mind the price of a quick pint, or even a new sofa) negate the cumulative short, medium and long-term risks of uninformed consent, rushed implementation, dangerous gaps in protective processing controls, and poor ongoing oversight of reuse, sharing, security, and retention?

How would any individual know the difference between the initial polished promise of a diligent data ‘disruptor’, NHS initiative, or edtech firm pitching to single out suicidal kids, and the behind-the-scenes facilitators who might package and sell you to the highest AI bidder while labelling you in persistently inaccurate, or even dangerous, ways?

How would we make that stuff transparent and easily consumable in the days, hours, or sometimes minutes, that we are given to sign on digitised lines?

Or, in other words:

Who will guard the guards?

Can we see any downsides to the free market being asked, once again, to decide the price for a chunk of our data? A market that’s already proved it will fulfil its economic imperative to maximise production of information, while cutting production costs. Doing so by devising ever more subtle and targeted ways to manipulate us to increase data sharing quantity, quality, and frequency.

A free market flinging around loss-leaders to plug itself into data rich environments like our schools, our hospitals, our councils, our government departments, and myriad other bodies who are cash-strapped and ill equipped to do the security and privacy due diligence we would all expect.

Are we confident that these new ventures, answering to all of the same founders and investors, won’t become less and less concerned with injecting control on our behalf and more and more concerned about new and more efficient forms of data subject recruitment, information production, and financial returns?

Even more to the point, how long do we think the more ethical and exploitation-resistant forms of this business model will avoid the lure of a multi-million or even billion-dollar acquisition by the OG data players, or requests they can’t refuse from interested governments?

But I’m just a jaded privacy person, aren’t I? A hater of all forms of innovation, sure that everyone is out to profit at any cost.

In fact that couldn’t be farther from the truth. I have watched security operations founder for lack of means to effectively analyse bulk data, and governance efforts die for want of useful metrics. Then, very personally, while my mother was fighting through the nine months of her life between her diagnosis and her death from pancreatic cancer, I was passionate about the need to ethically aggregate and analyse available patient data to inform future research. In 2016 pancreatic cancer was one of the biggest causes of cancer-related deaths in the UK, yet it received only 1% of research funding, and in 40 years the 5 year survival rate has only increased from 4% to a global maximum of 9%. So no, I am not ignorant of potential gains. I have also worked with lots of incredibly diligent people who are driven to do the right thing.

I really want to be proved wrong. I really want to be able, as a private individual, to gain comfort and confidence in the forethought, risk management, and on-going work that will be done to keep me safe when I agree to be part of a data set.

Not all firms, not all governments, not all data brokers, not all people…I know. But right now, amid the relative ubiquity of hard-to-grasp implications, unknowable data supply chains, and rushed, inadequately risk assessed, poorly controlled proofs of concept for facial recognition and DNA collection, it’s almost impossible to spot the privacy-protecting innovation. The ideas that might be underpinned with real respect and solid safety nets for the people they plan to use.

When it comes to paying people for data, I don’t see a way to avoid the pitfalls described here. Redefining the business model as employment of data subjects, outlawing the protectionist practices that look a lot like hostage taking, and regulating to move indentured labour onto a fairer and more transparent employment footing, is, to me, the only way this makes conceptual sense, but can it work in practice?

The lead time or disconnect between wrong-doing and meaningful challenge is currently huge. There’s an almost non-existent chance of a single individual prompting lasting change to intentional or accidental poor practice (as opposed to prompting a pay-out pinned to an NDA, so firms can get back to business as usual). Placing the onus on individuals to right this ship and pretending that a clear privacy notice is enough to make it fair, is ridiculous and invites wrongdoing with impunity.

Yes, the GDPR and some similar global legislation opens the door to collective action, but how will that effect change if it is based in any way on the premise that our virtual selves are theirs to own, or on the premise that we lose rights to our identity if we make any part of it public? Not least because the legal folk struggle to get beyond the physical and the financial as benchmarks for harm, because the statute books just haven’t kept up.

Taking all of this into account, including nearly two decades working to inject consideration of security and privacy into supplier due diligence and technical change, I am highly sceptical that either ethics or existing efforts to enforce the law will reduce the incidence and moderate the impact of abuse, rushed mistakes, poor controls, or processing declared devoid of bias when viewed through a non-diverse developer lens. All accepted as risks against a yardstick based on the wellbeing of the processing entity. Accepted by people who won’t be bearing the impact, as they are rarely, if ever, held to account.

All, as I have argued at extreme length, built on the dehumanisation of personal data. A brave new world still divorced from real consideration of the individuals entangled in the code and driven by the same financial backers and power brokers that got us into many of these messes in the first place.


And now, having thoroughly vented my spleen, voiced my fears, and set out my privacy stall, I am going to retire to draft part 2. There I will try to share, by contrast, some of the solutions I’m beginning to see. Things that might re-inject real humanity into data handling. Something I passionately argue is essential to power any semblance of a more equitable way.
