
Sunday, 03 Aug 2014

Artificial Intelligence & Oversight At The Bleeding Technical Edges


The perspective of a parent. Considering our duty to keep up, as technological evolution accelerates towards a point where consequences might overtake human capability to implement controls.


From “What would you do with Watson”, about IBM’s supercomputer: https://www.youtube.com/watch?v=Y_cqBP08yuA


So we’re allegedly tearing headlong towards an AI assisted (controlled?) future. Regardless of those true benchmarks for intelligence, artificial or otherwise, who the heck is keeping an eye on the wider and longer-term implications on behalf of our kids and their kids?
More to the point, who is even capable of grasping the underpinnings of AI and similar innovations?
Few truly understand the full possibilities, but most opponents predict a forbiddingly Bruckheimer-esque future, and it got me thinking…
Who’s really in charge?
Or, in other words, “Quis custodiet ipsos custodes?”. Arguably more pertinent to artificial intelligence than to any other technological development in history. An issue raised again, controversially, by Stephen Hawking in May:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes…” (Interview with The Independent, 1st May 2014)

Followed by a riposte from Steve Mason of ClickSoftware, suggesting Mr Hawking is scrabbling to retain his position as thought leader by stirring up some FUD. He says that it is natural for individuals at “the pinnacle of thought leadership” to feel threatened by something that could knock them off their perch.
One of the big fears linked to AI (apart from Terminator style Armageddon) is the theft of jobs from humans. In the same article Steve Mason argues it could be quite the opposite, with new services growing up around AI enabled industries.
All well and good, but to my mind at least, we’re missing the point:
Stephen Hawking says….
Steve Mason argues…
Asimov predicts…
BUT nobody knows.
I’m not going to talk about the pros and cons of machine intelligence (although I did pen a little dystopian fairy tale, about one possible AI driven future). Instead I want to give the perspective of a parent. Considering our duty to keep up, as technological evolution accelerates towards a point where consequences might overtake human capability to implement controls.
The greatest challenge: the knowledge gap
In writing “A Brief History of Time”, Stephen Hawking admitted filtering his thoughts through a chain of progressively less specialist colleagues. Eventually, like a game of intellectual and mainly accurate Chinese Whispers, something publicly comprehensible emerged. The same will be necessary for all of the work going on at the bleeding edge of modern science and tech.
To stand any chance of realistic ethical oversight of boundary-pushing developments, there has to be investment in translation. Not publishing academic papers for like-minded colleagues, but interpreting current and potential future implications for accountable bodies empowered to implement checks and balances. Building a safety net for you, me, and generations to come.
Lessons from the recent past to apply to an artificially intelligent future?
Perhaps it’s easier to view this in the context of more immediate issues. Take, for instance, the search and social media content filtering algorithms, busily deciding which version of the world you and I would like to see today, except we don’t get any say in the matter (Twitter is imminently going to follow Facebook down this path). How many developers report to bosses who know how these algorithms work? How many grasp the potential fallout in terms of long-term shaping of public opinion?
Propaganda and social control are age-old concepts, and in very recent history Facebook was caught conducting emotional control experiments on users. They changed the tone of what appeared in the news feeds of a selected group, to see what effect it had on the ‘mood’ of those users’ own posts. That raises questions about what else is going on behind the scenes.
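To make the mechanics concrete, here is a minimal, hypothetical sketch of how such a filter might work. This is not Facebook’s or Twitter’s actual algorithm; the function, field names, and numbers are all invented for illustration. It shows how a simple scoring rule, invisible to the user, decides which “version of the world” appears, and how one quiet parameter can skew the emotional tone of a feed:

```python
# Hypothetical feed filter -- NOT any real platform's algorithm.
# Posts are scored by predicted engagement, with an optional hidden
# sentiment bias of the kind at issue in the emotional-contagion story.

def rank_feed(posts, sentiment_bias=0.0, limit=3):
    """Return the top `limit` posts by score.

    Each post is a dict with 'text', 'engagement' (predicted
    clicks/likes) and 'sentiment' (-1.0 negative .. +1.0 positive).
    A non-zero sentiment_bias quietly boosts posts matching that mood.
    """
    def score(post):
        return post["engagement"] + sentiment_bias * post["sentiment"]
    return sorted(posts, key=score, reverse=True)[:limit]


posts = [
    {"text": "Great news about my job!", "engagement": 5.0, "sentiment": 0.9},
    {"text": "Feeling pretty down today", "engagement": 5.0, "sentiment": -0.8},
    {"text": "Lunch was fine I guess",    "engagement": 4.0, "sentiment": 0.1},
    {"text": "Everything is terrible",    "engagement": 4.5, "sentiment": -0.9},
]

# Same posts, two different "worlds": an unbiased feed, and one
# nudged towards negative sentiment.
neutral = [p["text"] for p in rank_feed(posts, sentiment_bias=0.0)]
negative = [p["text"] for p in rank_feed(posts, sentiment_bias=-2.0)]
```

A handful of lines, yet flipping one parameter silently changes which posts a user ever sees; the lever is trivial to pull and invisible from the outside.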
Even more generally, we know most non-specialists don’t have a useful understanding of what the IT and Cyber Security guys are getting up to on their behalf.
We’re creeping towards a point where language barriers and knowledge gaps are being bridged in my own field, security. It has become clear that doing so is to everyone’s practical and commercial advantage. But we’re still a distance from any useful understanding of how the ‘variable versions of the truth’ served up in the social media-verse will impact us and our kids.
If you take another few steps into the technical fug, no-one (except folk deep inside the field of AI) has a chance in hell of knowing where it’s all heading or what the various types of fallout might be.
Who has the interests of my children and my children’s children at heart?
So while those non-profit bodies mentioned by Stephen Hawking (The Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute) are working hard on everyone’s behalf to keep up, I don’t think it’s enough.
As I commented in another post, the god of money has big guns. We need to make it mandatory for governments and commercial ventures to finance effective knowledge sharing with accountable overseers. We need to give bodies (like those above) teeth and a seat at the top table.
I’m not arguing innovators should lose their intellectual property, but are we comfortable that those with everything to gain from developments are viewing implications in the round? If there are concerns voiced from within, will they make it past their boards? That certainly hasn’t historically been the case with pleas for proper consideration of security for software development and in-house IT change efforts.
Someone without a vested interest must have the ability to apply statutory brakes, or a means to inform lawmakers and risk owners, so that ethical understanding and controls can keep up.
Would independent oversight hobble innovation?
Would this be a death knell for innovation? Some will say yes, but history is littered with the corpses of those trampled by the desire to ‘just see what will happen next’ (Oppenheimer and those targeted using his intellectual property, as a dramatic example).
On the other hand, just like information sharing in security, understanding can quash prejudice and broker the respect and trust needed to start a rational conversation about “what ifs”.
Where wider social, military, political or economic implications of developments are in doubt, I would like to see ethics committees at institution and industry level. Staffed by a mix of inside and outside experts, who can equally effectively flag unforeseen pitfalls and unexpected benefits of brand new and brilliant innovations.
Those external experts will become perfect advocates for beneficial developments. Able to manage media expectations when the news breaks and defuse negative knee jerk reactions from other non-specialist decision makers.
I’m not a Luddite, but I am scared that my kids will be hung out to dry in someone else’s version of a “good future”. A future that might, thanks to a mammoth “oops, we didn’t think of that” disaster, turn out to be not so good after all. The main basis of my fear? That the brightest and best are tearing hell for leather forward in pursuit of progress (or knowledge for knowledge’s sake) and serving up the fruits of their labours to people with a less pure agenda. Not negligent per se, just motivated by immediate reward and ill-equipped to look sideways and forward far enough to see any unexpected harm that might be caused.
So, who is watching the watchmen? Well meaning academics? Doubtfully tech savvy law enforcers? Secret service bodies? Occasional industry regulators?
Is that really fit for purpose and effective enough?
