Oliver Rees


How to catch a thief

I recently had the pleasure of being featured in a Barclays video about online fraud. Cyber fraud is a critical global issue, and a large amount of it happens because people, rather than technology, are compromised.
You can watch the video here.

[LINK]

Artificial Intelligence, Ethics & GDPR

Discretion has been a critical part of society's decision making for as long as societies have existed. Rather than handing down absolute decisions, judges in legal systems, for example, are able to reach proportionate decisions based on mitigating and aggravating factors. A key responsibility of a judge is to write a judgement (and you can read them if you like); without explanation and justification, the conclusion a judge comes to is invalid.

Discretion is also a key part of philosophy & ethics. In the classic philosophical thought experiment, the trolley problem, you have the choice of killing one person to save five by flicking a lever on the track. In this thought experiment the outcome is the least important element. Simply to assert that "I'd flick the lever" is to misunderstand the question. Explaining how you reach your decision gives rise to some of the most important ethical and legal questions. Discussion of the topic helps us to analyse what sort of society we live in and what values we live by. These are not trivial questions: killing one to save five turns you into judge and jury, weighing one life against another. Inaction demonstrates your indifference to human life: if you won't kill one to save five, would you kill one to save fifty, or ten thousand? And what if that one was your child?

In most studies conducted, 90% of people given the option would kill one to save five. The psychologist David Navarrete even conducted this experiment in virtual reality, with the one person screaming to add an element of realism (I'm unsure how this passed the ethics committee...). Nine out of ten people still flicked the lever. However, if the one person is your child, parent or sibling, this goes down to 33%. The discretionary factors tell us something about the society we live in (whether we like it or not): we're broadly utilitarian, but this changes dramatically if we know or love the people involved.

The role of artificial intelligence in this sort of decision making is becoming more relevant every day. As AI replaces humans as the drivers of cars, the diagnosers of disease and the arresters of criminals, the traditional discretion-based "fuzzy" decision making that has fuelled ethical debates since Ancient Greece is being replaced.


MIT has already applied the classic trolley problem to a self-driving car that has suffered a brake failure, allowing you to create elaborate and often ridiculous ethical situations. Should a self-driving car kill the four people inside it or the people crossing the road? What if some of those people are elderly? Or babies? Or dogs? Though this situation sounds absurd, it is an indication of the real-world artificial decision making that will soon be taking place. Germany has recently issued legislation to say that a self-driving car "must do the least amount of harm if put into a situation where hitting a human is unavoidable, and cannot discriminate based on age, gender, race, disability, or any other observable factors". This is a discretion-blind, politically correct and ultimately easy piece of legislation to pass. To draw the parallel back to the trolley situation, it is like saying "thou shalt not kill"; but if that is the case, why do 90% of people flick the switch? And this isn't the only problem. For a start, how can a self-driving car predict what sort of damage will be done to each individual? What about the damage to those in the car? And what if the people involved are relatives of the driver: does that make a difference? If artificial intelligence is to be reflective of the society we live in, then should we programme in photos of family and friends to say that under no circumstances should these people be harmed, as the trolley experiment findings suggest?
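
To make the paradox concrete, here is a minimal sketch of what the German rule looks like when reduced to code. Everything in it, from the Person class to the candidate actions, is hypothetical and purely illustrative; a real system would be vastly more complicated. Notice that once discrimination is forbidden, "least harm" collapses to a head count:

```python
# A toy model of the German "least harm, no discrimination" rule.
# All names and scenarios here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Person:
    # Observable attributes exist, but the rule forbids using them.
    age: int
    relation_to_driver: str  # e.g. "stranger", "child" -- must be ignored

def expected_harm(people_hit):
    # Under the legislation every person counts equally: no weighting by
    # age, gender, race, disability or relationship. Harm is a head count.
    return len(people_hit)

def choose_action(actions):
    # Pick the action whose outcome hits the fewest people.
    return min(actions, key=lambda a: expected_harm(actions[a]))

# Toy scenario: swerving hits one pedestrian, staying in lane hits two.
actions = {
    "swerve": [Person(age=82, relation_to_driver="stranger")],
    "stay_in_lane": [Person(age=8, relation_to_driver="child"),
                     Person(age=35, relation_to_driver="stranger")],
}
print(choose_action(actions))  # -> "swerve": two lives outweigh one
```

The rule has to carry relation_to_driver around and then studiously ignore it, which is exactly the discretionary factor that two thirds of people in the trolley studies refuse to ignore.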


These AI ethical codes are common in today's world. Sage, the SaaS accountancy platform, has an "ethics of code" including principles like "AI must be held to account — and so must users." But when we say "held to account", the question is: to whom? In the self-driving car example, a decision will be made based on factors that the car has seen before, creating an average of human biases and morality. Pretending that self-driving cars can always make the "ethical" and "right" decision is to misunderstand the entire point of ethics. There is no right answer. Whether I kill one to save five, or ignore the train tracks altogether, no decision is ideal or right.

Perhaps the answer is to prevent computers from making ethical decisions at all. Among the key individual rights introduced by the GDPR legislation coming into force next year are the "rights related to automated decision making including profiling". This essentially means that any decision-making process carried out "without human intervention" will have to be documented, and that if decisions are made in this way the individual has the right to "obtain human intervention". In practice this will mean automated decisions being made, a user appealing, and a "human" within the business simply rubber-stamping the computer's judgement.
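
For illustration only, here is a sketch of what documenting an automated decision and handling a request for human intervention might look like. The field names and workflow are my own invention, not anything prescribed by the GDPR text:

```python
# A hypothetical record of an automated decision, plus the appeal path.
# Nothing here is a reference implementation of the regulation itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                  # e.g. "loan_refused"
    inputs_used: dict             # the profiling data the model saw
    model_version: str            # needed to reproduce the decision later
    made_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    human_reviewed: bool = False  # flipped when intervention is requested

def request_human_intervention(decision, reviewer):
    # The individual appeals; a named human must re-examine the decision.
    # Whether this is substantive review or rubber-stamping is, as argued
    # above, entirely up to the business.
    print(f"{reviewer} re-examining decision for {decision.subject_id}")
    decision.human_reviewed = True
    return decision

d = AutomatedDecision("user-42", "loan_refused",
                      {"income": 21000, "postcode_risk": 0.8},
                      "credit-model-v3")
request_human_intervention(d, reviewer="ops-analyst")
```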

Despite this legislation, the ethical and societal questions remain essentially unsolved. The words "Artificial Intelligence" belie the simplistic principles that underpin AI. AI doesn't know why it makes the decisions it does, just that it is solving the problem in the most efficient way possible. Asking a piece of artificial intelligence software why it made a decision is like asking a giraffe why its neck is so long. Natural selection provides a single key performance indicator, survival, and animals mutate and experiment in the hope that they'll survive for longer than their peers. There are no ethical systems outside the human world because ethics is a human construct. Simply asserting that AI must play by the "right" ethical rules is simplistic and dangerous, creating a one-size-fits-all model that leads to compliance box-ticking rather than substantive change.

[LINK]

The next technology revolution will come from a government


Whilst cryptocurrencies like Bitcoin and Ethereum continue to accelerate in value, debates about the true value of these currencies and the blockchain technology on which they're based have continued to rage. At the Blockchain Summit last week in London, companies and entrepreneurs speculated about a world where everything, from your driverless car to your AI assistant, will eventually be "blockchainised". The reasons given ranged from efficiency savings to, as an NHS consultant anaesthetist put it, "the fact that no one trusts governments or authorities any more." Ironically, though, what is holding back the development of blockchain technology is the fact that, at the moment, people do trust centralised authorities more than decentralised technologies.


One only has to look at images of the Bitcoin mines where precious coins are mined to think that regulation may be a good thing, if only to stop the devastating environmental effects that use of the Bitcoin and Ethereum blockchains causes. In addition, speculation about the regulatory compatibility of cryptocurrencies creates a huge amount of uncertainty. When a blockchain is meant to create the most trusted conditions for transactions to execute, it's slightly off-putting that governments will not officially endorse the technology.

And yet blockchain, and particularly smart contracts, do have the opportunity to revolutionise the way that transactions about the future are perceived and executed. What all of the blockchains you currently hear about lack is lasting trust. If I don't know whether Ethereum is going to be around in five years, because it may be replaced by its sexy successor NEO, it's unlikely I'm going to feel comfortable purchasing a twenty-year bond based on Ethereum. The only institutions in the world with the authority needed to back an open and public blockchain are governments themselves.

The Chinese government has countless think tanks looking into the benefits of blockchain, as well as a consortium of companies like Tencent and Alibaba that will ultimately come together to create a state-sponsored blockchain. Like GPS, a technology developed by the US military and given (currently) free to the world, a state-sponsored blockchain will be the trusted, open and public source of truth in the future. The only question is which country will get there first. One thing is for sure: it's extremely unlikely that country will be Britain. Whilst Estonia has already experimented with minting its own cryptocurrency and launching a blockchain, the UK government didn't even send anyone to the London-based Blockchain Summit.

When the implications of blockchain technology are fundamentally political and social, it is desperately sad that the authority that should be taking an interest in regulating and adopting this technology is not just silent but deeply ignorant of its capabilities. Whilst China ploughs ahead with brave blockchain regulation and large-scale investment, the British government, at a time when we need innovation most, remains firmly rooted in the past.

[LINK]

Mission Critical Low Priorities

Ask any CEO or leader of an organisation what their top worries are and they'll probably tell you that they're related to technological disruption or cyber security. In a recent PwC report, cyber security ranked among the top five concerns of surveyed CEOs. Yet despite the perceived importance of the issue, time and again it is the simplest vulnerabilities that catch companies out. Whether it's falling for an email impersonation or failing to patch systems, it's clear that talking about these issues is a lot easier than doing anything about them.

Costas Markides of the London Business School has talked about the problem of senior leaders saying "we need to do something about innovation". If you were at home and your partner said "we need to do the washing up", who exactly is "we"? The phrase is nondescript and non-directional, meaning the dishes would inevitably languish in the sink. In these cases the rhetoric itself becomes dangerous. Rather than prescribing a course of action, like hiring a Chief Information Security Officer or growing an innovation department, most companies default to continuing with business as usual.


Cyber security and innovation are hugely important but rarely acted upon

There are a number of reasons I've seen for this behaviour. By recognising the common traps, it becomes easier to align the areas you perceive as most important with your actual priorities.

1. It won't happen to me

There is often an underlying feeling that tech disruption or a cyber attack won't happen to you or your company. "My people are too smart to fall for a phishing attack," or "It's a hard market to break into really; what damage can a start-up do?" Like global warming, it's only after the damage has been done that it becomes clear these sorts of things can happen to anyone.

By creating scenarios about the future, as the packaging manufacturer DS Smith has done with its 2025 scenarios, you can bring some level of reality to these hypotheticals. By assessing what actual damage might be done and documenting who would face the repercussions, you can reframe the debate.

2. Where's the ROI?

The other problem with hypotheticals is that it's hard to build a business case around them. The recent WannaCry ransomware outbreak could have been prevented by a simple patch to NHS computers; yet justifying significant spend on a security update is difficult when there are pressing funding needs in other areas of the organisation. Making a business case for an unknown is only possible when you accept that occurrences like a hack or disruption are not just likely but inevitable. To help stakeholders understand just how likely this is, company-wide phishing tests can be used to demonstrate vulnerabilities, as in the sketch below.
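
As a sketch of how such a test might feed a business case, here is the kind of arithmetic involved. The departments and numbers are invented for illustration, and a real campaign would come from a dedicated tool rather than a hand-written dict:

```python
# Hypothetical results of a simulated (benign) phishing campaign.
test_results = {
    "finance":     {"sent": 40, "clicked": 11},
    "engineering": {"sent": 55, "clicked": 4},
    "sales":       {"sent": 30, "clicked": 9},
}

# Rank departments by click-through rate, worst first. A concrete rate
# per department turns an abstract risk into a number stakeholders can
# act on, and shows where training budget should go first.
for dept, r in sorted(test_results.items(),
                      key=lambda kv: kv[1]["clicked"] / kv[1]["sent"],
                      reverse=True):
    rate = r["clicked"] / r["sent"]
    print(f"{dept}: {rate:.0%} clicked the simulated phishing link")
```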

3. Innovation Stigma

"I'm just going to spend the next two hours doing some innovation." Imagine if one of your colleagues said this to you? You'd probably picture them sitting on a beanbag doodling on a pad of paper. Though innovation is critical to business today, finding the time and space to do it is difficult - especially when it's often not criteria in someone's performance review. It's only by promoting innovation as a tangible activity within a company - through a lab or investments for example - that "innovation activity" can happen without it looking like people are wasting their time.

4. Fear of failure

Underlying all of these issues is the problem that admitting to failure is incredibly difficult in today's business environment. Both innovation and cyber security have a high probability of failure: even if you invest significantly in both areas, it's still possible that you will be disrupted and hacked. It's a game of probability, though, and therefore it's important to be transparent and communicative about the risks involved.

[LINK]