How to catch a thief

I recently had the pleasure of being featured in a Barclays video about online fraud. Cyber fraud is a critical global issue, and a large amount of it happens because people, rather than technology, are compromised.
You can watch the video here.

[LINK]

Artificial Intelligence, Ethics & GDPR

Discretion has been a critical part of society’s decision making for as long as societies have existed. In legal systems, for example, rather than handing down absolute decisions, judges are able to make proportionate ones based on mitigating and aggravating factors. A key responsibility of a judge is to write a judgement (and you can read them if you like); without explanation and justification, the conclusion a judge comes to is invalid.

Discretion is also a key part of philosophy & ethics. In the classic philosophical thought experiment, the trolley problem, you have the choice of killing one person to save five by switching a lever on the track. In this thought experiment the outcome is the least important element. Simply to assert that "I'd flick the lever" is to misunderstand the question. Explaining how you reach your decision gives rise to some of the most important ethical and legal questions. Discussion of the topic helps us to analyse what sort of society we live in and what values we live by. These are not trivial questions: killing one to save five turns you into judge and jury, weighing one life against another. Inaction demonstrates your indifference to human life: if you won’t kill one to save five, would you kill one to save fifty, or ten thousand? And what if that one was your child?

In most studies conducted, 90% of people given the option would kill one to save five. The psychologist David Navarrete even conducted this experiment in virtual reality, with the one person screaming to add an element of realism (I'm unsure how this passed the ethics committee...). Nine out of ten people still flicked the lever. If the one person is your child, parent or sibling, though, this goes down to 33%. The discretionary factors tell us something about the society we live in (whether we like it or not): we're broadly utilitarian, but this changes dramatically if we know or love the people involved.

The role of artificial intelligence in this sort of decision making is becoming more relevant every day. As AI replaces humans as the drivers of cars, the diagnosers of disease and the arresters of criminals, the traditional discretion-based “fuzzy” decision making that has fuelled ethical debates since Ancient Greece is being replaced.


MIT has already applied the classic trolley problem to a self-driving car that has had a brake failure, allowing you to create elaborate and often ridiculous ethical situations. Should a self-driving car kill the four people in the car, or the people crossing the road? What if some of those people are elderly? Or babies? Or dogs? Though this situation sounds absurd, it is an indication of the real-world artificial decision making that will soon be taking place. Germany has recently issued legislation to say that a self-driving car “must do the least amount of harm if put into a situation where hitting a human is unavoidable, and cannot discriminate based on age, gender, race, disability, or any other observable factors”. This is a discretion-blind, politically correct & ultimately easy piece of legislation to pass. To parallel it back to the trolley situation, this is like saying “thou shalt not kill” - but if that’s the case, then why do 90% of people flick the switch? And this isn’t the only problem. For a start, how can a self-driving car predict what sort of damage will be done to each individual? What about the damage to those in the car? And what if the people involved are relatives of the drivers - does that make a difference? If artificial intelligence is to be reflective of the society we live in, then should we programme in photos of family and friends to say that under no circumstances should we harm these people, as in the trolley experiment findings?
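
To see how little discretion the German rule actually leaves, here's a minimal sketch in Python (the names are hypothetical - this is not how any real vehicle is programmed): once age, gender, race and every other observable factor are excluded, the decision collapses into a head count.

    from dataclasses import dataclass

    @dataclass
    class Option:
        description: str
        expected_casualties: int  # the only input the rule permits

    def choose_course(options):
        # "least harm, no discrimination": with every observable factor
        # about the people excluded, only the casualty count remains
        return min(options, key=lambda o: o.expected_casualties)

    stay = Option("stay on course", expected_casualties=4)
    swerve = Option("swerve off the road", expected_casualties=1)
    print(choose_course([stay, swerve]).description)  # swerve off the road

Even in this toy form, the unanswered questions remain: the casualty estimate itself is guesswork, and nothing distinguishes the passengers from the pedestrians.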


These AI ethical codes are common in today’s world. Sage, the SaaS accountancy platform, has an “ethics of code” including principles like “AI must be held to account — and so must users.” But when we say “held to account”, the question is: to whom? In the self-driving car example, a decision will be made based on factors that the car has seen before - creating an average of human biases and morality. Pretending that self-driving cars can always make the “ethical” and “right” decision is to misunderstand the entire point of ethics. There is no right answer. Whether I kill one to save five, or ignore the train tracks altogether, no decision is ideal or right.

Perhaps the answer is to prevent computers from making ethical decisions at all. In the GDPR legislation coming into force next year, some of the key individual rights introduced are the “rights related to automated decision making including profiling”. This essentially means that any decision-making process carried out “without human intervention” will have to be documented - and that if decisions are made in this way, the individual has a right to “obtain human intervention.” In practice this will mean automated decisions being made, a user appealing, and a “human” within the business simply rubber-stamping the computer’s judgement.
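
As a hypothetical sketch of that loop (none of these names come from the regulation itself), both the documentation and the “human intervention” can be satisfied without anyone actually revisiting the decision:

    import datetime

    def automated_decision(applicant):
        # stand-in for whatever model the business actually runs
        return applicant["score"] > 0.7

    def decide_and_document(applicant, audit_log):
        outcome = automated_decision(applicant)
        audit_log.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "outcome": outcome,
            "without_human_intervention": True,  # the fact that must be documented
            "reviewed_by": None,
        })
        return outcome

    def handle_appeal(log_entry, reviewer_name):
        # the "right to obtain human intervention", as a rubber stamp
        log_entry["reviewed_by"] = reviewer_name
        log_entry["outcome_changed"] = False

The paper trail is impeccable; whether the human review means anything is another matter.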

Despite this legislation, the ethical and societal questions remain essentially unsolved. The words “Artificial Intelligence” belie the simplistic principles that underpin AI. AI doesn’t know why it makes the decisions that it does, just that it is solving the problem in the most efficient way possible. Asking a piece of artificial intelligence software why it made a decision is like asking a giraffe why its neck is so long. Natural selection provides a key performance indicator - survival - and animals experiment and mutate in the hope that they’ll find a way to survive for longer than their peers. There are no ethical systems outside the human world because ethics is a human construct. Simply asserting that AI must play by the “right” ethical rules is simplistic and dangerous, creating a one-size-fits-all model that leads to compliance box-ticking rather than substantive change.

[LINK]

The next technology revolution will come from a government


Whilst cryptocurrencies like Bitcoin and Ethereum continue to accelerate in value, debates about the true value of these currencies and the blockchain technology on which they’re based have continued to rage. At the Blockchain Summit in London last week, companies and entrepreneurs speculated about a world where everything - from your driverless car to your AI assistant - will eventually be “blockchainised”. The reasons given ranged from efficiency savings to, as a consultant NHS anaesthesiologist put it, “the fact that no one trusts governments or authorities any more.” Ironically, though, what is holding back the development of blockchain technology is the fact that at the moment people do trust centralised authorities more than decentralised technologies.


One only has to look at images of the Bitcoin mines where precious coins are minted to think that regulation may be a good thing - if only to stop the devastating environmental effects that use of the Bitcoin and Ethereum blockchains causes. In addition to this, speculation about the regulatory compatibility of cryptocurrencies creates a huge amount of uncertainty. When a blockchain is meant to create the most trusted conditions for transactions to execute, it’s slightly off-putting that governments will not officially endorse the technology.

And yet blockchain, and particularly smart contracts, do have the opportunity to revolutionise the way that transactions about the future are perceived and executed. What all the blockchains you might currently hear about lack is trust. If I don’t know whether Ethereum is going to be around in five years - or whether it will have been replaced by its sexy successor NEO - it’s unlikely I’m going to feel comfortable purchasing a twenty-year bond based on Ethereum. The only institutions in the world with the authority needed to back an open & public blockchain are governments themselves.

The Chinese government has countless think tanks looking into the benefits of blockchain, as well as a consortium of companies like Tencent and Alibaba that will ultimately come together to create a state-sponsored blockchain. Like GPS - a technology developed by the US military and given (currently) free to the world - a state-sponsored blockchain will be the trusted, open and public source of truth in the future. The only question is which country will get there first. One thing is for sure: it’s extremely unlikely that country will be Britain. Whilst Estonia has already experimented with minting its own cryptocurrency and launching a blockchain, the UK government didn’t even send anyone to the London-based Blockchain Summit.

When the implications of blockchain technology are fundamentally political and social, it is desperately sad that the authority that should be taking an interest in regulating and adopting this technology is not just silent but deeply ignorant about its capabilities. Whilst China ploughs ahead with brave blockchain regulation and large-scale investment, the British government - at a time when we need innovation most - remains firmly rooted in the past.

[LINK]

Blockchain & the Truth

Computers and humans think in fundamentally different ways when it comes to the concept of "the truth". A computer's understanding of the truth is based on IF statements: decisions that give a binary outcome based on a piece of supplied data. To work out if I can take money out of an ATM, somewhere in the depths of the ATM's code base will be something along the lines of:

If the amount in this person's account is greater than amount being withdrawn, allow the withdrawal

If I weren’t telling the truth - attempting to withdraw more money than was in my account - the ATM could easily tell, because it has access to a data set of truth (my account balance) that it can compare with the data I supply (my request for cash). The truth here is always true and impossible to manipulate (without directly hacking the bank): a computer cannot be fooled when given the correct data.
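
As a minimal sketch in Python (hypothetical names - real ATM code is rather more involved), the check amounts to a single comparison against the bank's record:

    def allow_withdrawal(account_balance, amount_requested):
        # the "data set of truth" (the balance) is compared directly
        # with the data I supply (the withdrawal request)
        return account_balance >= amount_requested

    allow_withdrawal(100.00, 50.00)   # True: withdrawal allowed
    allow_withdrawal(100.00, 150.00)  # False: the lie is caught instantly

There is no arguing with the second result; the only inputs are the data.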

When it comes to humans, there is a much more nuanced and discretionary understanding of the truth. As the magician and psychological illusionist Derren Brown has demonstrated, even a basic question like “can I buy this item?” can be manipulated based on emotional and subconscious cues. This allowed him to walk into a jewellery shop in the US and buy a necklace with blank paper instead of bank notes.


He gave the wrong data (which would have failed the ATM’s IF test), but the way that he supplied it (with a healthy dose of distraction and manipulation) caused the shopkeeper to make an incorrect judgement about what was true and what was false.

Even in one of the most supposedly objective systems - the law - there are many more nuances than we think. In fact, the entire judicial system is, in many ways, a result of the necessity for discretion. Killing is wrong, but under certain circumstances and with certain mitigations it becomes right. There is rarely a binary answer as there is in computing, and so the translation of these nuances into absolutes becomes a challenge for a computer scientist.

This becomes problematic when one considers the implications of this “fuzzy thinking” for a technology that gives the impression of authority: blockchain. By having an “immutable” ledger, many proponents of blockchain suggest that the issue of lying can be reduced through a combination of an incorruptible audit trail (you can see every transaction that has ever been entered into the blockchain) and the decentralised nature of transaction authentication (a majority of the network must agree before a transaction is validated). For a financial transaction, the “truth” you’re trying to arrive at is relatively simple, and related to the ATM example earlier:

If this person has not spent this money in a previous transaction, this person still has the money

Essentially the Bitcoin blockchain is a list of transactions that can be used to work out who owns what, based on a truthful account of what transactions have been made in the past. However, what happens when the transactions you’re writing to the blockchain aren’t so clear?
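
As a toy sketch (real Bitcoin tracks unspent transaction outputs rather than balances, but the principle is the same), the double-spend check is just a walk through the immutable history:

    def still_owns(ledger, owner, coin_id):
        # if the full transaction history shows this owner ever sending
        # the coin away, they no longer have it to spend
        for tx in ledger:
            if tx["coin"] == coin_id and tx["sender"] == owner:
                return False
        return True

    ledger = [{"coin": "c1", "sender": "alice", "recipient": "bob"}]
    still_owns(ledger, "alice", "c1")  # False: alice already spent it
    still_owns(ledger, "bob", "c1")    # True: the history says bob holds it

The question the ledger answers is binary and entirely contained in its own data - which is exactly why it works.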

Provenance is a company attempting to increase trust in supply chains by writing data to an immutable blockchain to demonstrate that, for example, fish are caught sustainably. Where is the truth in this example? On one level you have a simple question: was this fish caught in a certain area of the ocean? That’s something a computer can answer - if the boat has a GPS you can link it to the transaction & automatically collect the data. But the question of “was this fish sustainably caught?” is much more nuanced than that, meaning that we need more nuanced data. What about how much the fishermen are getting paid to catch the fish? Even if the person recording the data is paid well, what if they’re employing workers on below minimum wage? What if - like Ryanair - on the surface it looks like they’re being treated well, but a complex system of companies means that they’re actually not getting the right employment rights?

Provenance gives phones to fishermen to help them record data to the blockchain.

When you ask a sustainability consultant to assess whether fish is sustainably caught, the research conducted won’t be binary; the data collected will include a value judgement based on factors other than the raw data. The data entered onto the blockchain, on the other hand, must be binary truth data that a computer can understand: this fish was either caught sustainably or it wasn’t.
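
A hypothetical illustration of that collapse (the field names here are mine, not Provenance's): a multi-dimensional, partly subjective assessment goes in, and a single bit comes out onto the chain.

    PERMITTED_ZONES = {"zone_71", "zone_77"}  # hypothetical catch areas

    def consultant_assessment(catch):
        # the real-world judgement is nuanced and partly unknowable
        return {
            "caught_in_permitted_zone": catch["gps_zone"] in PERMITTED_ZONES,
            "crew_paid_fairly": None,  # often invisible in the raw data
            "stock_levels_healthy": "contested",
        }

    def record_on_chain(assessment):
        # the chain only takes binary truth data, so everything
        # unresolved above is flattened into a single yes/no
        return assessment["caught_in_permitted_zone"] is True

The GPS test survives the translation; the wage question and the stock question simply disappear.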

The challenge for blockchain’s application to areas beyond financial transactions is exactly this fuzzy area. How can you create a rule-based system when the rules in the real world are more flexible than we might like to admit?

[LINK]

Building an innovation function within a corporate environment


A culture of innovation doesn’t naturally develop in corporate environments that are designed to optimise efficiencies in their core business lines. Innovation involves risk, and well-established companies are designed to mitigate risk. In order to foster innovative thinking and output, a deliberate strategy is needed to ensure that innovation can thrive within a risk-averse, large-scale organisation.

Some corporates seem naturally to have a strong innovation function; examples like Amazon and Google are often cited. Yet even these companies have deliberate and well-planned strategies - whether it’s Amazon’s secretive floor where experimental R&D takes place and only a select few are allowed, or Google’s legendary 20% time, where employees are encouraged to spend time on their own projects and ideas.

There are several approaches that can be taken to develop an innovation function within a large corporate - with some proving more successful than others.

The lab approach

Pepsi's ideation zone. This innovation lab is still missing a 3D printer.

As the pace of technological change quickened towards the late 2000s, many companies responded by creating “labs”. Often housed on floors in corporate headquarters - or, in some cases, in separate, beautifully sleek office blocks - the archetypal lab would be kitted out with the latest tech, like 3D printers and VR headsets, and staffed by a bunch of young, cool tech people. The business case often used to create a lab is: we’ll destroy and then reinvent our current business model in 2-3 years, making the investment worthwhile.

There are a few key problems with the concept of a lab. The first is that employees in the rest of the organisation often have no idea the lab exists. If they do know about it, they’re often not thrilled about it. After all, the lab's remit is to completely shake up the way that the company works - for better or for worse as far as the staff are concerned.

Secondly, labs often over-promise and then fail to deliver. Inventing a completely new way of doing business in a couple of years is a monumental challenge, and considering start-up failure rates, it’s unlikely that the lab will be able to produce the required results. Consequently, labs often get closed down and swept under the rug, leading to the all-too-common scenario:

Person 1: Did you hear we had a lab open in the US designed to help us innovate?
Person 2: Didn’t you hear? The lab closed down a month ago as they weren’t delivering any innovation.

Pros of the lab approach: Good PR value to show you’re investing in innovation.

Cons of the lab approach: Difficult to communicate and share with the internal team.

The investment approach

Wayra, Telefónica's accelerator, is one of the best-known corporate investment strategies.

Many corporates are quick to admit that they will never be the fastest or most nimble. Instead of trying to create an internal function, why not outsource the need to innovate to an external partner? This partner is often an expert in running hack days or accelerators: events designed to bring in external startups to invest in or offer partnerships to. Though this might seem beneficial from a financial point of view - your bet is hedged, as you only invest in companies you think have a good idea - it often doesn’t work out exactly as planned.

Whilst the hope is that the start-up way of working will ‘rub off’ on the staff within the large company, all too often the accelerator or hack day is run in isolation from the core business. External partners are used to manage the external companies, and whilst a small number of liaison staff may get to see and speak with the startups, in an organisation with many thousands of employees it’s unlikely that the impact will be particularly great.

Pros of the accelerator approach: A de-risked approach that might result in a return if you follow through on partnerships and investments.

Cons of the accelerator approach: The innovation is restricted to external companies and there’s little transformation internally.

The all hands approach

It could be argued that, in the same way everyone in a business is responsible for driving growth, everyone is also responsible for driving innovation. Two of the most common barriers to people innovating internally are, first, a lack of digital skills and knowledge, and second, a stigma around innovating within the business. This stigma often leads to conversations like this:

Person 1: What are you up to?
Person 2: Oh, I’m just doing a bit of innovation.
Person 1: Can you please get back to work.

In addition to this, fear of failure within high-pressure environments often stops people getting involved in projects or ideas where there is a perceived high probability of failure. It’s often these projects which are the truly innovative and transformative ones.

Embracing the all hands approach is a big commitment. Solving the digital skills problem will require a huge investment in training. Overcoming the innovation stigma may require a change in company structure, and a change in employee reviews will almost certainly be needed (GE did exactly this) - making it mandatory to document something you’ve failed at or innovated in each year, forcing people to spend some of their time thinking about high-risk, high-reward projects.

The benefit of the all hands approach is that, when done right, it can be truly transformational - and the sort of innovation you get isn’t just “moonshot” ideas. By empowering everyone to execute innovative ideas and supporting them in doing so (not just saying it, but actually putting metrics against it), you get a huge amount of incremental innovation as a result.

Pros of the all hands approach: A business-transformational approach to innovation that can result in innovation across every business unit.

Cons of the all hands approach: Huge investment required, as well as a potential restructuring of the organisation. Also, does everyone want to or need to innovate?

The Innovation Champion approach

Innovation isn’t everyone’s bag. Would you want your pilot to be innovative? Or your taxi driver? Sometimes it’s important to stick to the letter of the law and do things as you’ve always done them. In the same way, there will be areas of any large organisation where innovation isn’t desirable. Of course, a knowledge of what’s needed to be an innovative company is important for people in areas like compliance and accounting, but this is a different mindset to those working in new product development or client relations.

By identifying core areas of the business where innovation might be beneficial, as well as - importantly - identifying people who would be keen to drive this change, large organisations can create multidisciplinary teams that work autonomously as mini start-ups within the organisation. Importantly, these groups are embedded amongst the traditional business, so their ways of working, thinking and training really will rub off on their colleagues.

In order for this approach to work, the innovation champions have to be given additional training, but also be encouraged to share that training across the organisation as a whole. As well as this, they need to operate on much longer-term metrics and be accountable for the results (Eric Ries talks about this extensively in his “innovation accounting” writing).

If executed correctly, the innovation champions can spark a culture change in a large organisation, precipitating the drive towards a more innovative culture as a whole.

Pros of the innovation champion approach: A way to tactically accelerate innovation within identified areas of the business. Very targeted and precise.

Cons of the innovation champion approach: Relatively large investment required, needs to be accompanied by a culture shift.

Of course there isn’t a single right approach, but the methods outlined above hopefully give a flavour of the sort of strategies that large companies have employed in the past to try and drive innovation forward.

[LINK]