Bitcoin, the environment & Crypto Offsetting

Bitcoin may be good at many things, but being a decent transactional currency isn't one of them. As an ING economist argued today, Bitcoin's transaction fees and slow speeds mean that buying and selling things with the cryptocurrency simply isn't realistic. Even the most vocal supporters of Bitcoin admit that it will never replace fiat currencies as a way to transact on a day-to-day basis. Instead, they argue, Bitcoin is really a 'store of value,' more similar to gold than the US dollar.

There is a problem with Bitcoin as a store of value, though. Whereas gold is a metal dug out of the ground, a Bitcoin is really a representation of an amount of past computing power. It takes a certain amount of energy to crack a computational puzzle and generate Bitcoin. It's easy in the beginning, but as more computers join the network the challenge gets harder and harder. These computations are also performed every time a transaction is registered on the network.
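
To make the "computational puzzle" concrete, here is a minimal sketch of the proof-of-work idea in Python. It is a deliberate simplification: real Bitcoin mining hashes a block header twice with SHA-256 against a numeric target, not a string of leading zeros.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected hashing (and energy) by 16.
print(mine("example block", difficulty=4))
```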

The energy consumed by every 100 Bitcoin transactions could power a home for an entire year - and the network's energy consumption increases every day.

To demonstrate just how damaging the environmental impacts of Bitcoin are, I created Crypto Offset - a website designed to help you calculate and then offset the damage done by holding crypto currency.

Despite Bitcoin’s idealist roots, it is substantially more environmentally damaging than all other world currencies put together. As it’s a decentralised network, and proof of work is central to how Bitcoin functions, it’s unlikely that this gas guzzling will stop any time soon. Hopefully future crypto currencies will take lessons from this and aim to reduce the environmental devastation of the technology.

[LINK]

How to catch a thief

I recently had the pleasure of being featured in a Barclays video about fraud online. Cyber fraud is a critical global issue, and a large amount of it happens because people, rather than technology, are compromised.
You can watch the video here.

[LINK]

Artificial Intelligence, Ethics & GDPR

Discretion has been a critical part of society's decision making for as long as societies have existed. Rather than making absolute decisions in legal systems, for example, judges are able to reach proportionate decisions based on mitigating and aggravating factors. A key responsibility of a judge is to write a judgement (and you can read them if you like); without explanation and justification the conclusion a judge comes to is invalid.

Discretion is also a key part of philosophy & ethics. In the classic philosophical thought experiment, the trolley problem, you have the choice of killing one person to save five by switching a lever on the track. In this thought experiment the outcome is the least important element. Simply to assert that "I'd flick the lever" is to misunderstand the question. Explaining how you reach your decision gives rise to some of the most important ethical and legal questions. Discussion about the topic helps us to analyse what sort of society we live in and what values we live by. These are not trivial questions: killing one to save five turns you into judge and jury, weighing one life against another. Inaction demonstrates your indifference to human life: if you won't kill one to save five, would you kill one to save fifty, or ten thousand? And what if that one was your child?

In most studies conducted, 90% of people given the option would kill one to save five. The psychologist David Navarrete even conducted this experiment in virtual reality, with the one person screaming to add an element of realism (I'm unsure how this passed the ethics committee...). Nine out of ten people still flicked the lever. If the one person is your child, parent or sibling, though, this goes down to 33%. The discretionary factors tell us something about the society we live in (whether we like it or not): we're broadly utilitarian, but this changes dramatically if we know or love the people involved.

The role of artificial intelligence in this sort of decision making is becoming more relevant every day. As AI replaces humans as the drivers of cars, the diagnosers of disease and the arresters of criminals, the traditional discretion-based "fuzzy" decision making that has fuelled ethical debates since Ancient Greece is being replaced.


MIT has already applied the classical trolley problem to a self-driving car that has had a brake failure, allowing you to create elaborate and often ridiculous ethical situations. Should a self-driving car kill the four people in the car or the people crossing the road? What if some of those people are elderly? Or babies? Or dogs? Though this situation sounds absurd, it is an indication of real-world artificial decision making that will soon be taking place. Germany has recently issued legislation saying that a self-driving car "must do the least amount of harm if put into a situation where hitting a human is unavoidable, and cannot discriminate based on age, gender, race, disability, or any other observable factors". This is a discretion-blind, politically correct & ultimately easy piece of legislation to pass. To draw a parallel back to the trolley situation, this is like saying "thou shalt not kill" - but if that is the case, then why do 90% of people flick the switch? And this isn't the only problem. For a start, how can a self-driving car predict what sort of damage will be done to each individual? What about the damage to those in the car? And what if the people involved are relatives of the driver - does that make a difference? If artificial intelligence is to be reflective of the society we live in, then should we programme in photos of family and friends to say that under no circumstances should we harm these people, as in the trolley experiment findings?


These AI ethical codes are common in today's world. Sage, the SaaS accountancy platform, has an "ethics of code" including principles like "AI must be held to account — and so must users." But when we say "held to account", the question is: to whom? In the self-driving car example, a decision will be made based on factors that the car has seen before - creating an average of human biases and morality. Pretending that self-driving cars can always make the "ethical" and "right" decision is to misunderstand the entire point of ethics. There is no right answer. Whether I kill one to save five, or ignore the train tracks altogether, no decision is ideal or right.

Perhaps the answer is to prevent computers from making ethical decisions at all. In the GDPR legislation coming into force next year, some of the key individual rights introduced are the "rights related to automated decision making including profiling". This essentially means that any decision-making process carried out "without human intervention" will have to be documented - and that if decisions are made in this way the individual has a right to "obtain human intervention." In practice this will mean automated decisions being made, a user appealing, and a "human" within the business simply rubber-stamping the computer's judgement.

Despite this legislation the ethical and societal questions remain essentially unsolved. The words "Artificial Intelligence" belie the simplistic principles that underpin AI. AI doesn't know why it makes the decisions that it does, just that it is solving the problem in the most efficient way possible. Asking a piece of artificial intelligence software why it made a decision is like asking a giraffe why its neck is so long. Natural selection provides a key performance indicator - survival - and animals experiment and mutate in the hope that they'll find a way to survive for longer than their peers. There are no ethical systems outside the human world because ethics is a human construct. Simply asserting that AI must play by the "right" ethical rules is simplistic and dangerous: it creates a one-size-fits-all model that leads to compliance box-ticking rather than substantive change.

[LINK]

The next technology revolution will come from a government


Whilst cryptocurrencies like Bitcoin and Ethereum continue to accelerate in value, debates about the true value of these currencies and the blockchain technology on which they're based have continued to rage. At the Blockchain Summit last week in London, companies and entrepreneurs speculated about a world where everything - from your driverless car to your AI assistant - will eventually be "blockchainised". The reasons given ranged from efficiency savings to, as a consultant NHS anaesthesiologist put it, "the fact that no one trusts governments or authorities any more." Ironically, though, what is holding back the development of blockchain technology is the fact that at the moment people do trust centralised authorities more than decentralised technologies.


One only has to look at images of the Bitcoin mines where precious coins are mined to think that regulation may be a good thing - if only to stop the devastating environmental effects that use of the Bitcoin and Ethereum blockchains causes. In addition to this, speculation about the regulatory compatibility of cryptocurrencies creates a huge amount of uncertainty. When a blockchain is meant to create the most trusted conditions for transactions to execute, it's slightly off-putting that governments will not officially endorse the technology.

And yet blockchain, and particularly smart contracts, does have the opportunity to revolutionise the way that transactions about the future are perceived and executed. What all the blockchains you might hear about currently lack is trust. If I don't know whether Ethereum is going to be around in five years, because it will be replaced by its sexy successor NEO, it's unlikely I'm going to feel comfortable purchasing a twenty-year bond based on Ethereum. The only institutions in the world that have the authority needed to back an open & public blockchain are governments themselves.

The Chinese government has countless think tanks looking into the benefits of blockchain, as well as a consortium of companies like Tencent and Alibaba that will ultimately come together to create a state-sponsored blockchain. Like GPS - a technology developed by the US military and given (currently) free to the world - a state-sponsored blockchain will be the trusted, open and public source of truth in the future. The only question is: which country will get there first? One thing is for sure: it's extremely unlikely that country will be Britain. Whilst Estonia has already experimented with minting its own cryptocurrency and launching a blockchain, the UK government didn't even send anyone to the London-based Blockchain Summit.

When the implications of blockchain technology are fundamentally political and social, it is desperately sad that the authority that should be taking an interest in regulating and adopting this technology is not just silent but deeply ignorant about its capabilities. Whilst China ploughs ahead with brave blockchain regulation and large-scale investment, the British government - at a time when we need innovation most - remains firmly rooted in the past.

[LINK]

Blockchain & the Truth

Computers and humans think in a fundamentally different way when it comes to the concept of "the truth". A computer's understanding of the truth is based on if statements: decisions that give a binary outcome based on a piece of supplied data. To work out if I can take money out of an ATM, somewhere in the depths of the ATM's code base will be something along the lines of:

If the amount in this person's account is greater than amount being withdrawn, allow the withdrawal

If I weren't telling the truth - attempting to withdraw more money than was in my account - the ATM can easily tell, because it has access to a data set of truth (my account balance) that it can compare with the data I supply (my request for cash). The truth here is always true and impossible to manipulate (without directly hacking the bank): a computer cannot be fooled when given the correct data.
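
In code, that check is nothing more than a comparison against a trusted data source. Here's a minimal sketch in Python - the function name and values are hypothetical, not any bank's real logic:

```python
def allow_withdrawal(account_balance: float, requested_amount: float) -> bool:
    """The ATM's entire notion of 'truth': compare the request against the bank's own record."""
    return account_balance >= requested_amount

# The bank's record (the data set of truth) versus what the customer asks for.
print(allow_withdrawal(account_balance=150.00, requested_amount=200.00))  # False - withdrawal refused
```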

When it comes to humans, there is a much more nuanced and discretionary understanding of the truth. As the magician and psychologist Derren Brown has demonstrated, even a basic question like “can I buy this item” can be manipulated based on emotional and subconscious cues. This allowed him to walk into a jewellery shop in the US and buy a necklace with blank paper instead of bank notes.


He gave the wrong data (which would have failed the ATM's IF test), but the way that he supplied it (with a healthy dose of distraction and manipulation) caused the shopkeeper to make an incorrect judgement about what was true and what was false.

Even in one of the most supposedly objective systems - the law - there are many more nuances than we think. In fact, the entire judicial system is, in many ways, a result of the necessity for discretion. Killing is wrong, but under certain circumstances and with certain mitigations it becomes right. There is rarely a binary answer as there is in computing, and so the translation of these nuances into absolutes becomes a challenge for a computer scientist.

This becomes problematic when one considers the implications of this “fuzzy thinking” for a technology that gives the impression of authority: blockchain. By having an “immutable” ledger many proponents of blockchain suggest that the issue of lying can be reduced through a combination of an incorruptible audit trail (you can see all transactions that have ever been entered into the blockchain) and the decentralised nature of the transaction authentication (you need 50% or more of global agreement to validate a transaction). For a financial transaction, the “truth” you’re trying to arrive at is relatively simple, and related to the ATM example earlier:

If this person has not spent this money in a previous transaction, this person still has the money
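
Here's a toy sketch of that rule, assuming a drastically simplified ledger in which every coin carries an identifier (Bitcoin's real model tracks unspent transaction outputs rather than named coins):

```python
from typing import NamedTuple

class Tx(NamedTuple):
    sender: str
    receiver: str
    coin_id: str  # simplification: pretend each coin carries an identifier

# A toy ledger: the full, public history of every transfer of coin-1.
ledger: list[Tx] = [
    Tx("network", "alice", "coin-1"),  # coin issued to alice
    Tx("alice", "bob", "coin-1"),      # alice spends it; bob now holds it
]

def can_spend(sender: str, coin_id: str) -> bool:
    """A coin is spendable only if the ledger's latest entry for it names the sender as its holder."""
    owner = None
    for tx in ledger:
        if tx.coin_id == coin_id:
            owner = tx.receiver
    return owner == sender

print(can_spend("bob", "coin-1"))    # True - bob currently holds the coin
print(can_spend("alice", "coin-1"))  # False - alice has already spent it
```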

Essentially the Bitcoin blockchain is a list of transactions that can be used to work out who owns what, based on a truthful account of which transactions have been made in the past. However, what happens when the transactions you're writing to the blockchain aren't so clear?

Provenance is a company that is attempting to increase trust in supply chains by writing data to an immutable blockchain to demonstrate that, for example, fish are caught sustainably. Where is the truth in this example? On one level you have a simple question: was this fish caught in a certain area of the ocean? That's something a computer can answer - if the boat has a GPS you can link it to the transaction & automatically collect the data. But the question of "was this fish sustainably caught?" is much more nuanced than that, meaning that we need more nuanced data. What about how much the fishermen are getting paid to catch the fish? Even if the person recording the data is paid well, what if they're employing workers on below the minimum wage? What if - like Ryanair - it looks on the surface as though workers are being treated well, but a complex system of companies means that they're actually not getting the right employment rights?

[IMAGE: Provenance gives phones to fishermen to help them record data to the blockchain.]

When you ask a sustainability consultant to assess whether fish is sustainably caught, the research conducted won't be binary; the data collected will include a value judgement based on factors other than the raw data. The data entered onto the blockchain, on the other hand, must be binary truth data that a computer can understand: this fish was either caught sustainably or it wasn't.
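
To see how that flattening happens, consider a hypothetical catch record - this is not Provenance's actual schema, just an illustration of how a value judgement gets squeezed into fields a computer can verify:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatchRecord:
    """Hypothetical record written to a supply-chain blockchain."""
    vessel_id: str
    latitude: float           # verifiable automatically from the boat's GPS
    longitude: float          # verifiable automatically from the boat's GPS
    caught_sustainably: bool  # a nuanced human judgement reduced to a single bit

record = CatchRecord(
    vessel_id="boat-042",     # illustrative value
    latitude=-5.98,
    longitude=72.43,
    caught_sustainably=True,  # who decided this, and on what basis?
)
```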

The challenge for blockchain's application in areas beyond financial transactions is exactly this fuzzy area. How can you create a rule-based system when the rules in the real world are more flexible than we might like to admit?

[LINK]