Artificial Intelligence, Ethics & GDPR

Discretion has been a critical part of society’s decision making for as long as societies have existed. In legal systems, for example, rather than handing down absolute decisions, judges reach proportionate ones based on mitigating and aggravating factors. A key responsibility of a judge is to write a judgement (and you can read them if you like); without explanation and justification, the conclusion a judge comes to is invalid.

Discretion is also a key part of philosophy & ethics. In the classic philosophical thought experiment, the trolley problem, you have the choice of killing one person to save five by switching a lever on the track. In this thought experiment the outcome is the least important element. Simply to assert that "I'd flick the lever" is to misunderstand the question. Explaining how you reach your decision gives rise to some of the most important ethical and legal questions, and discussing the topic helps us analyse what sort of society we live in and what values we live by. These are not trivial questions: killing one to save five turns you into judge and jury, weighing one life against another. Inaction demonstrates your indifference to human life: if you won’t kill one to save five, would you kill one to save fifty, or ten thousand? And what if that one was your child?

In most studies conducted, 90% of people given the option would kill one to save five. The psychologist David Navarrete even ran this experiment in virtual reality, with the one person screaming to add an element of realism (I'm unsure how this passed the ethics committee...). Nine out of ten people still flicked the lever. However, if the one person is your child, parent or sibling, this goes down to 33%. The discretionary factors tell us something about the society we live in (whether we like it or not): we're broadly utilitarian, but this changes dramatically if we know or love the people involved.

The role of artificial intelligence in this sort of decision making is becoming more relevant every day. As AI replaces humans as the drivers of cars, the diagnosers of disease and the arresters of criminals, the traditional discretion-based “fuzzy” decision making that has fuelled ethical debates since Ancient Greece is being replaced.


MIT has already applied the classical trolley problem to a self-driving car that has suffered a brake failure, allowing you to create elaborate and often ridiculous ethical situations. Should a self-driving car kill the four people in the car or the people crossing the road? What if some of those people are elderly? Or babies? Or dogs? Though the situation sounds absurd, it is an indication of the real-world artificial decision making that will soon be taking place. Germany has recently issued legislation saying that a self-driving car “must do the least amount of harm if put into a situation where hitting a human is unavoidable, and cannot discriminate based on age, gender, race, disability, or any other observable factors”. This is a discretion-blind, politically correct & ultimately easy piece of legislation to pass. To parallel it back to the trolley situation, it is like saying “thou shalt not kill” - but if that were enough, why do 90% of people flick the switch?

And this isn’t the only problem. For a start, how can a self-driving car predict what sort of damage will be done to each individual? What about the damage to those in the car? And what if the people involved are relatives of the driver - does that make a difference? If artificial intelligence is to be reflective of the society we live in, should we programme in photos of family and friends to say that under no circumstances should the car harm these people, as the trolley experiment findings would suggest?
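To make that gap concrete, here is a minimal, hypothetical sketch of what the German rule might reduce to in code: pick whichever manoeuvre minimises an estimated total harm, with the protected attributes deliberately excluded. Every name here (Person, Manoeuvre, estimate_harm) is illustrative rather than any real system, and the harm estimate is precisely the part the legislation leaves unspecified.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int          # observable, but must NOT influence the decision
    in_vehicle: bool  # occupant or pedestrian

@dataclass
class Manoeuvre:
    name: str
    affected: list        # the people this manoeuvre puts at risk
    impact_speed: float   # estimated speed at the moment of impact

def estimate_harm(person: Person, impact_speed: float) -> float:
    # The unstated hard part of the rule: harm can only be guessed from crude
    # proxies like impact speed, and nothing about the person may be used.
    return impact_speed ** 2  # toy proxy: harm grows with the square of speed

def least_harm(options: list) -> Manoeuvre:
    # "Do the least amount of harm": sum estimated harm per option, pick the minimum.
    return min(
        options,
        key=lambda m: sum(estimate_harm(p, m.impact_speed) for p in m.affected),
    )

# Toy example: swerve towards one pedestrian, or brake towards the four occupants.
swerve = Manoeuvre("swerve", [Person(age=30, in_vehicle=False)], impact_speed=40.0)
brake = Manoeuvre("brake", [Person(age=70, in_vehicle=True)] * 4, impact_speed=25.0)
print(least_harm([swerve, brake]).name)
```

Notice that all of the ethical content is hidden inside estimate_harm, which in this toy version can only guess from impact speed; the rule says nothing about occupants versus pedestrians, relatives, or any of the discretionary factors the trolley findings suggest we actually care about.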


These AI ethical codes are common in today’s world. Sage, the SaaS accountancy platform, has an “ethics of code” including principles like “AI must be held to account — and so must users.” But when we say “held to account”, the question is: to whom? In the self-driving car example, a decision will be made based on factors that the car has seen before - creating an average of human biases and morality. Pretending that self-driving cars can always make the “ethical” and “right” decision is to misunderstand the entire point of ethics. There is no right answer. Whether I kill one to save five or ignore the train tracks altogether, no decision is ideal or right.

Perhaps the answer is to prevent computers from making ethical decisions at all. In the GDPR legislation coming into force next year, some of the key individual rights introduced are the “rights related to automated decision making including profiling”. This essentially means that any decision making process carried out “without human intervention” will have to be documented - and that if decisions are made in this way, the individual has a right to “obtain human intervention”. In practice this will mean automated decisions being made, a user appealing, and a “human” within the business simply rubber-stamping the computer’s judgement.
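As a purely illustrative sketch (none of these names come from the GDPR text or any real compliance tooling), the workflow the regulation implies might look something like this: every automated decision is recorded, and an appeal routes the record to a human reviewer who, more often than not, confirms what the model already decided.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str                  # e.g. "loan_refused"
    model_version: str            # which automated process produced the decision
    made_without_human: bool = True
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review: Optional[str] = None

def automated_decision(subject_id: str) -> DecisionRecord:
    # The documented, fully automated step the legislation is concerned with.
    return DecisionRecord(subject_id=subject_id, outcome="loan_refused",
                          model_version="scoring-model-v3")

def obtain_human_intervention(record: DecisionRecord, reviewer: str) -> DecisionRecord:
    # The "right to obtain human intervention": a person looks at the same
    # inputs the model saw. The path of least resistance is to confirm the
    # original outcome - the rubber stamp.
    record.made_without_human = False
    record.human_review = f"{reviewer}: original outcome upheld"
    return record

record = obtain_human_intervention(automated_decision("user-42"), reviewer="case handler")
print(record)
```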

Despite this legislation, the ethical and societal questions remain essentially unsolved. The words “Artificial Intelligence” belie the simplistic principles that underpin AI. AI doesn’t know why it makes the decisions that it does, just that it is solving the problem in the most efficient way possible. Asking a piece of artificial intelligence software why it made a decision is like asking a giraffe why its neck is so long. Natural selection provides a single key performance indicator, survival, and animals experiment and mutate in the hope that they’ll survive for longer than their peers. There are no ethical systems outside the human world because ethics is a human construct. Simply asserting that AI must play by the “right” ethical rules is simplistic and dangerous: it creates a one-size-fits-all model that leads to compliance box-ticking rather than substantive change.
