
Are we building unethical AI systems?

Have we built systems that perpetuate prejudices we’ve fought for decades?

There’s a lot of hype around Artificial Intelligence, and for good reason: investors are pouring money into promising AI solutions, and these systems will be deployed at scale, affecting billions of lives. Could we be doing something really wrong while building them?

A while back, a team of machine learning specialists built an AI system for Amazon.com to help with its recruiting process. The idea was to build an engine that could take in a hundred-odd résumés and spit out the top five candidates, ready to hire right away. Sounds like an HR dream come true. Except it wasn’t. Machine learning relies on training data to “learn” from.

Training data is the data you feed to an AI system so that it can draw conclusions about different scenarios and predict outcomes for new, unseen data.
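
To make that concrete, here is a minimal sketch of how a model trained on skewed historical data simply reproduces that skew. The numbers and features are hypothetical and purely illustrative; this is a toy logistic model, not Amazon’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical hiring data: 900 male applicants, 100 female.
n_male, n_female = 900, 100
gender = np.concatenate([np.ones(n_male), np.zeros(n_female)])  # 1 = male, 0 = female
skill = rng.normal(0.0, 1.0, n_male + n_female)                 # skill distributed identically

# Past "hired" decisions depended partly on gender, i.e. the humans were biased.
hired = (0.3 * skill + 1.5 * gender + rng.normal(0, 0.5, n_male + n_female)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill, different gender:
print(model.predict_proba([[1.0, 1.0], [1.0, 0.0]])[:, 1])
# The model gives the male applicant a much higher "hire" probability,
# because it has faithfully learned the historical bias, not merit.
```

The model never sees the word “bias”; it just finds whatever patterns best explain the past decisions it was shown, including the discriminatory ones.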

Amazon’s model was trained on data gathered over a 10-year period in which male applicants vastly outnumbered female ones. The end result? The system favoured male candidates, as though men were inherently better suited for tech roles. The system was edited to combat this issue, but there was no guarantee it wouldn’t learn to discriminate in other ways. Which raises the question:

Have we built systems that perpetuate prejudices we’ve fought for decades?

For a while now, this has been a matter of concern among experts. We are increasingly relying on Artificial Intelligence to solve the world’s most pressing problems.

In 2016, lawyers for Eric Loomis argued before the Supreme Court of Wisconsin that their client had been discriminated against by a computer algorithm.

Three years earlier, Loomis had been found guilty of attempting to flee police and operating a vehicle without the owner’s consent. Under the sentencing guidelines, the judge was not required to impose a prison sentence. But the judge consulted COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a software system that predicts whether someone is likely to reoffend, and sentenced Loomis to six years in prison.

Was it fair to do so? Can a computer be wrong?

Beyond this, it is also clear by now that a model trained on one population may not perform well on another.
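
A tiny, hypothetical sketch of that failure mode: fit a toy logistic model on one synthetic population, then score it on another where the relationship between features and outcome is different. All numbers here are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_population(weights, n=2000):
    """Synthetic population whose outcome depends on the given feature weights."""
    X = rng.normal(size=(n, 2))
    y = (X @ np.array(weights) + rng.normal(0, 0.5, n)) > 0
    return X, y

# The feature-outcome relationship differs between the two populations.
X_train, y_train = make_population([2.0, 0.5])    # population the model is built on
X_deploy, y_deploy = make_population([0.5, 2.0])  # population it is deployed on

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on training population:  ", round(model.score(X_train, y_train), 2))
print("accuracy on deployment population:", round(model.score(X_deploy, y_deploy), 2))
# Accuracy drops sharply on the second population: the rule the model learned
# simply does not transfer.
```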

What is the solution?

To start with, there is no quick fix. Much of the problem lies in the top-down approach used to build these systems, shaped largely by the cultures of developed nations and the biases inherent in them.

It has become increasingly clear that we need an iterative approach to building such systems, one that takes in a diversity of perspectives. A good way to do this is to adopt a multidisciplinary approach to problem-solving in AI: human rights experts working with technology experts to put social context into binary systems.

And while ethics is a broad topic in itself, involving government and society in building these systems is a more holistic approach if we want ethical AI.


The MIT-IBM Watson AI Lab seems to understand this well: it uses contractual approaches to ethics to describe the principles people use in decision-making and to determine how human minds apply them.

End Notes

AI systems devoid of ethical principles are merely amplifiers of human bias, and that makes them badly built systems.

On the other hand, if built carefully, AI systems that learn to recognise inconsistencies in human decision-making could also point out when we are being biased and parochial, nudging us towards more impartial, egalitarian views. In essence, they could be teaching us to be better humans!

What are your thoughts?