Biased bots: How do we teach them ‘good values’, and is that even possible?

Tyagarajan S September 6, 2016 6 min

This is the second part of a piece on self-learning algorithms that imbibe human prejudices from the people who create them, and what effect this may have on us in the future as Artificial Intelligence (AI) becomes more and more integrated with our lives.

Read the first part here

In this piece, we look at how we can weed out biases from bots, and whether it is possible, or indeed desirable, to do so:

Speaking on an a16z podcast, “When humanity meets AI”, Dr Fei-Fei Li (director of the Stanford AI Lab) identified ‘humanistic thinking’ as a key gap in developing technology, especially AI. In her view, creator biases can be overcome and a ‘humanistic’ narrative can emerge only if the pool of people working on AI is diverse across sexes, races, ethnicities and cultures.

Jeffrey Dean, head of the Google Brain Team working on AI research, is concerned about the lack of diversity in the AI research community. “In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary,” he said during a recent Reddit AMA.


However, the Google Brain Team itself has token diversity at best: as of August 2016, it listed only three women (and 40 men) to represent the thinking of nearly half the world.

Perhaps this is the reason Google, Facebook, Microsoft and Amazon are all open-sourcing their AI platforms. Open-sourcing doesn’t fix the ‘dudes working on software’ problem and may not solve the deeper diversity issues, but it’s a start, at least.

But are countries around the world ready to join in defining this future?

Controls

In December 2015, Elon Musk and a group of other partners formed OpenAI, a non-profit artificial intelligence research company. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible,” says its mission statement.

Non-profit research is one part of the solution towards building a transparent, unbiased AI platform, although mass adoption and development are still likely to be controlled by the large AI despots that will emerge.

Today, AI is built in a regulatory vacuum. It is notoriously difficult to regulate: there is no common framework for development, a multiplicity of actors work on it, the underlying algorithms are opaque, and the kinds of problems that can arise are hard to foresee. Governments are still struggling even to define artificial intelligence legally.


Perhaps AI needs watchdogs that monitor and control it even before governments can start regulating. The Future of Life Institute, one of the very few such organizations, works to ensure that tomorrow’s powerful technologies are put to beneficial uses.

However, the lack of transparency of, and insight into, the decision algorithms and code that underlie AI systems makes all of this difficult. This fundamental part of the puzzle is what the Machine Intelligence Research Institute focuses on: designing approaches that make AI more transparent and more easily understood by humans.

Yet the uncertainty over how AI will be regulated, and what controls its developers will be subject to, leaves open the concern over how AI will be influenced by us, and how it will influence us in turn.

Humanistic development

A human child is raised in a protective bubble for years before being allowed to roam the open world and learn of its own volition. AI today is still in its infancy, and similarly child-like.

Image: IBM’s Watson playing Jeopardy!

Books are the largest, ongoing compendium of human civilization’s record of itself, and one line of research suggests using the corpus of stories in our books to teach AI systems about human values before letting them roam free.
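To make that idea concrete, here is a deliberately toy sketch in Python. It is not the actual research system: the mini ‘corpus’, the word lists and the scoring rule are all invented for illustration. It simply scores candidate actions by whether they co-occur with approving or disapproving language in stories.

```python
# Toy illustration of extracting a value signal from stories.
# All sentences and word lists below are invented for this example.
APPROVE = {"thanked", "praised", "rewarded"}
DISAPPROVE = {"arrested", "punished", "shunned"}

stories = [
    "the knight returned the lost purse and the villagers thanked him",
    "the thief stole the purse and the guards arrested him",
    "the doctor helped the stranger and was praised by the town",
    "the trader cheated his customers and was shunned by the town",
]

def value_score(action_word: str) -> int:
    """Crude value estimate: approving minus disapproving co-occurrences."""
    score = 0
    for sentence in stories:
        words = set(sentence.split())
        if action_word in words:
            score += len(words & APPROVE) - len(words & DISAPPROVE)
    return score

for action in ("returned", "stole", "helped", "cheated"):
    print(action, value_score(action))  # returned/helped score +1, stole/cheated -1
```

Crude as it is, the sketch makes the central worry visible: whichever corpus you pick decides what counts as ‘good’. A library of Western novels and a library of folk tales from elsewhere would hand the machine different scoreboards.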

However, with language being a significant determinant of the value-narrative that can be derived from books, would the cultural and social values of the large, non-Western parts of the world still be ignored?

Even more fundamental is the question: is our definition of right and wrong, of morality, so static that machines can learn it from books and gather a clear understanding from them? For, unlike human children, the learning of AI systems will accumulate and solidify over time periods beyond a lifetime, and early biases will tend to lurk for eternity.

Luke Muehlhauser, former executive director of the Machine Intelligence Research Institute, believes this is a dangerous assumption: “I shudder to think what would have happened if the Ancient Greeks had invented machine superintelligence, and given it some version of their most progressive moral values of the time. I get a similar shudder when I think of programming current human values into a machine superintelligence.”

Simply put, there is no way to be sure that our current values represent what’s best for humanity in the long run.

At the very least, any attempt to train AI through codified moral and cultural values must look at the world as a diverse whole rather than through the lens of the Western world. Perhaps a community of trainers around the world could provide AI with a richer, more diverse set of cultural and moral values (some even conflicting) to create a balanced perspective.

A safe training bed

Even more radical is this thought experiment: what if there existed an AI framework, a ‘Mother AI’ if you will, that provided the necessary moral, ethical and social checks and balances for all AI platforms being developed in the world? While this would not solve the primary problem of how to train the Mother AI itself, it would consolidate the moral and ethical questions onto one platform that could be monitored, worked on and constantly improved through collective effort. All AI platforms would need to hook onto it as a rule, ensuring that they fall under a standard, ongoing framework of ‘goodness’.
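As a sketch of the thought experiment only, the hook might look something like the following Python outline. The MotherAI class, its rule set and the Verdict type are all invented here; no such framework exists.

```python
# Hypothetical sketch of the 'Mother AI' idea: a single shared checkpoint
# that every AI system must consult before acting. All names are invented.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

class MotherAI:
    """Central, collectively maintained store of moral and ethical rules."""

    def __init__(self):
        # In the thought experiment, these rules would be curated and
        # continuously improved by a global community, not hard-coded.
        self.banned_intents = {"deceive_user", "discriminate", "cause_harm"}

    def review(self, agent_id: str, intent: str) -> Verdict:
        """Every AI platform would route its decisions through this gate."""
        if intent in self.banned_intents:
            return Verdict(False, f"{agent_id}: intent '{intent}' violates shared norms")
        return Verdict(True, "approved")

# Two hypothetical downstream systems consulting the same shared checkpoint.
gate = MotherAI()
print(gate.review("loan-scoring-bot", "discriminate"))   # blocked
print(gate.review("chat-assistant", "answer_question"))  # approved
```

Even this toy version shows where the hard problem hides: someone still has to decide what goes into banned_intents, which is exactly the training question the thought experiment defers.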


Clean AI could accelerate the march of humanity beyond its primitive divisions; it could be one of the biggest milestones in human progress. On the other hand, it has destructive power beyond that of any dangerous technology we have ever built. It could keep us mired in our own primitive beliefs by reinforcing them back to us. If that happens, we’ll end up in a world powered by bots and self-driving cars but filled with minds limited by prejudice and bigotry.


Disclosure: FactorDaily is owned by SourceCode Media, which counts Accel Partners, Blume Ventures and Vijay Shekhar Sharma among its investors. Accel Partners is an early investor in Flipkart. Vijay Shekhar Sharma is the founder of Paytm. None of FactorDaily’s investors have any influence on its reporting about India’s technology and startup ecosystem.