A top Googler talks about the ethics of AI and job losses

Shadma Shaikh and Jayadevan PK · March 20, 2018

Alphabet Inc runs Google, the world’s largest search engine; owns more than half a dozen products with over a billion users each; makes self-driving cars; and sells everything from smartphones to enterprise apps. It is now betting heavily on artificial intelligence, or AI, a technology that has languished in research labs for the better part of its life cycle.

With vast computing power now available over the internet and data becoming more accessible, the promise of AI, which tries to mimic human decision-making, is more real now than ever before. Google’s chief executive officer Sundar Pichai equates the impact of AI on humanity with that of electricity or fire.

For optimists, AI holds the keys to curing cancer or making humanity more productive in a world where machines assist in sundry tasks — repetitive or complex. Yet, for many, it is also the dangerous technological tool that could lead to massive job losses and even unforeseen catastrophes or wars.

Google is one of the few companies with a deep bench of AI experts in its ranks. Among them is Prabhakar Raghavan, vice president of engineering at Google. On the sidelines of a Google event on AI last week, Raghavan, now responsible for Google’s G Suite (its productivity applications for enterprises, including Gmail, Docs, and Calendar), gave a rare interview to a few journalists.

“I wouldn’t say AI is commoditised in any sense yet, but it’s at the point where it’s within the grasp of a much larger audience…there’s this raw economic opportunity in tens of billions of dollars, if not much, much more,” says Raghavan, who earlier worked at IBM Research and Yahoo! Labs. In the interview, he discussed the impact of AI on today’s world, the ethics of practising AI, and took on the question: will AI create or destroy jobs? Edited excerpts:

What are some of the big bets Google is making in AI?

To me, there’s a victory already with the advances in self-driving. And I don’t mean just what Google has done, but the very fact that practically any car manufacturer you buy from, including makers of relatively inexpensive cars, is now embedding some of these AI features. If even one life got saved as a result, that’s a victory for all of us, right? So it’s easy to focus on the breadth and diversity of these applications, but eventually, it has to be about the improvement of humankind.

Beyond that, you get into predicting everything, like floods, helping farmers, and all of that. No matter how you look at it, there’s this raw economic opportunity in tens of billions of dollars, if not much, much more. Experts project it in the trillion-dollar range, and that’s a huge opportunity for the world and certainly for any particular economy like India.

What role is India going to play?

I think a very significant one. What are the ingredients? One of the points I made in my talk is that computing resources are not the bottleneck. Imagine that you have infinite computing; that’s the model that cloud (computing) brings in. It’s a utility like electricity: imagine you have as much as you want. Then the only thing that limits you is imagination. I really believe that if you are imaginative about the problems you want to solve, you can solve anything.

Second, you need a somewhat technically literate workforce. And again, I’m not talking about PhDs in computer science. If you look at that, India is a leader by any measure. So I think all the ingredients are there. Now it’s up to people’s imagination and passion to make things happen. There’s no shortage of ideas here. There’s nothing about geographic boundaries that limits imagination; it’s just what people have in their minds. I certainly don’t buy that we are short of electricity or compute.

(Editor’s note: AI companies have raised billions of dollars in funding, with the US and China leading research and deployment in the field. In contrast, Indian startups have collectively raised less than $100 million. Read how India stacks up in the AI race here.)

You’ve seen technology move from lab to market at Yahoo and IBM. What could be the next big wave or a watershed moment for AI?

There’s no role definition of scientist versus engineer. If you think about moving things rapidly from the lab to the field to mass impact, removing those silos, where one person doesn’t get to say, ‘I am in an ivory tower and I am going to sit here and measure myself one way,’ is important. In the end, we all have to influence and have an impact. At Google, someone who is a highly trained scientist, like Jeff Dean, has a keen interest in market movements, and bridging that is something we have been doing successfully. And we see the world following suit. I don’t think we’ll have a watershed moment where we realise we have put so many petaflops toward solving a healthcare problem, but I certainly hope and pray that whenever humanity cracks something like cancer, computation plays a big role in it and AI plays a big role in it.

I think a lot of it comes back to behaviour. And that’s not a watershed moment; that’s a watershed generation.

Do you spend a lot of time thinking about the ethical aspects of AI? What are your broad thoughts on it?

It’s a very hard and deep question. At some level, ethical challenges in science have faced us independent of the particular science or technology. So it’s not just an AI problem. If you look at CRISPR gene editing, you get frighteningly close to questions like: is something a disease or is it a choice? In a similar fashion, AI is raising all these challenges. What is the right way to treat data, with safe custody? What is the right way to respect the data?

We at Google think about respecting the opportunity and about respecting the users, and if you put those together, it gives you fairly good guiding principles for what to do or not do with data. Some of the ethical considerations implicit in your question are handled if you have good overarching principles that apply to all technology, whether it’s gene editing or AI. It would be a mistake to solve AI’s ethical problems in isolation and then say, wait a minute, we landed somewhere different when it came to CRISPR.

There’s a growing concern that AI would mimic human prejudices around race and gender. How is Google tackling that?

Bias is a concrete aspect of ethical AI. As you clearly know, it’s an early and active subject of research both inside and outside Google. Researchers have looked at this very specific aspect: do algorithms tend to be biased against certain races when predicting recidivism for criminal paroles, for instance? When the studies first came out, we were all shocked. But then you look at the data, and data doesn’t lie.

The good news is researchers came up with fairly robust defences that going forward will protect us against it. So even if data bias exists in a certain way, there are generic robust techniques for de-biasing. How do you deal with it? I think the first step is awareness — just becoming aware of social biases and so on made us want to pursue this as a scientific community. And that’s a great step forward.

So there is a conscious effort. But how do you enforce it? Because you cannot separate bias from an algorithm later; you have to do it at the beginning.

If you look at the literature, a lot of the debiasing is now being built into the algorithms themselves. Would I claim to you that we are perfect? No, not at all. My point is that the gap that needs to be addressed is vanishing over time, and we are getting a little closer to that ideal state of saying we’re really not biased and we’re really not going to make these sophomoric errors. You will continue to see improvement over time. To reiterate the point about openness: it just means that the same mistake will not be made by somebody else. But you are going to make mistakes, and you’re going to see more mistakes.
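
(Editor’s note: Raghavan doesn’t specify which debiasing techniques get built into algorithms. One widely cited approach from the research literature, the “hard debiasing” of word embeddings proposed by Bolukbasi et al. in 2016, removes a learned bias direction from word vectors. The Python sketch below is a minimal illustration of that idea with toy numbers; it is not a description of Google’s systems.)

```python
# Minimal sketch of "hard debiasing" for word embeddings
# (after Bolukbasi et al., 2016). Illustrative only; the
# vectors below are toy values, not real embeddings.
import numpy as np

def debias(vector, bias_direction):
    """Remove the component of `vector` along `bias_direction`."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, b) * b

# Toy embeddings; in practice these come from a trained model.
he = np.array([0.8, 0.2, 0.1])
she = np.array([0.2, 0.8, 0.1])
engineer = np.array([0.7, 0.3, 0.9])   # leans toward `he`

bias_direction = he - she              # a crude gender axis
neutral = debias(engineer, bias_direction)

# After debiasing, `engineer` no longer projects onto the
# gender axis, i.e. it sits equidistant from `he` and `she`.
print(np.dot(neutral, bias_direction))  # ~0.0
```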

Things could go horribly wrong like… I’m sure you guys are familiar with stories of the well-meaning Twitter bots that went crazy and turned into racist Nazi propaganda. And we said, ‘Oh my god, I’m glad we are not in the middle of that fuss.’ By then we had already wised up enough in our algorithms that we wouldn’t have made that mistake, but we were making other mistakes.

(Editor’s note: Artificial Intelligence needs a tonne of data to work on. Researchers mostly scrape that from the open web, or libraries that are created from user-generated content. The degeneration of Microsoft’s Twitter bot Tay is an example of how AI can go wrong if the data it feeds on is bad. Read our two-part series on Biased Bots here.)

How do you use AI and machine learning (ML) to fight fake news and biases on YouTube?

This is a very hard question. Again, AI/ML is the hammer, but I don’t think we have quite pinned down the nail. This goes back to my earlier comment. Some of these ethical questions are not just about AI and social media. They go back to broader technologies like gene editing.

So, the question here is fake news. If you look at what you guys do as journalists, you give us everything from factual reporting, where you’re basically citing a number or a figure, to some degree of interpretation, to opinion. News is not a monolithic thing. You have the free right to speak and produce any opinion you want. So who is the right person to arbitrate what is fake news, and to say that’s a Facebook problem or a Twitter problem or a Google problem? To be the sole judge of what is “fake news” is perhaps excessive.

That said, we know of many recent instances, like the various election campaigns in Europe and the US, where there’s been a proliferation of fake news planted with the intention of subverting the system. The argument is that you don’t want to necessarily cull that out, because somebody has expressed that opinion, and who are we to say that… it’s free speech. But you can actually point out factual inaccuracy. So one of the things we’ve been looking at is: if you have a blurb of text asserting a bunch of facts, how genuine is it, or what level of fidelity or believability can you assign to some of those facts, and then let people judge? I don’t think it’s for us to say this one is fake and this one is real. That’s perhaps excessive for any of these platforms. There’s a large effort going into thinking hard about news and how to elevate its quality.
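
(Editor’s note: Raghavan doesn’t say how such fidelity scores would be computed. As a purely hypothetical illustration of scoring asserted facts rather than labelling whole articles, here is a toy Python sketch that checks numeric claims against a small reference table. The table, values, and function names are our own inventions, not anything Google has described.)

```python
# Toy sketch: assign a believability score to a numeric claim
# by comparing it against a reference source. Hypothetical and
# illustrative only; the "knowledge base" below is made up.

KNOWN_FACTS = {                        # hypothetical reference data
    "population of india": 1.3e9,
    "boiling point of water": 100.0,   # degrees Celsius, at sea level
}

def fidelity(claim_key, claimed_value, tolerance=0.05):
    """Return a 0-1 believability score for one numeric claim."""
    truth = KNOWN_FACTS.get(claim_key.lower())
    if truth is None:
        return 0.5                     # unverifiable: leave it to the reader
    error = abs(claimed_value - truth) / abs(truth)
    return max(0.0, 1.0 - error / tolerance)

print(fidelity("population of India", 1.32e9))  # close to truth: ~0.69
print(fidelity("boiling point of water", 80))   # far off: 0.0
```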

Everybody talks about job creation and job destruction in the context of AI and automation. What are your views?

What’s changed in a hundred and fifty years? When the cotton mills came into being in the nineteenth century, everybody said the jobs would go away; people set out to destroy those mills, and they were called the Luddites.

Eventually, human ingenuity changes the role of productive work. Once machines were weaving cloth, we went on to something else. So I’m optimistic in that sense. It’s not like I view this as a permanent taking away. We as humans will figure out what the next thing to do is. Really, to the point I was making earlier, the best thing I can do for us is to remove as much of the tedium from our lives as possible, so that we can go on to doing more creative things. That is a pattern you’ve seen through history. It’s not a moment now where, ‘Oh my god, AI is coming and taking away our jobs.’ Lots of things have taken away one form of job, and then humans have come back with another. I’m an optimist.

(Editor’s note: Robotic process automation, or RPA, where automation is deployed in BPO operations, is starting to go mainstream in India’s back offices. Nearly 90% of the workforce needed to perform a task that can be broken down into parts can be eliminated. Read more about how it affects jobs here.)

But do you see near-term pain? For instance, truck drivers in the US will be out of jobs once self-driving trucks come in. It’s very hard to reskill.

The honest answer to your question is that there are certain segments in certain sectors where there is some near-term pain. But if history is any guide, over time, society rebuilds around these functions and people gravitate to the next thing. There is a wonderful story about a community of coal miners in Tennessee or West Virginia who saw coal mining evaporate and were losing all their jobs. A couple of community leaders said, you know what, we are going to teach these coal miners to code and write apps. It’s a touching story. Even when the short-term pain was inevitable, these were people who were committed to not letting the community be destroyed. They didn’t want all the kids to move to Silicon Valley or wherever. They wanted to keep them there, and it is an amazing story of the rebirth of a community from coal mining to being a mini tech hub. That’s the kind of transition you’re going to see a lot more of. Again, my faith is that human ingenuity produces very real answers to these questions.

How would you say a student should approach his or her career?

When you go through an educational process, you are not going there to learn facts. You’re going there to learn how to ask questions, and what the right questions to ask are. The second thing is not to get wrapped up in a fleeting technology of the moment or a buzzword like data science. If you can get a general education that includes critical appraisal and a lot of fundamentals, that’s good. In the average career, which I think of as 50 to 60 years, maybe even more with growing life expectancies, these technologies are going to change anywhere from five to fifteen times. If you get wedded to one, then for most of your life you’re out of luck. But if you can equip yourself with a broad-based critical education, then you can roll with these technologies.

Many countries and regions are investing heavily in AI, like China and Silicon Valley. Do you see gaps in the ecosystem in India?

It’s too easy to be myopic and say today there’s this shortage. These blatantly obvious structural barriers, like whether you have enough data scientists, will go away rapidly. Everybody is doing what is in their self-interest, and that includes China, and I expect India will be the same. Out of rational self-interest, people will do everything to come up with great problems. You already saw Jeff (Dean) cite the work that his team is doing with Sankara Nethralaya and Aravind Eye Hospital. That came about because there are certain conditions with so many incidences that you can amass enough data to solve these problems in a very targeted way. So there’s this market need, which comes about from the incidence of the disease together with the geographic sprawl of a very large country like India. That’s a unique opportunity that played out very well.

[Also see: India moves to address AI talent supply gap, gets a leg-up from Google, Microsoft, Intel]

What is the role of the government? China wants to become a leader in AI by 2020.

It is always nice to have a north star and say we want to be number one or number two. It’s not even clear how you measure these things. Eventually, it’s societal impact that matters.

Let me give you this special case that I deal with all the time. The last thing innovation needs is some grand figure telling you what to do, because when you have some boss man telling you to innovate, it is not going to happen. Innovation has to be grassroots, from the ground up; it has to be based on people’s passion and imagination. Not because I am sitting here making rules and telling you that on Fridays you need to innovate. That never works. I understand that there can be all sorts of structural goals and policies put in place, but personally, I feel it’s got to come back to what people want to do. It is not because I or the prime minister tells them to do it.

(Editor’s note: While India has a thriving student community learning data science, it lacks talent at the top. Only 386 of roughly 22,000 PhD-educated AI researchers worldwide are in India. Read more here.)

Photos: Rajesh Subramanian
Updated at 09:39 pm on March 22, 2018, for typos and to correct Sakra hospital to Sankara Nethralaya.