Q&A  | 

Maria Klawe

“Our language represents the biases of our culture.”


Reading Time: 7 minutes

Maria Margaret Klawe is a computer scientist and the fifth president of Harvey Mudd College. She was previously Dean of the School of Engineering and Applied Science at Princeton University. Having spent her career in traditionally male-dominated fields, she now leads a college that has become a pioneer in diversity: in just 12 years, she has raised the share of women studying computer science at Harvey Mudd from 10% to 40%.

Let’s start with an introductory question. What do we mean when we talk about diversity or, in this case, a lack of diversity?

I think the definition of diversity or lack of diversity definitely depends on the country we’re talking about, or even the industry. But in North America, there are primarily two kinds of lack of diversity.

Gender is definitely an issue. And then the other one that I think is very important in the US is that there’s a lack of diversity by race, particularly in areas like the tech industry.

We tend to see that Caucasians and Asians are well represented in tech, but they’re mostly male. And we see that Black and Latino people are very underrepresented. We could go further than that and look at the socioeconomic backgrounds people come from. In the US, as in most countries, people who come from middle-class or wealthier backgrounds are much more likely to get an education that allows them to be successful in areas like technology, and people from low-income families are far less likely. Then we could go past that and talk about sexual orientation. The LGBTQ community is underrepresented in technology in general as well. But I would say those are the primary categories that we look at.

We’d also like you to explain an everyday situation we have come across. If you write “He is a babysitter and she is a doctor”, translate it with Google Translate into a certain language (Turkish, for example), and then translate it back into English, the result is a gender swap: the algorithm returns a startling “She is a babysitter and he is a doctor”. Why is this? Is it because of a lack of diversity among the people who program these algorithms? Are we still in time to correct these biases?

That’s such an interesting example. The way probably all translation programs work at this point is that they examine a large amount of text in a particular language and they attempt to look for what is most likely to be a particular phrase. I suspect that in Turkish as well as in English it’s more likely that women are babysitters and that men are doctors.

I don’t think it’s the people who program the algorithms. It’s the fact that our language represents the biases of our culture and unless we actually are very deliberate about trying to correct those, we’re going to see this over and over again.

And of course it also happens in our own lives. If I were giving a talk at a company and the CEO told his daughter about the person who was giving the talk but didn’t mention my gender, a 12-year-old might say to her father, well, you must be really excited about him coming to the company. And then he might tell her that I was female and she would be stunned. We’re living in an age where some people are very deliberately trying to change these biases and help areas of science and engineering, and other professions, be more supportive and engaging for everyone. I don’t think it’s too late to do this, but I think we have to recognize that those biases occur automatically in every culture, no matter what the culture is.
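To make the mechanism Klawe describes concrete, here is a minimal Python sketch of a frequency-based pronoun choice. Turkish uses a gender-neutral third-person pronoun (“o”), so the back-translation into English has to guess a gendered pronoun, and a purely statistical model simply picks whichever pronoun co-occurs most often with the profession in its training text. The corpus counts below are invented for illustration and do not come from any real translation system.

```python
from collections import Counter

# Invented co-occurrence counts standing in for what a statistical model
# would learn from a large corpus: how often each English pronoun appears
# with each profession.
corpus_counts = {
    "babysitter": Counter({"she": 930, "he": 70}),
    "doctor": Counter({"she": 280, "he": 720}),
}

def back_translate(profession: str) -> str:
    """Resolve the genderless Turkish pronoun 'o' by picking the English
    pronoun most frequently associated with the profession."""
    pronoun = corpus_counts[profession].most_common(1)[0][0]
    return f"{pronoun.capitalize()} is a {profession}"

# Round trip: "He is a babysitter and she is a doctor" loses its gender
# information in Turkish, and the most frequent association wins on the
# way back to English.
print(back_translate("babysitter"))  # She is a babysitter
print(back_translate("doctor"))      # He is a doctor
```

The point of the toy example is that no programmer wrote a rule assigning genders to professions; the bias falls out of the frequencies in the text the model learned from.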

We also want to ask you about this: in the hands of AI, algorithms are used to determine our next actions, to learn our tastes, to predict our thoughts and actions and, of course, to automate processes, including governance systems. Who is responsible if something goes wrong? If it is mostly white men who program, predict, and automate these systems, what are the risks, particularly when it comes to governance?

Obviously, in any culture, many people have blind spots. It’s certainly the case that if you have women looking at something, they’re more likely than men to notice a mistake like the one about the babysitter and the doctor, because they’re sensitized to it. I am on a board where perhaps a third of the members are female, and there was a proposal to show a video of a dance showing off the PhD thesis of a female student. It was very sexist, and the women all just went, “Oh my goodness, we can’t possibly show this video.” The men eventually understood what we were concerned about, but it was just so interesting that several of them had seen the video before and it never occurred to them that it was very sexist and inappropriate for an organization that really believes in inclusion and increasing diversity.

If you have diverse teams making decisions collectively, you make fewer mistakes because you have more people who are sensitized to different aspects of issues.

Of course. And actually you’re not the only one who thinks so. There’s a research group called the AI Now Institute at New York University, and they recently published a study saying that the field of AI is facing a serious diversity crisis, in terms of both gender and race. How can the public sector encourage more diversity in computer science and engineering?

This is something I have worked on for most of my life. The first thing is that institutions really need to work on this issue, whether we’re talking about companies or universities or high schools, elementary schools, government organizations. 

I think focusing on the issue and making extensive efforts to change things has a lot of impact.

At Harvey Mudd, we’re now 50/50 male and female in computer science and physics, and pretty close to that in engineering. Those are quite good numbers, but the reason this has happened is that we have a community that is committed to this, and now we’re busy working with other universities and with companies to help them take the things that worked for us, see if they can adapt them, and help them achieve success. It’s been very exciting to see a number of companies, like Accenture, a technology consulting firm with 400,000 employees around the world, make huge progress in recruiting and promoting women even over the last five years. They have their first female CEO, Julie Sweet, which is fantastic. Microsoft is also making progress in terms of women and people of color. It’s definitely happening.

It happens more quickly in certain areas when you’re looking at a very large company, because a lot of it has to do with leadership and whether there are people in influential positions who are really committed to making these changes.

Is it easier to make the change in the public sector or in the private one?

I think in some ways it is easier to make the change in the private sector. It also depends on the country, because countries have very different rules about hiring and firing in the public sector than in the private sector. For instance, it’s much harder to fire somebody in Sweden or France than it is in the United States. In the United States it’s often easier to do things in the private sector than in the public one, because if the leadership and the board support this, they have more flexibility. The bottom line is that in almost every institution it is possible to improve the situation.

8,500 people attended one of the most important AI conferences worldwide in 2016, and only 6 of them were Black. How can we reverse this situation?

I think it’s very important to encourage people of color to study areas that are going to have a lot of impact in the world in the next 10, 20, 50 years.

Artificial intelligence, computer science, mathematics, and statistics are some of those areas. We have had quite a lot of success with African American students majoring in computer science and also in engineering. We’ve had a fair amount of success with Latino students in these fields as well.

When I was at Princeton as dean of engineering, I had a student who was African American, and he did an internship with a researcher at Intel. That person emailed me and said, “Maria, you have to watch out for this particular student. He’s amazing.” So I had him work with me on research for the next couple of years, and I wrote letters for him when he applied to graduate school. He got his PhD at Berkeley under Michael Jordan, one of the top people in machine learning, then went to Stanford as a postdoc and took a faculty position. Now he’s with Microsoft Research in Boston. The problem is that there are another 2,000 stories of African Americans who started in an area of math or computer science and, for whatever reason, at some point got discouraged. So I think the incredibly important thing is to encourage people to be successful. And a big part of being successful is working hard and getting help; it’s not so much about how smart you are when you’re born.

A lot of it has to do with context. It’s essentially being in a place where people believe that you can actually learn the material and be successful with it.

And when we teach in ways that encourage students to feel that way, everything they need to succeed is there. But when people feel that their learning community, their professor, their teaching assistants, whoever, don’t believe that they belong and can succeed, that’s when they’re more likely to leave.

From your personal point of view, how have you seen the lack of diversity evolve over your years working in the sector? Are you optimistic about the future?

So we are in a period of change in our society, and obviously very rapid change with respect to artificial intelligence. I’ve had both very positive experiences and challenging ones, but the thing that makes me optimistic is that there are many more people who care about this issue and who are actively working to address it.

And to finish: could the current inequality in work and representation create an even greater social gap than the one that already exists in most countries of the world?

Absolutely, it’s one of those things that I think is going to happen. I think that the influence that artificial intelligence, computational technology, and data science have in the world is going to increase significantly over the next decade.

Unless we become much better at providing more people with access to learning this knowledge and these skills, there’s going to be an even bigger gap between the haves and have nots.

And we do not yet have approaches in place to really solve this problem. If I look at the situation in Africa, for instance, very few students are going to have access to learning these kinds of things, versus the situation in China or India, where many people will. So yes, I think we have to be very intentional about how we provide access, not just in the regions that already have these skills.