
The intersection of artificial intelligence and humanity

Interview with Rumman Chowdhury, Global Responsible AI Lead of Accenture

Tags: 'Accenture' 'Artificial intelligence' 'Rumman Chowdhury'


Reading Time: 10 minutes

Rumman Chowdhury’s passion lies at the intersection of artificial intelligence and humanity. She holds degrees in quantitative social science and has been a practicing data scientist and AI developer since 2013. She is currently the Global Lead for Responsible AI at Accenture Applied Intelligence, where she works with C-suite clients to create cutting-edge technical solutions for ethical, explainable and transparent AI. In that role she led the design of the Fairness Tool, a first-in-industry algorithmic tool to identify and mitigate bias in AI systems, which has been used successfully with Accenture clients around the world.

Firstly, could you tell us about Accenture Applied Intelligence, and what are your current goals and key challenges as its Responsible AI lead?

Accenture Applied Intelligence is the AI delivery arm of Accenture, and I am our global lead for responsible AI. I don’t just think about the ethical problems; I have to create solutions and work with clients to do so. Sometimes that means creating technical tools, sometimes giving strategic guidance, sometimes it’s a governance model, but my role has always been to deliver solutions for clients.

The convergence of big data and AI has been said to be the single most important development in shaping our future. Why is that?

Well, the convergence of big data and artificial intelligence has been absolutely pivotal. The concepts of artificial intelligence have existed for decades, since the 1950s, but we were not able to act on them. The creation of relational databases, improvements in compute power, and the growth in the amount of data we have are what actually enable artificial intelligence to flourish.

You say that your passion lies at the intersection of artificial intelligence and humanity. Can you explain what you mean by this?

My passion lies at the intersection of artificial intelligence and humanity.

And what that means is that, from a technical perspective, yes, we think of data fueling the algorithms that make up our AI and technologies. But there is a very human component too. Lately I’ve been using the term algorithmic systems more and more, because “system” reflects the fact that this is not just a technology: it is used in the context of society, and that’s where the humanity comes in when we think about this technology.

When it comes to Human–AI relationships, how do you think our daily lives and our work lives will change over the next few years?

When it comes to these technologies, there are significant ways in which our daily lives and our work lives have already changed and will continue to change. One thing to think about is the increase in real-time information and data, and the globalization of ideas. Now, if I have an idea, it’s not just something I share with my friends or maybe write a story about; it’s shared with the world immediately, in real time. So I think that really expands the amount of information and knowledge that we have. Of course, the downside is that there’s now a lot of noise, so we see things like the rise of pseudoscience or nationalistic movements. One very important question for both our personal lives and our work lives is how we manage the sheer amount of information that comes our way, given our human limitations in understanding and processing the data that comes into our minds.

Let's talk about data ethics and ethical AI practices. You advise companies on ethical or responsible AI practices; what are the biggest ethical issues when it comes to AI?

There are many ethical issues when it comes to artificial intelligence, and sometimes they’re as basic as what I call Data Science 101. What kind of data are you using? Where does it come from? Does it accurately reflect what you’re trying to measure? Have people been informed, have they given consent? And some of them are more philosophical in nature, right? Just because you can build something doesn’t mean you should. Is there some sort of harm to humanity, or some basic human right that’s being infringed? So it’s hard to narrow the ethical implications down to a set few. What I would say is that there’s a wide range of considerations, and frameworks that are specific to the kinds of questions people are asking, and to the context in which the technology will be used, can help guide the right kinds of questions.

You came to data science from a quantitative social science background. Do you think this has had any specific impact on your approach towards AI and data?

I come to artificial intelligence from a quantitative social science background, and what that actually gives me, and folks like myself, is a different perspective on data and algorithms. As opposed to somebody coming from maybe a pure programming perspective, I think about humanity first, society first. To me the programming and the compute are tools I use to accomplish the goal, and the goal is the advancement of humanity.

Ethics is a branch of philosophy that is not yet being readily applied to computer science. Why is this and how can we seed data ethics into the curriculum more effectively?

There are movements to seed things like data ethics, and ethics in general, into computer science. What we’re seeing is almost a clash of cultures, where I think a lot of folks who are more mathematical, quantitative and analytical see the ethical questions as overly philosophical and not applied. And I actually agree with them. I have a background in political philosophy, and in my opinion, what philosophy helps you do is actually ask questions. It does not necessarily give you answers. And I think that’s the difficulty.

Purely quantitative folks usually just want an answer, and what philosophy and a lot of ethical frameworks try to do is help you ask questions. So what needs to happen is a merging of the two, where it’s not just outcome based, but it’s also not just theory based.

There has been a proliferation of ethical declarations, guidelines and committees over the past year. Do you see this as a genuine first step toward ethical data governance?

There’s been a proliferation of so many different kinds of ethical guidelines, rules, principles, et cetera, and the difficulty has actually been taking action on them. One can say AI should do no harm, but what does that really mean when you create artificial intelligence? These are very helpful steps toward ethical guidelines or ethical technology, but they are insufficient. So now we’re really working on how to make these things applied.

How can we ensure companies are implementing ethical frameworks into their systems?

The companies I work with are very concerned with creating ethical frameworks to implement in their systems. What they’re genuinely worried about is the trustworthiness of these systems, right? How can we make sure people trust what’s happening? It’s not just about creating a technology in a bubble that I know is good. It needs to be communicated to the public, and the public has to trust me, and I need to earn that trust. So that’s a lot of what I work on with companies. It’s not always a technical solve. Sometimes it’s about communication, it’s about good governance, it’s about transparency, and these all work together to create more ethical systems.

How are the public and private sector working together when it comes to data ethics? And is this type of collaboration effective?

Public and private collaboration on data ethics is absolutely necessary and vital, especially when we think about the creation of smart cities and the digitization of our government infrastructure, right?

We’re trusting private companies with very personal information about us. This is no longer storing passwords. It’s now storing fingerprints, social security numbers, biometric information and some of the stuff cannot be changed. Somebody steals your password, you can change your password, you can’t change your fingerprint, you can’t change your iris scan. So it’s absolutely necessary for these groups to collaborate and work together if we are to create the kind of citizen accountable systems that improve humanity, which is the goal I think of most of the people who are working in this space.

It seems that often technologies are developed and implemented, and it's only afterwards that we start to think about their ethical implications. Why is this? Is it possible to reverse this trend?

Often technology is built, and afterwards we see the harms that, frankly, a lot of us say should have been thought of before. Thinking about those harms up front is not a normal part of data science. Often the value proposition for building technology is efficiency or income, right? Will it either make me money, save me money, or make things more efficient, which essentially makes me money or saves me money? So part of this is something I call question zero, which is the “just because you can doesn’t mean you should” question. And that does not have a value proposition based on efficiency or income; it is a proposition based on something like flourishing, wellbeing, potential harms, et cetera.

So I think the best way to prevent the creation of technology that in retrospect was something we shouldn’t have built, is addressing the fact that we need values that are not just based on efficiency and profitability, but values that are based on humanity and human flourishing.

Let's talk about gender diversity and AI. You were selected as one of the BBC 100 Women as part of #teamlead, tasked with tackling the glass ceiling by creating an app that can teach women to lean in during meetings. Can you explain this app to us?

Sometimes the lean-in narrative is a little bit problematic, because it puts responsibility on groups that are already marginalized to push more against a system that’s already pushing back against them. The intent of the app was actually to give visibility into how people communicate and talk during meetings. So this is not just an app for women. It can be an app for people who are quiet, people who are introverts, right?

So often our corporate workplace dynamics and often team dynamics favor people who speak more, who are more extroverts, who are more alpha. And these are not necessarily people who have the best ideas.

So the intent of the app was to use voice recognition to create, essentially, meeting minutes that are then parsed by things like sentiment or the amount of time people spoke, to give you real-time feedback. This also helps managers understand the team dynamics in their own groups, because people tend to have very subjective viewpoints. A Harvard Business Review study showed that when women speak 30% of the time, men in the room think they’re speaking more than half the time. So even the perception of how much people are speaking is not a reflection of reality. What I wanted to do was give empirics, give data, so that people could look objectively at what’s happening in their meeting, at the dynamics in the room.
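To make the talk-time part of that idea concrete, here is a minimal sketch of how speaking-time shares could be computed from a diarized transcript. This is not the actual app: the segment format, speaker labels and numbers below are hypothetical, and the real system also layered voice recognition and sentiment analysis on top of this kind of accounting.

```python
# Minimal sketch (hypothetical data): compute per-speaker talk-time shares
# from diarized meeting segments of the form (speaker, start_sec, end_sec).
from collections import defaultdict

def talk_time_shares(segments):
    """Return each speaker's share of total speaking time."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += max(0.0, end - start)
    meeting_total = sum(totals.values()) or 1.0
    return {speaker: t / meeting_total for speaker, t in totals.items()}

# Toy example: who dominated this (made-up) meeting?
segments = [
    ("Alice", 0, 40), ("Bob", 40, 160), ("Alice", 160, 180), ("Carol", 180, 200),
]
for speaker, share in sorted(talk_time_shares(segments).items(), key=lambda x: -x[1]):
    print(f"{speaker}: {share:.0%} of speaking time")
```

Even this simple tally illustrates the point about perception versus reality: the numbers, not the participants' impressions, show who actually held the floor.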

Why has AI traditionally been seen as a man's world?

There’s a bunch of really good books that look into that phenomenon. Mar Hicks wrote a book called Programmed Inequality, which is about how the computing world in the UK crumbled because of the marginalization of women when it became a man’s job to program.

Traditionally in the United States programming in the fifties and sixties was seen as a woman’s job because it looked like typing. It wasn’t until it became profitable that it became rebranded as a masculine role that women were then pushed out of.

So it ties a lot into social dynamics, particularly what we consider feminine jobs and masculine jobs, but also where we put value, which, frankly, tends to be that jobs that are feminized are worth less and jobs that are masculine are worth more. It’s actually quite a complex narrative. I think the other problem here is the narrative of the hoodie-wearing, solo, college-dropout startup founder that got cemented in the 2000s. Frankly, cults of personality, or things built around a singular persona, are always problematic to me, because then people aren’t making individual choices, right? They’re not exercising their ability to think critically. There’s a lot to unpack in that, but fundamentally I think that singular or overly simplistic narratives are what lead to the marginalization of people, whether it’s based on gender, income, race, whatever it is.

What do you think is the key to encouraging more women into AI?

When thinking about how to get more women into AI, that’s kind of a complex question, right? For women who already work in the industry, a lot of folks will say there’s insufficient support. So it’s not just about educating young girls to be interested in artificial intelligence. What happens when they get these jobs? How are they treated?

In all diversity initiatives, my encouragement is always not just to focus on education and hiring more women, but to think about promotion and retention of the women you have.

Because if you say you want a 50% female workforce, but you look at where people are in the hierarchy and it’s 75% women at entry level and zero women at senior level, you haven’t reached your goal. So there are multiple factors. Yes, it’s important to encourage and spark interest in all kinds of people to enter technology, to create a level playing field. But then the level playing field has to persist once they are in these jobs and roles, and it has to exist for the women and minorities that are in these jobs today.

There is concern that since algorithms have not been designed by a diverse group of people, we're not designing inclusive enough automated systems. How can we create more diversity and avoid algorithmic bias?

So it’s quite important, when thinking about algorithmic bias, to think about the diversity in the room when creating the system. There is certainly a narrative on human centricity at the moment, and people are talking about it. I recently published an article in VentureBeat called The Retrofit Human, which thinks through how we want to create inclusive design. But in reality we have this narrative of techno-solutionism, or techno-chauvinism, which pretty much says that any technological system is better than a human-built system, which frankly is false. It’s about diversity of thought, not just representational diversity. Just because I’m a woman, I don’t speak for all women, right? It’s actually about openness to feedback. Having algorithmic systems that actually work in response to the humans that interact with them, and having that built into the design, is what will make algorithms more inclusive and these algorithmic systems hopefully less biased.

I’d like to end the interview talking about something you said, which is that artificial intelligence can reflect and reinforce societal prejudice and gender differences, but it can also help fight them. How?

So artificial intelligence can be used to either reinforce these biases or it can actually be used to combat them.

And that’s my job and that’s what I try to do. I try to create technical tools that help you identify, in a standardized fashion, where issues can come up in the products that you’re building. I built something called the Fairness Tool with my team. It helps you identify specific types of bias that arise in your data, your model and your output, and it also suggests things you can do to mitigate them. My team has created an algorithmic assessment framework, which is a qualitative and quantitative methodology for addressing this type of algorithmic bias. I think it’s very helpful to think about not just these algorithms as a tool for building something, but also how you make algorithms or models that help you understand the algorithms that you’re building.
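For readers who want a sense of what such a bias check might look like in practice, here is an illustrative sketch of one simple group-fairness metric: comparing selection rates across groups, with the common four-fifths heuristic as a flag. This is not Accenture's Fairness Tool; the data, group labels and threshold are hypothetical, and a real tool covers many more bias types across data, model and output, along with mitigation suggestions.

```python
# Illustrative only: a simple selection-rate parity check of the kind a
# bias-detection tool might run on model outputs. Hypothetical data below.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by highest; 1.0 means parity."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Toy example: model decisions and a sensitive attribute
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = selection_rates(preds, groups)
print(rates, "disparate impact ratio:", round(disparate_impact_ratio(rates), 2))
if disparate_impact_ratio(rates) < 0.8:  # common "four-fifths" heuristic
    print("Potential bias flagged: review the data, model and outputs.")
```

The point is not this particular metric, which is one of many, but the idea Chowdhury describes: building algorithms and models whose job is to help you understand, and question, the algorithms you are building.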