What is artificial intelligence to you?
How did you get into this field?
How has AI developed and evolved?
What are the next challenges?
How can nations cooperate?
What does the rise of AI represent?
About AI vs natural intelligence...
Robots as slaves?
Government and tech growth
Public sector & tools for the digital era?
Which government is doing a good job?
How do we govern transnational corporations?
How will our daily lives and our work lives change?
Is it too late?
Joanna Bryson is an associate professor in the Department of Computer Science at the University of Bath, where she works on artificial intelligence, ethics, and collaborative cognition. She is a global academic expert in artificial and natural intelligence, significantly engaged in global technology policy, a scientific board member of the Task Force on Artificial Intelligence and Cybersecurity at CEPS (Centre for European Policy Studies), and a member of the ethics board at Mindfire Global.
What is artificial intelligence to you based on your work and experience?
I tend to use a definition of intelligence that I actually learned as an undergraduate in psychology, which I have recently learned is more than a century old: intelligence is “being able to do the right thing at the right time.” Another way to think about that is as a form of computation. You’re translating the current context into an appropriate action, right? So it’s a transformation of information. If we can agree that that’s what intelligence is, then the only difference with artificial intelligence is that it’s a subset of all the intelligent things that a human has intentionally created. So it’s an artifact. And I think that’s one of the things people forget. They learn this word AI and they think it’s some kind of space that we discovered. But actually it’s something we’ve built.
How did you get into this field of study and why?
The first time I ever worked in AI was actually 1986, but it wasn’t continuous. I was interested in it, but my undergraduate degree was actually in behavioral science. I was interested in what intelligence was for and how different species use it, but I was also a really good programmer. I thought I could get into the best university if I combined what I was good at with what I was interested in, so I went into artificial intelligence. In reality I was interested in everything and I just thought this was as good a thing as any, but almost everything else came out of that first master’s degree.
Since you started out, how has AI developed? What's been the evolution?
The main thing has been the massive increase in computation, and of course devices have become cheaper and more widespread. Now we have much more data. We have the combination of faster computing and more data. We also have slightly better machine learning algorithms. Of course we have things that more closely replicate what we see in nature, and I think that's why people think there's a lot more artificial intelligence than there was before, but actually, as soon as we had Google search and spell checking, we had AI. AI has been changing things for decades.
Now we have much more data. We have the combination of faster computing and more data.
A lot of the narrative has changed. Part of it is that the digital companies have come to the fore and, of course, there are many digital companies, but a few of them have managed to capture a natural monopoly and have grown into incredibly strong positions. It was only in the last 10 years, I think, that the digital companies surpassed the petrochemical companies as the largest companies in the world. And that's been true on both sides of the great firewall of China, both in China and in the West. At the same time, the traditional petrochemical and manufacturing giants are still there, it's not like they went away, but this tech industry suddenly has a lot of money, which may not just be about how good they are at what they do; it may also be that we haven't figured out how to tax them appropriately yet. I think that's part of the problem.
What are going to be the challenges for the coming five to 10 years?
In 5 to 10 years I think we're going to be trying to deal with the same challenges that we have today. Other challenges will probably emerge during that period too. What everybody is talking about right now, for example, is the assault on democracy. How do we coordinate behavior when it is so easy to have impact, purchased by money, across national borders? Our understanding of ourselves is changing, because we can know so much about ourselves and our behaviour can be predicted. Helping humans understand how we behave, and the extent to which we are a product of our culture and our context, so that we can figure out how to govern and regulate ourselves, is part of the problem.
We've all seen that there's been rejection of this technology crisis, just as there's been a huge rejection of climate science by people who don't want to think it's a problem. I think there will be a backlash against psychology, for example. Yes, there have been some high-profile problems in psychology, but this backlash far exceeds them. There will be an assault on science, and I think there will be various assaults on knowledge. And the worst thing about that is that some of the big-name assaulters who are trying to run an agenda in the public discourse are positioning themselves to take advantage of the very knowledge they're pretending not to believe in. So I think we'll see a problem of disinformation, communication, and coordination in terms of what it is that a national government does.
When everything is changing and power is changing, how can two nations cooperate with each other and what are the rights of the people?
I have to say that one of the most astounding things to me is that we were all worried for the last two or three years about the fact that over a million workers had been put into concentration camps, and there was a technology component to that. But now, in recent months, 7 million people in India, the entire state of Kashmir, have been told that, firstly, they are no longer proper citizens, they're not a real state, and secondly, they have lost all communications. They don't have telephones, they don't have internet. A nation of laws, the world's largest democracy, was able to just do that to one of its states. I hope Europe is more robust than that, but it's a huge concern. It's interesting that we can see all these things, but in some ways we still can't stop them. It's not clear whether that's because power has shifted or because we're just more aware of the problems we had before.
You know, the great Stephen Hawking once said that the rise of powerful AI will be either the best or the worst thing ever to happen to humanity. Which do you think it will be?
Well, I think artificial intelligence is an extension of ourselves, a prosthetic. It is a part of humanity, and it's been going on for a long time. So it's hard to say. Certainly it's been incredibly enabling: the fact that we've moved something like 4 billion people, more than half of the world, out of extreme poverty in the last 20 years is amazing. It would be great to reach the last billion people who are still there, but it's been incredible for the people who were living on $1 a day in extreme poverty, partly because of the democratization that digital media brings us. The power of having a smartphone means you know what your labor is worth. And that's been really important, not just in China, not just in India, but across the world. So I think it's probably both. But that's normal. It's part of us.
So in what ways is AI more powerful than natural intelligence? What can people do that robots can’t?
I guess the main thing about the digital revolution is that you can do the same thing over and over again. You can copy, so we can quickly get copies of whatever news is put out. That's called scaling. You can combine additional computation from other bodies. Humans don't scale in that way. Take search engines, for example: it's amazing that in a very short amount of time we can find a web page stored anywhere on the planet by searching with just a few words. That's incredible, right? That takes an enormous amount of coordinated computation. Humans can't just fuse their brains together.
But in other ways, as individuals, we just can't know what everybody else knows. And it would be a little scary if we did; we would lose a lot of our individuality. I hope this isn't too depressing, but I think it's actually wonderful. I think it's the main thing that makes us human. We're not going to build anything like an animal out of silicon, something that's been designed and for which we can be responsible and accountable. The fact that we are animals, and that we have our relationships, our aesthetics, and our values, is based on the problems of being individual, right? We're never going to have a machine that is really a piece of our society, that would replace one of us in the way that we interact with each other, love each other, reproduce with each other.
We're not going to do that with a machine. So I think it's always best to view artificial intelligence as a prosthetic. It's something that extends from humans; it's not something that can replace humans. Having said that, of course, with a lever you can maybe have one person do the work that two people did before, but it's not like there's a motivational force in the robot. The fundamental problems about how we treat each other, whether we treat each other with respect, whether we decide to wipe entire groups of people off the face of the planet, these are the problems we've always had, and they don't change that much with technology.
I think it’s always best to view artificial intelligence as a prosthetic. It’s something that extends from humans. It’s not something that can replace humans.
The title of your paper, "Robots Should Be Slaves", was controversial. What did you mean by the use of the term slaves?
That paper was written for a book called Artificial Companions. I find it really problematic that people thought of AI or robots as people. So I said, look, and this is just one argument, I've already given you some better arguments, but the basic idea is: how can you think it's going to be a person when it's something we're going to buy and sell? Because we've all agreed that you shouldn't buy and sell people. We do buy and sell things; we don't buy and sell people. Of course, we haven't actually all agreed: unfortunately there is still human trafficking in the world today. The mistake I made with that title was the belief that everyone was sensitive to the truth that you can't own people.
The word slave here is about something else. You have devices that wash the dishes for you. You call it a dishwasher, but you don't consider it human. It's a servant you own. But, of course, it could still be seen as inconsiderate to all the people for whom the fact that their ancestors were enslaved is still a strong part of their identity. So it was a bad title in a way, but it was the first time people listened; it grabbed people's attention, which is a sort of MIT thing to do. It was the third thing I tried, after just writing the paper straight twice.
We're going to shift the focus to governance. So, how can the government avoid stunting the growth of emerging technologies?
Well, that's a really interesting question, because there are a couple of assumptions in it. What does stunt even mean, and what does over-regulation even mean? On the one hand, we want as many innovations coming to market as possible so that we can take advantage of them. On the other hand, if something comes out too quickly and then crashes and burns or destroys other things, that's not actually a great win. So sometimes being a little bit slower to produce something, and starting out with a stable ecosystem, might be a better solution.
So good regulation is about supporting the sustainable growth of corporations and giving them ways to innovate and try things and still be able to move forward. One of the big things we have to think about, as I mentioned before, is wealth redistribution, because a lot of digital commerce doesn't cost that much to do once you've figured it out, but the wealth hasn't been redistributed very well. You want to make sure you're actually paying people what their time is worth and that you don't have these sort of invisible people hidden underneath. There are very specific people providing services who aren't seen as part of the ecosystem even though they are. The people getting a lot of visibility right now are those who have to do the censoring of videos. You have to check, because some people are going to flag videos just because they're annoyed with their friends, while some of them have actually seen something that nobody wants to see. Again, we're more intelligent now; all of us are interconnected. There are so many different things that we have to take care of with respect to each other. Making sure that things are distributed well is part of what government is all about.
Good regulation is about supporting the sustainable growth of corporations and giving them ways to innovate and try things and still be able to move forward.
Another really important thing is to make sure there aren't huge amounts of corruption or discrimination. One of the famous examples is recidivism, the prediction of whether someone will go back to jail. There's a famous program that was more likely to predict that African Americans would go back to jail than they actually did, and less likely to predict that European Americans would go back to jail than they actually did. So it failed for both groups. And no one believed that machine learning could do that bad a job just from the data, so it doesn't look like it was a legitimate use of AI. It looks like someone deliberately did that and then used the fact that it was a machine and it was "magic" to sort of hide the bad thing they'd done. So what we really want from our governments is to hold us to account, like they would any other organization, to realize it isn't magic, to not worry too much about the stunting, and to sort of be the saltwater that flushes and keeps the system clean.
When I talk to companies, I say: look, you don't want to have a race to the bottom, but you also know that sometimes somebody in your ecosystem is cheating. So you want government to be there and to be able to tell what they're doing wrong. And to do that you have to pay a cost, and that cost is properly documenting your code, so that you can show you were doing the right thing: that you wrote the code for the right reason, that you tested it in an appropriate way, that you secured it, that you used cybersecurity and made sure nobody else has hacked it. That's the great thing about software: in doing the right thing, you naturally document it. If you're careful, then it's a normal thing to document what you've done. All we need is for government to be able to hire enough people who know how to read those kinds of records, who know something about computer science, and for companies to be encouraged to do that and to understand that it doesn't mean they have to go open source. You could still have your IP, but you just trust the government to check your code, because then they can check everybody else's code and nobody gets to cheat. And that makes it easier for everybody. It's actually a win-win. You could actually move faster with more informed regulation.
How can the public sector ensure people are getting the right tools, structures and skills that they need in the digital era?
I obviously think the government should give enormous amounts of money to universities. I'm joking; I'm an academic. We keep trying to find the best ways forward, and those will keep changing as people keep innovating new ways of doing this, but we keep recognizing that it is an issue and trying to make sure the kids know the jobs that are available. So again, this is about transparency: not hiding things, not pretending they're magic, but showing people how things work. And I think that's a really important thing.
There’s not a magic bullet, but I think the most important thing is not believing the hype too much.
One of the problems is the anthropomorphism and selling of cute robots instead of making it clear how they actually work, that there's someone behind them, and that those are jobs you could have, that you might want to have. There's been a lot of push towards anthropomorphizing, because you make these cute things that people want to buy and spend money on and take care of. And they don't ask too many questions about it, because they think they know, they think it's a person, but it's not a person. We need to stop that. Then kids can become motivated to go and try these things, but it's an ongoing process. There's not a magic bullet, but I think the most important thing is not believing the hype too much: recognizing that there's a way this stuff works and exposing how it works to people, being open-minded about it.
So do you have a good example of a government that is looking to the future in terms of innovation and regulation? Where are they doing a good job?
I don't know if it's a bad example of a good thing, but the British are having all kinds of problems. They've decided that in order to recover from the crazy things they're doing, they're going to take advantage of artificial intelligence. They're leading on artificial intelligence. They actually have the regulatory bodies already set up. They've made it so that everybody is supposed to learn how to program in school, so that there's lifelong learning. They've done some great things in that area. Like I said, I don't know if it's a great example, because maybe they're investing so much because the only other thing they've invested in is destroying their economy, so they're trying to do something to counterbalance that. But it may be good practice to look at.
There are a lot of countries that seem to be doing very good things. I was just in Austria in December, and I said something in German; people are always amazed if an Anglophone speaker says something in another language. I said, oh, I lived here in 2007, and they said, oh, you wouldn't believe how it's changed. In 2007-2008 the leading employer in Vienna, at least, was the arts, and now it's digital, so obviously they must be doing something right. I suspect a lot of places are doing good things, and it's important to share good practice. Cybersecurity really matters.
One thing I don't like is when people look at the countries that have enormous companies with unbelievable amounts of power and say that those guys must be doing something right. Maybe they're doing something wrong, because that isn't necessarily the context you want to get yourself into. We now have this transnational issue because these companies are so powerful. That's one of the great things about the EU: they've made things very, very clear. The nations are the ones that have the courts, the military, and the police, and then Europe just coordinates between the nations. I think that's a good structure. I mean, I think geography matters a lot, but there are certain things that corporations have to take care of, such as making sure they're doing the right thing with their own products.
Okay, so now how do we govern transnational corporations?
You don't just want one government. Traditionally, people said that the best way to reduce corruption is to have a government and a bunch of mid-sized companies that all keep watch on each other. So in a way we should be better off, because we have more entities. But I think, if anything, we're leaning a little too hard on the disruption side and sometimes throwing out the institutions that are actually useful and make everybody safer. And that's why we're seeing this rise of inequality in the OECD. Globally, inequality has been going down, but in the OECD it has been going up, and that causes real problems, because the ultra-rich are way too rich and powerful with too few constraints. We need to get on top of this.
A lot of people think, "Oh well, you know, we make children." You committed an action by which a child comes into existence, but you don't get to design the child. You don't say how many hands they have or how many feet they have. And you don't get to choose the fact that their social status has a huge impact on their wellbeing. That's not something we get to choose. In fact, even when we choose a partner, 99% of the genes are the same, so the "choosing" makes a tiny difference. Whereas with AI, you really are building it from the ground up. So you really do get to choose what is remembered, what is even perceived, and certainly what the motivational structure is. You can just write it. So that's why I say it's more like authoring a book than like having a child, if that makes any sense.
So moving on to human robot relationships, how do you think our daily lives and our work lives will change over the next few years?
I can't talk about that in terms of relationships, because I don't think that's the right metaphor. As we talked about before, AI is a prosthetic. So it's not that we're suddenly seeing AI coworkers or whatever. It's not like there's suddenly a robot on the bridge of the Enterprise. It's much more that you need to understand that when you're working with artificial intelligence, you're working with your own company, probably combined with some other company that is actually producing the artifact. So it's important to understand that relationships are with other humans, or even other animals. When you see a movie character, or read a novel, or see AI, you're going through those kinds of motions, but there's no empathy there. It's not the same experience. It's something that's been written. So I don't like the idea of presenting AI as a coworker. There was a little bit of that with some of these manufacturing robots, but if you're going to work well with those robots, you're going to learn that each one is a device. It's like a typewriter. You'll figure out how to use it to the best extent possible.

It isn't that robots take over jobs. It's that a company decides whether or not it wants to fully automate its business process. There are some stores with people at the cash registers, and there are some stores where you go and swipe the products yourself, right? There are no people at the cash registers. That's a decision by the company. It's not that the cash registers decided whether or not to let people work there. That's the choice we have to make. Quite a lot of us, when we buy airplane tickets now, just go on the web and don't talk to anyone, right? We just type everything in. But for a lot of organizations that turned out to be not a great solution. It's quite interesting to me that if you go to a phone store, often there is a person there and they're talking to you, but really they're still just filling in forms.
And then if you get a problem, that person can't really solve it, because they're also still just filling in the form that you could have been filling in at home. It's cheap, but it's not always useful.
It isn’t that robots take over jobs. It’s that, that a company decides whether or not they want to fully automate their business process.
The advice I give companies is to always have a path. There should be a path from the customers and suppliers through to the executives and the boards. You want to have employees who can tell when it's time to go up the chain; otherwise you aren't able to capture the kinds of problems that your employees or customers or vendors have, and you're not able to recognize the opportunities they might be bringing. So I think it's very important to have something agile. One example of all this is banks. The robot was called an automated teller machine. ATMs were brought in in the eighties, and there are literally more bank tellers now than there were in the 1980s. The reason is that ATMs meant you needed fewer bank tellers per branch, which made each branch cheaper, so the banks were able to open more branches, and that's why they have more tellers. The point is that we have a 30-year period we can look at and ask: what was going on there?
For example, in law, you're not going to have these giant law partnerships with hundreds of paralegals, because you can do that work with AI. That might mean that everybody has better access to better law, because the lawyers themselves don't have to go through that whole painful process of being a boring paralegal, but can directly use AI and help people resolve their issues more quickly, hopefully not by going through the courts all the time, so that only the most difficult and highly contested cases are the ones that actually see a human judge.
Do you think it's too late to become more equitable, more sustainable, more social?
Well, in some ways it's obviously too late, but in some ways it's never too late while there are still people. So I think we're always trying to do the best we can, and we shouldn't lose track of the fact that we have been doing pretty well. An awful lot of people have come out of extreme poverty. Coming to recognize all the good things that have happened as well as the bad, and working with all those pieces, I think that's one of the exciting problems in front of us.