interview

The humanity behind data and technology: interview with Azeem Azhar

"We need to start by understanding what a more equitable future would look like."


Azeem Azhar is a strategist, analyst, product entrepreneur, influencer and writer with a passion for all things technology. He is the Senior Advisor on Artificial Intelligence to the CTO of Accenture, a member of the World Economic Forum's Global Futures Council on Digital Economy & Society, an Advisory Member of HFS Research and an Advisor to CognitionX. Azeem also runs a very popular newsletter on the societal implications of technology called Exponential View. Until recently, Azeem was also a Vice President at the global media company Schibsted Media Group. Azeem is an award-winning entrepreneur. He founded PeerIndex in 2010, which applied machine learning to large-scale social media graphs to make predictions about web users; Brandwatch acquired PeerIndex in 2014. An investor in many tech startups, especially in the AI sector, Azeem has held corporate strategy roles at Reuters and the BBC, as well as contributing as a Financial Times columnist.

How did you get involved in tech originally?

I have been involved in technology for my entire life. I first saw a computer when I was seven years old, in 1979, when my next-door neighbor came back with a kit computer. I got my first computer in 1981: a ZX81, one of the Sinclair computers that many people across the United Kingdom would have ended up owning as their first computer. And since that time, I’ve always had a computer in my life. I’ve always used them: I used them in my schoolwork, I used them in my university studies and I used them as soon as I entered the workforce in 1994.

What led you to set up the Exponential View?

Exponential View is my weekly newsletter. It also has a podcast. Having worked as a technology founder for my last company for six or seven years, when I sold that company, I was quite tired and I wanted to look at things other than running my own business.

I thought one way of doing that would be to write a newsletter. And the thing I wanted to write about was how to bring together the domains of these rapidly changing, exponential technologies and their impact on our human systems: systems like business, industry, society, government and culture. I felt that there was a gap between the people coming out of the world of technology and the people thinking about human systems.

What are your goals and key challenges at Exponential View?

Exponential View has started to have some real currency and credibility in the market. So as a founder of this small entity, this small newsletter business, I want to increase my reach and get to as many people as I can because I think I will add value to the market. We already have a very good, high quality, interesting audience of people who read it, but there can always be more.

The digital revolution has completely transformed the world around us. In what key ways has technology transformed people's lives for the better?

The hallmark of digital technology is that it’s built on the foundation of the semiconductor, and the semiconductor is based on the fundamental economics described by Moore’s law. Moore’s law tells us that every couple of years the number of transistors on a chip doubles, so you effectively double the amount of computation you can get for the same price. Digital technology has arisen as a result of decades of these price declines, which means we now have millions of times more computation in our hands, in the supercomputers in our pockets, than we had 20 or 30 years ago when I got my first computer. And putting that technology into the hands of everyone has given us powers that traditionally only governments used to have. The question is how wisely we choose to use it.
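The doubling Azeem describes compounds dramatically. As a back-of-the-envelope sketch (assuming an idealized, clean two-year doubling period, which real hardware only roughly tracks):

```python
# Illustrative Moore's law arithmetic: transistor counts (and, loosely,
# computation per dollar) double roughly every two years.
# The two-year period is an idealized assumption, not measured data.

def doublings(years, period=2):
    """Number of doublings that fit in a span of years."""
    return years // period

def growth_factor(years, period=2):
    """Overall multiplier after that many doublings."""
    return 2 ** doublings(years, period)

# Over the ~40 years since an early-1980s home computer like the ZX81:
print(growth_factor(40))  # 2**20 = 1048576, about a million-fold
```

Twenty doublings over four decades yields roughly a million-fold increase, which is the scale of change behind the "supercomputers in our pockets" point.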

What are the biggest challenges we face as a society in the digital era?

In the digital era, the biggest question is really one about power: who has that power, who doesn’t have that power, and how that power manifests itself in control. What I mean by that is that today, in the digital world, our access to all of our resources is mediated by technology platforms. And the trouble is that there are only a few companies who really control those infrastructures, the Googles and the Amazons and the Facebooks, GAFA as they’re called, and they are accumulating more and more power as they get larger.

They use their profits to build their cash reserves and to buy the best talent, and they start to hoover up data in domains they don’t currently operate in, making themselves more powerful, making themselves more of an essential part of how we interface with the world.

And the reason that’s a problem is that they don’t really have any democratic accountability. They don’t have a group of rulers who we can vote out.

Regulation is struggling to keep pace with innovation. Are our current institutions effective and efficient enough to deal with the challenges of the information age?

Regulation is struggling to keep up with the rate of progress, and it is a peculiarly fast, information-age rate of progress. We would normally expect regulation to be slower than progress; that’s the nature of progress. Our regulatory institutions were designed for a slower-moving world. Now, the good news is that, as we have seen in the Digital Future Society workshops today, governments and civic institutions are starting to realize that and are actively going out to manage this. How can we be more agile? How can we be more skilled up? It’s quite risky, but society, stakeholders, institutions, even companies are starting to wake up to the importance of having this regulatory conversation. So it is just possible that we might be able to get a good framework in place before too much harm is done.

According to the World Economic Forum, industry structures and business models are being disrupted by innovation, changing cost structures, lower barriers to entry and shifting value pools. How can companies continue to create, distribute and capture value sustainably in this new environment?

Given the rate of disruption, companies do have to change very quickly, and it’s not clear that all of them will be able to make the changes required. After all, when we saw the transition through the industrial age, many companies didn’t make the leap. Companies come and companies go. But the critical thing firms need to think about, as we enter the information age, is what it actually takes to serve a human need, a customer need, a segment need. Given that we’re in the information age, not the industrial age, therein lies a whole set of new operating practices, capabilities and business strategies. How do you work with data, how do you acquire data, how do you build products that serve a need rather than fill a distribution supply chain and a retailer’s shelf at the end of it?

Let's talk about the big players. You were mentioning platforms such as Google or Facebook that have grown exponentially, swallowed rivals and revolutionized the world of media. How could these powers be measured, checked and communicated better?

Certainly in the West, our digital experience is dominated by four or five very large American companies and they have grown extremely quickly in terms of their core businesses. I think the only way that we will check their power is through an ongoing process, some of which will be tactical blocking and tackling. So we don’t like how Android has integrated here or we’re concerned about advertising fraud there, some of which will be much larger scale review of their activities: is there too much market power? But it’s an ongoing process to get to what a final end state will be.

It’s important for us to recognize that these companies are now accumulating power at a scale that will at some point make it difficult for national governments to have any fine-grained tools to deal with them. We still have those tools and we need to act now.

Companies like Facebook or Google have been under increasing levels of public scrutiny over the last few years. How can they re-establish trust?

For Facebook and Google, re-establishing trust will be the same as it is for any other entity or organization. Trust is established over time. It’s established through your behaviors, your actions, and how those actions are perceived. What the firms have currently been doing is stonewalling, blocking, denying and giving incomplete accounts of their behavior, while at the same time engaging in similar types of activities. They haven’t even started to think honestly about how they go about re-establishing trust. The re-establishment needs to start with behavior: “We’ve acquired too much data. We admit we’ve been acquiring health data in order to derive health data products. We’re not going to acquire more data. We are going to make it a one-click button for you to delete all your data from our systems if you need.” Those are behaviors that speak much, much more loudly than lobbying firms and shiny PR people. And we’ve yet to really see that.

The convergence of big data and AI has been said to be the single most important development in shaping our future. Why and what have been the most important advancements as a result of this technology?

The convergence of AI and data is really the critical catalyst of this exponential information age. Over the last 50 years, we have slowly and then increasingly quickly digitized the world in which we live. We as individuals moved all of our communications into electronic systems, all of which are data. And this data resides on computer systems. So it’s available to be used by AI, which is just another processing technique. But the reality is that every new product, service or innovation we create has now got data at the heart of it. Everything is now part of a digital data system and the technology that lies in common across all of them to analyze processes and make recommendations as to what to do next is, generally speaking, called artificial intelligence. In just less than a decade, we’ve gone from this being science fiction to being a quotidian task that billions of us do many times a day. And I think that’s quite a remarkable trajectory.

People are worried about AI and robots taking over our jobs. Yet many experts argue that humans will always have certain skills that cannot be replaced, such as creativity, insight and empathy. In what ways is AI more powerful than natural intelligence?

Natural human intelligence is quite a remarkable thing in its conception. It’s so capable of doing so much; think of it as a 360-degree field of view. AI systems, by contrast, are narrow: they can play chess really well, or they can navigate around a warehouse really well, or they can translate from one language to another really well, but they can’t do all of those things together. The thing that makes human intelligence unique is its generality, its adaptability and its power efficiency. We run our lovely brains on 25 watts of energy. AI systems today can’t cross from one domain to the next, and they are not energy efficient in any meaningful way: they require thousands of watts. They also require a lot of data compared to the amount of data that we need to learn from. And I think the final thing about AI systems that we should never lose sight of is that without a human to program them and a human to plug them into the power, they are nothing, and will remain that way for some years to come.

Do you think AI could have one day emotional intelligence?

Well, I think what AI could do in this environment is look at a variety of sensor cues we might be able to get from a subject: your facial expression, knowledge of your history, your skin response, your pulse, your blood pressure. Provided there’s enough data, it might be able to make a stochastic guess as to what your emotion might be, but it’s pretty rough and ready. These may well be engineerable problems, but we don’t know how to solve them today, and until we do, there’s no way we could actually deliver that with authenticity. What I find disappointing about that last point is that I just don’t think the diligence will have been done. The questions won’t have been asked: do we even want to do this? Should we be doing this, and does this product actually do what is promised?

Should data have a price, or does this pose a risk of creating inequalities between those who need to sell their data and those who do not?

Data today already has a price, both a financial price and a price in access, inclusion and fairness. Sometimes people have said that it’s a little bit unfair, because if you can afford to buy an iPhone, which has all of Apple’s privacy protections around it, your data is in some sense less likely to be used for monetization or marketing than if you buy an Android. And to some extent that’s true. But the other way data has a price, and has had for a long time, is the way in which it is used for decision-making in these large-scale, complex societies in which we live.

You once said that the future is not really about technology at all; it's about our culture, ethics and values, and how we choose to deal with those. What effort do you think is key to building a more equitable digital future?

If we want to build a more equitable digital future, we need to start by understanding what a more equitable future would look like. What does that mean in our culture? What are the measures we would use to see whether we were getting there? Once we have figured that out, we can start to design the rules and the guardrails for the digital, and the few non-digital, services that would have to be built according to them. We need to establish a set of regulatory principles against our definition of what safety, or in this case equity and justice, means for our society.