Q&A

Saleem Alhabash

“We’re not realizing how much data is worth”

Tags: 'Data dividend' 'Data ethics'


Reading Time: 7 minutes

Saleem Alhabash is an Associate Professor at Michigan State University's Department of Advertising & Public Relations, where he also co-directs the Media and Advertising Psychology (MAP) Lab. His research focuses on the processes and effects of new and social media within the context of persuasion, and more specifically on the cognitive and emotional responses and psychological effects associated with their use.

Following California Governor Gavin Newsom’s proposal, the idea of consumers getting paid for their digital data is becoming a hot topic. What are your thoughts on this?

I think it's a really bold statement. If we think in economic terms about the amount of money being generated from our data: we as consumers knowingly forgo our right to privacy to be part of the social structure and social fabric of the digital era, but we don't really realize how much that data is worth. We do go through a kind of transactional contract.

Think of all the important things we get out of social media; it is part of everyday life, and the price we pay for that 'free' service is that we forgo the right to our data.

So I think it's only fair that technology companies be more socially responsible. We realize that the tech companies, advertising agencies and regular companies that use our data to target people are making money out of it; therefore, there has to be some reward for the individuals who are the engine of this economy.

What is our data worth? Is data all the same or is some more valuable than others?

I'm not ready to give out a formula, because I don't think I'm equipped to do that. I think economists have to be involved, but not just economists. It has to be a group of individuals across multiple disciplines, including market and tech experts, but also experts on people (e.g. sociologists and psychologists). Are all data the same? Of course not. Some data is readily available online, but other, more private data is far more revealing of who we are as consumers, and potentially more harmful. Are all the data being collected and archived on 23andMe worth the same as my picture and my name? Is my home address worth more than what kind of movies I like? There is a certain balance that we have to achieve. We have to weigh the potential harm that could be inflicted on individuals against how private the information is, because for some groups, their private information can put them at a severe disadvantage.

I'm fine having my picture on Google Images because I have my own website where I publicly post my pictures, but if my face were associated with where I live, then I would have a big problem with that, because that is something I did not sign up for.

I truly believe that not all data are created equal, and the marketers, advertisers and tech companies know that. They know that for sure because they have all of that data. Look at 23andMe (DNA genetic testing and analysis) or Fitbit, for example. If you have people's DNA, or their heart rate data for 12 or 24 hours a day, what are the ramifications in terms of knowing that level of private information about an individual?

It's not just about a single data type; it's about the ability to merge and associate data. So if I'm Fitbit and I have access to someone's Google Calendar and their heart rate data, alongside what they're posting online, then what insights am I developing out of this particular triangulation of data?

What are the benefits of having multiple data points for the same individual? Am I able to look at multiple data sources and link a single data point for the same individual with other sources of data across time? It’s not just the levels of data that are different, but also the levels of merging and triangulating the data and the possibilities that result from that.

Who should support a data dividend? Only technology companies, or any company that benefits from the knowledge of our behavior?

I think the mechanics of it are going to need a lot more discussion and deliberation. People have just normalized the idea that there should be a responsibility beyond symbolic gestures and PR moves.

Any company that is making money off of individuals' data should be held accountable; that is the price of entry into that marketplace.

Do you think we'll be paying to protect our data in the future? And if so, is privacy turning into a luxury good that only some can afford to protect?

The digital divide is not something new. The difference between the ‘haves’ and ‘have nots’ has always spilled over to digital technologies and information communication technologies.

I do not think that data and privacy are very different from access to mobile phones or to digital services, whether physical or cloud-based, but I do think exchanging privacy for money is a valid concern.

I can tell Facebook: I'm going to pay x number of dollars every month, and I don't want anyone to touch my data; I want you to forget about me in your algorithm and in all of your data sets, but I still want to partake in the community that is Facebook. Other people might not be able to do so, and therefore their data becomes a commodity in the marketplace. If I am of a certain ethnic group, will I be treated differently by the social media algorithm? There are experiments being done in information and computer science that actually show how changing certain parameters within an ad can make a difference in who gets it. But there is also another level: access to information.

Is there going to be differentiation in who gets premium access to higher-quality information? Is the algorithm going to be equitable in terms of understanding the social context behind cultural and social stereotypes? If I am part of a marginalized ethnic group, am I going to receive the same interest rate in a bank's loan ad as someone from a dominant group? If I live in a neighborhood characterized as low in socioeconomic status (SES), would the terms of the loan be different than if I lived in an affluent part of town? These are pressing questions right now, because with the proliferation of algorithmic curation, computational advertising, and programmatic buying, data about consumers are not ambiguous anymore.

We can access that data, and therefore the whole concept of advertising waste or incidental exposure becomes negligible and irrelevant. That raises all these ethical questions: did they knowingly give this interest rate to a marginalized person, or was it just the algorithm? Who holds the responsibility? Is it the advertiser that agrees to reach people that way, or the tech company that created the system that enables potentially discriminatory behaviors?

I think that there could be more digital divides within the realm of privacy and in the capacity of an algorithm to perform in discriminatory ways. There is a need for regulation that puts the best interest of the consumer front and center. Who is advocating for the billions of social media users? At the end of the day, we have to think more comprehensively about creating a system that doesn't just work for and reward the tech companies, allowing them to become much richer by benefiting from consumer information and data. We have to keep the consumer in mind.

Social scientific and medical research have long been regulated by ethical codes implemented by institutional review boards (or ethics boards). But much of the work in computer science is exempt from that, as it does not technically qualify as research dealing with human subjects. So at which point are we going to see a shift to 'this is a real problem, and we need to subject our research and experimentation, in academia and in industry, to ethical guidelines'? How can we create ethical guidelines for this type of work? We're conducting an experiment where we're going to be putting some stimuli on Instagram and seeing how people respond to them. Technically, we don't have direct contact with any human in the study. However, we are getting in contact with their data, with traces of their livelihoods. In this case, we worked closely with our local IRB to get the study protocol approved in a way that recognizes the risks and weighs them against the benefits to society and to people.

The role of ethical guidelines is to identify the levels of risk and the benefits.

And if the benefits outweigh the risks, then there is ethical ground for conducting that type of work. But is that the same as Cambridge Analytica using people's data for the sake of changing their minds about an election?

So there has to be greater investment in, discussion of, and enforcement of ethical guidelines in tech and tech-related industries. More importantly, we have to invest in teaching future professionals in the tech, advertising and marketing industries about the nuances of ethics and ethical decision-making. We don't do enough of that. We have to shift from regarding the world as zeros and ones to understanding how our actions actually impact people's lives and have big ethical and moral ramifications for individuals, for society, and for the world in general.

According to your recent study, students would ask for an average of $2,076, while adults would ask for $1,139, to deactivate their Facebook account for a year. People may not pay for Facebook, but they clearly still value it. Is it only right, therefore, that we pay for social media with our data? Why do people place such value on Facebook?

There are multiple reasons. In general, when thinking about the privacy and security of any type of technology, whether it's using a mobile application or responding to phishing emails, the average user has a fatalistic attitude toward giving out their data. They understand the risks. Not one person in my class cares when I talk about how much data social media platforms have on us. Of course they get charged up, but it's not a big surprise to them.

People know that all of their behaviors are recorded, transcribed and archived. The difference is in seeing the extent of what tech companies or advertisers are doing with it, but a lot of people feel that it is an uphill battle, that they just don't have the capacity to actually do anything about it: "It is too complex, I don't understand how it works." Maybe I understand that they're collecting my data, but I'm not going to read the whole terms of service before clicking "I agree."

The other side of the transaction is that these technologies have become an extension of our lives and of our minds. They have become part of our daily rituals. They become habits, and habits are extremely hard to break.

The whole industry of marketing revolves around understanding people's habits and leveraging them to achieve a defined goal. Technology is no different. We become habitual users, and whatever our feelings about technology, it can still be a part of who we are and what we do every single day.

There's a lot going on on social media, and people who go through a digital detox will often feel some sort of isolation in social settings, because everyone is talking about something that went viral or something that was shared on Instagram or Snapchat. So it's not just my relationship with that device and technology; there's a great social aspect as well. It's part of who we are as social beings. We all want to fit in. We all want to do the things that make us popular and loved and give us a sense of belonging. Social media have become the beacon of that; whether real or unreal, it is nonetheless the place where we exercise our social instincts, relationships and rewards.