
Who will win the AI race?

Tags: ADMS, Artificial intelligence, Data ethics, Facial recognition, Smart cities


Reading Time: 3 minutes

China and Europe are crafting different models to control how AI affects society; one of them will prevail.

The way societies handle the control, security and surveillance of their inhabitants has changed dramatically in recent decades with the rise of technology and, specifically, Artificial Intelligence, the latest great leap in computing. What once seemed like science fiction, something out of Minority Report, arrived in our lives sooner than expected.

But what exactly do we mean by “Artificial Intelligence”? AI broadly refers to any human-like behaviour displayed by a machine or system; in its most basic form, computers are programmed to “mimic” human behaviour by learning from extensive data drawn from past examples of that behaviour. As its designers might argue, AI can streamline business processes, complete tasks faster, eliminate human error, and much more.

Of course, AI does not think for itself; someone in charge has to give it instructions and feed it data so that, by analysing this information, it can make cross-checked, impersonal, emotionless decisions.

This is the main issue: who gives AI its instructions, for what purpose, and whether it is actually true that machines are infallible. Whether it’s a brand looking to sell products or services to audiences or a government looking to control crime, AI remains a malleable and potent system that can cross any boundary regarding privacy and control over our information.

Facial recognition: ethical dilemmas

One of the most contentious applications of AI is facial recognition, a technology widely used by the Chinese government, one of its biggest supporters.

In 2019, IHS Markit predicted that there would be one billion surveillance cameras worldwide by 2021. According to the same report, 54 percent of the world’s cameras are located in China, which means up to 540 million CCTV cameras. Installed on the streets of the Asian giant’s big cities, these cameras can identify specific people, their gender and ethnicity, and even behaviour such as possible fights; this is how cameras made by Hikvision and Dahua, companies partially controlled by the Chinese state, were able to “target” millions of Uyghur Muslims through software that flags “suspicious behaviour”.

Cameras of the same brands and technology are now being deployed in the United Kingdom: according to Forbes, 73 percent of councils across the UK, 57 percent of secondary schools in England, and 60 percent of NHS Trusts use CCTV systems made by the two companies, as do several universities and police forces. Parliamentarians from different UK parties are now calling for these systems to be withdrawn because of the manufacturers’ alleged involvement in the ethnic cleansing of the Uyghurs and, above all, because of the national security risk they may pose if China can access so much data on UK citizens.

The Chinese model vs the European model

In the name of fighting terrorism, the Chinese government follows its inhabitants even in their most private moments: in Dongguan, southern China, public bathrooms were fitted with toilet paper dispensers that used facial recognition, which were later removed following public outrage.

The Chinese model is one of ultra-surveillance, while the European model, which seeks to unite countries under common guidelines, aims to protect citizens’ privacy. China wants to restrict the ideological influence and the trends these algorithms can drive once they are already on the market; Europe, meanwhile, wants companies to be audited, regulated and controlled before their systems reach a society whose freedom it wants to protect.

The two powers are beginning to work on regulations to guide the behaviour of AI companies in the market. However, regulating a technology like this at the state level is like trying to fence off the sea. Two different sets of rules will struggle to coexist in the same digital world, because they will create two parallel worlds of AI in which the operation, design and very conception of the technology will differ, if not be opposed.

One wonders: who will win the race to control Artificial Intelligence? How will this influence the future of humanity and technology? It is too early to have answers, but it is time to ask the questions.
