
The AI Government: accountability and ethics

Tags: 'Artificial intelligence' 'Data ethics' 'Digital inclusion' 'Public innovation'


Reading Time: 3 minutes

While governments worldwide keep integrating AI systems into their workflows, the public remains largely unaware of the many ways these systems affect their lives.


In recent years, it has become evident that public administration processes are undergoing a profound transformation. Until recently, all administrative tasks and decisions fell to human beings, but AI is playing an increasingly critical role in governance worldwide.


Saving time and money

National agencies have benefited enormously from implementing artificial intelligence to streamline processes and make routine decisions. Beyond the cost savings of “dispensing” with human employees (administrative and social) when granting public aid, the whole process becomes faster. But there is a catch: early and enthusiastic adoption of AI that is not yet completely safe can lead to multiple errors and injustices. Among them are automation bias, in which officials defer to the algorithm’s decision against their own judgement and experience, and a lack of analysis and nuance.


Trial and error

AI can revolutionise civil administration, but we must remember that it is still a fallible technology. Premature, rushed widespread adoption of AI is also a frequent cause of discrimination. As we discussed in our article Algorithmic gender discrimination and the platform economy, one of the clearest examples is how algorithms discriminate on the basis of gender. There are many other cases in which applying AI without human oversight results in injustices that widen the economic, gender, cultural and social gaps in our welfare state.


Consider justice, one of the branches of the civil service that most often gets bogged down, dragging citizens through procedures that can take years. The implementation of e-Courts in the Netherlands, where both parties agree to submit to an automated dispute-resolution process and a human being supervises and confirms the ruling, gave rise to complaints about the system’s opacity. This lack of information led to a legal battle over whether the technology was appropriate.


In August 2020, A-level students and their families gathered in front of the British Parliament to protest the grades assigned by an algorithm. Exams had been cancelled because of the pandemic, and this was the system the government had chosen to evaluate students. The students found that the algorithm disproportionately penalised those from working-class and disadvantaged backgrounds: A grades rose by 4.7% at independent fee-paying schools, whilst public further education and sixth-form colleges saw a rise of just 0.3%. This marking down left students from disadvantaged backgrounds unable to access university education.


Early and enthusiastic adoption of not-yet completely safe AI can lead to multiple errors and injustices.


Systems that purport to diagnose illnesses or to score a client’s credit from a limited data set can be unreliable, because they tend to miss nuances that can radically change the outcome. In France, the Score Coeur algorithm made it possible to match heart donors with recipients more effectively than the old manual classification of cases as “emergency” or “super emergency”. After a time, however, it emerged that patient data was often not updated, and there were red flags over bogus data in up to one in four patients at one hospital, endangering the objectivity and success of the system.



The necessity for AI oversight

Since “the fuel of artificial intelligence is personal data,” as David Santos, head of the legal office of the Spanish Data Protection Agency (AEPD), aptly puts it, supervision and control policies at the European level are more necessary than ever.


The European Commission has launched several initiatives to contain the risks posed by AI while maintaining competitiveness in global markets, curbing bias and discrimination, and protecting European citizens from predatory practices by the behemoth companies that thrive on data. The EU’s Artificial Intelligence Act will provide the necessary framework for the public and private sectors and become “a global standard determining to what extent AI has positive rather than negative effects on your life wherever you may be”.

