Algorithmic governance: proceed with caution

Governments are facing a health crisis, an environmental crisis, a financial crisis and a digital crisis… all at the same time. Algorithmic decision-making was already gaining ground in the public sector, and as the coronavirus pandemic accelerated the world's shift towards digitalisation, it has become commonplace.

These systems make crucial decisions about our health, welfare, benefits and other government services. When decisions are entrusted to algorithms without human supervision, or without ensuring they adhere to established ethical norms, things can sometimes go very wrong. Below are two examples:

UK A-Levels fiasco

In 2020, the outbreak of the Covid-19 pandemic had a major impact on education systems worldwide. Given the critical situation, the UK government decided not to hold exams for students aged 16–18. As an alternative, the UK's exam regulator, Ofqual, developed a grading algorithm. Teachers were asked to provide an estimated grade and a ranking for each pupil. These were put through the algorithm, which factored in each school's performance over the previous three years.
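
To make the mechanism concrete, here is a minimal sketch of the general approach described above, assuming a simple rank-and-distribution rule: pupils are ordered by the teacher's ranking, and grades are allocated so the cohort matches the school's historical grade distribution. The distribution, pupil names and rounding rule are invented for illustration; Ofqual's actual standardisation model was considerably more elaborate.

```python
# Simplified illustration, not Ofqual's actual model: grades are allocated
# down the teacher's ranking so the cohort matches the school's historical
# grade distribution.

def allocate_grades(ranked_pupils, historical_distribution):
    """Assign grades (best-ranked pupils first) following the school's
    historical share of each grade."""
    n = len(ranked_pupils)
    grades = []
    for grade, share in historical_distribution:  # e.g. ("B", 0.3) = 30% got a B
        grades += [grade] * round(share * n)
    # Pad or trim to exactly n pupils (rounding can leave a remainder).
    grades = (grades + [historical_distribution[-1][0]] * n)[:n]
    return dict(zip(ranked_pupils, grades))

# Hypothetical school where nobody achieved an A in the last three years:
# even the top-ranked pupil is capped at a B, whatever the teacher estimated.
history = [("A", 0.0), ("B", 0.3), ("C", 0.5), ("D", 0.2)]
pupils = [f"pupil{i}" for i in range(1, 11)]
print(allocate_grades(pupils, history))
# {'pupil1': 'B', 'pupil2': 'B', 'pupil3': 'B', 'pupil4': 'C', ..., 'pupil10': 'D'}
```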

The strategy aimed to ensure that, even without exams, grades would be consistent with how schools had performed in the past, on the assumption that teachers were likely to be generous in assigning estimated marks, which might lead to grade inflation. Prime Minister Boris Johnson defended the system as “robust”, amidst widespread criticism from schools, colleges and MPs.

When A-level grades were eventually announced in England, Wales and Northern Ireland on 13 August, nearly 40% were lower than teachers’ assessments. To make matters worse, the downgrading affected state schools significantly more than private schools. Because the algorithm was based on historical data from an unequal education system, a high-performing student at an underperforming school was likely to have their results automatically downgraded.

Austrian Employment Agency

The profiling system used by the Austrian employment agency Arbeitsmarktservice (AMS), known as the AMS algorithm, is an example of a public employment service using algorithmic profiling models to predict a jobseeker’s probability of finding work, in a bid to cut costs and improve efficiency.

Based on statistics from previous years, the system calculates jobseekers’ future chances in the labour market. It looks for connections between successful employment and jobseeker characteristics, including age, ethnicity, gender, education, care obligations and health impairments. It then classifies jobseekers into three groups: those with a high chance of finding a job within six months, those with a one-year prospect, and those likely to be employed within two years.
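
The sketch below shows the general shape such a profiling model can take: a score is computed from weighted jobseeker characteristics and then bucketed into the three groups. The characteristic names, weights and thresholds are invented for illustration; the real AMS model and its parameters differ.

```python
# Illustrative sketch only: weights, baseline and thresholds are invented,
# not those of the actual AMS algorithm. A real system would estimate the
# weights from historical data, which is exactly how past inequities get
# baked into the predictions.

def employment_score(jobseeker: dict) -> float:
    """Combine jobseeker characteristics into one score (higher = better prospects)."""
    weights = {
        "age_over_50": -0.15,
        "female": -0.10,
        "care_obligations": -0.15,
        "health_impairment": -0.20,
        "higher_education": 0.25,
    }
    base = 0.6  # hypothetical baseline chance of re-employment
    score = base + sum(w for key, w in weights.items() if jobseeker.get(key))
    return max(0.0, min(1.0, score))

def classify(jobseeker: dict) -> str:
    """Bucket jobseekers into the three groups described above (thresholds invented)."""
    score = employment_score(jobseeker)
    if score >= 0.66:
        return "high prospects (job within ~6 months)"
    if score >= 0.25:
        return "medium prospects (job within ~1 year)"
    return "low prospects (job within ~2 years)"

print(classify({"higher_education": True}))                  # high prospects
print(classify({"female": True, "care_obligations": True}))  # medium prospects
```

Note how, in this toy version, being a woman with care obligations lowers the score directly: the pattern that drew the strongest criticism of the real system.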

This algorithm is particularly controversial, as it is designed to prioritise efficiency over inclusion. It was strongly criticised for ranking jobseekers who are more likely to get and keep a job above those who need support the most, correlating gender, care work, a fragmented work history and similar factors with low employment prospects. Predictive systems based on past hiring decisions reflect institutional and systemic biases, and can both reveal and reproduce patterns of inequity, penalising disadvantaged and minority groups, including women.

What role should algorithms play in the public sector, and how can we mitigate the risk of discrimination?

Algorithmic systems tend to be promoted as efficient, low-cost, smarter, faster, more consistent and more objective. However, the case studies above demonstrate that this is not always the case in practice, and they highlight how important it is for governments to have robust ethical frameworks in place before implementing such systems: creating the right data sets, counting those who are excluded, and ensuring a diverse group of people participates in the design process are essential, as are checking for potential bias, making each decision transparent and explainable, and ensuring that a human can be held accountable for any negative consequences.
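
As one concrete example of the bias checks mentioned above, the sketch below compares favourable-outcome rates across groups and applies the widely used four-fifths (disparate-impact) rule. The audit data is invented, loosely echoing the A-levels case; a real fairness audit would go well beyond this single metric.

```python
# Minimal sketch of one bias check: compare outcome rates per group and
# flag a disparate-impact ratio below 0.8 (the "four-fifths" rule).
from collections import defaultdict

def outcome_rates(records):
    """Rate of favourable outcomes per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome_ok in records:
        totals[group] += 1
        favourable[group] += outcome_ok
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest to the highest group rate; < 0.8 is a common red flag."""
    rates = outcome_rates(records)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: (school type, grade not downgraded?)
records = [("state", True)] * 55 + [("state", False)] * 45 \
        + [("private", True)] * 85 + [("private", False)] * 15
ratio, rates = disparate_impact(records)
print(rates)           # {'state': 0.55, 'private': 0.85}
print(f"{ratio:.2f}")  # 0.65 -> below 0.8, flag for review
```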

For a more in-depth exploration of the topic, you can access some of our reports below:

  • “Governing Algorithms” aims to contribute towards an inclusive development of AI and help restore and strengthen trust between policymakers and the public.
  • “Government” aims to provide a short overview of the field of automated decision-making systems (ADMS) and establish a set of guidelines for their application in the public sector, in order to ensure the protection of citizens’ existing rights and interests.
  • “Exploring gender-responsive designs in digital welfare” explores how design can play a critical role in highlighting system blind spots and can encourage practices that foster gender responsiveness.

What do the experts have to say about it? Check out the opinions of the world’s leading voices here!