Q&A

Ansgar Koene and the risks of the use of black box algorithms in HR

"Algorithms do not make moral judgements [...]. Humans decide how much to prioritize false-positives over false-negatives".

Tags: 'AI' 'Ansgar Koene' 'Future of work' 'labour market' 'Labour market regulation' 'Workers rights'


Reading Time: 6 minutes

Ansgar Koene holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience. His work focuses on the development of design and regulatory tools to maximize the beneficial use of information technologies and minimize negative consequences for people and society.

He has a multi-disciplinary research background, having worked and published on topics ranging from Policy and Governance of Algorithmic Systems (AI), data-privacy, AI Ethics, AI Standards, bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies.

He is currently Global AI Ethics and Regulatory leader at EY and Senior Research Fellow at the Horizon Institute for Digital Economy Research (University of Nottingham), where he is co-Investigator on the UnBias project, on awareness raising and regulation to minimise algorithmic bias, and leads the Policy Impact activities for the Horizon Institute.

Could you give us an overview of your work?

As Global AI Ethics and Regulatory leader at EY my role is to help the AI Lab, and EY as a whole, to develop ethically responsible AI governance methods, produce thought leadership on AI governance and engage with national/regional AI regulatory developments (e.g. European Commission; OECD AI policy observatory).

At Nottingham, I was the lead author on a Science and Technology Options Assessment report for the European Parliament on “A governance framework for algorithmic accountability and transparency” that was published in 2019. I am also a trustee for the 5Rights Foundation.

Are algorithms going to take over the labour market, and which areas if so?

Algorithms are having a profound impact on the shape of the labour market by enabling the automation of increasingly complex tasks that follow predictable and repetitive patterns. The current wave of AI systems, which are essentially complex statistical inference engines, has enabled an acceleration of the labour market automation process into domains that require higher levels of pattern recognition (and pattern reproduction) than was previously possible.

In the audit profession, for instance, Document Intelligence, which combines image processing and Natural Language Processing, is enabling the automation of much of the audit grunt-work of checking that required documents have been filled in properly and transferring information from those documents into databases for further assessment.

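To make that concrete, here is a minimal, illustrative sketch of the kind of check such a system automates: pulling a required field out of OCR'd document text and flagging the document for manual review if the field is missing. The field name and pattern are assumptions made for the example, not a description of any particular EY tool.

```python
import re

def extract_invoice_number(document_text: str):
    """Return an invoice number found in OCR'd text, or None if absent."""
    match = re.search(r"Invoice\s*(?:No\.?|Number)[:\s]+([\w-]+)",
                      document_text, re.IGNORECASE)
    return match.group(1) if match else None

doc = "ACME Ltd\nInvoice Number: INV-2041\nTotal: 1,200 EUR"
number = extract_invoice_number(doc)
# Extracted values would be written to a database; missing fields are escalated.
print(number or "MISSING FIELD: invoice number - route for manual review")
```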

Are algorithmic decisions fair?

Fairness is a moral virtue that relates to the justifiability of decisions. Whether or not an algorithmic decision is fair depends on the context of the decision and the ways in which the system has been optimized by its creators. All algorithmic systems are built and programmed by people.

As the UN Special Rapporteur on Extreme Poverty concluded in his report on the use of algorithmic decision making in social welfare allocation systems, the underlying problem with the use of these systems is that they have been introduced and optimised to implement a cost-cutting agenda at the expense of vulnerable populations. This agenda is a choice made by humans.

Algorithms do not make moral judgements; they execute instructions and optimise towards defined goals. Humans decide how much to prioritize false-positives over false-negatives. If historical data shows a correlation between criminal behaviour and divorced parents, it is humans who must decide if it is fair to penalize someone’s chances of being released on parole based on something their parents did and which they most likely had no control over. It is a human societal choice if we want our criminal justice system to judge individuals based on gross statistical patterns across the population, or if we want a system based on individual cause-and-effect analysis.
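As a hedged illustration of that human choice, the sketch below (with invented scores and thresholds) shows how moving a single decision threshold shifts errors between false positives and false negatives; deciding where to put that threshold is a value judgement made by people, not by the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores and noisy ground-truth outcomes for 1,000 cases.
scores = rng.uniform(0.0, 1.0, size=1000)
actual = (scores + rng.normal(0.0, 0.25, size=1000)) > 0.5

def error_counts(threshold: float):
    predicted = scores > threshold
    false_pos = int(np.sum(predicted & ~actual))   # flagged, but actually negative
    false_neg = int(np.sum(~predicted & actual))   # missed, but actually positive
    return false_pos, false_neg

# The threshold is a human policy choice: lowering it trades false negatives
# for false positives, and raising it does the reverse.
for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```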

How can we guarantee the fairness of algorithms?

In our work on the IEEE Standard for Algorithmic Bias Considerations we frame the work as intending to minimize unintended, unjustified and unacceptable differences in algorithmic decisions. In order to address these, the development and deployment/use of algorithmic systems must:

Have a sufficient understanding of the context of use, including who will be impacted by the system (does this include groups with different vulnerabilities?). This includes the need to re-assess system bias if there are significant changes to the context it is used in.

Have a clear and complete understanding of the decisions that are made during design/deployment and of the implications of the criteria that the algorithm is optimizing for. A typical way in which a lack of diversity in the development team may lead to bias is by failing to recognize that a design choice is linked to cultural customs. Decisions may also end up being made outside of the system design phase, for instance during bug fixes, which might lead to a failure to thoroughly explore the implications of those decisions.

In order to know if a decision or optimisation criterion is fair, it is necessary to explore the justification for that decision and to ensure that the choice can be justified to the people impacted by the use of the algorithmic system.

Check if the justifications for the decisions are acceptable in the context where the system is used. Where possible this should include consultation with representatives of the various groups who have been identified as potentially being impacted by the use of the system.

An important element we added in the ECPAIS Bias certification criteria is the need for ongoing system behaviour monitoring and the ability to perform corrective interventions if unjustified bias is observed.
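A minimal sketch of what such ongoing monitoring could look like in code, assuming decisions are logged together with a group label: compare acceptance rates across groups and raise an alert for human review when the gap exceeds a tolerance. The group names, the 10% tolerance and the print-based alert are illustrative assumptions, not part of the ECPAIS criteria themselves.

```python
from collections import defaultdict

def monitor_decision_rates(decisions, tolerance=0.1):
    """decisions: iterable of (group_label, accepted_bool) pairs."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    rates = {g: accepted[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > tolerance:
        # In practice this would trigger a corrective intervention, not just a print.
        print(f"ALERT: acceptance-rate gap of {gap:.2f} exceeds tolerance; review required")
    return rates

# Example run with hypothetical hiring decisions.
sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
          + [("group_b", True)] * 40 + [("group_b", False)] * 60)
print(monitor_decision_rates(sample))
```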

What's the problem with black box algorithms, and how do they affect the work market?

A key problem with black box algorithms, with regard to fairness, is the difficulty of understanding whether a decision was justified or should be challenged. A recent article by the Panoptykon Foundation nicely explained many of the issues surrounding black box algorithms, including the reasons why many of these systems need not be the black boxes they currently are.

For the work market specifically, black box algorithms pose a problem when they are used in Human Resources contexts such as hiring, performance evaluation and other HR decisions. Similar to the criminal justice example alluded to earlier, many of the current black box algorithms used in these contexts employ machine learning methods that are trained on historical data to detect patterns in resumes or in observed behaviours that, at the population level, are statistically correlated with good employees. With black box algorithms it can be difficult to subsequently investigate whether these correlations make causal sense (or are merely spurious correlations) and whether they are justifiable grounds for the decisions (e.g. do not involve discrimination via proxies of protected characteristics such as sex, race, etc.).
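One way to probe such a system even without opening the black box is to test whether an input feature behaves as a proxy for a protected characteristic in the training data. The sketch below is purely illustrative: the feature names, the synthetic data and the 0.5 correlation cut-off are assumptions for the example, and a real audit would require far more careful statistical and legal analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

protected = rng.integers(0, 2, size=500)                           # encoded protected attribute (hypothetical)
postcode_score = protected * 0.8 + rng.normal(0.0, 0.3, size=500)  # feature built to correlate with it (a proxy)
years_experience = rng.normal(5.0, 2.0, size=500)                  # feature unrelated to it

for name, feature in [("postcode_score", postcode_score),
                      ("years_experience", years_experience)]:
    corr = np.corrcoef(protected, feature)[0, 1]
    verdict = "possible proxy, needs justification" if abs(corr) > 0.5 else "no strong link"
    print(f"{name}: correlation with protected attribute = {corr:.2f} ({verdict})")
```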

Does AI jeopardize the rights of workers?

Referring back to the report of the UN Special Rapporteur on Extreme Poverty, AI jeopardizes the rights of workers if it is used as a tool to implement policy agendas aimed at reducing the rights of workers. This includes the use of black box algorithms as a means for obscuring the rationale behind decisions and frustrating the ability to challenge the justification of those decisions. 

How can we protect the fundamental rights of workers while retaining freedom to innovate new algorithmic methods?

A key part to protecting the fundamental rights of workers in the light of algorithmic systems is to acknowledge that the rights of workers pertain to the justification, outcomes and impacts of decisions, irrespective of the tool that is used in the decision-making process.  

Innovation of new algorithmic methods can improve the efficiency of processing, e.g. resume pre-screening, but does not change where the accountability for decisions lies, nor what the obligations are regarding the fundamental rights of workers. 

Is state intervention needed to guarantee the transparency and accountability of algorithms as well as the liability of developers, or can the industry regulate itself?

The state has an important role in showing how existing rights and regulations remain valid independent of the use of algorithmic systems. In addition to possible general comments for elaborating how existing legislation should be interpreted in an algorithmic context, a key requirement will be upskilling of the regulatory agencies that are tasked with enforcement of existing legislation.

The state remains important to guarantee transparency and accountability of decisions, with or without the use of algorithms. If this requirement is maintained, industry will step in to develop standards and best practices for the transparency of algorithms to enable them to be used while remaining compliant with the general requirements on decision making.

In some select domains or applications the use of algorithms may change the allocation of accountability and/or liability between developers and operators of algorithmic systems. For those cases it will be necessary to provide additional regulatory clarity.

Can workers actively protect themselves from algorithm-based decisions?

An important step for workers to protect themselves from algorithm-based decisions is to demand that their existing rights regarding explanation, justification and challenge of decisions be respected irrespective of the use of algorithms in the decision process. Accountability lies with the people using the algorithms. Algorithm-based decisions are not perfect and deserve to be challenged.