Algorithmic discrimination in Spain: limits and potential of the legal framework


The use of applications based on artificial intelligence (AI) is causing mounting concern and posing both ethical and technological challenges. One such challenge is algorithmic discrimination: the discriminatory outcomes that can result from the use of automated or semi-automated decision-making systems and other AI-based applications.

While the regulatory framework for the use of AI is still in development, it is important to define algorithmic discrimination and evaluate how it can be addressed.

Discrimination should be tackled directly when developing regulations for the use of AI, and in connection with the existing anti-discrimination mechanisms under the Spanish Constitution and statutory law.

This report arises from the particular need to understand the legal implications of these systems and whether Spain’s anti-discrimination and gender equality laws can adequately tackle algorithmic discrimination. The analysis thus focuses on algorithmic discrimination, while also addressing the disagreements among technologists, lawyers and public policy analysts in their understanding of the problem, and the difficulties that may result.
One of the fundamental problems in identifying and combating algorithmic discrimination is the opacity of AI systems and the logic followed by the companies that develop and own them.
Algorithmic discrimination comprises distinct profiles that require targeted solutions. Data protection regulations are clearly inadequate to address the problems it poses.

Data quality is the primary source of algorithmic discrimination. The use of incomplete, biased, incorrect or outdated data may lead to discriminatory outputs. Moreover, AI designers and developers may introduce their own biases when designing the system or preparing training samples. Equally, the data the system accesses may reflect entrenched social hierarchies, or incorrect or inadequate representations of certain social groups.
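As a minimal illustration of how poor data quality propagates, the sketch below uses a small, entirely hypothetical set of historical hiring decisions as training labels; any model fitted to such labels would learn the past disparity as if it were ground truth.

```python
# Hypothetical historical hiring decisions used as training labels.
# Each record is (group, qualified, hired); the data encodes past bias:
# equally qualified candidates from group "B" were hired less often.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def hire_rate(records, group):
    """Share of qualified candidates from `group` hired in the historical data."""
    hired = [h for g, q, h in records if g == group and q]
    return sum(hired) / len(hired)

for g in ("A", "B"):
    print(f"group {g}: hire rate among qualified candidates = {hire_rate(history, g):.2f}")
# Output: group A = 1.00, group B = 0.33. A model fitted to these labels
# learns the disparity as ground truth and reproduces it at scale.
```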

AI systems present a series of challenges that make it difficult to address the negative impact of discrimination. At times it is difficult to understand the root cause of discrimination, as AI systems risk creating a ‘black box’ effect that makes outcomes incomprehensible even to experts. Because automated systems apply decisions at a greater scale and a faster pace, they risk amplifying and entrenching existing patterns of inequality.

The definition of algorithmic discrimination used in technical contexts is based on the idea of bias or error (in the design of the model or as a result of poor data quality), while legal and social ideas of discrimination focus on the concept of unfair disadvantage. Consequently, what technologists and lawyers mean when they refer to algorithmic discrimination may diverge, and the solutions each seeks may be mutually irrelevant or pull in different directions. The interdisciplinary collaboration needed to solve a problem this complex requires that divergence to be recognised and addressed.
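To make the technical notion concrete, the hedged sketch below computes one common bias measure, the demographic parity difference, on hypothetical model outputs; whether the resulting gap constitutes an unfair disadvantage in the legal sense is a separate question the metric cannot answer.

```python
# Hypothetical model outputs: (protected_group, positive_decision).
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group):
    """Fraction of positive decisions received by members of `group`."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# "Bias" in the technical sense: a measurable gap between selection rates,
# here the demographic parity difference.
gap = selection_rate("A") - selection_rate("B")
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
# The legal question is different: does this gap amount to an unjustified
# disadvantage on a protected ground? No metric settles that by itself.
```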

Some uses of AI have had discriminatory impacts, but to date these have been addressed through data protection instruments rather than under anti-discrimination regulations. Because protected personal data categories and grounds for discrimination overlap in some cases, data protection regulations can sometimes be used in cases of discrimination in the application of AI systems. Nonetheless, data protection regulations have certain limitations and pose a number of problems rendering them inadequate for discrimination cases. For example, algorithm-based decisions can have discriminatory effects without even using personal data: algorithms establish probabilistic patterns through inferences and proxies in all kinds of mass data processing.
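The sketch below illustrates this proxy effect with hypothetical data: the protected attribute is never consulted, yet a rule keyed to a correlated variable (here, an invented postcode split) still produces disparate group outcomes.

```python
# Hypothetical credit applicants: the protected attribute exists in reality
# but is never consulted by the decision rule below.
applicants = [
    {"postcode": "28001", "group": "A"}, {"postcode": "28001", "group": "A"},
    {"postcode": "28038", "group": "B"}, {"postcode": "28038", "group": "B"},
    {"postcode": "28038", "group": "A"},
]

def approve(applicant):
    # A pattern inferred from mass data: penalise one (invented) postcode.
    # No protected personal data category is used, only a correlated proxy.
    return applicant["postcode"] != "28038"

for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"group {g}: approval rate = {rate:.2f}")
# group A = 0.67, group B = 0.00: the proxy carries the disparity.
```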

Algorithmic discrimination shares the problem of intersectionality with other areas of discrimination, but in an exacerbated form. The discrimination becomes more refined, more granular and highly intersectional, and extends beyond the limited number of protected categories. The existing anti-discrimination legal framework is thus ill-equipped to deal with one of the salient features of discrimination cases in the context of AI: its high degree of intersectionality or granularity.
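A small, hypothetical audit illustrates the point: rates computed along a single protected axis show only a modest gap, while the intersectional subgroup bears the full disadvantage.

```python
from itertools import product

# Hypothetical decisions tagged with two protected attributes: (sex, origin, outcome).
records = [
    ("woman", "migrant", 0), ("woman", "migrant", 0),
    ("woman", "national", 1), ("woman", "national", 1),
    ("man", "migrant", 1), ("man", "migrant", 1),
    ("man", "national", 1), ("man", "national", 0),
]

def rate(**filters):
    """Positive-outcome rate for records matching all given attribute values."""
    keys = ("sex", "origin")
    matched = [r[2] for r in records
               if all(r[keys.index(k)] == v for k, v in filters.items())]
    return sum(matched) / len(matched)

# Audits along a single axis show only modest gaps...
print("women:", rate(sex="woman"), "men:", rate(sex="man"))  # 0.50 vs 0.75
print("migrants:", rate(origin="migrant"))                   # 0.50
# ...while the intersectional subgroup bears the full disadvantage.
for sex, origin in product(("woman", "man"), ("migrant", "national")):
    print(sex, origin, rate(sex=sex, origin=origin))  # woman/migrant = 0.00
```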

One of the fundamental problems in identifying and combating algorithmic discrimination is the opacity of AI systems and their incomprehensibility. One clear option for intervention is to increase transparency in the use of these technologies. Spain has adopted regulations designed to increase transparency which could serve this purpose. Nevertheless, transparency is only effective if regulations also provide mechanisms to ensure the right to explanation.
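As a rough sketch of what a right to explanation could require in practice (all features and weights here are invented), a transparent linear score can report each factor’s contribution to an individual decision; opaque models cannot do this without additional explanation machinery.

```python
# Hypothetical weights of a transparent linear credit score.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
threshold = 0.5

def explain(applicant):
    """Report the decision and each factor's contribution to it."""
    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "rejected"
    print(f"decision: {decision} (score {score:.2f}, threshold {threshold})")
    for factor, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {factor}: {c:+.2f}")

explain({"income": 1.2, "debt": 1.0, "years_employed": 0.5})
# The affected person sees not only the outcome but which factors drove it.
```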

While the proposals for European regulations underline the risk of discrimination and the impact on fundamental rights, they do not establish legal anti-discrimination mechanisms. In Spanish legislation there are certain laws that could contribute to regulating the risk of discrimination in the use of AI. However, these protections would need to be strengthened, as the law is generally not accompanied by anti-discrimination policies or by monitoring and control mechanisms. Lastly, solutions to mitigate algorithmic discrimination need to go beyond the technocentric approach proposed in EU documentation. Although technological solutions may be effective in debiasing systems, reducing discrimination to a technical issue disregards the social context of discrimination.
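By way of example of such a technical fix, the sketch below applies reweighing (a standard pre-processing technique proposed by Kamiran and Calders) to hypothetical training data; it rebalances group and outcome statistically, but says nothing about whether the labels themselves encode social disadvantage.

```python
from collections import Counter

# Hypothetical training samples: (protected_group, label).
samples = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
n = len(samples)

group_p = Counter(g for g, _ in samples)
label_p = Counter(y for _, y in samples)
joint_p = Counter(samples)

def weight(group, label):
    """w(g, y) = P(g) * P(y) / P(g, y): upweights under-represented pairs."""
    return (group_p[group] / n) * (label_p[label] / n) / (joint_p[(group, label)] / n)

for g, y in sorted(joint_p):
    print(f"group={g} label={y} weight={weight(g, y):.2f}")
# Training on these weights makes group and outcome statistically
# independent, yet the social question of why group B received fewer
# positive labels in the first place remains outside the code.
```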
