Towards accountable algorithms: tools and methods for responsible use


As the implementation of algorithms advances in an increasing number of contexts and sectors, the need for debate on the ethical aspects and governance of artificial intelligence grows. Amid promises of improved efficiency and effectiveness, algorithmic systems can contain biases and make mistakes that have unwanted effects on people’s lives. Algorithmic evaluations can help mitigate these effects by detecting problematic issues such as discrimination against population groups, distortion of reality and exploitation of personal information.

In recent years, several academic papers and reports have sought to detail what algorithmic evaluations should include. In practical terms, the question that guides this research, and which has a clear practical focus, is the following: How can algorithms be evaluated to detect potential problems they contain and/or issues that may arise from their use, and how can these be mitigated?

All in all, this report is an effort to understand the implications of algorithmic evaluations in a context marked by increasing use of AI in different areas of life.

To bring clarity to this complex issue, this report:

  • provides an overview of methods and tools that can be used depending on the evaluator’s objectives and available resources,

  • explains the ecosystem of stakeholders and sectors involved, within a general framework for algorithmic accountability, and

  • offers six recommendations to improve algorithmic evaluations in the future.

