
Chapter 2. RisCanvi (II): Can the next crime be predicted? 


In the first episode of this podcast [link to Chapter 1 page] we learned about RisCanvi, the risk assessment algorithm used in Catalan prisons to evaluate inmates' present and future risk. The system takes inspiration from software that has been in use abroad for years, such as COMPAS or OASys.

 

These algorithms have been a source of debate over their discriminatory potential, since they can replicate systemic bias: in the case of COMPAS, against the US's black population. Algorithmic bias is a cause for great concern with AI tools, especially in areas as sensitive as criminal justice.

 

To date, no study has shown that RisCanvi discriminates against any specific group. However, critics point out that there is not enough transparency or control over the data to assess this. Nor is there real knowledge of the tool's possible errors or of its impact on inmates' rights.

 

In this episode we speak with Nuria Monfort, a lawyer who argues that the variables analyzed by this algorithm are too closely linked to the social reality of prisoners, and that instead of helping them, the tool serves to further exclude them. We also address the limits of automated systems, most critically how they help decide whether a prisoner should be released.

 

So, how does RisCanvi's output influence a probation officer's evaluation or a judge's sentence? With researcher Manuel Portela, we explore a complex relationship: that between human judgment and algorithms. And we talk about how professionals, on occasion, need to be aware of the system's limitations in order to put it to good use.

 

Listen to the complete episode

 

 

See all episodes.