- Publication date
- 20 September 2019
- Joint Research Centre
- Topic area
- Trustworthy AI
On 20–21 May 2019, the Police and Human Rights Programme (PHRP) of Amnesty International Netherlands held an expert meeting on predictive policing.
Technological developments increasingly find their way into today's policing, and artificial intelligence (AI) is one of them. Police feed data sets of different sizes into algorithmic models that are supposed to predict either places where crime is most likely to occur in the near future (place-oriented predictive policing) or persons who are likely to become involved in crime (person-oriented predictive policing). While such models are increasingly developed and used by police agencies across the globe, many questions remain: about the accuracy of the outputs, possible discriminatory biases in the underlying data sets and/or in the models using the data, their effectiveness at actually predicting crime, and more. Answering these questions is particularly important given the serious human rights impact the use of such technology can have with regard to data protection and the right to privacy, the right to liberty and security, freedom from discrimination, freedom of expression and information, and the right to a fair trial and an effective remedy.
The PHRP organised this meeting with a view to bringing together experts from different fields (police, criminology, data science, academic research, civil society) to discuss some key questions in this area. This report summarises the main elements of the discussion with a view to nurturing reflection on critical problems that need to be researched in depth and addressed, in particular from a human rights perspective.