Will predictive systems profile you as a criminal?

Police forces and criminal justice authorities across Europe are using data, algorithms and artificial intelligence (AI) to ‘predict’ whether certain people are at ‘risk’ of committing crime or likely to commit crimes in the future, and whether and where crime will occur in certain areas.

We at Fair Trials are calling for a ban on ‘predictive’ policing and justice systems. Take the quiz below to see if you’d be profiled or seen as a ‘risk’ – and find out how to support our campaign.

Could you be profiled as at ‘risk’ of committing a crime? Take our quiz and find out.

Unlike the authorities, we will of course not collect or retain any information about you or your answers!

What are predictive policing and justice systems?

Through our research, we know that more and more police forces and criminal justice authorities across Europe are using AI and other data-driven systems to profile people and try to ‘predict’ whether they might commit a crime in the future or are at ‘risk’ of criminality, and to profile areas to ‘predict’ whether crime will occur there. These systems have been shown to rely on discriminatory and flawed data and profiles to make these assessments and predictions. They try to determine your risk of criminality, or predict the locations of crime, based on:

  • Education
  • Family life and background
  • Neighbourhoods and where people live
  • Access or engagement with certain public services, like welfare, housing and/or healthcare
  • Ethnicity
  • Nationality
  • Credit scores, credit rating or other financial information
  • ‘Contact’ with the police – including as a victim of or witness to a crime, or as a suspect who was never charged or convicted

These systems have been used by police and criminal justice authorities to decide whether to target or take action against people – including children – and areas or locations, such as:

  • Surveillance or monitoring
  • Regular stop and search
  • Questioning, whenever a crime is committed in the area
  • Denial of welfare or other services, including having their children taken away by social services
  • Arrest

Predictive systems used in policing and criminal justice:

  • Reproduce and exacerbate discrimination based on race, ethnicity, nationality, socio-economic status and other unjust factors
  • Undermine everyone’s right to be presumed innocent until proven guilty by actual evidence
  • Are secretive, non-transparent and unaccountable

The Artificial Intelligence Act (#AIAct)

The European Union (EU) is discussing a new law to regulate the use of AI. The Artificial Intelligence Act will bring in some safeguards, limiting and even banning some uses of AI to protect people and their rights in the EU, but it does not go far enough. We are calling for a ban on predictive policing and justice AI systems to be included in the #AIAct, alongside several other safeguards. Many MEPs in the European Parliament agree with us, but we need to persuade more of them that these flawed systems must be banned.

Watch: AI, data and policing in Europe

How our algorithm works

These automated and algorithmic systems are often secret and opaque, with authorities refusing to provide information on how they work. We, however, will of course explain our ‘algorithm’. Every question in our example profiling tool is matched directly to information that is actively used by law enforcement and criminal justice authorities in their own predictive and profiling systems and databases. The reality is that matching just a few of these pieces of information (as asked by our questions) can be enough for a person to be marked as a ‘risk’. Likewise, an area that fits a similar profile will be marked as at ‘risk’ of crime occurring. These assessments are obviously discriminatory and unjust – we have built our own transparent and explainable version to show just how discriminatory and unjust these systems are.

Our algorithm works as follows:

  • 0–3 ‘Yes’ answers: ‘Low’ risk outcome
  • 4–5 ‘Yes’ answers: ‘Medium’ risk outcome
  • 6–10 ‘Yes’ answers: ‘High’ risk outcome
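
To make the scoring completely transparent, here is a minimal sketch in Python of how a checklist-style score like the one above can be computed. It is purely illustrative: the factor names are hypothetical examples of the kinds of information listed earlier, not the actual quiz questions, and the real systems used by authorities are far less open about their inputs and thresholds.

```python
# Illustrative sketch only: counts 'Yes' answers and maps the total to a
# risk label using the same bands as the quiz above.
# The factor names below are hypothetical examples, not the actual questions.

def risk_label(yes_answers: int, total_questions: int = 10) -> str:
    """Map a count of 'Yes' answers to the quiz's risk bands."""
    if not 0 <= yes_answers <= total_questions:
        raise ValueError("yes_answers must be between 0 and total_questions")
    if yes_answers <= 3:
        return "Low"
    if yes_answers <= 5:
        return "Medium"
    return "High"

# Example profile: 'Yes' to six of the ten hypothetical questions.
answers = {
    "lives_in_profiled_neighbourhood": True,
    "prior_police_contact": True,          # even as a victim or witness
    "receives_welfare_support": True,
    "low_credit_score": True,
    "left_school_early": True,
    "family_member_with_conviction": True,
    "foreign_nationality": False,
    "recent_house_move": False,
    "unemployed": False,
    "under_25": False,
}

score = sum(answers.values())
print(score, risk_label(score))  # prints: 6 High
```

Writing the logic out this plainly shows how crude such a checklist is: a handful of everyday circumstances, none of them evidence of any crime, is enough to tip a person into the ‘High’ risk band.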
