The use of algorithms in UK policing has increased hugely in recent years. At least 14 police forces across the UK now use some form of crime prediction software.
But it’s controversial.
There are concerns around transparency and the potential for discrimination. That’s why I’ve developed a tool to govern the use of algorithms in UK policing.
How are algorithms used in policing?
One major use of the technology is in risk assessment of offenders – software analyses data to predict how likely a person is to reoffend. The resulting risk score can be used to decide whether or not to charge someone, or to assign additional police resources to ‘high-risk’ individuals.
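As a rough illustration of how such a risk score might be produced – using made-up features, weights and thresholds purely for the sake of example, not any force's actual model – the underlying logic often boils down to something like this:

```python
import math

# Illustrative only: hypothetical features and weights, not a real force's model.
WEIGHTS = {
    "previous_offences": 0.35,
    "age_at_first_offence": -0.04,
    "months_since_last_offence": -0.02,
}
BIAS = -1.0
HIGH_RISK_THRESHOLD = 0.7  # arbitrary cut-off, chosen here only for illustration

def reoffending_risk(person: dict) -> float:
    """Return a probability-like score between 0 and 1 (simple logistic-model sketch)."""
    z = BIAS + sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def risk_band(person: dict) -> str:
    """Turn the score into the kind of label used to allocate police attention."""
    return "high-risk" if reoffending_risk(person) >= HIGH_RISK_THRESHOLD else "standard"

print(risk_band({"previous_offences": 10,
                 "age_at_first_offence": 16,
                 "months_since_last_offence": 3}))
```

The point of the sketch is that a single number, produced from whatever data happens to be in the system, can determine which 'band' a person falls into.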
Another use is ‘hot spot policing’ – software analyses masses of crime data to work out exactly which areas police patrols should be sent to at different times of day and night.
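At its simplest this is just counting. Here is a minimal sketch – with toy data and a hypothetical grid of map ‘cells’, not a real system – of picking the busiest areas for a given hour:

```python
from collections import Counter

# Toy records of (grid_cell, hour_of_day); real systems use geocoded incidents,
# seasonality and more sophisticated forecasting models.
crimes = [
    ("cell_A", 22), ("cell_A", 23), ("cell_B", 22),
    ("cell_A", 22), ("cell_C", 9), ("cell_B", 22),
]

def top_hotspots(records, hour, k=2):
    """Return the k cells with the most recorded crimes in a given hour."""
    counts = Counter(cell for cell, h in records if h == hour)
    return [cell for cell, _ in counts.most_common(k)]

print(top_hotspots(crimes, hour=22))  # e.g. ['cell_A', 'cell_B']
```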
Perhaps the most controversial use of the technology is what’s called ‘solvability profiling’. Reported crimes are fed into a database, and software weighs how likely each crime is to be solved against how much of a priority that type of crime is. The resulting analysis can determine whether or not the crime is investigated.
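Again as a purely hypothetical sketch – the weights and threshold here are invented – the decision rule can be as blunt as multiplying an estimated chance of solving the crime by a priority weight for that crime type:

```python
# Illustrative sketch of 'solvability profiling'; the numbers are made up,
# not taken from any force's actual system.
PRIORITY = {"burglary": 0.8, "bike_theft": 0.3}   # how much of a priority each crime type is
INVESTIGATE_THRESHOLD = 0.25                      # arbitrary cut-off for illustration

def should_investigate(crime_type: str, solvability: float) -> bool:
    """Combine the estimated chance of solving a crime with its priority weight."""
    return solvability * PRIORITY[crime_type] >= INVESTIGATE_THRESHOLD

print(should_investigate("burglary", solvability=0.5))    # True  -> investigated
print(should_investigate("bike_theft", solvability=0.5))  # False -> not investigated
```

Two crimes with the same chance of being solved can end up on opposite sides of the line simply because of how the priority weights were set.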
What are the problems with the technology?
Using machines to make decisions means there is an inherent lack of transparency in the system – and transparency is vital to justice. For example, if your life is affected because you’ve been judged to be high-risk, how can you challenge that assessment when it has been made by a computer from a complex and opaque combination of data?
There’s also the issue of data quality. Algorithmic predictions rely on data entered by human beings, and with this comes the risk of errors, gaps in information, and prejudices becoming embedded in the software.
It can also lead to discrimination. For example, in 2018 the Met Police were found to be using a tool called the Gangs Matrix unlawfully because it was disproportionately targeting young black men in London.
How are we making it better?
Along with colleagues from other universities, I have developed a tool called ALGO-CARE. It’s a piece of research that sets out a model of algorithmic accountability for police forces across the UK.
The police can use it to help design their software. It requires data scientists to be clear about what they are building – is the tool highly accurate, or is it highly sensitive (and therefore likely to flag up lots of innocent people too)?
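To make that trade-off concrete, here is a toy illustration with made-up scores and labels: lowering the threshold catches more of the people who genuinely go on to offend, but flags more innocent people as well.

```python
# Toy data: a risk score for each person, and 1 if they actually went on to offend.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0,   0]

def flag_rates(threshold):
    """Return (sensitivity, number of innocent people flagged) at a given threshold."""
    flagged = [(score >= threshold, label) for score, label in zip(scores, labels)]
    true_pos = sum(1 for f, y in flagged if f and y == 1)
    false_pos = sum(1 for f, y in flagged if f and y == 0)
    sensitivity = true_pos / sum(labels)   # share of real offenders the tool catches
    return sensitivity, false_pos

print(flag_rates(0.75))  # high threshold: catches 2 of 3 offenders, flags 0 innocent people
print(flag_rates(0.35))  # low threshold: catches all 3 offenders, but flags 3 innocent people
```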
It’s based on the principle that algorithmic software in policing must be:
- advisory
- lawful
- granular
- under clear lines of ownership
- challengeable
- accurate
- responsible
- explainable
ALGO-CARE has been taken on by the National Police Chiefs’ Council to be used as the national standard across the UK. This is the first example of a senior police body endorsing guidance on the use of algorithms in policing.
It means that from now on all national algorithmic tools should use ALGO-CARE in their design and evaluation, making policing more effective and keeping the public safer.