Police use of AI is on the rise – but transparency isn't keeping up

More than half of UK police forces plan to invest in AI by 2020, but awareness of its current limitations has made officials reluctant to talk about its use

2018 was the year that artificial intelligence (AI) went from being a tool that few policing organisations knew much about to one that is being used in trials across the world.

In May, South Wales Police in the UK reported that it had made a total of 450 arrests as a result of a facial-recognition system. The system uses AI-powered software to check CCTV footage against watch-lists of wanted and high-risk individuals, with humans validating suspected matches before any action is taken.
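To make the pattern concrete, here is a minimal sketch of that kind of watch-list check, assuming the common embedding-and-threshold approach; the function names, threshold and toy data are illustrative, not details of the South Wales system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(probe: np.ndarray, watchlist: dict,
                            threshold: float = 0.8) -> list:
    """Return watch-list candidates whose similarity to the probe face
    exceeds the threshold. Every candidate is flagged for human review;
    nothing here triggers action on a score alone."""
    candidates = []
    for person_id, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score >= threshold:
            candidates.append({"person_id": person_id, "score": score,
                               "needs_human_review": True})
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

# Toy data: 128-dimensional embeddings, the kind a face-recognition model
# might produce. Real systems would extract these from CCTV frames.
rng = np.random.default_rng(42)
watchlist = {f"subject_{i}": rng.normal(size=128) for i in range(100)}
probe = watchlist["subject_7"] + rng.normal(scale=0.1, size=128)  # noisy sighting

print(check_against_watchlist(probe, watchlist))  # flags subject_7 for review
```

The design point the sketch illustrates is that a score above the threshold only queues a candidate for human review; the system never acts on the algorithm's output alone.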

The same month, Cellebrite, a technology company specialising in extracting and processing digital information, revealed that it was working with a dozen UK and US police forces. The company uses AI tools to identify information most likely to be relevant to investigations.

And police in China have started to wear “camera sunglasses” connected to a facial-recognition algorithm that learns from footage captured by the country’s 170 million CCTV cameras.

The sector is growing fast. Other companies are developing AI technologies to improve the detection of crime, including trawling the web for intelligence, flagging when a suspect’s mobile phone is entering a particular area or estimating the probability of someone violating their parole conditions.

Next year we will see more police embracing AI. According to Deloitte, more than half of UK police forces plan to invest in AI by 2020, and the pressure on policing organisations everywhere to follow suit is significant. A growing number of technology salespeople looking for new markets are knocking on police doors. And police forces are welcoming them as they seek new ways to manage the demand on their limited resources.

That demand will continue to increase. While “traditional” crimes such as theft and burglary are close to post-war lows in many developed countries, investigative teams in many jurisdictions have been swamped by increased reporting of domestic violence and sexual offences, including historic cases. Police are also now expected to monitor an increasing number of people due to rapidly expanding terrorist watchlists, sex offender registers and parolee rolls.

And they are still struggling to catch up with our ever-expanding digital footprints. Mobile phones, fitness trackers and other connected devices have all yielded decisive evidence in prosecutions, but processing the digital information surrounding every crime is time-consuming. The average US household has eight connected devices storing hundreds of megabytes of data, and when police fail to disclose information potentially relevant to the defence, cases collapse – or innocent people go unexonerated.


Facial recognition is still not all that reliable, with only eight per cent of image matches in the South Wales trials being confirmed by human operators. AI is also still far from being able to pull out critical intelligence from digital data without also pulling in plenty of junk. And there are several well-documented cases of AI showing racial bias, which have caused justified alarm. But for many overstretched police departments, even this level of success is better than nothing.
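That figure says less about the underlying matcher than it seems to: when almost no one scanned is actually on a watch-list, even a fairly accurate system produces mostly false alerts. A rough base-rate calculation, using assumed illustrative numbers rather than South Wales figures, shows the effect:

```python
# Base-rate arithmetic: why a seemingly accurate face matcher can still
# produce mostly false alerts. All numbers below are illustrative assumptions.
crowd_size = 100_000          # faces scanned at an event
watchlisted_present = 50      # of those, people actually on the watch-list
true_positive_rate = 0.90     # matcher flags 90% of watch-listed faces
false_positive_rate = 0.005   # matcher wrongly flags 0.5% of everyone else

true_alerts = watchlisted_present * true_positive_rate                   # 45
false_alerts = (crowd_size - watchlisted_present) * false_positive_rate  # ~500
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.0f}")
print(f"False alerts: {false_alerts:.0f}")
print(f"Share of alerts that are genuine: {precision:.0%}")
```

Under these assumptions, only around eight per cent of alerts are genuine, even though the matcher catches 90 per cent of the watch-listed faces it sees.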

The challenge we will have to face up to in 2019 is that awareness of AI’s current limitations has made police reluctant to talk about its use. For every police trial of AI reported publicly, there is another one hidden by a confidentiality agreement. And even where experimentation is open, opportunities for public debate about the pros and cons are limited.

Next year we will have no choice but to engage in an ethical debate about the trade-offs between liberty and security that AI offers us. We will need public scrutiny of algorithms to ensure that tools work within national legal frameworks and don’t lead to hundreds of verdicts being overturned. And we will need to ensure that good tools win and bad ones improve or die.

In 2019 we will see police ethics committees, national legislatures and the media asking tough questions about AI and taking steps to ensure that they aren’t blinded by maths. And we will have no choice but to raise the bar on algorithmic transparency. While the precise details of how these algorithms are coded are naturally commercially sensitive, all policing organisations will have to be able to explain the tools they are experimenting with, and their accuracy and cost-effectiveness, in layman’s terms. The US Department of Commerce, through its National Institute of Standards and Technology, is leading the field in this area. Its Face Recognition Vendor Test reports publicly on the accuracy and racial bias of AI systems submitted by a vast number of suppliers.

The need for change is urgent. If police don’t make more use of AI, they risk becoming overwhelmed by growing pressures. But if they don’t develop it as openly as possible, they increase the risk of a public backlash and a situation in which many policing organisations will continue to use ineffective tools, at high cost, without anyone noticing.

Tom Gash is co-author of Policing 4.0: Deciding the Future of Policing

This article was originally published by WIRED UK