The UK is poised to expand its use of artificial intelligence (AI) in policing, with a new national AI centre set to combat bias and assess the effectiveness of private supplier products. Labour has called for a dramatic expansion of AI use, while police chiefs believe it can keep them up to date with new criminal threats. However, experts warn of the risks of bias, which can produce unfair outcomes such as over-targeting minority communities or misidentifying individuals.
A police chief has admitted that AI used to boost crime-fighting will contain bias, but pledged to combat the risks. The new national AI centre, costing £115m, aims to reduce bias and assess private supplier products. Bias has already been identified in the use of retrospective facial recognition, which is powered by AI, and in live facial recognition, which hunts for suspects in real time.
The use of AI in policing raises concerns about bias, as algorithms can reflect past human prejudices and produce unfair outcomes. However, experts believe that AI can also help police deal with new criminal threats, such as social media-facilitated violence, and speed up searches for evidence. A human police officer will still need to make the final decisions about what to do with AI-produced results.
The expansion of AI in policing has the potential to transform how forces operate, offering faster searches for evidence and improved analysis of data. It also raises concerns about bias and the need for independent oversight, which the new national AI centre will aim to address so that AI is used effectively and fairly.
As AI becomes more prevalent in policing, it is essential to balance its benefits with the risks of bias. By acknowledging the potential for bias and taking steps to mitigate it, police can harness the power of AI to improve their effectiveness and keep communities safer.
Q: What is the new national AI centre?
A: The new national AI centre is a £115m project that aims to reduce bias in AI use in policing and assess the effectiveness of private supplier products. It will also provide training for officers on how to use AI effectively.

Q: What are the risks of bias in AI use in policing?
A: The risks include over-targeting minority communities, misidentifying individuals based on race, gender, or socioeconomic status, and producing unfair outcomes.

Q: How can bias be minimized?
A: Bias can be minimized by recognizing and addressing past human prejudices, cleaning data, training models appropriately, and testing them.
Source: The Guardian