14 min read — AI | Legislation | Human Rights

AI and Law Enforcement: Is Predictive Policing the Future of Criminal Profiling?

Artificial intelligence promises to make policing smarter, faster, and more efficient. Yet behind the algorithms lie difficult questions about bias, transparency, and the future of justice itself.
Image Credit: Euro Prospects

By Michela Sinardi — Criminal Justice Correspondent

Edited/Reviewed by: Candela Fernández Pascual

October 19, 2025 | 17:00

In 2002, director Steven Spielberg released the cult classic tech-noir Minority Report, depicting a futuristic police unit known as “PreCrime,” whose officers prevent crimes before they occur, guided not by evidence but by the visions of “precogs” who foresee violence before it happens. What once seemed like speculative cyber-fiction now bears a striking resemblance to the tools increasingly adopted by police forces and agencies worldwide. While today we do not rely on clairvoyants, we do indeed rely on Machine Learning (ML) – a subset of AI that uses statistical models and algorithms to process vast amounts of data, identify patterns, and make predictive assessments about crime. However dated, Minority Report remains a vivid reflection of the moral questions raised by new technologies – is the line between preventing crime and predicting it beginning to blur?

This article aims to provide a comprehensive understanding of the multifaceted applications of AI in policing, shedding light on its intricate dynamics and exploring both its transformative benefits and its inherent ethical, social, and legal challenges.

The landscape of law enforcement is undergoing a profound transformation driven by rapid advances in Artificial Intelligence. Law Enforcement Agencies (LEAs) globally, and particularly within the European Union (EU), are confronting increasingly complex challenges, ranging from the exponential growth of data generated by digital devices and online services to the intricate nature of modern criminal activity. Traditional policing methods alone are often deemed insufficient to address this challenging and globalized criminal landscape, necessitating advanced and innovative solutions. A key application within this technological shift is Predictive Policing (PP), which utilizes quantitative and statistical methods to forecast which individuals may commit a crime and to identify areas where crime is likely to occur. The approach leverages historical data to create spatiotemporal forecasts of crime hot spots, guiding police resource allocation in the expectation of deterring or detecting criminal activity. In doing so, PP aims to enhance the efficiency, effectiveness, and overall performance of law enforcement operations: analysing massive and complex datasets to identify crime patterns, trends, and links, and to improve resource forecasting.

However, this technological evolution, while promising to revolutionize data analysis, forensic methodologies, and communication channels, introduces new and complex challenges. Concerns span areas such as data privacy, the integrity of AI-driven decisions, data bias, fairness, accountability, transparency, and human rights. There are valid concerns that these complex and often opaque systems, if not carefully managed, may lead to more harm than good. The potential for AI to reproduce and amplify historical biases, leading to the disproportionate targeting of certain communities or groups, is a significant ethical and societal dimension that warrants critical examination.

How Does PP Really Work?

In practice, PP focuses on two main types of prediction: geospatial (or area-based) predictive policing, which forecasts where and when crimes are likely to occur, often identifying “hot spots”; and individual-based predictive policing, which anticipates the persons most likely to engage in, or fall victim to, criminal activity.

PP systems are trained on extensive datasets. These typically include historical crime data (e.g., the location, type, and timing of past incidents), augmented with socio-economic and demographic data such as gender, age, income, and welfare indicators, often disaggregated by postcode region. Some systems also incorporate external data, such as weather patterns or figures from national statistics offices, alongside local police insights, to enrich predictions. This information is processed by machine learning models, which identify patterns and generate forecasts about either high-risk areas or individuals. Outputs are often presented as heat maps highlighting potential “hot spots” or as risk scores that classify individuals by their likelihood of reoffending. In practice, only the highest-risk categories (e.g., the top 3% of areas) are flagged for police attention and intervention.
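To make this pipeline concrete, the sketch below (in Python, using the scikit-learn library) trains a toy area-based model on entirely synthetic data and flags the top 3% of grid cells as “hot spots”. Every feature name and figure is invented for illustration; operational systems such as CAS draw on far richer data and far more elaborate modelling.

```python
# Illustrative sketch only: a toy area-based "predictive policing" pipeline.
# All data and feature names are hypothetical; real systems are far more complex.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
n_cells = 1000  # imagine a city divided into 1,000 grid cells

# Hypothetical training features: past incident counts plus contextual data per cell
cells = pd.DataFrame({
    "incidents_last_4_weeks": rng.poisson(3, n_cells),
    "median_income_index": rng.normal(0, 1, n_cells),
    "distance_to_station_km": rng.uniform(0.1, 8.0, n_cells),
    "rainy_days_last_week": rng.integers(0, 7, n_cells),
})
# Target: incidents observed the following week (synthetic, for the sketch)
next_week = rng.poisson(1 + 0.4 * cells["incidents_last_4_weeks"])

model = GradientBoostingRegressor().fit(cells, next_week)

# Forecast the coming week and flag the top 3% of cells as "hot spots"
forecast = model.predict(cells)
threshold = np.quantile(forecast, 0.97)
cells["hot_spot"] = forecast >= threshold
print(cells["hot_spot"].sum(), "cells flagged for extra patrols")
```

The “top 3%” cut-off is where a statistical forecast becomes a patrol decision – the step at which modelling choices quietly turn into policing choices.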

PP in Europe: CAS and HART Cases

Since 2017, taking inspiration from a wave of U.S. experiments in predictive policing, Europe has begun to move from theory to practice. Two of the most prominent initiatives have emerged in the Netherlands and the United Kingdom, where police forces are testing data-driven models designed to anticipate crime before it occurs.

The Dutch police operate the Crime Anticipation System (CAS), developed by Dick Willems, a data scientist with the Amsterdam police. CAS soon became a national tool, churning through crime records, neighbourhood demographics, and official statistics to generate weekly heat maps that put manpower where and when it matters most. The top three percent of “high-risk” locations (whether a street corner prone to bicycle theft or a housing block with frequent break-ins) are flagged for special patrols. In practice, this means officers are deployed not just to where crime has occurred, but to where an algorithm predicts it will. Although it appears best suited to directly tackling crimes such as burglary, theft and pickpocketing, CAS has also proven helpful in gathering personal information about offenders.

Across the Channel, Durham Constabulary trialled the Harm Assessment Risk Tool (HART), which classifies offenders in custody as high, moderate, or low risk of reoffending. Built on more than 100,000 past custody events, HART uses decision trees to weigh factors such as age, postcode, and prior arrests. Those in the “moderate risk” category may be diverted into rehabilitation schemes, while “high risk” offenders face traditional prosecution. The tool is explicitly advisory, but it has nevertheless sparked concern that postcode-based predictions could entrench policing patterns in already over-patrolled communities.
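To see how such a classifier works mechanically, here is a minimal, purely illustrative sketch: a single decision tree trained on invented custody records with three risk labels. It is not HART’s actual model or data – HART is built on more than 100,000 real custody events and a far broader feature set – but it shows how variables like age, prior arrests, and an area-based index feed into a categorical risk label.

```python
# Illustrative sketch only: a toy HART-style risk classifier using a decision tree.
# Features, records, and labels are invented; they do not reflect HART's real inputs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical custody records: age, number of prior arrests, and a coarse
# "postcode deprivation" decile standing in for area-based variables
X = np.column_stack([
    rng.integers(18, 70, n),   # age at arrest
    rng.poisson(2, n),         # prior arrests
    rng.integers(1, 11, n),    # postcode deprivation decile
])
# Hypothetical outcome labels: 0 = low, 1 = moderate, 2 = high risk of reoffending
y = rng.integers(0, 3, n)

clf = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Score a new (hypothetical) detainee: 24 years old, 5 prior arrests, decile 9
risk = clf.predict([[24, 5, 9]])[0]
print(["low", "moderate", "high"][risk])
```

The area-based feature in the sketch is exactly the kind of proxy variable that worries critics: a tree will readily split on it whenever the training data appears to reward doing so.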

Both CAS and HART highlight the European approach to predictive policing: data-driven, resource-focused, and often justified as an efficiency measure. Yet critics warn that they risk reproducing the very inequalities they aim to neutralise, especially where social disadvantage is baked into the datasets themselves. Effectiveness has also proven difficult to measure (CAS reportedly achieves prediction rates of around 30% for burglaries – far from a crystal ball).

The U.S. Debacle and the Perils of Progress: Social, Ethical and Operational Debates

A 2020 MIT Technology Review article reads: “Predictive policing algorithms are racist. They need to be dismantled… If we can’t fix them, we should ditch them.”

Editor and tech journalist Will Douglas Heaven maintains that predictive systems, even when race is excluded as a variable, replicate racial and socio-economic inequality because their training data reflects longstanding, systemic bias in arrest practices, policing intensity, and contact with the criminal justice system. Heaven is not alone; his take on PP is one shared by many throughout the U.S. Unfortunately, the evidence shows that most of the concerns professionals have raised about PP systems are not without merit.

Long before Europe fully embraced predictive policing, the United States offered early (and often troubling) experiments in risk assessment and crime forecasting that illuminate many of the ethical and operational pitfalls now under debate. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool, widely used in U.S. courts for sentencing and parole decisions, was shown by a 2016 ProPublica investigation to systematically over-predict recidivism risk for Black defendants while underestimating it for white defendants: Black defendants who did not go on to reoffend were nearly twice as likely as their white counterparts to have been labelled “high risk”. According to U.S. Department of Justice figures, a Black person is more than twice as likely to be arrested as a white person, and five times as likely to be stopped without just cause. Courts have nonetheless allowed such tools into the courtroom: in State v. Loomis, the Wisconsin Supreme Court permitted a judge to consider a COMPAS risk score in sentencing – despite the tool’s methodology being opaque – while mandating a written warning about its limitations. Critics have argued that this advisory approach is unlikely to counteract the deference judges give to algorithmic recommendations, or to address the deeper structural bias in the underlying data. Similarly, PredPol, a hotspot prediction algorithm developed with the Los Angeles Police Department and later adopted by numerous U.S. forces, has repeatedly been criticised for reinforcing spatial bias: areas with heavy policing become marked as “high crime,” receive more patrols, generate more crime data, and so continue to be flagged, creating feedback loops that disproportionately burden minority and low-income communities.
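The disparity ProPublica documented can be expressed as a simple metric: the false positive rate per group, i.e. how often people who did not go on to reoffend were nonetheless labelled “high risk”. The sketch below computes it over a handful of invented records; the real analysis covered thousands of COMPAS cases with two years of follow-up data.

```python
# Illustrative sketch only: a ProPublica-style disparity check on risk labels.
# The records are invented and far too few to prove anything; they only show the metric.
import pandas as pd

records = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk":  [1,   1,   1,   0,   1,   0,   0,   0],   # the algorithm's label
    "reoffended": [0,   1,   0,   1,   0,   0,   1,   0],   # the observed outcome
})

# False positive rate per group: share of people labelled "high risk"
# among those who did NOT go on to reoffend
fpr = (
    records[records["reoffended"] == 0]
    .groupby("group")["high_risk"]
    .mean()
)
print(fpr)
```

A persistent gap between groups on this metric, even when overall accuracy looks similar, is precisely the pattern at the centre of the COMPAS controversy.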

It appears clear, then, that concerns around predictive policing go far beyond technical glitches. At stake are fundamental questions of fairness, accountability, and the role of human judgment in a world increasingly mediated by algorithms. The promise of efficiency sits uneasily beside the risk of amplifying discrimination, eroding privacy, and undermining trust in justice systems.

At the heart of the problem lies data. AI systems are only as sound as the information they are trained on, and when that data carries the weight of past prejudice, the outcome can be unjust. As reported above, American tools such as COMPAS and PredPol are emblematic of this trend, generating skewed outputs that, once fed back into the system, justify even heavier policing. European cases echo the same dilemmas. The UK’s HART, as mentioned above, was designed to minimize “dangerous errors” by erring on the side of caution. While well-intentioned, this logic means many people may be overclassified as “high risk,” potentially excluding them from rehabilitation opportunities. Likewise, in the Netherlands, CAS and its predecessors have been accused of indirectly targeting ethnic minorities by using residence as a proxy variable, producing outputs that critics argue blur the line between neutral data analysis and profiling.

Beyond bias, there are deeper concerns about privacy and surveillance. The ability of AI to combine CCTV feeds, biometric identifiers, mobile phone data and online activity offers police extraordinary new powers, but also fuels fears of a “Big Brother” state. San Francisco’s 2019 ban on police use of facial recognition technology reflects a broader backlash: many citizens worry about being constantly monitored simply for existing in public space. In Europe, where fundamental rights are constitutionally enshrined, the spectre of such pervasive monitoring raises alarm about proportionality and legality. The EU’s proposed ‘Chat Control’ law (formally the Child Sexual Abuse Regulation), intended to combat child sexual abuse, encapsulates growing public unease about how police authorities handle private and sensitive data. As it stands, the proposal envisages mass scanning of private communications, including encrypted conversations, leaving many to ask: is the EU about to access our text messages in the name of fighting crime?

Transparency and accountability add another layer of complexity to the debate over PP’s efficiency. Predictive models often operate as “black boxes”: police officers may be given risk scores or hotspot maps without understanding how the algorithm produced them. Proprietary protections make it even harder to scrutinize the software, leaving courts, defendants, and the public unable to interrogate decisions that can alter lives. And when errors occur, responsibility is murky: should blame fall on the software designers, the law enforcement agencies deploying the system, or the regulators meant to oversee them?

Finally, there is the crucial question of rights. Predictive policing systems challenge the presumption of innocence by labelling people as “risky” before they act. In doing so, they can stigmatize entire communities and entrench patterns of exclusion, moving law enforcement away from protecting rights and closer to pre-emptive punishment. This shift risks turning ‘innocent until proven guilty’ into ‘guilty until proven predicted.’

All these challenges reveal the double-edged nature of predictive policing: while it offers tools to manage increasingly complex threats, it also magnifies old injustices under the guise of technological neutrality. The lessons of COMPAS and PredPol show what happens when bias is left unchecked, and Europe’s own experiments with CAS and HART illustrate that no jurisdiction is immune. The debate is no longer whether predictive policing works, but whether it can ever be made fair.

Is Responsible Policing a Possible Reality?

For predictive policing to be credible, regulation alone is not enough; a culture of responsible use must take root within law enforcement. This involves regulatory sandboxes: controlled environments where new tools can be tested with real data before being deployed. It also means applying practical frameworks such as ALGO-CARE, developed from Durham’s HART project, which translates abstract human rights principles into operational guidelines: systems must be lawful, accurate, explainable, and open to challenge by human decision-makers.

Fairness is another cornerstone. Algorithms trained on biased historical data risk reproducing discrimination, so continuous audits, multidisciplinary oversight, and rigorous scrutiny of input data are essential. Transparency also matters: while machine learning models may never be fully “understandable,” their outputs must at least be interpretable, with clear lines of accountability when errors occur. Most importantly – public trust cannot be an afterthought. AI in policing touches on sensitive questions of surveillance and civil liberties. Building legitimacy requires engagement with communities, open dialogue about risks and benefits, and ensuring that systems are not imposed without scrutiny. Collaboration between law enforcement, academia, and industry can help foster this culture of transparency and accountability.

In short, Europe’s (and the world’s) challenge is to ensure that AI in policing strengthens justice rather than undermines it. The tools are already here; the question is whether governance, ethics, and trust can keep pace with technology.

Disclaimer: While Euro Prospects encourages open and free discourse, the opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or views of Euro Prospects or its editorial board.
