Research and Work Packages

PROBabLE Futures

PROBabLE Futures (Probabilistic AI Systems in Law Enforcement Futures) is a four-year, £3.4M RAi-funded Keystone Project led by Northumbria University with Glasgow, Northampton, Leicester, Newcastle and Cambridge Universities.

Our project, working alongside our law enforcement, third-sector and commercial partners, is developing a framework to understand the implications of uncertainty and to build confidence in future Probabilistic AI in law enforcement, with the interests of justice and responsibility at its heart. Activities include mapping the AI ecosystem in law enforcement, including the use of large language models; developing guidance, checklists and frameworks to support assessment of scientific and legal validity; and running mock trials with AI outputs presented as evidence.

The research of the PROBabLE Futures project team draws on the expertise of the National Police Chiefs' Council, police forces, the Home Office, JUSTICE, CETaS and three industry partners: Microsoft, ROKE and PA Consulting. The project focuses on probabilistic AI in policing and the wider criminal justice and law enforcement system. It will deliver tested and trusted advice and frameworks to commercial system developers and public sector bodies on when and how to deploy AI in this sector effectively, with the long-term trust of communities in mind. Applying lessons from prior technological developments, the project will study the use of 'Probabilistic AI' in policing, intelligence, probation and wider criminal justice contexts.

Probabilistic systems supported by AI, such as facial recognition, predictive tools, large language models (LLMs) and network analysis, are being introduced at pace into law enforcement. Whilst these systems offer potential benefits, decision-making based on 'Probabilistic AI' has serious implications for individuals. The key problem for responsible AI is that the uncertain or probable nature of outputs is often obscured or misinterpreted, and the underlying data is sensitive and of varying quality. If AI systems are to be used responsibly, attention must be paid to the chaining of systems and the cumulative effects of AI systems feeding one another. PROBabLE Futures will review, question and evaluate the use and applications of probabilistic AI across different stages of the criminal justice system.

Inspiration

Decision-making based on Probabilistic AI in the law enforcement process can have serious consequences for individuals, especially where the uncertain or probable nature of outputs is obscured or misinterpreted.

Ambition

To build a holistic, rights-respecting framework to steer the deployment of Probabilistic AI within law enforcement, creating a coherent system, with justice and responsibility at its heart.

Impact

Law enforcement bodies, policy-makers and law-makers will apply the factors and requirements in our framework, contributing to the development of a system-based approach to law enforcement AI.

Objectives

  • Mapping the probabilistic AI ecosystem in law enforcement
  • Learning from the past
  • Scoping for the future, including evaluation of contested technologies such as remote weapons scanning
  • Focusing upon practical use of AI & the interaction of multiple systems (chaining)
  • Using an XAI taxonomy and novel methods including storytelling and mock trials using AI evidence
  • Establishing an experimental oversight body including members representing under-represented groups

PROBabLE Futures Work Package Research Questions:

Work Package 1

What probabilistic AI tools are being deployed, piloted or trialled in the main stages of the law enforcement process?

How is each of the above AI tools categorised in relation to the following?

a) input-output behaviour

b) training data and testing

c) technical method and internal parameter settings

d) role in the law enforcement decision and legal framework

e) comprehension of measures such as precision and uncertainty

f) model chaining and connection with other systems?

Using a shortlist of past probabilistic technologies, what legal, regulatory/governance, technical and interpretability issues can be identified from these past case-studies?

What regulatory/governance and technical methodologies could have mitigated the identified issues for each case-study?

What can storytelling as a research method reveal about lived experiences of stakeholders and contested truths in relation to law enforcement AI?

What future and emerging technologies are likely to impact and influence law enforcement?

What testing protocols, model techniques and guidelines are required to ensure that future AI tools in law enforcement are responsible?

How is each of the above future AI tools categorised in relation to the following?

a) input-output behaviour

b) training data and testing

c) technical method and internal parameter settings

d) role in the law enforcement decision and legal framework

e) comprehension of measures such as precision and uncertainty

f) model chaining and connection with other systems?

What does an effective model for participatory oversight of AI in law enforcement look like?

What lessons can be learned from the establishment and operation of the experimental oversight body?

Which methods of scrutiny (such as assurance cases, model cards) are effective in oversight?

What issues relating to the use of probabilistic AI do the main stakeholders in the jury trial exercise wish to explore? 

Which methods of visualisation, design and communication concerning uncertainties are the most effective for supporting users and decision-makers?

What standards, guidance, protocols and governance will be needed to ensure reliability and relevance of AI-enabled evidence in the future?

Work Package 5

What similarities and differences can be identified between law enforcement bases, legal frameworks and contexts across jurisdictions, and how do these influence the understanding and development of responsible AI?

What should be included in a framework to support responsible probabilistic AI in law enforcement?

How can project research impact and influence our partners, stakeholders and the wider law enforcement ecosystem?

How is the project framework applicable to other domains and research contexts?
