New Report Available! Decoding Transparency: Strengthening Public Trust in AI for Law Enforcement

21 Apr 2026

What does it take for people to trust the use of artificial intelligence (AI) in law enforcement?

As AI systems become part of everyday public safety practices, from analysing data to supporting operational decisions, this question is no longer theoretical. It sits at the heart of how institutions can continue to serve communities effectively, fairly and legitimately.

UNICRI’s report, Decoding Transparency: How to Foster Public Trust in Responsible AI Innovation in Law Enforcement, begins from this premise: innovation alone is not enough; trust must accompany it.


A changing landscape of trust

Across the world, public trust in institutions is under pressure. At the same time, perceptions of AI remain cautious and, in many cases, sceptical. When these two dynamics intersect in the context of law enforcement, the stakes become particularly high.

Decisions supported by AI can influence people’s rights, freedoms and sense of security. In such a context, the absence of clear information and open dialogue can quickly lead to misunderstanding or resistance.

The report shows that trust in law enforcement and trust in AI are closely interconnected. Where confidence in institutions is strong, new technologies are more readily accepted. Where it is fragile, even well-designed systems may face resistance.

Understanding the gap

To better understand this challenge, UNICRI undertook a global, multi-method research effort.

The study combines:

  • Extensive literature review and policy analysis

  • A detailed use case on privacy-preserving AI-enabled crowd monitoring

  • Interviews with multidisciplinary experts across five regions

  • Experimental studies on public perceptions and behaviour

  • Consultations with stakeholders and peer review

This approach reveals a consistent pattern: there is often a gap between how AI systems are designed and how they are perceived by the public.

Concerns about surveillance, bias and opacity are not only technical issues—they are also issues of communication, understanding and trust.

Transparency as a bridge

The report positions transparency as a way to bridge this gap.

But transparency, in this context, is not limited to disclosing technical information. It is a broader, two-way process that connects institutions and communities. It requires clarity, openness and a willingness to engage.

In practice, this means:

  • Explaining which AI systems are used and why

  • Clarifying how decisions are made and what safeguards exist

  • Acknowledging risks, limitations and uncertainties

  • Creating space for questions, feedback and dialogue

Transparency, therefore, becomes both a principle and a practice—one that evolves over time and across different interactions.

From communication to engagement

A central message of the report is that communication alone is not sufficient.

Providing information is essential, but it must be complemented by meaningful public engagement. Trust is built not only by informing communities, but by listening to them and involving them in shaping how technologies are used.

The report highlights several priorities in this regard:

  • Engaging early, before AI systems are deployed

  • Maintaining consistent communication throughout their lifecycle

  • Reaching diverse audiences through multiple channels

  • Ensuring that vulnerable and underrepresented groups are included

  • Building feedback mechanisms that allow concerns to be addressed

This two-way process is essential to ensure that AI innovation remains aligned with societal expectations and values.

A pathway for responsible innovation

Ultimately, the report underscores that the successful integration of AI in law enforcement depends on more than technical performance. It depends on the quality of the relationship between institutions and the communities they serve.

To support this, it calls for a shift in approach, one that places transparency at the forefront of decision-making:

  • Embedding transparency from the earliest stages of AI adoption

  • Ensuring human oversight and accountability

  • Investing in communication capacities and organisational culture

  • Treating trust as something that must be continuously built and maintained

Looking ahead

As law enforcement agencies navigate the opportunities and challenges of AI, the need for clarity, openness and dialogue will only grow.

Decoding Transparency offers practical guidance grounded in evidence, helping institutions move beyond abstract principles towards concrete action.

In doing so, it contributes to a broader objective: ensuring that innovation in crime prevention and criminal justice remains firmly anchored in fairness, accountability and respect for human rights, and that it provides communities with something essential: confidence in the institutions that serve them.

This report is the result of research collaboration with BI Norwegian Business School under the project AI4Citizens: Legal, Ethical, and Societal Considerations of Implementing AI Systems for Anonymized Crowd Monitoring to Improve Public Safety. The work was generously supported by the Norwegian Research Council.


Download the Report: Decoding Transparency: How to Foster Public Trust in Responsible AI Innovation in Law Enforcement