
AI4Citizens: Legal, Ethical, and Societal Considerations of Implementing AI Systems for Privacy-Preserving Crowd Monitoring to Improve Public Safety

“AI4Citizens: Legal, Ethical, and Societal Considerations of Implementing AI Systems for Privacy-Preserving Crowd Monitoring to Improve Public Safety” (AI4Citizens) is a groundbreaking initiative by the UNICRI Centre for Artificial Intelligence and Robotics (the AI Centre), and BI Norwegian Business School, with the generous support of the Research Council of Norway. Launched in March 2024, AI4Citizens explores the opportunities and challenges associated with developing privacy-preserving AI solutions for public safety. Building on these findings, the AI Centre aims to translate its multidisciplinary research into a set of actionable recommendations for law enforcement and other public safety entities on effective strategies to promote transparency and foster public trust.

This initiative is part of a larger multistakeholder research consortium, “AI4Citizens: Responsible AI for Citizen Safety in Future Smart Cities”, that brings together the Norwegian University of Science and Technology (NTNU), the project owner; the University of Agder (UiA); Oslo City Municipality; the Nordic Edge Smart City Innovation cluster; several law enforcement partners from Norway, Germany, the Netherlands, and the United Kingdom; and other multidisciplinary stakeholders.


Background

The rapid expansion of artificial intelligence (AI) systems, and their increasingly sophisticated potential to support the detection of security threats and crime in public spaces, offers significant advantages for law enforcement in their efforts to enhance public safety. At the same time, ethical, societal, and human rights considerations must be addressed at every stage, as these systems also pose significant risks to fundamental rights and may foster an environment of increased surveillance.

Responsible AI innovation serves the core functions of law enforcement: protecting the community, preventing and investigating crime, and ensuring justice. It helps to enhance the trustworthiness of both law enforcement and the technology it uses, for the benefit of a safe, just, and secure society. Transparency, privacy, and accountability are the cornerstones of this approach.


Project Overview

As a foundation for the research, the multidisciplinary team behind the project is researching solutions for the deployment of an AI system for crowd monitoring and anomaly detection in public spaces, with a strong emphasis on human rights compliance and responsible innovation. To minimize the impact on privacy, the AI system is designed to anonymize closed-circuit television (CCTV) footage by masking individual figures. It then detects anomalies – incidents that may constitute threats to public safety – and generates an alert for human operators in law enforcement monitoring centres. These operators analyse the alert information and decide whether to dispatch a patrol. The tool replaces manual video searches by law enforcement officers, aiming to shorten response times to incidents, reduce human fatigue, and minimize errors in incident monitoring.
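To make this flow concrete, the minimal Python sketch below illustrates the ordering the paragraph describes: each frame is anonymized by masking detected figures before any further analysis, a simple anomaly signal is computed, and alerts are escalated to a human operator. Every element here (the OpenCV pedestrian detector, the blackout masking, the density threshold, and the alert rule) is a hypothetical placeholder chosen for illustration; it does not represent the project's actual models or code.

import cv2

# Illustrative sketch only: not the project's actual system.
# A classical OpenCV pedestrian detector stands in for the
# (unpublished) person-detection model used by the project.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

DENSITY_THRESHOLD = 25  # hypothetical alert threshold (people per frame)

def anonymize(frame):
    """Mask every detected figure, returning the anonymized frame and
    a person count usable as a crude crowd-density signal."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        # Black out each figure so that downstream analysis, storage,
        # and transmission never involve identifiable imagery.
        frame[y:y + h, x:x + w] = 0
    return frame, len(boxes)

def monitor(video_path):
    """Process a CCTV stream: anonymize first, analyse second,
    and leave the dispatch decision to a human operator."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        masked, count = anonymize(frame)
        if count > DENSITY_THRESHOLD:
            # In the design described above, an alert like this would be
            # routed to an operator in a monitoring centre, who decides
            # whether to dispatch a patrol.
            print(f"ALERT: {count} figures detected; operator review needed")
    cap.release()

In a real deployment the anomaly model would be far richer than a density count, and masking would need to be applied before frames are stored or transmitted; the sketch captures only the ordering the design emphasizes – anonymize first, analyse second, and keep a human in the loop.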

In parallel, social science partners in the consortium, working with industry and law enforcement agencies, examine the perspectives of diverse stakeholders on such systems, analysing the trade-offs different groups encounter in decisions concerning their design, implementation, and adoption. This approach highlights the importance of accounting for societal values and inclusivity in the development of AI systems for public safety.

The UNICRI Centre for AI and Robotics is exploring the responsible AI innovation considerations associated with the above-described scenario, with a particular emphasis on the critical importance of building public trust and communicating both the benefits and the risks of these systems in a transparent manner. Public trust in the implementation of AI systems is essential to ensure the legitimacy of law enforcement practices and the effectiveness of their technological applications. An essential step in this process is to ensure transparency in the development, deployment, and use of such AI systems in a way that sufficiently informs individuals and communities, engaging all relevant stakeholders to address any concerns.
AI4Citizens strives to address these considerations through:

  • promoting meaningful engagement between law enforcement agencies and the communities they serve on AI-related issues

  • providing law enforcement agencies with the knowledge to build trust and confidence in the responsible use of AI systems for public safety

  • empowering people with greater awareness of existing approaches to ensuring safety and security while safeguarding fairness, privacy, and other fundamental rights 

  • exploring issues related to transparency, public perceptions, and trust in the development and implementation of AI in a human-rights compliant and ethical manner

  • enhancing the knowledge of decision-makers in law enforcement and other entities responsible for public safety regarding the societal and trust-related challenges associated with the use of AI systems 

  • contributing to the overarching goal of implementing AI applications in a nuanced and refined manner, ensuring that associated legal, societal, and ethical challenges are duly addressed.

This work builds upon the UNICRI-INTERPOL Toolkit for Responsible AI Innovation in Law Enforcement and involves a series of guided interviews and consultations with multidisciplinary experts, desk research, and comprehensive qualitative data analysis.


Key Activities

  • Supporting research on the responsible design of AI systems to enhance public safety through anomaly detection in crowd monitoring, and providing guidance on ethical principles, international law obligations, and fundamental rights requirements.

  • Analyzing the risks and impacts of implementing such AI systems from ethical, societal, and legal perspectives, and publishing key findings.

  • Conducting interviews with multidisciplinary experts to gather insights on transparency, trust, and AI implementation for public safety.

  • Assessing research data through qualitative analysis and validating findings through stakeholder consultations and peer review sessions.

  • Developing actionable recommendations to foster transparency throughout the responsible implementation of AI systems for public safety. 

  • Raising awareness of the importance of building trust through the dissemination of findings and public engagement events.