News & Reports

ALIGNER, a project in which CBRNE Ltd, the parent company of AI ORA, is a partner, has published a Fundamental Rights Impact Assessment (FRIA) template designed for police and law-enforcement agencies planning to deploy AI systems. The instrument aligns with the forthcoming EU AI Act and combines two complementary modules – one to identify and assess risks to fundamental rights such as privacy, non-discrimination and the right to a fair trial, and another to guide governance through ethical standards and mitigation measures.

Developed under the Horizon 2020 project ‘Artificial Intelligence Roadmap for Policing and Law Enforcement’, the FRIA template is operational and ready for integration into existing governance frameworks. A multidisciplinary team of legal, operational and ethical experts can apply the template before deployment and review it throughout the AI system’s lifecycle to reflect evolving risks and technological changes.

Download the FRIA template and its handbook from ALIGNER’s website to ensure your AI initiatives meet legal and ethical requirements. Then benchmark your readiness with our free AI Readiness Assessment, and explore our AI ORA services for bespoke implementation support.

Access the full FRIA template here and take a decisive step towards responsible AI in law enforcement.