EU Restricts AI in Law Enforcement Amid Growing Concerns

Feb 8, 2025 10:48

Following the July uprising, public trust in Bangladesh’s police force has plummeted. Trust issues with law enforcement, however, are hardly unique to Bangladesh. To address such concerns, artificial intelligence (AI) has increasingly been integrated into policing, but this shift has raised serious human rights concerns. Consequently, on February 2 the first prohibitions under the European Union’s (EU) AI law took effect, banning “unacceptable” uses of AI, including in law enforcement, across seven key areas.

AI is already being used to enhance policing by facilitating facial recognition and by analyzing vast data sets to predict and prevent crimes before they occur. Advanced AI tools can even analyze satellite imagery in real time to flag potential criminal activity across large areas or crowded events, such as football matches.

However, critics argue that AI-driven policing could introduce biases. Some AI systems are less accurate at identifying individuals of certain races or genders, while heavier surveillance of minority-dominated neighborhoods can create a false impression of higher crime rates there.

In response to these concerns, the EU’s landmark AI legislation has imposed restrictions on AI systems deemed to pose an “unacceptable risk.” These include social scoring systems, AI tools that assess or predict an individual’s risk of committing a crime, AI-driven emotion recognition in workplaces and educational institutions, and AI-enabled manipulation or deception. The law also bans the indiscriminate expansion of facial recognition databases through internet or CCTV scraping, biometric categorization used to infer protected characteristics, and real-time remote biometric identification in public spaces for law enforcement purposes.

Meanwhile, U.S. President Donald Trump has reversed his predecessor Joe Biden’s executive order on AI safety. Commenting on the EU’s AI restrictions, Italian lawmaker Brando Benifei stated, “The goal of these prohibitions is to avoid the use of AI for social control or the restriction of our freedoms. This is an issue closely linked to the protection of our democracy. However, its implementation might be somewhat disorganized.” He added that European law enforcement and immigration authorities continue to use AI-based real-time facial recognition in public spaces, with some exemptions still in place.

Academics and activists are closely monitoring how the new law will be enforced. Nathalie Smuha, assistant professor and researcher in AI ethics at KU Leuven, questioned the effectiveness of the restrictions, stating, “With so many exceptions in the law, can you really call it a ban?” She pointed out that when the European Commission initially tasked experts with drafting an AI strategy in 2018, an outright ban on certain AI applications was not under consideration. Policymakers and experts later revised their stance upon realizing that existing regulations might fail to prevent specific harmful AI practices.

The first draft of the new AI regulations, introduced in April 2021, already recommended prohibiting certain practices, such as AI systems that manipulate individuals’ behavior through exploitative tactics or target vulnerable groups. As the bill progressed through the European Parliament, lawmakers added more restrictions following discussions with government officials. The final version of the law was approved in December 2023.

Dutch Greens lawmaker Kim van Sparrentak, involved in the AI law discussions, explained that the bans fall into three categories: “Some things we knew we didn’t want, some things we could imagine from Hollywood movies, and some things inspired by China.”

A significant influence on the drafting process was a 2019 scandal in the Netherlands, in which an algorithm used by the Dutch tax authorities falsely accused nearly 26,000 individuals of childcare benefits fraud. Such practices, which fall under predictive policing, would be outlawed under the new legislation. “If you want a society based on the rule of law, you cannot continuously treat people as if they are potential criminals,” van Sparrentak emphasized.

Similarly, the ban on scraping images from the internet for facial recognition databases was influenced by the U.S.-based firm Clearview AI, which came under scrutiny for amassing billions of images from online sources. The AI law now prohibits AI systems that create or expand facial recognition databases through unrestricted internet scraping. China’s controversial social credit system, which ranks individuals based on behavior, also played a role in shaping the legislation.

In January, digital rights advocates raised concerns in an open letter, pointing out “serious loopholes” in the AI law, particularly in the areas of policing and migration. Katerina Rodelli, an EU policy analyst at the digital rights group Access Now, remarked, “The most obvious gap is that these bans do not apply to law enforcement and migration authorities.”

The ban on real-time facial recognition in public spaces is a notable example. In principle, law enforcement agencies will no longer be allowed to deploy such systems, but EU member states can still authorize exceptions, particularly in cases involving serious crimes. The use of AI for emotion recognition in schools and workplaces is likewise prohibited, yet this restriction does not apply to law enforcement and immigration authorities. AI-powered lie detectors, which scan facial expressions for signs of deception, could still be used at border crossings.

Van Sparrentak noted that EU governments were unwilling to compromise on certain AI tools, leading to lengthy negotiations lasting up to 36 hours. “They want to retain the ability to use all the tools at their disposal,” she said.

According to a CNBC report, companies that fail to comply with the AI law could face fines of up to €35 million ($35.8 million) or 7% of their global annual revenue, whichever is higher. The regulation formally entered into force in August 2024, with its provisions taking effect in phases. Notably, administrative rules and obligations for companies developing general-purpose AI models will take effect on August 2, 2025, while high-risk AI systems intended for critical sectors such as education, healthcare, and transportation have until August 2, 2027, to comply with the new requirements.