As provisions of the EU’s AI Act begin to take effect on 2 February, civil society groups are expressing concern over the lack of clear guidance from the European Commission on banned AI systems, such as facial recognition and social scoring.
While companies have until the middle of next year to comply with most of the AI Act’s provisions, the ban on AI practices deemed an unacceptable risk, including profiling systems and facial recognition, takes effect on 2 February.
Following a consultation on prohibited practices last November, the Commission’s AI Office said it planned to issue guidelines to help providers comply by early 2025. Those guidelines have yet to be published, however, raising concerns among advocacy groups. Ella Jakubowska, head of policy at EDRi, voiced her frustration: “It is really worrying that interpretive guidelines still have not been published. We hope this will not be a harbinger of how the AI Act will be enforced in the future.”
The AI Act aims to prohibit AI systems deemed harmful to society, but exceptions exist, particularly for law enforcement. Critics such as Caterina Rodelli of Access Now argue that these exceptions undermine the ban: “If a prohibition contains exceptions, it is not a prohibition anymore.” The carve-outs could allow law enforcement to deploy potentially dangerous technologies, such as unreliable lie detectors and predictive policing tools.
Both EDRi and Access Now have voiced concerns that these exceptions could enable harmful AI systems to persist, with Jakubowska warning that governments and companies might exploit loopholes to continue deploying them.
Because the AI Act has extraterritorial scope, non-EU companies may also face penalties for violations, with fines of up to 7% of global turnover. Most provisions will not be enforced until next year, allowing more time to prepare the necessary standards and guidance.
In the meantime, EU member states have until August to designate the national regulators that will oversee the AI Act, and some countries have already taken steps to assign oversight to data protection or telecom bodies.