Who watches the watchman? Spain's new AI law barely penalizes the public administration


Weak Sanctions on Public AI Misuse in Spain

A new Spanish draft law implementing the European AI regulation is drawing criticism for its lenient approach to punishing public sector misuse of AI. While the European regulation provides for significant fines for private companies, the Spanish draft proposes only "warnings" and "disciplinary actions" for public entities that violate AI rules. Eight digital rights organizations warn that this absence of substantial penalties for the public sector undermines accountability and endangers citizens' rights.

Concerns about Lack of Accountability

The organizations argue that the absence of administrative fines for public entities represents unequal treatment compared to private companies. They propose introducing administrative fines for prohibited or high-risk AI uses and replacing warnings for public officials with temporary or permanent disqualification from public office.

Debate about Impunity vs. Administrative Efficiency

The draft law establishes a tiered penalty system in which the most severe sanctions, fines of €7.5 million to €35 million, are reserved for private companies that violate AI rules. For public sector violations, however, only warnings and disciplinary actions are foreseen, and this gap is the central point of contention: some argue that avoiding financial penalties streamlines administrative processes, while critics insist that substantial sanctions are needed to ensure accountability and deter unlawful AI practices.

Call for Stronger Oversight

Experts such as Borja Adsuara stress the need for mechanisms that hold the public sector accountable for AI misuse; he advocates having a judge oversee public surveillance systems to ensure compliance with the rules. Lorenzo Cotino, president of the Spanish Data Protection Agency (AEPD), instead emphasizes proactive measures, such as a registry of public algorithms, to detect and prevent misuse before it happens, arguing that ensuring compliance in real time matters more than sanctioning violations after the fact.
