Spain has taken a significant step in AI regulation by approving a bill that imposes heavy fines on companies that fail to label AI-generated content. The move aims to combat the spread of deepfakes and aligns with the European Union’s AI Act, which imposes strict transparency requirements on high-risk AI systems, according to Digital Transformation Minister Oscar Lopez.
“AI is an effective tool that can improve our lives but also spread misinformation and weaken democracy,” Lopez cautioned, emphasizing the risks posed by deepfake videos, images, and audio.
As one of the first EU countries to implement these stringent rules, Spain is setting a precedent in AI governance. The bill, which still requires approval from the lower house, classifies non-compliance as a serious offense, carrying fines of up to €35 million (£30 million) or 7% of a company’s global annual turnover.
Since OpenAI launched ChatGPT in 2022, regulators worldwide have intensified their focus on AI safety. The bill also bans subliminal AI techniques, such as sounds or images designed to manipulate vulnerable groups, citing concerns over chatbots encouraging gambling or AI-driven toys prompting children to engage in dangerous activities.
Furthermore, companies are barred from using AI to classify people based on biometric data or behavioral traits for decisions on benefits eligibility or assessments of criminal risk. However, real-time biometric surveillance in public spaces will still be permitted for national security purposes.
The newly established AI supervisory agency, AESIA, will oversee enforcement, with specific cases involving data privacy, crime, elections, credit ratings, insurance, and capital markets falling under the jurisdiction of sector-specific regulators.
By implementing these measures, Spain is taking a proactive approach to AI governance, setting an example for other countries navigating the complex ethical and regulatory challenges of artificial intelligence.