🤖 AI Rights & Algorithmic Decisions

Complaint About a Prohibited AI Practice (EU AI Act Article 5)

The EU AI Act (Regulation (EU) 2024/1689) introduced an outright ban on specific AI practices deemed to pose unacceptable risks to fundamental rights and safety. These prohibitions, set out in Article 5, became enforceable on 2 February 2025, making them the first provisions of the AI Act to take effect. Organizations deploying prohibited AI systems face the highest tier of penalties: up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.

The prohibited practices include:

  • AI systems that use subliminal, manipulative, or deceptive techniques to distort behavior and cause significant harm
  • AI that exploits vulnerabilities related to age, disability, or socioeconomic situation
  • Social scoring systems
  • Individual criminal offense risk assessment based solely on profiling
  • Untargeted scraping of facial images from the internet or CCTV footage
  • Emotion inference in the workplace or educational institutions (except for medical or safety reasons)
  • Biometric categorization to infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)

If you believe an organization is deploying a prohibited AI system, you have the right to file a formal complaint with your national competent authority. DocuGov.ai generates a structured, legally precise complaint letter that identifies the suspected prohibited practice, cites the relevant provisions of Article 5, and requests an investigation.

Understanding your situation

You have evidence or reasonable grounds to believe that an organization is deploying an AI system that constitutes a prohibited practice under Article 5 of the EU AI Act. Common scenarios:

  • An employer is using AI to monitor employees' emotions or facial expressions in the workplace
  • A company or public body is implementing a social scoring system
  • An AI system is being used to manipulate or deceive people in a way that distorts their behavior
  • An organization is scraping facial images from social media to build a facial recognition database
  • Real-time biometric identification is being used in public spaces without legal authorization
  • An AI system is inferring sensitive personal characteristics (race, political opinions, sexual orientation) from biometric data
  • A service provider is using AI that exploits vulnerabilities of specific groups (elderly people, people with disabilities, children)
  • A school or university is using emotion recognition AI on students

What you need to prepare

  • Description of the AI system or practice you believe is prohibited
  • Name and details of the organization deploying the system
  • Any evidence: screenshots, documentation, news articles, product descriptions, privacy policies
  • Description of how the system affects you or others
  • Location where the system is deployed (country, city, specific premises)
  • Timeline: when you first became aware of the practice
  • Witnesses or others affected by the same practice (optional but strengthens the complaint)

Deadline

Article 5 prohibitions have been enforceable since 2 February 2025. There is no specific deadline for filing a complaint, but act promptly while evidence is available. Member states were required to designate national competent authorities by 2 August 2025.

🏛️ Authority

  • National market surveillance authorities designated under the AI Act
  • National data protection authorities (for overlapping GDPR issues): UODO (PL), BfDI (DE), CNIL (FR), AEPD (ES), Garante (IT)
  • The EU AI Office (for cross-border or systemic issues)
  • National consumer protection authorities

⚖️ Legal basis

EU AI Act (Regulation (EU) 2024/1689), Article 5: prohibited AI practices.

  • Article 5(1)(a): subliminal, manipulative, or deceptive techniques
  • Article 5(1)(b): exploitation of vulnerabilities
  • Article 5(1)(c): social scoring
  • Article 5(1)(d): individual criminal risk assessment based solely on profiling
  • Article 5(1)(e): untargeted facial image scraping
  • Article 5(1)(f): emotion inference in the workplace or education
  • Article 5(1)(g): biometric categorization to infer sensitive attributes
  • Article 5(1)(h): real-time remote biometric identification for law enforcement

Article 99: penalties of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
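The "whichever is higher" wording in Article 99 means the applicable cap is the maximum of the two figures, not their sum. A minimal sketch of that calculation (the company turnover below is purely hypothetical, for illustration only):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the Article 99 fine for prohibited AI practices:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor,
# so the higher figure applies.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")
```

For smaller organizations whose 7%-of-turnover figure falls below EUR 35 million, the fixed EUR 35 million amount is the applicable cap.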

Expert tips

  1. Be as specific as possible. Identify the exact AI system, the organization deploying it, and which subparagraph of Article 5 you believe is violated.
  2. Gather evidence before filing: screenshots, product documentation, news articles, privacy policies mentioning the technology, testimonials.
  3. File with the correct authority. Each EU member state has designated national competent authorities. If in doubt, your national DPA is a good starting point.
  4. Mention the penalty framework: Article 99 imposes the highest tier of fines (EUR 35 million or 7% of global turnover, whichever is higher) for prohibited practices.
  5. Consider also filing a GDPR complaint if the practice involves personal data processing; the two frameworks overlap significantly.
  6. You may file a complaint even if you are not personally affected. The AI Act allows any person or entity to report suspected violations.

Ready to create your document?

Generate a professional letter in minutes
