🤖 AI Rights & Algorithmic Decisions

Deepfake Complaint Letter (AI Act Article 50 + Criminal Law)

Deepfakes - AI-generated or manipulated images, audio, and video that realistically depict people saying or doing things they never did - have become one of the most urgent digital harms of the 2020s. Whether it is non-consensual intimate imagery, identity fraud, political disinformation, or a reputational attack, deepfake technology is increasingly weaponized against individuals.

The legal landscape for addressing deepfakes in Europe has strengthened significantly. The EU AI Act (Regulation 2024/1689) Article 50 introduces mandatory transparency and disclosure obligations for deployers of AI systems that generate or manipulate content constituting deepfakes. At the national level, several EU member states have introduced or strengthened criminal provisions targeting deepfakes. Italy enacted Law No. 132/2025, effective from October 2025, which creates a specific criminal offense for unlawful dissemination of AI-generated or altered content (deepfakes), punishable by one to five years' imprisonment. Other jurisdictions address deepfakes through existing laws on defamation, harassment, non-consensual intimate imagery, identity fraud, and personal data violations under the GDPR.

DocuGov.ai generates a comprehensive complaint letter that: (1) reports the deepfake to the relevant platforms and demands takedown, (2) files a formal complaint with law enforcement or the appropriate regulatory authority, and (3) invokes your rights under the GDPR, the AI Act, and applicable national criminal law.

Understanding your situation

Someone has created, distributed, or published AI-generated or AI-manipulated content (a deepfake) that depicts you, uses your likeness or voice, or misrepresents you in a way that causes or could cause harm. Common scenarios:

  • Non-consensual intimate imagery: someone created AI-generated sexual content using your likeness without your consent
  • Identity fraud: a deepfake of you is being used for financial fraud, impersonation, or social engineering
  • Reputational attack: a manipulated video or audio recording purports to show you saying or doing something you never did
  • Harassment or bullying: AI-generated content is being used to harass, intimidate, or humiliate you
  • Commercial misuse: your likeness has been used without authorization in AI-generated advertising
  • Circulation: the content is spreading on social media, messaging apps, adult websites, or other online platforms
  • Sextortion: you have been a victim of deepfake sextortion demanding payment to remove fabricated content

What you need to prepare

  • Evidence of the deepfake: URLs, screenshots (with timestamps), downloaded copies if possible
  • Proof that the content is fabricated: original unmanipulated versions, alibi evidence
  • Platform where the content is published: name, URL, any response to takedown requests
  • Identity of the perpetrator (if known): name, account, or any identifying information
  • Timeline: when you first became aware of the deepfake
  • Evidence of harm: emotional distress, financial loss, reputational damage, screenshots of distribution
  • Your identification (to prove you are the person depicted)
  • Police report number (if you have already reported to law enforcement)

Deadline

Act immediately. For platform takedowns, most platforms have expedited review for non-consensual intimate imagery and deepfakes. For criminal complaints, statutes of limitation vary but are typically measured in years. For GDPR complaints, act promptly.

🏛️ Authority

  • Law enforcement: national police (criminal complaint for non-consensual imagery, fraud, harassment, defamation)
  • Platform: report using the platform's own reporting tools
  • National DPA: for GDPR violations (processing biometric data without consent, Article 9)
  • National AI competent authority: for AI Act transparency violations (from August 2026)
  • Cybercrime units: Europol EC3, national cybercrime divisions

⚖️ Legal basis

  • EU AI Act Article 50(4): from 2 August 2026, deployers of deepfake AI systems must disclose that content has been artificially generated or manipulated
  • AI Act Article 99: penalties for transparency violations will include fines up to EUR 7.5 million or 1.5% of global turnover
  • GDPR Article 9: processing of biometric data requires explicit consent
  • GDPR Articles 17 and 79: right to erasure and right to a judicial remedy
  • Digital Services Act (DSA): platform obligations regarding illegal content
  • Italy Law No. 132/2025: specific criminal offense for unlawful deepfake dissemination (one to five years' imprisonment)
  • National laws on defamation, harassment, non-consensual intimate imagery, and identity fraud

Expert tips

  1. Preserve all evidence immediately. Take screenshots with timestamps, download copies of the content, and archive URLs using the Wayback Machine. Digital evidence disappears quickly once reported.
  2. Report to the hosting platform first - most have expedited processes for deepfake takedowns, especially for non-consensual intimate imagery.
  3. File a police report, especially if the deepfake involves intimate imagery, fraud, threats, or extortion. In many jurisdictions, creating or distributing certain deepfakes is already a criminal offense.
  4. Send a GDPR Article 17 erasure request to any organization hosting the content. Processing your biometric data without consent violates GDPR Article 9.
  5. From 2 August 2026, the transparency obligations in AI Act Article 50 become enforceable. Deployers who fail to disclose AI-generated content will face fines up to EUR 7.5 million or 1.5% of turnover.
  6. If you are a victim of deepfake sextortion, do not pay. Report to law enforcement and the platform immediately.
  7. Consider contacting the Revenge Porn Helpline (UK) or an equivalent national helpline for support.
