🤖 AI Rights & Algorithmic Decisions

Right to Explanation of an Algorithmic Decision (GDPR + AI Act)

When an algorithm makes a decision that affects your life - denying you credit, rejecting your insurance claim, screening out your job application, flagging you for fraud, or determining your benefit eligibility - you have the right to understand why. European data protection law and the EU AI Act together create a robust framework for demanding meaningful explanations of algorithmic decisions. Under GDPR Articles 13(2)(f), 14(2)(g), and 15(1)(h), organizations must provide "meaningful information about the logic involved" in automated decision-making. The EU AI Act goes further: for high-risk AI systems (which include credit scoring, insurance underwriting, employment screening, law enforcement profiling, and public service administration), Article 13 requires transparency and provision of information to deployers, and Article 86 creates an explicit right to explanation for individuals affected by high-risk AI decisions. This means you can demand not just that a human reviews the decision, but also that the organization explains in understandable terms: which data about you was used, how the algorithm processed that data, which factors weighed most heavily in the outcome, and why the system reached the specific conclusion it did. DocuGov.ai generates a formal explanation request that is precise, legally informed, and designed to elicit a substantive response.

Understanding your situation

You were affected by an automated or AI-assisted decision and want to understand how and why the algorithm reached its conclusion. Common scenarios:

  • A bank denied your loan and cited "risk assessment" without explaining which specific risk factors were identified
  • An insurance company adjusted your premium or denied coverage based on an opaque algorithmic evaluation
  • A government agency denied your application for benefits, housing, or a permit based on an automated eligibility check
  • An employer's AI recruitment tool rejected you and provided no meaningful feedback
  • A platform banned or restricted your account based on automated moderation or fraud detection
  • A credit reference agency assigned you a score that you believe is inaccurate
  • A public authority flagged you in a risk assessment system (fraud detection, tax audit selection)
  • A healthcare insurer or provider used an algorithm to determine your treatment eligibility

What you need to prepare

  • The decision you want explained (letter, email, notification, or screenshot)
  • The organization's name, address, and DPO contact details
  • Your reference number, account number, or application ID with the organization
  • A description of what you believe was automated about the decision
  • Any previous correspondence with the organization about the decision
  • A list of specific questions you want answered
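To illustrate how the checklist items above fit together in a finished request, here is a minimal sketch in Python. The field names, sample values, and template wording are illustrative assumptions, not DocuGov.ai's actual format:

```python
from dataclasses import dataclass


@dataclass
class ExplanationRequest:
    """Fields mirroring the preparation checklist (names are illustrative)."""
    organization: str
    dpo_contact: str
    reference_number: str
    decision_description: str
    questions: list[str]

    def to_letter(self) -> str:
        # Number the specific questions so the organization can answer point by point
        numbered = "\n".join(f"  {i}. {q}" for i, q in enumerate(self.questions, 1))
        return (
            f"To: {self.dpo_contact}, {self.organization}\n"
            f"Re: {self.reference_number}\n\n"
            "Pursuant to Article 15(1)(h) GDPR, I request meaningful information "
            "about the logic involved in the automated decision described below, "
            "its significance, and the envisaged consequences for me.\n\n"
            f"Decision: {self.decision_description}\n\n"
            f"Specific questions:\n{numbered}\n"
        )


# Hypothetical example values
req = ExplanationRequest(
    organization="Example Bank",
    dpo_contact="dpo@example-bank.example",
    reference_number="LOAN-2024-0042",
    decision_description="Loan application denied citing 'risk assessment'",
    questions=[
        "Which personal data was used as input?",
        "Which factors weighed most heavily in the outcome?",
    ],
)
print(req.to_letter())
```

The point of the sketch is simply that a complete request bundles identification (who and which decision), the legal basis, and concrete questions into one document.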

Deadline

GDPR access requests must be answered within one month (Article 12(3)), extendable by two further months for complex or numerous requests (three months in total). AI Act Article 86 right-to-explanation obligations apply once the relevant provisions take effect (2 August 2026 for high-risk systems).
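As a rough aid for tracking the Article 12(3) timeline (not legal advice), the one-month and three-month deadlines can be computed with simple calendar arithmetic. The `add_months` helper below is an illustrative assumption; it clamps to month end, so a request received on 31 January gets a standard deadline of 28/29 February:

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, clamping to month end
    (e.g. 31 Jan + 1 month -> 28/29 Feb)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # Last day of the target month, accounting for leap years
    days_in_month = [31, 29 if (year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)) else 28,
                     31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))


received = date(2025, 1, 31)
standard_deadline = add_months(received, 1)  # one month under Art. 12(3)
extended_deadline = add_months(received, 3)  # +2 months for complex requests
print(standard_deadline, extended_deadline)  # 2025-02-28 2025-04-30
```

If the standard deadline passes without a substantive response, that date itself becomes useful evidence in a complaint to the data protection authority.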

🏛️ Authority

Step 1: The organization itself (address the request to the DPO or compliance officer). Step 2: National Data Protection Authority - UODO (PL), BfDI (DE), CNIL (FR), ICO (UK), AEPD (ES), Garante (IT), AP (NL). Step 3: National AI competent authorities (once designated) for AI Act-specific rights.

⚖️ Legal basis

GDPR Article 15(1)(h): right to meaningful information about the logic involved in automated decision-making, its significance, and envisaged consequences. GDPR Articles 13(2)(f) and 14(2)(g): proactive transparency obligations. GDPR Article 22(3): right to obtain human intervention and contest automated decisions. From 2 August 2026 under current law: EU AI Act Article 86 establishes a right to explanation for individuals affected by high-risk AI decisions (Annex III, excluding category 2). Article 26(11) will require deployers to explain AI-assisted decisions to affected persons. Already enforceable: Article 85 allows complaints to market surveillance authorities about AI systems.

Expert tips

  1. Be specific about what you want to know. Request: (a) which personal data was used as input, (b) how the data was weighted, (c) which factors were most influential, (d) what outcome different input values would have produced, and (e) whether human review occurred.
  2. Cite the exact GDPR articles: 'Pursuant to Article 15(1)(h) GDPR, I request meaningful information about the logic involved in the automated processing, its significance, and the envisaged consequences for me.'
  3. If the organization refuses by citing a 'proprietary model', push back. The GDPR right to meaningful information does not permit blanket refusals based on trade secrets.
  4. From 2 August 2026 under current law, AI Act Article 86 will grant an explicit right to explanation for decisions made by high-risk AI systems listed in Annex III (excluding category 2). Reference this provision when writing to organizations that deploy such systems.
  5. Set a clear response deadline (e.g., one month per Article 12(3) GDPR). If the organization fails to respond, this becomes grounds for a DPA complaint.
  6. If the explanation reveals errors or bias, use this as the basis for contesting the decision under GDPR Article 22(3) or filing a formal complaint.

Ready to create your document?

Generate a professional letter in minutes

Generate This Letter Now