Risk Ladder: Organizing Risks by Priority
Your client is launching a new data processing feature. You’ve identified fifteen GDPR compliance issues. The CEO asks: “Which ones do we handle first?”
You know the answer. You’ve done this analysis a hundred times. But articulating why the cross-border transfer mechanism is more urgent than updating the privacy policy footer—in a way that makes sense to non-lawyers—takes time you don’t have.
Here’s what most lawyers do: either treat everything as equally urgent (unhelpful), or make gut calls without showing the reasoning (which breeds mistrust). Neither approach helps clients make informed resource decisions.
The better approach: Make your risk assessment systematic and visible.
You already prioritize risks mentally. This prompt helps you document that thinking in a way clients can actually use to make decisions.
The Concept: Risk Ladder Analysis
A risk ladder forces categorization from “will destroy the company” down to “technically non-compliant but functionally irrelevant.”
The value isn’t that AI knows GDPR better than you. It doesn’t. The value is that AI systematically works through your risk list and makes the implicit hierarchy explicit—so you can show clients why you’re recommending they spend €50K fixing one issue but defer a €5K fix on another.
This is especially useful when:
- You’re facing budget constraints
- The client is pushing back on your recommendations
- You need to justify prioritization to non-legal stakeholders
- Multiple risks exist and sequencing matters
The Prompt
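A minimal sketch of how such a prompt might be assembled, assuming a plain chat interface. The tier names and field labels mirror the example output that follows; the function name, template wording, and structure are illustrative assumptions, not the exact prompt.

```python
# Hypothetical risk-ladder prompt builder. The tier labels match the
# example output in this article; everything else is illustrative.

RISK_LADDER_TEMPLATE = """\
You are assisting counsel with GDPR risk triage.

Context: {context}

Issues identified:
{issues}

Sort every issue into exactly one tier:
- TIER 1 - CRITICAL: blocks launch; clear violation or no valid legal basis.
- TIER 2 - SERIOUS: launch can proceed, but exposure during any regulatory inquiry.
- TIER 3 - STANDARD: operational hygiene; evidentiary gaps, not violations.
- TIER 4 - MINIMAL: common, rarely enforced; address on the next review cycle.

For each issue give: the tier, "Why it's Tier N" reasoning a non-lawyer can
follow, what facts would move it up or down a tier, and a timeline.
Do not mark everything urgent; the point is sequencing."""


def build_risk_ladder_prompt(context: str, issues: list[str]) -> str:
    """Render the template with a scenario and a bulleted issue list."""
    bullets = "\n".join(f"- {issue}" for issue in issues)
    return RISK_LADDER_TEMPLATE.format(context=context, issues=bullets)
```

The forced-choice tiering ("exactly one tier") and the "what facts would move it" field are what keep the output from collapsing into everything-is-urgent.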
Concrete Example
Scenario: A 50-person SaaS company is adding a new feature that analyzes customer usage data to provide personalized recommendations. In-house counsel has identified multiple GDPR considerations. CEO needs to know what to prioritize with a €75K compliance budget.
Issues identified:
- Data Processing Agreements (DPAs) with existing customers don’t cover new processing purpose
- Privacy policy mentions “analysis” but not “personalized recommendations”
- No formal DPIA has been conducted
- Marketing team wants to use insights for targeting (separate consent question)
- Some customer data is processed by US-based sub-processor without new SCCs
- Documentation of legitimate interest assessment is incomplete
- Employee training on new feature hasn’t happened
- Data retention policies don’t specify retention period for recommendation data
Run the prompt. Here’s the curated output:
TIER 1 - CRITICAL
Legal basis for new processing is unclear
Why it’s Tier 1: You’re expanding processing purposes beyond what existing customer agreements cover. Without updating DPAs or establishing a clear legal basis (consent, legitimate interest, contract necessity), the entire feature operates without valid legal ground. This is exactly what regulators target.
Timeline: Before launch. Block the feature release until this is resolved.

Cross-border transfers lack proper mechanism
Why it’s Tier 1: If your US sub-processor handles EU customer data and you don’t have updated SCCs or an adequacy mechanism, you’re in clear violation. Post-Schrems II, this is a high-visibility risk.
Timeline: Immediate. Must be in place before the new feature processes any data through that sub-processor.
TIER 2 - SERIOUS
DPIA hasn’t been conducted
Why it’s Tier 2: Automated decision-making that affects users likely triggers the DPIA requirement under GDPR Article 35. Not having one doesn’t stop the launch, but it leaves you exposed during any regulatory inquiry.
What changes this: If recommendations influence material outcomes (pricing, access, terms), this moves to Tier 1.
Timeline: Complete within 30 days of launch.

Privacy policy lacks specificity
Why it’s Tier 2: Generic “analysis” language doesn’t meet transparency requirements for the new feature. Users should understand what “personalized recommendations” means and what data is used. Enforcement risk is moderate but grows if users complain.
Timeline: Update before a significant user base sees the new feature (within 60 days).
TIER 3 - STANDARD
Documentation of legitimate interest assessment incomplete
Why it’s Tier 3: If you’re relying on legitimate interest as your legal basis, you should document the balancing test. But incomplete documentation is common, and its absence isn’t itself a violation—it’s an evidentiary gap if challenged.
Timeline: 90 days. Treat as operational hygiene.

Employee training on new feature hasn’t happened
Why it’s Tier 3: Good practice that reduces human-error risk, but its absence isn’t a compliance violation. Most companies have training gaps. Address this as part of the normal operational rollout.
Timeline: Within 6 months.
TIER 4 - MINIMAL
Data retention policy lacks specific period for recommendation data
Why it’s Tier 4: You should have this, but vague retention language is extremely common and rarely enforced absent other problems. It can be addressed when you next update your data governance docs.
Timeline: Next policy review cycle.

Marketing use of insights (separate consent)
Why it’s Tier 4 (for now): You correctly identified this as a separate consent question. Since marketing hasn’t started doing this yet, it isn’t creating current risk. But if they start without proper consent, this jumps to Tier 1.
Action: Document that this use is prohibited until a consent mechanism is in place.
What this shows your client:
“We need to spend the budget on updated DPAs and fixing the cross-border transfer issue before launch. The DPIA and privacy policy update happen in the first 30-60 days. Everything else can follow normal operational timelines. Here’s why…”
When to Use This
Deploy risk ladder analysis when:
- Budget constraints exist - the client needs to understand why €50K goes here, not €5K there
- Multiple compliance issues compete - you need to sequence your recommendations
- Stakeholders push back - the CFO is questioning your advice and wants to see the reasoning
- Risk appetite varies - calibrate recommendations for risk-tolerant or risk-averse clients by showing them the ladder
- The situation is fast-moving - a startup needs to launch but can’t fix everything immediately
The earlier you do this, the better. Risk ladders are most valuable when clients still have time to make informed tradeoffs.
Why This Works
You already do this analysis mentally. The prompt makes it systematic and communicable.
AI doesn’t have CYA incentives. When you ask it to honestly categorize risks, it does—without the professional liability anxiety that makes lawyers hedge. That clarity helps clients make better decisions.
This isn’t about AI replacing legal judgment. It’s about using AI to document your judgment in a format non-lawyers can actually use.
We’re building Mino for lawyers who want AI as a reasoning partner, not just a drafting tool. If you want specialist agents designed for exactly this kind of thinking, join the founding members list.