DSA: Trusted Flaggers and Transparency Reporting
Trusted Flaggers and Transparency Reporting [Articles 22-24 and 42]
Rule: Online platforms must establish systems for trusted flaggers to submit priority notices, take measures against misuse of their systems, and publish detailed transparency reports on content moderation activities.
Trusted Flaggers [Article 22]
Article 22(1): Definition and Treatment
“Trusted flagger”: Entity that has demonstrated particular expertise and competence for purposes of detecting, identifying, and notifying illegal content.
Trusted flagger status is awarded by the Digital Services Coordinator of the Member State in which the applicant is established, not by platforms. Platforms must:
- Take the technical and organisational measures needed to give trusted flagger notices priority
- Process and decide upon those notices without undue delay
“Priority” treatment means:
- Trusted flagger notices reviewed before general user reports
- Dedicated review process
- Faster response times
- Direct communication channel
Rationale: Organizations with expertise can help platforms identify illegal content more effectively
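The mechanics of priority treatment are straightforward to sketch. Below is a minimal Python illustration using a two-tier review queue; the DSA mandates the outcome (priority handling without undue delay), not this or any particular mechanism, and all names here are hypothetical:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Illustrative priority tiers -- lower numbers are reviewed first.
PRIORITY_TRUSTED = 0
PRIORITY_GENERAL = 1

@dataclass(order=True)
class QueuedNotice:
    priority: int
    sequence: int                          # preserves FIFO order within a tier
    notice_id: str = field(compare=False)  # excluded from ordering

class ReviewQueue:
    """Hypothetical review queue giving trusted flagger notices priority."""

    def __init__(self) -> None:
        self._heap: list[QueuedNotice] = []
        self._counter = itertools.count()

    def submit(self, notice_id: str, from_trusted_flagger: bool) -> None:
        tier = PRIORITY_TRUSTED if from_trusted_flagger else PRIORITY_GENERAL
        heapq.heappush(self._heap, QueuedNotice(tier, next(self._counter), notice_id))

    def next_for_review(self) -> str | None:
        return heapq.heappop(self._heap).notice_id if self._heap else None

queue = ReviewQueue()
queue.submit("n-1001", from_trusted_flagger=False)
queue.submit("n-1002", from_trusted_flagger=True)
assert queue.next_for_review() == "n-1002"  # trusted flagger notice jumps the queue
```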
Article 22(2): Eligibility Criteria
To qualify, an entity must demonstrate to the Digital Services Coordinator that it meets all of the following:
| Criterion | Details |
|---|---|
| Particular expertise | Specialized knowledge in detecting illegal content |
| Independence | Independent of platform |
| Objectivity | Diligent and objective reports |
| Quality track record | Accurate notices with low error rate |
Examples of potential trusted flaggers:
- NGOs focused on specific harms (child safety, hate speech, terrorism)
- Government agencies (consumer protection, IP offices)
- Industry associations (anti-counterfeiting groups)
- Fact-checking organizations
- Cybersecurity firms
Not eligible:
- Entities with conflicts of interest
- Those with poor accuracy record
- Entities that misuse reporting systems
Article 22(6)-(7): Status Revocation
Revocation is handled by the Digital Services Coordinator that awarded the status, not by the platform. If a platform has information indicating that a trusted flagger has submitted a significant number of insufficiently precise, inaccurate, or inadequately substantiated notices, it must communicate that information to the Digital Services Coordinator. Following an investigation, the Coordinator must revoke the status if the entity:
- No longer meets the eligibility criteria
- Submits notices that are insufficiently precise, inaccurate, or inadequately substantiated
- Otherwise misuses the status
Due process required:
- Entity is informed of the investigation
- Entity has the opportunity to react to the findings before revocation
- Reasons for revocation must be stated
Article 22(5): Publication of Trusted Flaggers
Digital Services Coordinators communicate the names, addresses, and email addresses of the entities they have awarded trusted flagger status to the Commission, which:
- Publishes them in a publicly available database
- Keeps the database up to date
Purpose: Transparency about which entities have priority access
Measures Against Misuse [Article 23]
Article 23(1)-(2): Suspension of Service and of Notice Processing
Platforms must suspend, for a reasonable period and after prior warning:
- Provision of the service to recipients who frequently provide manifestly illegal content [Art 23(1)]
- Processing of notices and complaints from individuals, entities, or complainants who frequently submit manifestly unfounded notices or complaints [Art 23(2)]
“Frequently” means:
- A pattern of repeated behavior, not isolated incidents
- Assessed under the platform's published policy (Art 23(4) requires this policy to be set out in the terms and conditions, with examples and indicative suspension durations)
“Manifestly illegal” means:
- Content whose illegality is evident without substantive legal analysis
- Obvious violations
“Manifestly unfounded” means:
- Notices/complaints obviously without merit
- Malicious or abusive reporting
Article 23(3): Proportionate Application
When deciding on a suspension, the platform must assess, on a case-by-case basis, at least the following:
| Factor | Consideration |
|---|---|
| Number of illegal items | How much manifestly illegal content |
| Severity | Type of illegality |
| Consequences | Impact of violations |
| Recipient’s intent | Deliberate vs. negligent |
| Freedom of expression | Right to information, opinion |
Suspension must be:
- Proportionate to violation severity
- After prior warning (except serious violations)
- For reasonable period
- Clearly communicated with reasons
Examples:
| Scenario | Proportionate Response |
|---|---|
| Individual uploads 3 videos promoting terrorism | Immediate suspension, referral to authorities |
| User posts 20 copyright-infringing photos | Warning, then temporary suspension if continues |
| Entity submits 50 manifestly unfounded copyright notices | Suspend notice-processing privileges |
| User makes 10 groundless complaints | Warning, then suspend complaint access |
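Applying the definitions above, a misuse policy can be enforced with a sliding-window count. The sketch below is illustrative only; the DSA does not fix numbers, and the thresholds reuse the "3+ violations in 30 days" example given in the compliance checklist later in this section:

```python
from datetime import datetime, timedelta

# Illustrative policy thresholds -- the DSA leaves "frequently" to the
# platform's own published policy.
WINDOW = timedelta(days=30)
WARNING_THRESHOLD = 2   # second manifestly illegal item: prior warning
SUSPEND_THRESHOLD = 3   # third item inside the window: suspension

def next_action(violation_times: list[datetime], now: datetime) -> str:
    """Return the proportionate next step for a recipient's account."""
    recent = [t for t in violation_times if now - t <= WINDOW]
    if len(recent) >= SUSPEND_THRESHOLD:
        return "suspend"   # for a reasonable period, with a statement of reasons
    if len(recent) >= WARNING_THRESHOLD:
        return "warn"
    return "none"

now = datetime(2024, 7, 1)
history = [now - timedelta(days=d) for d in (1, 10, 40)]  # two recent, one stale
print(next_action(history, now))  # -> "warn": only two violations fall in the window
```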
Statement of Reasons for Suspensions [Article 17]
Suspension decision must include:
- Facts and circumstances leading to decision
- Reference to violated terms or illegal content type
- Information on redress
- Duration of suspension
Same requirements as Art 17 (statement of reasons for content restrictions)
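One way to keep suspension statements complete is to model the required elements as a record. The field names below are our own illustration, not terms from the regulation:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SuspensionStatement:
    """Field names are illustrative stand-ins for the required elements."""
    facts_and_circumstances: str
    ground: str                 # violated term or illegal-content category
    redress_options: list[str]
    suspension_start: str       # ISO 8601 dates
    suspension_end: str

statement = SuspensionStatement(
    facts_and_circumstances="3 manifestly illegal listings posted between "
                            "2024-06-01 and 2024-06-20, despite a prior warning.",
    ground="Terms of service s.4.2 (counterfeit goods)",
    redress_options=[
        "internal complaint (Art 20)",
        "out-of-court dispute settlement (Art 21)",
        "judicial redress",
    ],
    suspension_start="2024-07-01",
    suspension_end="2024-07-31",
)
print(json.dumps(asdict(statement), indent=2))
```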
Complaints About Suspension [Article 20]
Suspended recipients may file complaint through internal complaint-handling system (Art 20)
Platform must:
- Process complaint fairly
- Reverse suspension if unjustified
- Restore access promptly if suspension lifted
Transparency Reporting for Online Platforms [Article 24]
Annual Reporting Obligation [Article 15(1)]
Online platforms must publish transparency reports at least once a year, in a clear, easily accessible, and machine-readable format; Article 24 adds platform-specific content to those reports. (VLOPs must publish every six months under Article 42.)
Content to Be Included [Articles 15(1) and 24(1)]
Report must contain information on:
(a) Content Moderation Activities
| Category | Data Required |
|---|---|
| Orders from authorities | Number received by Member State and content type |
| Notices received | Number, legal ground invoked, median time for decision |
| Own-initiative removals | Number of items removed or restricted, content type |
| Complaints received | Number through Art 20 system, basis, decisions, median time |
| Out-of-court dispute resolution | Number of disputes submitted, outcomes |
| Suspensions | Number under Art 23, reasons |
(b) Automated Means of Content Moderation
If automated tools are used, the report must describe:
- The purpose of each tool
- Indicators of its accuracy
- Safeguards applied to limit errors
Examples:
- Hash-matching for CSAM
- Text filters for hate speech
- Image recognition for violent content
- Spam detection algorithms
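Of the tools listed, hash-matching is the simplest to sketch. The toy example below does exact SHA-256 matching against a hypothetical hash list; production systems instead use perceptual hashes supplied by vetted organisations so that trivially modified copies still match:

```python
import hashlib

# Hypothetical hash list of known illegal items; in practice such lists are
# supplied by vetted organisations rather than compiled by the platform.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def matches_known_item(payload: bytes) -> bool:
    """Exact matching yields essentially no false positives (a reportable
    accuracy indicator) but misses altered copies -- hence the use of
    perceptual hashing in real deployments."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_HASHES

print(matches_known_item(b"test"))   # True
print(matches_known_item(b"test!"))  # False: one changed byte defeats exact matching
```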
(c) Trust and Safety Resources
| Resource | Details |
|---|---|
| Staff numbers | Content moderators, by category and language |
| Qualifications | Training, expertise |
| Languages covered | Languages moderation available in |
Purpose: Demonstrate adequate resourcing for content moderation
Disaggregation of Reported Data
Data must be disaggregated by:
- Member State (where identifiable)
- Content type (illegal content categories, terms violations)
- Language of notice/complaint
- Whether automated means used
Rationale: Enable analysis of patterns, effectiveness, potential biases
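A minimal sketch of disaggregation over a hypothetical event log, with one counter per required axis (field names are illustrative):

```python
from collections import Counter

# Hypothetical notice log; field names are illustrative.
notices = [
    {"member_state": "DE", "content_type": "hate_speech", "language": "de", "automated": False},
    {"member_state": "DE", "content_type": "hate_speech", "language": "de", "automated": True},
    {"member_state": "FR", "content_type": "counterfeit", "language": "fr", "automated": True},
]

# One counter per disaggregation axis listed above.
by_axis = {
    axis: Counter(n[axis] for n in notices)
    for axis in ("member_state", "content_type", "language", "automated")
}
print(by_axis["member_state"])  # Counter({'DE': 2, 'FR': 1})
```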
Format and Accessibility [Article 15(1)]
Reports must be:
| Requirement | Details |
|---|---|
| Machine-readable | Structured data (JSON, XML, CSV) |
| Publicly available | On platform website |
| Easy to find | Clearly labeled, accessible without registration |
| Archived | Previous reports available for at least 5 years |
Best practices:
- Interactive dashboards
- Downloadable datasets
- API access for researchers
- Visualizations of key trends
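A sketch of what the machine-readable output might look like as JSON; the field names are illustrative only and should be aligned with the Commission's harmonised reporting templates before publication:

```python
import json

# Illustrative structure only -- align field names with the Commission's
# harmonised reporting templates before publishing.
report = {
    "reporting_period": {"start": "2024-01-01", "end": "2024-06-30"},
    "notices": {"received": 12403, "median_decision_time_hours": 18.5},
    "own_initiative_moderation": {"items_actioned": 8211},
    "complaints": {"received": 950, "decisions_reversed": 112},
    "suspensions": {"service": 41, "notice_processing": 7},
}

with open("transparency_report_2024_H1.json", "w") as f:
    json.dump(report, f, indent=2)
```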
First Report Deadline
First report due:
- Within 4 months of the DSA applying to the platform (by 17 June 2024 for most platforms, the regulation having applied generally from 17 February 2024)
- At the required frequency thereafter (annually; every six months for VLOPs)
Additional Transparency for VLOPs [Article 42]
Article 42: Enhanced Transparency Obligations
Very Large Online Platforms (45M+ average monthly active recipients in the EU) must publish transparency reports every six months and include additional information:
| Additional Requirement | Details |
|---|---|
| Systemic risks | Overview of measures to mitigate risks identified in Art 34 risk assessments |
| Content recommendations | Number of impressions, main parameters for recommender systems |
| Advertising | Number of ads displayed, disaggregated by type, targeting used |
| Data access | Information on data access granted to vetted researchers under Art 40 |
Purpose: Greater accountability for platforms with significant societal impact
VLOP Reporting Requirements in Detail
(a) Risk Mitigation Measures
Report must describe:
- Risks identified in risk assessment (Art 34)
- Measures implemented to mitigate risks
- Effectiveness of mitigation measures
- Changes to measures during reporting period
Types of systemic risks (from Art 34):
- Dissemination of illegal content
- Actual or foreseeable negative effects on fundamental rights
- Manipulation of service (coordinated inauthentic behavior)
- Public security, health, minors protection, civic discourse impacts
(b) Recommender Systems Transparency
Data on content ranking:
- Number of content impressions
- Main parameters used for recommendations
- Options for modifying/influencing parameters
- Changes to recommender systems
“Main parameters” examples:
- Engagement signals (likes, shares, time spent)
- User connections/network
- Content characteristics
- Recency
- Personalization factors
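To make "main parameters" concrete, here is a toy ranking function whose signals and weights are exactly the kind of information such a disclosure would describe; all names and values are invented for illustration:

```python
# Toy ranking function that makes the "main parameters" explicit: the
# signals and weights below are invented for illustration.
WEIGHTS = {"engagement": 0.5, "recency": 0.3, "affinity": 0.2}

def score(item: dict) -> float:
    return sum(WEIGHTS[signal] * item[signal] for signal in WEIGHTS)

items = [
    {"id": "a", "engagement": 0.9, "recency": 0.2, "affinity": 0.1},
    {"id": "b", "engagement": 0.3, "recency": 0.9, "affinity": 0.8},
]
ranked = sorted(items, key=score, reverse=True)
print([i["id"] for i in ranked])  # -> ['b', 'a']
```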
(c) Advertising Transparency
Enhanced ad reporting:
- Total number of ads displayed
- Breakdown by ad type
- Use of recommender systems for ad delivery
- Use of profiling and targeting
- Reach by demographic (aggregated)
Individual ads are already covered by Art 26; the VLOP transparency report adds aggregate figures, complementing the ad repository required by Art 39
(d) Data Access for Researchers
Report on vetted researcher access (Art 40):
- Number of access requests
- Number approved/denied
- Types of data provided
- Research topics
- Safeguards applied
Purpose: Demonstrate compliance with researcher access obligations
Practical Compliance
Implementing Trusted Flagger System
Checklist:
- ✅ Verify claimed trusted flagger status against the Commission's public database
- ✅ Designate internal team to handle trusted flagger notices
- ✅ Create priority review queue
- ✅ Establish faster SLAs for trusted flagger notices
- ✅ Monitor the quality of each trusted flagger's notices
- ✅ Report a significant number of imprecise or unsubstantiated notices to the awarding Digital Services Coordinator
- ✅ Document all decisions
Recommended trusted flagger agreement:
- Responsibilities of flagger
- Platform’s commitments
- Data protection provisions
- Status review process
- Termination provisions
Implementing Suspension Measures
Policy development:
- ✅ Define “frequently” (e.g., 3+ violations in 30 days)
- ✅ Define “manifestly illegal” by content type
- ✅ Define “manifestly unfounded” for notices
- ✅ Establish warning system
- ✅ Create suspension duration guidelines
- ✅ Implement statement of reasons template
- ✅ Connect to complaint system (Art 20)
- ✅ Train moderators on policy
- ✅ Monitor for consistency
Proportionality assessment:
- Severity scoring system
- Escalation ladder (warning → temp suspension → permanent), sketched below
- Manual review for serious cases
- Fundamental rights considerations
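A sketch of the escalation ladder referenced in the list above; the tiers, durations, and severity shortcut are policy choices for the platform, not DSA-mandated values:

```python
# Illustrative escalation ladder; tiers and durations are policy choices.
LADDER = ["warning", "7-day suspension", "30-day suspension", "permanent suspension"]

def next_step(prior_actions: int, severe: bool) -> str:
    """Severe violations (e.g., terrorist content) may skip the warning step."""
    start = 1 if severe else 0
    return LADDER[min(start + prior_actions, len(LADDER) - 1)]

print(next_step(prior_actions=0, severe=False))  # warning
print(next_step(prior_actions=2, severe=False))  # 30-day suspension
print(next_step(prior_actions=0, severe=True))   # 7-day suspension
```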
Creating Transparency Reports
Data collection systems:
- ✅ Log all relevant events (notices, removals, complaints, orders)
- ✅ Tag with required metadata (Member State, content type, language)
- ✅ Track timing (receipt to decision)
- ✅ Record automated tool usage
- ✅ Maintain resource statistics (staff, languages)
- ✅ Export capabilities for reporting
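A sketch of an event record carrying the metadata the checklist above calls for, with a derived decision time for the median calculations; field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ModerationEvent:
    """Illustrative event record carrying the report's required metadata."""
    event_type: str            # "notice", "removal", "complaint", "order"
    member_state: str          # where identifiable
    content_type: str
    language: str
    automated: bool
    received_at: datetime
    decided_at: datetime | None = None   # None while still pending

    @property
    def decision_hours(self) -> float | None:
        if self.decided_at is None:
            return None
        return (self.decided_at - self.received_at).total_seconds() / 3600

event = ModerationEvent("notice", "DE", "hate_speech", "de", False,
                        datetime(2024, 6, 1, 9), datetime(2024, 6, 1, 15))
print(event.decision_hours)  # 6.0
```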
Report production:
- ✅ Extract data for reporting period
- ✅ Disaggregate as required
- ✅ Calculate medians and aggregates
- ✅ Describe automated tools
- ✅ Include resource information
- ✅ For VLOPs: add systemic risk, recommender, ad, research data
- ✅ Generate machine-readable format
- ✅ Create human-readable version
- ✅ Internal review
- ✅ Publish on website
- ✅ Archive previous reports
- ✅ Notify Digital Services Coordinator
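A minimal sketch of the median and aggregate calculations, using simplified dict records in place of the full event class sketched earlier:

```python
from statistics import median

# Simplified records standing in for the ModerationEvent class above.
events = [
    {"type": "notice", "decision_hours": 4.0},
    {"type": "notice", "decision_hours": 30.0},
    {"type": "notice", "decision_hours": None},     # still pending
    {"type": "complaint", "decision_hours": 12.0},
]

notice_times = [e["decision_hours"] for e in events
                if e["type"] == "notice" and e["decision_hours"] is not None]
metrics = {
    "notices_received": sum(1 for e in events if e["type"] == "notice"),
    "median_decision_time_hours": median(notice_times) if notice_times else None,
}
print(metrics)  # {'notices_received': 3, 'median_decision_time_hours': 17.0}
```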
Publishing best practices:
- Dedicated transparency page
- Summary dashboard with key metrics
- Downloadable structured data
- Explanatory text for context
- Comparative data (previous periods)
- Interactive visualizations
Common Mistakes
Treating trusted flaggers the same as ordinary users:
- Must provide priority treatment
- Faster review and action required
- Direct communication channel
Not verifying trusted flagger status:
- Status is awarded by Digital Services Coordinators, not the platform
- Check the Commission's public database
- Document how each trusted flagger's notices are handled
Suspending without warning:
- Prior warning required (except serious cases)
- Clear communication of violations
- Statement of reasons mandatory
Not considering proportionality:
- Must balance severity, intent, fundamental rights
- Escalation approach recommended
- Permanent bans only for serious/repeated violations
Transparency reports missing required data:
- All Art 15(1) and 24(1) elements mandatory
- Must disaggregate by Member State, content type, language
- Machine-readable format required
Reports not accessible:
- Must be publicly available
- Easy to find on website
- No registration barriers
- Archived for 5 years
VLOPs not including enhanced data:
- Art 42 additional requirements mandatory
- Systemic risk mitigation reporting
- Recommender system data
- Researcher access information
Citation
- Digital Services Act - Article 15
- Digital Services Act - Article 22
- Digital Services Act - Article 23
- Digital Services Act - Article 24
- Digital Services Act - Article 42