Online Safety Act 2023: User-to-User Services Duties
User-to-User Services Duties [Sections 6-23]
Rule: All regulated user-to-user services must comply with duties of care to protect users from illegal content and (where applicable) protect children from harmful material. Category 1 services have additional obligations regarding democratic content, journalism, and user empowerment.
Effective: most provisions commenced 10 January 2024; the duties bite in practice as OFCOM's codes of practice take effect
Section 6: Overview of Part 3
6.1 — Purpose of Part 3
Part 3 establishes:
- Duties of care on providers of regulated user-to-user and search services
- Codes of practice explaining how to comply
- Interpretive provisions defining content categories
Chapter structure:
- Chapter 2: User-to-user services duties ← This document
- Chapter 3: Search services duties
- Chapter 4: Children’s access assessments
- Chapter 5: Fraudulent advertising
- Chapter 6: Codes of practice
- Chapter 7: Interpretation (illegal content definitions, harmful content categories)
Sections 7-8: Which Duties Apply
7.1 — Duty Framework by Service Type
All regulated user-to-user services:
| Duty | Section | Applies To |
|---|---|---|
| Illegal content risk assessment | 9 | ALL services |
| Illegal content safety measures | 10 | ALL services |
| Content reporting | 20 | ALL services |
| Complaints procedures | 21 | ALL services |
| Freedom of expression & privacy | 22 | ALL services |
| Record-keeping and review | 23 | ALL services |
Services likely accessed by children (additional):
| Duty | Section | Trigger |
|---|---|---|
| Children’s risk assessment | 11 | If children likely to access (Sections 35-37) |
| Children’s protection measures | 12 | If children likely to access |
| Age verification/estimation | 13 | If primary priority harmful content present |
Category 1 services only (additional):
| Duty | Section | Note |
|---|---|---|
| Adult user empowerment | 15 | Content filtering tools |
| Democratic content protection | 17 | Political content |
| News publisher content protection | 18 | Notice & response procedures |
| Journalistic content protection | 19 | Expedited complaints |
7.2 — What the Duties Do Not Apply To
Exclusions:
- Regulated provider pornographic content (Part 5 applies instead)
- Search functionality in combined services (Chapter 3 applies)
- Design and operation of the service outside the UK
Practical effect: If you operate a multi-function service (e.g., social platform + search engine), different parts trigger different duties.
Sections 9-10: Illegal Content Duties
9.1 — Illegal Content Risk Assessment
Requirement:
Providers must complete and keep up to date “suitable and sufficient” illegal content risk assessments.
Assessment must cover:
| Risk Factor | What to Assess |
|---|---|
| User base | Who uses the service? Demographics, locations |
| Encounter risks | How likely are users to encounter priority illegal content? |
| Facilitation | Does service design facilitate priority offences? |
| Dissemination | How quickly does illegal content spread? |
| Harm severity | How serious is potential harm from illegal content? |
Priority Illegal Content
Focus areas (Schedule 7):
| Category | Examples |
|---|---|
| Terrorism content | Encouraging terrorism, disseminating terrorist publications |
| CSEA content | Child sexual abuse and exploitation material |
| Controlled drugs | Supply, production, offering drugs |
| Fraud | Dishonestly making false representations |
| Harassment & stalking | Putting people in fear, persistent unwanted contact |
| Hate crimes | Incitement to racial hatred, hatred based on sexual orientation/religion |
| Immigration offences | Assisting unlawful immigration |
| Public order offences | Threatening, abusive, insulting content |
| Inchoate offences | Encouraging/assisting offences |
When to update assessment:
- OFCOM publishes significant risk profile changes
- Before implementing major design modifications
- At least annually (good practice)
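A minimal sketch of how a provider might track an assessment and its update triggers in code. The field names, the low/medium/high scale, and the annual refresh are illustrative assumptions, not requirements drawn from the Act or OFCOM guidance.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative risk factors mirroring the assessment table above.
RISK_FACTORS = ["user_base", "encounter_risk", "facilitation", "dissemination", "harm_severity"]

@dataclass
class IllegalContentRiskAssessment:
    completed_on: date
    # Each factor scored low/medium/high by the assessor (illustrative scale).
    scores: dict = field(default_factory=dict)

    def is_suitable(self) -> bool:
        # "Suitable and sufficient" approximated here as: every factor assessed.
        return all(f in self.scores for f in RISK_FACTORS)

    def needs_update(self, today: date, ofcom_profile_changed: bool,
                     major_design_change_planned: bool) -> bool:
        # Update triggers mirroring the list above: OFCOM risk-profile changes,
        # upcoming design changes, or more than a year since the last assessment.
        overdue = today - self.completed_on > timedelta(days=365)
        return ofcom_profile_changed or major_design_change_planned or overdue

assessment = IllegalContentRiskAssessment(
    completed_on=date(2024, 3, 1),
    scores={f: "medium" for f in RISK_FACTORS},
)
print(assessment.is_suitable())                                  # True
print(assessment.needs_update(date(2025, 6, 1), False, False))   # True (over a year old)
```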
9.2 — Illegal Content Safety Duties (Section 10)
Core obligation:
Take proportionate steps to mitigate and manage risks identified in assessment.
Required measures:
| Category | Examples |
|---|---|
| Design & policies | Terms of service prohibiting illegal content |
| Algorithms | De-ranking illegal content, preventing recommendations |
| Access controls | Limiting who can post, contact limits |
| Content moderation | Proactive detection, user reporting, removal |
| User support | In-app tools, warnings, safety resources |
| Staff practices | Moderation guidelines, training, escalation procedures |
Swift Removal Obligation
Critical requirement:
Use proportionate systems and processes to prevent users from encountering priority illegal content, and swiftly take such content down once the provider identifies it or is alerted to it by users.
In practice, "swift" is generally understood as:
- Immediately for terrorism and CSEA content
- Within hours for other priority illegal content
- Documented decision timelines
Removal vs. restriction:
- Removal = content deleted entirely
- Restriction = content hidden, limited distribution, warnings applied
Both can satisfy “prevent from encountering” if effective.
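The decision logic above can be sketched roughly as follows. The numeric response targets and category names are illustrative internal policy values; the Act itself only requires action to be "swift" and does not set deadlines.

```python
from enum import Enum

class Priority(Enum):
    TERRORISM = "terrorism"
    CSEA = "csea"
    OTHER_PRIORITY = "other_priority_illegal"
    NON_PRIORITY = "non_priority"

# Illustrative internal response targets reflecting the guidance above.
RESPONSE_TARGET_HOURS = {
    Priority.TERRORISM: 0,        # act immediately
    Priority.CSEA: 0,             # act immediately
    Priority.OTHER_PRIORITY: 24,  # "within hours" (example target)
    Priority.NON_PRIORITY: 72,
}

def moderation_action(category: Priority, restriction_is_effective: bool) -> dict:
    """Choose removal or restriction and record a documented target timeline.

    Restriction (hiding, limiting distribution, warning screens) may satisfy the
    duty to prevent users encountering content only if it is actually effective;
    for terrorism and CSEA this sketch always removes.
    """
    action = "restrict" if restriction_is_effective and category not in (
        Priority.TERRORISM, Priority.CSEA) else "remove"
    return {
        "action": action,
        "target_hours": RESPONSE_TARGET_HOURS[category],
        "documented": True,  # keep a record of the decision and its timeline
    }

print(moderation_action(Priority.CSEA, restriction_is_effective=True))
# {'action': 'remove', 'target_hours': 0, 'documented': True}
```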
Terms of Service Requirements
Must specify:
- Which illegal content is prohibited
- How content is moderated
- How priority illegal content is handled (terrorism, CSEA, etc.)
- How users can report illegal content
- Consequences for violating terms (account suspension, content removal)
Transparency requirement: Terms must be clear, accessible, and applied consistently.
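A small sketch of how a provider might check a draft terms-of-service document against the topics listed above. The topic keys are illustrative labels, not statutory terms.

```python
# Topics a provider's terms of service are expected to cover (from the list above).
REQUIRED_TOS_TOPICS = {
    "prohibited_illegal_content",
    "moderation_approach",
    "priority_illegal_content_handling",
    "reporting_route",
    "consequences_of_violation",
}

def missing_tos_topics(terms_sections: set[str]) -> set[str]:
    """Return the required topics that the published terms do not yet address."""
    return REQUIRED_TOS_TOPICS - terms_sections

draft_terms = {"prohibited_illegal_content", "reporting_route"}
print(sorted(missing_tos_topics(draft_terms)))
# ['consequences_of_violation', 'moderation_approach', 'priority_illegal_content_handling']
```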
Sections 11-13: Children’s Protection Duties
11.1 — Children’s Risk Assessment
Trigger: Service is likely to be accessed by children (determined via the children's access assessment under Sections 35-37).
Assessment must address:
| Risk Area | What to Assess |
|---|---|
| Age-specific harm | Different harms affect different age groups differently |
| Content functionality | How does design amplify harm to children? |
| Algorithms | Do recommendations expose children to harmful content? |
| Adult-child contact | Does design enable adults to contact/groom children? |
Content Harmful to Children
Categories (defined in the Act, with further detail in OFCOM guidance):
| Type | Examples |
|---|---|
| Primary priority | Suicide/self-harm content, eating disorder promotion, pornography |
| Priority | Bullying, abusive or hateful content, violent content, dangerous stunts/challenges |
| Non-designated | Other content presenting a material risk of significant harm to an appreciable number of children (provider determines) |
11.2 — Age-Differentiated Risk
Critical principle:
Harm varies by age and maturity. 7-year-olds face different risks than 16-year-olds.
Practical implication for AI agents:
- Assess context: is this appropriate for a 10-year-old? 15-year-old?
- Implement age-gating for different maturity levels
- Don’t treat all under-18s identically
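A rough sketch combining the content tiers and the age-differentiated principle above. The tier labels, age bands, and gating rules are illustrative assumptions a provider would need to calibrate against its own risk assessment and OFCOM guidance.

```python
from enum import Enum

class ChildHarmTier(Enum):
    PRIMARY_PRIORITY = "primary_priority"  # e.g. suicide/self-harm, eating disorders, pornography
    PRIORITY = "priority"                  # e.g. bullying, abusive or violent content
    NON_DESIGNATED = "non_designated"      # other content the provider judges harmful to children
    NOT_HARMFUL = "not_harmful"

# Illustrative age bands: harm and appropriate mitigations differ by maturity.
AGE_BANDS = [(0, 12, "child"), (13, 15, "younger_teen"), (16, 17, "older_teen"), (18, 200, "adult")]

def age_band(age: int) -> str:
    for lo, hi, label in AGE_BANDS:
        if lo <= age <= hi:
            return label
    raise ValueError("age out of range")

def allowed_for(age: int, tier: ChildHarmTier) -> bool:
    """Very rough gating rule (an assumption, not the statutory test):
    primary priority content is never shown to under-18s, priority content
    only to older teens and adults, everything else by default."""
    band = age_band(age)
    if tier is ChildHarmTier.PRIMARY_PRIORITY:
        return band == "adult"
    if tier is ChildHarmTier.PRIORITY:
        return band in ("older_teen", "adult")
    return True

print(allowed_for(10, ChildHarmTier.PRIORITY))           # False
print(allowed_for(16, ChildHarmTier.PRIORITY))           # True
print(allowed_for(17, ChildHarmTier.PRIMARY_PRIORITY))   # False
```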
12.1 — Children's Safety Measures (Section 12)
Obligation:
Take proportionate measures to mitigate and manage risks to children.
Required approaches:
| Measure Type | Examples |
|---|---|
| Design safety | Age-appropriate defaults, disable features for children |
| Content moderation | Proactive detection of harmful-to-children content |
| Age-gating | Prevent children accessing age-inappropriate content/features |
| Parental controls | Tools for parents to manage children’s accounts |
| Adult-child contact limits | Restrict unknown adults messaging children |
12.2 — Proportionality Test
Factors to consider:
| Factor | Assessment |
|---|---|
| Harm severity | How serious is potential harm? |
| Likelihood | How probable is harm? |
| User numbers | How many children affected? |
| Technical feasibility | What’s reasonably implementable? |
| User rights | Impact on legitimate use? |
Example:
- High severity + high likelihood = stringent measures required
- Low severity + low likelihood = lighter measures acceptable
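One way to turn the proportionality factors above into a repeatable triage step is sketched below. The weights and thresholds are invented for illustration; the statutory test is a judgment, not a formula.

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}

def measure_stringency(severity: str, likelihood: str,
                       children_affected: int, technically_feasible: bool) -> str:
    """Combine the proportionality factors above into a rough stringency level.

    The weighting is purely illustrative; the real assessment must also weigh
    impact on users' rights and what is reasonably implementable.
    """
    if not technically_feasible:
        return "document why the measure is not reasonably implementable"
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if children_affected > 100_000:
        score += 1
    if score >= 6:
        return "stringent measures (e.g. default-off features, proactive detection)"
    if score >= 3:
        return "moderate measures (e.g. age-appropriate defaults, reporting prompts)"
    return "lighter measures (e.g. monitoring and user tools)"

print(measure_stringency("high", "high", children_affected=500_000, technically_feasible=True))
print(measure_stringency("low", "low", children_affected=1_000, technically_feasible=True))
```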
Section 13: Age Verification/Estimation
13.1 — Mandatory Age-Gating
Rule:
If service includes primary priority content harmful to children, provider MUST use age verification or age estimation to prevent children from encountering it.
Exception: Service entirely prohibits such content for all users (not just children).
What is “primary priority content”?
- Suicide/self-harm encouragement
- Eating disorder promotion content
- Pornography
- (The Act sets out the full list, with further detail in OFCOM guidance)
13.2 — Age Verification vs. Age Estimation
| Method | Accuracy | Examples |
|---|---|---|
| Age verification | High confidence of actual age | ID document check, credit card verification, facial age estimation (high accuracy) |
| Age estimation | Probabilistic age range | Self-declaration + behavioral signals, device data, AI age estimation (lower accuracy) |
Standard: Method must be highly effective at correctly determining age.
Cannot claim “children can’t access”: If service lacks robust age barriers, must assume children CAN access.
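The age-gating rule and the "assume children can access" principle can be expressed as two small checks. Function names and parameters are illustrative.

```python
def age_assurance_required(has_primary_priority_content: bool,
                           content_banned_for_all_users: bool) -> bool:
    """Rule of thumb for the duty above: if primary priority content can appear
    on the service, age verification or estimation is required, unless the
    provider bans that content outright for every user (not just children)."""
    return has_primary_priority_content and not content_banned_for_all_users

def treat_user_as_child(age_check_passed: bool, age_check_highly_effective: bool) -> bool:
    """Without a highly effective age check, a provider cannot claim children
    can't access the service, so it should assume the user may be a child."""
    return not (age_check_passed and age_check_highly_effective)

print(age_assurance_required(True, False))   # True: must age-gate
print(age_assurance_required(True, True))    # False: content prohibited for everyone
print(treat_user_as_child(age_check_passed=True, age_check_highly_effective=False))  # True
```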
Sections 14-19: Category 1 Service Duties
14.1 — Who is Category 1?
OFCOM determines based on:
- Number of UK users (typically millions)
- Functionality (livestreaming, algorithms, etc.)
- User profiles (children present, adult content)
As of 2025: ~20 platforms (Meta, Google/YouTube, X, TikTok, Snapchat, Reddit, etc.)
15.1 — Adult User Empowerment (Section 15)
Requirement:
Provide features enabling adult users to filter content they don’t want to see.
Mandatory filter categories:
| Content Type | Description |
|---|---|
| Suicide/self-harm | Content encouraging, promoting, or providing instructions |
| Eating disorders | Content promoting anorexia, bulimia, extreme dieting |
| Abuse targeting protected characteristics | Content abusing individuals based on race, religion, sex, sexual orientation, disability, age |
| Incitement to hatred | Content inciting hatred against protected groups |
User control requirements:
- ✅ Easy to access — prominently offered to users
- ✅ Granular — users can select specific categories
- ✅ Effective — filters actually work
- ✅ Opt-in or opt-out — users can enable/disable
15.2 — Verification Filtering
Additional requirement:
Enable adults to filter content from unverified accounts.
Practical effect: Users can choose to only see content from verified users (those who proved identity).
Why it matters: Reduces exposure to bots, trolls, anonymous abuse.
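A sketch of how the user empowerment and verification-filtering features above might be modelled. Category labels and field names are illustrative, not taken from the Act.

```python
from dataclasses import dataclass, field

# Filter categories Category 1 services must offer adult users (from the table above).
FILTER_CATEGORIES = {"suicide_self_harm", "eating_disorders",
                     "abuse_protected_characteristics", "incitement_to_hatred"}

@dataclass
class AdultFilterPreferences:
    enabled_categories: set = field(default_factory=set)  # granular, per-category choice
    verified_accounts_only: bool = False                   # hide content from unverified accounts

def should_show(content_labels: set, author_verified: bool, prefs: AdultFilterPreferences) -> bool:
    """Apply the user's empowerment settings to a piece of content."""
    if prefs.verified_accounts_only and not author_verified:
        return False
    return not (content_labels & prefs.enabled_categories)

prefs = AdultFilterPreferences(enabled_categories={"eating_disorders"}, verified_accounts_only=True)
print(should_show({"eating_disorders"}, author_verified=True, prefs=prefs))    # False: filtered category
print(should_show({"political_comment"}, author_verified=False, prefs=prefs))  # False: unverified author
print(should_show({"political_comment"}, author_verified=True, prefs=prefs))   # True
```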
17.1 — Content of Democratic Importance (Section 17)
Rule:
Systems and processes must ensure free expression of democratic content is considered in moderation decisions.
Democratic content includes:
- Political debate and commentary
- Discussion of public policy
- Electoral campaigning
- Criticism of government/public figures
Requirement: Before removing/restricting democratic content, consider:
- Is this protected political speech?
- Is removal proportionate?
- Could less restrictive measures work?
NOT an exemption: Illegal democratic content (incitement, harassment, etc.) must still be removed.
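A possible pre-removal check reflecting the considerations above; the decision strings and escalation steps are illustrative internal policy, not statutory requirements.

```python
def democratic_content_check(is_illegal: bool, is_democratic: bool,
                             less_restrictive_option_available: bool) -> str:
    """Sketch of the pre-removal considerations above. Illegal content is still
    actioned; lawful democratic content gets the least restrictive effective step."""
    if is_illegal:
        return "remove (no exemption for illegal content)"
    if is_democratic:
        if less_restrictive_option_available:
            return "apply less restrictive measure (e.g. label or de-amplify) and document reasoning"
        return "escalate to human review before removal; document the proportionality assessment"
    return "apply normal moderation policy"

print(democratic_content_check(is_illegal=False, is_democratic=True,
                               less_restrictive_option_available=True))
```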
18.1 — News Publisher Content (Section 18)
Who is a “recognised news publisher”?
- News outlets with a UK business address (plus licensed broadcasters and the BBC)
- Publishes under a standards code with a complaints-handling process (IPSO/IMPRESS membership is one way to demonstrate this)
- Material is subject to editorial control
Special protections:
| Obligation | Timing | Purpose |
|---|---|---|
| Pre-removal notice | Before action | Notify publisher of planned removal/restriction |
| Detailed reasoning | With notice | Explain why content violates terms |
| Response opportunity | Reasonable time | Publisher can contest decision |
| Document decision | After action | Record reasoning, publisher response |
Exceptions (immediate removal allowed):
- Criminal liability risk to provider
- Relevant offence committed (terrorism, CSEA, etc.)
- Urgent harm prevention
18.2 — Practical Application for AI Agents
Detection:
- Check if content originates from recognised news publisher
- Flag for special handling if yes
Before removal:
- Send notice to publisher
- Provide detailed explanation
- Allow response period (e.g., 24-48 hours)
- Document entire process
Balance:
- Don’t exempt news publishers from illegal content rules
- But give them procedural fairness before action
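A sketch of the notice-and-response workflow above. The 48-hour response window and the step names are illustrative internal policy choices; the Act does not fix a numeric deadline.

```python
from datetime import datetime, timedelta, timezone

def handle_news_publisher_content(is_recognised_publisher: bool, urgent_exception: bool,
                                  response_window_hours: int = 48) -> dict:
    """Route content from a recognised news publisher through notice-and-response,
    unless one of the exceptions above permits immediate action."""
    if not is_recognised_publisher:
        return {"route": "normal moderation"}
    if urgent_exception:
        # Criminal liability risk, relevant offences (terrorism, CSEA, etc.) or urgent
        # harm prevention: immediate action is permitted, but still document it.
        return {"route": "immediate action", "document_decision": True}
    deadline = datetime.now(timezone.utc) + timedelta(hours=response_window_hours)
    return {
        "route": "notice and response",
        "steps": ["send pre-removal notice", "provide detailed reasoning",
                  "await publisher response", "document decision and response"],
        "response_deadline": deadline.isoformat(timespec="minutes"),
    }

print(handle_news_publisher_content(is_recognised_publisher=True, urgent_exception=False))
```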
19.1 — Journalistic Content (Section 19)
Broader than news publishers: Includes freelance journalists, citizen journalists, documentary makers.
Protection:
Expedited complaints procedure for content takedowns or user sanctions affecting journalism in the public interest.
If complaint succeeds:
- Content must be swiftly reinstated
- User account unsuspended
- No penalties for user
Journalistic content test:
- Content has journalistic character (reporting, investigation, analysis)
- Serves public interest
- Created by someone engaged in journalism (professional or citizen)
AI agent implication: Flag content as potentially journalistic → human review before actioning.
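A minimal routing check applying the three-part test above; the function and parameter names are illustrative.

```python
def route_takedown_complaint(has_journalistic_character: bool, public_interest: bool,
                             creator_engaged_in_journalism: bool) -> str:
    """Decide whether a takedown/sanction complaint takes the expedited route."""
    if has_journalistic_character and public_interest and creator_engaged_in_journalism:
        return ("expedited complaint + human review; if upheld, swiftly reinstate "
                "content and lift any user sanctions")
    return "standard complaints procedure"

print(route_takedown_complaint(True, True, True))
```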
Sections 20-21: Reporting and Complaints
20.1 — Content Reporting (Section 20)
Who can report:
- Users of the service
- Affected persons (even if not users)
What can be reported:
| Service Type | Reportable Content |
|---|---|
| All services | Illegal content |
| Services accessed by children | Illegal content + content harmful to children |
Mechanism requirements:
- ✅ Easy to use — obvious, accessible
- ✅ Clear process — users understand how reporting works
- ✅ Accessible to children — age-appropriate language/design
21.1 — Complaints Procedures (Section 21)
Mandatory complaints handling:
| Complaint Type | Who Can Complain | Applies To |
|---|---|---|
| Illegal content | Users, affected persons | All services |
| Duty non-compliance | Users | All services |
| Content takedown | Content creator | All services |
| Account suspension | Affected user | All services |
| Proactive tech misuse | Affected user | All services |
Category 1 services (additional complaints):
- User empowerment features not working
- Democratic importance content wrongly removed
- News publisher content improperly actioned
- Journalistic content wrongly taken down
Procedural Requirements
Complaints process must:
- ✅ Be transparent (published, easy to find)
- ✅ Be accessible (including to children)
- ✅ Provide clear outcomes
- ✅ Include appeal mechanism
- ✅ Have reasonable timelines
Good practice:
- Acknowledge complaints within 24 hours
- Resolve within 5-7 days for straightforward cases
- Escalate complex cases to human reviewers
- Provide detailed reasoning for decisions
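A sketch of how the good-practice timelines above might be tracked. The 24-hour and 7-day targets mirror the list above and are internal targets, not statutory deadlines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Complaint:
    received_at: datetime
    complexity: str          # "straightforward" or "complex" (illustrative labels)
    acknowledged: bool = False

def next_actions(complaint: Complaint, now: datetime) -> list[str]:
    """Flag overdue acknowledgements and resolutions, and escalate complex cases."""
    actions = []
    if not complaint.acknowledged and now - complaint.received_at > timedelta(hours=24):
        actions.append("acknowledgement overdue: send acknowledgement now")
    if complaint.complexity == "complex":
        actions.append("escalate to human reviewer")
    elif now - complaint.received_at > timedelta(days=7):
        actions.append("resolution overdue: prioritise and provide a reasoned outcome")
    return actions

c = Complaint(received_at=datetime.now(timezone.utc) - timedelta(days=8),
              complexity="straightforward")
print(next_actions(c, datetime.now(timezone.utc)))  # both acknowledgement and resolution overdue
```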
Sections 22-23: Freedom of Expression, Privacy, and Records
22.1 — Freedom of Expression and Privacy (Section 22)
All providers must:
Have regard to the importance of freedom of expression and privacy when designing safety measures.
Balancing test:
| Consideration | Weight |
|---|---|
| User safety | Paramount |
| Free expression | Fundamental right |
| Privacy | Fundamental right |
| Proportionality | Least restrictive effective measure |
Practical examples:
| Safety Measure | Expression/Privacy Impact | Proportionate? |
|---|---|---|
| Proactive scanning of all private messages | High privacy invasion | ⚠️ Likely disproportionate for general content; may be justified for CSEA |
| Keyword filters for public posts | Moderate expression restriction | ✅ Likely proportionate if accurate |
| User reporting with human review | Low impact | ✅ Proportionate |
| Blanket age restrictions (e.g., 18+) | High expression restriction | ⚠️ May be disproportionate where targeted age-gating would be effective |
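A rough triage reflecting the table above; the impact labels and outcomes are illustrative and no substitute for a case-by-case proportionality assessment.

```python
def proportionality_flag(privacy_impact: str, expression_impact: str,
                         targets_most_serious_harm: bool) -> str:
    """High-impact measures need a specific justification (e.g. CSEA detection);
    low-impact measures are usually proportionate but should still be documented."""
    high_impact = "high" in (privacy_impact, expression_impact)
    if high_impact and not targets_most_serious_harm:
        return "likely disproportionate: seek a less restrictive alternative"
    if high_impact:
        return "justify and document: only for the most serious harms"
    return "likely proportionate: document the rationale"

print(proportionality_flag("high", "low", targets_most_serious_harm=False))
print(proportionality_flag("low", "moderate", targets_most_serious_harm=False))
```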
22.2 — Category 1 Impact Assessments
Additional requirement for Category 1:
Conduct and publish impact assessments analyzing effects on expression and privacy.
Assessment must:
- Identify how safety measures affect expression/privacy
- Quantify impact (how many users, how severely)
- Explain mitigations taken
- Demonstrate proportionality
Publication: Must be publicly available and updated regularly.
23.1 — Record-Keeping (Section 23)
Required records:
| Record Type | Content | Retention |
|---|---|---|
| Risk assessments | Full illegal content and children’s assessments | Current + historical |
| Compliance measures | How duties are met | Ongoing |
| Reviews | Regular compliance reviews | Annual minimum |
Category 1 services: Must provide assessment records to OFCOM on request.
23.2 — Regular Review Obligation
Frequency:
- Regular reviews (at least annually)
- Additional reviews after significant design changes
Scope of review:
- Are risk assessments still accurate?
- Are safety measures still effective?
- Have new risks emerged?
- Are terms of service up to date?
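A small sketch of the review cadence above; the 365-day threshold reflects the at-least-annual good practice rather than a statutory deadline.

```python
from datetime import date, timedelta

REVIEW_QUESTIONS = [
    "Are the risk assessments still accurate?",
    "Are the safety measures still effective?",
    "Have new risks emerged?",
    "Are the terms of service up to date?",
]

def review_due(last_review: date, today: date, significant_design_change: bool) -> bool:
    """At least an annual review, plus an extra review after significant design changes."""
    return significant_design_change or (today - last_review > timedelta(days=365))

if review_due(date(2024, 5, 1), date(2025, 6, 1), significant_design_change=False):
    for q in REVIEW_QUESTIONS:
        print("-", q)
```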
Practical Compliance Checklist for AI Agents
For ALL User-to-User Services:
Risk assessment:
- Complete illegal content risk assessment (Section 9)
- Update assessment when OFCOM publishes risk profiles
- Review before major design changes
Safety measures:
- Implement measures to prevent illegal content encounters (Section 10)
- Swift removal of priority illegal content (terrorism, CSEA)
- Clear terms of service prohibiting illegal content
User tools:
- Easy content reporting mechanism (Section 20)
- Transparent complaints procedure (Section 21)
- Record-keeping system (Section 23)
Rights protections:
- Consider freedom of expression in moderation (Section 22)
- Balance privacy with safety measures
If Service Likely Accessed by Children:
Additional assessments:
- Complete children’s access assessment (Sections 35-37)
- Complete children’s risk assessment (Section 11)
Additional protections:
- Implement age-appropriate safety measures (Section 12)
- Age verification/estimation for primary priority content (Section 13)
- Child-accessible reporting and complaints
If Category 1 Service:
User empowerment:
- Content filters (suicide, eating disorders, abuse, hate) (Section 15)
- Verification filtering option
Content protections:
- Democratic content procedures (Section 17)
- News publisher notice & response (Section 18)
- Journalistic content expedited complaints (Section 19)
Transparency:
- Publish freedom of expression & privacy impact assessment (Section 22)
- Provide risk assessment records to OFCOM (Section 23)
Key Takeaways
- Tiered obligations — All services have basic duties; child-accessible services add children protections; Category 1 adds empowerment/democratic content
- Risk-based approach — Assess risks first, then implement proportionate measures
- Swift action required — Terrorism and CSEA content must be removed immediately
- Children = mandatory age-gating — If primary priority harmful content present, must verify age
- Category 1 = enhanced protections — Democratic content, news publishers, journalism get special procedures
- Transparency paramount — Complaints, terms of service, reporting must be clear and accessible
- Rights balancing — Safety important but must respect expression and privacy
Citation
Part 3, Chapter 1 — Overview, Online Safety Act 2023
Part 3, Chapter 2 — User-to-User Services, Online Safety Act 2023