
Online Safety Act 2023: User-to-User Services Duties

User-to-User Services Duties [Sections 6-23]

Rule: All regulated user-to-user services must comply with duties of care to protect users from illegal content and (where applicable) protect children from harmful material. Category 1 services have additional obligations regarding democratic content, journalism, and user empowerment.

Effective: January 10, 2024 (most provisions)

Section 6: Overview of Part 3

6.1 — Purpose of Part 3

Part 3 establishes:

  1. Duties of care on providers of regulated user-to-user and search services
  2. Codes of practice explaining how to comply
  3. Interpretive provisions defining content categories

Chapter structure:

  • Chapter 2: User-to-user services duties ← This document
  • Chapter 3: Search services duties
  • Chapter 4: Children’s access assessments
  • Chapter 5: Fraudulent advertising
  • Chapter 6: Codes of practice
  • Chapter 7: Interpretation (illegal content definitions, harmful content categories)

Sections 7-8: Which Duties Apply

7.1 — Duty Framework by Service Type

All regulated user-to-user services:

| Duty | Section | Applies To |
| --- | --- | --- |
| Illegal content risk assessment | 9 | ALL services |
| Illegal content safety measures | 10 | ALL services |
| Content reporting | 20 | ALL services |
| Complaints procedures | 21 | ALL services |
| Freedom of expression & privacy | 22 | ALL services |
| Record-keeping and review | 23 | ALL services |

Services likely accessed by children (additional):

| Duty | Section | Trigger |
| --- | --- | --- |
| Children's risk assessment | 11 | If children likely to access (Sections 35-37) |
| Children's protection measures | 12 | If children likely to access |
| Age verification/estimation | 13 | If primary priority harmful content present |

Category 1 services only (additional):

| Duty | Section | Note |
| --- | --- | --- |
| Adult user empowerment | 15 | Content filtering tools |
| Democratic content protection | 17 | Political content |
| News publisher content protection | 18 | Notice & response procedures |
| Journalistic content protection | 19 | Expedited complaints |

7.2 — What Duties DON’T Apply To

Exclusions:

  • Regulated provider pornographic content (Part 5 applies instead)
  • Search functionality in combined services (Chapter 3 applies)
  • Non-UK design/operation aspects

Practical effect: If you operate a multi-function service (e.g., a social platform with built-in search), different parts of the service trigger different duties: the user-to-user duties in this chapter cover the social features, while Chapter 3 covers the search functionality.

Sections 9-10: Illegal Content Duties

9.1 — Illegal Content Risk Assessment

Requirement:

Providers must complete and keep up to date “suitable and sufficient” illegal content risk assessments.

Assessment must cover:

| Risk Factor | What to Assess |
| --- | --- |
| User base | Who uses the service? Demographics, locations |
| Encounter risks | How likely are users to encounter priority illegal content? |
| Facilitation | Does service design facilitate priority offences? |
| Dissemination | How quickly does illegal content spread? |
| Harm severity | How serious is potential harm from illegal content? |

Priority Illegal Content

Focus areas (Schedule 7):

| Category | Examples |
| --- | --- |
| Terrorism content | Encouraging terrorism, disseminating terrorist publications |
| CSEA content | Child sexual abuse and exploitation material |
| Controlled drugs | Supply, production, offering drugs |
| Fraud | Dishonestly making false representations |
| Harassment & stalking | Putting people in fear, persistent unwanted contact |
| Hate crimes | Incitement to racial hatred, hatred based on sexual orientation/religion |
| Immigration offences | Assisting unlawful immigration |
| Public order offences | Threatening, abusive, insulting content |
| Inchoate offences | Encouraging/assisting offences |

When to update assessment:

  1. OFCOM publishes significant risk profile changes
  2. Before implementing major design modifications
  3. At least annually (good practice)
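
To make this concrete, here is a minimal sketch of how an AI agent's compliance tooling might record a Section 9 assessment covering the risk factors above and check whether the update triggers have fired. This is an illustration only, not a prescribed format; all class, field, and parameter names are hypothetical.

```python
# Illustrative sketch only: one way to record a Section 9 illegal content risk
# assessment. Field names are hypothetical, not taken from the Act.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class IllegalContentRiskAssessment:
    completed_on: date
    user_base: str                 # who uses the service (demographics, locations)
    encounter_risk: str            # likelihood users encounter priority illegal content
    facilitation_risk: str         # whether service design facilitates priority offences
    dissemination_speed: str       # how quickly illegal content spreads
    harm_severity: str             # seriousness of potential harm
    priority_categories_reviewed: list[str] = field(default_factory=list)

    def needs_update(self, today: date,
                     ofcom_risk_profile_changed: bool = False,
                     major_design_change_planned: bool = False) -> bool:
        """Mirror the update triggers listed above: OFCOM risk profile changes,
        major design modifications, or the annual good-practice refresh."""
        stale = today - self.completed_on > timedelta(days=365)
        return ofcom_risk_profile_changed or major_design_change_planned or stale
```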

9.2 — Illegal Content Safety Duties (Section 10)

Core obligation:

Take proportionate steps to mitigate and manage risks identified in assessment.

Required measures:

| Category | Examples |
| --- | --- |
| Design & policies | Terms of service prohibiting illegal content |
| Algorithms | De-ranking illegal content, preventing recommendations |
| Access controls | Limiting who can post, contact limits |
| Content moderation | Proactive detection, user reporting, removal |
| User support | In-app tools, warnings, safety resources |
| Staff practices | Moderation guidelines, training, escalation procedures |

Swift Removal Obligation

Critical requirement:

“Take swift action to prevent users from encountering priority illegal content” identified by provider or users.

“Swift” means:

  • Immediately for terrorism and CSEA content
  • Within hours for other priority illegal content
  • Documented decision timelines

Removal vs. restriction:

  • Removal = content deleted entirely
  • Restriction = content hidden, limited distribution, warnings applied

Both can satisfy “prevent from encountering” if effective.
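
A minimal sketch of how flagged content could be routed under this duty, assuming the "immediately / within hours" reading above (the specific deadlines and category labels are assumptions for illustration, not statutory values):

```python
# Illustrative sketch: route flagged content to removal or restriction with
# target response times reflecting the "swift action" guidance above.
from datetime import timedelta

TARGET_RESPONSE = {
    "terrorism": timedelta(0),                # act immediately
    "csea": timedelta(0),                     # act immediately
    "priority_illegal": timedelta(hours=6),   # assumed reading of "within hours"
}

def handle_flagged_item(category: str, can_restrict_effectively: bool) -> dict:
    """Removal and restriction can both satisfy the duty to prevent users
    encountering the content, provided the chosen measure is effective."""
    deadline = TARGET_RESPONSE.get(category, timedelta(hours=24))
    action = ("restrict"
              if can_restrict_effectively and category not in ("terrorism", "csea")
              else "remove")
    return {"category": category, "action": action, "act_within": deadline}

print(handle_flagged_item("csea", can_restrict_effectively=True))
# {'category': 'csea', 'action': 'remove', 'act_within': datetime.timedelta(0)}
```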

Terms of Service Requirements

Must specify:

  • Which illegal content is prohibited
  • How content is moderated
  • How priority illegal content is handled (terrorism, CSEA, etc.)
  • How users can report illegal content
  • Consequences for violating terms (account suspension, content removal)

Transparency requirement: Terms must be clear, accessible, and applied consistently.

Sections 11-13: Children’s Protection Duties

11.1 — Children’s Risk Assessment

Trigger: Service is likely to be accessed by children (determined via the children's access assessment under Sections 35-37).

Assessment must address:

| Risk Area | What to Assess |
| --- | --- |
| Age-specific harm | Different harms affect different age groups differently |
| Content functionality | How does design amplify harm to children? |
| Algorithms | Do recommendations expose children to harmful content? |
| Adult-child contact | Does design enable adults to contact/groom children? |

Content Harmful to Children

Categories (per codes of practice):

| Type | Examples |
| --- | --- |
| Primary priority | Suicide/self-harm content, eating disorder promotion, pornography |
| Priority | Bullying, abuse, violence, content undermining parental authority |
| Non-designated | Any other content reasonably harmful to children (provider determines) |

11.2 — Age-Differentiated Risk

Critical principle:

Harm varies by age and maturity. 7-year-olds face different risks than 16-year-olds.

Practical implication for AI agents:

  • Assess context: is this appropriate for a 10-year-old? 15-year-old?
  • Implement age-gating for different maturity levels
  • Don’t treat all under-18s identically
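
One way to reflect age-differentiated risk in practice is to score content appropriateness against age bands rather than a single under-18 flag. The sketch below is a hypothetical example; the bands and content labels are invented for illustration and are not taken from the Act or codes of practice.

```python
# Illustrative sketch: treat under-18s as distinct age bands, not one group.
AGE_BANDS = [(0, 12, "child"), (13, 15, "young_teen"), (16, 17, "older_teen"), (18, 200, "adult")]

# Hypothetical policy: which example content labels are acceptable per band.
ALLOWED_BY_BAND = {
    "child": {"general"},
    "young_teen": {"general", "mild_peril"},
    "older_teen": {"general", "mild_peril", "moderate_themes"},
    "adult": {"general", "mild_peril", "moderate_themes", "adult_only"},
}

def band_for_age(age: int) -> str:
    return next(name for low, high, name in AGE_BANDS if low <= age <= high)

def is_appropriate(content_label: str, age: int) -> bool:
    return content_label in ALLOWED_BY_BAND[band_for_age(age)]

assert is_appropriate("moderate_themes", 16)
assert not is_appropriate("moderate_themes", 10)
```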

12.1 — Children's Safety Measures (Section 12)

Obligation:

Take proportionate measures to mitigate and manage risks to children.

Required approaches:

| Measure Type | Examples |
| --- | --- |
| Design safety | Age-appropriate defaults, disable features for children |
| Content moderation | Proactive detection of harmful-to-children content |
| Age-gating | Prevent children accessing age-inappropriate content/features |
| Parental controls | Tools for parents to manage children's accounts |
| Adult-child contact limits | Restrict unknown adults messaging children |

12.2 — Proportionality Test

Factors to consider:

| Factor | Assessment |
| --- | --- |
| Harm severity | How serious is potential harm? |
| Likelihood | How probable is harm? |
| User numbers | How many children affected? |
| Technical feasibility | What's reasonably implementable? |
| User rights | Impact on legitimate use? |

Example:

  • High severity + high likelihood = stringent measures required
  • Low severity + low likelihood = lighter measures acceptable
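
A minimal sketch of that severity-times-likelihood logic follows. The scoring and the stringency labels are a made-up heuristic for illustration, not an OFCOM-prescribed method.

```python
# Illustrative sketch of the proportionality reasoning above: combine harm
# severity and likelihood to pick a stringency level for safety measures.
def measure_stringency(severity: str, likelihood: str) -> str:
    score = {"low": 1, "medium": 2, "high": 3}
    total = score[severity] + score[likelihood]
    if total >= 5:
        return "stringent"   # e.g. default-off features, strict age-gating
    if total >= 3:
        return "moderate"    # e.g. warnings, reduced recommendations
    return "light"           # e.g. reporting tools only

assert measure_stringency("high", "high") == "stringent"
assert measure_stringency("low", "low") == "light"
```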

Section 13: Age Verification/Estimation

13.1 — Mandatory Age-Gating

Rule:

If service includes primary priority content harmful to children, provider MUST use age verification or age estimation to prevent children from encountering it.

Exception: Service entirely prohibits such content for all users (not just children).

What is “primary priority content”?

  • Suicide/self-harm encouragement
  • Eating disorder promotion content
  • Pornography
  • (Codes of practice will specify full list)

13.2 — Age Verification vs. Age Estimation

| Method | Accuracy | Examples |
| --- | --- | --- |
| Age verification | High confidence of actual age | ID document check, credit card verification, facial age estimation (high accuracy) |
| Age estimation | Probabilistic age range | Self-declaration + behavioral signals, device data, AI age estimation (lower accuracy) |

Standard: Method must be highly effective at correctly determining age.

Cannot claim “children can’t access”: If service lacks robust age barriers, must assume children CAN access.
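
The Section 13 trigger described above reduces to a simple conditional, sketched below under the assumption that a provider can reliably tell whether primary priority content can appear on the service; the function and parameter names are hypothetical.

```python
# Illustrative sketch of the 13.1 rule: age assurance is required when primary
# priority content can appear, unless it is prohibited for every user.
def age_assurance_required(has_primary_priority_content: bool,
                           content_banned_for_all_users: bool) -> bool:
    """Age verification or estimation (to a "highly effective" standard) is
    required when primary priority content harmful to children is present,
    unless the service prohibits that content for all users, not just children."""
    return has_primary_priority_content and not content_banned_for_all_users

assert age_assurance_required(True, False) is True
assert age_assurance_required(True, True) is False   # the stated exception
```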

Sections 14-19: Category 1 Service Duties

14.1 — Who is Category 1?

OFCOM determines based on:

  • Number of UK users (typically millions)
  • Functionality (livestreaming, algorithms, etc.)
  • User profiles (children present, adult content)

As of 2025: ~20 platforms (Meta, Google/YouTube, X, TikTok, Snapchat, Reddit, etc.)

15.1 — Adult User Empowerment (Section 15)

Requirement:

Provide features enabling adult users to filter content they don’t want to see.

Mandatory filter categories:

| Content Type | Description |
| --- | --- |
| Suicide/self-harm | Content encouraging, promoting, or providing instructions |
| Eating disorders | Content promoting anorexia, bulimia, extreme dieting |
| Abuse targeting protected characteristics | Content abusing individuals based on race, religion, sex, sexual orientation, disability, age |
| Incitement to hatred | Content inciting hatred against protected groups |

User control requirements:

  • Easy to access — prominently offered to users
  • Granular — users can select specific categories
  • Effective — filters actually work
  • Opt-in or opt-out — users can enable/disable

15.2 — Verification Filtering

Additional requirement:

Enable adults to filter content from unverified accounts.

Practical effect: Users can choose to only see content from verified users (those who proved identity).

Why it matters: Reduces exposure to bots, trolls, anonymous abuse.
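
A minimal sketch of a per-user preferences model covering the mandatory filter categories and the unverified-account filter follows; the data structure and names are hypothetical, intended only to show the granular, user-controlled shape of the duty.

```python
# Illustrative sketch: per-user filter preferences for Section 15 features.
from dataclasses import dataclass, field

MANDATORY_FILTER_CATEGORIES = {
    "suicide_self_harm",
    "eating_disorders",
    "abuse_protected_characteristics",
    "incitement_to_hatred",
}

@dataclass
class AdultFilterPreferences:
    enabled_categories: set = field(default_factory=set)   # granular, per category
    hide_unverified_accounts: bool = False                  # Section 15.2 option

    def should_hide(self, content_categories: set, author_verified: bool) -> bool:
        if self.hide_unverified_accounts and not author_verified:
            return True
        return bool(self.enabled_categories & content_categories)

prefs = AdultFilterPreferences(enabled_categories={"incitement_to_hatred"})
assert prefs.should_hide({"incitement_to_hatred"}, author_verified=True)
```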

17.1 — Content of Democratic Importance (Section 17)

Rule:

Systems and processes must ensure free expression of democratic content is considered in moderation decisions.

Democratic content includes:

  • Political debate and commentary
  • Discussion of public policy
  • Electoral campaigning
  • Criticism of government/public figures

Requirement: Before removing/restricting democratic content, consider:

  • Is this protected political speech?
  • Is removal proportionate?
  • Could less restrictive measures work?

NOT an exemption: Illegal democratic content (incitement, harassment, etc.) must still be removed.
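
The considerations above can be sketched as a pre-action gate; this is a simplified, hypothetical decision routine, not a statutory test.

```python
# Illustrative sketch: a pre-action check for content flagged as democratically
# important, reflecting the considerations above.
def moderation_decision(is_illegal: bool, is_democratic: bool,
                        less_restrictive_option_available: bool) -> str:
    if is_illegal:
        return "remove"                            # democratic importance is not an exemption
    if is_democratic and less_restrictive_option_available:
        return "apply_less_restrictive_measure"    # e.g. label or limit reach
    if is_democratic:
        return "escalate_for_human_review"         # weigh free expression before acting
    return "apply_standard_policy"
```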

18.1 — News Publisher Content (Section 18)

Who is a “recognised news publisher”?

  • UK-based news outlets
  • Regulated by OFCOM, IPSO, IMPRESS, or equivalent
  • Meets editorial standards

Special protections:

| Obligation | Timing | Purpose |
| --- | --- | --- |
| Pre-removal notice | Before action | Notify publisher of planned removal/restriction |
| Detailed reasoning | With notice | Explain why content violates terms |
| Response opportunity | Reasonable time | Publisher can contest decision |
| Document decision | After action | Record reasoning, publisher response |

Exceptions (immediate removal allowed):

  • Criminal liability risk to provider
  • Relevant offence committed (terrorism, CSEA, etc.)
  • Urgent harm prevention

18.2 — Practical Application for AI Agents

Detection:

  1. Check if content originates from recognised news publisher
  2. Flag for special handling if yes

Before removal:

  1. Send notice to publisher
  2. Provide detailed explanation
  3. Allow response period (e.g., 24-48 hours)
  4. Document entire process

Balance:

  • Don’t exempt news publishers from illegal content rules
  • But give them procedural fairness before action
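
A sketch of that notice-and-response flow is below. The 48-hour window reuses the example range given above, and the state labels and function signature are hypothetical.

```python
# Illustrative sketch of the news publisher notice-and-response flow described
# in 18.1-18.2. Exceptions mirror the list above (criminal liability, relevant
# offence, urgent harm prevention).
from datetime import datetime, timedelta

RESPONSE_WINDOW = timedelta(hours=48)   # assumed window, per the example above

def process_news_publisher_content(violates_terms: bool, urgent_exception: bool,
                                    notified_at: datetime | None,
                                    publisher_response: str | None,
                                    now: datetime) -> str:
    if not violates_terms:
        return "no_action"
    if urgent_exception:
        return "remove_immediately_and_document"
    if notified_at is None:
        return "send_notice_with_detailed_reasoning"
    if publisher_response is None and now - notified_at < RESPONSE_WINDOW:
        return "await_publisher_response"
    return "decide_and_document"        # record reasoning and any publisher response
```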

19.1 — Journalistic Content (Section 19)

Broader than news publishers: Includes freelance journalists, citizen journalists, documentary makers.

Protection:

Expedited complaints procedure for content takedowns or user sanctions affecting journalism in the public interest.

If complaint succeeds:

  • Content must be swiftly reinstated
  • User account unsuspended
  • No penalties for user

Journalistic content test:

  1. Content has journalistic character (reporting, investigation, analysis)
  2. Serves public interest
  3. Created by someone engaged in journalism (professional or citizen)

AI agent implication: Flag content as potentially journalistic → human review before actioning.
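
A minimal sketch of that routing step, with the three-part test reduced to boolean inputs (a simplification; the names are hypothetical):

```python
# Illustrative sketch of "flag as potentially journalistic, then human review".
def route_possible_journalism(journalistic_character: bool, public_interest: bool,
                              creator_engaged_in_journalism: bool) -> str:
    if journalistic_character and public_interest and creator_engaged_in_journalism:
        return "hold_for_human_review"   # do not auto-action; expedited complaints apply
    return "standard_moderation_queue"
```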

Sections 20-21: Reporting and Complaints

20.1 — Content Reporting (Section 20)

Who can report:

  • Users of the service
  • Affected persons (even if not users)

What can be reported:

| Service Type | Reportable Content |
| --- | --- |
| All services | Illegal content |
| Services accessed by children | Illegal content + content harmful to children |

Mechanism requirements:

  • Easy to use — obvious, accessible
  • Clear process — users understand how reporting works
  • Accessible to children — age-appropriate language/design

21.1 — Complaints Procedures (Section 21)

Mandatory complaints handling:

| Complaint Type | Who Can Complain | Applies To |
| --- | --- | --- |
| Illegal content | Users, affected persons | All services |
| Duty non-compliance | Users | All services |
| Content takedown | Content creator | All services |
| Account suspension | Affected user | All services |
| Proactive tech misuse | Affected user | All services |

Category 1 services (additional complaints):

  • User empowerment features not working
  • Democratic importance content wrongly removed
  • News publisher content improperly actioned
  • Journalistic content wrongly taken down

Procedural Requirements

Complaints process must:

  • ✅ Be transparent (published, easy to find)
  • ✅ Be accessible (including to children)
  • ✅ Provide clear outcomes
  • ✅ Include appeal mechanism
  • ✅ Have reasonable timelines

Good practice:

  • Acknowledge complaints within 24 hours
  • Resolve within 5-7 days for straightforward cases
  • Escalate complex cases to human reviewers
  • Provide detailed reasoning for decisions
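
The good-practice timings above can be tracked mechanically; the sketch below is a hypothetical status check using those targets (which come from this list, not from the Act itself).

```python
# Illustrative sketch: check a complaint against the good-practice timings above
# (acknowledge within 24 hours, resolve straightforward cases within about 7 days,
# escalate complex cases to human reviewers).
from datetime import datetime, timedelta

def complaint_status(received: datetime, acknowledged: bool,
                     resolved: bool, complex_case: bool, now: datetime) -> str:
    if not acknowledged and now - received > timedelta(hours=24):
        return "overdue_acknowledgement"
    if complex_case and not resolved:
        return "escalate_to_human_reviewer"
    if not resolved and now - received > timedelta(days=7):
        return "overdue_resolution"
    return "within_target"
```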

Sections 22-23: Freedom of Expression, Privacy, and Records

22.1 — Freedom of Expression and Privacy (Section 22)

All providers must:

Have regard to the importance of freedom of expression and privacy when designing safety measures.

Balancing test:

| Consideration | Weight |
| --- | --- |
| User safety | Paramount |
| Free expression | Fundamental right |
| Privacy | Fundamental right |
| Proportionality | Least restrictive effective measure |

Practical examples:

| Safety Measure | Expression/Privacy Impact | Proportionate? |
| --- | --- | --- |
| Proactive scanning of all private messages | High privacy invasion | ⚠️ Likely disproportionate for general content; may be justified for CSEA |
| Keyword filters for public posts | Moderate expression restriction | ✅ Likely proportionate if accurate |
| User reporting with human review | Low impact | ✅ Proportionate |
| Blanket age restrictions (e.g., 18+) | High expression restriction | ⚠️ May be disproportionate if age-gating works |

22.2 — Category 1 Impact Assessments

Additional requirement for Category 1:

Conduct and publish impact assessments analyzing effects on expression and privacy.

Assessment must:

  1. Identify how safety measures affect expression/privacy
  2. Quantify impact (how many users, how severely)
  3. Explain mitigations taken
  4. Demonstrate proportionality

Publication: Must be publicly available and updated regularly.

23.1 — Record-Keeping (Section 23)

Required records:

| Record Type | Content | Retention |
| --- | --- | --- |
| Risk assessments | Full illegal content and children's assessments | Current + historical |
| Compliance measures | How duties are met | Ongoing |
| Reviews | Regular compliance reviews | Annual minimum |

Category 1 services: Must provide assessment records to OFCOM on request.
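
As a rough illustration, a compliance record log matching the record types above could look like the following; the structure and names are hypothetical and the 365-day cadence reflects the "annual minimum" noted above.

```python
# Illustrative sketch: minimal compliance record log with an annual review check.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ComplianceRecord:
    record_type: str      # "risk_assessment", "compliance_measure", or "review"
    summary: str
    created_on: date

def annual_review_due(records: list[ComplianceRecord], today: date) -> bool:
    reviews = [r.created_on for r in records if r.record_type == "review"]
    return not reviews or today - max(reviews) > timedelta(days=365)
```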

23.2 — Regular Review Obligation

Frequency:

  • Regular reviews (at least annually)
  • Additional reviews after significant design changes

Scope of review:

  1. Are risk assessments still accurate?
  2. Are safety measures still effective?
  3. Have new risks emerged?
  4. Are terms of service up to date?

Practical Compliance Checklist for AI Agents

For ALL User-to-User Services:

Risk assessment:

  • Complete illegal content risk assessment (Section 9)
  • Update assessment when OFCOM publishes risk profiles
  • Review before major design changes

Safety measures:

  • Implement measures to prevent illegal content encounters (Section 10)
  • Swift removal of priority illegal content (terrorism, CSEA)
  • Clear terms of service prohibiting illegal content

User tools:

  • Easy content reporting mechanism (Section 20)
  • Transparent complaints procedure (Section 21)
  • Record-keeping system (Section 23)

Rights protections:

  • Consider freedom of expression in moderation (Section 22)
  • Balance privacy with safety measures

If Service Likely Accessed by Children:

Additional assessments:

  • Complete children’s access assessment (Sections 35-37)
  • Complete children’s risk assessment (Section 11)

Additional protections:

  • Implement age-appropriate safety measures (Section 12)
  • Age verification/estimation for primary priority content (Section 13)
  • Child-accessible reporting and complaints

If Category 1 Service:

User empowerment:

  • Content filters (suicide, eating disorders, abuse, hate) (Section 15)
  • Verification filtering option

Content protections:

  • Democratic content procedures (Section 17)
  • News publisher notice & response (Section 18)
  • Journalistic content expedited complaints (Section 19)

Transparency:

  • Publish freedom of expression & privacy impact assessment (Section 22)
  • Provide risk assessment records to OFCOM (Section 23)

Key Takeaways

  1. Tiered obligations — All services have basic duties; services likely accessed by children add children's protections; Category 1 adds user empowerment and democratic content duties
  2. Risk-based approach — Assess risks first, then implement proportionate measures
  3. Swift action required — Terrorism and CSEA content must be removed immediately
  4. Children = mandatory age-gating — If primary priority harmful content present, must verify age
  5. Category 1 = enhanced protections — Democratic content, news publishers, journalism get special procedures
  6. Transparency paramount — Complaints, terms of service, reporting must be clear and accessible
  7. Rights balancing — Safety important but must respect expression and privacy

Citation

Part 3, Chapter 1 — Overview, Online Safety Act 2023

Part 3, Chapter 2 — User-to-User Services, Online Safety Act 2023


Contains public sector information licensed under the Open Government Licence v3.0 where applicable. This is not legal advice. Always refer to official sources for authoritative text.
