Online Safety Act 2023: Codes of Practice and Guidance
Codes of Practice and Guidance [Sections 41-54]
Rule: OFCOM must prepare codes of practice explaining how providers can comply with their Online Safety Act duties. Following code recommendations = presumed compliance. Alternative approaches allowed but must protect freedom of expression and privacy.
Effective: Codes being issued in phases 2024-2025. Most duties activate when their corresponding code comes into force.
Section 41: Codes of Practice About Duties
41.1 — What Codes Must OFCOM Prepare?
Mandatory codes:
| Code | Covers | Priority |
|---|---|---|
| Terrorism code | Illegal content duties for terrorism offences (Schedule 5) | HIGHEST |
| CSEA code | Illegal content duties for child sexual exploitation and abuse (Schedule 6) | HIGHEST |
| General illegal content code | All other illegal content duties | HIGH |
| Fraudulent advertising code | Category 1 and 2A services’ fraud ad duties (Chapter 5) | HIGH |
Purpose of codes:
Describe recommended measures for complying with duties.
Not prescriptive:
- Codes recommend, not mandate (except where law already mandates)
- Providers can use alternative approaches if they achieve same outcomes
- Focus on “measures for complying with” duties
41.2 — What’s in a Code?
Each code contains:
- Recommended measures — Specific technical, operational, or policy steps
- Best practices — Proven approaches from industry leaders
- Graduated approach — Different measures for different service sizes/types
- Flexibility provisions — Acknowledges one-size doesn’t fit all
Examples of recommended measures:
| Area | Example Measures |
|---|---|
| Content moderation | AI detection tools, human review processes, user reporting mechanisms |
| Age verification | Age estimation technology, government ID verification, credit card checks |
| Risk assessment | Data collection requirements, algorithmic risk analysis, external audits |
| User safety | Block/mute features, content warnings, safety settings |
| Transparency | Terms of service clarity, moderation decision explanations, appeals processes |
41.3 — Consultation Requirements
OFCOM must consult with:
| Stakeholder Category | Examples |
|---|---|
| Government | Secretary of State (mandatory) |
| Service providers | Platforms, search engines, industry associations |
| User representatives | Consumer groups, digital rights organizations |
| Children’s interests | Children’s Commissioner, child safety NGOs |
| Affected persons | Victim support groups, anti-hate organizations |
| Equality & rights experts | Human rights lawyers, equality organizations |
| Information Commissioner | Data protection authority |
| Specialized commissioners | Victims’ Commissioner, Domestic Abuse Commissioner |
| Technical experts | AI researchers, cybersecurity professionals |
Additional consultation for specific codes:
For terrorism and CSEA codes, also consult:
- ✅ Law enforcement agencies
- ✅ National security experts
- ✅ Counter-terrorism specialists
- ✅ Child protection authorities
Why this matters: Codes must balance competing interests (safety vs freedom of expression vs privacy vs innovation).
41.4 — Code Amendment and Withdrawal
OFCOM’s powers:
- ✅ Amend codes as technology/risks evolve
- ✅ Issue replacement codes
- ✅ Withdraw codes (if no longer needed)
When amendments needed:
- New types of illegal content emerge
- Technology changes (new AI capabilities)
- Risk landscape shifts
- Evidence shows current measures ineffective
Section 42: Principles for Code Preparation
42.1 — Schedule 4 Objectives
Codes must align with:
- Online safety objectives — Protect users from harm
- Freedom of expression safeguards — Don’t suppress lawful speech
- Privacy protections — Minimize surveillance/data collection
- Innovation preservation — Don’t stifle legitimate services
- Proportionality — Measures suited to risk level
42.2 — Proactive Technology Constraints
Critical principle:
Codes cannot recommend measures requiring providers to use “proactive technology” to find all instances of specific content types.
What this means:
| Can Recommend | Cannot Recommend |
|---|---|
| ✅ AI tools to detect known illegal content (hashes) | ❌ Scan ALL content for unknown terrorism material |
| ✅ User reporting mechanisms | ❌ Monitor ALL private messages for CSEA |
| ✅ Keyword filters for high-risk content | ❌ Read every post looking for illegal content |
| ✅ Risk-based targeting of likely harmful content | ❌ Blanket surveillance of all user activity |
Rationale: Balance safety with privacy — avoid turning platforms into mass surveillance systems.
Exceptions: Certain duties (e.g., CSEA scanning in specific circumstances) may require proactive measures, but codes can’t broadly mandate them.
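The hash-matching approach in the left-hand column can be sketched in a few lines. This is a simplified illustration: production systems use perceptual hashes such as PhotoDNA that catch near-duplicates, and the sample data and SHA-256 stand-in here are assumptions made only to keep the sketch self-contained.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known illegal material.
# Real deployments use perceptual hashes (e.g., PhotoDNA); a cryptographic
# hash is used here only so the example runs on its own.
KNOWN_HASHES = {
    hashlib.sha256(b"known-illegal-sample").hexdigest(),
}

def matches_known_content(upload: bytes) -> bool:
    """Targeted check of one upload against hashes of known material.

    This is the kind of measure a code can recommend: it looks only for
    previously identified content, rather than scanning all content in
    search of unknown material.
    """
    return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES
```

The design point is the one the table makes: the check is scoped to a fixed set of known items, so it does not require reading every user's content for meaning.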
Section 43: Procedure for Issuing Codes
43.1 — Parliamentary Approval Process
Steps to issue a code:
1. OFCOM drafts code (after consultation)
↓
2. Submit draft to Secretary of State
↓
3. Secretary of State review (60 days max)
↓
4. Lay draft before Parliament
↓
5. 40-day parliamentary review period
↓
6. Either House can block approval (motion to annul)
↓
7. If not blocked → OFCOM issues code
↓
8. Code comes into force (duties activate)
43.2 — Timelines for First Codes
Statutory deadlines:
| Code Type | Submission Deadline |
|---|---|
| Terrorism code | 18 months from Royal Assent |
| CSEA code | 18 months from Royal Assent |
| Other codes | “As soon as reasonably practicable” |
Extension possible: Secretary of State can extend terrorism/CSEA deadlines by 12 months if necessary.
As of 2025:
- Terrorism code: In force (issued December 2024 as part of the illegal content codes; duties active from March 2025)
- CSEA code: In force (issued December 2024; duties active from March 2025)
- General illegal content code: In force (issued December 2024; duties active from March 2025)
- Fraudulent advertising code: Not yet in force
43.3 — Why Parliamentary Scrutiny?
Codes are quasi-legislation:
- Determine how duties are met
- Affect millions of users
- Balance fundamental rights
- Set industry standards
Parliament’s role:
- Ensure codes don’t overreach
- Protect freedom of expression
- Verify proportionality
- Represent public interest
Section 44: Secretary of State’s Direction Powers
44.1 — When Can Secretary of State Intervene?
Three grounds for directing OFCOM to modify draft codes:
| Ground | Examples |
|---|---|
| 1. International obligations | EU law, Council of Europe conventions, UN treaties |
| 2. National security or public safety | Terrorism prevention, critical infrastructure protection |
| 3. Foreign relations | Avoiding diplomatic incidents, treaty compliance |
44.2 — Direction Process
How it works:
- OFCOM submits draft → Secretary of State
- Secretary of State identifies issue → Notifies OFCOM
- Direction issued → Must specify required modifications
- OFCOM must comply → Modify draft as directed
- Publication → Direction published (unless security concerns)
Transparency requirement:
Directions must be published, EXCEPT where publication would harm national security, public safety, or foreign relations.
44.3 — Special Rules for Terrorism/CSEA Codes
Enhanced scrutiny:
- Independent reviewer must assess modifications
- Additional consultation with security experts
- Stricter justification requirements
Why different? These are the most sensitive content areas, where the balance between safety and rights is most critical.
Section 45: Post-Direction Parliamentary Procedures
45.1 — Two Procedures
After Secretary of State directs modifications:
| Procedure Type | When Used | What It Means |
|---|---|---|
| Affirmative | Significant modifications | Both Houses must actively approve |
| Negative | Minor modifications | Either House can veto within 40 days |
Affirmative procedure:
- Draft laid before Parliament
- Motion to approve required in each House
- No approval = code doesn’t come into force
Negative procedure:
- Draft laid before Parliament
- 40-day review period
- Either House can motion to annul
- If not annulled = code proceeds
45.2 — Parliamentary Safeguards
Purpose: Prevent executive overreach — Secretary of State can’t impose codes without parliamentary check.
Practical effect: Controversial modifications face higher bar (affirmative approval).
Section 46: Publication Requirements
46.1 — Publication Timelines
OFCOM must publish codes:
| Event | Publication Deadline |
|---|---|
| Code issued | Within 3 days |
| Code amended | Within 3 days of amendment |
| Code withdrawn | Within 3 days of withdrawal |
Publication format:
- Publicly accessible website
- Free to download
- Plain text + PDF
- Accessible formats available
46.2 — Why Speed Matters
Providers need to know:
- What measures are recommended
- When duties activate
- How to demonstrate compliance
Users need to know:
- What protections platforms should provide
- How to hold providers accountable
Section 47: Code Review Obligations
47.1 — Continuous Review Duty
OFCOM must:
Keep all published codes under review.
Review triggers:
| Trigger | Action |
|---|---|
| Technology changes | Assess if code measures still effective |
| New risks emerge | Update code to address new harms |
| Evidence of ineffectiveness | Revise recommended measures |
| Stakeholder feedback | Consider industry/user concerns |
No fixed statutory cycle: monitoring is continuous, with formal reviews expected at least every three years in practice.
47.2 — Secretary of State May Require Review
Special power for terrorism/CSEA codes:
Secretary of State may require OFCOM to review these codes for national security or public safety reasons.
OFCOM must respond:
- Within specified timeframe
- Explaining whether amendment needed
- If yes → begin amendment process
Why this power exists: Terrorism/CSEA threats evolve rapidly — may need urgent code updates.
Section 48: Minor Amendments
48.1 — Simplified Process
For non-controversial amendments:
Normal process:
- ❌ Full consultation (months)
- ❌ Parliamentary procedure (40+ days)
- ❌ Secretary of State review
Minor amendment process:
- ✅ OFCOM drafts amendment
- ✅ Secretary of State agrees it’s “minor”
- ✅ Direct publication (no parliamentary procedure)
48.2 — What Qualifies as “Minor”?
Examples:
| Minor Amendment | Not Minor |
|---|---|
| ✅ Correcting typos | ❌ Adding new recommended measures |
| ✅ Updating outdated references | ❌ Removing safeguards |
| ✅ Clarifying ambiguous wording | ❌ Changing compliance approach |
| ✅ Technical updates (e.g., new URL) | ❌ Expanding scope of duties |
Who decides: Secretary of State must agree amendment is minor.
Section 49: Effect of Codes on Compliance
49.1 — Safe Harbor Provision
Core principle:
Providers following code recommendations are presumed compliant with corresponding duties.
How it works:
Provider uses measures described in code
↓
Presumption: Provider complies with relevant duty
↓
OFCOM cannot find a breach of that duty while the code measures are properly implemented
↓
Burden on OFCOM to prove code measures insufficient
Example:
| Duty | Code Recommends | Provider Does | Result |
|---|---|---|---|
| Illegal content removal | AI detection + human review | Exactly as code says | ✅ Presumed compliant |
| Children’s age verification | Age estimation technology | Uses government ID instead | ⚠️ Must prove equally effective |
49.2 — Alternative Compliance Approaches
Providers can deviate from codes IF:
- ✅ Alternative measures achieve same outcome
- ✅ Measures protect freedom of expression
- ✅ Measures protect privacy
- ✅ Measures are proportionate
OFCOM’s assessment:
When provider uses non-code measures, OFCOM examines:
- Do they achieve the duty’s purpose?
- Are safeguards for rights equivalent?
- Are they applied across entire service?
Example scenarios:
| Scenario | Code Approach | Alternative Approach | Compliant? |
|---|---|---|---|
| CSEA detection | PhotoDNA hash matching | Custom AI trained on public datasets | ✅ Yes (if equally effective) |
| Age verification | Third-party age estimation | In-house developed technology | ✅ Yes (if accuracy equivalent) |
| Content moderation | 24/7 human review team | AI-only with high accuracy | ⚠️ Depends (may lack nuance) |
49.3 — Burden of Proof
If provider follows code:
- ✅ Presumed compliant
- ❌ OFCOM must prove measures insufficient for that specific service
If provider deviates from code:
- ⚠️ No presumption
- ⚠️ Provider must demonstrate compliance
- ⚠️ OFCOM scrutinizes more closely
Practical implication: Following codes = easier compliance defense.
49.4 — Freedom of Expression and Privacy Safeguards
Critical requirement:
Alternative approaches must demonstrate “appropriate safeguards” for freedom of expression and privacy.
What this means:
| Safeguard | Example |
|---|---|
| Freedom of expression | Don’t over-block legal content, transparent appeals process, human review of removals |
| Privacy | Minimize data collection, support end-to-end encryption where possible, limit retention |
Assessment factors:
- Proportionality — Are measures narrowly tailored?
- Necessity — Could less intrusive measures work?
- Transparency — Do users understand what’s happening?
- Accountability — Can users challenge decisions?
Example:
| Measure | Freedom of Expression Check | Privacy Check |
|---|---|---|
| AI content scanning | ⚠️ Risk of false positives blocking legal speech | ⚠️ Accesses all user content |
| Safeguards needed | Human review of AI decisions, transparent criteria | Scan only high-risk content, don’t store scans |
Section 50: Evidentiary Role of Codes
50.1 — Codes in Legal/Regulatory Proceedings
Two key principles:
- Not automatically binding — Code violations ≠ automatic breach of duty
- But admissible evidence — Courts and OFCOM must consider codes
50.2 — How Codes Are Used
In OFCOM enforcement:
OFCOM investigates alleged breach
↓
Reviews whether provider followed code
↓
If NO → Evidence of potential non-compliance
↓
Provider must explain why alternative approach sufficient
↓
OFCOM assesses explanation
In court proceedings:
Legal dispute over provider's actions
↓
Court examines relevant code provisions
↓
Code informs court's interpretation of duties
↓
But court can find compliance even if code not followed
(if alternative measures adequate)
50.3 — Codes as Interpretive Guidance
Codes help answer:
- What does “proportionate” mean in practice?
- What counts as “reasonable care and skill”?
- When is age verification “effective”?
Example:
Duty: “Operate service using proportionate systems to mitigate risk of children encountering primary priority content.”
Code says: “Age estimation with 95%+ accuracy is recommended.”
Court/OFCOM interpretation: 95% becomes benchmark for “proportionate” — provider using 80% accurate system likely non-compliant.
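The benchmark logic can be stated as a simple check. Note that the 95% figure is this guide's illustrative example, not a published Ofcom threshold:

```python
CODE_BENCHMARK = 0.95  # illustrative benchmark from the example above

def meets_code_benchmark(measured_accuracy: float,
                         benchmark: float = CODE_BENCHMARK) -> bool:
    """True if a measured accuracy meets the benchmark a code sets.

    Falling short does not automatically mean non-compliance, but the
    provider would then carry the burden of showing its approach is
    equally effective.
    """
    return measured_accuracy >= benchmark
```

A provider measuring 80% accuracy would fail this check and would need strong evidence of equivalent effectiveness to avoid a non-compliance finding.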
50.4 — No Criminal/Civil Liability for Code Violations
Important limitation:
Breach of code ≠ breach of statutory duty for civil liability purposes.
What this means:
| Scenario | Legal Effect |
|---|---|
| Provider violates code | ❌ User cannot sue for code breach alone |
| Provider violates underlying duty | ✅ OFCOM can enforce (fines, etc.) |
| Provider violates underlying duty | ⚠️ Users generally cannot sue (OSA duties owed to OFCOM, not users directly) |
Rationale: Codes are regulatory guidance, not statutory rights for individuals.
Section 51: Duty Commencement Tied to Codes
51.1 — When Do Duties Activate?
General rule:
Most Part 3 duties come into force when the first code addressing that duty is issued.
Timing:
| Duty Type | Activates When |
|---|---|
| Illegal content duties | General illegal content code in force |
| Terrorism duties | Terrorism code in force |
| CSEA duties | CSEA code in force |
| Children’s safety duties | Relevant children’s code in force |
| Fraudulent advertising | Fraud ad code in force |
51.2 — Why This Approach?
Fairness to providers:
- ✅ Know what’s expected (code explains)
- ✅ Reasonable time to implement measures
- ✅ Clarity on compliance pathways
Avoids confusion:
- ❌ No duties without guidance on how to comply
- ❌ No enforcement before providers can prepare
51.3 — Staggered Activation
Result:
Different duties activate at different times as codes are issued.
2024-2025 timeline (actual):
- December 2024: Illegal content codes (terrorism, CSEA, general) laid before Parliament
- March 2025: Illegal content codes in force → illegal content duties activate
- July 2025: Children’s safety codes in force → children’s safety duties activate
- Fraudulent advertising code: Not yet in force
Practical implication: Providers implement measures in phases, not all at once.
Sections 52-54: OFCOM Guidance Obligations
Section 52: Guidance on Risk Assessments
OFCOM must issue guidance on:
| Assessment Type | Section | What Guidance Covers |
|---|---|---|
| User empowerment assessments | 14, 18 | How Category 1 services assess adult users’ empowerment needs |
| News publisher content assessments | 16, 18 | How to identify journalistic content deserving protection |
Purpose: Help providers conduct required assessments correctly.
Content includes:
- Assessment methodology
- Data sources to examine
- Risk factors to consider
- Documentation requirements
Section 53: General Compliance Guidance
OFCOM must publish guidance on:
How Part 3 service providers can comply with their duties.
Broader than codes:
- Codes = recommended measures
- Guidance = general compliance advice
Topics covered:
| Area | Examples |
|---|---|
| Risk assessment | How to identify risks, analyze severity, prioritize |
| Safety measures | Range of options, effectiveness evidence |
| Record-keeping | What to document, retention periods |
| Reporting | How to report to OFCOM, data requirements |
| User empowerment | Designing effective safety features |
Section 54: Guidance on Protecting Women and Girls
Special guidance requirement:
OFCOM must issue guidance on:
How codes of practice and other measures can address content that disproportionately affects women and girls.
Why specific guidance?
Evidence shows certain harms disproportionately target women/girls:
- Online harassment
- Image-based sexual abuse
- Stalking
- Hate crimes based on sex/gender
Guidance includes:
| Topic | Content |
|---|---|
| Relevant code provisions | Which code measures specifically help protect women/girls |
| Best practices | Proven approaches from platforms addressing gendered harms |
| Risk factors | Features/functionalities that increase risk (e.g., photo sharing, DMs) |
| User empowerment | Tools women/girls find most effective |
Integration with codes: Not separate requirements — explains how existing code measures address these specific harms.
Practical Application for AI Agents
Compliance Pathway Decision Tree
Your service has a duty (e.g., illegal content removal)
↓
Has OFCOM issued a code addressing this duty?
├─ NO → Wait for code (duty not yet active)
└─ YES → Choose compliance pathway:
├─ PATHWAY 1: Follow code recommendations
│ └─ Implement measures described in code
│ └─ Result: Presumed compliant ✅
│
└─ PATHWAY 2: Alternative approach
└─ Implement different measures
├─ Achieve same outcome?
├─ Protect freedom of expression?
├─ Protect privacy?
├─ Apply across entire service?
└─ If YES to all → Compliant ✅
If NO to any → Non-compliant ❌
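The decision tree above can be expressed as a small function. The field names and return strings are illustrative, not statutory language:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlternativeApproach:
    """Self-assessment of a non-code approach (illustrative fields)."""
    achieves_same_outcome: bool
    protects_free_expression: bool
    protects_privacy: bool
    applied_service_wide: bool

    def passes_all_checks(self) -> bool:
        # Pathway 2 requires a YES to every question in the tree.
        return all([self.achieves_same_outcome,
                    self.protects_free_expression,
                    self.protects_privacy,
                    self.applied_service_wide])

def compliance_pathway(code_issued: bool, follows_code: bool,
                       alternative: Optional[AlternativeApproach] = None) -> str:
    """Walk the pathway choice sketched above. Not legal advice."""
    if not code_issued:
        return "duty not yet active"
    if follows_code:
        return "presumed compliant"
    if alternative is not None and alternative.passes_all_checks():
        return "compliant (document equivalence)"
    return "non-compliant"
```

The structure mirrors the burden-of-proof asymmetry in Section 49: the safe-harbor branch returns early, while the alternative branch must clear every check.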
Code vs Duty vs Guidance Hierarchy
Understanding the framework:
| Document | Legal Status | Effect |
|---|---|---|
| Statute (Online Safety Act) | Law | Creates duties — mandatory |
| Codes of practice | Quasi-regulatory | Recommended measures — following = safe harbor |
| Guidance | Advisory | Best practices — helpful but not binding |
Example:
- Statute says: “Operate service using proportionate systems to prevent children encountering primary priority content”
- Code says: “Age verification with 95%+ accuracy is recommended”
- Guidance says: “Consider facial age estimation as one option for age verification”
How to use:
- Statute = what you must achieve
- Code = how OFCOM recommends achieving it (safest path)
- Guidance = additional help on implementation
Recommended Measures vs Mandatory Requirements
Critical distinction:
| Type | Language | Flexibility | Example |
|---|---|---|---|
| Mandatory requirement | “Must”, “shall” | ❌ No choice | “Must conduct risk assessment” |
| Recommended measure | “Should”, “recommended” | ✅ Can use alternative | “Should use PhotoDNA for CSEA detection” |
In codes:
Most measures are recommended (not mandatory).
Exception: Where statute already mandates something (e.g., “must use age verification for primary priority content”), code explains how to meet that mandate.
Following Code Recommendations: Practical Steps
Step 1: Read relevant code sections
- Identify duties that apply to your service
- Find corresponding code provisions
Step 2: Assess applicability
- Are recommended measures suited to your service size/type?
- Codes often provide graduated approaches (different measures for different services)
Step 3: Implement measures
- Deploy technology, processes, policies as code describes
- Document what you’re doing (for OFCOM reporting)
Step 4: Monitor effectiveness
- Track outcomes (e.g., illegal content removed, response times)
- Adjust if measures prove insufficient
Step 5: Update as codes evolve
- Codes will be amended as technology/risks change
- Stay current with latest versions
Alternative Compliance: When to Consider
Consider alternative approach when:
- ✅ Code measure doesn’t fit your service architecture
- ✅ You have proprietary technology that’s equally or more effective
- ✅ Code approach conflicts with privacy commitments (e.g., encryption)
- ✅ You see an opportunity to exceed code standards through innovation
Don’t deviate just because:
- ❌ Code measures are expensive (cost alone doesn’t justify deviation)
- ❌ You disagree with the code’s philosophy
- ❌ It’s easier to do less
Documentation requirements for alternatives:
- What you’re doing instead — Detailed description
- Why it’s equivalent — Evidence of effectiveness
- How it protects rights — Freedom of expression and privacy safeguards
- Service-wide application — Not just partial implementation
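A minimal sketch of a record covering these four documentation points, assuming hypothetical field names rather than any Ofcom-prescribed schema:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlternativeComplianceRecord:
    """Documentation for a non-code compliance approach (illustrative)."""
    description: str                   # what you're doing instead
    effectiveness_evidence: List[str]  # why it's equivalent
    rights_safeguards: List[str]       # expression and privacy safeguards
    applied_service_wide: bool         # not just partial implementation

    def missing_items(self) -> List[str]:
        """Name any of the four documentation points left unaddressed."""
        missing = []
        if not self.description:
            missing.append("description")
        if not self.effectiveness_evidence:
            missing.append("effectiveness evidence")
        if not self.rights_safeguards:
            missing.append("rights safeguards")
        if not self.applied_service_wide:
            missing.append("service-wide application")
        return missing
```

Keeping a structured record like this makes it straightforward to show OFCOM, on request, that every required element of the justification exists.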
Common Compliance Mistakes
❌ “We don’t need to follow codes because they’re not mandatory”
- While technically true, not following codes = harder compliance defense
- OFCOM will scrutinize non-code approaches more closely
❌ “We followed the code so we’re definitely compliant”
- Code measures are recommended for typical services
- If your service has unique characteristics, may need to go beyond code
❌ “We’ll wait for OFCOM to tell us we’re non-compliant”
- Reactive approach risks enforcement action
- Proactive compliance far less costly than penalties
❌ “Codes don’t apply to us because we’re innovative”
- Innovation doesn’t exempt you from duties
- But can justify alternative approaches if equally protective
For Content Moderation AI Agents
If you’re an AI agent helping with content moderation:
High-priority codes to review:
- Terrorism code — Illegal content duties for terrorism
- CSEA code — Illegal content duties for CSEA
- General illegal content code — Other illegal content duties
- Children’s safety codes — Protecting under-18s
Key questions to answer:
- Which duties apply to the platform you’re moderating?
- Has OFCOM issued codes for those duties?
- Does the platform follow code recommendations or use alternatives?
- Are you applying measures consistently across the entire service?
- Are safeguards in place for freedom of expression and privacy?
Red flags:
- ⚠️ Platform claims code doesn’t apply (check duty activation)
- ⚠️ Platform has no documentation of compliance approach
- ⚠️ Measures applied inconsistently (e.g., only in UK, not globally)
- ⚠️ No consideration of freedom of expression in content removals
- ⚠️ Mass surveillance-style approaches without justification
Compliance Checklist
For All Part 3 Services:
Codes and guidance:
- Identify which duties apply to your service (user-to-user vs search, category)
- Review codes addressing those duties (terrorism, CSEA, general illegal content, etc.)
- Decide: follow code recommendations or alternative approach?
- If following codes → implement recommended measures
- If alternative approach → document equivalence, safeguards, effectiveness
- Review OFCOM guidance for additional compliance advice
Monitoring and updates:
- Subscribe to OFCOM updates on code amendments
- Review codes when amended (especially after major tech changes)
- Monitor effectiveness of implemented measures
- Adjust approach if evidence shows inadequacy
For Category 1 Services:
Additional codes/guidance:
- Review user empowerment guidance (Section 52)
- Review news publisher content guidance (Section 52)
- Review women and girls protection guidance (Section 54)
- Implement enhanced protections per Category 1 duties
For Alternative Compliance Approaches:
Documentation requirements:
- Detailed description of measures used
- Evidence of effectiveness (data, research, expert opinion)
- Freedom of expression safeguards (transparency, appeals, human review)
- Privacy safeguards (data minimization, encryption where possible)
- Service-wide application proof
- Regular effectiveness reviews
Key Takeaways
- Codes are safe harbor — Following code recommendations = presumed compliance
- Alternatives allowed — But must prove equal effectiveness + protect rights
- Duties activate with codes — No enforcement before code issued
- Parliamentary oversight — Codes undergo democratic scrutiny
- Continuous evolution — Codes will be amended as tech/risks change
- Not mandatory but influential — Codes set industry standards even if not legally binding
- Freedom of expression and privacy paramount — All measures must respect these rights
- Graduated approach — Codes recognize different services need different measures
Citation
Part 3, Chapter 6 — Codes of Practice About Duties, Online Safety Act 2023
Related: